\section{Convex-concave procedure}%
\label{sec:covexconcave}
Any mathematical program with indefinite quadratic functions in the constraint set or the objective can be expressed as a difference-of-convex (DC) programming problem~\cite{Lipp2016} of the following form:
\begin{align}
\min_{x} & &f_{0}(x) - g_{0}(x)& \\
\text{s.t. } &&f_{i}(x) - g_{i}(x) &\leqslant 0 \quad i=1,\ldots, m
\end{align}
where $f_{i} : \mathrm{R}^{n} \rightarrow \mathrm{R}$ and $g_{i} : \mathrm{R}^{n} \rightarrow \mathrm{R}$ for $i=0,\ldots,m$ are convex functions.
The convex-concave procedure (CCP) is a heuristic for DC problems that obtains a stationary point of the original non-convex problem, as explained in~\cite{NIPS2009_3646}.
In this case the DC hydroelectric power generation functions are in the power balance constraints~\eqref{eq:powerbalance}, and can be expressed as a difference of differentiable quadratic functions by decomposing $-\widehat{\mathbf{H}}^{\Hindex}$ into a difference of positive definite matrices:
\begin{equation}
\widehat{\mathbf{H}}^{\Hindex} = -\widehat{\mathbf{H}}^{\Hindex}_{+} + \widehat{\mathbf{H}}_{-}^{\Hindex}.
\end{equation}
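Such a splitting is not unique; as an illustrative sketch (not part of the original formulation), one convenient choice is an eigenvalue splitting with a small diagonal shift to enforce strict positive definiteness. The coefficient values below are hypothetical:
\begin{verbatim}
import numpy as np

def dc_split(H, delta=1e-9):
    # Split symmetric H as H = -H_plus + H_minus, with H_plus and
    # H_minus positive definite: separate the positive and negative
    # eigenvalue parts, then shift both by delta*I.
    w, V = np.linalg.eigh(H)
    H_minus = (V * np.clip(w, 0.0, None)) @ V.T + delta * np.eye(len(w))
    H_plus = (V * np.clip(-w, 0.0, None)) @ V.T + delta * np.eye(len(w))
    return H_plus, H_minus

# Hypothetical production-function coefficients (eps_qq < 0, eps_qv > 0):
H_hat = np.array([[0.0, 0.015],
                  [0.015, -0.004]])
H_plus, H_minus = dc_split(H_hat)
assert np.allclose(H_hat, -H_plus + H_minus)
\end{verbatim}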
\begin{algorithm}
\begin{algorithmic}[1]
\Require Solution $\mathbf{x}^{\Timeindex, \Hindex}_{(0)}$ to problem \eqref{eq:cost}--\eqref{eq:snonnegative}, \eqref{eq:q1m}--\eqref{eq:upperv2}
\State Decompose $\widehat{\mathbf{H}}^{\Hindex} = -\widehat{\mathbf{H}}^{\Hindex}_{+} + \widehat{\mathbf{H}}_{-}^{\Hindex} $
\State Set $k \leftarrow 0$
\Repeat
\State Set $k \leftarrow k + 1$
\State Construct $\widetilde{\mathbf{H}}^{\Hindex}_{(k)}$
\State Obtain $\mathbf{x}^{\Timeindex, \Hindex}_{(k)}$ from solving \eqref{eq:cost}, \eqref{eq:QC2}, \eqref{eq:waterbalance}--\eqref{eq:snonnegative}
\Until{$ \sum_{\Tindex \in \NT,\Timeindex \in \Time}\left\| \mathbf{C}^{\Timeindex,\Tindex} \bullet \left( \mathbf{\mathbf{Y} }^{\Timeindex,\Tindex}_{(k)}-\mathbf{\mathbf{Y} }^{\Timeindex,\Tindex}_{(k-1)} \right) \right\|_{2} \leq \epsilon $}
\end{algorithmic}%
\caption{Convex-concave procedure.}
\label{alg:ccp}
\end{algorithm}
Under CCP the concave part of the indefinite quadratic function is replaced by its first-order Taylor series approximation around $\mathbf{x}^{\Timeindex, \Hindex}_{(k-1)}$, rendering it affine:
\begin{equation*}
\widetilde{\mathbf{H}}^{\Hindex}_{(k)} = \left[
\begin{matrix}
-\widehat{\mathbf{H}}^{\Hindex}_{+} & \tfrac{1}{2} \left( \mathbf{e}^{\Hindex} + \widehat{\mathbf{H}}_{-}^{\Hindex} {\mathbf{x}^{\Timeindex, \Hindex}_{(k-1)}} \right)\\
\tfrac{1}{2} \left( \mathbf{e}^{\Hindex} + \widehat{\mathbf{H}}_{-}^{\Hindex} {\mathbf{x}^{\Timeindex, \Hindex}_{(k-1)}} \right)^{\intercal} & {\mathbf{x}^{\Timeindex, \Hindex}_{(k-1)}}^{\intercal} \widehat{\mathbf{H}}_{-}^{\Hindex}\mathbf{x}^{\Timeindex, \Hindex}_{(k-1)}
\end{matrix}
\right],
\end{equation*}
where $k$ denotes the iteration index of Algorithm~\ref{alg:ccp}, such that the power balance constraints are iteratively reformulated:
\begin{equation}
\sum_{\Hindex \in \NH}\widetilde{\mathbf{H}}^{\Hindex}_{(k)}\bullet \mathbf{X} ^{\Timeindex, \Hindex} +\sum_{\Tindex \in \NT}\mathbf{P} \bullet \mathbf{Y} ^{\Timeindex,\Tindex} \geq d_{\Timeindex}. \label{eq:QC2}
\end{equation}
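For concreteness, a minimal sketch of the linearization step in Algorithm~\ref{alg:ccp} is given below; it builds $\widetilde{\mathbf{H}}^{\Hindex}_{(k)}$ from the splitting above and the previous iterate, with all names being illustrative assumptions rather than the original implementation:
\begin{verbatim}
import numpy as np

def linearized_H(H_plus, H_minus, e, x_prev):
    # Assemble the (n+1)x(n+1) matrix H_tilde_(k): the convex part
    # -H_plus is kept, while the concave part is replaced by its
    # first-order Taylor expansion around the previous iterate x_prev.
    g = 0.5 * (e + H_minus @ x_prev)
    top = np.hstack([-H_plus, g[:, None]])
    bottom = np.append(g, x_prev @ H_minus @ x_prev)
    return np.vstack([top, bottom])
\end{verbatim}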
Each solution $\mathbf{x}^{\Timeindex, \Hindex}_{(k)}$ is recovered as follows:
\begin{displaymath}
x^{\Timeindex, \Hindex}_{(k),j} = \sqrt{\mathbf{X} ^{\Timeindex, \Hindex}_{(k),jj}}, \quad j = 1,2.
\end{displaymath}
\section{Conclusion}
In a recent paper, Yunan et al.~\cite{6514678} proposed a Shor's semidefinite relaxation with claimed global optimality for the quadratically constrained quadratic hydro-thermal coordination problem.
In this paper, however, we present an empirical analysis showing that exactness of such a relaxation is only known to hold if concavity of the hydroelectric power generation function is assumed; under reasonable assumptions regarding load demand, this results in an already fully convex problem formulation whose SDP relaxation is redundant.
Furthermore, the concavity hypothesis is a strong assumption regarding turbine efficiency, reasonable only in the short-run operation of hydro plants in very limited situations.
In a numerical case study with average turbine efficiencies, and thus indefinite production functions, the use of McCormick convex envelopes was shown to provide tighter lower bounds on the objective function by orders of magnitude.
Additionally, we provide a reformulation-linearization for stationary point recovery by means of an iterative convex-concave procedure, which further supports the effectiveness of the convex envelopes.
\section{Hydro-thermal coordination}%
\label{sec:htc}
Let forebay and tailwater elevations be described by linear functions of $v$ and $q$, respectively, which implies cubic geometries of the respective reservoirs; then we have:
\begin{align}
\label{eq:htc:hb}%
h_{b} = & h_{b0}+h_{b1}\cdot v, \text{ and}\\
\label{eq:htc:ht}%
h_{t} = & h_{t0}+h_{t1}\cdot q.
\end{align}
For the sake of consistency with the related literature and the purpose of the present work, losses by hydraulic load and atmospheric pressure differences are assumed constant herein.
If generator efficiency is also assumed constant then we can write $\kappa = g\cdot\rho\cdot\eta_G$, and therefore:
\begin{displaymath}
P_{h} = \kappa\cdot\eta_T\cdot q\cdot(h_{b0}+h_{b1}\cdot v - h_{t0} - h_{t1}\cdot q - \tilde{h}_l - \tilde{h}_a).
\end{displaymath}
If, as it is commonly assumed in the longer-term HTC literature, $\eta_T$ is considered constant, e.g.\ historical mean value, then the hydroelectric power generation function can be expressed as a quadratic function
\begin{equation}
\label{eq:prodfunc}%
P_h = \varepsilon_q \cdot q + \varepsilon_{qq} \cdot q^{2} + \varepsilon_{qv} \cdot v \cdot q
\end{equation}
where $\varepsilon_{q}>0$, $\varepsilon_{qq}<0$, $\varepsilon_{qv} \geqslant 0$, and $\varepsilon_{vv}=0$. Therefore, Eq.~\eqref{eq:prodfunc} is concave with respect to $q$ and indefinite with respect to $v$. In matrix form it is expressed as follows:
\begin{equation}
P_{h} ={ \mathbf{x}^{\Timeindex,\Hindex}} ^{\intercal} \widehat{\mathbf{H}}^{\Hindex}\mathbf{x}^{\Timeindex,\Hindex} +{\mathbf{e}^h}^{\intercal}\mathbf{x}^{\Timeindex,\Hindex},
\end{equation}
where
\begin{align*}
\mathbf{x}^{\Timeindex, \Hindex} &= \left[ \begin{matrix}
v_{\Timeindex,\Hindex}\\
q_{\Timeindex,\Hindex}\\
\end{matrix}\right],\\
\widehat{\mathbf{H}}^{\Hindex} &= \left[ \begin{matrix}
0 & \varepsilon_{qv}/2\\
\varepsilon_{qv}/2 & \varepsilon_{qq}
\end{matrix} \right], \text{ and}\\
\mathbf{e}^{h} &= \left[ \begin{matrix}
0\\
\varepsilon_{q}\\
\end{matrix} \right].
\end{align*}
On the other hand, if $\eta_T$ is considered variable with respect to either $v$ or $q$, higher order terms arise in the hydroelectric generation function.
If, for example, we assume $\eta_T$ to be constant with respect to $q$, and, say, to vary linearly (either increasing or decreasing) with $v$, then a third-order term $vq^2$ arises, which should be ignored if one seeks to constrain~\eqref{eq:prodfunc} to the quadratic order.
Thus, in this case, the concavity of $P_h$ assumed in~\cite{4113907,260860,6514678} does not necessarily hold, unless one assumes that the hydraulic turbine efficiency monotonically decreases with head, i.e. $\varepsilon_{vv} < 0$, since~\eqref{eq:htc:hb} is increasing.
This holds only for a restricted range of $v$, and is therefore not a generally acceptable assumption, except when the hydro plant operates with sufficiently high head, i.e.~above the reference head.
The HTC problem is then formulated as that of minimizing the variable costs associated with thermoelectric power generation, subject to equations representing power balance between generation and load, and mass conservation of water, as well as inequalities representing engineering constraints, i.e.\ limits on water storage, discharge and spillage, and power output.
A Shor's semidefinite relaxation of the HTC problem is presented in~\cite{6514678} for a quadratic formulation of the hydroelectric generation function, that makes no assumption regarding the value of $\varepsilon_{vv}$.
In such relaxation we have:
\begin{align}
\label{eq:htc:X}%
\mathbf{X} ^{\Timeindex, \Hindex} &= \left[ \begin{matrix}
\widehat{\mathbf{X} }^{\Timeindex, \Hindex} & \mathbf{x}^{\Timeindex, \Hindex} \\
{ \mathbf{x}^{\Timeindex, \Hindex}}^{\intercal} & 1
\end{matrix} \right], \text{ and}\\
\label{eq:htc:Y}%
\mathbf{Y} ^{\Timeindex,\Tindex} &= \left[p_{\Timeindex,\Tindex} \quad 1 \right]^{\intercal}\left[p_{\Timeindex,\Tindex} \quad 1 \right],
\end{align}
where $p_{\Timeindex,\Tindex}$ represents thermoelectric power generation, such that the relaxed HTC problem is formulated as follows:
\begin{alignat}{4}
&& \min_{\mathbf{X} ^{\Timeindex,\Hindex}, \mathbf{Y} ^{\Timeindex,\Tindex}, \mathbf{s}} \sum_{\Tindex \in \NT,\Timeindex \in \Time} \mathbf{C}^{\Timeindex,\Tindex}\bullet\mathbf{\mathbf{Y} }^{\Timeindex,\Tindex} & \label{eq:cost}\\
\text{s.t. } & &\sum_{\Hindex \in \NH}\mathbf{H}^{\Hindex} \bullet \mathbf{X} ^{\Timeindex, \Hindex} +\sum_{\Tindex \in \NT}\mathbf{P} \bullet \mathbf{Y} ^{\Timeindex,\Tindex} & \geqslant d_{\Timeindex} & {\scriptstyle\forall \Timeindex \in \Time} \label{eq:powerbalance}\\
&&\left( \theta_{t} \mathbf{V}+\mathbf{Q} \right)\bullet \mathbf{X} ^{\Timeindex,\Hindex}-\theta_{t}\mathbf{V}\bullet \mathbf{X} ^{\Timeindex-1,\Hindex} & \nonumber \\
&&+ s_{\Timeindex,\Hindex} -\sum_{\widehat{\Hindex}\in \Psi_{h}} \left( \mathbf{Q}\bullet \mathbf{X} ^{\Timeindex,\widehat{\Hindex}}+ s_{\Timeindex,\widehat{\Hindex}} \right) &= e_{\Hindex,\Timeindex} & {\scriptstyle\forall \Hindex \in \NH ; \Timeindex \in \Time } \label{eq:waterbalance} \\
&&\underline{v}_{\Hindex} \leqslant \mathbf{V} \bullet \mathbf{X} ^{\Timeindex,\Hindex} & \leqslant \overline{v}_{\Hindex} & {\scriptstyle\forall \Hindex \in \NH; \Timeindex \in \Time} \label{eq:volumelimits} \\
&&\underline{q}_{\Hindex} \leqslant \mathbf{Q} \bullet \mathbf{X} ^{\Timeindex,\Hindex}& \leqslant \overline{q}_{\Hindex} & {\scriptstyle\forall \Hindex \in \NH
; \Timeindex \in \Time } \label{eq:turbinatedlimits}\\
& &\underline{p}_{\Tindex} \leqslant \mathbf{P} \bullet \mathbf{Y} ^{\Timeindex,\Tindex} & \leqslant \overline{p}_{\Tindex} & {\scriptstyle\forall \Tindex \in \NT ; \Timeindex \in \Time } \label{eq:powerlimits}\\
&&\mathbf{X} ^{\Timeindex, \Hindex} &\succeq 0 & {\scriptstyle\forall \Hindex \in \NH
; \Timeindex \in \Time } \label{eq:hydrosemidefinite}\\
&&\mathbf{Y} ^{\Timeindex,\Tindex} &\succeq 0 &{\scriptstyle\forall \Tindex \in \NT ; \Timeindex \in \Time } \label{eq:thermalsemidefinite}\\
&&s_{\Timeindex,\Hindex} &\geqslant 0 & {\scriptstyle\forall \Hindex \in \NH ; \Timeindex \in \Time } \label{eq:snonnegative}
\end{alignat}
where $\mathbf{C}^{\Timeindex,\Tindex}$ represents the variable costs of a thermal plant $\Tindex$ at time $\Timeindex$, typically formulated as a convex function. The hydroelectric generation function is represented by $\mathbf{H}^{\Hindex}$ for a hydro plant $\Hindex$, such that:
\begin{equation}
\mathbf{H}^{\Hindex} =
\left[ \begin{matrix}
\widehat{\mathbf{H}}^{\Hindex}& \mathbf{e}^{\Hindex}\\
{\mathbf{e}^{\Hindex} }^{\intercal} & 0
\end{matrix} \right].
\end{equation}
The set of hydro plants immediately upstream of $\Hindex$ is represented by $\Psi_{h}$. A time-dependent volume-to-flow conversion coefficient is given by $\theta_{t}$. Water storage volume and discharge are represented by $\mathbf{V}$ and $\mathbf{Q}$, while thermoelectric power generation is equivalently represented by the Frobenius product $\mathbf{P} \bullet \mathbf{Y} ^{\Timeindex,\Tindex}$, with limits defined in~\eqref{eq:powerlimits}, such that:
\begin{displaymath}
\mathbf{V} = \left[
\begin{matrix}
0 & 0 & \tfrac{1}{2}\\
0 & 0 & 0\\
\tfrac{1}{2} & 0 & 0\\
\end{matrix} \right], \;
\mathbf{Q} = \left[
\begin{matrix}
0 & 0 &0\\
0 & 0 & \tfrac{1}{2}\\
0 & \tfrac{1}{2} & 0\\
\end{matrix} \right], \text{ and }
\mathbf{P} = \left[
\begin{matrix}
0 & \tfrac{1}{2}\\
\tfrac{1}{2} & 0\\
\end{matrix} \right].
\end{displaymath}
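Note that the Frobenius products with $\mathbf{V}$, $\mathbf{Q}$, and $\mathbf{P}$ simply pick out the linear entries of the lifted matrices. A quick numerical check of this (with placeholder values, not case-study data) could read:
\begin{verbatim}
import numpy as np

V = np.array([[0, 0, 0.5], [0, 0, 0], [0.5, 0, 0]])
Q = np.array([[0, 0, 0], [0, 0, 0.5], [0, 0.5, 0]])

v, q = 300.0, 40.0            # placeholder storage volume and discharge
x = np.array([v, q, 1.0])
X = np.outer(x, x)            # rank-1 lifted matrix x x^T, x = [v, q, 1]

assert np.isclose(np.tensordot(V, X), v)   # V . X recovers v
assert np.isclose(np.tensordot(Q, X), q)   # Q . X recovers q
\end{verbatim}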
In~\eqref{eq:waterbalance}, mass conservation of water is formulated as a linear algebraic system describing reservoir cascades and temporal coupling, where $s_{\Timeindex,\Hindex}$ is a variable representing spillage, and $e_{\Hindex,\Timeindex}$ is given as inflow.
Limits on volume and discharge are described in~\eqref{eq:volumelimits}, and~\eqref{eq:turbinatedlimits}, respectively.
Initial ($v_{0,\Hindex}$) and final target ($v_{T,\Hindex}$) volumes are given as boundary conditions of the problem, and respectively represented in~\eqref{eq:waterbalance}, and~\eqref{eq:volumelimits}.
As a consequence of~\eqref{eq:htc:X} and~\eqref{eq:htc:Y}, semidefiniteness constraints on $\mathbf{X} ^{\Timeindex, \Hindex}$ and $\mathbf{Y} ^{\Timeindex, \Tindex}$ are imposed in~\eqref{eq:hydrosemidefinite} and~\eqref{eq:thermalsemidefinite}, respectively, whereas their respective rank-1 constraints are relaxed.
\section{Introduction}
\label{sec:introduction}
Hydroelectric power generation derives from total physical work available from water elevated by dams. It is commonly expressed as an increasing function of net head ($h_n$), and turbine-released, or (equivalently) discharged water ($q$) for given turbine ($\eta_{T}$) and generator ($\eta_{G}$) efficiencies, such that
\begin{equation}
\label{eq:introduction:production}%
P_{h} = g\cdot\rho\cdot\eta_G\cdot\eta_T\cdot q\cdot h_n,
\end{equation}
where $g$ and $\rho$ are constants representing gravity acceleration, and water density, respectively. Net head accounts for the difference between forebay ($h_{b}$) and tailwater ($h_{t}$) elevations, as well as losses due to hydraulic load ($h_l$) and atmospheric pressure differences ($h_a$)
\begin{equation}
h_{n}(\cdot) = h_{b}(v) - h_{t}(q) - h_{l}(q) - h_{a}(v).
\end{equation}
Hydraulic load losses are commonly formulated as a convex quadratic function~\cite{982207} over $q$, and losses due to atmospheric pressure are more prominent as $h_b - h_t$ increases.
Forebay elevation is a function of volume ($v$) of water in the reservoir.
Analogously, tailwater elevation is a function of water discharge.
Alternatively, it could also be a function of spillage~\cite{1602016}.
Both functions are strictly increasing on their variables if a three-dimensional geometric reservoir model is considered~\cite{VIEIRA2015781}.
If a cubic geometry is assumed then $h_b(v)$ and $h_t(q)$ are described by linear functions.
Otherwise, if trapezoidal geometries are assumed then higher order polynomials are necessary in order to represent variable head.
Nonlinear productivity with respect to water discharge in the hydroelectric power generation function, along with the intrinsic uncertainty with respect to future water availability, comprise two of the major computational challenges of the hydro-thermal coordination (HTC) problem.
Stochastic approaches often assume a two-dimensional representation of reservoirs, i.e. constant $h_n$ and, therefore, constant productivity, resulting in linear functions of discharge~\cite{Pereira1991}.
On the other hand, deterministic models of the HTC problem resort to nonlinear formulations of the hydroelectric generation function, from second-order concave approximations~\cite{4113907,260860} to higher-order non-convex polynomial representations~\cite{Martins2014}.
General efficiency of a hydraulic turbine is defined as the ratio of power delivered to the shaft to the power available in the moving water.
Maximum hydraulic efficiency is specified at design time for given reference values of net head and discharge.
It is commonly described in the literature as a normalized concave quadratic function~\cite{4275241,466476} of $h_n$ and $q$,
\begin{equation}
\eta_{T}(\cdot) = e_{0} + e_{h}h_{n}+e_{q}q+e_{hq}h_{n} q+e_{hh}h_{n}^2+e_{qq}q^2,
\end{equation}
as illustrated in~\figurename~\ref{fig:efficiency}.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[,xlabel = $h_{n}(m)$
, ylabel = $q (m^{3}/s)$, domain=30:60,domain y =25:320 , view={0}{90},colormap={examplemap}{rgb=(0.9,0.9,0.9) rgb=(0,0,0)},width=8.5cm,height =6cm ]
\addplot3[contour gnuplot={levels={0.2,0.4,0.6,0.8,0.9,0.95,1}},samples=50,thick] {-0.21311 + 0.022762*x+0.0093291*y+0.0000451*x*y-0.000291*x^2-0.000041*y^2 };
\end{axis}
\end{tikzpicture}
\caption{A hypothetical hydroelectric turbine efficiency curve~\cite{466476}.}
\label{fig:efficiency}%
\end{figure}
As far as turbine and generator efficiencies are concerned, other sorts of simplifications are commonly proposed and have to do with the time resolution under consideration.
In close-to-real-time operational decision-making, it is advisable to take turbine efficiency into account~\cite{Finardi2006,Yeh2013}.
As the time horizon of the HTC problem increases, lower time resolutions are commonly considered, and thus the use of average efficiency is a common model assumption~\cite{VIEIRA2015781,Martins2014}.
Recently, Yunan et al.~\cite{6514678} proposed a Shor's semidefinite relaxation of the short-term HTC problem for global solution with quadratic formulations of Eq.~\eqref{eq:introduction:production}.
In this paper we show that the relaxation suggested in~\cite{6514678} is only exact if concavity of~\eqref{eq:introduction:production} is assumed.
Moreover, we show how such a hypothesis represents a strong assumption regarding turbine efficiency that is reasonable only in very limited situations in the short-run operation of a hydro plant.
Additionally, we propose the use of McCormick convex envelopes as a strategy to tighten the relaxation, in combination with a standard iterative convex-concave procedure to recover a stationary point of the original non-convex problem.
\section{McCormick convex envelopes}%
\label{sec:mccormick}
The implicit semidefiniteness constraint (obtained by taking the Schur complement on $\mathbf{X} ^{\Timeindex, \Hindex}$) in Shor's relaxation provides a lower bound on every bilinear and quadratic term in~$\widehat{\mathbf{X}}^{\Timeindex,\Hindex}$:
\begin{equation}
\mathbf{X} ^{\Timeindex, \Hindex} \succeq 0 \rightarrow \widehat{\mathbf{X}}^{\Timeindex,\Hindex} \succeq \mathbf{x}^{\Timeindex,\Hindex}{ \mathbf{x}^{\Timeindex,\Hindex}}^{\intercal}.
\end{equation}
Because the physically-derived hydroelectric power generation function is indefinite, its terms with nonnegative coefficients will require an upper bound.
These upper bounds can be obtained by using McCormick convex envelopes~\cite{McCormick1976}.
This approach is based on the relaxation of bilinear terms, and its generalizations have been proposed in~\cite{Sherali1992,Anstreicher2012}.
This technique, known as reformulation-linearization, uses the box constraints on $v_{\Timeindex,\Hindex}$ and $q_{\Timeindex,\Hindex}$ to construct bounds for the quadratic and bilinear terms resulting from Shor's semidefinite relaxation:
\begin{equation}
(\overline{v}_{\Hindex}-v_{\Timeindex,\Hindex}) ,(v_{\Timeindex,\Hindex}-\underline{v}_{\Hindex}), (\overline{q}_{\Hindex}-q_{\Timeindex,\Hindex}) ,(q_{\Timeindex,\Hindex}-\underline{q}_{\Hindex}) \geqslant 0. \label{eq:bounds}
\end{equation}
We can multiply some of these nonnegative differences and obtain the following inequalities:
\begin{align}
\left(\overline{v}_{\Hindex} +\underline{v}_{\Hindex} \right) v_{\Timeindex,\Hindex} -\overline{v}_{\Hindex} \underline{v}_{\Hindex} & \geqslant v_{\Timeindex,\Hindex}^{2},\\
\underline{q}_{\Hindex}v_{\Timeindex,\Hindex} +\overline{v}_{\Hindex} q_{\Timeindex,\Hindex} -\overline{v}_{\Hindex}\underline{q}_{\Hindex} & \geqslant v_{\Timeindex,\Hindex}q_{\Timeindex,\Hindex}, \text{ and}\\
\underline{v}_{\Hindex}q_{\Timeindex,\Hindex} +\overline{q}_{\Hindex} v_{\Timeindex,\Hindex} -\overline{q}_{\Hindex}\underline{v}_{\Hindex} & \geqslant v_{\Timeindex,\Hindex}q_{\Timeindex,\Hindex}.
\end{align}
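As an illustrative sketch of how these envelopes translate into constraints on the lifted variables, consider the following CVXPY fragment, where the bounds are hypothetical and $X$ plays the role of $\mathbf{X}^{\Timeindex,\Hindex}$ with entries ordered as in~\eqref{eq:htc:X}:
\begin{verbatim}
import cvxpy as cp

v_lo, v_hi = 100.0, 500.0     # hypothetical box bounds on v
q_lo, q_hi = 10.0, 80.0       # hypothetical box bounds on q

X = cp.Variable((3, 3), symmetric=True)   # lifted matrix for [v, q, 1]
v, q = X[0, 2], X[1, 2]
vv, vq = X[0, 0], X[0, 1]

# These constraints would be appended to the relaxed HTC problem:
constraints = [
    X >> 0, X[2, 2] == 1,
    v_lo <= v, v <= v_hi, q_lo <= q, q <= q_hi,
    # McCormick upper bounds on the lifted terms v^2 and v*q:
    vv <= (v_hi + v_lo) * v - v_hi * v_lo,
    vq <= q_lo * v + v_hi * q - v_hi * q_lo,
    vq <= v_lo * q + q_hi * v - q_hi * v_lo,
]
\end{verbatim}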
It follows from the superlinear monotonicity of~\eqref{eq:cost} and the additivity of~\eqref{eq:powerbalance} that hydroelectric power generation is maximized complementarily to thermal power. Therefore, given that $\varepsilon_{qq}< 0$, and $\varepsilon_{qv} \geqslant 0$, then upper bounds on bilinear terms $q_{\Timeindex,\Hindex}v_{\Timeindex,\Hindex}$ are introduced, if $\varepsilon_{qv} > 0$, by means of the following inequalities:
\begin{align}
\left[ \begin{matrix}
0 & -\tfrac{1}{2} & \tfrac{1}{2} \underline{q}_{\Hindex} \\
- \tfrac{1}{2} & 0 & \tfrac{1}{2} \overline{v}_{\Hindex} \\
\tfrac{1}{2} \underline{q}_{\Hindex} &\tfrac{1}{2} \overline{v}_{\Hindex} & 0
\end{matrix} \right] \bullet \mathbf{X} ^{\Timeindex,\Hindex} &\geqslant \overline{v}_{\Hindex}\underline{q}_{\Hindex}, \text{ and}\label{eq:q1m}\\
\left[ \begin{matrix}
0 & -\tfrac{1}{2} & \tfrac{1}{2} \overline{q}_{\Hindex} \\
- \tfrac{1}{2} & 0 & \tfrac{1}{2} \underline{v}_{\Hindex} \\
\tfrac{1}{2} \overline{q}_{\Hindex} &\tfrac{1}{2} \underline{v}_{\Hindex} & 0
\end{matrix} \right] \bullet \mathbf{X} ^{\Timeindex,\Hindex} &\geqslant \underline{v}_{\Hindex}\overline{q}_{\Hindex}. \label{eq:q2m}
\end{align}
Analogously, $v^{2}$ is also upper bounded:
\begin{equation}\label{eq:upperv2}
\left[ \begin{matrix}
-1 & 0 & \tfrac{1}{2} \left(\overline{v}_{\Hindex} +\underline{v}_{\Hindex} \right) \\
0 & 0 & 0 \\
\tfrac{1}{2} \left(\overline{v}_{\Hindex} +\underline{v}_{\Hindex} \right) &0 & 0
\end{matrix} \right] \bullet \mathbf{X} ^{\Timeindex,\Hindex} \geqslant \overline{v}_{\Hindex} \underline{v}_{\Hindex}.
\end{equation}
\section{Numerical experiments}%
\label{sec:results}
A numerical example illustrates the inexactness of the Shor SDP relaxation of Yunan et al.~\cite{6514678} for the simple case in which an average hydraulic efficiency is considered, thus resulting in a non-concave hydroelectric production function such as the one formulated in~\eqref{eq:prodfunc}.
Complete case study data are presented in Appendix~\ref{appendix1}.
The case study uses data from 5 hydro plants in the Brazilian Paranaíba river basin.
A fictitious thermal plant complements the hypothetical case study power system.
Hydro plants GH2, GH4, and GH5 are run-of-river, meaning that their reservoir volumes remain constant for all $\Timeindex$, with monthly discretization over a one-year horizon.
Boundary conditions at maximum storage volume were equally defined for each of the hydro plants.
\figurename~\ref{fig:system} depicts the transmission-unconstrained system configuration with constant $1551.4$~MW load.
Optimization was carried out in Python~3 and CVXPY~\cite{cvxpy} interfaced with the SDPA~\cite{SDPA} solver for semidefinite programming.
A comparison of the lower bounds provided by the different approaches is listed in Table~\ref{tab:comparison}, along with the stationary point found by CCP.
The introduction of McCormick convex envelopes has allowed for an improvement of nearly two orders of magnitude in the objective function lower bound.
\figurename~\ref{fig:iterations} illustrates the objective function values at the end of each of the 9 CCP iterations required for convergence in about 7 seconds.
Table~\ref{tab:powerresult} lists the power generation results for each of the plants.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
height=0.3\textwidth,
width=0.4\textwidth,
xlabel=$k$,
xtick={0,...,9},
ylabel=Obj. Func.,
]
\addplot[only marks,mark=*,black] coordinates {(0,2701900.11389) (1,2730291.78877) (2,2729645.90622)(3,2729579.66422)(4,2729568.20284) (5,2729566.03575) (6,2729565.61775)(7,2729565.53695)(8,2729565.52129) (9,2729565.51821) }
node[pos=0.0, pin=45:SDP + McCormick]{} ;
\end{axis}
\end{tikzpicture}
\caption{Progress of the objective function value at each iteration.}%
\label{fig:iterations}
\end{figure}
\begin{table}
\centering
\scriptsize
\begin{threeparttable}
\caption{Comparison between relaxation strategies.}%
\label{tab:comparison}
\begin{tabular}{lr}
\toprule
Relaxation & Obj.~Func.\\
\midrule
SDP\tnote{1} & 68,083.08\\
SDP + McCormick & 2,701,900.11\\
\midrule
Convex-concave procedure & 2,729,565.52 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1] Yunan et al.~\cite{6514678}.
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{table}[!t]
\caption{Power generation results.} \label{tab:powerresult}
\centering
\scriptsize
\begin{tabular}{lrrrrrr}
\toprule
Month & GH1 & GH2 & GH3 & GH4 & GH5 & GT1 \\
\midrule
1& 117.49 & 98.73 & 217.53 & 304.40 & 37.06 & 776.19\\
2& 91.91 & 77.27 & 162.84 & 237.01 & 29.48 & 952.90\\
3 & 76.47 & 64.30 & 128.83 & 191.17 & 24.68 & 1,065.94\\
4 & 63.61 & 53.72 & 130.50 & 167.80 & 21.74 & 1,114.03\\
5 & 58.69 & 49.47 & 159.59 & 176.83 & 22.55 & 1,084.27\\
6 & 71.11 & 59.93 & 162.42 & 207.44 & 28.34 & 1,022.16\\
7 & 103.14 & 87.44 & 166.17 & 283.50 & 36.44 & 874.72\\
8 & 164.88 & 139.40 & 209.08 & 437.88 & 50.24 & 549.92\\
9 & 232.45 & 194.34 & 250.80 & 560.90 & 61.37 & 251.55 \\
10 & 239.82 & 200.71 & 302.70 & 577.69 & 66.59 & 163.88\\
11 & 224.86 & 188.42 & 375.03 & 580.51 & 66.02 & 116.56 \\
12 & 169.54 & 142.65 & 335.53 & 454.64 & 52.01& 397.03 \\
\bottomrule
\end{tabular}
\end{table}
\section{Exactness of the semidefinite relaxation}%
\label{sec:relaxation}
Although power balance~\eqref{eq:powerbalance} between load and total hydro and thermoelectric generation is a constraint that must be strictly observed~\cite{6514678}, it can be formulated as an inequality that is active at an optimal solution, provided that~\eqref{eq:cost} is a monotonically increasing function and the following condition holds:
\begin{equation}
d_{\Timeindex} \geqslant \sum_{\Tindex \in \NT} \underline{p}_{\Tindex} + \max_{\mathbf{X} ^{\Timeindex, \Hindex} \in \Omega} \sum_{\Hindex \in \NH}\mathbf{H}^{\Hindex} \bullet \mathbf{X} ^{\Timeindex, \Hindex} \; \forall \Timeindex \in \Time \label{eq:maxgen}
\end{equation}
where $\Omega = \left\{ \mathbf{X}^{\Timeindex,\Hindex} : \mathbf{X}^{\Timeindex,\Hindex} \in \eqref{eq:waterbalance}, \eqref{eq:volumelimits}, \eqref{eq:turbinatedlimits} \text{ and } \eqref{eq:hydrosemidefinite} \right\}$.
In other words, condition~\eqref{eq:maxgen} establishes the reasonable assumption that, as long as load demand cannot be met exclusively with hydroelectric power, power balance equations can be exactly relaxed into inequalities in a stationary point.
Unless the production function is empirically defined by means of statistical regression with a concavity constraint, which is subject to overestimation errors since higher-order negative terms are dropped, it is physically reasonable to observe that, in a quadratic formulation of the hydroelectric power generation function with constant (e.g. average) turbine efficiency, the coefficients $\varepsilon_{0}$, $\varepsilon_{v}$, and $\varepsilon_{vv}$ must be zero.
This results in indefiniteness of the function, as the eigenvalues of $\widehat{\mathbf{H}}^{\Hindex}$ are given by:
\begin{equation}
\lambda_{1},\lambda_{2} = \frac{\varepsilon_{qq} \pm \sqrt{\varepsilon_{qq}^{2} + \varepsilon_{qv}^{2}}}{2}.
\end{equation}
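A quick numerical check of this expression (with hypothetical coefficient values) confirms the indefiniteness whenever $\varepsilon_{qv} \neq 0$:
\begin{verbatim}
import numpy as np

eps_qq, eps_qv = -0.004, 0.03   # hypothetical: eps_qq < 0, eps_qv > 0
H_hat = np.array([[0.0, eps_qv / 2],
                  [eps_qv / 2, eps_qq]])

lam_formula = (eps_qq + np.array([-1, 1]) * np.hypot(eps_qq, eps_qv)) / 2
lam_numeric = np.linalg.eigvalsh(H_hat)     # ascending order

assert np.allclose(lam_formula, lam_numeric)
assert lam_numeric[0] < 0 < lam_numeric[1]  # one of each sign: indefinite
\end{verbatim}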
If, however, $\widehat{\mathbf{H}}^{\Hindex}$ is at least negative semidefinite, which requires $\varepsilon_{vv} < 0$, then $P_h$ becomes concave, the HTC problem as formulated in~\eqref{eq:cost}--\eqref{eq:snonnegative} is convex, and thus no convex relaxation is necessary.
Moreover, as shown in lemmas~1 and~2 of~\cite{2017arXiv170307870P}, Shor's semidefinite relaxation of such quadratically constrained quadratic problems (QCQP) is exact.
Despite the purported generality of the relaxation proposed in~\cite{6514678}, all numerical case studies presented therein fall into this convex QCQP formulation.
Its general applicability to non-concave formulations of the hydroelectric power generation function, however, fails the conditions for relaxation exactness of~\cite{Kim2003}, since $\mathbf{V}$, $\mathbf{Q}$, and $\mathbf{P}$ have nonnegative off-diagonal entries.
Furthermore, the sign definiteness conditions of Sojoudi et al.~\cite{7039733} for exact relaxation cannot be confirmed since no assumptions regarding the signs of constraint coefficients are provided by Yunan et al.~\cite{6514678}.
\section*{Acknowledgements}
We thank F. Grasselli for helpful discussions, and S. Das, S. B\"auml, M. Winczewski and K. Horodecki for clarifying discussions about the apparent contradiction of our results with Ref.~\cite{das2019universal}. We also thank an anonymous referee for valuable comments that inspired us to strengthen our results. \noindent This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) under Germany's Excellence Strategy - Cluster of Excellence
Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769.
\section*{Introduction}
In our previous papers \cite{telegin10,telegin11} we evaluated contributions from various beam-pipe components to the longitudinal broadband (BB) impedance of the NESTOR ring, which is under construction at NSC KIPT \cite{zelinsky05}. Both analytical formulas and simulation codes were used for this purpose.
In this paper we present the simulation results for the beam position monitor (BPM) and for the small assembly which incorporates a circular ceramic insertion with circular bellows and RF-shields of elliptical cross sections. The latter is intended for mounting the DC-monitor.
\section*{1.Computer simulations }
Computer simulations were performed with CST STUDIO SUITE 2010 \cite{CST}. Both the transient and the wakefield solvers were used. In transient simulations (CST Microwave Studio - CST MWS) the faces of the model which correspond to beam-pipe cross sections were treated as waveguide ports. The simulated assembly was excited by a current pulse passing through a thin wire placed on the beam axis.
In wakefield simulations (CST Particle Studio - CST PS) both short-range $(s=10\,cm)$ and long-range $(s=5\,m)$ wakes were calculated for a $1\,cm$ bunch with a charge $q=10^{10}C$ in order to consider the possibility of exciting trapped modes in the studied structures. The BB impedance $Z_{\parallel}/n$ was calculated from $Z_W(f)$, computed by the solver from the wake functions through FFT, using the following equation:
\begin{equation}\label{c1}
Z_{\parallel}/n=\frac{1}{f_{cut}}\cdot\int \limits_0^{f_{cut}}Z_W(f)\frac{f_0}{f}df \;,
\end{equation}
where $f_0$ is the revolution frequency. The integration was performed up to the cut-off frequency of the vacuum pipe, $f_{cut}=5.9\,GHz$.
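As an illustrative sketch (not from the original workflow), Eq.~\eqref{c1} can be evaluated directly from sampled impedance data; the handling of the $1/f$ weight near $f=0$ below is a pragmatic assumption:
\begin{verbatim}
import numpy as np

def bb_impedance(f, Z_W, f0, f_cut):
    # Average of Z_W(f)*f0/f over [0, f_cut], per Eq. (1).
    # Samples below f0 are dropped to avoid the 1/f singularity.
    mask = (f >= f0) & (f <= f_cut)
    return np.trapz(Z_W[mask] * f0 / f[mask], f[mask]) / f_cut
\end{verbatim}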
\subsection*{1.1. Beam position monitor}
The beam position monitor (pick-up) consists of four electrostatic electrodes (buttons) placed on the inside surface of the beam-pipe and separated from the latter by a narrow annular slot. The model of the BPM used in the simulations, with the cross section taken through the buttons, is given in Fig.1.
\begin{figure}[h]
\includegraphics [width=350pt] {Fig1.eps}
\caption {\label{fig1} The cross section of BPM model with a mesh used in the simulations }
\end{figure}
The output of the pick-up electrode is connected to a
50 Ohm coaxial line. To simulate propagation of the signal on these outputs, the open boundary condition was imposed on the boundary box face perpendicular to the Z-axis. Considering the two-fold symmetry, a quarter of the pick-up assembly was treated.
Simulations of the BPM with CST Microwave Studio have revealed a group of peaks in the S-parameters in the frequency range of $8\div 11\,GHz$. The $S_{21}$ parameter calculated in the frequency range $0\div 16\,GHz$ is shown in Fig.2.
\vspace{3mm}
\begin{figure}[h]
\includegraphics [width=350pt] {Fig2.eps}
\caption {\label{fig2} $S_{21}$-parameter, calculated with CST MWS }
\end{figure}
The analysis of the field monitors at frequencies $8.4, 9.2$ and $10.5\,GHz$ has shown that all these peaks are associated with excitation of trapped modes ($H_{1j}$) in the pick-up button housing.
Trapped modes in BPM were extensively studied in a number of papers, as a possible threat to precise measurement of beam position in high current storage rings \cite{cameron09,cameron09a}. The frequency of the lowest trapped button mode is given by \cite{cameron09}:
\begin{equation}\label{c2}
f_{button}=\frac{c}{2\pi r} \;
\end{equation}
where $c$ is the speed of light and $r$ is the effective radius of the button. It gives $f_{button}=9.5\,GHz$ for $r=5\,mm$, which is close to the $9.2\,GHz$ peak. The electric field of the trapped mode at $f= 9.2\,GHz$ is presented in Fig.3.
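This estimate is easy to reproduce (a simple arithmetic check, not a simulation result):
\begin{verbatim}
from scipy.constants import c, pi

r = 5e-3                         # effective button radius, m
f_button = c / (2 * pi * r)      # Eq. (2)
print(f"{f_button / 1e9:.1f} GHz")   # -> 9.5 GHz, near the 9.2 GHz peak
\end{verbatim}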
\begin{figure}
\includegraphics [width=350pt] {Fig3.eps}
\caption {\label{fig3} The electric field in the button housing at $f=9.2\,GHz$ }
\end{figure}
The direct results for the longitudinal impedance and loss factor were obtained with CST PS. The wakefield solver calculates the wake function $W(s)$ for a given wake-length $s$ (distance from the driving charge that excites the wake-field). The longitudinal impedance, obtained from the long-range wake function ($s=5\,m$) through FFT, is portrayed in Fig.4.
\begin{figure}
\includegraphics [width=\columnwidth] {Fig4.eps}
\caption {\label{fig4} Longitudinal impedance Z of BPM in the frequency range: a) $0\div18\,GHz$, b) $0\div6\,GHz$ }
\end{figure}
Fig.4a shows the BPM impedance in the frequency range of $0\div18\,GHz$. It presents a smooth curve up to $f=8\,GHz$, but at higher frequencies a number of peak-like irregularities are seen. Only the first peak, at $f=9.2\,GHz$, has an explicitly resonant form and can be attributed to a button trapped mode. The structure of the remaining peaks suggests that they do not correspond to resonant excitation of any BPM elements.
At high-current facilities like B-factories, excitation of button trapped modes can impede precise measurements of beam position. In our case we are interested in the frequency range up to the cut-off frequency, so in Fig.4b the BPM impedance is shown in the frequency range of $0\div6\,GHz$. The BB impedance obtained with Eq.\eqref{c1} amounts to $0.007\,Ohm$ for a single pick-up (4 pick-up electrodes), which agrees well with the analytical estimate ($Z_{\parallel}/n=0.01\,Ohm$) obtained earlier \cite{telegin11}. The loss factor calculated by the solver amounts to $0.006\,V/pC$.
\subsection*{1.2. DC-monitor assembly}
The direct current (DC) monitor is a coil mounted outside the beam chamber on a cylindrical ceramic insertion ($d_{int}=75\,mm$) incorporated into the elliptic beam chamber $(27\times79\,mm)$. To reduce the broadband impedance of such a joint, RF-shields are placed inside the ceramic ring from two sides, leaving a $6\,mm$ gap in the center of the ring, through which the DC-coil is excited. The simulated assembly is presented in Fig.5.
\begin{figure}
\includegraphics [width=\columnwidth] {Fig5.eps}
\caption {\label{fig5} The simulated DC-monitor assembly: a)general view; b)RF-shields, connective ferrule, disk-membrane }
\end{figure}
The model consists of two RF-shields (short and long), a disc-membrane with holes that fixes the short RF-shield inside the vacuum chamber, a ferrule that connects the short RF-shield with the chamber of regular cross section, and a number of vacuum volumes. The latter correspond mainly to internal volumes of the vacuum chamber with different cross sections, with one exception: the volume around the ceramic ring is incorporated to ensure the open boundary between the ceramics and the DC-monitor coils. The dished bellows are represented in the assembly by a cylinder with a radius equal to the external radius of the bellows.
On the right of the picture the short RF-shield is connected to the vacuum chamber by the ferrule with elliptic cross section. On the left the model is bounded by the cross section cut through the long RF-shield and the circular vacuum chamber in the region of the pumping unit. This is done in order to minimize the model size and to bring it into correspondence with our computational abilities. In the picture the ceramic ring is shown in yellow and the internal part of the vacuum chamber in blue. In the simulations the model is inscribed into a boundary box which is filled with PEC material, thus imposing PEC boundary conditions on the chamber walls. At all box faces the open boundary is imposed, i.e. electromagnetic energy can propagate through them. Considering the model configuration, the real open boundary is realized on the ceramic insertion and on the model flanks (X-axis). In CST MWS simulations the waveguide ports are assigned at these flanks and a thin wire is stretched along the assembly axis.
\begin{figure}
\includegraphics [width=\columnwidth] {Fig6.eps}
\caption {\label{fig6} The S-parameters of DC-monitor assembly: a) $S_{11}$; b) $S_{21}$.}
\end{figure}
In Fig.6 the S-parameters of the DC-monitor assembly, excited through the ferrule side port (port 1), are presented in the frequency range of $0\div6\,GHz$. The resonant peaks seen in the picture (the most prominent at $672\,MHz$) result from the excitation of e.m. fields in the volume between the RF-shields and the chamber walls through the gap.
Both the short-range and long-range wake functions $W(s)$ were calculated for the considered assembly; they are portrayed in Fig.7. The reference pulse (normalized charge distribution in the bunch) is also shown. The shape of the short-range wake indicates that it is not purely inductive and has a substantial resistive component.
\begin{figure*}[h]
\includegraphics [width=\columnwidth] {Fig7.eps}
\caption {\label{fig7} The wake function W(s) of DC-monitor assembly: a) $s=10\,cm$; b) $s=5\,m$.}
\end{figure*}
The longitudinal impedance of the DC-monitor assembly, obtained from the long-range wake function, is presented in Fig.8. One can see that the low-Q resonances in the frequency range of $0\div6\,GHz$ are very similar to those in the $S_{11}$ parameter obtained with CST MWS.
\begin{figure}
\includegraphics [width=\columnwidth] {Fig8.eps}
\caption {\label{fig8} The longitudinal impedance of DC-monitor assembly: a) Re\emph{Z}, Im\emph{Z} and Abs\emph{Z} in the frequency range $0\div18\,GHz$; b) Abs\emph{Z} in the range $0\div6\,GHz$.}
\end{figure}
Calculations give for the DC-monitor assembly $Z_\parallel/n=0.71\,Ohm$ and $k_{loss}= 0.21\,V/pC$. These are the second-largest estimates obtained so far for the NESTOR ring components, after those of the RF-cavity. It should also be noted that the impedance of the DC-monitor assembly shows a steep rise at frequencies $f>12\,GHz$, which is caused by the slots in the RF-shields.
The main contributions to the broadband impedance of the DC-monitor assembly come from the low-Q resonances mentioned above. We therefore studied the dependence of $Z_\parallel/n$ on the gap width between the RF-shields. The results are presented in Table 1.
\begin{center}
\textbf{\emph{Table 1.}} {\it Contributions to broadband impedance from DC-monitor assembly }
\end{center}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}|c|c|c|}\hline
Gap width, mm & $|Z_{\parallel}/n|$, $\Omega$ & $k_{loss}$, V/pC \\
\hline
6 & 0.71 & 0.21 \\
4 & 0.64 & 0.15 \\
2 & 0.54 & 0.11 \\
\hline
\end{tabular*}
\vspace{3 mm}
It is seen from the table that decreasing the gap width from 6 mm to 2 mm reduces $|Z_{\parallel}/n|$ and $k_{loss}$ by factors of 1.3 and 2, respectively.
\section*{2.The broadband impedance of the \emph {NESTOR} ring. Results}
The contributions of various NESTOR ring components to the BB impedance, obtained up to the present time, are summarized in Tables 2 and 3.
\begin{center}
\textbf{\emph{Table 2.}} {\it Longitudinal broadband impedance budget}
\end{center}
\begin{tabularx} {\linewidth} {|l|c|X|X|X|}
\hline
 & & \multicolumn{3}{c|}{$|Z_{\parallel}/n|$, Ohm} \\
\cline{3-5}
\raisebox{1.5ex}[1mm][1mm]{Component} & \raisebox{1.5ex}[1mm][1mm]{N} & Analytic & CST MWS & CST PS \\
\hline
RF-cavity & 1 & 1.40 & 1.29 & \\
Resistive wall & 1 & 0.13 & & \\
Dipole chamber & 4 & & $<0.20$ & 0.13 \\
BPM & 2 & 0.02 & & 0.014 \\
Welding joint & 8 & 0.04 & & 0.21 \\
DC-monitor & 1 & & & 0.71 \\
Strip line & 1 & & & 0.01 \\
\hline
Total & \multicolumn{4}{c|}{2.61} \\
\hline
\end{tabularx}
\vspace{3 mm}
\begin{center}
\textbf{\emph{Table 3.}} {\it Loss factors of beam pipe components}
\end{center}
\begin{tabularx} {\linewidth} {|l|c|X|X|}
\hline
 & & \multicolumn{2}{c|}{$k_{loss}$, V/pC} \\
\cline{3-4}
\raisebox{1.5ex}[1mm][1mm]{Component} & \raisebox{1.5ex}[1mm][1mm]{N} & Analytic & CST PS \\
\hline
RF-cavity & 1 & 1.02 & 1.04 \\
Resistive wall & 1 & 0.06 & \\
Dipole chamber & 4 & & 0.008 \\
BPM & 2 & & 0.012 \\
Welding joint & 8 & & 0.08 \\
DC-monitor & 1 & & 0.21 \\
Strip line & 1 & & 0.003 \\
\hline
Total & \multicolumn{3}{c|}{1.41} \\
\hline
\end{tabularx}
\vspace{3 mm}
It should be mentioned that for the time being we have no estimates
for the following components: i) the beam-pipe section for the
crossing point of the electron and laser beams; ii) the injection
section (inflector). At the commissioning stage the first element
will be replaced with a straight section.
The injection section is essentially non-symmetric; it presumably
gives a substantial contribution to the broadband impedance and
therefore requires study with 3D time-domain codes.
\section*{3.The longitudinal high frequency impedance of \emph {NESTOR} ring. }
We also calculated the contributions from all ring elements considered to date to the longitudinal impedance $Z(f)$ in the frequency range $0\div16\,GHz$, which corresponds to the charge distribution spectrum of a $1\,cm$ bunch. The frequency content of $ReZ(f)$ and $ImZ(f)$ is presented in Fig.9 together with the charge distribution amplitude spectrum. All contributions are calculated via long-range wakes except the one from the dipole chambers.
\begin{figure}[h]
\includegraphics [width=\columnwidth] {Fig9.eps}
\caption {\label{fig9} The frequency content of longitudinal impedance: a) Im\emph{Z}, b)Re\emph{Z} in the frequency range $0\div16\,GHz$.}
\end{figure}
We did not include the contribution from the high-Q resonances of the RF-cavity in order not to clutter the figure. The contribution from the dipole chambers is calculated from the short-range wake by multiplying the impedance of a chamber segment with one hole by the number of holes. The absence of interference between holes up to the cut-off frequency, which justifies this approach, was verified earlier \cite{telegin11}. At frequencies $f > 6\,GHz$ a number of interference peaks appear above the spectrum background in the long-range calculation (see Fig.10). The peak positions and intensities depend on the number of holes, so the impedance of the whole dipole chamber cannot be obtained by summing up the impedances of the chamber segments. The short-range calculations give a smooth line for the impedance spectrum which roughly corresponds to the background line in the impedance spectrum derived from the long-range wake.
\begin{figure}[h]
\includegraphics [width=\columnwidth] {Fig10.eps}
\caption {\label{fig10} The impedance of the dipole chamber segment derived from the long-range wake ($s=5\,m$) for various number of holes.}
\end{figure}
One can see from Fig.9 that the imaginary part of the ring impedance shows a steep rise at high frequencies (mainly due to the holes in the dipole chambers). The real part of the impedance is substantially smaller than the imaginary part. The main contributions to the total ring loss factor, which is defined as the convolution of the real part of the impedance with the bunch power spectrum, come from the DC-monitor assembly and the RF-cavity.
\section*{4.Conclusions}
We estimated the contributions from two more ring components, namely the BPM and the DC-monitor assembly, to the NESTOR ring impedance budget with CST Studio Suite$^{TM}$. The value of $Z_{\parallel}/n$ for the BPM is in good agreement with the analytical estimate obtained earlier. The contributions to both $Z_{\parallel}/n$ and $k_{loss}$ from the DC-monitor assembly are found to be rather large for this kind of component. We consider the possibility of decreasing these contributions by reducing the gap between the RF-shields in the assembly. Further efforts have to be undertaken to estimate the contributions from two still unevaluated components: the inflector and the beam crossing chamber.
\section{Introduction}
Supersymmetry (SUSY) is a leading candidate for physics beyond the Standard Model (SM). It is the only extension of the bosonic spacetime Poincar\'e symmetry to include a fermionic spacetime. Superstring theory, the currently prevailing paradigm of quantum gravity, generally includes SUSY, though not necessarily at the weak scale. The cancellation of the quadratic divergence in the Higgs mass-squared radiative correction, requiring fine-tuning in the SM, strongly motivates SUSY at the TeV scale. TeV-scale SUSY also unites the gauge coupling constants at the GUT scale and provides an attractive cold dark matter candidate, the lightest neutralino, when R-parity is conserved. The simplest supersymmetric extension of the SM is the Minimal Supersymmetric Standard Model (MSSM).
The MSSM suffers from the $\mu$-problem \cite{muproblem}. The $\mu$-parameter is the only dimensionful parameter in the SUSY-conserving sector. Naively, in a top-down approach, one would expect the $\mu$-parameter to be either zero or at the Planck scale, ${\cal O}(10^{19})$ GeV. At tree level, the MSSM gives the relation \cite{ref:sugrawgc}
\begin{equation}
\frac{1}{2} M_Z^2={m_{d}^2 - m_{u}^2 \tan^2\beta \over \tan^2\beta -1} - \mu^2,
\end{equation}
where $m_d$ and $m_u$ are the soft mass parameters for the down-type and up-type Higgs, respectively. With the soft parameters at the EW/TeV scale, $\mu$ must be at the same scale, while LEP constraints on the chargino mass require $\mu$ to be non-zero \cite{ref:lepsusy}. A simple solution is to promote the $\mu$-parameter to a dynamical field in extensions of the MSSM that contain an additional singlet scalar field that does not interact with MSSM fields other than the two Higgs doublets. Extended models thereby circumvent the need for a fine tuning of the $\mu$-parameter to the electroweak scale.
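As a numerical illustration of this relation (with hypothetical soft mass values, chosen only to show the scale), $\mu$ indeed comes out at the electroweak scale:
\begin{verbatim}
import numpy as np

MZ = 91.19                    # GeV
tan_beta = 10.0               # hypothetical
m_d2 = 500.0**2               # hypothetical soft mass^2, GeV^2
m_u2 = -(200.0**2)            # hypothetical (negative) soft mass^2, GeV^2

mu2 = (m_d2 - m_u2 * tan_beta**2) / (tan_beta**2 - 1) - 0.5 * MZ**2
print(np.sqrt(mu2))           # ~197 GeV: mu must sit at the EW/TeV scale
\end{verbatim}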
The discovery of Higgs bosons is a primary goal of the Tevatron and the Large Hadron Collider (LHC) experiments. Although Higgs boson signals at colliders have been extensively studied, most of these studies were based on the assumption that the Higgs bosons occur only in doublet fields \cite{review:run2:atlas:cms}. The few case studies of the Higgs sector in the extensions of the MSSM have not been as comprehensive as the SM and MSSM Higgs studies \cite{ref:studies,ref:htoaa}. With the addition of singlet scalar fields, the properties of the Higgs bosons can be substantially different from those in the SM or the MSSM. Moreover, with SUSY, there are also one or more extra neutralinos and there may be an extra neutral gauge boson in some models.
In this paper we consider models with an extra Higgs singlet field that yield a dynamical solution to the $\mu$-problem. The dynamical field that gets a vacuum expectation value (VEV) generates an effective $\mu$-parameter that is associated with a new symmetry. These models have a third CP-even Higgs boson and, in some cases, an extra CP-odd Higgs boson. The mixing with the extra scalar state alters the masses and couplings of the physical Higgs bosons. We evaluate the phenomenological consequences of an extra scalar for the Higgs masses, couplings, decays and production. We include one-loop radiative corrections to the Higgs masses, which to a good approximation turn out to be common among the models at this order for the neutral and charged Higgs boson sector. While performing our systematic study on the Higgs sector alone, we consider indirect consequences from the neutralino sector in anticipation of a later full treatment including both sectors. Detailed studies of the neutralino sector in these models have been done by examining the lightest neutralino \cite{xMSSM_neutralino}. We translate the constraints from LEP experiments on the SM (lightest MSSM) Higgs into limits on the CP-even (CP-even and CP-odd) Higgs boson masses in the extended models and include constraints from the LEP chargino mass limit, the invisible $Z$ width and the $Z-Z'$ mixing angle.
The extended models of present interest\footnote{Many of the ideas of some of the models appeared already in Ref. \cite{Fayet}. For a recent review of supersymmetric singlet models, see Ref. \cite{cpnsh}.} are the Next-to-Minimal Supersymmetric Standard Model (NMSSM) \cite{NMSSM}, the Minimal Nonminimal Supersymmetric Standard Model (MNSSM) or the nearly Minimal Supersymmetic Standard Model (nMSSM) \cite{nMSSM}, the $U(1)'$-extended Minimal Supersymmetric Standard Model (UMSSM) \cite{UMSSM}, and the Secluded $U(1)'$-extended Minimal Supersymmetric Standard Model (sMSSM) \cite{SUMSSM}. A common $\mu$-generating term, $h_s \hat H_u \cdot \hat H_d \hat S$, is contained in the superpotentials of these models, which are listed in Table \ref{table:model}. After the $S$ field gets a VEV, the effective $\mu$-parameter is identified as
\begin{equation}
\mu_{\rm eff} = h_s \langle S \rangle,
\end{equation}
where $\langle S \rangle$ denotes the VEV of the singlet field.
The defining feature of each model is the symmetry that is allowed by the superpotential. The NMSSM has a discrete $\mathbb Z_3$ symmetry, allowing the $S^3$ term \cite{NMSSM,NMSSM_Higgs}. With any discrete symmetry, the possibility of domain walls exists. It has been shown that domain walls can be viewed as a source of dark energy \cite{darkenergydomainwall}. In the NMSSM, the equation of state of dark energy, $p=w\rho$, is predicted to have $w = -2/3$, which is disfavored by a recent analysis of WMAP data that places $w=-1.062^{+0.128}_{-0.079}$ \cite{Spergel:2006hy}.
The domain walls may be eliminated if the $\mathbb Z_3$ symmetry is broken by higher dimensional operators, but these may lead to very large destabilizing tadpole operators \cite{domainwall}; one possibility for avoiding this problem is described in Ref. \cite{NMSSMwodomain}. The nMSSM with a $\mathbb Z_5^R$ or $\mathbb Z_7^R$ symmetry has a tadpole term of $\hat S$ that breaks the discrete symmetries and is thus free from domain walls \cite{nMSSM, nMSSM_Higgs}. The harmful tadpole divergences can destabilize the gauge hierarchy, but the discrete symmetries $\mathbb Z^R_5$ or $\mathbb Z^R_7$ allow the divergences to exist only at six and seven-loop order, respectively \cite{nMSSM_Higgs}. At these orders, the divergences are suppressed at scales below $M_{\rm Planck}$.
An extra $U(1)$ gauge symmetry, $U(1)'$, is motivated by many models beyond the SM, including grand unified theories (GUT) \cite{gutu1mot,ref:hewettrizzo}, extra dimensions \cite{U1_xd}, superstrings \cite{U1_string}, little Higgs \cite{U1_littleHiggs}, dynamical symmetry breaking \cite{U1_strongdynamics} and the Stueckelberg mechanism \cite{U1_stueckelberg}. The UMSSM and sMSSM each contains a $U(1)'$ gauge symmetry and its gauge boson, $Z'$, which can mix with the SM $Z$ after the symmetries are broken \cite{UMSSM, UMSSM_Higgs}. While the continuous $U(1)'$ symmetry is free from domain wall constraints, the UMSSM may require exotic fields \cite{Erler:2000wu,Batra,Morrissey:2005uz} to cancel chiral anomalies related to the $U(1)'$ symmetry\footnote{Exotic fermions can be avoided in a family non-universal $U(1)'$ model \cite{Demir:2005ti}. }. There are constraints on the UMSSM from the strict experimental limits on $Z-Z'$ mixing, which are at the per-mil level \cite{LEPmixingangle}. The $Z'$ mass must be above 600-900 GeV to satisfy the Tevatron dilepton search results, with the precise experimental limit dependent on the $U(1)'$ model \cite{ref:zpmasslim}. With a leptophobic $Z'$, these mass limits are evaded.
\begin{table}[t]
\caption{Higgs bosons of the MSSM and several of its extensions. We denote the single CP-odd state in the MSSM and UMSSM by $A_2^0$ for easier comparison with the other models.
\label{table:model}}
\begin{tabular}{|r|c|l|l|l|c|}
\hline
Model~~ & Symmetry & ~~~~~~~Superpotential & ~~~~~~~~~CP-even & ~~~~CP-odd & Charged\\
\hline
MSSM & -- & $\mu \hat H_u \cdot \hat H_d$ & $H_1^0, H_2^0$ & $A_2^0$ & $H^\pm$ \\
NMSSM & $\mathbb Z_3$ & $h_s \hat S \hat H_u \cdot \hat H_d + \frac{\kappa}{3} \hat S^3$ & $H_1^0, H_2^0, H_3^0$ & $A_1^0, A_2^0$ & $H^\pm$\\
nMSSM & $\mathbb Z^R_5, \mathbb Z^R_7$ & $h_s \hat S \hat H_u \cdot \hat H_d + \xi_F M_{\rm n}^2 \hat S$ & $H_1^0, H_2^0, H_3^0$ & $A_1^0, A_2^0$ & $H^\pm$\\
UMSSM & $U(1)'$ & $h_s \hat S \hat H_u \cdot \hat H_d$ & $H_1^0, H_2^0, H_3^0$ & $A_2^0$ & $H^\pm$ \\
sMSSM & $U(1)'$ & $h_s \hat S \hat H_u \cdot \hat H_d + \lambda_s \hat S_1 \hat S_2 \hat S_3$ & $H_1^0, H_2^0, H_3^0, H_4^0, H_5^0, H_6^0$ & $A_1^0, A_2^0, A_3^0, A_4^0$ & $H^\pm$\\
\hline
\end{tabular}
\end{table}
The Higgs field content of the above listed models is given in Table \ref{table:model}. In the MSSM, the usual 2 Higgs doublets give two CP-even ($H^0_1$, $H^0_2$), a CP-odd ($A_2^0$), and a pair of charged ($H^\pm$) Higgs bosons\footnote{We ignore the possibility of CP-violating mixing effects.}. The extended models include additional CP-even Higgs bosons and CP-odd Higgs bosons or a $Z'$ gauge boson, depending on the model. The sMSSM contains three additional singlets that allow six CP-even and four CP-odd Higgs states. However, the additional Higgs fields decouple if $\lambda_s$ is small and the vacuum expectation values $\langle S_1\rangle,\langle S_2\rangle,\langle S_3\rangle$ are large. The decoupling limit eliminates the $D$-terms in the mass-squared matrix for the $S,H^0_d$, and $H^0_u$ fields and yields a model similar to the nMSSM with three CP-even and two CP-odd Higgs bosons. This is shown in Appendix \ref{apx:sumssmdecoup}. We shall therefore refer to the nMSSM as n/sMSSM since the results of the nMSSM correspond to the sMSSM in the decoupling regime. The charged Higgs sector for all of these models remains the same as in the MSSM due to the assumption that the number of Higgs doublets is unchanged.
We present an overview of the Higgs mass-squared matrices including radiative corrections due to top and stop loops in Section \ref{sect:massmtx}. We discuss the experimental and theoretical constraints applied in Section \ref{sect:constraints} and the details of the parameter scans in Section \ref{sect:scan}. In Section \ref{sect:results}, we discuss the Higgs spectra and couplings for various models, while implications for collider phenomenology are presented in Section \ref{sect:collpheno}. Finally, we summarize our results in Section \ref{sect:concl}. We provide details of decoupling of the sMSSM in Appendix \ref{apx:sumssmdecoup}. The derivation of the mass-squared matrices of each model are presented in Appendix \ref{apx:Higgs} and the neutralino mass matrices are given in Appendix \ref{apx:neut}. In Appendix \ref{apx:masses}, important limits in the Higgs sector are addressed, while additional information on the heavier states is given in Appendix \ref{apx:addparm}.
\section{Higgs Mass Matrices}\label{sect:massmtx}
\subsection{Tree-level}\label{sect:tree-level}
The tree-level Higgs mass-squared matrices are found from the potential $V$, which is a sum of the $F$-term, $D$-term and soft terms in the Lagrangian, as follows.
\begin{eqnarray}
V_F &=& |h_s H_u\cdot H_d+\xi_F M_{\rm n}^2+ \kappa S^2|^2 + |h_s S|^2 \left(|H_d|^2+|H_u|^2 \right), \\
V_D &=& \frac{G^2}{8}\left( |H_d|^2-|H_u|^2 \right)^2+ \frac{g_{2}^2}{2} \left( |H_d|^2|H_u|^2-|H_u \cdot H_d|^2 \right) \nonumber\\
&+& \frac{{g_{1'}}^2}{2}\left(Q_{H_d} |H_d|^2+Q_{H_u} |H_u|^2+Q_{S} |S|^2\right)^2, \\
V_{\rm soft}&=&m_{d}^{2}|H_d|^2 + m_{u}^{2}|H_u|^2+ m_s^{2}|S|^2 + \left( A_s h_s S H_u\cdot H_d + {\kappa \over 3} A_{\kappa} S^3+\xi_S M_{\rm n}^3 S + h.c. \right).
\label{eq:potential}
\end{eqnarray}
Here, the two Higgs doublets with hypercharge $Y=-1/2$ and $Y=+1/2$, respectively, are
\begin{equation}
H_d = \left( \begin{array}{c} H_d^0 \\ H^- \end{array} \right), \qquad
H_u = \left( \begin{array}{c} H^+ \\ H_u^0 \end{array} \right),
\end{equation}
and $H_u \cdot H_d = \epsilon_{ij} H_u^i H_d^j$. For a particular model, the parameters in $V$ are understood to be turned off appropriately according to Table \ref{table:model}:
\begin{eqnarray}
{\rm NMSSM}&:& g_{1'}=0, M_{\rm n} = 0,\nonumber\\
{\rm nMSSM}&:& g_{1'}=0, \kappa=0, A_\kappa = 0, \\
{\rm UMSSM}&:& M_{\rm n} = 0, \kappa = 0, A_\kappa = 0.\nonumber
\end{eqnarray}
The couplings $g_1,g_2$, and ${g_{1'}}$ are for the $U(1)_Y,SU(2)_L$, and $U(1)'$ gauge symmetries, respectively, and the parameter $G$ is defined as $G^2=g_1^2+g_2^2$.
The NMSSM model-dependent parameters are $\kappa$ and $A_\kappa$, while the free nMSSM parameters are $\xi_F$ and $\xi_S$, with $M_{\rm n}$ fixed near the SUSY scale. The model dependence of the UMSSM is expressed by the $D$-term that contains the $U(1)'$ charges of the Higgs fields, $Q_{H_d}, Q_{H_u}$ and $Q_S$. In general, these charges are free parameters with the restriction\footnote{Additional restrictions on the charges of the ordinary and exotic particles come from the cancellation of anomalies.} that $Q_{H_d}+Q_{H_u}+Q_{S}=0$ to preserve gauge invariance. In any particular $U(1)'$ construction, the charges have specified values. We assume the charges of an $E_6$ model that breaks via the chain $E_6 \to SO(10)\times U(1)_\psi \to SU(5)\times U(1)_\chi \times U(1)_\psi$ \cite{ref:hewettrizzo}. At some high energy scale, the $U(1)_{\chi}\times U(1)_{\psi}$ symmetry is assumed to break to a single $U(1)'$\footnote{This is the same breaking scheme as in the Exceptional Supersymmetric Standard Model (ESSM) \cite{ESSM}. In the ESSM, among three pairs of SU(2) doublet scalars with MSSM Higgs quantum numbers and three singlet scalars, only one pair of doublets and one singlet develop VEVs due to an extra $Z_2^H$ symmetry and an imposed hierarchical structure of the Yukawa interactions, yielding a model similar to the UMSSM.}. This breaking scenario results in the charges
\begin{equation}
Q_{H_d} = {-1\over \sqrt{10}} \cos \theta_{E_6}-{1\over\sqrt6} \sin\theta_{E_6},\qquad
Q_{H_u} = {1\over \sqrt{10}} \cos \theta_{E_6}-{1\over\sqrt6} \sin\theta_{E_6},
\end{equation}
where $\theta_{E_6}$ is the mixing angle between the two $U(1)$s and is the only model-dependent parameter.
The $F$-term and the soft terms contain the model dependence of the NMSSM and n/sMSSM. The soft terms $A_{\kappa}$ of the NMSSM and $\xi_S M_{\rm n}^3$ of the n/sMSSM are new to $V_{\rm soft}$. The $B$-term of the MSSM is expressed in $V_{\rm soft}$ as $A_s h_sSH_u\cdot H_d$ after we identify
\begin{equation}
B \mu = A_s \mu_{\rm eff}.
\end{equation}
The other terms in $V_{\rm soft}$ are the usual MSSM soft mass terms.
The minimum of the potential is found explicitly using the minimization conditions given in Appendix \ref{apx:Higgs}. These conditions allow us to express the soft mass parameters in terms of the VEVs of the Higgs fields. At the minimum of the potential, the Higgs fields are expanded as
\begin{equation}
H_d^0 = \frac{1}{\sqrt{2}} \left( v_d + \phi_d + i \varphi_d \right), \quad
H_u^0 = \frac{1}{\sqrt{2}} \left( v_u + \phi_u + i \varphi_u \right), \quad
S = \frac{1}{\sqrt{2}} \left( s + \sigma + i \xi \right),
\label{eq:fieldexp}
\end{equation}
with $v^2 \equiv v_d^2+v_u^2 = (246{\rm ~GeV})^2$ and $\tan\beta \equiv v_u / v_d$.
We write the Higgs mass-squared matrix in a compact form that includes all the extended models under consideration. The CP-even tree-level matrix elements in the $H_d^0, H_u^0, S$ basis are:
\begin{eqnarray}
\label{eq:cpetree1}
\left({\mathcal{M}_{+}^0}\right)_{11} &=& \left[\frac{G^2}{4} + Q_{H_d}^{2} {g_{1'}}^{2}\right] v_d^2 + \left(\frac{h_s A_s}{\sqrt{2}} + \frac{h_s \kappa s}{2} + \frac{h_s \xi_F M_{\rm n}^2}{s}\right) \frac{v_u s}{v_d}, \\
\left({\mathcal{M}_{+}^0}\right)_{12} &=& -\left[\frac{G^2}{4} - h_s^{2} - Q_{H_d} Q_{H_u} {g_{1'}}^{2}\right] v_d v_u - \left(\frac{h_s A_s}{\sqrt{2}} + \frac{h_s \kappa s}{2} + \frac{ h_s \xi_F M_{\rm n}^2}{s}\right) s,\\
\left({\mathcal{M}_{+}^0}\right)_{13} &=& \left[h_s^{2} + Q_{H_d} Q_{S} {g_{1'}}^{2}\right] v_d s - \left(\frac{h_s A_s}{\sqrt{2}} + h_s \kappa s\right) v_u,\\
\left({\mathcal{M}_{+}^0}\right)_{22} &=& \left[\frac{G^2}{4} + Q_{H_u}^{2} {g_{1'}}^{2}\right] v_u^2 + \left(\frac{h_s A_s}{\sqrt{2}} + \frac{h_s \kappa s}{2} + \frac{h_s \xi_F M_{\rm n}^2}{s} \right) \frac{v_d s}{v_u}, \\
\left({\mathcal{M}_{+}^0}\right)_{23} &=& \left[h_s^{2} + Q_{H_u} Q_{S} {g_{1'}}^{2}\right] v_u s - \left(\frac{h_s A_s}{\sqrt{2}} + h_s \kappa s\right) v_d, \\
\left({\mathcal{M}_{+}^0}\right)_{33} &=& \left[Q_S^{2} {g_{1'}}^{2} + 2 \kappa^2 \right] s^2 + \left(\frac{h_s A_s}{\sqrt{2}} - \frac{\sqrt{2} \xi_S M_{\rm n}^3}{v_d v_u}\right) \frac{v_d v_u}{s} + \frac{\kappa A_\kappa}{\sqrt{2}} s.
\label{eq:cpetree2}
\end{eqnarray}
The tree-level CP-odd matrix elements are:
\begin{eqnarray}
\left({\mathcal{M}_{-}^0}\right)_{11} &=& \left(\frac{h_s A_s}{\sqrt{2}} + \frac{ h_s \kappa s}{2} + \frac{h_s \xi_F M_{\rm n}^2}{s}\right) \frac{v_u s}{v_d}, \\
\left({\mathcal{M}_{-}^0}\right)_{12} &=& \left(\frac{h_s A_s}{\sqrt{2}} + \frac{ h_s \kappa s}{2} + \frac{h_s \xi_F M_{\rm n}^2}{s}\right) s, \\
\left({\mathcal{M}_{-}^0}\right)_{13} &=& \left(\frac{h_s A_s}{\sqrt{2}} - h_s \kappa s\right) v_u, \\
\left({\mathcal{M}_{-}^0}\right)_{22} &=& \left(\frac{h_s A_s}{\sqrt{2}} + \frac{ h_s \kappa s}{2} + \frac{h_s \xi_F M_{\rm n}^2}{s}\right) \frac{v_d s}{v_u}, \\
\left({\mathcal{M}_{-}^0}\right)_{23} &=& \left(\frac{h_s A_s}{\sqrt{2}} - h_s \kappa s\right) v_d, \\
\left({\mathcal{M}_{-}^0}\right)_{33} &=& \left(\frac{h_s A_s}{\sqrt{2}} + 2 h_s \kappa s - \frac{\sqrt{2} \xi_S M_{\rm n}^3}{v_d v_u}\right) \frac{v_d v_u}{s} - \frac{3 \kappa A_\kappa}{\sqrt{2}} s.
\end{eqnarray}
The tree-level charged Higgs mass-squared matrix elements are:
\begin{eqnarray}
\left({\mathcal{M}^\pm}\right)_{11} &=& {v_u^2 \left(g_2^2-2 h_s^2\right) \over 4}+\left({1 \over \sqrt 2} A_s h_s s+{1\over 2} h_s \kappa s^2+h_s \xi_F M_{\rm n}^2\right){v_u \over v_d},\\
\left({\mathcal{M}^\pm}\right)_{12} &=& -{v_d v_u \left(g_2^2-2 h_s^2\right) \over 4}- \left({1 \over \sqrt 2} A_s h_s s+{1\over 2} h_s \kappa s^2+h_s \xi_F M_{\rm n}^2\right),\\
\left({\mathcal{M}^\pm}\right)_{22} &=& {v_d^2 \left(g_2^2-2 h_s^2\right) \over 4}+ \left({1 \over \sqrt 2} A_s h_s s+{1\over 2} h_s \kappa s^2+h_s \xi_F M_{\rm n}^2\right){v_d\over v_u}.
\end{eqnarray}
The physical Higgs boson masses are found by diagonalizing the mass-squared matrices, ${\cal M}_D=R {\cal M} R^{-1}$, where ${\cal M}$ also includes the radiative corrections discussed below. The rotation matrices for the diagonalization of the CP-even and CP-odd mass-squared matrices, $R_{\pm}^{ij}$, and for the charged Higgs matrix, ${\cal R}^{ij}$, may then be used to construct the physical Higgs fields:
\begin{eqnarray}
H_{i} &=& R_{+}^{i1} \phi_d+R_{+}^{i2} \phi_u+R_{+}^{i3} \sigma, \\
A_{i} &=& R_{-}^{i1} \varphi_d+R_{-}^{i2} \varphi_u+R_{-}^{i3} \xi, \\
H^\pm_{i} &=& {\cal R}^{i1} H^-+{\cal R}^{i2} H^+ ,
\end{eqnarray}
where the physical states are ordered by their mass as $M_{H_1}\le M_{H_2}\le M_{H_3}$ and $M_{A_1}\le M_{A_2}$. Many features of the models are apparent by inspection of the mass-squared matrix elements. We discuss these aspects in Section \ref{sect:results}.
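For concreteness, the diagonalization and mass ordering can be carried out numerically. The following minimal sketch (Python with NumPy, using an arbitrary illustrative matrix rather than a specific model point) returns the ordered masses and the rotation matrix $R$ whose rows define the physical states:
\begin{verbatim}
import numpy as np

def diagonalize_mass_squared(M):
    """Diagonalize a real symmetric mass-squared matrix.

    Returns the masses (ascending, as M_H1 <= M_H2 <= ...) and the
    rotation R with R @ M @ R.T diagonal, so that the physical fields
    are H_i = sum_j R[i, j] * (interaction-basis field)_j.
    """
    w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    if np.any(w < 0.0):
        raise ValueError("tachyonic state: negative mass-squared")
    return np.sqrt(w), V.T        # rows of R are the eigenvectors

# Arbitrary illustrative CP-even mass-squared matrix in GeV^2:
M_plus = np.array([[ 1.2e4, -3.0e3,  1.0e3],
                   [-3.0e3,  2.0e4,  2.5e3],
                   [ 1.0e3,  2.5e3,  9.0e5]])
masses, R = diagonalize_mass_squared(M_plus)
print(masses)                     # M_H1 <= M_H2 <= M_H3 in GeV
\end{verbatim}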
\subsection{Radiative Corrections}
An accurate analysis of the Higgs masses requires loop corrections. The dominant one-loop contributions are from top and scalar top loops, due to their large Yukawa coupling. In the UMSSM, the gauge couplings are small compared to the top quark Yukawa coupling, so the one-loop gauge contributions can be dropped. Corrections unique to the NMSSM and n/sMSSM begin only at the two-loop level. Thus, the model-dependent contributions are insignificant at one-loop order, and the usual one-loop SUSY top and stop loops are universal across these models. A similar approach has been taken in studies of extended Higgs sectors with many additional singlet fields \cite{Ham:2004pd}. These one-loop corrections to the potential can be found from the Coleman-Weinberg potential \cite{Coleman:1973jx} and are reviewed in Appendix \ref{apx:loop}.
The mass-squared matrix elements become
\begin{equation}
{\cal M}_{\pm} = {\cal M}_{\pm}^0+{\cal M}_{\pm}^1,
\end{equation}
where the radiative corrections to the CP-even mass-squared matrix elements are given by
\begin{eqnarray}
\label{eq:cpemassmtx1}
({\mathcal{M}_{+}^1})_{11} &=& k \left[ \left( \frac{({\widetilde m}^2_1)^2}{(m_{\widetilde t_1}^2 - m_{\widetilde t_2}^2)^2} {\mathcal G} \right) v_d^2 + \left( \frac{h_s h_t^2 A_t}{2 \sqrt{2}} \mathcal{F} \right) \frac{v_u s}{v_d} \right], \\
({\mathcal{M}_{+}^1})_{12} &=&k \left[ \left( \frac{{\widetilde m}^2_1 {\widetilde m}^2_2}{(m_{\widetilde t_1}^2 - m_{\widetilde t_2}^2)^2} {\mathcal G} + \frac{h_t^2 {\widetilde m}^2_1}{m^2_{\widetilde t_1} + m^2_{\widetilde t_2}} (2-{\mathcal G}) \right) v_d v_u - \left( \frac{h_s h_t^2 A_t}{2 \sqrt{2}} \mathcal{F} \right)s \right], \\
({\mathcal{M}_{+}^1})_{13} &=&k \left[ \left(\frac{{\widetilde m}^2_1 {\widetilde m}^2_s}{(m_{\widetilde t_1}^2 - m_{\widetilde t_2}^2)^2} {\mathcal G} + \frac{h_s^2 h_t^2}{2} {\mathcal F} \right) v_d s - \left( \frac{h_s h_t^2 A_t}{2 \sqrt{2}} \mathcal{F} \right) v_u\right], \\
({\mathcal{M}_{+}^1})_{22} &=& k \left( \frac{({\widetilde m}^2_2)^2}{(m_{\widetilde t_1}^2 - m_{\widetilde t_2}^2)^2} {\mathcal G} + \frac{2 h_t^2 {\widetilde m}^2_2}{m^2_{\widetilde t_1} + m^2_{\widetilde t_2}} (2-{\mathcal G}) + h_t^4 \ln \frac{m^2_{\widetilde t_1} m^2_{\widetilde t_2}}{m_t^4} \right) v_u^2 \\
&+& k \left( \frac{h_s h_t^2 A_t}{2 \sqrt{2}} \mathcal{F} \right) \frac{v_d s}{v_u}, \nonumber \\
({\mathcal{M}_{+}^1})_{23} &=& k \left[ \left(\frac{{\widetilde m}^2_2 {\widetilde m}^2_s}{(m_{\widetilde t_1}^2 - m_{\widetilde t_2}^2)^2} {\mathcal G} + \frac{h_t^2 {\widetilde m}^2_s}{m^2_{\widetilde t_1} + m^2_{\widetilde t_2}} (2-{\mathcal G}) \right) v_u s - \left( \frac{h_s h_t^2 A_t}{2 \sqrt{2}} \mathcal{F} \right)v_d\right], \\
({\mathcal{M}_{+}^1})_{33} &=& k \left[ \left(\frac{({\widetilde m}^2_s)^2}{(m_{\widetilde t_1}^2 - m_{\widetilde t_2}^2)^2} {\mathcal G} \right) s^2 + \left( \frac{h_s h_t^2 A_t}{2 \sqrt{2}} \mathcal{F} \right) \frac{v_d v_u}{s}\right],
\label{eq:cpemassmtx2}
\end{eqnarray}
where $k={3\over(4\pi)^2}$ and the loop factors are
\begin{equation}
{\cal G}(m^2_{\tilde t_1}, m^2_{\tilde t_2}) = 2\left[1- \frac{m^2_{\tilde t_1}+ m^2_{\tilde t_2}}{m^2_{\tilde t_1}- m^2_{\tilde t_2}} \log \left( {m_{\tilde t_1} \over m_{\tilde t_2}} \right)\right], \quad\quad\quad {\cal F} = \log \left( \frac{m^2_{\tilde t_1} m^2_{\tilde t_2}}{Q^4}\right) - {\cal G}(m^2_{\tilde t_1}, m^2_{\tilde t_2}).
\end{equation}
Here we have defined
\begin{eqnarray}
\widetilde{m}_1^{2} &=& h_t^2 \mu_{\rm eff} \left(\mu_{\rm eff} - A_t \tan\beta \right),\\
\widetilde{m}_2^{2} &=& h_t^2 A_t \left(A_t - \mu_{\rm eff} \cot\beta \right),\\
\widetilde{m}_s^{2} &=& \frac{v_d^2}{s^2} h_t^2 \mu_{\rm eff} \left(\mu_{\rm eff} - A_t \tan\beta \right),
\end{eqnarray}
with $Q$ being the $\overline{\text{DR}}$ renormalization scale and $A_t$ the stop trilinear coupling.
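As a numerical cross-check of these definitions, the loop factors are straightforward to evaluate; the sketch below (Python with NumPy; the stop masses and $Q$ are illustrative inputs, not a fit) implements ${\cal G}$ and ${\cal F}$ exactly as written above:
\begin{verbatim}
import numpy as np

k = 3.0 / (4.0 * np.pi)**2   # loop factor k defined in the text

def G(m2_1, m2_2):
    """Loop function G; arguments are the stop masses squared.
    Note log(m1/m2) = 0.5 * log(m1^2/m2^2); the degenerate limit
    m2_1 -> m2_2 gives G -> 0 and needs the smooth limit."""
    return 2.0 * (1.0 - (m2_1 + m2_2) / (m2_1 - m2_2)
                  * 0.5 * np.log(m2_1 / m2_2))

def F(m2_1, m2_2, Q):
    """Loop function F = log(m1^2 m2^2 / Q^4) - G."""
    return np.log(m2_1 * m2_2 / Q**4) - G(m2_1, m2_2)

# Illustrative inputs: stop masses 800 and 1200 GeV, Q = 300 GeV
m2_1, m2_2, Q = 800.0**2, 1200.0**2, 300.0
print(G(m2_1, m2_2), F(m2_1, m2_2, Q))
\end{verbatim}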
The corrections to the CP-odd mass-squared matrices are given by
\begin{equation}
(\mathcal{M}^1_{-})_{ij} = \frac{ h_s v_d v_u s}{\sqrt 2 v_i v_j}\frac{k h_t^2 A_t}{2} {\cal F}(m^2_{\tilde t_1}, m^2_{\tilde t_2}),
\label{eq:cpomassmtx}
\end{equation}
where we identify $v_1\equiv v_d, v_2\equiv v_u$, and $v_3 \equiv s$. These one-loop corrections agree with those of \cite{NMSSM_Higgs,nMSSM_Higgs,UMSSM_Higgs}.
The one-loop corrections to the charged Higgs mass are equivalent to those in the MSSM and can be significant for large $\tan \beta$. The charged Higgs boson in the MSSM has a tree-level mass
\begin{equation}
(M^{(0)}_{H^\pm})^2=M_W^2+M_Y^2,
\end{equation}
and the extended-MSSM charged Higgs boson mass is
\begin{equation}
(M^{(0)}_{H^\pm})^2 = M_W^2+M_Y^2-{h_s^2 v^2\over 2}+h_s{ \sqrt2 ( 2\xi_F M_{\rm n}^2+\kappa s^2) \over\sin 2 \beta},
\end{equation}
where $M_Y^2={ \sqrt 2 h_s s A_s \over \sin 2 \beta}$ is the tree-level mass-squared of the MSSM CP-odd Higgs boson. Large $M_Y$ (or $M_A$ in the MSSM) yields a large charged Higgs mass and corresponds to the MSSM decoupling limit, in which the Higgs sector becomes SM-like. Radiative corrections in the MSSM shift the mass by
\begin{equation}
(M^{(1)}_{H^\pm})^2={h_s A_t s k h_t^2 {\cal F}\over \sqrt 2 \sin 2 \beta} + \delta M_{H^\pm}^2,
\end{equation}
where, after including the $\tan \beta$-dependent terms, $\delta M_{H^\pm}^2$ is given by the leading-logarithm result of the full one-loop calculation \cite{ref:chghiggscorr}
\begin{equation}
\delta M_{H^\pm}^2 ={N_{\rm c} g^2 \over 32 \pi^2 M_W^2}\left({2 m_t^2 m_b^2\over \sin^2\beta \cos^2\beta}-M_W^2\left({m_t^2\over \sin^2\beta}+{m_b^2\over \cos^2\beta}\right)+{2\over3}M_W^4\right) \log{ M_{SUSY}^2\over m_t^2},
\end{equation}
where $N_{\rm c}=3$ is the number of colors and $M_{SUSY}$ is the supersymmetric mass scale, taken to be 1 TeV. Model-dependent terms enter at tree-level, giving a radiatively corrected charged Higgs mass of
\begin{equation}
M_{H^\pm}^2 = M_W^2+M_Y^2-{h_s^2 v^2\over 2}+h_s{ \sqrt2 ( 2\xi_F M_{\rm n}^2+\kappa s^2) \over\sin 2 \beta} +\left({h_s A_t s k h_t^2 {\cal F}\over \sqrt 2 \sin 2 \beta}+\delta M_{H^\pm}^2 \right).
\label{eq:chghiggs}
\end{equation}
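A minimal numerical sketch of Eq. (\ref{eq:chghiggs}) is given below (Python with NumPy). All inputs, including the loop factor ${\cal F}$, are illustrative assumptions rather than scan output; the $\delta M_{H^\pm}^2$ piece follows the leading-logarithm expression above:
\begin{verbatim}
import numpy as np

def delta_MHpm2(tan_b, m_t=172.5, m_b=4.2, M_W=80.4, g2=0.65,
                M_SUSY=1000.0):
    """Leading-log one-loop MSSM shift delta M_{H^pm}^2 (GeV^2)."""
    Nc = 3
    beta = np.arctan(tan_b)
    s2, c2 = np.sin(beta)**2, np.cos(beta)**2
    pref = Nc * g2**2 / (32.0 * np.pi**2 * M_W**2)
    bracket = (2.0 * m_t**2 * m_b**2 / (s2 * c2)
               - M_W**2 * (m_t**2 / s2 + m_b**2 / c2)
               + (2.0 / 3.0) * M_W**4)
    return pref * bracket * np.log(M_SUSY**2 / m_t**2)

def MHpm2(tan_b, h_s, s, A_s, A_t, F, kappa=0.0, xi_F=0.0, M_n=0.0,
          v=246.0, M_W=80.4, h_t=1.0):
    """Radiatively corrected charged Higgs mass-squared, Eq. (chghiggs)."""
    k = 3.0 / (4.0 * np.pi)**2
    sin2b = np.sin(2.0 * np.arctan(tan_b))
    M_Y2 = np.sqrt(2.0) * h_s * s * A_s / sin2b
    tree = (M_W**2 + M_Y2 - 0.5 * h_s**2 * v**2
            + h_s * np.sqrt(2.0)
              * (2.0 * xi_F * M_n**2 + kappa * s**2) / sin2b)
    loop = h_s * A_t * s * k * h_t**2 * F / (np.sqrt(2.0) * sin2b)
    return tree + loop + delta_MHpm2(tan_b)

# Illustrative point (all values assumed, F taken as input):
print(np.sqrt(MHpm2(tan_b=10, h_s=0.5, s=500, A_s=500, A_t=1000, F=5.0)))
\end{verbatim}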
\section{Constraints}\label{sect:constraints}
Both theoretical and experimental constraints are important in ensuring that the models are realistic. In the following, we list the constraints that we apply in obtaining the allowed Higgs mass spectra.
To generate the Higgs boson masses, we scan over the relevant parameters of each model. Theoretical constraints eliminate large regions of the parameter space. To avoid solutions corresponding to saddle points of the potential, we require that the mass-squared eigenvalues be non-negative, i.e. $M_{A_i}^2, M_{H_i}^2,M_{H^\pm}^2 \ge 0$. We also exclude solutions with $m^2_{\widetilde t_i} <0$.
\subsection{Direct constraints}
\label{sect:dirlimits}
The direct constraints are provided by collider data. Currently, LEP gives the best experimental bound on the mass of the SM Higgs boson, $h$, of 114.4 GeV at 95\% C.L. \cite{Sopczak:2005mc}. We translate this into a limit on the mass of the lightest Higgs boson of the extended models by using the $ZZh$ coupling limits from LEP, reproduced in Fig. \ref{fig:lep}a of Section \ref{sect:distresults}, which consider all SM particle decay modes down to $M_{h} = 12$ GeV \cite{Sopczak:2005mc}\footnote{These limits actually assume standard model branching ratios for the $H_i$, which are dominantly into $b \bar b$ and $\tau^+ \tau^-$ in the relevant mass range. As discussed in Section \ref{sect:decaybf}, for some of the parameter values in the extended models the dominant decays are into (invisible) neutralinos, or into two light CP-odd states, and for those points the constraint in Figure \ref{fig:lep}a does not strictly apply. However, there are also quite stringent limits on the invisible $H_i$ decay modes \cite{Abdallah:2003ry}, and (weaker) limits on the decays into two CP-odd states which subsequently decay into $b \bar b$ or $\tau^+ \tau^-$ \cite{Abdallah:2004wy,Abbiendi:2002in}. These have not been given for the entire kinematic ranges of interest here, so we will simply take the conservative approach of allowing only those points satisfying the $ZZH_i$ coupling limit in Figure \ref{fig:lep}a.}. The $ZZH_i$ coupling relative to the SM coupling is given by the factor
\begin{equation}
\xi_{ZZH_i}= \left({g_{ZZH_i} / g^{SM}_{ZZh}}\right)^2 = (R_{+}^{i1} \cos \beta + R_{+}^{i2} \sin \beta)^2.
\label{eq:zzh}
\end{equation}
Since the $ZZH_i$ coupling in extended models is reduced by doublet-singlet mixing effects, it is possible to have Higgs bosons lighter than the SM bound of 114.4 GeV. The reach of the $ZZH_i$ coupling limit extends only to 12 GeV, below which we do not enforce this constraint. However, this low mass region is well constrained by bounds on $M_A$ and $M_h$ in the MSSM discussed below. The LEP bound is also applied to $H_2$ and $H_3$ since a heavier Higgs boson may violate the bound even if the lightest does not.
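The coupling factor of Eq. (\ref{eq:zzh}) is simple to evaluate for a given rotation matrix; the sketch below (Python with NumPy, using a hypothetical, approximately orthogonal $R_{+}$ for illustration) computes $\xi_{ZZH_i}$ for all three CP-even states. The tabulated LEP exclusion curve itself is experimental data and is not reproduced here:
\begin{verbatim}
import numpy as np

def xi_ZZH(R_plus, beta):
    """Scaled ZZH_i couplings of Eq. (zzh), one per CP-even state."""
    return (R_plus[:, 0] * np.cos(beta)
            + R_plus[:, 1] * np.sin(beta))**2

# Hypothetical rotation matrix (rows = mass eigenstates H_1..H_3):
R_plus = np.array([[0.10,  0.20,  0.97],   # H_1 mostly singlet
                   [0.70,  0.70, -0.14],
                   [0.70, -0.68,  0.05]])
print(xi_ZZH(R_plus, beta=np.arctan(10.0)))  # tan(beta) = 10
\end{verbatim}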
Another channel of relevance from LEP is $Z\to A_i H_j$ with $A_i\to b\bar b$ and $H_j\to b\bar b$. Current limits place the lightest possible CP-even and CP-odd MSSM masses at $M_{H_1}=92.9$ GeV and $M_{A_2}=93.4$ GeV, respectively, and are calculated assuming maximal stop mixing, yielding the most conservative limit on the lightest Higgs masses in the MSSM \cite{ref:Amasslep}. An estimate of the corresponding limits in extended models may be obtained by comparing the expected production cross section of the extended-MSSM models at the maximum LEP energy, $\sqrt s=209$ GeV, to that of the MSSM \cite{SUMSSM_Higgs}. At this energy the mass limits of the CP-even and CP-odd Higgs bosons provide an upper bound on the cross section of 40 fb. In practice, we find that the LEP $Z\to A_i H_j$ constraint eliminates a significant fraction of the points generated with a low CP-odd Higgs mass. In Fig. \ref{fig:lep}b, we show $\cos^2(\alpha-\beta)$, the prefactor of the $ZAh$ coupling, where $\alpha$ is the rotation angle required to diagonalize the MSSM CP-even Higgs mass-squared matrix, versus the CP-odd Higgs mass for the MSSM. A strong $ZAh$ coupling results in an enhanced $Ah$ production cross section. In the extended-MSSM models, we calculate the cross section for $e^+ e^- \to A_i H_j$, where $A_i$ is the lightest massive CP-odd Higgs of that model. If it exceeds the LEP limit of 40 fb, the generated point fails this constraint. Mixing effects which maximize the $ZA_i H_j$ coupling in the MSSM also result in a lower value of $M_{A_2}$, so that the LEP limit implies a lower bound on $M_{A_2}$. With the two complementary limits on the neutral Higgs bosons and the limit on the charged Higgs mass ($M_{H^\pm}>78.6\text{ GeV}$ from LEP \cite{ref:chghiggslim}), the Higgs sectors of the MSSM and the extended-MSSM models are rather well constrained.
\subsection{Indirect constraints}
While we focus on the Higgs sector in our analysis, indirect constraints from the neutralino and chargino sectors also need to be considered. The lightest chargino mass is currently limited by LEP to be $M_{\chi^\pm} > 104$ GeV at 95\% C.L.\cite{ref:lepsusy}. The chargino masses are determined by the diagonalization of
\begin{eqnarray}
{\cal M}_{\chi^\pm} = \left( \begin{array} {c c}
M_2 & \sqrt 2 M_W \sin \beta \\
\sqrt 2 M_W \cos \beta & \mu_{\rm eff}
\end{array} \right).
\end{eqnarray}
The $SU(2)_L$ gaugino mass, $M_2$, which enters the chargino sector, does not directly affect the Higgs sector, but the lower bound on $M_{\chi^\pm}$ does constrain possible parameter values.
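Because the chargino mass matrix is not symmetric, the physical masses are its singular values; a minimal sketch (Python with NumPy, illustrative inputs) is:
\begin{verbatim}
import numpy as np

def chargino_masses(M2, mu_eff, tan_b, M_W=80.4):
    """Chargino masses from the singular values of the
    (non-symmetric) chargino mass matrix given in the text."""
    beta = np.arctan(tan_b)
    M = np.array([[M2, np.sqrt(2.0) * M_W * np.sin(beta)],
                  [np.sqrt(2.0) * M_W * np.cos(beta), mu_eff]])
    return np.sort(np.linalg.svd(M, compute_uv=False))

m1, m2 = chargino_masses(M2=300.0, mu_eff=200.0, tan_b=10.0)
print(m1 > 104.0)   # LEP bound on the lighter chargino
\end{verbatim}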
Precision electroweak data also provide an upper bound on the new contributions to the invisible $Z$ decay width of 1.9 MeV at 95\% C.L.\footnote{This is based on the constraint on new physics contributions to the invisible $Z$ width, $\Gamma^{new}_{inv}= -2.65 \pm 1.5$ MeV \cite{ref:pdg}, renormalizing the probability distribution to require that the true value is positive. Strictly speaking, such decays may not be invisible, and slightly weaker constraints would be obtained using the total or hadronic widths. We use the invisible width to be conservative and for simplicity, since it is also applied to decays of the $Z$ into neutralino pairs.} Contributions to this decay width include $Z\to A_i H_j$ for $M_{A_i}+M_{H_j} \le M_Z$ and $Z\to Z^* H_i \to f \bar f H_i$ for $M_{H_i}\le M_Z$. The decay widths are given by
\begin{eqnarray}
\Gamma_{Z\to A_i H_j} &=& {\alpha\over 48 x_W (1-x_W)} M_Z \lambda^{3/2}\left({M_{A_i}^2 / M_Z^2},{M_{H_j}^2 / M_Z^2}\right)\left( R^{i1}_{+} R^{j1}_{-}-R^{i2}_{+} R^{j2}_{-}\right)^2,\\
{d\Gamma_{Z\to f\bar f H_i} \over dx_{H_i}}&=& \Gamma_{Z \to SM} {\alpha\over 4\pi x_W(1-x_W)}{1+{2\over3} y_{H_i}^2-x_{H_i}+{1\over 12}x_{H_i}^2 \over (x_{H_i}-y_{H_i}^2)^2} \sqrt{x_{H_i}^2-4y_{H_i}^2}.
\end{eqnarray}
where $\lambda(x,y)=1+x^2+y^2-2(xy+x+y)$, $x_{H_i}=2 E_{H_i}/M_Z$, $y_{H_i}=M_{H_i}/M_Z$, and the SM $Z$ decay width is $\Gamma_{Z\to SM} = 2.50$ GeV \cite{ref:pdg}. Here we assume massless fermions in the $Z\to f\bar f H_i$ decay, which is a good approximation at low $M_{H_i}$. This decay mode complements the $ZZH_i$ coupling constraint quite well, as it is valid below the reach of the LEP limit on $\xi_{ZZH_i}$. Since the masses of $H_2$, $H_3$, and $A_2$ are typically larger than $M_Z$, we only consider the $Z \to H_1 A_1$ and $Z\to f \bar f H_1$ decay modes\footnote{Singlet mixing may allow $H_2$ or $A_2$ to be slightly lighter than $M_Z$, but the decay is still kinematically inaccessible.}.
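A hedged numerical sketch of the $Z\to A_i H_j$ width is given below (Python with NumPy; the mixing combination $R^{i1}_{+} R^{j1}_{-}-R^{i2}_{+} R^{j2}_{-}$ is taken as an input, and $\alpha$, $x_W$ are fixed to representative values):
\begin{verbatim}
import numpy as np

def lam(x, y):
    """Two-body kinematic function lambda(x, y) used in the text."""
    return 1.0 + x**2 + y**2 - 2.0 * (x * y + x + y)

def gamma_Z_AH(m_A, m_H, cAH, M_Z=91.19, alpha=1.0/128.0, xW=0.231):
    """Width for Z -> A_i H_j in GeV;
    cAH = R+^{i1} R-^{j1} - R+^{i2} R-^{j2} is an input here."""
    if m_A + m_H >= M_Z:
        return 0.0
    x, y = (m_A / M_Z)**2, (m_H / M_Z)**2
    return (alpha / (48.0 * xW * (1.0 - xW)) * M_Z
            * lam(x, y)**1.5 * cAH**2)

# Illustrative masses and mixing; compare to the 1.9 MeV bound:
print(gamma_Z_AH(20.0, 40.0, cAH=0.3) * 1e3, "MeV")
\end{verbatim}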
The neutralino sector also provides constraints on the allowed parameter space via $Z$ boson decay. If $M_{\chi^0_1} \leq M_Z/2$, then $Z$ decays into neutralino pairs and this decay contributes to the invisible $Z$-decay width. Since the $Z$ does not couple to the singlino, the superpartner of the Higgs singlet, the decay width formula in the extended models is similar to that of the MSSM, except for mixing effects \cite{Hesselbach:2001ri}. The $Z$ decay width to neutralino pairs, when kinematically accessible, is
\begin{equation}
\Gamma_{Z \to \chi^0_1 \chi^0_1} = \frac{g_2^2+g_1^2}{96 \pi M_Z^2} (|N_{13}|^2-|N_{14}|^2)^2\left(M_Z^2-(2 M_{\chi^0_1})^2\right)^{3/2}.
\end{equation}
The neutralino rotation matrix elements, $N_{ij}$, are found by diagonalizing the model-dependent neutralino mass matrices in Appendix \ref{apx:neut}.
The $Z-Z'$ mixing angle,
\begin{equation}
\alpha_{ZZ'} = \frac{1}{2} \tan^{-1} \left( {2 M^2_{ZZ'} \over M_{Z'}^2 - M_Z^2}\right),
\label{eq:mixing}
\end{equation}
is also constrained by electroweak precision data to be less than ${\cal O}(10^{-3})$, where the exact value is dependent on the $U(1)'$ model. The $Z'$ mass parameters are
\begin{equation}
M^2_{Z'} = {g_{1'}}^2(Q_{H_d}^2 v_d^2+Q_{H_u}^2 v_u^2+Q_{S}^2 s^2),\quad M^2_{ZZ'} = {g_{1'}} \sqrt{g_1^2+g_2^2} \left( v_d^2 Q_{H_d} - v_u^2 Q_{H_u}\right).
\label{eq:mzp}
\end{equation}
Eq. (\ref{eq:mixing}) restricts which types of $Z'$ models and associated Higgs sector parameters are allowed; it typically translates into a high value of $s$, at the TeV scale. There do, however, exist isolated points that allow a suppression of $\alpha_{ZZ'}$ at low $s$, such as the following:
\begin{enumerate}
\item If $Q_{H_d},Q_{H_u}$ have the same sign, a cancellation occurs at $\tan \beta = \sqrt{Q_{H_d} \over Q_{H_u}}$.
\item If $Q_{H_u}$ is small and $\tan \beta$ is large, the mixing term is suppressed.
\end{enumerate}
The $Z'$ mass is also constrained \cite{LEPmixingangle,ref:zpmasslim}, but the limits depend strongly on the quark and lepton couplings of the model and can be evaded entirely in the leptophobic $Z'$ case \cite{ref:zpleptphob}. In any case, the large-$s$ limit yields a $Z'$ with a mass typically large enough to avoid existing experimental constraints. Therefore, we only apply the $Z-Z'$ mixing constraint in our study.
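The mixing-angle constraint is easy to evaluate from Eqs. (\ref{eq:mixing}) and (\ref{eq:mzp}); the sketch below (Python with NumPy, with illustrative charges and couplings) also verifies the cancellation at $\tan\beta=\sqrt{Q_{H_d}/Q_{H_u}}$ noted in the first item above:
\begin{verbatim}
import numpy as np

def alpha_ZZp(g1p, Q_Hd, Q_Hu, Q_S, vd, vu, s,
              g1=0.36, g2=0.65, M_Z=91.19):
    """Z-Z' mixing angle of Eq. (mixing) from Eq. (mzp);
    the gauge couplings are representative assumed values."""
    M2_Zp = g1p**2 * (Q_Hd**2 * vd**2 + Q_Hu**2 * vu**2
                      + Q_S**2 * s**2)
    M2_ZZp = g1p * np.sqrt(g1**2 + g2**2) * (vd**2 * Q_Hd
                                             - vu**2 * Q_Hu)
    return 0.5 * np.arctan(2.0 * M2_ZZp / (M2_Zp - M_Z**2))

# Cancellation check: same-sign charges, tan(beta) = sqrt(Q_Hd/Q_Hu)
Q_Hd, Q_Hu = 0.2, 0.8
Q_S = -(Q_Hd + Q_Hu)                 # gauge-invariance condition
tan_b = np.sqrt(Q_Hd / Q_Hu)
v = 246.0
vd = v / np.sqrt(1 + tan_b**2)
vu = v * tan_b / np.sqrt(1 + tan_b**2)
print(alpha_ZZp(0.46, Q_Hd, Q_Hu, Q_S, vd, vu, s=500.0))  # ~0
\end{verbatim}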
Constraints due to the possibility of electroweak baryogenesis have been previously explored in the NMSSM \cite{NMSSM}, the sMSSM \cite{ref:Kang} and the nMSSM \cite{Menon:2004wv}. The cubic ($A_s$) term in the tree-level potential makes it much easier to achieve the needed strongly first-order phase transition in these models than in the MSSM \cite{Carena:2002ss}. However, we do not consider CP-violating phases in the Higgs sector, which is also a necessary condition for baryogenesis. Furthermore, there are other possibilities for baryogenesis. Therefore, electroweak baryogenesis constraints are not included here.
\section{Parameter scans}\label{sect:scan}
To generate the Higgs boson masses, we perform both grid and random scans over the allowed parameter space of each model. In the random scan, we evaluate 500{,}000 points in the available parameter space for each model. Our grid scan gives a reproducible catalogue of the Higgs masses of each model. However, due to the large number of parameters, a finely spaced grid on individual parameters is not feasible. The results from the grid scan serve as a useful guide to the allowed Higgs boson masses but do not provide definitive upper or lower mass limits.
\begin{table}[t]
\caption{Parameter ranges in scans. (a) Model-independent parameters. (b) Model-dependent parameters. Parameters not scanned assume the values $M_{\widetilde Q} = M_{\widetilde U} = 1$ TeV and $Q = 300$ GeV.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
&$\tan \beta$ & $s$ & $\mu_{\rm eff}$ & $A_s$ & $A_t$ & $M_2$\\ \hline
Range & 1 , 50 & 50 , 2000 GeV & 50 , 1000 GeV & 0 , 1 TeV & -1 , 1 TeV & -500 , 500 GeV \\ \hline
Step size &-- & 100 GeV & 100 GeV & 100 GeV & 250 GeV & 100 GeV \\ \hline
\end{tabular}
\end{center}
(a)
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
&$\kappa$ & $A_\kappa$& $\xi_S$ & $\xi_F$ & $\theta_{E_6}$ \\ \hline
Range &-0.75 , 0.75& -1 , 1 TeV & -1 , 1 & -1 , 1 & $0 , \pi$ \\ \hline
Step size &0.25 & 250 GeV & 0.2 & 0.2 & ${\pi\over10}$ \\ \hline
\end{tabular}
\end{center}
(b)
\label{table:scan}
\end{table}%
The model-independent parameters scanned over are $\tan \beta$, $s$, $\mu_{\rm eff}$, $A_s$, $A_t$, and $M_2$, where we always assume gaugino mass unification $M_1=M_{1'}={5 g_1^2 \over 3 g_2^2} M_2$. The masses $M_{\widetilde U}$ and $M_{\widetilde Q}$ are the soft masses of the up-type squarks and doublet-type squarks, respectively, and are fixed at 1 TeV; $M_{1'}$ is the mass of the $Z'$-ino in the UMSSM. The model-dependent parameters are $\kappa$ and $A_\kappa$ for the NMSSM, $\xi_F$ and $\xi_S$ for the n/sMSSM, and $\theta_{E_6}$ for the UMSSM.
In the parameter scans, we veto points that fail the direct and indirect constraints of Section \ref{sect:constraints}. We choose the phase convention $A_s>0$, $\mu_{\rm eff}>0$, with all the VEVs real and positive. We limit $h_s$ to be real and positive and allow the gaugino mass $M_2$ and coupling $\kappa$ to be real with either sign, although more generally these parameters could be complex. With complex parameters, CP violation could occur. If phases were included, the Higgs sector would be further complicated with up to five states for the NMSSM and n/sMSSM (four states for the UMSSM) that can intermix. The Higgs sector with an arbitrary number of additional singlets and CP violation was studied in Ref. \cite{Ham:2004pd}.
The couplings run as the energy scale is varied. Naturalness and the requirement that the couplings remain perturbative at the GUT scale limit $0.1\le h_s \le 0.75$, or $0.1\le \sqrt{\kappa^2+h_s^2}< 0.75$ for the NMSSM. The couplings in the n/sMSSM are real and fixed in the interval $-1 \le \xi_{S},\xi_{F}\le 1$ with $M_{\rm n} = 500$ GeV. We also constrain $\mu_{\rm eff}$ to the range $50 \le \mu_{\rm eff} \le 1000$ GeV to avoid fine tuning. A summary of the scan ranges over model parameters is given in Table \ref{table:scan}. For the grid scan, the step size for each parameter is given, and we specifically scan $\tan \beta=1,1.5,2,10,50$.
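A minimal skeleton of the random scan is sketched below (Python with NumPy); the ranges follow Table \ref{table:scan}, while the single veto shown, the perturbativity window on $h_s$, merely stands in for the full set of theoretical and experimental cuts of Section \ref{sect:constraints}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Model-independent ranges from Table (scan); model-dependent
# parameters (kappa, A_kappa, xi_S, xi_F, theta_E6) omitted here.
RANGES = {"tan_b": (1.0, 50.0),     "s":      (50.0, 2000.0),
          "mu_eff": (50.0, 1000.0), "A_s":    (0.0, 1000.0),
          "A_t": (-1000.0, 1000.0), "M2":     (-500.0, 500.0)}

def draw_point():
    """One random point in the model-independent parameter space."""
    p = {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
    p["h_s"] = np.sqrt(2.0) * p["mu_eff"] / p["s"]  # mu_eff = h_s s/sqrt(2)
    return p

def passes_theory(p):
    """Stand-in veto: only the perturbativity window on h_s;
    the mass-matrix and collider checks would follow here."""
    return 0.1 <= p["h_s"] <= 0.75

points = [p for p in (draw_point() for _ in range(500_000))
          if passes_theory(p)]
print(len(points), "points pass the (illustrative) cuts")
\end{verbatim}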
\section{Discussion of the Higgs Mass spectra}\label{sect:results}
Throughout most of the parameter space, model-distinguishing features are apparent in the Higgs masses. However, different models can produce similar masses and mixings in certain limits. Characteristics that are a direct consequence of how the singlet states mix affect the limits placed on the lightest Higgs boson mass.
\subsection{Common Characteristics}
\label{sect:comchar}
If the model-dependent parameters in the Higgs mass-squared matrices are set to zero, we obtain common mass-squared matrices and an additional symmetry in each model. For the n/sMSSM, this is a Peccei-Quinn (PQ) symmetry, which protects the mass of one CP-odd Higgs. Depending on which parameters vanish, the NMSSM may have either a PQ or a $U(1)_R$ symmetry \cite{Dobrescu:2000yn}, the global invariance of supersymmetry. Near these limits, the $A_1$ mass in these extended models is small, allowing decay modes involving light CP-odd Higgs bosons; this is addressed in more detail in Section \ref{sect:decaybf}.
In the UMSSM in the $g_{1'}\to 0$ limit, the gauged $U(1)'$ turns into a global $U(1)_{PQ}$ symmetry for the matter fields. A massless CP-odd state, $A_1$, emerges, which is just the Goldstone boson of the broken $U(1)$ while the other CP-odd state, present for $g_{1'}\neq 0$, remains massive. The $Z'$ decouples and remains massless in this limit. In Table \ref{tbl:limits}, we summarize the common limits of the extended models.
\begin{table}[t]
\begin{center}
\caption{Common Higgs mass-squared matrix limits of various models and their effects. Note that in the UMSSM, the $U(1)$ is a global symmetry and not a remaining $U(1)'$ symmetry. In these limiting cases, two of the CP-even Higgs bosons of each model are equivalent to the MSSM Higgs bosons if $s \gg \mu_{\rm eff}$, while the third decouples and is heavy for the NMSSM, or light for the n/sMSSM or UMSSM.}
\begin{tabular}{|c|c|c|c|}
\hline
Model & Limits & Symmetry & Effects \\
\hline
MSSM & $B\to0$& $U(1)_{PQ}$ & $M_{A_2} \to 0$\\
NMSSM & $\kappa, A_\kappa \to 0$ & $U(1)_{PQ}$ & $M_{A_1} \to 0$ \\
NMSSM & $A_s, A_\kappa \to 0$ & $U(1)_R$ & $M_{A_1} \to 0$ \\
n/sMSSM & $\xi_F$, $\xi_S \to 0$ & $U(1)_{PQ}$ & $M_{A_1} \to 0$ \\
UMSSM & ${g_{1'}} \to 0$ & $U(1)_{PQ}$ & $M_{Z'},M_{A_1}\to 0$\\
\hline
\end{tabular}
\label{tbl:limits}
\end{center}
\end{table}%
In the PQ limits (and for the UMSSM for all $g_{1'}$), the CP-odd Higgs mass-squared matrix factors into the tree-level matrix times a one-loop correction. Such a form is required by the $U(1)$ symmetries, which guarantee the existence of two massless CP-odd Goldstone bosons, one eaten by the $Z$ and the second by the $Z'$ in the UMSSM, even after radiative corrections are included. Thus, $M_A^2$ is enhanced by a factor of $1+\frac{k h_t^2 A_t}{2 A_s}{\cal F}$, where the ${\cal F}$ term is the loop contribution, i.e.,
\begin{equation}
M^2_A = \frac{h_s A_s}{\sqrt{2}} \left( 1+k \frac{h_t^2}{2} \frac{A_t}{A_s} {\mathcal F} \right) \left( \frac{v_d v_u}{s} + \frac{v_u s}{v_d} + \frac{v_d s}{v_u} \right).
\label{eq:maradcorr}
\end{equation}
Effectively the soft mass is increased by
\begin{equation}
A_s \to A_s+k {h_t^2 \over 2} A_t {\cal F}
\label{eq:Ahscale}
\end{equation}
to promote the tree-level mass of the CP-odd Higgs boson to the radiatively corrected one. In the $U(1)_R$ limit of the NMSSM, the radiative correction to the CP-odd masses vanishes.
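To gauge the size of this shift, a short numerical estimate (Python; $h_t$, $A_t$, and ${\cal F}$ set to illustrative values) is:
\begin{verbatim}
import numpy as np

# Size of the effective shift A_s -> A_s + k h_t^2 A_t F / 2,
# Eq. (Ahscale); h_t, A_t, F below are illustrative assumptions.
k = 3.0 / (4.0 * np.pi)**2           # ~ 0.019
h_t, A_t, F = 1.0, 1000.0, 5.0       # F is O(few) for TeV stops
print(k * h_t**2 * A_t * F / 2.0)    # ~ 47 GeV shift in A_s
\end{verbatim}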
Another limit, the $s$-decoupling limit, $s\to\infty$ while keeping $\mu_{\rm eff} ={h_s s\over \sqrt 2} \sim {\cal O}(\text{EW})$, gives similar EW/TeV scale Higgs boson masses for all models. In this limit there is little mixing among Higgs states. For the NMSSM and UMSSM, two CP-even Higgs bosons correspond to the MSSM Higgs states, while the remaining Higgs boson is dominantly singlet, with the mass ordering depending on $A_s, \kappa, A_\kappa$ and $g_{1'}$. In the n/sMSSM, the lightest Higgs boson has vanishing mass and is singlet dominated, while $H_2$ and $H_3$ correspond to the MSSM Higgs bosons. Mass expressions in this limit can be found in Appendix \ref{apx:masses}. The Higgs boson that is dominantly singlet couples weakly to MSSM particles.
The strength of a particular Higgs boson, $H_i$, coupling to fields in the MSSM may be quantified as the MSSM fraction
\begin{equation}
\xi^{H_i}_{\text{MSSM}} = \sum_{j=1}^{2} (R^{ij}_{+})^2.
\end{equation}
This quantity is not to be confused with the scaled $ZZH_i$ coupling $\xi_{ZZH_i}$. Since $R$ is orthogonal, a sum rule exists
\begin{equation}
\sum_{i=1}^{3} \xi_{\text{MSSM}}^{H_i} = 2,
\label{eq:summssmfrac}
\end{equation}
which implies that at most two CP-even Higgs bosons can be MSSM-like; equal-mixing scenarios have $\xi^{H_{1,2,3}}_{\text{MSSM}}=\frac{2}{3}$. A similar quantity can be defined for the CP-odd Higgs bosons. In the UMSSM, and in the limits of Table \ref{tbl:limits} for the NMSSM and n/sMSSM, the MSSM fraction of the massive CP-odd Higgs boson is
\begin{equation}
\xi^{A_2}_{\text{MSSM}} = \left(1+\frac{v^2}{s^2} \cos^2 \beta \sin^2 \beta\right)^{-1},
\end{equation}
consistent with the $s$-decoupling limit.
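The MSSM fraction and its sum rule are easily checked numerically; in the sketch below (Python with NumPy), a random orthogonal rotation stands in for $R_{+}$, and the fractions sum to 2 as required by Eq. (\ref{eq:summssmfrac}):
\begin{verbatim}
import numpy as np

def mssm_fractions(R_plus):
    """xi_MSSM for each CP-even state: sum of the squared
    doublet (first two) components of each eigenvector row."""
    return np.sum(R_plus[:, :2]**2, axis=1)

rng = np.random.default_rng(1)
R_plus, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal
xi = mssm_fractions(R_plus)
print(xi, xi.sum())   # sum rule: totals 2 for an orthogonal R
\end{verbatim}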
Since the trace is invariant under rotations, a mass-squared sum rule exists. The limits in Table \ref{tbl:limits} lead to a common sum rule of the tree-level Higgs masses:
\begin{equation}
Tr \left[{\cal M}_{+}^0-{\cal M}_{-}^0\right] = M^2_{H^0_1}+M^2_{H^0_2}+M^2_{H^0_3} - M^2_{A_2} = M^2_Z,
\end{equation}
where the $Z$ mass is given by $M_Z^2 = \frac{G^2}{4} (v_d^2 + v_u^2)$. The sum rule for the MSSM is realized by taking the $s$-decoupling limit in the n/sMSSM, and additionally requires $g_{1'}\to 0$ in the UMSSM and $\kappa \to 0$ in the NMSSM. In the CP-even and CP-odd mass-squared matrices of Section \ref{sect:massmtx}, we see that the upper left $2 \times 2$ submatrix is that of the MSSM while the third column/row vanishes. Then the decoupled states $H_1$ and $A_1$ ($Z'$ for the UMSSM) become massless at tree-level and the Higgs mass-squared sum rule becomes MSSM-like:
\begin{equation}
M^2_{h^0}+M^2_{H^0} - M^2_{A} = M^2_Z,
\end{equation}
where $h^0 = H^0_2$ and $H^0 = H^0_3$ are the usual MSSM CP-even Higgses.
\subsection{Distinguishing Characteristics}
\label{sect:distresults}
The introduction of the singlet Higgs field in MSSM extensions produces Higgs boson properties that are distinct from those of the MSSM. Each model has additional defining characteristics that may be used to distinguish one model from another. In this section, we give bounds on the lightest CP-even Higgs mass and provide expressions for the masses utilizing the hierarchy of matrix elements given in Appendix \ref{apx:masses}. We scan over relevant model parameters to determine their effects on the Higgs masses. Finally, we summarize the results of the complete random and grid scans.
\subsubsection{Lightest CP-even Higgs Mass Bounds}
\label{sect:massbounds}
In any supersymmetric theory that is perturbative at the GUT scale, the lightest Higgs boson mass has an upper limit \cite{ref:upperlimhiggs}. Since the CP-even mass-squared matrix ${\cal M}_{+}$ is real and symmetric, an upper bound on the smallest mass-squared eigenvalue may be obtained from the Rayleigh quotient
\begin{equation}
M_{H_1}^2 \le \frac{u^T {\cal M}_{+} u}{u^T u},
\end{equation}
where $u$ is an arbitrary nonzero vector. With the choices
\begin{eqnarray}
u^T &=& (\cos \beta, \quad \sin \beta) \qquad \mbox{~~~~~[MSSM]} \nonumber \\
&=& (\cos \beta, \quad \sin \beta, \quad 0) \qquad \mbox{[extended models]}
\end{eqnarray}
the well-known upper bounds on the lightest Higgs mass-squared from the mass-squared matrices of Eq. (\ref{eq:cpetree1}-\ref{eq:cpetree2}) and (\ref{eq:cpemassmtx1}-\ref{eq:cpemassmtx2}) are as follows.
(i) MSSM \cite{MSSM_Higgs}:
\begin{equation}\label{eq:masslimits1}
M^2_{H^0_1} \le M^2_Z \cos^2 2\beta +\tilde {\cal M}^{(1)},
\end{equation}
where
\begin{equation}
\tilde {\cal M}^{(1)} = ({\cal M}^{(1)}_{+})_{11}\cos^2\beta +({\cal M}^{(1)}_{+})_{22}\sin^2\beta + ({\cal M}^{(1)}_{+})_{12}\sin 2\beta.
\end{equation}
(ii) NMSSM, n/sMSSM, and Peccei-Quinn limits \cite{Drees:1988fc}:
\begin{equation}
M^2_{H^0_1} \le M^2_Z \cos^2 2\beta + \frac{1}{2} h_s^2 v^2 \sin^2 2\beta+\tilde {\cal M}^{(1)}.
\end{equation}
(iii) UMSSM \cite{UMSSM_Higgsbound}:
\begin{equation}\label{eq:masslimits2}
M^2_{H^0_1} \le M^2_Z \cos^2 2\beta + \frac{1}{2} h_s^2 v^2 \sin^2 2\beta + g_{1'}^2 v^2 (Q_{H_d} \cos^2 \beta + Q_{H_u} \sin^2 \beta)^2+\tilde {\cal M}^{(1)}.
\end{equation}
Although the upper bounds change with the choice of the $u$ vector, these results indicate that the extended models have larger upper bounds on the lightest Higgs mass, due to the contribution of the singlet scalar. The UMSSM can have the largest upper bound, due to the quartic coupling contribution from the additional gauge coupling, $g_{1'}$, of the $U(1)'$ extension. In the MSSM, large $\tan \beta$ values are suggested by the conflict between the experimental lower bound and the theoretical upper bound on $M_{H_1}$. Since the extended models contain additional terms which relax the theoretical bound, they allow smaller values of $\tan \beta$ than the MSSM.
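The Rayleigh-quotient bound can be verified directly for any candidate mass-squared matrix; the sketch below (Python with NumPy, with an arbitrary illustrative matrix) compares the bound obtained with $u=(\cos\beta,\sin\beta,0)$ against the exact smallest eigenvalue:
\begin{verbatim}
import numpy as np

def rayleigh_bound(M, beta):
    """Upper bound on the lightest CP-even mass from the Rayleigh
    quotient with u = (cos beta, sin beta, 0), as in the text."""
    u = np.array([np.cos(beta), np.sin(beta), 0.0])
    return np.sqrt(u @ M @ u / (u @ u))

# Arbitrary illustrative CP-even mass-squared matrix in GeV^2:
M_plus = np.array([[ 1.2e4, -3.0e3,  1.0e3],
                   [-3.0e3,  2.0e4,  2.5e3],
                   [ 1.0e3,  2.5e3,  9.0e5]])
beta = np.arctan(10.0)
M_H1 = np.sqrt(np.linalg.eigvalsh(M_plus)[0])
print(M_H1, "<=", rayleigh_bound(M_plus, beta))  # bound always holds
\end{verbatim}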
\subsubsection{Numerical Evaluation of masses}
\label{sect:numeval}
\emph{a. CP-even Higgs Masses}
In Fig. \ref{fig:modelscans} we show the variation of the lightest Higgs mass in the different models as functions of $s$ and $\tan \beta$ with the other parameters fixed. (Similar plots for the heavier states are shown in
Appendix \ref{apx:addparm}.) We only apply the theoretical constraints to these spectra to see the general trends of the models before experimental constraints are applied. The UMSSM would fail to pass the $\alpha_{ZZ'}$ constraint in most of the plotted range of $s$.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{xcpep01.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{xcpop01.ps}
(a)\hspace{0.48\textwidth}(b)
\includegraphics[angle=-90,width=0.49\textwidth]{xcpep02.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{xcpop02.ps}
(c)\hspace{0.48\textwidth}(d)
\caption{Lightest CP-even and lightest CP-odd Higgs masses vs. $\tan\beta$ and $s$ for the MSSM, NMSSM, n/sMSSM, UMSSM, and the PQ limits. Only the theoretical constraints are applied with $s = 500$ GeV (for $\tan\beta$-varying curves), $\tan\beta = 2$ (for $s$-varying curves). Input parameters of $A_s = 500$ GeV, $A_t = 1$ TeV, $M_{\tilde Q} = M_{\tilde U} = 1$ TeV, $\kappa = 0.5$, $A_\kappa = -250$ GeV, $M_{\rm n}=500$ GeV, $\xi_F = -0.1$, $\xi_S = -0.1$, $h_s = 0.5$, $\theta_{E_6} = -\tan^{-1}\sqrt{5\over3}$, and $Q = 300$ GeV, the renormalization scale, are taken. The $U(1)_{PQ}$ limit allows one massive CP-odd Higgs whose mass is equivalent to that of the UMSSM CP-odd Higgs.}
\label{fig:modelscans}
\end{center}
\end{figure}
Note that the MSSM does not conform to the behavior of the extended models in the CP-even sector. Since the MSSM contains only two CP-even Higgs bosons, the heavier of the two mass-squares increases with $\mu_{\rm eff}A_s$ at tree-level, similar to the CP-odd and charged Higgs masses. Since we fix $h_s=0.5$, this Higgs mass-squared scales as the singlet VEV, $s$. The radiative corrections do not contribute a significant $s$ dependence to the mass-squared matrix. The tree-level dependence on $s$ prevents a level crossing between the $H_1$ and $H_2$ states. However, in the extended models there are three CP-even Higgs bosons. Level crossings are possible here as there is a Higgs boson of intermediate mass: see Fig. \ref{fig:modelscans}(c). We also see a significant difference between the MSSM and the extended-MSSM models in the $\tan \beta$ scan, which is expected since a moderate value of $s=500$ GeV is chosen. The terms that differentiate the matrix elements in the extended models from that of the MSSM are not negligible at this value of $s$, giving different $s$-dependences of the Higgs mass.
The MSSM $\tan \beta$ scan shows a dip in the Higgs mass at $\tan \beta = 1$, with a maximal mass approached as $\tan \beta$ increases. However, the extended-MSSM models show a decrease in mass beyond $\tan\beta \sim 2$--$4$ due to the level crossing with the additional moderate-mass CP-even Higgs present in these models. The dip in the masses at $\tan \beta\sim 1$ for the UMSSM and n/sMSSM is not a consequence of a level crossing, but of the mass dependence on $\tan \beta$. When $\tan \beta = 1$, the upper bound on the lightest CP-even Higgs mass decreases, as seen in Eq. (\ref{eq:masslimits1}-\ref{eq:masslimits2}). Overall, we see substantial differences in the spectra of the lightest Higgs in the extended models compared to the MSSM.
\emph{b. CP-odd Higgs Masses}
Since only one massive CP-odd Higgs boson exists in the MSSM, UMSSM and the Peccei-Quinn limit of the extended models, the CP-odd masses generally behave the same over both scans and conform to the general scaling $M_{A_2}^2 \sim A_s s(\cot \beta+\tan\beta)$. (The exact expression in these cases is given by Eq. (\ref{eq:maradcorr}), with the first term omitted for the MSSM.) Further, we note that the CP-odd mass in the Peccei-Quinn limit is identical to that of the UMSSM, which may be understood by the absence of mixings and the resulting mass splittings that occur in the MSSM or other extended models. However, the MSSM mass approaches the PQ/UMSSM mass as $s$ increases, a result consistent with the $s$-decoupling limit.
The lightest CP-odd Higgs in the n/sMSSM and the NMSSM, however, does not share the similarities of the other models. In these models, there are two CP-odd Higgs bosons, resulting in different dependences on $s$ and $\tan \beta$. Mixing effects tend to lower the lightest Higgs masses in these models, with interesting phenomenological consequences. These are discussed further in Section \ref{sect:collpheno}.
\emph{c. Higgs Mass Ranges}
We summarize the available ranges found in the grid and random scans of the lightest CP-even, CP-odd and charged Higgs boson masses that satisfy the applied constraints in Fig. \ref{fig:mhrange}. For each model, the values of the maximum and minimum masses are given as well as the reason for the bounds.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=.49\textwidth]{massrange-even.ps}
\includegraphics[angle=-90,width=.49\textwidth]{massrange-odd.ps}
\includegraphics[angle=-90,width=.49\textwidth]{massrange-chg.ps}
\caption{Mass ranges of the lightest CP-even and CP-odd and the charged Higgs bosons in each extended-MSSM model from the grid and random scans. Explanation of extremal bounds and their values are provided for each model. Explanations are Th. - theoretical bound met, value not sensitive to limits of the scan parameters; Scan - value sensitive to limits of the scan parameters; State Crossing - value has maximum when crossing of states occurs (specifically for $A_1$ and $A_2$ in the NMSSM and n/sMSSM); LEP - experimental constraints from LEP; $\alpha_{ZZ'}$ - experimental constraints in the UMSSM on the $Z-Z'$ mixing angle.}
\label{fig:mhrange}
\end{center}
\end{figure}
The lightest CP-even and CP-odd and the charged Higgs boson mass ranges differ significantly among the models. The CP-even Higgs mass range is quite restricted in the MSSM and satisfies the upper theoretical mass bound and lower experimental bound from LEP discussed in Section \ref{sect:constraints}. The upper limits for the CP-even Higgs masses in the extended models saturate the theoretical bounds and are extended by $30-40$ GeV compared to the MSSM, while the upper limits on the lightest CP-odd Higgs masses are artificial in the MSSM and UMSSM, as they change with the ranges of scan parameters such as $A_s$ and $\tan \beta$. The lower limits on the lightest CP-odd masses in the MSSM and UMSSM reflect the LEP limits on $M_{A_2}$; the UMSSM is similar to the MSSM since $s$ is required to be large by the strict $\alpha_{ZZ'}$ constraint, decoupling the singlet state and recovering a largely MSSM Higgs sector. However, fine tuning the Higgs doublet charges under the $U(1)'$ gauge symmetry and $\tan \beta$ allows the $Z-Z'$ mixing constraint on $s$ to be less severe, and can result in a lower Higgs mass with respect to the MSSM. These instances, along with the values $A_s=A_t=0$ GeV, allow very low CP-even Higgs masses of ${\cal O}(1\text{ GeV})$ and a massless CP-odd state. Since these points are distinct from the range of masses typically found in the UMSSM, we do not show them in Fig. \ref{fig:mhrange} but simply note that they exist. The NMSSM and n/sMSSM may have a massless CP-odd state due to the global $U(1)$ symmetries discussed in Section \ref{sect:comchar}, while the upper limit on the lightest CP-odd Higgs mass depends on the specifics of the state crossing with the heavier state, $A_2$, whose mass is scan-dependent. In these models, the CP-odd masses extend to zero since the mixing of two CP-odd states allows one CP-odd Higgs to be completely singlet and avoid the constraints discussed above.
The charged Higgs masses are found to be as low as 79 GeV in the scans, in agreement with the imposed experimental limit of 78.6 GeV. In these cases where $M_{H^\pm}\sim 80$ GeV, the charged Higgs is often the lightest member of the Higgs spectrum. However, these cases require fine tuning to obtain values of $\mu_{\rm eff}>100$ GeV \cite{cpnsh}. The upper limit of the charged Higgs mass is dependent on the range of the scan parameters as seen in Eq. (\ref{eq:chghiggs}). The discrepancy in the upper limit of the charged and CP-odd Higgs mass between the UMSSM and MSSM is a consequence of a lower $\mu_{\rm eff}$ in the UMSSM, resulting in a lower $M_Y$. Large values of $\mu_{\rm eff}$ are more fine-tuned in the UMSSM than the MSSM since the additional gauge, $g_{1'}$, and Higgs, $h_s$, couplings often drive $M_{H_1}^2<0$. Consequently, CP-odd and charged Higgs masses comparable to the higher MSSM limit are not present in the scan. The upper bound on the charged Higgs mass in the NMSSM is relaxed due to the additional parameter of the model.
\emph{d. Higgs Boson Searches}
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{zzhcoup.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{ma-vs-cosbma.ps}
(a)\hspace{0.48\textwidth}(b)
\caption{(a) LEP limit \cite{Sopczak:2005mc} on $\xi_{ZZH_i} = \left({g_{ZZH_i} / g^{SM}_{ZZh}}\right)^2 = \Gamma_{Z \to Z H_i}/\Gamma^{SM}_{Z\to Zh}$, the scaled $ZZH_i$ coupling in new physics, vs. the light Higgs mass. The solid black curve is the observed limit at 95\% C.L. Points falling below this curve pass the $ZZH_i$ constraint. (b) $\cos^2(\beta-\alpha)$ vs. $M_{A_2}$ in the MSSM. The hard cutoff shown by the solid green line at $M_{A_2} = 93.4$ GeV is due to the constraint on $\sigma(e^+ e^- \to A_i H_1)$ discussed in Section \ref{sect:dirlimits}.}
\label{fig:lep}
\end{center}
\end{figure}
The focus of Higgs searches is most commonly the lightest CP-even Higgs boson. In the models that we consider, the lightest CP-even Higgs boson can have different couplings than in the SM. In Fig. \ref{fig:lep}a, we show the present limits from LEP on the scaled $ZZH_{i}$ coupling.\footnote{For clarity, in all the plots that follow we sample the passed points in the results from the random scans.} Mixing effects can lower the $ZZH_i$ coupling; in the MSSM, this occurs if $M_{A_2}$ is low, as seen in Fig. \ref{fig:lep}b, where the $ZZH_i$ coupling is lowest for $\cos^2(\beta-\alpha)=1$. However, an additional limit is placed on the mixing via the $e^+ e^- \to A_i H_1$ cross section discussed in Section \ref{sect:dirlimits}, eliminating low-mass CP-even Higgs bosons in the MSSM, as seen in Fig. \ref{fig:lep}b. In extended-MSSM models, additional mixing may occur with the singlet fields. Due to this mixing and the consequent evasion of the LEP limit on the $ZZH_i$ coupling, the lightest CP-even Higgs may then have a mass smaller than the SM Higgs mass limit. Indeed, light CP-even Higgs bosons in the UMSSM have been explored as explanations of the $2.3 \sigma$ and $1.7 \sigma$ excesses of Higgs events at LEP for masses of 98 GeV and 114.4 GeV, respectively \cite{ref:umssmlep}. This slight excess has also been studied in the NMSSM, where a light Higgs with a SM coupling to $ZZ$ decays to CP-odd pairs \cite{NMSSMlepxs}.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{Nmh-vs-xi.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{lnmh-vs-xi.ps}
(a)\hspace{0.48\textwidth}(b)
\includegraphics[angle=-90,width=0.49\textwidth]{umh-vs-xi.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{mh1-vs-xi.ps}
(c)\hspace{0.48\textwidth}(d)
\caption{Higgs masses vs. $\xi_{\text{MSSM}}$ in the (a) NMSSM, (b) n/sMSSM, (c) UMSSM and (d) the lightest CP-even Higgs of all extended models. The vertical line is the LEP lower bound on the MSSM (SM-like) Higgs mass. }
\label{fig:mh-vs-xi}
\end{center}
\end{figure}
The reduction in the CP-even Higgs mass in extended models can be seen in Fig. \ref{fig:mh-vs-xi}, where we plot the MSSM fraction versus the Higgs boson mass. When there is little mixing between the singlet and doublet Higgs fields, the MSSM limit is reached and the LEP bound applies, as seen by the MSSM cutoffs at $\xi_{\text{MSSM}} = 1$ and $M_{H_i} = 114.4$ GeV. A common feature of each model is a CP-even Higgs boson with a mass range concentrated just above the LEP SM mass limit shown by the dark-green vertical line. These Higgs bosons have a large MSSM fraction, for which the $ZZH_i$ coupling limit is effective in eliminating generated points. We note that there are cases where a Higgs boson with mass below 114.4 GeV but a relatively high MSSM fraction is allowed, due to a cancellation between the rotation matrix elements in Eq. (\ref{eq:zzh}). This cancellation permits the lightest MSSM Higgs boson to lie below the SM limit, and has been taken as a possible explanation of the Higgs signal excess \cite{Drees:2005jg}.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{lnmh-vs-xis.ps}
\caption{Higgs mass dependence on $\xi_S$ in the n/sMSSM. When $\xi_S\sim -0.1$, $H_2$ and $H_1$ switch content, allowing a light CP-even Higgs below the LEP limit.}
\label{fig:mhxis}
\end{center}
\end{figure}
By measuring the lightest Higgs boson couplings to MSSM fields, an estimate of the MSSM fraction may be obtained, providing important information on the singlet content. In the NMSSM, and especially the n/sMSSM, the lightest CP-even Higgs boson may have both a low MSSM fraction and a low mass, as seen in Fig. \ref{fig:mh-vs-xi}d. Since $\mu_{\rm eff}$ is fixed at the EW scale, the matrix elements $({\cal M}_{+})_{i3}$ are suppressed in the n/sMSSM at large $s$. This results in a low-mass CP-even Higgs boson with high singlet composition; the other Higgs states have a high MSSM fraction due to the sum rule in Eq. (\ref{eq:summssmfrac}). However, in the n/sMSSM, the existence of a low-mass CP-even Higgs boson depends on the value of $\xi_S$. In Appendix \ref{apx:lnmhexpr}, we show that the tree-level masses-squared of the singlet-dominated CP-even and CP-odd Higgs bosons in the n/sMSSM at large $s$ are
\begin{equation}
M_{H_1}^2 \sim M_{A_1}^2 \sim -{\sqrt 2 \xi_S M_{\rm n}^3\over s},
\end{equation}
which forces the parameter $\xi_S$ to be negative in this limit. Therefore, a largely singlet CP-even Higgs boson can have a mass lower than the LEP limit if
\begin{equation}
-\xi_S < {(114\text{ GeV})^2 s\over \sqrt 2 M_{\rm n}^3} \sim 0.1.
\end{equation}
In Fig. \ref{fig:mhxis}, we show the Higgs mass dependence on this parameter, which exhibits the crossing of states at $\xi_S = -0.1$.
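For orientation, the size of this bound over the scanned range of $s$ is easily evaluated (Python; $M_{\rm n}=500$ GeV as fixed in the scans, with representative values of $s$):
\begin{verbatim}
import numpy as np

# Numerical check of -xi_S < (114 GeV)^2 s / (sqrt(2) M_n^3):
M_n = 500.0                          # GeV, as fixed in the scans
for s in (500.0, 1000.0, 2000.0):    # representative singlet VEVs
    print(s, 114.0**2 * s / (np.sqrt(2.0) * M_n**3))
# -> roughly 0.04, 0.07, 0.15: of order 0.1 over the scanned s
\end{verbatim}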
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{Nmh-vs-s.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{lnmh-vs-s.ps}
(a)\hspace{0.48\textwidth}(b)
\includegraphics[angle=-90,width=0.49\textwidth]{umh-vs-s.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{mh1-vs-s.ps}
(c)\hspace{0.48\textwidth}(d)
\caption{Higgs masses vs. $s$ in the (a) NMSSM, (b) n/sMSSM, (c) UMSSM and (d) the lightest CP-even Higgs of all extended models. The vertical line is the LEP lower bound on the mass of the SM Higgs.}
\label{fig:mh-vs-s}
\end{center}
\end{figure}
In the UMSSM, the lightest Higgs mass is concentrated near the LEP limit with $\xi_{\text{MSSM}}$ near one, a direct consequence of the high-$s$ constraint placed by the strict $\alpha_{ZZ'}$ limit. This is also seen in Fig. \ref{fig:mh-vs-s}, where we plot the Higgs masses versus the singlet VEV. The lowest allowed point in the UMSSM has $s$ above $\sim 800$ GeV, compared to the other models, which allow $s$ to be as low as a few hundred GeV. By examining Figs. \ref{fig:mh-vs-xi}c and \ref{fig:mh-vs-s}c, we see that $M_{H_2}$ varies linearly with $s$ and is characteristically singlet dominated. Without the $\alpha_{ZZ'}$ constraint, the $H_1$ and $H_2$ states cross near $s\sim 400$ GeV. This constraint may be evaded in the fine-tuned cases discussed in Section \ref{sect:numeval}. At this point, the mass eigenstates switch content; below it, the lightest Higgs is dominantly singlet, has a mass below the LEP bound, and evades the $ZZH_i$ coupling constraint.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{Nmh-vs-tanb.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{lnmh-vs-tanb.ps}
(a)\hspace{0.48\textwidth}(b)
\includegraphics[angle=-90,width=0.49\textwidth]{umh-vs-tanb.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{mh1-vs-tanb.ps}
(c)\hspace{0.48\textwidth}(d)
\caption{Higgs masses vs. $\tan \beta$ in the (a) NMSSM, (b) n/sMSSM, (c) UMSSM and (d) the lightest CP-even Higgs of all extended models where the blue curve shows the theoretical MSSM mass limit with maximal mixing in the stop mass-squared matrix. }
\label{fig:mh-vs-tanb}
\end{center}
\end{figure}
The Higgs mass dependence on $\tan \beta$ has some interesting features, especially for the lightest Higgs. We show this dependence in Fig. \ref{fig:mh-vs-tanb} for all the Higgs bosons of each extended-MSSM model, and separately for the lightest Higgs in all the models considered. The lightest CP-even Higgs boson mass vs. $\tan \beta$ in each model, shown in Fig. \ref{fig:mh-vs-tanb}d, has a majority of generated points in the band $114.4\text{ GeV}\lesssim M_{H_1}\lesssim 135\text{ GeV}$ and $\tan \beta \gtrsim 2$. This is one of the salient features of the MSSM, as shown in Fig. \ref{fig:modelscans}a. The MSSM parameter space has a lower cutoff at $\tan \beta \sim 2$ due to the LEP limit of 114.4 GeV for a SM-like Higgs, shown in Fig. \ref{fig:mh-vs-tanb}d as the intersection of the theoretical MSSM Higgs mass limit in blue and the LEP limit in green. However, the extended-MSSM models may have values of $\tan \beta$ below this region. Since mixing effects can decrease the lightest Higgs mass and thereby satisfy the LEP bounds, a strict bound on $\tan \beta$ cannot be given. Additionally, the increase in the Higgs mass over the MSSM theoretical limit shown in Section \ref{sect:massbounds} can permit low-$\tan \beta$ scenarios with masses above the LEP limit.
Among these models, the heaviest CP-odd Higgs state follows the same dependence on $\tan \beta$ that was noted above in Section \ref{sect:numeval}. The heaviest CP-even Higgs and charged Higgs bosons also follow this trend with the charged Higgs boson mass having the same $\tan \beta$ dependence as the CP-odd Higgs mass, see e.g. Eq. (\ref{eq:chghiggs}). The heaviest CP-even and CP-odd Higgs masses are approximately the same even after radiative corrections. An explanation is provided by the mass-squared sum rules that each model obeys, namely
\begin{equation}
\sum_i M_{H_i}^2 - \sum_j M^2_{A_j} = M^2_Z + M_{xMSSM}^2 + \delta M^2,
\label{eq:sumrule1}\end{equation}
where the sums run over the massive Higgs bosons and $M_{xMSSM}$ is a model-dependent mass parameter with values
\begin{eqnarray}
M_{NMSSM}^2 &=& 2 \kappa (-h_s v_d v_u + s (\sqrt{2} A_\kappa + s \kappa)),\\
M_{n/sMSSM}^2 &=& 0,\\
M_{UMSSM}^2 &=& M_{Z'}^2.
\label{eq:sumrule2}
\end{eqnarray}
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=.49\textwidth]{sum-vs-tanb.ps}
\includegraphics[angle=-90,width=.49\textwidth]{sum-vs-At.ps}
(a)\hspace{0.48\textwidth}(b)
\caption{Radiative corrections, $\delta M \equiv (\delta M^2)^{1/2}$, to the Higgs mass-squared sum rule vs. (a) $\tan \beta$ and (b) $A_t$. The radiative corrections introduce a deviation from the sum rule of at most ${\cal O}(100 \text{ GeV})^2$ over most of the range of the scan. A larger deviation is seen at low $\tan \beta$ due to the larger top Yukawa coupling there.}
\label{fig:sumrule}
\end{center}
\end{figure}
The term $\delta M^2$ in Eq. (\ref{eq:sumrule1}) is due to the radiative corrections and is given by
\begin{equation}
\delta M^2 = Tr\left[ {\cal M}^1_{+} - {\cal M}^1_{-}\right],
\end{equation}
which gives an estimate of the effect of the radiative corrections on the Higgs masses. Note that the CP-odd radiative corrections, the ${\cal F}$ terms, are cancelled by equivalent terms in the CP-even mass-squared matrix. The radiative corrections alter the sum rule by at most ${\cal O}(100 \text{ GeV})^2$ over most of the scanned range, as seen in Fig. \ref{fig:sumrule}, where we plot the shift versus both $\tan \beta$ and $A_t$.
The radiative correction contributions to the sum rule are largest for large $A_t$ and small $\tan \beta$. Since the top quark Yukawa coupling increases when $\tan \beta$ is small, the radiative corrections are enhanced at small $\tan \beta$, causing larger deviations from the sum rule. Since radiative corrections only affect the sum rule by ${\cal O}(100\text{ GeV})$, any high mass CP-even Higgs boson contribution must be cancelled by a CP-odd Higgs of similar mass.
\section{Collider Phenomenology}
\label{sect:collpheno}
Higgs boson decays are important to consider as they affect signals at colliders. Both production and decay modes are relevant in determining whether a given model yields detectable Higgs physics. While Higgs searches have been addressed for the NMSSM \cite{ref:studies,ref:Nmssmlhc}, a side-by-side comparison of the NMSSM, n/sMSSM, and UMSSM has not yet been made. In the above parameter scans, we calculate the partial decay widths relevant to production and the branching fractions of various important decay modes in these models.
\subsection{Higgs Production}
At hadron colliders the dominant production of the lightest Higgs boson in the SM proceeds through $gg$ fusion and/or Weak Boson Fusion (WBF). The Higgs production cross-sections are directly related to Higgs decay widths when the decay channels are kinematically accessible at the Higgs mass. Considerable effort has been put into calculating Higgs decays beyond leading order \cite{ref:higgsdecay}. In the SM, MSSM, and NMSSM, numerical codes have been implemented to calculate these widths precisely \cite{ref:hdecay,ref:nmhdecay}. We calculate the partial decay widths of $H \to gg$, $WW$, and $ZZ$ in each model via HDECAY \cite{ref:hdecay} with the SM Higgs couplings modified to reflect the model of interest. Higgs decay to $gg$ occurs via quark and squark loops and is calculated at NLO. However, to a good approximation, the loops involving heavy squarks are suppressed \cite{ref:gunion86}. The large squark mass approximation is justified for our assumed values of $M_{\widetilde Q}=M_{\widetilde U}=M_{\widetilde D}=1$ TeV. Therefore, we only consider the SM quark loops in the $H\to gg$ calculation. Decays to weak boson pairs are calculated at tree-level as the radiative corrections to the width are negligible.
\begin{figure}[htbp]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{wwwidth.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{zzwidth.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{ggwidth.ps}
\caption{Decay widths for $WW^*$, $ZZ^*$, and $gg$ in the MSSM and extended-MSSM models. Curves denote the corresponding SM width. For clarity, not all points generated are shown.}
\label{fig:prod}
\end{center}
\end{figure}
In Fig. \ref{fig:prod}, we plot the partial widths of the lightest Higgs boson for the SM, MSSM and extended-MSSM models. Since the n/sMSSM and, to a lesser extent, the NMSSM contain a very light Higgs with high singlet composition, the decay widths of this state to SM particles are highly suppressed. However, from Fig. \ref{fig:mh-vs-xi}, the second lightest CP-even Higgs boson in the n/sMSSM has a high MSSM fraction and often has a mass comparable to the lightest Higgs bosons in other models. Hence, we also show the decay width of the second lightest Higgs boson in the n/sMSSM, as it is characteristically similar to the lightest Higgs boson of the MSSM. The decay widths of the lightest Higgs in the MSSM show a large spread with respect to the SM, associated with low $A_2$ mass: see Fig. \ref{fig:lep}b. When $M_{A_2} \gg M_Z$, the masses and couplings of the lightest CP-even Higgs approach those of the SM Higgs \cite{ref:barger-phillips-stange}. In the UMSSM, the $\alpha_{ZZ'}$ limit forces the model to be near the $s$-decoupling limit, resulting in masses, couplings, and decay widths that are close to those of the MSSM. Consequently, the UMSSM decay widths lie directly on the SM width in Fig. \ref{fig:prod}.
In the models considered, the $H_1$ mass is typically below the $WW$ and $ZZ$ thresholds. Therefore, the off-shell $WW^*$ and $ZZ^*$ decay widths are evaluated in the MSSM and its extensions\footnote{In this case the decay width cannot be translated directly into production rates since doing so requires the transverse and longitudinal polarizations of the $W$-bosons to be treated separately. However, the gauge coupling is equivalent in either case, and its scaling contains the suppression of the production rate.}. For the very light Higgs boson of the n/sMSSM, these decays involve two off-shell gauge bosons, resulting in a strong kinematic suppression of the decay rates. In all the models considered, the $WW^*$ and $ZZ^*$ partial widths are bounded above by those of the SM. This is a consequence of the complementarity of the couplings of $H_1$ and $H_2$ to gauge fields in the MSSM. The gauge couplings in the MSSM follow the relation
\begin{equation}
(g^{SM}_{VVh})^2=(g^{MSSM}_{VVH_1})^2+(g^{MSSM}_{VVH_2})^2.
\label{eq:coupsumrule}
\end{equation}
More sum rules exist in the MSSM and can be found in Ref. \cite{ref:gunionsum}. In extended-MSSM models the gauge couplings are related to the SM couplings by
\begin{equation}
g_{VVH_i}=g^{SM}_{VVh} (R_{+}^{i1} \cos\beta +R_{+}^{i2} \sin\beta),
\label{eq:vvhcoup}
\end{equation}
where $g^{SM}_{ZZh}= {i g_2 M_Z g_{\mu \nu}\over \cos \theta_W}$ and $g^{SM}_{WWh}= i g_2 M_W g_{\mu \nu}$. The sum rule in Eq. (\ref{eq:coupsumrule}) generalizes to one involving three Higgs couplings. Therefore, the coupling of the lightest Higgs boson to weak bosons in the MSSM and its extensions is always reduced compared to the SM couplings. LEP constraints require the $ZZH_i$ coupling to be below the SM coupling when the Higgs mass is below the 114.4 GeV limit. Associated production $q \bar q \to V^* \to V H_i$ for SM Higgs bosons can be important at the Tevatron and the LHC for low Higgs masses \cite{ref:lhcprodxsect,ref:sugrawgc}. The corresponding production cross section can be scaled from the SM calculation by the $VVH_i$ coupling in Eq. (\ref{eq:vvhcoup}).
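As a sketch of how Eq. (\ref{eq:vvhcoup}) rescales production, the Python fragment below computes the ratios $g^2_{VVH_i}/(g^{SM}_{VVh})^2$ for an orthogonal CP-even mixing matrix $R_+$; the mixing angles chosen are arbitrary illustrations, and the printed sum tends to unity, reflecting the generalized sum rule.
\begin{verbatim}
# VVH_i coupling suppression factors from Eq. (vvhcoup); an orthogonal
# 3x3 mixing matrix R is built from two arbitrary illustrative angles.
import numpy as np

def givens(i, j, theta, n=3):
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

def vvh_ratios(R, tan_beta):
    beta = np.arctan(tan_beta)
    # (R_+^{i1} cos(beta) + R_+^{i2} sin(beta))^2 rescales sigma(V H_i)
    return (R[:, 0]*np.cos(beta) + R[:, 1]*np.sin(beta))**2

R = givens(0, 2, 0.3) @ givens(0, 1, 0.15)  # small singlet/doublet mixing
r = vvh_ratios(R, tan_beta=5.0)
print(r, r.sum())   # the ratios sum to 1 for orthogonal R
\end{verbatim}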
The $H_1\to gg$ partial width governs Higgs boson production via $gg$ fusion at hadron colliders. We see in Fig. \ref{fig:prod} that the $gg$ partial decay width is typically suppressed in the n/sMSSM for a low mass of the lightest Higgs boson since it is dominantly singlet. However, there is a trade-off in the production cross-section between smaller $\Gamma(H \to gg)$ and the kinematic enhancement from a lighter $M_{H_1}$ whose interplay is beyond the scope of this paper. The lightest Higgs in the NMSSM and MSSM and the second lightest Higgs in the n/sMSSM have decay widths to $gg$ that may be either enhanced or suppressed by a few orders of magnitude depending on the Higgs coupling to the internal quarks and their interferences. However, the lightest Higgs in the UMSSM and the MSSM in the limit of a large CP-odd mass shows no significant deviations from the SM $h \to gg$ decay width.
\subsection{Decay Branching Fractions}
\label{sect:decaybf}
Specific decay modes are important for identifying the Higgs boson at colliders. We calculate the contributions of $b \bar b$, $c\bar c$, $s \bar s$, $\tau^+ \tau^-$, $\mu^+ \mu^-$, $W W^*$, $Z Z^*$, and $g g$ to the total decay width of the lightest or MSSM-like Higgs boson in each model using HDECAY after modifying the corresponding Higgs couplings. In addition, non-SM decays including $\chi^0_1 \chi^0_1$ and $A_i A_i$ are calculated since these states are often quite light in the extended-MSSM models. The decays to $\gamma \gamma$ and $\gamma Z$ are also calculated with loops involving quarks, $W^\pm$-bosons, charged Higgs bosons, charginos, and squarks (which decouple for sufficiently large squark mass).
\begin{figure}[h]
\begin{center}
\includegraphics[angle=-90,width=0.45\textwidth]{wwbf.ps}
\includegraphics[angle=-90,width=0.45\textwidth]{zzbf.ps}
\includegraphics[angle=-90,width=0.45\textwidth]{bbbf.ps}
\includegraphics[angle=-90,width=0.45\textwidth]{tautaubf.ps}
\includegraphics[angle=-90,width=0.45\textwidth]{gamgambf.ps}
\includegraphics[angle=-90,width=0.45\textwidth]{Zgambf.ps}
\caption{Branching fractions for various modes in the MSSM and extended-MSSM models. Curves denote SM branching fractions.}
\label{fig:decays}
\end{center}
\end{figure}
The branching fractions of representative decays to the SM particles $WW^*$, $ZZ^*$, $b\bar b$, $\tau^+ \tau^-$, $\gamma \gamma$, and $Z \gamma$ are presented in Fig. \ref{fig:decays} for the lightest CP-even Higgs boson in the MSSM, NMSSM and UMSSM, either $H_1$ or $H_2$ in the n/sMSSM, and $h$ in the SM.
Note that the branching fractions may be larger in the SUSY models than in the SM. For instance, in the NMSSM the branching fractions to $WW^*$ and $ZZ^*$ can be larger than the corresponding SM branching fractions, as seen in Fig.~\ref{fig:decays}. These enhancements are due to the smaller total decay width of the Higgs boson rather than an enhancement of the particular partial width, and may aid in the $H \to W^*W^* \to l \bar\nu jj$ and $l \bar\nu \bar l \nu$ discovery modes at the Tevatron \cite{Han:1998ma}. Since the dominant decay mode is typically to $b\bar b$ in the mass range $2 m_b\lsim M_H\lsim 140 \text{ GeV}$, any decrease in the $b\bar b$ partial width reduces the total Higgs width. The Higgs boson couplings to fermions are related to the SM values by
\begin{equation}
g_{ddH_i}=g^{SM}_{ffh} {R_{+}^{i1}\over \cos\beta},\qquad g_{uuH_i}=g^{SM}_{ffh} {R_{+}^{i2}\over \sin\beta},
\end{equation}
where $g^{SM}_{ffh} = -{i g_2 m_f\over 2 M_W}$. Hence, either suppression or enhancement of the partial decay widths to fermions is possible, but not to an arbitrary degree, since in the MSSM the rotation matrices and $\tan \beta$ obey the tree-level relation
\begin{equation}
\sin 2\alpha={M_H^2+M_h^2\over M_h^2-M_H^2} \sin 2 \beta,
\end{equation}
where $\sin \alpha = R^{i1}_{+}$ and $\cos \alpha = R^{i2}_{+}$ in the MSSM. A similar expression holds for the extended models, which restricts the rotation matrix values. As noted earlier, the couplings converge to SM couplings just as they do for the MSSM in the $s$-decoupling limit \cite{ref:barger-phillips-stange}.
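A short numerical sketch of the resulting restriction: assuming the MSSM identifications quoted above, the fragment below extracts $\alpha$ from the tree-level relation and returns the fermion coupling ratios $g_{ddH_1}/g^{SM}_{ffh}$ and $g_{uuH_1}/g^{SM}_{ffh}$; the input masses are illustrative.
\begin{verbatim}
# Tree-level MSSM mixing angle and fermion coupling ratios for the
# lighter CP-even state; input masses (GeV) are illustrative.
import math

def alpha_mssm(mh, mH, tan_beta):
    beta = math.atan(tan_beta)
    s2a = (mH**2 + mh**2)/(mh**2 - mH**2)*math.sin(2.0*beta)
    # asin branch appropriate for tan(beta) > 1
    return 0.5*math.asin(s2a)

def fermion_coupling_ratios(mh, mH, tan_beta):
    a, b = alpha_mssm(mh, mH, tan_beta), math.atan(tan_beta)
    return math.sin(a)/math.cos(b), math.cos(a)/math.sin(b)  # (down, up)

print(fermion_coupling_ratios(mh=115.0, mH=300.0, tan_beta=10.0))
# down-type coupling is enhanced in magnitude, up-type is close to SM
\end{verbatim}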
For a SM Higgs boson of mass below 150 GeV, $h\to \gamma\gamma$ is a significant mode for discovery at the LHC. The branching fraction for this mode can be enhanced significantly due to the modified fermion loops in the n/sMSSM and NMSSM for the same reasons that the $H\to gg$ decay width is enhanced, providing more opportunity for discovery. The Higgs couplings to $W$-bosons, charginos and $H^\pm$ also affect the $\gamma \gamma$ and $Z\gamma$ branching fractions, shown in Fig. \ref{fig:decays}. These couplings are reduced from their MSSM values. However, the reduced couplings may not necessarily lead to a rate suppression as interference effects can enhance the overall partial decay widths.
\subsubsection{Non-SM decays}
Decays to non-SM particles can also be important in the extended models. Since the lightest neutralino is a dark matter candidate, its production at colliders is of great interest to both the particle physics and cosmology communities \cite{dmatcolliders}. We show in Fig. \ref{fig:lspdecay}a the kinematic region where neutralino production via the decay $H_1 \to \chi^0_1 \chi^0_1$ is possible. The couplings and masses of the lightest neutralino have been investigated for the models considered here \cite{xMSSM_neutralino}, and $M_{\chi^0_1}$ may be quite small in the n/sMSSM \cite{xMSSM_neutralino,Menon:2004wv}. However, in the n/sMSSM most of the kinematic region is disfavored due to a large $\chi^0_1$ relic density \cite{xMSSM_neutralino}. This is indicated in Fig. \ref{fig:lspdecay}a below the red horizontal line at $M_{\chi^0_1} = 30$ GeV, which is the lower bound of $M_{\chi^0_1}$ allowed by the dark matter relic density constraint when only the annihilation through the $Z$ pole is considered. The $Z$ pole is the most relevant channel since the $\chi^0_1$ in this model is very light ($M_{\chi^0_1} \lsim 100$ GeV). In principle, other annihilation channels such as a very light Higgs may allow the lighter $\chi^0_1$ although the pole will be quite narrow \cite{NMSSM_neutralino}. Furthermore, in the secluded (sMSSM) version of the model,
it is possible that the $\chi^0_1$ considered here actually decays to a still lighter (almost) decoupled
neutralino, as discussed in Appendix \ref{apx:sumssmdecoup}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{mh1-vs-mn1-new.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{chichibf-new.ps}
(a)\hspace{0.48\textwidth}(b)
\caption{(a) $M_{H_i}$ vs. $M_{\chi^0_1}$ in all the models considered. Points falling below the blue line allow the decay of the lightest CP-even Higgs to two $\chi^0_1$. (b) Branching fraction of $H_i \to \chi^0_1 \chi^0_1$.}
\label{fig:lspdecay}
\end{center}
\end{figure}
The $\chi^0_1 \chi^0_1$ partial decay width is given by
\begin{equation}
\Gamma_{H_i \to \chi^0_1 \chi^0_1} = {1 \over 16 \pi M_{H_i}} \lambda^{1/2}\left(M^2_{\chi^0_1}/M^2_{H_i},M^2_{\chi^0_1}/M^2_{H_i}\right)\left(M_{H_i}^2-4 M_{\chi^0_1}^2\right) |C_{H_i \chi^0_1 \chi^0_1}|^2,
\end{equation}
where the $H_i \chi^0_1 \chi^0_1$ coupling is
\begin{eqnarray}
C_{H_i \chi^0_1 \chi^0_1} &=&\left[(g_2 N_{12}- g_1 N_{11}+ g_{1'} Q_{H_d} N_{16})N_{13}+\sqrt 2 h_s N_{14}N_{15}\right]R_{+}^{i1}\nonumber\\
&+&\left[(g_1 N_{11}-g_2 N_{12}+ g_{1'} Q_{H_u} N_{16})N_{14}+\sqrt 2 h_s N_{13}N_{15}\right]R_{+}^{i2}\nonumber\\
&+& \left[g_{1'}Q_S N_{16}N_{15}+\sqrt{2} h_s N_{13}N_{14}-\sqrt{2} \kappa N_{15}N_{15}\right] R_{+}^{i3},
\end{eqnarray}
where the expression for the NMSSM in Ref. \cite{Choi:2004zx} has been generalized to include the UMSSM, while the $H_i \chi^0_1 \chi^0_1$ coupling in the n/sMSSM does not contain any model-dependence. For a particular model, the irrelevant parameters are understood to be set to zero as in Eq. (\ref{eq:potential}). The lightest Higgs boson in the n/sMSSM can have a high branching fraction to the lightest neutralino, as seen in Fig. \ref{fig:lspdecay}b. In fact, in this model the $\chi^0_1 \chi^0_1$ branching fraction can be near 100\%.\footnote{If we do not assume gaugino mass unification and assume that $\mu_{\rm eff}$ is light and $M_{1'}$ is heavy, then $\chi^0_1$ is light and large $\chi^0_1 \chi^0_1$ branching fractions are possible in the UMSSM, similar to those found in the n/sMSSM. For constraints on $M_{\chi^0_1}$ in the MSSM from supernova data see Ref. \cite{Dreiner:2003wh}.} This invisible decay is seen as missing energy and makes Higgs searches difficult at the Tevatron or LHC. It has been explored in the MSSM \cite{Belanger:2001am} and more generally \cite{Davoudiasl:2004aj}.
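For orientation, a hedged Python sketch of the quoted partial width follows, assuming the two-argument K\"all\'en function $\lambda(x,y) = (1-x-y)^2 - 4xy$ and a user-supplied value of the coupling $C_{H_i \chi^0_1 \chi^0_1}$; the numbers used are illustrative, not outputs of the scans.
\begin{verbatim}
# Gamma(H_i -> chi0_1 chi0_1) per the width formula above; C is the
# dimensionless coupling C_{H_i chi chi}; masses in GeV, width in GeV.
import math

def kallen(x, y):
    # two-argument Kallen function (assumed convention)
    return (1.0 - x - y)**2 - 4.0*x*y

def width_H_to_chichi(mH, mchi, C):
    if mH <= 2.0*mchi:
        return 0.0                       # kinematically closed
    x = (mchi/mH)**2
    return (abs(C)**2/(16.0*math.pi*mH)) \
           * math.sqrt(kallen(x, x))*(mH**2 - 4.0*mchi**2)

# illustrative point: Gamma ~ O(10 MeV) for C = 0.1
print(width_H_to_chichi(mH=120.0, mchi=30.0, C=0.1))
\end{verbatim}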
In addition to decays to neutralino pairs, decays involving the lightest CP-odd Higgs bosons are relevant in the extended models. In Fig. \ref{fig:h1decay}a we show the possibilities for decays involving both $A_i$ and $H_j$, where $A_i$ is the lightest nonzero CP-odd state for each model. The kinematic regions where $Z\to A_i H_1$ and $H_j \to A_i A_i$ are allowed are indicated. Even though the $Z$ decay is possible in the n/sMSSM and NMSSM, the coupling is suppressed due to the low MSSM fraction of both $A_i$ and $H_1$ seen in Fig. \ref{fig:mh-vs-xi}b. Also shown is the crossing of states in the n/sMSSM where $H_2$ and $H_1$ switch content and hence their variation with $M_{A_i}$. The lightest CP-even and CP-odd Higgs masses in both the MSSM and n/sMSSM show a strong correlation below the LEP limit.
In the MSSM, this is evident from Fig. \ref{fig:lep}b where the reduced $ZZh$ coupling occurs when $\cos^2(\beta-\alpha)$ does not vanish, resulting in a lower CP-even Higgs mass. The n/sMSSM correlation is more clearly shown in Fig. \ref{fig:mhxis} where the crossing of states at $\xi_S \sim -0.1$ is discussed.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{mh1-vs-ma1.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{AAbf.ps}
(a)\hspace{0.48\textwidth}(b)
\caption{ (a) $M_{H_j}$ vs. $M_{A_i}$ showing the kinematics for decays in extended-MSSM models, where $A_i$ is the lightest nonzero CP-odd state for each model. $H_j \to A_i A_i$ decays are allowed for regions below the blue dashed line. Decays of $Z\to H_j A_i$ are allowed to the left of the dark green line. (b) $H\to A_i A_i$ branching fraction vs. Higgs mass. The n/sMSSM parameter $\xi_S$ is scanned with a higher density at low $|\xi_S|$ to allow low Higgs masses.}
\label{fig:h1decay}
\end{center}
\end{figure}
The $H\to A_i A_i$ mode can be significant if allowed kinematically \cite{ref:htoaa} and has been studied in the NMSSM \cite{Dermisek:2005ar} and in the general singlet extended MSSM via an operator analysis \cite{Chang:2005ht}. Since the lightest Higgs masses are small in the n/sMSSM at low $|\xi_S|$, we scan over this parameter with a higher density in this region to be near the PQ limit, which gives a lightest CP-odd Higgs boson of low mass. In Fig. \ref{fig:h1decay}a, all the points below the line $M_{H_j} = 2 M_{A_i}$ allow this decay; the corresponding partial width is given by
\begin{equation}
\Gamma(H_j\to A_i A_i) = {1 \over 16 \pi M_{H_j}}\lambda^{1/2}\left(M^2_{A_i}/M^2_{H_j},M^2_{A_i}/M^2_{H_j}\right) |C_{H_j A_i A_i}|^2,
\end{equation}
where the $H_j A_i A_i$ coupling,
\begin{equation}
C_{H_j A_i A_i}=P_{H_j}P_{A_i}P_{A_i} V,
\end{equation}
is determined by the projection operators that parallel the equivalent MSSM operators in Ref. \cite{Gunion:1989we}
\begin{eqnarray}
P_{H_j} &=&{1\over \sqrt 2}\left(R_{+}^{j1}{\partial\over\partial{\phi_d}}+R_{+}^{j2}{\partial\over\partial{\phi_u}}+R_{+}^{j3} {\partial\over\partial{\sigma}}\right),\\
P_{A_i} &=&{i\over \sqrt 2}\left(R_{-}^{i1} {\partial\over\partial{\varphi_d}}+R_{-}^{i2}{\partial\over\partial{\varphi_u}}+R_{-}^{i3} {\partial\over\partial{\xi}}\right),
\end{eqnarray}
where we evaluate the potential at the minimum $\phi_{u,d} =\sigma =\varphi_{u,d} =\xi= 0$, with the field values shifted to the minimum as in Eq. (\ref{eq:fieldexp}).
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{AAdecay.ps}
\includegraphics[angle=-90,width=0.49\textwidth]{lnh1a1-vs-tanb.ps}
(a)\hspace{0.48\textwidth}(b)
\caption{(a) $H_1 \to A_1 A_1$ decay width in the n/sMSSM vs. the product of the MSSM fractions of $H_1$ and $A_1$. (b) $M_{A_1}/M_{H_1}$ vs. $\tan \beta$ in the n/sMSSM. The horizontal line marks the production threshold of $H_1 \to A_1 A_1$.}
\label{fig:AAdecay}
\end{center}
\end{figure}
The $H_1\to A_2 A_2$ decay is not allowed in the UMSSM. This is because the $\alpha_{ZZ'}$ constraint often requires a large value of $s$, resulting in a CP-odd mass above the typical CP-even mass, see, e.g., Eq. (\ref{eq:pqlimcpo}). However, the $A_1 A_1$ decay is kinematically allowed in both the n/sMSSM and NMSSM. When allowed, this decay mode can be dominant in the NMSSM as seen in Fig. \ref{fig:h1decay}b. In the n/sMSSM, the $H_1\to A_1 A_1$ decay mode is suppressed since there is no tree-level coupling of three singlet Higgs states in this model and the $H_1$ and $A_1$ states are dominantly singlet. However, because $H_1$ is not completely a singlet, these decays can still be non-negligible. In Fig. \ref{fig:AAdecay}a, we show the $H_1\to A_1A_1$ decay width in the n/sMSSM versus the product of the MSSM fractions of $H_1$ and $A_1$. As both MSSM fractions vanish, $H_1$ and $A_1$ become singlet dominated, giving a vanishing decay width in the n/sMSSM due to the absence of a singlet self-coupling. Nevertheless, this partial decay width alone characteristically exceeds the total width of the SM Higgs boson as can be seen in Fig. \ref{fig:totwidth}.
The MSSM-like second lightest Higgs boson in the n/sMSSM also may have a large branching fraction to light $A$ pairs. Since $H_2$ has a large MSSM fraction, the coupling to the singlet $A_1$ pairs is not suppressed. In addition, kinematic suppression is absent due to the larger mass of $H_2$. For the lightest Higgs boson decay to $A_1 A_1$ in the n/sMSSM, low $\tan \beta$ is preferred, as shown in Fig. \ref{fig:AAdecay}b, where the horizontal line marks the production threshold. This decay also requires a low $A_1$ mass and results from the near-Peccei-Quinn limit when $\xi_S \to 0$. The low $\tan \beta$ preference is a result of the larger $H_1$ mass in this region. This enhancement is suggested in the one-parameter scans shown in Figs. \ref{fig:modelscans}a and \ref{fig:modelscans1}, where the lightest Higgs mass is peaked at low $\tan \beta$ due to the crossing of Higgs states. In contrast, while the NMSSM's lightest Higgs mass is maximal at low $\tan \beta$, a sharp drop as $\tan \beta$ is increased is not present, yielding little to no correlation of $\tan \beta$ with the existence of this decay mode.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-90,width=0.49\textwidth]{totalwidth.ps}
\caption{Total decay width for each model. Large enhancements with respect to the SM are largely due to the decays to $A_i A_i$ and $\chi^0_1 \chi^0_1$. }
\label{fig:totwidth}
\end{center}
\end{figure}
Finally, we show the total decay width of the light CP-even Higgs bosons to SM modes and $\chi^0_1 \chi^0_1$ and $A_i A_i$ in the models considered in Fig. \ref{fig:totwidth}. The total decay width can be enhanced due to the $\chi^0_1 \chi^0_1$ and $A_i A_i$ partial widths. The total width in the MSSM can be larger than the SM due to the enhanced couplings of the Higgs to $b \bar b$ when away from the MSSM decoupling limit. In the n/sMSSM, Higgs masses above the LEP bound decaying to $\chi^0_1$ pairs make a contribution to the total width that is no larger than a few MeV. The $A_i A_i$ decays are responsible for the significantly larger total widths in the NMSSM and n/sMSSM.
\section{Conclusions}\label{sect:concl}
Extensions of the MSSM that include a singlet scalar field provide a natural solution to the undesirable fine-tuning of the $\mu$-parameter needed in the MSSM. After symmetry breaking, the singlet Higgs obtains a VEV, generating an effective $\mu$-parameter naturally at the EW/TeV scale. While the extensions to the MSSM that we consider each contain at least one additional singlet field, $S$, the symmetries that distinguish each model and their resulting superpotential terms provide phenomenologically distinct consequences. We made grid and random scans over the parameter space of each model and imposed the LEP experimental bounds on the lightest CP-even $ZZH_i$ couplings. The limits on $M_{A_2}$ and $M_{H_1}$ in the MSSM were converted to associated $A_i H_j$ production cross section limits and imposed. We also imposed constraints from the LEP chargino mass limit and new contributions to the invisible $Z$ decay width. Within the UMSSM, we enforced an additional constraint on the $Z'$ boson mixing with the SM $Z$.
We found the following interesting properties of the considered models:
\begin{enumerate}
\item The lightest Higgs boson can have a considerable singlet fraction in the n/sMSSM and NMSSM. Since the singlet field does not couple to SM fields, the couplings of the lightest Higgs to MSSM particles are reduced due to the mixing of the singlet field with the doublet Higgs bosons, resulting in significantly smaller $e^+ e^-$ production cross sections. Therefore, in the n/sMSSM and NMSSM, Higgs boson masses that are considerably smaller than the LEP bound on the SM Higgs boson mass are possible. The upper bound on the lightest CP-even Higgs mass in extended-MSSM models is also relaxed due to the contribution of the singlet scalar through the mixing of the Higgs doublets and the singlet. In our parameter scans, the upper limit increases to 164 GeV for the NMSSM and 170 GeV for the n/sMSSM. The lightest CP-even Higgs mass in the UMSSM can be as large as 173 GeV due to additional gauge interactions; however, the lower bound on the lightest CP-even Higgs mass is similar to that of the MSSM.
\item A common feature of each model is a CP-even Higgs boson with a mass in a range concentrated just above the SM mass limit. At least two CP-even Higgs bosons must have nonzero MSSM fractions, making the lightest non-singlet dominated Higgs obey limits on the $ZZH_i$ coupling, forbidding masses below 114.4 GeV unless additional doublet mixing occurs. The lightest Higgs in the NMSSM and n/sMSSM can evade this by singlet-doublet mixing, allowing a Higgs with mass just below the SM limit.
\item In the $s$-decoupling limit with fixed $\mu_{\rm eff}$ at the EW/TeV scale, two Higgs states correspond to the MSSM Higgs states. The $s$-decoupling limit in the n/sMSSM and UMSSM yields two CP-even Higgs bosons with masses and couplings similar to those of the MSSM, with one extra decoupled Higgs. The $s$-decoupling limit is often achieved in the UMSSM since the strict $\alpha_{ZZ'}$ mixing constraint must be obeyed, and requires $s$ to be at the TeV scale. In this case, the mass of the decoupled Higgs scales with $s$. However, the $s$-decoupling limit is not always required in the UMSSM, as either a delicate cancellation of the mixing term in the $Z-Z'$ mass-squared matrix that requires $\tan \beta \sim \sqrt{Q_{H_d} \over Q_{H_u}}$, or a suppression in $Q_{H_u}$ at large $\tan \beta$, can evade the mixing constraint. These fine-tuning scenarios do allow $s$ to be lower, but do not often result in a dramatically reduced lightest Higgs mass. In the n/sMSSM, the lightest Higgs boson decouples as it has vanishing mass and is singlet dominated, while $H_2$ and $H_3$ correspond to the MSSM Higgs bosons. However, the NMSSM does not have this behavior. Although the $s$-decoupling limit provides two MSSM-like Higgs bosons, one becomes massless at tree-level and the other scales as $\sqrt{ \sqrt 2 \kappa s \mu_{\rm eff} \csc{2\beta}}$, while the singlet Higgs boson mass scales with $\kappa$ and $s$. This departure from the $s$-decoupling behavior of the other models is provided by the cubic self-coupling of the singlet field in the superpotential.
\item Weak boson couplings of the Higgs bosons are generally reduced from those of the SM, which translates to lower Higgs production rates. However, the production rates can be enhanced kinematically since the Higgs mass can be lower than the SM mass limit. Branching fractions may be larger than in the SM due to the suppression of the total width if the dominant $b\bar b$ decay mode is suppressed by mixing effects. The $H\to gg$ partial width can be either enhanced or reduced due to both enhancements of couplings to fields running in the loops and their interference effects.
\item The branching fraction for $H\to \gamma \gamma$ can be enhanced significantly in the n/sMSSM and NMSSM, providing more opportunity for Higgs discovery. Interference effects aid the enhancement of the overall decay width, as in $H\to gg$.
\item Non-SM decays can become important if allowed. The lightest Higgs boson in the n/sMSSM can have a high branching fraction to light neutralino pairs if kinematically allowed. This decay width can be as large as a few MeV and contribute significantly to the total width of the Higgs.
However, in the n/sMSSM, much of the allowed kinematic region with $M_{\chi^0_1} \le 30$ GeV may be disfavored due to the prediction of a high $\chi^0_1$ relic density. The $H_1 \to A_2 A_2$ decays are not favored in the UMSSM since $s$ must be large to avoid the $Z-Z'$ mixing constraint, which in turn pushes the allowed values of $M_{A_2}$ beyond the kinematically allowed region for the decay. When allowed, this decay mode is dominant in the NMSSM and n/sMSSM.
\item The total decay width of the lightest Higgs boson can be enhanced by many orders of magnitude due to the large partial widths for the non-SM modes $\chi^0_1 \chi^0_1$ and $A_i A_i$. Decays to $\chi^0_1$ pairs make a contribution to the total width that is no larger than a few MeV, while $A_i A_i$ decays lead to significant total width enhancements.
\end{enumerate}
\section{Introduction}
Gravitational wave (GW) interferometers currently taking data, and under development or construction, have as a possible target a stochastic background of GWs of cosmological origin (for reviews, see \cite{thorne,allen:1996vm,Maggiore:1999vm,Maggiore:2000gv}; for recent results, see \cite{ligonature}). Since GWs propagate freely through the Universe after being produced, their detection provides a powerful diagnostic for the physics of the Early Universe. Several mechanisms that might generate such GWs have been discussed, including quantum fluctuations during or shortly after inflation \cite{Turner:1990rc,hogan,Witten:1984rs}, cosmic strings \cite{Steinhardt:1981ct}, cosmological magnetic fields \cite{Gyulassy:1983rq,VanHove:1983rk,Enqvist:1991xw,Huet:1992ex}, plasma turbulence \cite{landau,Abney:1993ku,weinberggr,Turner:1992tz} and bubble wall collisions during first-order phase transitions \cite{Kosowsky:1992rz,Kosowsky:1991ua,Kosowsky:1992vn,Coleman:1977py,Callan:1977pt,Kamionkowski:1992dc}.
Specifically, the space interferometer LISA, expected to fly, or to be nearing completion, over the next decade, will have a sensitivity peak for GWs with frequencies between $10^{-4}$ and $1$ Hz \cite{Maggiore:2000gv}. Quite fortunately, this range corresponds to the frequency range expected today, after redshifting, from GWs produced at a temperature $T \sim 100\ {\rm GeV}\sim E_{\rm EW}$, where the latter symbol indicates the electroweak scale. Intriguingly enough, the production of GWs at $T\sim E_{\rm EW}$ is indeed expected if the electroweak phase transition is strongly first order. In turn, this is a necessary condition for the success of scenarios where the baryon asymmetry of the Universe is produced at the electroweak phase transition (electroweak baryogenesis, EWB; for a review, see \cite{Riotto:1999yt}). However, there is a generic tension between the small bubble wall velocity required in electroweak baryogenesis models (see e.g.~\cite{bauref1,bauref2,bauref3}) and the super-sonic bubble wall velocities (detonation) under investigation here. Depending on the sources of net baryon number, super-sonic bubble wall velocities might be compatible with a baryon asymmetry produced at the EWPT, although this is generally not the case. We are at the dawn of an exciting period of exploring the electroweak scale, with the possibility to connect such diverse experimental endeavors as GW detection and the Large Hadron Collider. All the more exciting is that these grand experimental enterprises may also help answer the question of how the baryon asymmetry arose in the Early Universe.
In its minimal setup, the strength of the electroweak phase transition in the Standard Model depends only on the mass of the Higgs boson (for a review, see \cite{tempreview}). For the Higgs mass range compatible with searches at LEP-II, non-perturbative lattice computations indicate that there is no phase transition at all, but rather a smooth crossover \cite{Kajantie:1996mn}. No GW production is thus expected at the EW phase transition if the electroweak sector corresponds to the minimal Standard Model. In addition, if this is the case, EWB cannot be the mechanism underlying the production of the baryon asymmetry in the Universe.
Fortunately, the strength of the EW phase transition is actually strongly model-dependent and parameter-dependent (for a review, see \cite{Rubakov:1996vz}), and numerous extensions of the Standard Model do predict a strongly first order EW phase transition. If this is further extended with the other necessary ingredients for EWB, the EW scale might indeed be responsible for generating the observed baryon-antibaryon asymmetry.
If the EW phase transition is strongly first order, then as the universe cools to $T\lesssim E_{\rm EW}$ it finds itself trapped in a metastable EW unbroken phase, whose vacuum state is a false vacuum (i.e.~it is not the lowest energy state). A potential energy barrier exists between the false and the true (lowest energy) electroweak symmetry breaking vacuum. Quantum mechanical tunneling produces bubbles of true vacuum (broken EW phase), which then expand, collide, and combine to fill the universe with the true vacuum. GWs can be abundantly produced at the EW phase transition, primarily through bubble collisions \cite{Witten:1984rs,hogan,Turner:1990rc,Kosowsky:1992rz,Kosowsky:1991ua,Turner:1992tz,Kosowsky:1992vn,Kamionkowski:1993fg}. This mechanism has been extensively investigated, analytically and numerically, and in relation to the effective potential that drives the transition itself.
In particular, the possibility of a GW signal from the EW phase transition has attracted, not surprisingly, a great deal of attention. In view of the above mentioned large model-dependence, though, most of the existing literature has been devoted to either special cases and particular corners of parameter space (such as the light stop scenario in the context of the minimal supersymmetric extension of the Standard Model, MSSM; for a review, see \cite{Quiros:2000wk}), and/or to accurate but numerical studies only \cite{Ashoorioon:2009nf,gwbubble}. Alternatively, model independent results on the detectability of a GW signal from a strongly first order phase transition have been derived assuming a few relevant parameters for the GW dynamics could be computed from the effective potential that drives the particular phase transition under consideration (for recent related work, see e.g.~\cite{Grojean:2006bp,Caprini:2007xq,Kahniashvili:2008pf}).
The scope of the present study is to use semi-analytical results for the tunneling rate of scalar fields with quartic potentials \cite{se3approx} to predict {\em analytically} the strength of the EW phase transition and the GW signal in extensions of the Standard Model EW sector that can be characterized by the dynamics of a single order parameter, a scalar field $\phi$. These models include simple generalizations of the SM effective potential, in appropriate dynamical regimes. The main result of this paper is a closed analytic formula for the two parameters describing the GW signal as a function of the parameters appearing in the effective potential of the scalar field driving the EW phase transition.
The rest of this paper is organized as follows: in Section \ref{sec:backgrnd} we summarize the relevant physics and definitions of the quantities we will be studying, in particular an approximation for the three dimensional Euclidean action, which is the basis for computing the finite temperature tunneling rate. Then, in Section \ref{sec:approx}, we derive an approximation for the tunneling temperature and present exact and approximate formulas for the GW parameters for any effective potential similar in form to the SM case. Following that, in Section \ref{sec:param} we constrain some parameters in order to reproduce the usual electroweak symmetry breaking pattern, and plot the effect of the various parameters in the potential on the parameters driving the GW signal. We examine the physical relevance of the parameter space and the detectability of the GW when varying various parameters beyond their SM values. In Section \ref{sec:models} we describe specific examples where our formalism can be applied in certain regimes, including $SU(2)$ triplet and singlet extensions to the Higgs sector, top-flavor models, and models encompassing higher dimensional operators.
\section{Background}\label{sec:backgrnd}
In the context of a strongly first-order phase transition in the early universe, the basic problem of evaluating the GW signal amounts to calculating the tunneling rate (i.e.~the decay probability) from the false vacuum state to the true vacuum state -- in other words, the bubble nucleation rate per unit time per unit volume (for a review of tunneling in a finite temperature quantum field theory, see \cite{tempreview}). This is given in general by the expression:
\begin{equation}
\frac{\Gamma}{V} \sim A(T)e^{-S_{E3}/T}
\end{equation}
where the factor $A(T) \sim T^4$ and $S_{E3}$ is the three dimensional\footnote{The symmetry for finite temperature bubble nucleation is $O(3)$, not $O(4)$ \cite{linde1,linde2}.} Euclidean action
\begin{equation}
S_{E3} = \int\textrm{d}^3x\left[\frac{1}{2}\left(\vec{\nabla}\phi\right)^2 + V\left(\phi,T\right)\right].
\end{equation}
Typically $S_{E3}$ is not calculated analytically, but semi-analytic expressions exist \cite{se3approx}, which provide the basis for the rest of our analysis. We summarize these results below.
As we point out in Sec.~\ref{sec:approx}, it is possible to derive an expression for the temperature at which tunneling is probable and the universe undergoes a phase transition. Once regions of the universe can tunnel to the true vacuum, large bubbles of true vacuum are produced, which expand, collide, and combine into larger bubbles. Expanding bubbles gain wall velocity and energy, but spherical symmetry does not allow any energy to be directly transferred into GWs. However, when more than two bubbles collide, this symmetry is broken and energy can be released into GWs\footnote{Section IIB of \cite{Kosowsky:1991ua} has a detailed discussion of the symmetry of two colliding bubbles.}. Energy released into the universe can also be transferred to GWs through turbulence, but we neglect this sub-dominant contribution to the spectrum (for recent analyses, see \cite{Kosowsky:2001xp,Gogoberidze:2007an,Caprini:2009yp}).
Two quantities determine the GW spectrum when the phase transition proceeds through detonation (i.e.~the bubble wall velocity exceeds the speed of sound in the plasma): $\alpha$ measures the energy density released in transitioning from the false to the true vacuum, and $\beta$ is the bubble nucleation rate per unit volume. The actual parametrization of the GW spectrum is summarized below.
\subsection{The Three Dimensional Euclidean Action}\label{sec:se3}
The tunneling process is calculated from the three dimensional Euclidean action. Typically this is done numerically, however, there is a general, semi-analytic, approximate solution for quartic potentials of a single scalar field \cite{se3approx}. This is the key starting point for our calculations.
Consider the potential for a scalar field $\phi$ of the form
\begin{equation}
V(\phi) = \lambda\phi^4 - a\phi^3 + b\phi^2 + c\phi + d
\end{equation}
with $\lambda > 0$ to have the potential bounded from below, $b > 0$ to have $\phi = 0$ a minimum, and $a > 0$ for the minimum to be at positive $\phi$. Without loss of generality, the false vacuum can be placed at the origin ($\phi = 0, V = 0$), and so $d = c = 0$. The above coefficients are typically not the same as in the zero temperature potential of the same theory, and may have temperature dependence; they are all positive. The three dimensional Euclidean action is approximately given by
\begin{equation}\label{eq:se3}
S_{E3} = \frac{\pi a}{\lambda^{3/2}}\frac{8\sqrt{2}}{81}(2-\delta)^{-2}\sqrt{\delta/2}\left(\beta_1\delta + \beta_2\delta^2 +\beta_3\delta^3\right)
\end{equation}
where $\delta \equiv 8\lambda b/a^2, \beta_1 = 8.2938, \beta_2 = -5.5330,
\textrm{and }\beta_3 = 0.8180$. These parameters are the result of a numerical fit in the semi-analytic study\footnote{The absolute errors of the reduced action divided by the thin wall limit action are bounded to be less than $0.033$ -- see \cite{se3approx} for the full details.} \cite{se3approx}.
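Since this fit is the computational backbone of what follows, a direct transcription into Python may be useful; the sketch below simply evaluates Eq. (\ref{eq:se3}) and is a restatement of the fit, not an independent derivation.
\begin{verbatim}
# Semi-analytic S_E3 of Eq. (eq:se3) for V = lam*phi^4 - a*phi^3 + b*phi^2
import math

B1, B2, B3 = 8.2938, -5.5330, 0.8180     # fit coefficients quoted above

def S_E3(lam, a, b):
    delta = 8.0*lam*b/a**2    # delta -> 2 when the minima become degenerate
    return (math.pi*a/lam**1.5)*(8.0*math.sqrt(2.0)/81.0) \
           * math.sqrt(delta/2.0)/(2.0 - delta)**2 \
           * (B1*delta + B2*delta**2 + B3*delta**3)
\end{verbatim}
The divergence as $\delta \to 2$ encodes the vanishing tunneling rate at degeneracy, a feature used below to approximate the tunneling temperature.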
\subsection{Gravitational Wave Parameters}\label{sec:gwp}
We are interested in the gravitational wave signal from electroweak symmetry breaking, which is mostly produced through the nucleation and collision of bubbles of the broken phase. The first part of calculating the GW parameters is to determine at what temperature bubble nucleation will be an energetically favored process.
Assuming temperatures of $\mathcal{O}(100\textrm{ GeV})$, the condition that the probability for a single bubble to be nucleated within a horizon volume be of order one is well approximated\footnote{The exponential factor of the tunneling probability ensures that this approximation is valid for a broad range of temperatures or energy scales.} in the Early Universe by
\begin{equation}
S_3(T_t)/T_t \sim 140
\end{equation}
where $T_t$ is the tunneling temperature (see e.g.~\cite{tempreview}). This temperature will be between the critical temperature $T_c$, where there is a degenerate minimum with $\phi \neq 0$ (the only minimum at high $T$ is at $\phi = 0$), and the destabilization temperature $T_{dest}$, where the minimum at $\phi = 0$ is a local maximum:
\begin{equation}
T_{dest} \leq T_t < T_c.
\end{equation}
An approximation is derived for $T_t$ in Section \ref{sec:approx}.
The vacuum energy (latent heat) density in this process is given by the standard statistical mechanics expression
\begin{equation}
\epsilon_t = -V(v(T),T) + T\frac{\textrm{d}}{\textrm{d}T}V(v(T),T)\bigg|_{T_t},
\end{equation}
and the ratio between this and the radiation energy density is
\begin{equation}
\alpha = \frac{30\epsilon_t}{\pi^2g_tT_t^4},
\end{equation}
where $g_t$ is the number of relativistic degrees of freedom at $T_t$. In a radiation dominated universe the parameter $\beta$ is given by
\begin{equation}
\frac{\beta}{H_t} = T_t\frac{\textrm{d}(S_3(T)/T)}{\textrm{d}T}\bigg|_{T_t}.
\end{equation}
$\alpha$ and $\beta/H_t$ parametrize the GW spectrum \cite{Kamionkowski:1993fg}, which is defined below, following \cite{gwbubble}.
We consider only a phase transition proceeding through detonation -- the bubble wall velocity is faster than the speed of sound in the plasma. This also ensures that the thin wall approximation is valid, which was used in \cite{gwbubble}. The wall velocity is \cite{Steinhardt:1981ct}
\begin{equation}
v_b(\alpha) = \frac{\frac{1}{\sqrt{3}} + \sqrt{\alpha^2 + \frac{2}{3}\alpha}}{1 + \alpha},
\end{equation}
which increases with $\alpha$, starting at the speed of sound in the plasma ($1/\sqrt{3}$) up to the speed of light. However, this may not always be a good assumption, depending on the exact theory for the electroweak phase transition; particle scattering with the bubble wall will affect $v_b$. For instance, \cite{vb} analyzed the MSSM stop contribution to the friction of the bubble wall in the plasma. This can greatly decrease $v_b$, down to about $0.05$. On the other hand, one still expects some scaling with $\alpha$ (which can be quite large, and we are not restricting ourselves to just the MSSM), which could counteract these effects. Since we do not make any assumptions on the underlying theory, it is difficult to say what value $v_b$ will ultimately take, and we use the above equation for what follows. We are also restricting ourselves to the case of detonation, so we must have $v_b$ greater than $1/\sqrt{3}$. This is a non-trivial calculation from the effective potential and bubble dynamics, so for the purposes of this work we must assume that this holds. For a particular model, however, one must check that $v_b$ is sufficiently large to apply the formulas we will present for the GW spectrum.
How much of the vacuum energy is transferred to the bulk, rather than reheating the bubble, is given by an efficiency factor, again considering only the case of detonations and the same caveats above:
\begin{equation}
\kappa(\alpha) = \frac{1}{1 + 0.715\alpha}\left(0.715\alpha + \frac{4}{27}\sqrt{\frac{3\alpha}{2}}\right)
\end{equation}
For the bubble collision contribution the spectrum is parametrized (close to the peak frequency) as
\begin{equation}
\Omega_{\mathrm{Col}*}(f_{\mathrm{Col}*}) = \widetilde{\Omega}_{\mathrm{Col}*}\frac{(a + b)\tilde{f}_{\mathrm{Col}*}^bf_{\mathrm{Col}*}^a}{b\tilde{f}_{\mathrm{Col}*}^{(a + b)} + af_{\mathrm{Col}*}^{(a + b)}}
\end{equation}
with the peak frequency $\tilde{f}_{\mathrm{Col}*}$ and peak amplitude $\widetilde{\Omega}_{\mathrm{Col}*}$ (as functions of the wall velocity). The exponents are in the range $a \in [2.66, 2.82]$ and $b \in [0.90, 1.19]$ with the case of large wall velocity, $v_b \approx 1$, having $a \approx 2.8$ and $b \approx 1.0$ (see the numerical analysis of \cite{gwbubble} for details). The spectrum observed today is found by redshifting:
\begin{align}
\tilde{f}_{\mathrm{Col}} &= 16.5\times10^{-3}\textrm{ mHz}\left(\frac{\tilde{f}_{\mathrm{Col}*}}{\beta}\right)\left(\frac{\beta}{H_t}\right)\left(\frac{T_t}{100\textrm{ GeV}}\right)\left(\frac{g_t}{100}\right)^{1/6}\label{eq:gws1}\\
h^2\widetilde{\Omega}_\mathrm{Col} &= 1.67\times10^{-5}\widetilde{\Omega}_{\mathrm{Col}*}\left(\frac{100}{g_t}\right)^{1/3}\notag\\
&= 1.67\times10^{-5}\tilde{\Delta}\kappa^2\left(\frac{H_t}{\beta}\right)^2\left(\frac{\alpha}{\alpha + 1}\right)^2\left(\frac{100}{g_t}\right)^{1/3}\label{eq:gws2}
\end{align}
with the functions $\tilde{f}_{\mathrm{Col}*}/\beta$ and $\tilde{\Delta}$ approximately \cite{gwbubble}
\begin{align}
\tilde{\Delta} &= \frac{0.11v_b^3}{0.42 + v_b^2}\\
\tilde{f}_{\mathrm{Col}*}/\beta &= \frac{0.62}{1.8 - 0.1v_b + v_b^2}.
\end{align}
Although we use the results of a numerical analysis for the GW spectrum, there has also been recent analytic work on this subject (e.g.~\cite{Caprini:2007xq,Caprini:2009fx}).
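The pipeline from $(\alpha, \beta/H_t, T_t, g_t)$ to the observable peak is short enough to transcribe directly; the following Python sketch evaluates Eqs. (\ref{eq:gws1})--(\ref{eq:gws2}) together with $v_b(\alpha)$, $\kappa(\alpha)$, $\tilde{\Delta}$, and $\tilde{f}_{\mathrm{Col}*}/\beta$ as given above (the sample inputs are arbitrary).
\begin{verbatim}
# Peak frequency (mHz) and amplitude h^2*Omega of the bubble-collision
# GW signal today, per Eqs. (gws1)-(gws2) and the detonation formulas.
import math

def v_b(alpha):                      # detonation wall velocity
    return (1.0/math.sqrt(3.0)
            + math.sqrt(alpha**2 + 2.0*alpha/3.0))/(1.0 + alpha)

def kappa(alpha):                    # efficiency factor (detonations)
    return (0.715*alpha + (4.0/27.0)*math.sqrt(1.5*alpha)) \
           / (1.0 + 0.715*alpha)

def gw_peak_today(alpha, beta_over_H, T_t, g_t=100.0):
    vb = v_b(alpha)
    f_over_beta = 0.62/(1.8 - 0.1*vb + vb**2)
    Delta = 0.11*vb**3/(0.42 + vb**2)
    f_peak = 16.5e-3*f_over_beta*beta_over_H \
             * (T_t/100.0)*(g_t/100.0)**(1.0/6.0)        # mHz
    h2Omega = 1.67e-5*Delta*(kappa(alpha)**2) \
              * (alpha/(1.0 + alpha))**2 \
              * (100.0/g_t)**(1.0/3.0)/beta_over_H**2
    return f_peak, h2Omega

print(gw_peak_today(alpha=0.2, beta_over_H=100.0, T_t=80.0))
\end{verbatim}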
To summarize, the three dimensional Euclidean action is used to calculate the tunneling rate to the true, symmetry breaking, minimum of the potential. An approximation for quartic potentials was given in \cite{se3approx}, which we will employ in this study. The action is used to directly calculate the tunneling temperature, where this process can produce appreciable amounts of gravitational waves. Two parameters are calculated from the potential to give the spectrum observed today.
\section{$T_t, \alpha$, and $\beta/H_t$ from a Generic SM-Like Potential}\label{sec:approx}
The potential we consider is a generic, quartic potential modeled after the SM Higgs's effective potential at high temperature. What follows is a quick overview of the relevant terms and quantities in the SM. The one loop, finite temperature, effective potential due to the gauge bosons and top quark, expanded at high temperature, is
\begin{equation}
V(\phi,T) = D(T^2-T_{0 SM}^2)\phi^2 - ET\phi^3 + \frac{\lambda_{SM}(T)}{4}\phi^4
\end{equation}
with the coefficients being
\begin{align}
D &= \frac{2m_W^2 + m_Z^2 + 2m_t^2}{8v^2} \approx 0.17\\
E &= \frac{2m_W^3 + m_Z^3}{4\pi v^3} \approx 9.6\times10^{-3}\\
T_{0 SM}^2 &= \frac{m_h^2 - 8Bv^2}{4D} \approx (238.6~\mathrm{GeV})^2\\
B &= \frac{3}{64\pi^2 v^4}(2m_W^4 + m_Z^4 - 4m_t^4) \approx -4.6\times10^{-3}\\
\lambda_{SM}(T) = \lambda_{SM} - \frac{3}{16\pi^2v^4}&\bigg(2m_W^4\log\frac{m_W^2}{A_BT^2} + m_Z^4\log\frac{m_Z^2}{A_BT^2} - 4m_t^4\log\frac{m_t^2}{A_FT^2}\bigg)
\end{align}
where $\log A_B = \log a_b - 3/2, \log A_F = \log a_f - 3/2, a_b = 16\pi^2\exp(3/2 - 2\gamma_E), a_f = \pi^2\exp(3/2 - 2\gamma_E),$ and $\gamma_E \approx 0.5772$ is the Euler--Mascheroni constant \cite{gammae} (for details, see e.g.~\cite{tempreview}). All the masses (Higgs, W, Z, and top) above refer to the usual zero temperature values, and $v \approx 246$ GeV is the Higgs vacuum expectation value (vev). The high temperature approximation comes from expanding the thermal bosonic and fermionic functions, $J_{\mathrm{B,F}}$, appearing as the thermal contribution to the one-loop effective potential:
\begin{equation}
\frac{T^4}{2\pi^2}J_{\mathrm{B}}\left[m^2(\phi)/T^2\right],\qquad
\frac{-2\lambda T^4}{2\pi^2}J_{\mathrm{F}}\left[m^2(\phi)/T^2\right],
\end{equation}
where $m$ is the field-dependent mass of the boson or fermion and we are working in units where $k = 1 = \hbar = c$ (so temperature is measured in energy units). The thermal functions are given by
\begin{equation}
J_{\mathrm{B,F}}\left[m^2\beta^2\right] = \int_0^\infty\mathrm{d}x x^2\log\left[1 \mp e^{-\sqrt{x^2 + m^2\beta^2}}\right],
\end{equation}
and have high temperature expansions
\begin{align}
J_\mathrm{B}(m^2/T^2) &\approx -\frac{\pi^4}{45} + \frac{\pi^2}{12}\frac{m^2}{T^2} - \frac{\pi}{6}\left(\frac{m^2}{T^2}\right)^{3/2} - \frac{1}{32}\frac{m^4}{T^4}\log\frac{m^2}{a_bT^2} + \cdots\\
J_\mathrm{F}(m^2/T^2) &\approx \frac{7\pi^4}{360} + \frac{\pi^2}{24}\frac{m^2}{T^2} - \frac{1}{32}\frac{m^4}{T^4}\log\frac{m^2}{a_fT^2} + \cdots
\end{align}
Since $T_t > T_{0 SM}$, the destabilization temperature, the temperature scale for the phase transition is much larger than any of the particle masses; a high temperature expansion will be a good approximation in this regime.
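As a numerical cross-check of the constants quoted above, the Python fragment below evaluates $D$, $E$, $B$, and $T_{0\,SM}$ from the tree-level masses; the values of $m_t$ and $m_h$ (the latter fixed by $\lambda_{SM} = 0.3$, the value used in the scans of Section \ref{sec:param}, via $m_h^2 = 2\lambda_{SM} v^2$) are assumptions for illustration.
\begin{verbatim}
# SM coefficients of the high-temperature effective potential.
import math

v, mW, mZ, mt = 246.0, 80.4, 91.2, 173.1   # GeV (mt assumed)
lam_SM = 0.3
mh = math.sqrt(2.0*lam_SM)*v               # ~190 GeV for lam_SM = 0.3

D = (2*mW**2 + mZ**2 + 2*mt**2)/(8*v**2)
E = (2*mW**3 + mZ**3)/(4*math.pi*v**3)
B = 3.0/(64*math.pi**2*v**4)*(2*mW**4 + mZ**4 - 4*mt**4)
T0_SM = math.sqrt((mh**2 - 8*B*v**2)/(4*D))

print(D, E, B, T0_SM)  # ~0.17, ~9.6e-3, ~ -4.5e-3, ~239 GeV
\end{verbatim}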
In terms of the definitions given in Section \ref{sec:backgrnd} we have
\begin{align}
\lambda(T) &= \frac{\lambda_{SM}(T)}{4}\\
a(T) &= ET\\
b(T) &= D(T^2 - T_{0 SM}^2)\\
\delta(T) \equiv \frac{8\lambda b}{a^2} &=
\frac{2\lambda_{SM}(T)D(T^2 - T_{0 SM}^2)}{E^2T^2} =
\frac{2D}{E^2}\left[1 - \left(\frac{T_{0 SM}}{T}\right)^2\right]\lambda_{SM}(T).
\end{align}
We now consider an effective potential of the general form of the SM case,
\begin{equation}\label{eq:genpot}
V_{eff}(\phi,T) = \frac{\lambda(T)}{4}\phi^4 - (ET - e)\phi^3 + D(T^2 - T_0^2)\phi^2,
\end{equation}
where there is a new parameter, $e$, motivated by gauge singlets (see Sec.~\ref{sec:singlet}); $e = 0$ in the SM. The potential minima (where the first derivative with respect to $\phi$ vanishes) are located at
\begin{equation}\label{eq:genphic}
\phi = 0, \frac{3(ET - e) \pm \sqrt{9(e - ET)^2 - 8D\lambda(T)(T^2 - T_0^2)}}{2\lambda(T)}.
\end{equation}
The parameters in the Euclidean action are (using a tilde here to denote the $\lambda$ of the formulae in Sec.~\ref{sec:se3}, otherwise we mean the $\lambda$ of our general effective potential above):
\begin{align}
\tilde{\lambda}(T) &= \frac{\lambda(T)}{4}\\
a(T) &= ET - e\\
b(T) &= D(T^2 - T_0^2)\\
\delta(T) \equiv \frac{8\tilde{\lambda} b}{a^2} &= \frac{2D(T^2 - T_0^2)\lambda(T)}{(ET - e)^2}.
\end{align}
The critical temperature is (the sign of the square root is chosen so that the temperature is positive in the SM case):
\begin{equation}\label{eq:gentc}
T_c = \frac{eE-\sqrt{D\lambda(T_c)(e^2 + (D\lambda(T_c) - E^2)T_0^2)}}{E^2 - D\lambda(T_c)}
\end{equation}
From here on the temperature dependence of $\lambda$ will be dropped, taking it as a free parameter in the theory. Alternatively, $\lambda(T) \approx \lambda(T_0)$ or $\lambda(T) \approx \lambda(T_c)$, since $\lambda$ is slowly varying with temperature (logarithmic corrections).
For calculating the tunneling temperature, we note the general behavior of $S_{E3}/T$: it decreases rapidly as the temperature is lowered, from a singularity at $T=T_c$. Physically, this is because the tunneling rate starts at zero when the minima are degenerate, and increases rapidly as the new minimum becomes the global one. In the approximation, this can be seen by looking at the term $(2 - \delta)^{-2}$. At $T_c$, $\delta(T_c) = 2$ for this general effective potential. To approximate a solution for $T_t$ we expand near $T_c$: $T \rightarrow T_c(1 - \epsilon)$. $\epsilon$ will be very small for $T_t$, as the sharp peak ensures that $S_{E3}/T = 140$ very close to $T_c$. Expanding $\delta$ to lowest non-vanishing order in $\epsilon$, about $\epsilon = 0$,
\begin{equation}
\delta \approx 2 + \frac{4D\lambda(ET_0^2 - eT_c)T_c}{(e - ET_c)^3}\epsilon \equiv 2 + F\epsilon,
\end{equation}
where the first term is due to substituting in \eqref{eq:gentc}. As $\epsilon \rightarrow 0$, we recover that $\delta \rightarrow 2$. Now the relevant quantity for the tunneling temperature, $S_{E3}/T$, can be approximated by again having $T \rightarrow T_c(1 - \epsilon)$ and expanding about $\epsilon = 0$. Defining the prefactors of $S_{E3}$ as
\begin{equation}
G \equiv \frac{64\sqrt{2}\pi}{81\lambda^{3/2}},
\end{equation}
the resulting lowest order expression is
\begin{equation}\label{eq:se3tnew}
S_{E3}/T \approx \frac{2G(\beta_1 + 2\beta_2 + 4\beta_3)(ET_c - e)}{F^2T_c}\frac{1}{\epsilon^2}.
\end{equation}
As expected, there is a singularity as $\epsilon \rightarrow 0$, with the same power as the divergent piece in the original expression. Finally, we arrive at an estimate for the tunneling temperature:
\begin{align}
\epsilon &\approx \sqrt{\frac{2G(\beta_1 + 2\beta_2 + 4\beta_3)(ET_c - e)}{140F^2T_c}} \label{eq:epsilon}\\
T_t &\approx T_c(1 - \epsilon).
\end{align}
A rough error estimate comes from comparing this value of $T_t$ to the value obtained numerically using the first approximation, \eqref{eq:se3}. Throughout most of the parameter space that we analyze in the next section, the difference between the two values averages to less than $0.1\%$ (over most of the space it is much lower). Some regions of $-e$, however, can have an average difference of $30\%$.
The true minimum of the potential is the root with the positive square root in \eqref{eq:genphic}, which gives the minimum as a function of temperature. This yields an exact expression for $\alpha$ (from the formulae in Sec.~\ref{sec:gwp}):
\begin{align}
\alpha &= \frac{15(3e - 3ET_t - \xi)}{16\pi^2g_t\lambda^3T_t^4}\left\{9e^3+e^2(9ET_t - 6\xi) + 9E^2T_t^2(3ET_t + \xi) - 4D\lambda\left[-3ET_t(T_0^2 - 3T_t^2)\right.\right.\notag\\ &\left.\left. + (T_0^2 + T_t^2)\xi\right] + e\left[-45E^2T_t^2 + 12D\lambda(T_0^2 + T_t^2) - 3ET_t\xi + \xi^2\right]\right\},
\end{align}
where $\xi \equiv \sqrt{9(e-ET_t)^2 - 8D\lambda(T_t^2-T_0^2)}$. $\beta/H_t$ can be computed directly from \eqref{eq:se3} and the definitions in Sec.~\ref{sec:approx}:
\begin{align}
\beta/H_t &= \frac{1024\pi D^2\sqrt{2\lambda}}{81 T_t (e-E T_t)^{10}\sqrt{D\lambda h}k^3} \left\{-32 D\lambda T_t (E T_0^2-e T_t) (ET_t -e) h^2j(8,64)\right.\notag\\
&\left.-E T_t h^2kj(8,64) + (E T_t - e)h^2k j(8,64) + T_t (e T_t - E T_0^2)hkj(8,64)\right.\notag\\
&\left.+ 4T_t (E T_0^2 - e T_t) (e - ET_t)\left[(e - ET_t)^2 - 4 D\lambda h\right]h j(16,192)\right\},
\end{align}
where
\begin{align}
h &\equiv T_t^2 - T_0^2\\
j(x,y) &\equiv (ET_t - e)^4\beta_1 + xD\lambda(ET_t - e)^2h\beta_2 + yD^2\lambda^2h^2\beta_3\\
k &\equiv (ET_t - e)^3\left(2 - \frac{8D\lambda h}{(ET_t - e)^2}\right).
\end{align}
A much simpler expression comes from using the approximation in \eqref{eq:se3tnew} (noting that $\frac{\textrm{d}}{\textrm{d}T} = -\frac{1}{T_c}\frac{\textrm{d}}{\textrm{d}\epsilon}$ near $T_t$):
\begin{equation}
\beta/H_t \approx \frac{4G(\beta_1 + 2\beta_2 + 4\beta_3)(ET_c - e)}{F^2T_c}\frac{(1 - \epsilon)}{\epsilon^3}.
\end{equation}
Again, a rough error estimate comes from comparing this approximation with computing $\beta/H_t$ directly from \eqref{eq:se3}. Over the parameter space we consider, the difference typically averages to less than $10\%$. Again, part of the $-e$ region has much larger errors, but, on average, well within an order of magnitude. These errors in $T_t$ and $\beta/H_t$ might affect the peak frequency and overall power, to an extent that quantitatively depends on the size of these errors. It is difficult to estimate analytically the effect on the GW spectrum, but if the errors are not very large, they are possibly only important in borderline detection cases.
Using the above approximation for the tunneling temperature provides an approximation for the GW parameters for any theory with a SM-like effective potential. The GW spectrum is then fully specified, using the equations at the end of Sec.~\ref{sec:gwp}.
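The entire chain from the potential coefficients to $T_t$ and $\beta/H_t$ can thus be coded in a few lines; the Python sketch below implements Eqs. (\ref{eq:gentc}), (\ref{eq:epsilon}), and the $\beta/H_t$ approximation, treating $\lambda$ as temperature independent. Note that combining Eq. (\ref{eq:epsilon}) with the last expression gives the compact cross-check $\beta/H_t \approx 280(1-\epsilon)/\epsilon$.
\begin{verbatim}
# T_c, T_t and beta/H_t from the approximations above, with lambda
# treated as temperature independent.
import math

B1, B2, B3 = 8.2938, -5.5330, 0.8180
K = B1 + 2.0*B2 + 4.0*B3                 # beta_1 + 2 beta_2 + 4 beta_3

def gw_inputs(lam, D, E, e, T0):
    Tc = (e*E - math.sqrt(D*lam*(e**2 + (D*lam - E**2)*T0**2))) \
         / (E**2 - D*lam)
    F  = 4.0*D*lam*(E*T0**2 - e*Tc)*Tc/(e - E*Tc)**3
    G  = 64.0*math.sqrt(2.0)*math.pi/(81.0*lam**1.5)
    eps = math.sqrt(2.0*G*K*(E*Tc - e)/(140.0*F**2*Tc))
    Tt  = Tc*(1.0 - eps)
    beta_over_H = 4.0*G*K*(E*Tc - e)*(1.0 - eps)/(F**2*Tc*eps**3)
    # equivalently beta/H = 280*(1 - eps)/eps, by construction
    return Tc, Tt, beta_over_H

# SM-like point (e = 0): eps << 1, so T_t ~ T_c and beta/H_t is huge,
# i.e. a very weak transition, as expected
print(gw_inputs(lam=0.3, D=0.17, E=9.6e-3, e=0.0, T0=238.6))
\end{verbatim}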
\subsection{Parameter Constraints}
We need to enforce that the potential of \eqref{eq:genpot} (where we will work at $T = 0$ here) correctly describes electroweak symmetry breaking. Note that the potential is typically not a tree level potential, even at $T = 0$ (in the SM there are still the one-loop effects), and using the SM parameters will give differing results (by a few percent) from tree level calculations. The first condition is that $\phi$ at the electroweak breaking minimum, \eqref{eq:genphic} at $T=0$ with the positive root, is the usual vev of the Higgs field:
\begin{equation}\label{eq:vev}
\langle\phi\rangle = \frac{-3e + \sqrt{9e^2 + 8D\lambda T_0^2}}{2\lambda} = v \approx 246\textrm{ GeV},
\end{equation}
where $\lambda$ is also at $T = 0$. This point must be a stable minimum, hence
\begin{equation}\label{eq:stab}
\frac{\partial^2V_{eff}(\phi,T=0)}{\partial\phi^2}\bigg|_{\phi = v} = \frac{1}{2\lambda}\left(9e^2 + 8D\lambda T_0^2 - 3e\sqrt{9e^2 + 8D\lambda T_0^2}\right) > 0.
\end{equation}
Finally, we will also restrict the parameters to have a Higgs mass above current limits:
\begin{equation}\label{eq:mh}
m_h^2 = \frac{\partial^2V_{eff}(\phi,T=0)}{\partial\phi^2}\bigg|_{\phi = v} > (114\textrm{ GeV})^2.
\end{equation}
The first constraint, \eqref{eq:vev}, can be solved to give an expression for $T_0$:
\begin{equation}
T_0^2 = \frac{v(3e + \lambda v)}{2D}
\end{equation}
which is rather similar to the SM form. Requiring that $T_0$ and $v$ are greater than $0$, along with the original constraints on the coefficients of the potential ($\lambda, D, (ET - e) > 0$, see Section \ref{sec:se3} and \cite{se3approx}), satisfies the second constraint, \eqref{eq:stab}, if $e > 0$ or if $e < 0$ and $\lambda \neq -\frac{3e}{2v}$. Finally, we can use the last equation for $m_h$, \eqref{eq:mh}, to solve for $\lambda$. In order to satisfy all the constraints we are left with the following solutions:
\begin{equation}
\lambda =
\begin{cases}
\frac{m_h^2-3ev}{2v^2},\quad\frac{1}{4v^2}(m_h^2-9ev-\sqrt{m_h^4 - 18evm_h^2 + 9e^2v^2}),\\ \quad\quad\text{if $\left(\frac{m_h^2 - 2v^2}{3v}, \frac{-1}{6}\left(3v + \sqrt{4m_h^2 + v^2}\right)\right) < e < 0$,}\\
\frac{m_h^2-3ev}{2v^2}, \qquad\text{if $0 \le e < \frac{m_h^2}{3v}$.}
\end{cases}
\end{equation}
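For concreteness, these relations are straightforward to evaluate numerically. The following Python sketch (ours, for illustration only; the function names are our own, and it implements the case split above while ignoring the additional lower-bound window quoted for $e < 0$) returns the allowed values of $\lambda$ together with $T_0^2$ from the vev constraint:
\begin{verbatim}
import numpy as np

v = 246.0  # Higgs vev in GeV

def lam_solutions(mh, e):
    # Solutions for lambda given m_h and e (GeV), per the case split above
    if 0.0 <= e < mh**2 / (3.0 * v):
        return [(mh**2 - 3.0 * e * v) / (2.0 * v**2)]
    if e < 0.0:
        disc = mh**4 - 18.0 * e * v * mh**2 + 9.0 * e**2 * v**2
        return [(mh**2 - 3.0 * e * v) / (2.0 * v**2),
                (mh**2 - 9.0 * e * v - np.sqrt(disc)) / (4.0 * v**2)]
    return []  # e >= mh^2/(3v): no allowed solution

def T0_squared(e, lam, D):
    # T_0^2 = v (3 e + lambda v) / (2 D), from the vev constraint
    return v * (3.0 * e + lam * v) / (2.0 * D)

print(lam_solutions(115.0, 0.0))  # [0.109...]: the SM lower bound, lambda ~ 0.11
\end{verbatim}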
For the SM case of $e = 0$ we have the usual relation $m_h^2 = 2\lambda v^2$. In order for the theory to be perturbative we need $\lambda < 1$, so for the SM we use
\begin{equation}
0.11 \le \lambda < 1,
\end{equation}
where the lower bound is from the Higgs mass limit. This roughly corresponds to
\begin{equation}
115\textrm{ GeV} \le m_h < 348\textrm{ GeV.}
\end{equation}
When $e \ne 0$, the values of $e$ and $m_h$ in the ranges given above set the value of $\lambda$. When $e < 0$ both solutions for $\lambda$ are used, which gives the greatest range of values for both $\lambda$ and $e$.
Other constraints follow from requiring $T_c$ to be real and positive, which can be deduced simply from \eqref{eq:gentc}. For instance, for $T_c$ to be real,
\begin{equation}
e^2 \ge (E^2 - D\lambda)T_0^2,
\end{equation}
while requiring $T_c > 0$ reduces to a simple constraint that depends on which parameters are varied together, as well as on the sign of the square root.
It is also important to note divergent features of the above expressions. First, in \eqref{eq:epsilon}, $\epsilon$ will be complex if $e > ET_c > 0$, since the other terms are all positive ($D$ and $E$ are positive in general, as they depend only on masses-squared). In this case $T_t$ and the GW parameters will also be complex, so we limit
\begin{equation}
e < ET_c.
\end{equation}
Since this arises in our approximation, it is possible that higher order terms will resolve this behavior. However, if $e \ge ET_c$, then the sign of the cubic term in the potential changes (becoming positive). There would no longer be any potential energy barrier, and the phase transition could not be first-order. Therefore, it is consistent for this analysis to limit $e$ by the inequality above.
In several of the equations in Section \ref{sec:approx} the term $E^2 - D\lambda$ appears. In the numerator of eq.~\eqref{eq:gentc}, it enters the reality constraint above. In the denominator, however, its vanishing produces a pole in the (exact) expression for $T_c$. This causes $\epsilon$ to diverge, and thus $T_t$ will pass through zero. This carries over to a pole in $\alpha$. Clearly, as $\epsilon$ approaches one, our approximation will break down. However, fine-tuning to be near this pole could provide models with a very strong phase transition (large $\alpha$). Below, we only consider values of $E^2$ up to $90\%$ of $D\lambda$.
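These restrictions can be collected in one place. A minimal sketch (ours; the function and argument names are our own, and $T_0$, $T_c$ are assumed to be supplied from the expressions above and from \eqref{eq:gentc}):
\begin{verbatim}
def constraints_ok(e, E, D, lam, T0, Tc):
    # Feasibility checks collected from the text (GeV units throughout)
    ok_real  = e**2 >= (E**2 - D * lam) * T0**2  # T_c is real
    ok_cubic = e < E * Tc                        # cubic term keeps its sign
    ok_pole  = E**2 <= 0.9 * D * lam             # stay away from the T_c pole
    return ok_real and ok_cubic and ok_pole
\end{verbatim}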
\section{The Parameter Space}\label{sec:param}
Now that we have general expressions for the GW parameters in terms of the coefficients of our potential, we can look at the parameter space. Throughout this analysis the parameters we are not varying are set to their SM values (with $\lambda_{SM} = 0.3$, and $\lambda_{SM}(T)\big|_{T=T_0}$ when $\lambda$ is not varied). Also, we set $g_t = 100$ throughout. This can vary greatly between models, but here we are looking at the parameter space in general, without assumptions on what theory is being used.
\begin{figure}
\begin{center}
\includegraphics[width=14cm,clip]{figs/plots_combo}
\caption{In each plot the labels on the x-axis denote the varied parameters, shown in blue and red (dashed), respectively. The green line is the (constant) SM value. The ranges for the varied parameters were chosen arbitrarily, to show the overall behavior. $e$ is in GeV, while the other parameters are dimensionless. Note the pole as $E$ is varied, as described. On the left-hand side, the blue line (varying $e$) closely follows the green one (SM value), for the range shown.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=14cm,clip]{figs/ld}
\caption{A plot of the $\alpha-\beta/H$ plane with $\lambda$ and $D$ varying. The red line is the SM with $\lambda$ in the allowed range. The darker blue region is an order of magnitude greater and less than the SM value for $D$. The lighter blue region is a further order of magnitude greater and less. BBO and LISA rough sensitivity regions are shown as the lighter and darker red shading, respectively.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=14cm,clip]{figs/lbe}
\caption{A plot of the $\alpha-\beta/H$ plane with $\lambda$ and $E$ varying. The red line is the SM with $\lambda$ in the allowed range. The darker blue region is an order of magnitude greater and less than the SM value for $E$. The lighter blue region is a further order of magnitude greater and less. BBO and LISA rough sensitivity regions are shown as the lighter and darker red shading, respectively.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm,clip]{figs/lse}
\caption{A plot of the $\alpha-\beta/H$ plane with $\lambda$ and $e$ varying. The dark blue region is for $e > 0$, while the light blue shading indicates the first solution for $\lambda$ when $e < 0$, and the green the second solution. BBO and LISA rough sensitivity regions are shown as the lighter and darker red shading, respectively. The tail of the green region, extending to very large $\alpha$, may have errors in the calculation of $\beta/H_t$ due to the approximate formula. However, the points remain in the LISA region.}
\end{center}
\end{figure}
In the plots of the $\alpha-\beta/H_t$ plane both $\lambda$ and one other parameter are varied. $\lambda$ ranges as described above. The other varied parameter ranges from an order of magnitude less than its SM value to an order of magnitude larger for the darker blue region, and a further order of magnitude larger and smaller for the lighter blue region, taking into account the above constraints. Note that for negative $e$, we also need to make sure that $\epsilon$ is real and less than one, as this is the only parameter (in the ranges we considered) that could cause these problems for $\epsilon$. The red line represents the SM with only $\lambda$ varied. $e$ is less than $ET_c$ or $m_h^2/3v$, whichever is smaller. LISA and BBO sensitivity regions were computed using rough estimates of the sensitivity curves, as in \cite{gwbubble,Buonanno:2004tp,Grojean:2006bp,Caprini:2009yp}, and finding the region in the $\alpha$-$\beta/H$ plane where the computed GW spectrum is above the sensitivity. The LISA spectrum used is the approximate instrument strain sensitivity, while the BBO spectrum is the correlated extension (BBO Corr), which uses data analysis of two LISA-like detectors to improve sensitivity. LIGO is not sensitive to GWs in this region. The sensitivity regions shown flatten (do not reach into higher $\beta$) for large $\alpha$, as the LISA and BBO sensitivity drops considerably outside of their most sensitive frequency band. Thus, even with increasing $\alpha$, larger $\beta$ means a greater peak frequency, pushing it out of the experimental range.
We note that very small $v_b$, due to viscosity of the plasma, could potentially affect detection with LISA or BBO, due to a change in the peak energy density and the peak frequency of the GWs. A small $v_b = 0.05$, as in \cite{vb}, does not change the overall overlap of the experimental sensitivities with the parameter space in the figures (using this $v_b$ in calculating the experimental sensitivity). However, this could be a large effect for borderline cases. Without specifying a particular theory, it is difficult to calculate $v_b$ precisely. Furthermore, our analysis considers only detonations, where $v_b$ is at least ten times greater. Small $v_b$ requires considering the case of deflagrations in detail, and is outside the scope of this work (for recent work, see \cite{Megevand:2008mg,Caprini:2007xq}). This is another direction to be explored not only in the context of GW detection, but also for baryogenesis, as $v_b$ affects the length of the phase transition.
Although varying $D$ or $E$ does not greatly enhance $\alpha$, their effect is noticeable. Both also tend to have large values of $\beta$, but cover a very large range. However, $D$, at least for this range of variation, is not close to the sensitivity regions of LISA or BBO. $E$ has regions which are much closer to LISA or BBO, but we do not predict any overlap. $e < 0$, on the other hand, can greatly increase $\alpha$, as well as have a much smaller $\beta$. Here there is the greatest parameter space which may be observable. There is considerable overlap between the various potential parameters and the corresponding values of the GW parameters, and thus a particular GW spectrum does not generally point to a value of just one of these potential parameters.
\section{Models}\label{sec:models}
In this section we investigate a few models which naturally fit into the effective potential analyzed above. Of primary interest are models with gauge singlets, which we argue provide a very good motivation for the parameter $e$. Since this parameter was shown to have the largest effect in producing a strongly first-order phase transition, these models could predict an observable GW signal in the near future. There are many models with singlets, and in general it is not possible to derive an analytic expression for the effective potential. We will look at the simplest such model, the SM with the addition of one (real) gauge singlet field.
\subsection{SM + Gauge Singlet}\label{sec:singlet}
While the motivation for the overall form of the effective potential we consider is due to the SM, the additional term, $e$, is motivated by gauge singlets. Gauge singlets are present in many models, such as the Next-to-MSSM (NMSSM). Singlets in the context of the electroweak phase transition and baryogenesis have been studied for some time (e.g.~\cite{singlet1,singlet2,singlet3}, and more recently, e.g.~\cite{ahriche,profumosinglet,darkside}), including the effect of trilinear couplings driving a strongly first-order phase transition without even considering the thermal term (see, for instance, \cite{cubic1,cubic2}). Recently, \cite{darkside} has also explored the connection of singlets with possible explanations for recent astrophysical signals. In this section we will give a brief decoupling (in the sense of taking the singlet to be much heavier than the Higgs) analysis, motivating the inclusion of $e$ in the effective potential, and showing that these classes of models naturally fit into our framework above.
The procedure we will follow is to use the tree-level, zero-temperature mass eigenstates as a basis, taking the mostly singlet state to be much heavier. In addition, we will want the mixing of the states to be very small. The goal is to show that there is a tree-level, cubic, self-coupling term of the lighter (mostly Higgs) state. We will consider two different limiting cases (in terms of an expansion parameter, defined below) and show that we can always have a very small mixing angle, a very heavy singlet state, and a light Higgs state with sufficiently large cubic coupling. If we consider SM values for all other parameters of the effective potential, then we need $e \lesssim -20$ GeV (see previous section) for a strongly first-order phase transition.
Following the notation of \cite{profumosinglet}, the tree level scalar potential of the SM Higgs sector with an additional gauge singlet field is the sum of
\begin{align}
V_{SM} &= -\mu^2\left(H^\dagger H\right) + \lambda\left(H^\dagger H\right)^2\\
V_{HS} &= \frac{a_1}{2}\left(H^\dagger H\right)S + \frac{a_2}{2}\left(H^\dagger H\right)S^2\\
V_S &= \frac{b_2}{2}S^2 + \frac{b_3}{3}S^3 + \frac{b_4}{4}S^4
\end{align}
with $H$ the usual SM $SU(2)_L$ scalar doublet and $S$ the gauge singlet (real) scalar field. The vevs are defined to be $v_0/\sqrt{2}$ for $H$ and $x_0$ for $S$. The fields for the fluctuations about the vevs are $h$ and $s$, defined as $H = (v_0 + h)/\sqrt{2}$ and $S = x_0 + s$. For simplicity, we now take $H$ to be real. The minimization conditions (for both fields, with $x_0 \neq 0$) can be used to eliminate the mass parameters:
\begin{align}
\mu^2 &= \lambda v_0^2 + (a_1 + a_2x_0)\frac{x_0}{2}\\
b_2 &= -b_3x_0 - b_4x_0^2 - \frac{a_1v_0^2}{4x_0} - \frac{a_2v_0^2}{2}.
\end{align}
For $x_0 = 0$, the conditions instead enforce $a_1 = 0$ and $\mu^2 = \lambda v_0^2$, as in the SM.
The mass matrix has the following elements:
\begin{align}
\mu_h^2 \equiv \frac{\partial^2 V}{\partial h^2} &= 2\lambda v_0^2\\
\mu_s^2 \equiv \frac{\partial^2 V}{\partial s^2} &= b_3x_0 + 2b_4x_0^2 - \frac{a_1v_0^2}{4x_0}\\
\mu_{hs}^2 \equiv \frac{\partial^2 V}{\partial h\partial s} &= (a_1 + 2a_2x_0)v_0.
\end{align}
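As a consistency check, the elimination of $\mu^2$ and $b_2$ and the two diagonal mass-matrix entries can be verified symbolically. A minimal sympy sketch (ours), using the potential exactly as written above with $H$ taken real:
\begin{verbatim}
import sympy as sp

h, s, v0, x0 = sp.symbols('h s v0 x0', real=True)
mu2, lam, a1, a2, b2, b3, b4 = sp.symbols('mu2 lam a1 a2 b2 b3 b4', real=True)

HH = (v0 + h)**2 / 2   # H^dagger H with H = (v0 + h)/sqrt(2) taken real
S = x0 + s
V = (-mu2 * HH + lam * HH**2
     + a1 / 2 * HH * S + a2 / 2 * HH * S**2
     + b2 / 2 * S**2 + b3 / 3 * S**3 + b4 / 4 * S**4)

at_min = {h: 0, s: 0}
sol = sp.solve([sp.diff(V, h).subs(at_min),
                sp.diff(V, s).subs(at_min)], [mu2, b2], dict=True)[0]
assert sp.simplify(sol[mu2] - (lam * v0**2 + (a1 + a2 * x0) * x0 / 2)) == 0
assert sp.simplify(sol[b2] - (-b3 * x0 - b4 * x0**2
                              - a1 * v0**2 / (4 * x0) - a2 * v0**2 / 2)) == 0

muh2 = sp.diff(V, h, 2).subs(at_min).subs(sol)
mus2 = sp.diff(V, s, 2).subs(at_min).subs(sol)
assert sp.simplify(muh2 - 2 * lam * v0**2) == 0
assert sp.simplify(mus2 - (b3 * x0 + 2 * b4 * x0**2
                           - a1 * v0**2 / (4 * x0))) == 0
\end{verbatim}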
The mass eigenstates are defined as
\begin{align}
h_1 &= (\sin\theta) s + (\cos\theta) h\\
h_2 &= (\cos\theta) s - (\sin\theta) h
\end{align}
with the mixing angle $\theta$ as
\begin{equation}
\tan\theta = \frac{y}{1 + \sqrt{1 + y^2}}, \qquad \textrm{where } y \equiv \frac{\mu_{hs}^2}{\mu_h^2 - \mu_s^2}.
\end{equation}
Here, $|\cos\theta| > 1/\sqrt{2}$, and so $h_1$ is the state with the largest $SU(2)$-like component (and $h_2$ has the largest singlet component). The terms singlet state, $h_2$, and heavier state will be used interchangeably (and likewise for the Higgs state). Inverting the above states, we have the original fields, expanded about the minimum, in terms of these mass eigenstates:
\begin{align}\label{eq:states}
H &= v_0 + (\cos\theta) h_1 - (\sin\theta) h_2\\
S &= x_0+ (\sin\theta) h_1 + (\cos\theta) h_2.
\end{align}
The mass eigenvalues are
\begin{equation}
m_{1,2}^2 = \frac{\mu_h^2 + \mu_s^2}{2} \pm \frac{\mu_h^2 - \mu_s^2}{2}\sqrt{1 + y^2},
\end{equation}
with the upper (lower) sign for $m_1$ ($m_2$).
We consider the decoupling limit where the singlet is very heavy (i.e.~the state $h_2$) and study the cubic term of the effective $h_1$ potential. The lighter state, $h_1$, should roughly be in the mass range allowed for the SM Higgs. Although we allow mixing between the states (so the singlet will not be completely physically decoupled), we want to keep the mixing angle small; interactions between $h_1$ and $h_2$ will be highly suppressed after integrating $h_2$ out, by both the small mixing angle and large mass scale of $h_2$. However, since higher dimensional operators are generated, this could contribute to enhancing $E$ (and thus $\alpha$), as discussed in \cite{dim6}. We will also consider $x_0$ an essentially free parameter, determined by physics at the higher, singlet scale. We will consider $x_0$ both much larger or smaller than $v_0$, and use the ratio as an expansion parameter.
Since we consider $x_0$ set by other dynamics, $S$ will be expanded as in eq.~\eqref{eq:states}, while we will drop the $v_0$ in expanding $H$\footnote{If we include $v_0$, the only change is the addition of the expected cubic coupling of the shifted Higgs field in the SM.}. The reason for this is that we want to consider these results as leading terms in the finite-temperature effective potential; the proper degree of freedom before the phase transition is the unshifted field (there is no vev yet). Clearly, the mass eigenstates above are for the tree-level, zero-temperature, shifted fields. However, we will consider this as simply a change of basis; the coefficients of the expansion are just the proper values for the mass eigenstate basis at tree level (and zero temperature).
Considering the limit of $x_0 \gg v_0$, we expand to lowest order in the small parameter $u \equiv v_0/x_0$. We have that $\cos\theta \approx 1$ and $\sin\theta \approx -\frac{a_2}{2b_4}u$. Also, to lowest order in $u$,
\begin{align}
m_1^2 &\approx \frac{(4b_4\lambda - a_2^2)v_0^2}{2b_4}\\
m_2^2 &\approx \frac{2b_4v_0^2}{u^2}.
\end{align}
Note that as we take $u$ very small, the heavy mass becomes arbitrarily large. Writing the fields in the potential in terms of the mass eigenstates and working to lowest order in $u$, the coefficient of $h_1^3$ is given by
\begin{equation}
-\frac{a_2^2v_0}{2b_4} + \frac{a_2}{2b_4}\left(\frac{a_2b_3}{2b_4} - a_1\right)u.
\end{equation}
Some fine-tuning will be needed to have appropriate masses and cubic coupling. For example, the values $a_2 = 0.35$, $b_4 = 0.3$, $\lambda = 0.4$ give a light mass of about $190$ GeV and a cubic coupling of about $-50$ GeV (in the limit of $u \rightarrow 0$). The massive parameter $a_1$ could also be tuned as $1/u$, for instance. In this case the series expansions also change and there are large regions of parameter space where all the constraints can be met.
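A quick numerical evaluation of the quoted example (our sketch; parameter values as in the text):
\begin{verbatim}
import numpy as np

v0 = 246.0
a2, b4, lam = 0.35, 0.3, 0.4                # example values from the text

m1 = np.sqrt((4 * b4 * lam - a2**2) / (2 * b4)) * v0
cubic = -a2**2 * v0 / (2 * b4)              # u -> 0 limit of the h1^3 coefficient
print(m1, cubic)                            # ~189.9 GeV and ~-50.2 GeV
\end{verbatim}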
In the small $x_0$ limit, to lowest order in $w \equiv x_0/v_0$,
\begin{align}
\theta &\approx 2w\\
m_1^2 &\approx 2\lambda v_0^2 + a_1v_0w\\
m_2^2 &\approx -\frac{a_1v_0}{4w}.
\end{align}
Again, the heavy state mass is arbitrarily large as we take $w$ very small. The cubic coefficient of $h_1$, in this limit, is simply $a_1w$. In this case, we can have $m_2$ large and a sufficient cubic coupling by having $a_1$ negative and (possibly unnaturally) large. However, the second term in the expression for $m_1^2$ can constrain $a_1$ (or require tuning with $\lambda$). For instance, the light state has mass about $177$ GeV when $\lambda = 0.3$ and $a_1 = -20/w$ (the cubic coupling is $-20$ GeV), with $w$ arbitrary. Note that these expansions (to lowest order) do not change for $a_1 = a/w$.
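The analogous check for the small-$x_0$ example (our sketch; here $a_1 w = -20$ GeV as in the text, with $w$ arbitrary):
\begin{verbatim}
lam, a1w, v0 = 0.3, -20.0, 246.0            # a1 = -20/w, so a1*w = -20 GeV
m1 = (2 * lam * v0**2 + a1w * v0) ** 0.5    # ~177.2 GeV light state
m2 = lambda w: (-a1w * v0 / (4 * w**2)) ** 0.5  # heavy state grows as 1/w
print(m1, m2(0.01))
\end{verbatim}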
In both limiting cases then, a cubic coupling in the effective potential of the light state appears. This coupling can be of the right size to drive a strongly first-order phase transition, while still having a small mixing angle between the states, and appropriate heavy/light masses for the fields. It is also possible (for instance, \cite{profumosinglet}) to approximate the two-field effective potential in the same form as we analyzed (with the field being the Higgs, not the singlet). Viewed this way, the coefficients are more general functions of the singlet field. In general, however, this potential still must be numerically analyzed, due to the dynamics of the Higgs-singlet interactions. The decoupling analysis shown above is simply one limit of a general analysis. It is meant to motivate $e$ and show that, even in this extreme limit, a singlet can have a large effect on the phase transition\footnote{The singlet may also have other effects on the overall potential, including from its own finite temperature potential. A full analysis, of course, must account for effects besides generating $e$.}.
\subsection{SM + $SU(2)_L$ Triplet}
An example of a model where the quartic coupling in the tree-level effective potential is suppressed for a fixed value of the SM-like Higgs is given by models with an additional $SU(2)_L$ triplet in the scalar sector. First considered in \cite{tripletfirst}, this extension to the SM Higgs sector has several significant phenomenological implications, including a dark matter candidate \cite{Cirelli:2005uq, Cirelli:2007xd} and providing a natural framework for a Type II seesaw mechanism for generating non-vanishing neutrino masses \cite{triplet}. We postpone an exhaustive study of this scenario to a future paper \cite{profumopavel}, but we point out here that this is a scenario where the results of the present analysis apply.
Indicating with $\delta$ the neutral component of the $SU(2)_L$ triplet $\Delta$, the tree level neutral CP-even part of the Higgs potential, neglecting interaction terms with two or four $\Delta$'s, reads:
\begin{equation}
V(H,\delta) = -m_H^2H^2 + \frac{\lambda_{SM}}{4}H^4 + M_\Delta^2\delta^2 - 2\mu\delta H^2,
\end{equation}
where $M_\Delta$ is the mass term associated with $\Delta$ and the last term stems from the following term in the scalar potential:
\begin{equation}
\mu H^T\ i\sigma_2\Delta^\dagger H+{\rm h.c.}
\end{equation}
(see e.g.~\cite{triplet}). Imposing the minimization condition on the tree level potential and singling out the field-space trajectory along which the minimum of the (tree-level) potential is found allows us to express $\delta$ as a function of $H$:
\begin{equation}
\frac{\partial V}{\partial \delta}=0\quad \rightarrow \quad \delta=\frac{\mu}{M_\Delta^2}H^2.
\end{equation}
Along the locus of minima in the $\delta$ direction, the effective potential is now a function of a single field, $H$:
\begin{equation}
V_{min}(H) = -m_H^2H^2 + \left(\frac{\lambda_{SM}}{4} - \frac{\mu^2}{M_\Delta^2}\right)H^4.
\end{equation}
This is essentially the same as in the SM case, but with the parameter change
\begin{equation}
\frac{\lambda_{SM}}{4} \longrightarrow \frac{\lambda_{SM}}{4} - \frac{\mu^2}{M_\Delta^2} \equiv \lambda_T.
\end{equation}
This model can therefore be cast into the form of the effective potential of the SM we consider in this study, with the above change to the effective quartic coupling $\lambda$. The extra triplet scalar therefore automatically enhances the strength of the phase transition, since, for a given SM Higgs mass and at a given tri-scalar coupling $E$, a smaller $\lambda$ corresponds to larger values of $\alpha$ (which we can think of as the ``strength'' of the phase transition).
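This reduction is simple enough to verify symbolically; a minimal sympy sketch (ours):
\begin{verbatim}
import sympy as sp

H, delta = sp.symbols('H delta', real=True)
mH, lamSM, MD, mu = sp.symbols('m_H lambda_SM M_Delta mu', positive=True)

V = -mH**2 * H**2 + lamSM / 4 * H**4 + MD**2 * delta**2 - 2 * mu * delta * H**2
dmin = sp.solve(sp.diff(V, delta), delta)[0]
print(dmin)                            # mu*H**2/M_Delta**2
print(sp.expand(V.subs(delta, dmin)))  # -m_H^2 H^2 + (lamSM/4 - mu^2/MD^2) H^4
\end{verbatim}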
\subsection{Other Models}
Here we will only briefly mention a few other models which could be analyzed in our framework. First is the analysis of \cite{higherdim}. The authors study baryogenesis using the MSSM to generate a low energy effective theory, and in particular also study the strength of the electroweak phase transition. Given certain constraints on the parameters, the potential minimization reduces to the one-dimensional case, and the phase transition strength (i.e.~the value of $\alpha$) can be greatly enhanced. It is enhanced through an increase of the parameter $E$, which can be about an order of magnitude larger than in the SM. Given SM values for the other parameters in the effective potential, this alone, for a light Higgs, drives $\alpha$ to be larger than one. The new expressions for $E$ can also be used to investigate more closely the parameter space that produces large $\alpha$.
In a similar vein, \cite{dim6} considers dimension-six Higgs operators, arising, for instance, from integrating out a heavy singlet. This can also produce a first-order phase transition. Another effect of this operator is to alter the Higgs self-couplings from the SM. The cubic and quartic self-couplings are altered, and this in turn would alter the couplings appearing in the effective potential. Again, this is easily incorporated in our analysis.
The ``topflavor'' model also has a phase transition which fits into our framework. In this model there are separate $SU(2)$ gauge groups for the third generation and for the other generations. In \cite{topflavor}, the earlier phase transition, from $SU(2)_1 \times SU(2)_2$ to the SM $SU(2)_L \times U(1)_Y$, was analyzed in the context of baryogenesis. This phase transition has a scalar field as the order parameter, which has a quartic (tree-level) potential. The one-loop, finite-temperature, effective potential can be derived, and it has the same functional structure as the SM. The high temperature expansion will then be of the same form as the general effective potential we analyzed. By using the constants of the topflavor model, which has a strongly first-order phase transition, the GW parameters can be found through the above results. Note that in this case some of the constraints on the parameters are not applicable, since they refer to specific electroweak constraints. Baryogenesis in this model requires that the electroweak phase transition is \emph{not} strongly first order, and therefore any GW produced comes from this earlier phase transition.
Finally, we note that the form of the potential we used is really quite general. The potential mimics the SM form, which we might reasonably expect to be a good approximation to any low energy, effective theory. Also, besides $e$, the form of the potential and the constants are all derived from the high-temperature expansion of the general one-loop, finite-temperature, effective potential of a theory with gauge bosons and fermions coupled to a scalar field. As we have shown above, the addition of gauge singlets motivates the inclusion of the additional parameter. Thus we expect this potential to arise very generically for any field theory model. The obvious exception is for multiple fields controlling the phase transition (so that all but one cannot simply be integrated out). In that case, it may still be possible to use this potential, by approximating field configurations or by taking certain limits.
\section{Conclusions}
Upcoming experiments may soon observe the first GW signals, including those from early universe processes. Such experiments will deliver the very exciting prospect of observing GWs from the electroweak phase transition, if it is strongly first-order. While there have been many studies on this topic, they largely involve numerical computations of the phase transition temperature and GW parameters. In this work we have presented an analysis of a generic effective potential which is motivated by the SM Higgs potential and an additional contribution from a gauge singlet (or other possible origins) in the form of a temperature-independent cubic coupling.
By approximating the tunneling temperature to be very close to the critical temperature (based on the general form of the action), we have derived an expression for the tunneling temperature based on the parameters of the effective potential. Using this result, the GW parameters also have expressions depending on the parameters of the potential.
Once these expressions were found, the parameter space of the potential was explored starting from the SM values. While all the parameters can have noticeable effects, $e$, motivated from singlet models, easily has the greatest effect. Following this, we showed how this parameter arises from a decoupling analysis of a simple SM plus singlet model.
Other models can also be analyzed with our effective potential. The addition of an $SU(2)$ triplet affects $\lambda$, while higher dimensional operators can also affect $E$. Increasing $E$ by an order of magnitude from its SM value could also produce a strongly first-order phase transition. The topflavor model also has an effective potential of the same form as the SM for its earlier phase transition, which can also be analyzed through this work. In this case, the GW signal would not be from the (later) electroweak phase transition.
Finally, although the effective potential studied has clear origins in the SM and singlet extensions, the potential is indeed very general. A low energy, effective theory attempting to model the electroweak phase transition will likely closely model this form of the potential. Additionally, the potential is a high-temperature approximation to a one-loop, finite temperature, effective potential of a scalar field with gauge bosons, fermions, and singlets. So in this sense its form is also generic and expected to be common in a phase transition with a scalar field order parameter.
The potential applications of the results presented include applying it to other models for the electroweak (or other) phase transition. By approximating or otherwise finding a way to put an effective potential in the form we analyzed, the tunneling temperature and GW spectrum is now easily obtained through the above formulae.
Besides being significant itself, the detection of a stochastic background of GWs may provide insight into the specifics of models of the electroweak phase transition, and the fundamental mechanism underlying the generation of the baryon asymmetry of the universe.
\begin{acknowledgments}
We gratefully acknowledge Lam Hui for useful discussions and feedback on
this manuscript, and especially for first bringing the idea elaborated on
in this paper to our attention. We also acknowledge the many helpful comments and corrections (especially regarding turbulence) from the anonymous JCAP referee. S.P.~is partly supported by an Outstanding Junior Investigator Award from the US Department of Energy (DoE), Office of Science, High Energy Physics, and by DoE Contract DEFG02-04ER41268 and NSF Grant PHY-0757911. J.K.~was partially supported by a Doctoral Student Sabbatical Fellowship and an Orals Fellowship from UCSC.
\end{acknowledgments}
\renewcommand{\bibsection}
{\section*{References}}
\bibliographystyle{JHEP}
\section{Introduction}\setcounter{equation}{0}\quad
Vortex rings in fluids are well-studied using a variety of
analytic, numerical and experimental techniques (for a review
see \cite{SL}).
The interaction of multiple vortex rings produces complex dynamical
behaviour, such as the leapfrogging locomotion of a pair of
vortex rings.
The number of leapfrogs increases with
Reynolds number \cite{RS} and perpetual leapfrogging
can only occur in inviscid flow.
Numerical simulations of the Euler equations can reproduce
several leapfrogs, providing sophisticated methods are used
to deal with numerical diffusion \cite{SU}.
Studies using a thin-cored approximation accurately describe
leapfrogging and reveal that the dynamics of
three or more coaxial vortex rings has
regimes of chaotic behaviour in which the
evolution is very sensitive to the initial condition \cite{Ko}.
The interaction of two vortex rings in a superfluid shares many
common features with those in a normal fluid, as demonstrated by
numerical simulations of the Gross-Pitaevskii equation \cite{KL},
so some phenomena may be universal for all types of vortex rings.
The Landau-Lifshitz equation describes the dynamics
of the local magnetization in a ferromagnetic medium.
This equation has vortex ring solutions
that are magnetic analogues of vortex rings in fluids.
They are toroidal regions in which the
magnetization is not in the ground state,
and they propagate along their symmetry axis with a
constant speed \cite{Pa,Co,Su}.
The existence of vortex ring solutions of the Landau-Lifshitz equation
is a fairly recent result, and so far studies have been limited.
In fact, all previous work has been restricted to the case of a single
magnetic vortex ring in uniform motion, so the full extent to which
these are indeed analogues of vortex rings in fluids remains to be seen.
Investigating this issue is the main aim of the current paper.
Here we present the first results on the dynamics and interaction of multiple
magnetic vortex rings, obtained from numerical simulations of the
Landau-Lifshitz equation. In particular, we demonstrate the
leapfrogging motion of a pair of magnetic vortex rings and
evidence for the chaotic dynamics of a trio of rings.
The Landau-Lifshitz equation is more amenable to a standard
numerical treatment than the Euler or Navier-Stokes equations.
As a result, we are able to apply a simple finite difference
scheme with a Runge-Kutta algorithm to
compute the evolution of several coaxial rings.
The direct simulation of the nonlinear partial differential equation
removes the need to resort to any form of thin-cored approximation.
Within the thin-cored approximation of fluid vortex rings,
Shashikanth and Marsden \cite{SM} have shown that
the periodic leapfrogging motion of a pair of
rings generates a geometric phase.
These authors state that they view this work as a preliminary step to more
sophisticated modelling of ring interactions using ideas from geometric
mechanics. Our results show that the Landau-Lifshitz equation is an ideal
system in which to apply these ideas, beyond the
thin-cored approximation, to a tractable partial differential equation.
Our demonstration of periodic leapfrogging of vortex rings indicates
that this system could be useful for any future investigations in this
direction.
Our numerical simulations of the Landau-Lifshitz equation suggest
a novel link between fluids and magnetism, with many familiar
phenomena for fluid vortex rings being reproduced in the nanoscale
world of ferromagnetic media. The interesting properties of
magnetic vortex rings that we have demonstrated provide some motivation
for attempts at experimental creation and observation of these
structures. If this can be achieved then there are potential applications
within the field of spintronics, and we shall speculate on this possibility
later in this paper.
The outline of the paper is as follows. In section \ref{sec-LL}
we review the Landau-Lifshitz equation and describe its vortex ring
solution.
In section \ref{sec-leapfrog} we present our main results on the
dynamics of multiple vortex rings.
In section \ref{sec-LLG} we investigate the effect of including dissipation,
by solving the Landau-Lifshitz-Gilbert equation, and compare with
known results on the leapfrogging of fluid vortex rings for
varying Reynolds number. Finally, we present some conclusions
in section \ref{sec-con}.
\section{A vortex ring in the Landau-Lifshitz equation}\setcounter{equation}{0}\quad\label{sec-LL}
The system of interest is a three-dimensional ferromagnet with
isotropic exchange interactions and an easy axis anisotropy.
In the continuum approximation, the ferromagnet is
described by its spin ${\bf n}({\bf x},t),$
which is a three-component unit vector
${\bf n}=(n_1,n_2,n_3),$
specifying the local orientation of the magnetization.
In the absence of dissipation, the dynamics of the ferromagnet is
governed by the Landau-Lifshitz equation
\begin{equation}
\frac{\partial {\bf n}}{\partial t}={\bf n} \times
\{\nabla^2{\bf n} + A({\bf n}\cdot{\bf k}){\bf k}\}.
\label{ll}
\end{equation}
Here ${\bf k}=(0,0,1)$ is the easy axis and we work in the unbounded
domain $\mathbb{R}^3,$ with the ground state aligned along the easy axis,
so that the boundary conditions are ${\bf n}\rightarrow {\bf k}$
as $|{\bf x}|\rightarrow \infty.$
For simplicity of presentation, we have used suitably scaled dimensionless
units. The corresponding physical units depend on the material properties,
such as the effective spin exchange interaction.
For typical material parameters the dimensionless unit of length equates to
a physical size of around 10 nanometres and the dimensionless unit of time to
around 10 picoseconds (see \cite{TKS} for details).
However, it is important to note that, as
discussed below, the radius of a magnetic vortex ring is a
parameter that can vary over a large range and is determined by the initial
conditions. In particular, the size of a vortex ring is not fixed by any
constants in the Landau-Lifshitz equation, which are related to the
material parameters.
This is, of course, entirely expected and mirrors the situation for
fluid vortex rings. As we shall see, both the size and speed of a ring
can be adjusted by several orders of magnitude,
which should make their physical existence
realistic over a wide range of material conditions. The examples we present in
detail are chosen so that characteristic scales and speeds are of the same
order of magnitude as for current-driven domain wall motion realized in
magnetic wires \cite{Ya}, where typical speeds are of the order of 10\,m/s.
There is a scaling transformation \cite{Co} that relates magnetic vortex
rings with
differing values of $A$, so for computational concreteness we set
$A=1$ from now on.
Numerical studies in \cite{Co} are restricted to the isotropic case of $A=0$,
but the investigations in \cite{Su} reveal that the results for axial
vortex rings are
similar for $A=0$ and $A=1$. Thus the value of $A$
is not crucial for the phenomena
described in this paper and we expect little qualitative difference between
hard materials and permalloy.
A magnetic vortex ring is an axially symmetric configuration
that propagates at constant speed along its symmetry axis.
Far from the vortex ring the spin is in the ground state ${\bf n}={\bf k},$
whereas in the core of the ring the spin points in the antipodal direction
${\bf n}=-{\bf k}.$ A useful way to visualize a magnetic vortex ring is
to plot the isosurface $n_3=0,$
delineating the boundary between the core where $n_3=-1$ and the vacuum
where $n_3=1$.
Such an isosurface indicates both the position of
the core of the ring and its thickness.
The existence of a magnetic vortex ring
as a stationary structure was first suggested in \cite{DI},
but it took a careful consideration of the
conserved quantities of the Landau-Lifshitz equation to
identify that such structures could not be stationary,
but instead must propagate at
constant speed along their symmetry axis \cite{Pa}.
Subsequently, an analysis \cite{Co} based on an axially symmetric
travelling wave
ansatz revealed that a magnetic vortex ring is characterized by two
real parameters.
Roughly speaking, these parameters determine the
radius and the thickness of the vortex ring, and
the ring exists providing its radius is
above a critical value determined by its thickness.
In this paper we shall be concerned with vortex rings where the
radius of the ring is large in comparison to its thickness.
As described in \cite{Su},
in this
situation the cross-section of a vortex ring at any given time
is well-approximated
by a stationary solution (called a Skyrmion)
of the two-dimensional version of the
Landau-Lifshitz equation (\ref{ll}).
It is therefore useful to
briefly review some results on these planar Skyrmions.
For more details see \cite{PZ}.
It is perhaps worth pointing out that, despite the name
magnetic vortex ring, its cross-section is a planar
Skyrmion and not a magnetic spin vortex \cite{Hu}.
Magnetic spin vortices appear in two-dimensional systems with an easy plane
anisotropy, rather than the easy axis anisotropy considered here.
To obtain the planar Landau-Lifshitz equation from (\ref{ll}) we simply take
the spin ${\bf n}$ to be independent of the third Cartesian coordinate $z.$
Introducing polar coordinates $(r,\phi)$ in the $(x,y)$ plane, a stationary
planar Skyrmion located at the origin has the form
\begin{equation}
{\bf n}=(\sin f\cos(q\phi+\omega t),\sin f \sin(q\phi+\omega t), \cos f),
\label{ansatz}
\end{equation}
where $\omega$ is the constant frequency of precession, $q$ is a non-zero
integer and
$f(r)$ is a radial profile function satisfying the boundary conditions
$f(0)=\pi$ and $f(r)\rightarrow 0$ as $r\rightarrow\infty.$
The Skyrmion is an example of a topological soliton \cite{MS}
and the integer $q$ counts the
number of times that the unit vector ${\bf n}$ covers the two-sphere
as $(x,y)$ varies over the entire plane.
It is this kind of topological arrangement of the spin that is called a
Skyrmion, or
sometimes a baby Skyrmion to emphasize its planar character.
The integer $q$ is known as the Skyrmion number, with $q<0$ called
an anti-Skyrmion. Similar Skyrmions have recently been observed experimentally
in chiral ferromagnets \cite{Yu}, whose mathematical description
involves an additional term in the Landau-Lifshitz equation
that derives from a contribution to the energy
(called the Dzyaloshinskii-Moriya interaction)
that is first order in the derivative of the spin.
This extra term allows
static Skyrmion solutions, in contrast to the
dynamical Skyrmions considered here that are stationary
but not static. However, the spatial structure
of the spin has the same form for Skyrmions in both systems.
Substituting the ansatz (\ref{ansatz}) into the Landau-Lifshitz equation
(\ref{ll}) yields a stationary solution providing the profile function
satisfies the ordinary differential equation
\begin{equation}
f''+\frac{f'}{r}-\bigg(1+\frac{q^2}{r^2}\bigg)\sin f\cos f+\omega \sin f=0.
\label{ode}
\end{equation}
\begin{figure}[h]\begin{center}
\includegraphics[width=8cm]{profiles.pdf}
\caption{
A plot of $n_3=\cos f$ against $r$ for planar Skyrmions with
frequencies $\omega=0.4$ (black curve);
$\omega=0.5$ (red curve);
$\omega=0.6$ (blue curve).
}
\label{fig-profiles}\end{center}\end{figure}
For a solution of this equation satisfying the given boundary
conditions to exist, the frequency must be restricted to the
range $0<\omega<1.$ More generally, the restriction is to
the range $0<\omega<A.$
Here we shall restrict attention to the simplest case of the single
Skyrmion and set $q=1$ from now on.
In terms of vortex rings, a
Skyrmion in the ferromagnetic system will play the role of
a vortex in the analogous fluid setting.
The frequency $\omega$ is a parameter of the Skyrmion, with
the size of the Skyrmion decreasing as the frequency increases.
As an illustration of this phenomenon, in Figure~\ref{fig-profiles}
we plot $n_3=\cos f$ as a function of $r$ for three increasing values
of the frequency $\omega=0.4,0.5,0.6.$ The position of the Skyrmion
is the point in space where ${\bf n}=-{\bf k},$ that is, the spin is
antipodal to the ground state spin. In the ansatz (\ref{ansatz}) this
position has been chosen to be the origin, but obviously the translation
invariance of the Landau-Lifshitz equation allows the Skyrmion to be
positioned at any point in the plane. As $n_3$ varies from the value $-1$
at the centre of the Skyrmion to the value $1$ at spatial infinity then
a natural definition for the size of the Skyrmion is the distance
from the centre at which $n_3$ vanishes. It is this definition that is
used in the above statement regarding the size of the Skyrmion decreasing
as $\omega$ increases.
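The profile function is easily obtained numerically by shooting on the initial slope. The following sketch (ours) is one way to do this; the integration range and the initial bracket for $f'(0)$ are assumptions that work for $\omega=0.5$, not values taken from the text:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

q, omega = 1, 0.5

def rhs(r, y):
    f, fp = y
    return [fp, -fp / r + (1.0 + q**2 / r**2) * np.sin(f) * np.cos(f)
            - omega * np.sin(f)]

def hit_zero(r, y):            # terminate if f overshoots below zero
    return y[0]
hit_zero.terminal, hit_zero.direction = True, -1

def overshoots(b, r0=1e-4, rmax=25.0):
    # seed with the regular q = 1 behaviour f ~ pi + b r near the origin
    sol = solve_ivp(rhs, (r0, rmax), [np.pi + b * r0, b], events=hit_zero,
                    rtol=1e-10, atol=1e-12)
    return sol.status == 1     # ended on the f = 0 event

lo, hi = -3.0, -0.2            # assumed bracket: lo overshoots, hi does not
for _ in range(60):
    b = 0.5 * (lo + hi)
    lo, hi = (b, hi) if overshoots(b) else (lo, b)
print("shooting slope f'(0) =", 0.5 * (lo + hi))
\end{verbatim}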
As mentioned briefly above, axially symmetric initial conditions for a
vortex ring can be constructed by embedding a Skyrmion along
a circle, so that a cross-section of the vortex ring is the
Skyrmion \cite{Su}.
Explicitly, introduce cylindrical coordinates $\rho,\theta,z$
where $x=\rho\cos\theta$ and $y=\rho\sin\theta,$ and impose axial
symmetry by requiring ${\bf n}$ to be independent of $\theta.$
The initial condition for a vortex ring of radius $R$ and position
$z_0$ along the symmetry axis is obtained by embedding the
above planar Skyrmion (with frequency $\omega$) in the $(\rho,z)$ plane
with position $(\rho,z)=(R,z_0).$ The vortex ring is obtained by rotating
this planar configuration around the $z$-axis.
The size of the Skyrmion (determined by $\omega$)
corresponds to the thickness
of the vortex ring and the vortex ring radius is equal to $R.$
In terms of an explicit formula, this initial condition is given by
\begin{equation}
{\bf n}=\bigg(\frac{(\rho-R)}{D}\sin F,\frac{(z-z_0)}{D}\sin F,\cos F\bigg),
\quad \mbox{where} \quad D=\sqrt{(\rho-R)^2+(z-z_0)^2}
\end{equation}
and $F=0$ if $D>R$ but $F=f(D)$ if $D\le R,$ where
$f(r)$ is the solution of the ordinary differential equation
(\ref{ode}) with $q=1$ and the boundary
conditions $f(0)=\pi$ and $f(R)=0.$
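In code, this embedding is direct. A sketch (ours; \texttt{f} is assumed to be an interpolant of the profile function obtained above):
\begin{verbatim}
import numpy as np

def ring_initial_condition(rho, z, R, z0, f):
    # rho, z: 2d coordinate arrays; f: profile with f(0) = pi, f(R) = 0
    D = np.sqrt((rho - R)**2 + (z - z0)**2)
    F = np.where(D <= R, f(np.minimum(D, R)), 0.0)
    Ds = np.where(D > 0, D, 1.0)   # guard the core centre, where sin F = 0
    n1 = (rho - R) / Ds * np.sin(F)
    n2 = (z - z0) / Ds * np.sin(F)
    n3 = np.cos(F)
    return np.stack([n1, n2, n3])
\end{verbatim}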
By scaling symmetry, the important quantity for a vortex ring
is the ratio of its radius to its thickness, which we want to be large
for our investigations. Without loss of generality, we may therefore fix
the size of the Skyrmion (that is, the thickness of the ring) and treat
the vortex ring radius $R$ as a parameter to vary. From now on,
we fix the frequency of the Skyrmion to be $\omega=0.5,$ which
corresponds to a size of $1.35,$ as confirmed by an examination of
Figure~\ref{fig-profiles}. The regime in which we are interested is therefore
$R\gg 1.$
We study the dynamics of coaxial vortex rings
by computing axially symmetric solutions of the Landau-Lifshitz
equation (\ref{ll}). In cylindrical coordinates
the Landau-Lifshitz equation
reduces to the dynamics of an effective two-dimensional problem in the variables
$\rho$ and $z,$ because ${\bf n}$ is independent of $\theta.$
Spatial derivatives are approximated using
second order accurate finite differences with a lattice spacing
$\Delta \rho=\Delta z=0.15.$
The boundary condition on the symmetry axis is $\partial_\rho{\bf n}={\bf 0}$
and on the remaining boundaries of the numerical lattice we set
${\bf n}={\bf k},$ so that the vacuum value is attained.
Time evolution is implemented via a fourth order Runge-Kutta
method with a timestep $\Delta t=0.006.$
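For reference, the scheme just described can be sketched compactly. This is our illustrative implementation, not the authors' code: boundary values are simply frozen (rather than imposing $\partial_\rho{\bf n}={\bf 0}$ on the axis), the spin is renormalized to $|{\bf n}|=1$ after each step as a common safeguard that the text does not specify, and the optional current term anticipates the $({\bf J}\cdot\nabla){\bf n}$ term introduced below:
\begin{verbatim}
import numpy as np

A, drho, dz, dt = 1.0, 0.15, 0.15, 0.006

def laplacian(n, rho):
    # d^2/drho^2 + (1/rho) d/drho + d^2/dz^2 on each Cartesian spin component
    lap = np.zeros_like(n)
    lap[:, 1:-1, 1:-1] = (
        (n[:, 2:, 1:-1] - 2 * n[:, 1:-1, 1:-1] + n[:, :-2, 1:-1]) / drho**2
        + (n[:, 2:, 1:-1] - n[:, :-2, 1:-1])
          / (2 * drho * rho[None, 1:-1, None])
        + (n[:, 1:-1, 2:] - 2 * n[:, 1:-1, 1:-1] + n[:, 1:-1, :-2]) / dz**2)
    return lap

def rhs(n, rho, J=0.0):
    heff = laplacian(n, rho)
    heff[2] += A * n[2]                       # easy-axis term A (n.k) k
    dn = np.cross(n, heff, axis=0)            # Landau-Lifshitz precession
    if J:
        dn += J * np.gradient(n, dz, axis=2)  # spin-current term (J.grad)n
    return dn

def rk4_step(n, rho, J=0.0):
    k1 = rhs(n, rho, J)
    k2 = rhs(n + 0.5 * dt * k1, rho, J)
    k3 = rhs(n + 0.5 * dt * k2, rho, J)
    k4 = rhs(n + dt * k3, rho, J)
    n = n + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return n / np.linalg.norm(n, axis=0)      # keep |n| = 1
\end{verbatim}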
In Figure~\ref{fig-onering} the (lower) blue ring is the isosurface
$n_3=0$ of the initial condition ($t=0$) obtained by embedding the
Skyrmion (with $\omega=0.5$) along a circle of radius $R=45.$
The region for this numerical simulation is
$(\rho,z)\in[0,90]\times[-45,45].$
The (upper) red ring in Figure~\ref{fig-onering}
is the $n_3=0$ isosurface at the later time
$t=1800.$ It can be seen that the vortex ring simply translates
along its symmetry axis with no change in the radius or thickness
of the ring. In Figure~\ref{fig-position_J} the (upper) red curve
shows the position along the $z$-axis as a function of time.
This confirms that the vortex ring travels at a constant speed, to
within a good numerical accuracy. The tiny deviations from
uniform motion are a result of using an initial condition that assumes
the cross-section is exactly that of a Skyrmion,
rather than solving numerically for the initial conditions using a
travelling wave ansatz.
This approximation is already accurate enough for our requirements when
the ring radius is $R=45,$ and the accuracy of this approximation
improves as the radius $R$ increases.
\begin{figure}[ht]\begin{center}
\includegraphics[width=6cm]{onering.pdf}
\caption{
The isosurface $n_3=0$ at the initial time $t=0$ (lower blue ring)
and the later time $t=1800$ (upper red ring).
The vortex ring has a radius $R=45$ and moves with a constant
speed $v=0.026.$ Between the two images it has travelled a
distance $46.8,$ which is comparable to its radius.
}
\label{fig-onering}\end{center}\end{figure}
\begin{figure}[ht]\begin{center}
\includegraphics[width=8cm]{position_J.pdf}
\caption{
The (upper) red curve shows the position
along the $z$-axis as a function of time
for a vortex ring with a radius $R=45.$
The (lower) blue curve shows the position of the
same vortex ring when there is an applied electric current
$J=0.026$ to freeze the motion.
}
\label{fig-position_J}\end{center}\end{figure}
From the data in Figure~\ref{fig-position_J} we determine that
the speed of the ring is $v=0.026.$
From our earlier comment that the dimensionless unit of length equates to
a physical size of around 10 nanometres and the dimensionless unit of time to
around 10 picoseconds, then in terms of physical units this is a speed of
the order of $10\,m/s,$
which is comparable to the speeds observed for
current-driven domain wall motion in magnetic wires \cite{Ya}.
Once again, we stress that the speed of a vortex ring is not fixed by the
material parameters but by the size of the vortex ring.
In particular, larger rings of the same thickness travel more slowly.
In the presence of a spin-polarized electric current, ${\bf J},$
the Landau-Lifshitz equation (\ref{ll}) contains an extra term
and is given by \cite{TKS}
\begin{equation}
\frac{\partial {\bf n}}{\partial t}={\bf n} \times
\{\nabla^2{\bf n} + A({\bf n}\cdot{\bf k}){\bf k}\}
+({\bf J\cdot \nabla}){\bf n},
\label{llj}
\end{equation}
where we continue to use dimensionless units.
An important feature is that the Landau-Lifshitz equation (\ref{llj})
has a Galilean symmetry
${\bf x}\mapsto {\bf x}-{\bf v}t,$ if accompanied by a corresponding
shift in the spin-polarized electric current
${\bf J}\mapsto {\bf J}+{\bf v}.$
The microscopic derivation of this result
and its physical interpretation are subtle.
We refer the reader to section 14 of \cite{TKS}, and references
therein, for a detailed discussion.
The Galilean symmetry implies that a single vortex ring is brought to rest in
the presence of an appropriate current ${\bf J}=(0,0,J),$
where $J$ is the speed of the vortex ring in the absence of an
electric current. This is a very useful feature
that can be exploited in the numerical study of
vortex rings.
Without an electric current,
extremely large grids would be required to prevent the
rings from leaving the simulation region during the long
time periods needed for the study of interacting vortex rings.
However, with a suitable electric current the centre of mass of the
system can be fixed, allowing the rings to remain inside a reasonable
simulation region for a long time.
As an example, we have seen that a vortex ring with a radius $R=45$
moves along the positive $z$-axis with a speed $v=0.026,$
when the current vanishes, $J=0$. We can therefore bring this
vortex ring to rest by applying the current $J=v=0.026.$
The (lower) blue curve in
Figure~\ref{fig-position_J} shows the position along the $z$-axis
of the vortex ring as a function of time, obtained by numerical simulation of
equation (\ref{llj}) with this value of the current.
This confirms that the introduction of this specific current freezes
the motion of the centre of mass of the ring.
\section{Interacting vortex rings}\setcounter{equation}{0}\quad\label{sec-leapfrog}
In this section we investigate the interaction of multiple vortex
rings and demonstrate that leapfrogging motion takes place.
In the previous section we have seen how to construct an initial
condition for a single vortex ring. This approach can also be
used to create an initial condition for multiple vortex rings,
through the addition of the complex variable obtained as the
stereographic projection of the spin ${\bf n}.$ Explicitly,
${\bf n}$ is a unit vector and the associated point on the
two-sphere can be specified by the Riemann sphere coordinate
\begin{equation}
\zeta=\frac{n_1+in_2}{1+n_3}.
\label{rsphere}
\end{equation}
Note that the vacuum value ${\bf n}={\bf k}$ maps to the point $\zeta=0$
and the value at the centre of the core
${\bf n}=-{\bf k}$ maps to the point at infinity $\zeta=\infty.$
Let $\zeta^{(j)},$ for $j=1,\ldots,m,$ denote the Riemann sphere
coordinates obtained from (\ref{rsphere}) by taking the spin ${\bf n}$
to be the initial condition of a single vortex ring with radius
$R^{(j)}$ and position $z_0^{(j)}.$
The set of $m$ single vortex rings are coaxial and
are taken to have mutual separations that are greater than the core diameter.
Taking the sum $\zeta=\sum_{j=1}^m\zeta^{(j)}$ and inverting
(\ref{rsphere}) yields an initial condition for
$m$ coaxial rings.
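A sketch (ours) of this superposition. Note that $\zeta$ diverges at the core centres, where ${\bf n}=-{\bf k}$, so on a grid one should avoid sampling the poles exactly or guard the division:
\begin{verbatim}
import numpy as np

def to_riemann(n):                 # n: array shaped (3, ...), with |n| = 1
    return (n[0] + 1j * n[1]) / (1.0 + n[2])

def from_riemann(zeta):            # inverse stereographic projection
    d = 1.0 + np.abs(zeta)**2
    w = 2.0 * zeta / d
    return np.stack([w.real, w.imag, (1.0 - np.abs(zeta)**2) / d])

def superpose(rings):              # rings: list of (3, ...) spin fields
    zeta = sum(to_riemann(n) for n in rings)
    return from_riemann(zeta)
\end{verbatim}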
In the first image in Figure~\ref{fig-leapfrog} we
display the initial condition, obtained using the above procedure,
for two rings with equal radii
$R=45$ and starting positions $z_0=\pm 4.$
The two thin tubes correspond to
the isosurface $n_3=0,$ identifying the vortex cores. The
two components of this isosurface
are coloured red and blue in order to aid identification of the
two vortex rings throughout the motion.
\begin{figure*}[h]\begin{center}
\includegraphics[width=16.5cm]{leapfrog.pdf}
\caption{
A pair of leapfrogging magnetic vortex rings.
The isosurface $n_3=0$ is displayed at equal time intervals
(from left to right) from
$t=0$ to $t=1440.$ Initially the two rings have equal radii
$R=45$ and positions $z_0=\pm 4.$
There is an applied electric current $J=0.026$ to freeze
the centre of mass motion. The colouring of the isosurfaces is
to aid identification of the two rings throughout the
evolution.
}
\label{fig-leapfrog}\end{center}\end{figure*}
\begin{figure}[h]\begin{center}
\includegraphics[width=8cm]{position.pdf}
\caption{
The positions of pairs of leapfrogging vortex rings in the absence of
an electric current.
The graph shows the positions along the $z$-axis as a function of time
for pairs of leapfrogging vortex rings with initial equal radii $R$
and an initial separation in the $z$ direction equal to 8.
Upper blue curves have $R=45$,
middle black curves have $R=115$ and
lower red curves have $R=523$.
}
\label{fig-position}\end{center}\end{figure}
The images in Figure~\ref{fig-leapfrog} show the
time evolution, obtained by numerical solution of (\ref{llj})
in the presence of an electric current $J=0.026,$
applied to freeze the centre of mass motion.
These images reveal that the bottom (red) ring shrinks and speeds up
until it
passes through the top (blue) ring, after which it expands so that the initial
configuration is recovered but with an exchange of the two rings.
This is the famous leapfrogging motion discussed by Helmholtz \cite{He}
in the mid-nineteenth century, in the context of vortex rings in fluids.
Here we have shown, for the first time,
that the same phenomenon takes place in the nanoscopic
world of ferromagnetic spin structures, as modelled by the Landau-Lifshitz
equation. In the absence of dissipation, perpetual leapfrogging is
expected. Our numerical results support this expectation, with
our longest simulations producing around a dozen leapfrogging events
with no discernible deviation from a periodic motion.
\begin{figure}[ht]\begin{center}
\includegraphics[width=16.5cm]{xsection.pdf}
\caption{
A cross-section in the $(\rho,z)$ plane showing
plots of $n_3,$ at equal time intervals (from left to right)
from $t=0$ to $t=4000,$
with colour scale from blue to red corresponding
to values from -1 to 1.
In each plot the displayed region is $(\rho,z)\in[100,130]\times[-45,45].$
The cross-section contains a pair of Skyrmions that rotate around each other
and propagate in the $z$-direction. The associated three-dimensional vortex
rings are obtained by rotating these results about the $z$-axis.
From the three-dimensional perspective each ring has an initial radius $R=115$
and the rings leapfrog each other.
}
\label{fig-xsection}\end{center}\end{figure}
\begin{figure*}[ht]\begin{center}
\includegraphics[width=8.1cm]{trio1.pdf}
\includegraphics[width=8.1cm]{trio2.pdf}
\caption{
Trios of leapfrogging vortex rings in the presence
of an electric current $J=0.026,$ to freeze the centre of mass
motion.
The positions along the $z$-axis as a function of time
for three vortex rings with initial equal radii $R=45.$
In the left image the rings are equally spaced with initial positions
$z_0=-2,-12,-22.$
In the right image the initial separation between the two lowest
rings has been increased by $1\%$ to give the initial positions
$z_0=-2,-12,-22.1.$
The drastic difference in the resulting evolution demonstrates
the sensitivity to the initial conditions expected of chaotic dynamics.
}
\label{fig-trio}\end{center}\end{figure*}
\begin{figure}[ht]\begin{center}
\includegraphics[width=5cm]{headon.pdf}
\caption{
The cross-section for a head-on collision of two vortex rings,
with initially equal radii $R=45$ and opposite orientations.
The three-dimensional vortex rings are obtained by rotating these
cross-sections around the $z$-axis.
The initial axis positions are $z_0=\pm 5.$
The plots show $n_3,$
at equal time intervals (from top to bottom) from $t=0$ to $t=228,$
with colour scale from blue to red corresponding
to values from -1 to 1.
In each plot the displayed region is $(\rho,z)\in[30,120]\times[-15,15].$
The head-on collision generates a rapidly expanding single ring.
}
\label{fig-headon}\end{center}\end{figure}
A Galilean transformation can be applied to convert the above result to
the situation in which there is no applied electric current, so that
the leapfrogging pair propagate up the $z$-axis.
This yields the upper blue curves in Figure~\ref{fig-position} for the
$z$-positions of the pair of vortex rings.
These curves display multiple leapfrogging events, identified by
the crossing points of the two curves.
Increasing the radius of the vortex rings to $R=115$ produces the
middle black curves in Figure~\ref{fig-position}.
The associated effective two-dimensional simulation
(obtained as the half-plane $(\rho,z)$ by taking a cross-section)
is presented in Figure~\ref{fig-xsection}, for the case of
no electric current.
In the cross-section,
this shows a pair of Skyrmions rotating around each other whilst
drifting in the positive $z$-direction.
The three-dimensional vortex rings are obtained by
rotating these cross-sectional results about the $z$-axis.
Under this identification, a pair of cross-sectional Skyrmions that
rotate around each other maps to a leapfrogging event.
A further increase of the radius
to $R=523$ produces the lower red curves
in Figure~\ref{fig-position}.
This set of results demonstrates that increasing the
ring radius has a significant influence on the propagation speed of the
pair, which decreases with increasing ring radius.
However, there is much less of an impact
on the leapfrogging period, which is mainly determined by the separation
of the rings, and increases with this separation.
The results in \cite{Kom} suggest that
the frequency of the leapfrogging motion should vary with the
inverse of the square of the separation between the rings,
and it may be possible to
derive such a relation analytically using the methods described there.
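Leapfrogging events such as those in Figure~\ref{fig-position} are easy to extract from simulation output. A small sketch (ours), given the two ring positions sampled on a common time grid:
\begin{verbatim}
import numpy as np

def leapfrog_times(t, z1, z2):
    # times at which the two ring trajectories cross (one per leapfrog)
    d = z1 - z2
    idx = np.nonzero(np.diff(np.sign(d)))[0]
    # linearly interpolate the crossing instant within each bracketing step
    return t[idx] - d[idx] * (t[idx + 1] - t[idx]) / (d[idx + 1] - d[idx])
\end{verbatim}
The leapfrogging frequency is then the inverse of the spacing between successive crossing times, which could be compared directly with the expected inverse-square dependence on the ring separation.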
The similarity between magnetic vortex rings and those in fluids extends to
other aspects beyond leapfrogging. Below we shall demonstrate some examples of
this correspondence.
By applying a thin-cored approximation,
studies \cite{Ko} of three or more coaxial fluid vortex rings
have shown that there are regimes of chaotic behaviour in which the
evolution is very sensitive to the initial condition.
We now investigate the same issue for magnetic vortex rings, but using the
full nonlinear partial differential equations,
rather than a thin-cored approximation.
The left image in Figure~\ref{fig-trio} displays the evolution
of the positions along the $z$-axis of a trio of
vortex rings with equal radii $R=45$ that are initially equally spaced.
There is an applied electric current $J=0.026,$ to keep all three
rings inside the simulation region for a reasonable length of time.
It can be seen that after some mutual leapfrogging, the asymptotic state
contains a pair of leapfrogging rings plus a decoupled single ring,
which is the one that was initially at the top.
The right image in Figure~\ref{fig-trio} displays the resulting evolution
following a tiny change in the initial condition to increase the
initial separation of the two lowest rings by $1\%.$
It can be seen that this tiny change results in a drastic difference
in the dynamics. There are fewer mutual leapfrogging events and this time
the single ring that decouples is the ring that was initially at the
bottom not the top. The period of the asymptotic leapfrogging pair has
also increased to about twice the value found in the first case,
indicating that this pair has an increased separation.
This sensitivity to the initial conditions is in line with the results obtained
for fluid vortex rings \cite{Ko}
and is an indication of the presence of chaotic dynamics.
The fact that we
are able to reproduce this behaviour further strengthens the analogy
between magnetic vortex rings and their fluid counterparts.
We have also studied the head-on collision of two coaxial magnetic vortex
rings that are initially moving in opposite directions and have
equal radii.
A vortex ring moving along the negative $z$-axis is obtained by replacing
the initial $q=1$ Skyrmion cross-section by an anti-Skyrmion with $q=-1.$
In Figure~\ref{fig-headon} we display the initial cross-section, with
axis positions $z_0=\pm 5$ and a common radius $R=45$, together
with the subsequent evolution of the cross-section.
We find that the two rings merge into a rapidly expanding and thinning ring,
as seen in the initial stages of experiments on fluids \cite{LN}.
In terms of the cross-section, the single structure that forms from
the collision has Skyrmion number $q=0,$ as it is formed from the merger
of a Skyrmion and an anti-Skyrmion.
This can be understood within the two-dimensional system
as the formation of a non-topological soliton, which is an object
that has been studied in \cite{PZ}.
The non-topological soliton is a stationary solution that
has the form (\ref{ansatz}) but with $q=0.$ The resulting ordinary
differential equation for the profile function, (\ref{ode}) with $q=0,$
is now subject to the boundary condition $f'(0)=0,$ rather than
$f(0)=\pi,$ which was required for $q\ne 0.$
The fact that the vortex ring with a $q=0$ cross-section has an expanding
radius can be understood from the above description of its formation.
Namely, a perturbation of the $q=0$ cross-section is a Skyrmion anti-Skyrmion
pair in cross-section and one expects the cross-sectional dynamics to be
qualitatively similar to the two-dimensional dynamics of a planar
Skyrmion anti-Skyrmion pair. Such a pair generates a common translational
motion, in the same manner as a vortex anti-vortex pair in fluid dynamics,
which turns into an expanding ring when considered as a cross-section that
is to be rotated around the $z$-axis to provide the full three-dimensional
configuration.
The head-on collision of vortex rings provides another demonstration
of the similarity between vortex rings in fluids and their magnetic analogues.
It would be interesting to extend our simulations beyond the axially
symmetric regime to see if the non-axial instability and subsequent
production of small rings via reconnection, seen in the latter stages of
the fluid experiments \cite{LN}, can be reproduced in the
Landau-Lifshitz equation.
\section{Vortex rings in the Landau-Lifshitz-Gilbert equation}\setcounter{equation}{0}\quad\label{sec-LLG}
So far we have neglected dissipation, but this can be included in the theory
by extending our simulations to the Landau-Lifshitz-Gilbert equation.
In the absence of an electric current, this equation reads \cite{TKS}
\begin{equation}
\frac{\partial {\bf n}}{\partial t}={\bf n} \times
\{\nabla^2{\bf n} + A({\bf n}\cdot{\bf k}){\bf k}\}
-\lambda {\bf n}\times({\bf n}\times
\{\nabla^2{\bf n} + A({\bf n}\cdot{\bf k}){\bf k}\})
\label{llg}
\end{equation}
where $\lambda>0$ is the damping constant.
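For readers who wish to experiment numerically, the following is a minimal sketch (in Python with NumPy) of one explicit Euler update of Eq.~(\ref{llg}), followed by a renormalisation that restores $|{\bf n}|=1.$ It is an illustration only, not the scheme used for the results reported here: it assumes a uniform Cartesian grid with periodic boundaries (via \texttt{np.roll}) and a standard 7-point finite-difference Laplacian, and the grid spacing \texttt{dx}, time step \texttt{dt}, anisotropy \texttt{A} and damping \texttt{lam} are placeholder parameters.
\begin{verbatim}
import numpy as np

def laplacian(n, dx):
    # 7-point finite-difference Laplacian, applied per component of n;
    # np.roll imposes periodic boundaries (an assumption of this sketch).
    lap = -6.0 * n
    for axis in (0, 1, 2):
        lap += np.roll(n, +1, axis=axis) + np.roll(n, -1, axis=axis)
    return lap / dx**2

def llg_step(n, dt, dx, A, lam):
    # n has shape (Nx, Ny, Nz, 3); the last axis holds (n1, n2, n3).
    h = laplacian(n, dx)
    h[..., 2] += A * n[..., 2]    # easy-axis term A (n.k) k, with k = e_z
    prec = np.cross(n, h)         # precessional term  n x H_eff
    damp = np.cross(n, prec)      # damping term       n x (n x H_eff)
    n = n + dt * (prec - lam * damp)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
\end{verbatim}
Setting \texttt{lam} to zero recovers the undamped Landau-Lifshitz dynamics of the previous sections.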
Once damping is included,
the perpetual leapfrogging of a pair of vortex rings is
replaced by a finite number of leapfrogs, with this number decreasing
as the damping constant is increased.
Figure~\ref{fig-llg} provides an illustrative example of this phenomenon
for two vortex rings with initial radii $R=115$ and a separation of 8 along
the $z$-axis. The upper red curves are the position curves for the
two vortex rings in the absence of dissipation, that is,
solutions of (\ref{llg}) with $\lambda=0.$
The lower black curves display the positions of the vortex rings
using identical initial conditions, but this time for solutions of
(\ref{llg}) with the small damping constant
$\lambda=0.0001.$ To aid visualization, this second pair of curves has
been shifted down to avoid any overlap with the first set of curves.
Damping does not have a significant influence on the radius of a vortex ring
but instead the main effect is to reduce the thickness of the vortex ring.
In terms of the planar cross-section, this is a reduction of the core size
of the Skyrmion. For a leapfrogging event, this reduction in core size
means that the distance between the two Skyrmions has increased relative
to the core size, which increases the period of the rotating pair.
This phenomenon is clearly demonstrated in Figure~\ref{fig-llg},
where it is evident that the dissipation produces a continual increase in the
leapfrogging period.
Again this has a direct
analogy in fluid dynamics, where it has been observed \cite{RS}
that the number of leapfrogs increases with Reynolds number.
Perpetual leapfrogging of vortex rings in fluids can only
occur in inviscid flow, which in our magnetic analogy corresponds to the
Landau-Lifshitz equation without dissipation.
\begin{figure}[ht]\begin{center}
\includegraphics[width=10cm]{llg.pdf}
\caption{The upper red curves show the propagation of a leapfrogging pair of
vortex rings without dissipation. The lower black curves show the result
for the same pair after the inclusion of a small damping constant:
to aid visualization, the lower black curves have been shifted down.
The damping has very little effect on the radii of the vortex rings
but their thickness decreases, which results in a continual
increase in the period of leapfrogging.
}
\label{fig-llg}\end{center}\end{figure}
In materials that are currently used in experiments under standard conditions,
the damping constant can be as large as $\lambda=0.02.$
For a frequency of precession $\omega$ the effect of damping is that
the precession spirals down on a time scale of order $(\lambda\omega)^{-1}.$
Given this relation and our above results for small damping, it
will be a considerable challenge to create experimental
conditions under which leapfrogging has time to take place.
\section{Conclusion}\setcounter{equation}{0}\quad\label{sec-con}
In this paper we have presented the results of numerical simulations of
vortex ring dynamics in the Landau-Lifshitz equation, describing the
evolution of the local magnetization in a ferromagnetic continuum.
We have observed many similarities between the dynamics of magnetic vortex
rings and those in fluids, including the famous leapfrogging locomotion
of a pair of vortex rings.
Although it is beyond the scope of the present paper, it would be interesting to
study how our results are modified by additional effects
that can be included within the Landau-Lifshitz equation to improve its
description of physical materials.
An example is the dipole-dipole
interaction; however, being a non-local effect, it would require a substantial
increase in computation and is therefore left as a future project.
Another example is a realistic modelling of temperature dependence.
However, as long as the Landau-Lifshitz model remains a valid mean field
description, temperature effects can be accounted for via a
renormalization of the parameters of the theory.
As we have observed, the qualitative features of vortex ring
dynamics appear to be universal, with many similarities between our results and
known results in fluid dynamics.
Hence we expect the qualitative features to remain robust to the inclusion
of any additional terms,
although there may be some
quantitative differences that could be important in
attempting any experimental observations of these phenomena.
There are enormous technical challenges that must be overcome before
the experimental creation and observation of magnetic vortex rings
can be achieved, as this requires the manipulation of highly nonlinear
excitations involving a large number of spin-flips.
As we have discussed, the cross-section of a magnetic vortex ring has
the non-trivial topology of a Skyrmion, so the recent experimental
observations of Skyrmions in chiral magnets \cite{Yu},
with a size of the order of $100$ nanometres, offers some
encouragement that the complex phenomena predicted here might be
observed in future experiments.
Current-driven domain wall motion is an important phenomenon in the
rapidly developing field of spintronics, with applications to high density
memory storage. Vortex rings provide a complementary companion to domain
walls in that a single vortex ring is in uniform motion without a current,
but can be brought to rest by the application of a current.
Magnetic vortex rings may therefore have a future role to play within
spintronic devices.
There are also opportunities to exploit the
interactions of multiple vortex rings.
As mentioned earlier, it has been shown \cite{SM}
that within the thin-cored approximation of
fluid vortex rings, the periodic leapfrogging motion of a pair of vortex
rings generates a geometric phase.
It is expected that such results can be extended to
more sophisticated vortex ring interactions, such as those considered here.
This is related to the existence of exotic exchange statistics
for leapfrogging motion \cite{Ni}, that follows from the braiding of
vortex loops, to yield a three-dimensional version of the phenomenon
that produces anyon statistics in two dimensions.
The relevant mathematical generalization of the braid group is the
McCool group \cite{Mc,Go} and leapfrogging magnetic vortex rings
provide a physical realization of this group.
Given the importance of anyons in applications to topological quantum
computing, we may speculate on similar potential applications for
leapfrogging magnetic vortex rings.
As we have observed, a pair of leapfrogging rings can be held in position by
a spin-polarized electric current, allowing exotic exchange statistics
to be physically realised. By turning off the current the vortex rings are
released, allowing the result to be read.\\
Finally, we make some comments on the important differences between vortex
rings in the Landau-Lifshitz equation and related structures,
known as Hopf solitons \cite{MS},
that exist in relativistic theories of a
three-component unit vector.
Consider any theory, in three-dimensional space, defined for a
three-component unit vector ${\bf n},$ with the boundary condition
${\bf n}\rightarrow {\bf k}$ as $|{\bf x}|\rightarrow \infty.$
Irrespective of the equations of motion, such a theory has a
conserved integer-valued topological charge $Q,$ known as the Hopf invariant.
Physically, $Q$ is the linking number of two curves obtained as the preimages
of any two distinct generic values of ${\bf n}.$
The concept of the Hopf invariant is applicable to vortex
ring solutions of the Landau-Lifshitz equation, but for
all the vortex rings studied in this paper the Hopf invariant is zero.
In fact $Q=0$ follows directly from the restriction to axial symmetry.
Note that there is some potential for confusion here,
particularly in relation to Hopf solitons,
because it is common to use the term axial symmetry when the intended
meaning is actually equivariance, rather than symmetry.
Such equivariance means that a rotation of the cylindrical angle $\theta$
through an angle $\alpha$ can be compensated by an internal rotation of the
two-component vector $(n_1,n_2)$ through an angle $m\alpha,$ for some
non-zero integer $m.$ Strict axial symmetry, as used in the current paper,
corresponds to the case $m=0,$ so that no compensating internal rotation
is needed.
In terms of our earlier description of a vortex ring, as
a planar Skyrmion rotated around the $z$-axis, then a vortex ring with Hopf
invariant $Q$ is obtained if the $(n_1,n_2)$ components of the planar
Skyrmion are rotated through an angle $2\pi Q$ as the planar Skyrmion
is rotated through an angle $2\pi$ around the $z$-axis.
It is possible to study Landau-Lifshitz vortex rings with a
non-zero Hopf invariant \cite{Co,Su}, but the properties of a single ring
are very similar to rings with $Q=0,$
so the Hopf invariant is not an important ingredient
in this setting. This contrasts markedly with the case of relativistic Hopf
solitons, where the Hopf invariant plays a vital role, and vortex ring
structures exist only if $Q\ne 0.$
Another important difference in relativistic theories is that Hopf solitons
can be static. A moving Hopf soliton has an arbitrary speed (less than
the speed of light) and is simply obtained from the static solution by
performing a Lorentz boost.
The most detailed studies of Hopf solitons have been performed in
the Skyrme-Faddeev model \cite{FN}, where solitons with $Q=1$ and $Q=2$ are
axially equivariant (as described above) and have a vortex ring structure.
The size of these vortex rings is not arbitrary, but is fixed by the
parameters in the theory. The ratio of the radius of a vortex ring to its
thickness is independent of these parameters, and so there is no
scope for a large thin ring of the kind studied in the present paper.
Moreover,
Hopf solitons form bound states, with the energy of the static $Q=2$
soliton being less than twice the energy of the static $Q=1$ soliton.
For larger values of $Q$ Hopf solitons form knotted and linked configurations
\cite{BS,HS}, rather than axial vortex rings.
This attraction between Hopf solitons has important consequences for the
relativistic dynamics of these vortex rings,
as studied recently in \cite{HPJP}. These simulations reveal that
vortex rings tend to merge, or bounce off each other, depending upon
internal orientations and impact parameters, but they do not leapfrog. The
first order dynamics of the Landau-Lifshitz equation, discussed in the current
paper is therefore very different to the relativistic dynamics presented
in \cite{HPJP}, even though, at first glance, there may appear to be some
similarities between the two systems.
\section*{Acknowledgements}
\noindent
AJN is supported by a CNRS PEPS collaboration grant, by a Region
Centre Recherche d'Initiative Academique grant, by a
Chinese-French Scientific Exchange program Cai Yuanpei, and
by a research grant through the Qian Ren Program at BIT.
\noindent PMS thanks Fred Cohen for useful discussions,
and is grateful for the hospitality of the KITP in Santa Barbara.
\title{Segmentation-free x-ray energy spectrum estimation for computed tomography}
\author{Wei Zhao\supit{a}, Qiude Zhang\supit{a}, Tianye Niu\supit{b}
\skiplinehalf
\supit{a}Department of Biomedical Engineering, Huazhong University of Science and Technology, Hubei, China 430074;\\
\supit{b}Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, China 310016.
}
\begin{document}
\maketitle
\begin{abstract}
X-ray energy spectrum plays an essential role in imaging and related tasks. Due to the high photon flux of clinical CT scanners, most spectrum estimation methods are indirect and usually suffer from various limitations. The recently proposed indirect transmission measurement-based method requires the segmentation of at least one material, which is problematic for CT images that are highly noisy or contain artifacts. To overcome this bottleneck of spectrum estimation using segmented CT images, in this study, we develop a segmentation-free indirect transmission measurement-based energy spectrum estimation method using dual-energy material decomposition.
The general principle of the method is to compare the polychromatic forward projection with the raw projection data in order to calibrate a set of unknown weights, which are used to express the unknown spectrum together with a set of model spectra. After applying dual-energy material decomposition using high- and low-energy raw projection data, polychromatic forward projection is conducted on the material-specific images. The unknown weights are then iteratively updated to minimize the difference between the raw projection and the estimated projection. Both numerical simulations and an experimental head phantom are used to evaluate the proposed method. The results indicate that the method provides an accurate estimate of the spectrum, and it may be attractive for dose calculation, artifact correction and other clinical applications.
\end{abstract}
\section{Introduction}
X-ray spectra play a very important role in CT imaging, including dose calculation~\cite{demarco2005}, polychromatic image reconstruction~\cite{elbakri2002}, artifact reduction~\cite{zhao2014,zhao2015sac}, material decomposition~\cite{long2014}, energy-resolved imaging, etc.
In clinical applications, the x-ray flux is usually quite high in order to meet the fast imaging requirement. Thus it is not easy to directly measure the energy spectrum of a CT scanner using an energy-resolved detector, as the detector count rate is usually limited and the pile-up effect is severe. Instead, spectrum calibration often employs indirect methods, including Compton-scattering measurement~\cite{duisterwinkel2015}, Monte Carlo (MC) simulation~\cite{bazalova2007}, empirical or semi-empirical physical models~\cite{tucker1991} and transmission measurements~\cite{sidky2005,duan2011}.
The accuracy of these methods usually suffers from various limitations. For example, environmental conditions (such as low-temperature requirements) or the hole-trapping effect, which yields low-energy tailing, may affect the spectrum measured using energy-resolved detectors~\cite{koenig2012}. Attenuation and scattering (e.g., Rayleigh and multiple Compton) in the material of the scatterer need to be carefully considered in the Compton-scattering measurement. Transmission measurements based on step or wedge phantoms require dedicated hardware or workflow. Indirect transmission measurement (ITM)~\cite{zhao2015} needs at least the segmentation of one material class. When noise or artifacts are present in the reconstructed image, they cause incorrect material segmentation and yield an inferior estimate of the spectrum.
This work aims to develop a segmentation-free indirect transmission measurement-type energy spectrum estimation method using dual-energy material decomposition. Different from ITM~\cite{zhao2015}, where polychromatic forward projection is conducted on segmented images, the method proposed here performs polychromatic forward projection using material-specific images. Thus the method can be applied to estimate the spectrum in cases where CT image segmentation is difficult.
\section{Methods}
\subsection{Workflow of the proposed algorithm}
To avoid determining each energy bin of the X-ray spectrum individually, we use model spectra to express the spectrum that is to be estimated. The model spectra expression can significantly reduce the degrees of freedom of the spectrum estimation problem. In this case, the unknown spectrum $\Omega(E)$ is the weighted summation of a set of model spectra $\Omega_{i}(E)$, i.e.
\begin{equation}\label{equ:spek}
\Omega(E)=\sum_{i=1}^{M}c_{i}\Omega_{i}(E),
\end{equation}
with $M$ the number of the model spectra and $c_i$ the weight on the respective model spectrum. The model spectra can be predetermined using spectrum generators (such as SpekCalc~\cite{poludniowski2009} and Spektr~\cite{siewerdsen2004}) or MC simulation toolkits.
The flowchart of the proposed algorithm is presented in Fig.~\ref{fig:f1}. The method starts from acquiring dual-energy raw projection data $p_m$ (i.e., low-energy data $p_L$ and high-energy data $p_H$), based on which material-specific images are obtained by using either projection-domain or image-domain material decomposition algorithms. The material images are then employed along with the model spectra expression to calculate a set of estimated projections $\hat{p}$. By iteratively updating the unknown weights $c_i$, we can converge to a set of optimal $c_i$ that minimizes the quadratic error between the measured raw projection $p_m$ (either $p_L$ or $p_H$) and the estimated projection $\hat{p}$. The unknown spectrum is finally yielded by using Eq.~(\ref{equ:spek}). The three major components of the approach are detailed in the following subsections: dual-energy material decomposition, material image-based polychromatic reprojection, and weight estimation.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig1_2.pdf}
\caption{Flowchart of the proposed dual-energy material decomposition-based spectrum estimation method.}
\label{fig:f1}
\end{figure}
\subsubsection{Dual-energy material decomposition}
Since magnified noise is a general concern for both projection-domain and image-domain dual-energy material decomposition, in this study, to preserve the accuracy of the estimated projection $\hat{p}$, we have used an iterative image-domain method to obtain material-specific images with significantly reduced noise~\cite{niu2014}.
\subsubsection{Polychromatic projection on decomposed material images}
In dual-energy material decomposition, the linear attenuation coefficient $\mu(\vec{r},E)$ is modeled with two basis materials via a weighted summation fashion as,
\begin{equation}\label{equ:decomposition}
\mu(\vec{r},E)=f_{1}(\vec{r})\psi_{1}(E)+f_{2}(\vec{r})\psi_{2}(E).
\end{equation}
Here $\psi_{1,2}$ are the known independent energy dependencies which can be mass attenuation coefficients of basis materials and $f_{1,2}(\vec{r})$ are the material-selective images. Based on the above formulation, polychromatic projection of an object is represented as
\begin{equation}\label{equ:polyreprojBimg}
\hat{I}=N\int_{0}^{E_{max}}\mathrm{d}E\,\Omega(E) \, \eta(E)\,\mathrm{exp}\left[-A_{1}\psi_{1}(E)-A_{2}\psi_{2}(E)\right],
\end{equation}
with $A_{1}=\int_{L}\mathrm{d}\vec{r}\,f_{1}(\vec{r})$ and $A_{2}=\int_{L}\mathrm{d}\vec{r}\,f_{2}(\vec{r})$ the line integrals of the material-selective images. Here $L$, $\Omega(E)$ and $E_{max}$ are the propagation path length of each ray, the corresponding polychromatic x-ray spectrum of the ray and the maximum photon energy of the spectrum, respectively. $\eta(E)$ is the energy-dependent response of the detector.
Note that $\hat{I}$ is detector-pixel dependent and the detector channel index is omitted for convenience. In the absence of the object, the flood field $\hat{I}_{0}$ can be expressed as follows:
\begin{equation}\label{equ:floodfield}
\hat{I}_{0}=N\int_{0}^{E_{max}}\mathrm{d}E\,\Omega(E) \, \eta(E).
\end{equation}
After applying the logarithmic operation, the projection data can be expressed as:
\begin{eqnarray}\label{equ:esproj}
\begin{split}
\hat{p}(\vec{c})=& \log\left(\frac{\hat{I}_{0}}{\hat{I}}\right) \\
=&\log\left(\frac{\int_{0}^{E_{max}}\mathrm{d}E\,\Omega(E)\,\eta(E)}{\int_{0}^{E_{max}}\mathrm{d}E\,\Omega(E)\,\eta(E)\,\mathrm{exp}\left[-A_{1}\psi_{1}(E)-A_{2}\psi_{2}(E)\right]}\right).
\end{split}
\end{eqnarray}
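On a discrete energy grid the estimated projection reduces to a few array operations. The Python sketch below illustrates this; the variable names are ours, the model spectra are assumed to be sampled on a common energy grid, and the (constant) energy-bin width cancels in the ratio of the two integrals.
\begin{verbatim}
import numpy as np

def estimated_projection(c, model_spectra, eta, psi1, psi2, A1, A2):
    # c:             (M,)   weights on the model spectra
    # model_spectra: (M, E) model spectra sampled on a common energy grid
    # eta:           (E,)   detector response
    # psi1, psi2:    (E,)   mass attenuation coefficients of the bases
    # A1, A2:        (P,)   line integrals of the material images, per ray
    omega = c @ model_spectra              # weighted sum of model spectra
    w = omega * eta                        # Omega(E) * eta(E)
    atten = np.exp(-np.outer(A1, psi1) - np.outer(A2, psi2))   # (P, E)
    return np.log(w.sum() / (atten @ w))   # one projection value per ray
\end{verbatim}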
\subsubsection{Weights estimation}
To estimate the unknown weights for each model spectrum, we minimize the quadratic error between the detector measurement $p_m$ ($p_m$ is either $p_L$ or $p_H$) and the corresponding estimated projection $\hat{p}$ by iteratively updating the weights. This procedure is formulated as the following optimization problem,
\begin{equation}\label{equ:opt-constraint2}
\vec{c}=\underset{\vec{c}} {\mathrm{argmin}}\;\|p_{m}-\hat{p}(\vec{c})\|_{2}^{2}, ~~\mathrm{s.t.}~\sum_{i=1}^{M}c_{i}=1,~\mathrm{and}~c_{i}>0.
\end{equation}
Here the normalization constraint $\sum_{i=1}^{M}c_{i}=1$ and the non-negativity constraint, which keep the solution of the problem physically meaningful, are introduced. The objective function should be minimal if the spectrum expressed using the model spectra matches the unknown raw spectrum. To solve Eq.~(\ref{equ:opt-constraint2}), we use a sequential optimization approach, i.e., minimizing the objective function, then normalizing the solution and enforcing the non-negativity constraint.
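As an illustration, the constrained problem above can also be handed to a generic solver. The sketch below uses SciPy's SLSQP method with the normalization and non-negativity constraints imposed directly; this is simpler to write down than the sequential scheme just described, though not necessarily equivalent to it. It reuses the \texttt{estimated\_projection} sketch given earlier.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def estimate_weights(p_meas, model_spectra, eta, psi1, psi2, A1, A2):
    M = model_spectra.shape[0]

    def objective(c):
        p_hat = estimated_projection(c, model_spectra, eta,
                                     psi1, psi2, A1, A2)
        return np.sum((p_meas - p_hat) ** 2)   # quadratic error

    res = minimize(objective,
                   x0=np.full(M, 1.0 / M),     # uniform initial weights
                   method='SLSQP',
                   bounds=[(0.0, 1.0)] * M,    # enforces c_i >= 0
                   constraints=[{'type': 'eq',
                                 'fun': lambda c: np.sum(c) - 1.0}])
    return res.x                               # estimated weights c_i
\end{verbatim}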
\subsection{Evaluation}
We first use a numerical simulation to evaluate the proposed spectrum estimation method. A water cylinder with six iodine concentrate inserts (range, 0--20 mg/mL with a 4 mg/mL interval) was simulated in a 2D fan-beam CT geometry. The diameter of the water cylinder is 198 mm and the diameter of each of the six inserts is 22.5 mm. The low- and high-energy spectra are 100 kVp and 140 kVp, generated using the SpekCalc software~\cite{poludniowski2009} with 12 mm Al and 0.4 mm Sn + 12 mm Al filtration, respectively. For the x-ray detection, an energy-integrating detector is simulated with 0.388 mm pixel size and 1024 pixels. The x-ray source to isocenter distance and source to detector distance are 785 mm and 1200 mm, respectively. A set of 720 view angles were scanned over an angular range of $360^{\circ}$. Since one difficulty of DECT decomposition is its ill-conditioning, Poisson noise was included in the raw projection data to show the robustness of the algorithm. In addition, a first-order beam-hardening correction was performed to improve the accuracy of the material-specific images.
The algorithm was also evaluated using experimental data from an anthropomorphic head phantom scanned on a cone-beam CT (CBCT) benchtop system. The distances from the source to the isocenter and from the source to the detector are 1000 mm and 1500 mm, respectively. A total of 655 projections were evenly acquired over a 360-degree rotation with a $2\times2$ rebinning mode and narrow collimation to avoid scattered radiation. The tube potentials of the high- and low-energy spectra were 125 kVp and 75 kVp, respectively. Both spectra were filtered with a 6 mm aluminium filter.
For both the numerical simulation and the experimental evaluation, low- and high-energy CT images were reconstructed using a filtered backprojection (FBP) algorithm with the band-limited ramp filter (i.e., the Ram-Lak filter), whose cut-off frequency is set to the Nyquist frequency. Low-energy data sets are used to estimate the low-energy spectra, and high-energy spectrum estimation proceeds in exactly the same way.
To quantify the accuracy of the estimated spectrum, we calculate the normalized root mean square error (NRMSE) and the mean energy difference $\Delta E$ between the raw spectrum (ground truth) and the estimated spectrum, i.e.
\begin{equation}\label{equ:error}
NRMSE=\sqrt{\frac{\sum_{e=1}^{N}(\hat{\Omega}(e)-\Omega(e))^{2}}{\sum_{e=1}^{N}\Omega(e)^{2}}}
\end{equation}
\begin{equation}\label{equ:meanEnergy}
\Delta E= \sum_{e=1}^{N}E_e\,(\Omega(e)-\hat{\Omega}(e)),
\end{equation}
with $\hat{\Omega}(e)$ the $e$th energy bin of the normalized estimated spectrum and $\Omega(e)$ the $e$th energy bin of the normalized true spectrum. $N$ and $E_e$ are the number of energy bins and the energy of the $e$th energy bin of the spectrum, respectively.
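Both figures of merit are one-line array operations. A sketch, assuming the two spectra are sampled on the same energy grid and normalized to unit area, reads:
\begin{verbatim}
import numpy as np

def nrmse(omega_true, omega_est):
    return np.sqrt(np.sum((omega_est - omega_true) ** 2) /
                   np.sum(omega_true ** 2))

def mean_energy_difference(energies, omega_true, omega_est):
    return np.sum(energies * (omega_true - omega_est))
\end{verbatim}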
\section{Results}
\subsection{Numerical simulation}
\begin{figure
\centering
\includegraphics[width=0.5\textwidth]{fig2_1.pdf}
\caption{High and low-energy CT images and material-specific images of the numerical iodine concentrate phantom. Display windows: kV CT images, C/W=0 HU/300 HU; material images, C/W=100\%/200\%. }
\label{fig:f2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures06_simSpek.pdf}
\caption{Spectrum estimated using the numerical iodine concentrate phantom. }
\label{fig:f3}
\end{figure}
Fig.~\ref{fig:f2} shows the results of the high- and low-energy CT images, and the basis material images of the numerical iodine concentrate phantom. As can be seen, the 100 kV image shows a much higher contrast level for the iodine inserts. Although water correction has been applied, there are some residual higher-order streaks in the 100 kV image, since its spectrum is much softer than the 140 kV spectrum. Superior water and iodine images were obtained by selecting the center of the water cylinder and the 20 mg/mL iodine concentrate insert as ROIs to calculate the decomposition matrix. Fig.~\ref{fig:f3} depicts the results of the 100 kV spectrum estimation using the numerical phantom. The initial spectrum is the hardest spectrum among the model spectra. The raw spectrum is the spectrum that was used to generate the 100 kV projection data, and it can be regarded as the ground truth. The estimated spectrum matches the raw spectrum quite well and their mean energy difference is 0.16 keV, suggesting that the dual-energy material decomposition-based method provides an accurate spectrum estimate.
\subsection{Experiments phantom study}
Fig.~\ref{fig:f4} shows the low- and high-energy CT images of the head phantom. For the experimental evaluation, the benchtop CBCT system used a flat-panel detector with a 0.6 mm thick CsI layer. To better estimate the spectrum, the energy-dependent detector efficiency has been taken into account. Fig.~\ref{fig:f5} depicts the spectrum estimated with the anthropomorphic head phantom, with the detector efficiency incorporated. The initial spectrum is the hardest spectrum among the model spectra. As can be seen, the estimated spectrum matches the raw spectrum well. The mean energy difference and NRMSE are 0.71 keV and $7.5\%$, respectively. Note that we did not directly measure the raw spectrum in this case; instead, the well-validated spectrum generator SpekCalc was used to generate the raw spectrum with matched x-ray tube specifications.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig5.pdf}
\caption{Low- and high-energy CT images of the experimental head phantom. Display window: [-300HU, 300HU].}
\label{fig:f4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures07_headSpek.pdf}
\caption{Spectra estimated using the physical head phantom with detector efficiency incorporation. The reference spectrum (Raw) is calculated using SpekCalc software with filtration matched with the experimental setting.}
\label{fig:f5}
\end{figure}
\section{Conclusion}
This work presents an x-ray energy spectrum calibration method for CT scanners using dual-energy material decomposition within the indirect transmission measurement framework. The method conducts polychromatic reprojection on material-specific images instead of segmented CT images, thereby avoiding the segmentation procedure. Hence, the proposed method does not require dedicated hardware or workflow, and it is segmentation-free. The reprojection data are compared to the raw projection data, and their difference is minimized by iteratively updating a set of weights, which are used to express the unknown spectrum together with a set of model spectra. The method was evaluated using numerical simulation data and experimental phantom data. The results demonstrate that raw spectra can be accurately recovered by incorporating the energy-dependent detector absorption efficiency. The mean energy differences between the raw and estimated spectra are 0.16 keV and 0.71 keV for the numerical simulation and the experimental phantom data, respectively.
\section*{}
This work has not been submitted to any other conferences or publications.
\bibliographystyle{spiebib}
\section{Introduction}
Studying violations of local Lorentz invariance in the spacetime theory of gravity is a way to probe General Relativity.
To search for Lorentz violation in pure gravity, some short-range experiments\cite{1,2,3}
such as testing the gravitational inverse-square law using a torsion pendulum
were performed and the data analyzed for a Lorentz-violation signal.
In this work, we mainly present a special design
for a pendulum experiment to enhance the sensitivity to the violation signal,
which has not yet been detected.
\section{Starting point of the new design}
Lorentz-invariance violation in the pure-gravity sector with quadratic couplings of the Riemann curvature has been described quantitatively
by the Standard-Model Extension (SME).\cite{1,4} Using the effective field technique, the coupling introduces a correction for the Newton gravity,
which depends on time and orientation. Corresponding to the gravitational potential, the correction can be written as:
\begin{equation}\label{Eq1}
{V_{LV}}(\vec{x})
= - G \frac{{m_1}{m_2}}{|\vec{x}|^{3}}{\bar k}(\hat{x},T),
\end{equation}
with
\begin{equation}\label{Eq2}
{\bar k}(\hat{x},T)\equiv \!\!\tfrac{3}{2}{({{\bar k}_{\rm{eff}}})_{jkjk}}-9{({{\bar k}_{\rm{eff}}})_{jkll}}\hat{x}^{j}\hat{x}^{k}
+\tfrac{15}{2}{({{\bar k}_{\rm{eff}}})_{jklm}}
\hat{x}^{j}\hat{x}^{k}\hat{x}^{l}\hat{x}^{m}.
\end{equation}
Here, $m_{1}$ and $m_{2}$ represent two point masses, and $\vec{x}$ is the separation between them.
The quantity $\hat{x}^{j}$
is the projection of the unit vector $\vec{x}$ along the $j$th direction.
The coefficient ${({{\bar k}_{\rm{eff}}})_{jklm}}$
for Lorentz violation
has 15 independent components,
as it is totally symmetric with indices $j$, $k$, $l$, $m$
ranging over the three spatial directions.
As discussed in Ref.\ \refcite{5}, shape and edge effects play an important role in determining the sensitivity of the
experiment to the coefficients for Lorentz-invariance violation:
\begin{equation}\label{Eq3}
\tau_{LV} \sim\varepsilon \Delta C{({\bar k_{\rm{eff}}})_{jklm}},
\end{equation}
where $\varepsilon$ represents the edge effect, which is related to the geometrical parameters of the test and source masses. To
achieve an amplified experimental sensitivity to the Lorentz-violation signal, $\varepsilon$ should be designed to be as large as possible.
However, for our pendulum experiment, we have to work within the maximum capacity of the fiber.
\section{Theoretical analysis for the new design}
According to Eq.\ (\ref{Eq2}), the violation coefficients ${({{\bar k}_{\rm{eff}}})_{jklm}}$ are different in different frames. Thus,
the Sun-centered frame is usually adopted as the canonical frame to report the results from experiments searching for a Lorentz-violation signal,
since the coefficients can be regarded as constant
on the scale of the solar system. The violation coefficients
${({{\bar k}_{\rm{eff}}})_{JKLM}}$ in the Sun-centered frame can be connected with the coefficients
${({{\bar k}_{\rm{eff}}})_{jklm}}$ in the laboratory frame by the rotation matrix $R^{jJ}$:
\begin{equation}\label{Eq4}
({{\bar k}_{\rm{eff}}})_{jklm}=R^{jJ}R^{kK}R^{lL}R^{mM}({{\bar k}_{\rm{eff}}})_{JKLM},
\end{equation}
where $R^{jJ}$ involves $\omega_{\oplus}\!\!\simeq \!\!2\pi/(\rm{23~h~56~min})$. Thus, Eq.\ (\ref{Eq2}) can be expressed as a Fourier series
in the sidereal time $T$ as:
\begin{equation}\label{Eq5}
{\bar k}(\hat{x},T) =\! {c_0} + \sum\limits_{m = 1}^4 \left[ {c_m}\cos (m{\omega _ \oplus }T) + {s_m}\sin (m{\omega _ \oplus }T) \right]
\end{equation}
through Eqs.\ (\ref{Eq2}) and (\ref{Eq4}). The nine Fourier amplitudes ($c_0$, $c_m$, $s_m$) are functions of $({{\bar k}_{\rm{eff}}})_{JKLM}$.
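Two pieces of this analysis chain reduce to short numerical routines: the frame rotation of Eq.~(\ref{Eq4}) and the extraction of the nine Fourier amplitudes from a measured time series by ordinary least squares. The Python sketch below is purely illustrative (it is not our analysis code); the rotation matrix \texttt{R} and the data arrays are assumed to be supplied by the user.
\begin{verbatim}
import numpy as np

OMEGA = 2.0 * np.pi / (23 * 3600 + 56 * 60)   # sidereal rate in rad/s

def rotate_coefficients(R, k_sun):
    # Rank-4 rotation from the Sun-centered frame (indices J,K,L,M)
    # to the laboratory frame (indices j,k,l,m), as in Eq. (4).
    return np.einsum('jJ,kK,lL,mM,JKLM->jklm', R, R, R, R, k_sun)

def fit_sidereal_amplitudes(T, signal):
    # Least-squares fit of c_0, c_m, s_m (m = 1..4) in Eq. (5);
    # T holds the sidereal times, signal the measured values.
    cols = [np.ones_like(T)]
    for m in range(1, 5):
        cols += [np.cos(m * OMEGA * T), np.sin(m * OMEGA * T)]
    design = np.column_stack(cols)             # (N, 9) design matrix
    amps, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return amps   # [c0, c1, s1, c2, s2, c3, s3, c4, s4]
\end{verbatim}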
\begin{figure}[!t]
\includegraphics[width=3.4in]{shaofig_1.eps}
\caption{The I-shape torsion pendulum. $\rm{W}_{t1}$ and $\rm{W}_{s1}$ represent the test mass and source mass, respectively.
The masses are in the striped geometry (a) and in the checkered geometry (b).
}
\label{aba:fig1}
\end{figure}
We reassemble the $({{\bar k}_{\rm{eff}}})_{JKLM}$ (15 dimensions), and redefine the violation coefficients as ${\bar k}_{j}$ (15 dimensions).
By introducing ${\bar k}_{j}$, the space $({{\bar k}_{\rm{eff}}})_{JKLM}$ is decomposed into 5 subspaces, in which different harmonics of
$\omega_{\oplus}$ for the Lorentz-violation signal are linked to different subspaces, such as:
\begin{equation}\label{Eq6}
{c_0} = {\alpha _0}{\bar k_0} + {\alpha _1}{\bar k_1} + {\alpha _2}{\bar k_2}.
\end{equation}
We find that ${c_0}$ is linked to ${\bar k}_0$, ${\bar k}_1$ and ${\bar k}_2$, and the other four harmonics of $\omega_{\oplus}$ correspond to the other four subspaces, respectively. Thus, for the
Lorentz-violation torque,
\begin{eqnarray}\label{Eq7}
{\tau _{LV}} \!\!=&& \!\!\!G{\rho _1}{\rho _2}\!\!\iint {d{V_1}d{V_2}}\frac{\partial }{{\partial \theta }}\frac{{\bar k(\hat r,T)}}{{{r^3}}} \hfill \nonumber\\
{\text{ }} =&&\!\!\! {\Lambda _j} {\bar k}_{j}.
\end{eqnarray}
Here, the transfer coefficient ${\Lambda _j}$ is related to ${\alpha _j}$ and the geometrical parameters of the test mass and source mass, and it
includes edge effects as described by $\varepsilon$ in Eq.\ (\ref{Eq3}). According to Eqs.\ (\ref{Eq5})-(\ref{Eq7}), one can seek a special design for the experiment
to satisfy the particular research requirements. For example, to probe the Lorentz-violation coefficients ${\bar k}_{0}$, ${\bar k}_{1}$ and ${\bar k}_{3}$ better,
one can design the test mass and source mass to make ${\Lambda _0}$, ${\Lambda _1}$ and ${\Lambda _3}$ larger.
\section{Experimental design}
The experimental schematic is similar to that in testing the inverse-square law for HUST-2011 (see Fig.\ 2 in Ref.\ \refcite{5}), but
the geometrical parameters here are designed differently to amplify the violation signal. We analyzed two upgraded designs (see Fig.\ 1) for the torsion pendulum.
In one, the shape of the masses involves a striped geometry (see Fig.\ 1(a)), and in the other it involves a checkered geometry (see Fig.\ 1(b)).
For the test mass $\rm{W}_{t1}$ and the source mass $\rm{W}_{s1}$ in the two designs, the gray part and white part represent tungsten and
glass, respectively. Comparing the results of numerical simulation, we find the first design (striped geometry) is the better option to increase
the Lorentz-violation signal.
\section{Summary}
Theoretically, we decomposed the 15 Lorentz-violation coefficients into five parts,
with different harmonics of the violation signal corresponding to different parts, which
helps to perform the special experimental design required to study a certain violation coefficient. In addition, we proposed a design to search for Lorentz violation at higher sensitivity,
in which the masses are in a striped pattern.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China (11275075).
\section{\label{sec:Introduction}Introduction}
The seminal papers by Swerling \cite{swerl}, K\'alm\'an \cite{kal}, and K\'alm\'an and Bucy \cite{kalbuc} instigated the development of data-fusion algorithms for systems of known dynamical behaviour, the
family of the Kalman Filters (KFs). The KFs are simple methods providing a trade-off between the expectations of the actual (also called `true') state of a system (obtained on the basis of a physical
or mathematical model, to be called `dynamical' henceforth) and measurements yielding information on that state. It is assumed that the process of extracting information from the system does not disturb
its temporal evolution. KFs have been applied in numerous research and development domains. Standard applications include the navigation and control of vehicles. Notable examples are the NASA Space Shuttle
and the International Space Station. Other standard applications range from robotic motion and trajectory optimisation to machine learning. From the practical point of view, an advantage of the KFs is that
their implementation in real-time applications does not require the storage of the history of the states of the system and of the measurements.
The application of KFs is usually thought of as comprising two phases, a categorisation which however rests upon a logistical basis: prediction and correction~\footnote{There is no consensus regarding the
naming of these two phases.}. In the prediction phase, an estimate of the state of the system is obtained from the estimate of the system's last state. This prediction also takes account of the effects of
the so-called control inputs, i.e., of the means to enforce changes onto the system via deliberate actions, e.g., as is the case when the brakes are applied in order to bring a moving vehicle to a halt at
a road intersection. This prediction is usually referred to as the \emph{a priori} state estimate. In the correction phase, the \emph{a priori} state estimate is blended with the current-time measurement,
to refine the system's state estimate. This improved estimate is called \emph{a posteriori} state estimate and represents a weighted average of the \emph{a priori} state estimate and of the measurement, a
result which is more representative of the actual state of the system than either the \emph{a priori} state estimate or the measurement. Typically (but not necessarily), the prediction and correction
phases alternate.
As aforementioned, the KFs make use of the state-transition model, the known control inputs, as well as the acquired measurements, to provide reliable estimates of the actual state of the system. One
additional operator relates the actual state of the system and the result of the observation. The effects of the noise, in both phases, are included via appropriate covariance matrices. The Basic KF (BKF)
is applicable in case of linear systems. Two other KFs have been developed in order to cover non-linear systems: the Extended KF (EKF) and the Unscented KF (UKF). The KFs do not rely on assumptions about
the uncertainties involved in each particular problem. If the predictions of the state of a linear system and the measurements follow Gaussian distributions, the BKF is the optimal method for extracting
the important information from the observations.
Aiming at providing simplified explanations of the essentials of the KFs, numerous papers have appeared in scientific and in popular journals. In my opinion, many of these papers do not serve their cause,
due to the lack of clarity and/or of rigorousness, omission of examples, and, sadly, the mistakes they contain. Some of the available works give the impression that their authors' ambition was to provide
recipes for fast implementation, rather than insight into the basic concepts underlying these algorithms. Rigorous and detailed overviews on the subject may be obtained from Refs.~\cite{wb,tbf}. The present
document aims at providing a short, didactical introduction to three standard versions of the KF (i.e., the BKF, the EKF, and the UKF). It is addressed to the non-expert (I am no expert in Kalman filtering)
who wishes to understand the basics of an algorithm prior to applying it in order to obtain solutions.
\section{\label{sec:Definitions}Definitions}
It is assumed that the temporal evolution of a deterministic system is under study and that measurements are performed on the system at specific time instants $t_0$, $t_1$, \dots, thus yielding $K$ time
series, where $K$ stands for the dimension of the measurement vector $z$, i.e., for the number of independent pieces of information obtained from the system at one time instant. It is also assumed that the
system is fully described (at all times) by a real $N$-dimensional (ket) state vector $x$ (e.g., encompassing position vectors of the system's components, velocities, etc.). Without loss of generality,
one may choose $N$ to be the minimal number of quantities necessary for the full description of the system. Independent measurements imply that $K \leq N$. In the following, the state and measurement
vectors at time instant $t_k$ will be denoted as $x_k$ and $z_k$, respectively. From now on, reference to the time instants $t_k$ will be made by simply quoting the index $k$.
The temporal evolution of a deterministic physical system is known, if the state of the system is \emph{exactly} known at one time instant. Of course, given that all measurements are subject to finite
(non-zero) uncertainties, `exactitude' in experimental results is of no practical relevance. For systems which are observed only once and let evolve, the difference between the predicted and actual states
is expected to increase with time; this is simply the consequence of error propagation in predictions. To obtain as reliable predictions as possible, it is required that the system be regularly monitored
and its updated state be used in the derivation of new predictions. Of course, the frequency at which the state vector is updated is linked to the lapse of time within which the predictions may be considered
as reliable.
\section{\label{sec:BKF}The Basic Kalman Filter}
The following matrices must be specified:
\begin{itemize}
\item $F_k$: the $N \times N$ state-transition matrix,
\item $H_k$: the $K \times N$ state-to-measurement transformation matrix,
\item $Q_k$: the $N \times N$ covariance matrix of the process noise, and
\item $R_k$: the $K \times K$ covariance matrix of the measurement noise.
\end{itemize}
One additional matrix ($B_k$, known as the control-input matrix) maps the control inputs onto the state of the system. This is an $N \times L$ matrix, where $L$ is the dimension of the (ket) control-input
vector $u_k$.
Let us consider the problem of determining the position of a vehicle given the underlying deterministic dynamics, the effects of the application of the control inputs (accelerator, brakes, steering direction,
etc.), and a time series of (noisy) measurements (of the position of the vehicle). Most vehicles are equipped with a GPS unit nowadays, providing an estimate of the vehicle's actual position within a few
metres.
In the prediction phase, the laws of motion ($F_k$), using as input the vehicle's old position, and the changes induced by the control inputs ($u_k$) yield an \emph{a priori} state estimate of a new position
at the time instant corresponding to the subsequent measurement. In the correction phase, a measurement of the vehicle's position $z_k$ is obtained from the GPS unit. The \emph{a priori} state estimate and
the measurement are blended together, aiming at the minimisation of the difference between the \emph{a posteriori} state estimate and the actual position of the vehicle. The scheme is shown in
Fig.~\ref{fig:BKFScheme}. The uncertainties, associated with the predictions and with the measurements, are taken into account via the covariance matrices $Q_k$ and $R_k$.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{BKFScheme.eps}
\caption{\label{fig:BKFScheme}The scheme underlying the application of the Basic Kalman Filter. The output of the algorithm at time instant $k-1$ comprises estimates of the state vector $\hat{x}_{k-1 \vert k-1}$
and of the covariance matrix $P_{k-1 \vert k-1}$. Using the laws of motion ($F_k$) and the effects of the control inputs ($B_k$), one obtains predictions at time instant $k$, i.e., of the state vector
$\hat{x}_{k \vert k-1}$ and of the covariance matrix $P_{k \vert k-1}$; these predictions comprise the input at that time instant. A prediction of the measurement at time instant $k$ is obtained from
$\hat{x}_{k \vert k-1}$ on the basis of the state-to-measurement transformation matrix ($H_k$); this prediction is blended with the measurement $z_k$ at time instant $k$, resulting in updated predictions
of the state vector $\hat{x}_{k \vert k}$ and of the covariance matrix $P_{k \vert k}$ at time instant $k$; these predictions comprise the output of the algorithm at time instant $k$. The next iteration
involves quantities pertaining to time instants $k$ and $k+1$.}
\vspace{0.35cm}
\end{center}
\end{figure}
\subsection{\label{sec:BKFDetails}The details of the Basic Kalman Filter}
All KFs assume that the actual state of the system at time instant $k$ evolves from that at time instant $k-1$. In the BKF, the two actual states are related via the expression:
\begin{equation} \label{eq:EQ001}
x_k = F_k x_{k-1} + B_k u_k + q_k \, \, \, ,
\end{equation}
where $q_k$ is the process noise, drawn from a zero-mean multi-variate normal distribution with covariance matrix $Q_k$. At time instant $k$, a measurement $z_k$ is conducted, relating to the actual state
of the system via the expression:
\begin{equation} \label{eq:EQ002}
z_k = H_k x_k + r_k \, \, \, ,
\end{equation}
where $r_k$ is the measurement noise, drawn from a zero-mean multi-variate normal distribution with covariance matrix $R_k$. At all time instants, the state estimates and the noise are not correlated. The
matrices $F_k$ and $H_k$ are not dependent on the state vector of the system.
At time instant $k-1$, two quantities are obtained:
\begin{itemize}
\item $\hat{x}_{k \vert k-1}$: the \emph{a priori} state estimate and
\item $P_{k \vert k-1}$: the \emph{a priori} covariance matrix.
\end{itemize}
The subscript $n \vert m$ indicates estimates corresponding to time instant $n$, given information available at time instant $m$. At time instant $k$, two quantities are obtained:
\begin{itemize}
\item $\hat{x}_{k \vert k}$: the \emph{a posteriori} state estimate and
\item $P_{k \vert k}$: the \emph{a posteriori} covariance matrix.
\end{itemize}
{\bf Prediction equations}
The two prediction equations are:
\begin{itemize}
\item \emph{A priori} state estimate ($N$-dimensional ket)
\begin{equation} \label{eq:EQ003}
\hat{x}_{k \vert k-1} = F_k \hat{x}_{k-1 \vert k-1} + B_k u_k
\end{equation}
\item \emph{A priori} covariance matrix ($N \times N$ matrix)
\begin{equation} \label{eq:EQ004}
P_{k \vert k-1} = F_k P_{k-1 \vert k-1} F_k^T + Q_k
\end{equation}
\end{itemize}
{\bf Correction equations}
The correction scheme is based on the following relations:
\begin{itemize}
\item Predicted measurements ($K$-dimensional ket)
\begin{equation} \label{eq:EQ005}
\hat{z}_k = H_k \hat{x}_{k \vert k-1}
\end{equation}
\item Residuals ($K$-dimensional ket)
\begin{equation} \label{eq:EQ006}
\tilde{z}_k = z_k - \hat{z}_k
\end{equation}
\item Covariance matrix of the residuals ($K \times K$ matrix)
\begin{equation} \label{eq:EQ007}
S_k = H_k P_{k \vert k-1} H_k^T + R_k
\end{equation}
\item Optimal gain ($N \times K$ matrix)
\begin{equation} \label{eq:EQ008}
K_k = P_{k \vert k-1} H_k^T S_k^{-1}
\end{equation}
\item \emph{A posteriori} state estimate ($N$-dimensional ket)
\begin{equation} \label{eq:EQ009}
\hat{x}_{k \vert k} = \hat{x}_{k \vert k-1} + K_k \tilde{z}_k
\end{equation}
\item \emph{A posteriori} covariance matrix ($N \times N$ matrix) corresponding to the optimal gain
\begin{equation} \label{eq:EQ010}
P_{k \vert k} = (I_N - K_k H_k) P_{k \vert k-1}
\end{equation}
\end{itemize}
In the last expression, $I_N$ denotes the $N \times N$ identity matrix.
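For concreteness, the prediction and correction equations translate directly into a short routine. The following sketch, in Python with NumPy, is a minimal transcription of Eqs.~(\ref{eq:EQ003})--(\ref{eq:EQ010}); the explicit inversion of $S_k$ is kept for clarity, although in production code one would rather solve the corresponding linear system.
\begin{verbatim}
import numpy as np

def kf_predict(x, P, F, B, u, Q):
    x = F @ x + B @ u                     # a priori state,      Eq. (3)
    P = F @ P @ F.T + Q                   # a priori covariance, Eq. (4)
    return x, P

def kf_correct(x, P, z, H, R):
    y = z - H @ x                         # residuals,      Eqs. (5)-(6)
    S = H @ P @ H.T + R                   # residual covariance, Eq. (7)
    K = P @ H.T @ np.linalg.inv(S)        # optimal gain,        Eq. (8)
    x = x + K @ y                         # a posteriori state,  Eq. (9)
    P = (np.eye(len(x)) - K @ H) @ P      # a posteriori cov.,   Eq. (10)
    return x, P
\end{verbatim}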
{\bf Some matrix algebra}
At first glance, relations (\ref{eq:EQ004},\ref{eq:EQ007},\ref{eq:EQ008},\ref{eq:EQ010}) may appear quasi-mystical, yet they are easily obtained on the basis of very simple mathematics. I start with the
\emph{a priori} covariance matrix which is defined as the following expectation value:
\begin{equation} \label{eq:EQ011}
P_{k \vert k-1} = E \left[ (x_k - \hat{x}_{k \vert k-1}) (x_k - \hat{x}_{k \vert k-1})^T \right] \, \, \, .
\end{equation}
Invoking Eqs.~(\ref{eq:EQ001},\ref{eq:EQ003}), one obtains
\begin{align} \label{eq:EQ012}
P_{k \vert k-1} &= E \left[ \left( F_k (x_{k-1} - \hat{x}_{k-1 \vert k-1}) + q_k \right) \left( F_k (x_{k-1} - \hat{x}_{k-1 \vert k-1}) + q_k \right)^T \right]\nonumber\\
&= E \left[ \left( F_k (x_{k-1} - \hat{x}_{k-1 \vert k-1}) + q_k \right) \left( (x_{k-1} - \hat{x}_{k-1 \vert k-1})^T F_k^T + q_k^T \right) \right]\nonumber\\
&= E \left[ F_k (x_{k-1} - \hat{x}_{k-1 \vert k-1}) (x_{k-1} - \hat{x}_{k-1 \vert k-1})^T F_k^T + q_k q_k^T \right]\nonumber\\
&+ E \left[ F_k (x_{k-1} - \hat{x}_{k-1 \vert k-1}) q_k^T + q_k (x_{k-1} - \hat{x}_{k-1 \vert k-1})^T F_k^T \right]\nonumber\\
&= E \left[ F_k (x_{k-1} - \hat{x}_{k-1 \vert k-1}) (x_{k-1} - \hat{x}_{k-1 \vert k-1})^T F_k^T \right] + E \left[ q_k q_k^T \right]\nonumber\\
&= F_k P_{k-1 \vert k-1} F_k^T + Q_k \, \, \, ;
\end{align}
Equation (\ref{eq:EQ004}) is thus obtained. To derive this expression, use has been made of the assumption that the process noise is not correlated with the dynamics of the system.
I will next derive the expression for the covariance matrix of the residuals $S_k$. Obviously,
\begin{equation} \label{eq:EQ013}
S_k = E \left[ \tilde{z}_k \tilde{z}_k^T \right] \, \, \, ,
\end{equation}
which, after invoking Eqs.~(\ref{eq:EQ002},\ref{eq:EQ005},\ref{eq:EQ006}), attains the form:
\begin{align} \label{eq:EQ014}
S_k &= E \left[ \left( H_k (x_k - \hat{x}_{k \vert k-1}) + r_k \right) \left( H_k (x_k - \hat{x}_{k \vert k-1}) + r_k \right)^T \right]\nonumber\\
&= E \left[ \left( H_k (x_k - \hat{x}_{k \vert k-1}) + r_k \right) \left( (x_k - \hat{x}_{k \vert k-1})^T H_k^T + r_k^T \right) \right]\nonumber\\
&= E \left[ H_k (x_k - \hat{x}_{k \vert k-1}) (x_k - \hat{x}_{k \vert k-1})^T H_k^T + r_k r_k^T \right]\nonumber\\
&+ E \left[ H_k (x_k - \hat{x}_{k \vert k-1}) r_k^T + r_k (x_k - \hat{x}_{k \vert k-1})^T H_k^T \right]\nonumber\\
&= E \left[ H_k (x_k - \hat{x}_{k \vert k-1}) (x_k - \hat{x}_{k \vert k-1})^T H_k^T \right] + E \left[ r_k r_k^T \right]\nonumber\\
&= H_k P_{k \vert k-1} H_k^T + R_k \, \, \, ,
\end{align}
which is Eq.~(\ref{eq:EQ007}). To derive this expression, use has been made of the assumption that the measurement noise is not correlated with the system under observation.
Let me finally come to the derivation of the expression for the \emph{a posteriori} covariance matrix $P_{k \vert k}$, defined by the expression:
\begin{equation} \label{eq:EQ015}
P_{k \vert k} = E \left[ (x_k - \hat{x}_{k \vert k}) (x_k - \hat{x}_{k \vert k})^T \right] \, \, \, ,
\end{equation}
where the \emph{a posteriori} state estimate $\hat{x}_{k \vert k}$ is taken from Eq.~(\ref{eq:EQ009}). Invoking Eqs.~(\ref{eq:EQ002},\ref{eq:EQ005},\ref{eq:EQ006},\ref{eq:EQ007}), one obtains
\begin{align} \label{eq:EQ016}
P_{k \vert k} &= E \left[ \left( (I_N - K_k H_k)(x_k - \hat{x}_{k \vert k-1}) - K_k r_k \right) \left( (I_N - K_k H_k)(x_k - \hat{x}_{k \vert k-1}) - K_k r_k \right)^T \right]\nonumber\\
&= E \left[ \left( (I_N - K_k H_k)(x_k - \hat{x}_{k \vert k-1}) - K_k r_k \right) \left( (x_k - \hat{x}_{k \vert k-1})^T (I_N - H_k^T K_k^T) - r_k^T K_k^T \right) \right]\nonumber\\
&= E \left[ (I_N - K_k H_k)(x_k - \hat{x}_{k \vert k-1}) (x_k - \hat{x}_{k \vert k-1})^T (I_N - H_k^T K_k^T) \right]\nonumber\\
&+ E \left[ K_k r_k r_k^T K_k^T \right]\nonumber\\
&- E \left[ (I_N - K_k H_k)(x_k - \hat{x}_{k \vert k-1}) r_k^T K_k^T \right]\nonumber\\
&- E \left[ K_k r_k (x_k - \hat{x}_{k \vert k-1})^T (I_N - H_k^T K_k^T) \right]\nonumber\\
&= (I_N - K_k H_k) P_{k \vert k-1} (I_N - H_k^T K_k^T) + K_k R_k K_k^T \nonumber\\
&= P_{k \vert k-1} - P_{k \vert k-1} H_k^T K_k^T - K_k H_k P_{k \vert k-1} + K_k H_k P_{k \vert k-1} H_k^T K_k^T + K_k R_k K_k^T \nonumber\\
&= P_{k \vert k-1} - P_{k \vert k-1} H_k^T K_k^T - K_k H_k P_{k \vert k-1} + K_k S_k K_k^T \, \, \, ;
\end{align}
this relation is known as the `Joseph form'. The matrix $K_k$ may be chosen such that $P_{k \vert k}$ be minimised, in which case it satisfies the relation:
\begin{equation} \label{eq:EQ017}
K_k S_k = P_{k \vert k-1} H_k^T
\end{equation}
or, equivalently,
\begin{equation} \label{eq:EQ018}
K_k = P_{k \vert k-1} H_k^T S_k^{-1} \, \, \, .
\end{equation}
The quantity $K_k$ of Eqs.~(\ref{eq:EQ017},\ref{eq:EQ018}) is the so-called optimal gain. Substituting the optimal gain into Eq.~(\ref{eq:EQ016}) and using the fact that the covariance matrix $P_{k \vert k-1}$
is symmetric (hence its transpose is equal to the matrix itself), one finally obtains
\begin{equation} \label{eq:EQ019}
P_{k \vert k} = (I_N - K_k H_k) P_{k \vert k-1} \, \, \, .
\end{equation}
It should be stressed that Eq.~(\ref{eq:EQ019}) is applicable for the optimal gain, whereas Eq.~(\ref{eq:EQ016}) is the expression for the \emph{a posteriori} covariance matrix for arbitrary gain. The
importance of the model predictions, relative to the measurements, is regulated by the gain matrix. Large values of the gain-matrix elements place larger weight on the measurements; for small values,
the filtered data follow the model predictions more closely.
\subsection{\label{sec:BKFPerformance}Performance of the Basic Kalman Filter}
The BKF yields the optimal solution when the following conditions are fulfilled:
\begin{itemize}
\item the dynamics of the system is known and linear,
\item the noise is white and Gaussian, and
\item the covariance matrices of the noise ($Q_k$ and $R_k$) are known.
\end{itemize}
Experience shows that, in case of incomplete modelling of the dynamical behaviour of a system, the performance of the filter deteriorates. There are also issues relating to its numerical stability: round-off
uncertainties may lead to noise covariance matrices which are not positive-definite, as is the case when they involve small values or when the noise level is high.
\subsection{\label{sec:BKFExample}Example of an application of the Basic Kalman Filter}
One representative example of the straightforward application of the BKF deals with the motion of a massive object in the gravitational field of the Earth; treated in several textbooks is the case where the
air resistance is neglected. (For the sake of completeness, the dynamical model when the resistance is proportional to the velocity of the object is detailed in Appendix \ref{App:AppA}.) The object is released
at height $x=x_0$ ($x=0$ on the surface of the Earth, positive upwards or, better expressed, away from the centre of gravity) at time $t=t_0$ and the release velocity is equal to $v_0$, in the radial
direction of the gravitational field. Without loss of generality, $t_0$ may be fixed to $0$ (i.e., the clock starts ticking at the moment the object is released). Elementary Physics dictates that the height
$x(t)$ and the velocity $v(t)$ of the object are given by the expressions:
\begin{equation} \label{eq:EQ020}
x(t) = x_0 + v_0 t - \frac{1}{2} g t^2
\end{equation}
and
\begin{equation} \label{eq:EQ021}
v(t) = v_0 - g t \, \, \, ,
\end{equation}
where $g=9.80665$ m s$^{-2}$ \cite{pdg} is the standard gravitational acceleration. Of course, these two relations are valid so long as $g$ may be assumed to be constant. Discretising the motion in the
time domain, one may put these two equations into the compact form:
\begin{equation} \label{eq:EQ022}
\left(
\begin{array}{c}
x_k\\
v_k
\end{array} \right)
= \left(
\begin{array}{cc}
1 & \Delta t_{k-1}\\
0 & 1
\end{array} \right)
\left(
\begin{array}{c}
x_{k-1}\\
v_{k-1}
\end{array} \right)
-g \left(
\begin{array}{c}
\Delta t_{k-1}^2/2\\
\Delta t_{k-1}
\end{array} \right)
\end{equation}
where $\Delta t_{k-1}=t_k-t_{k-1}$. This simple deterministic system is fully described if the position and the velocity of the object are known at (any) one time instant. A comparison of this expression
with Eq.~(\ref{eq:EQ001}) in the absence of process noise ($q_k=0$) leads to the straightforward identification of the state-transition and control-input matrices, as well as of the control-input vector
$u_k$ which reduces (in this problem) to a mere constant ($-g$). In the absence of a control input, no force is exerted on the object and Newton's first law of motion is recovered. The dimensions $N$ and
$L$ are equal to $2$ and $1$, respectively.
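For the reader's convenience, a minimal numerical sketch of this example is given below (in Python with NumPy), implementing the standard prediction and correction scheme of Eqs.~(\ref{eq:EQ003}-\ref{eq:EQ010}) with the matrices identified above; the initial conditions and noise levels are those quoted in the next paragraph, whereas the time step of $0.01$ s and all variable names are illustrative assumptions, not part of any established library.
\begin{verbatim}
import numpy as np

g, dt = 9.80665, 0.01                      # standard gravity, assumed time step
F = np.array([[1.0, dt], [0.0, 1.0]])      # state-transition matrix, Eq. (22)
Bu = -g * np.array([dt**2 / 2.0, dt])      # control-input contribution, Eq. (22)
Q = np.diag([0.002**2, 0.002**2])          # process noise: 2 mm, 2 mm/s
R = np.diag([0.010**2, 0.010**2])          # measurement noise: 10 mm, 10 mm/s

rng = np.random.default_rng(0)
x_true = np.array([10.0, 3.0])             # x0 = 10 m, v0 = 3 m/s
x_hat, P = x_true.copy(), np.eye(2)        # initial estimate and covariance

for k in range(1000):
    # simulate the truth and one noisy measurement (H_k = I_2)
    x_true = F @ x_true + Bu + rng.multivariate_normal(np.zeros(2), Q)
    z = x_true + rng.multivariate_normal(np.zeros(2), R)
    # prediction phase
    x_hat = F @ x_hat + Bu
    P = F @ P @ F.T + Q
    # correction phase
    S = P + R                              # innovation matrix, Eq. (7), H_k = I_2
    K = P @ np.linalg.inv(S)               # optimal gain, Eq. (18)
    x_hat = x_hat + K @ (z - x_hat)        # a posteriori state, Eq. (9)
    P = (np.eye(2) - K) @ P                # a posteriori covariance, Eq. (19)
\end{verbatim}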
Using the initial conditions of $x_0 = 10$ m and $v_0 = 3$ m/s, adding white Gaussian noise to the data (process: $2$ mm and $2$ mm/s; measurement: $10$ mm and $10$ mm/s), and applying the filter to $1\,000$
measurements of the height and velocity ($K=2$) for $H_k=I_2$, one obtains Figs.~\ref{fig:Altitude} and \ref{fig:Velocity}. Evidently, the filter is successful in reducing the noise present in the raw
measurements.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{Altitude.eps}
\caption{\label{fig:Altitude}Difference of the position (height) to the exact solution corresponding to the free-fall example of Section \ref{sec:BKFExample}: the raw data are represented by the points,
the filtered data by the black zigzag line. The green line marks the level of the exact noise-free solution.}
\vspace{0.35cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{Velocity.eps}
\caption{\label{fig:Velocity}Same as Fig.~\ref{fig:Altitude} for the velocity.}
\vspace{0.35cm}
\end{center}
\end{figure}
In comparison, the time dependence of the height and of the velocity is also shown in case that only the height is monitored ($K=1$), see Figs.~\ref{fig:AltitudeO1M} and \ref{fig:VelocityO1M} for the
same initial conditions as before. Evidently, the filter is successful in reducing the noise present in the measurements of the height. On the other hand, some systematic effects are seen in
Fig.~\ref{fig:VelocityO1M}.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{AltitudeO1M.eps}
\caption{\label{fig:AltitudeO1M}Same as Fig.~\ref{fig:Altitude} in the case that only the height of the object is measured ($K=1$).}
\vspace{0.35cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{VelocityO1M.eps}
\caption{\label{fig:VelocityO1M}Same as Fig.~\ref{fig:Velocity} in the case that only the height of the object is measured ($K=1$).}
\vspace{0.35cm}
\end{center}
\end{figure}
At this point, one remark needs to be made. The BKF reduces the noise level; it does not eliminate it. This implies that, if a smooth solution is sought in an application, a smoothing procedure must be
applied to the filtered data. In such a case, the process of applying the BKF to the raw measurements might have been an unnecessary step; an optimisation scheme (i.e., directly fitting to the measurements)
might have been a more appropriate approach.
\section{\label{sec:EKF}The Extended Kalman Filter}
In the EKF, the state-transition matrix $F$ and the state-to-measurement transformation matrix $H$ are replaced by differentiable functions, to be denoted by $f$ and $h$, respectively. I will assume that
the function $f$ also includes the effects of the control inputs. Equations (\ref{eq:EQ001},\ref{eq:EQ002}) take the form:
\begin{equation} \label{eq:EQ023}
x_k = f(x_{k-1},u_k) + q_k
\end{equation}
and
\begin{equation} \label{eq:EQ024}
z_k = h(x_k) + r_k \, \, \, .
\end{equation}
To make use of the prediction and correction expressions developed in the case of the BKF, i.e., of the scheme outlined by Eqs.~(\ref{eq:EQ003}-\ref{eq:EQ010}), the functions $f$ and $h$ are linearised
around the state estimate, and the corresponding Jacobian matrices of the transformations $x_{k-1} \to x_k$ and $x_k \to z_k$ are analytically evaluated and supplied to the algorithm. Equations
(\ref{eq:EQ003},\ref{eq:EQ005}) take the form:
\begin{equation} \label{eq:EQ025}
\hat{x}_{k \vert k-1} = f(\hat{x}_{k-1 \vert k-1},u_k)
\end{equation}
and
\begin{equation} \label{eq:EQ026}
\hat{z}_k = h(\hat{x}_{k \vert k-1}) \, \, \, .
\end{equation}
The matrices $F_k$ and $H_k$ in Eqs.~(\ref{eq:EQ004},\ref{eq:EQ007}-\ref{eq:EQ010}) must be replaced by the corresponding Jacobians.
\subsection{\label{sec:EKFExample}Example of an application of the Extended Kalman Filter}
The Lotka-Volterra equations~\footnote{The first version of these equations appeared in a paper by Lotka in the beginning of the $20^{\rm th}$ century \cite{lot}.} are non-linear, first-order, differential
equations, which are frequently employed in the modelling of the dynamics of isolated ecosystems, consisting of a number of interacting (in terms of their dietary habits) biological species. The simplest
of these ecosystems involves just two species: one serving as prey (herbivores), another as predators (preying on the herbivores). The temporal evolution of such a two-species ecosystem is deterministic
and may be modelled according to the following set of equations.
\begin{align} \label{eq:EQ027}
\frac{dx}{dt} &= x (\alpha - \beta y)\nonumber\\
\frac{dy}{dt} &= y (-\gamma + \delta x) \, \, \, ,
\end{align}
where $x$ and $y$ denote the prey and predator populations, respectively; $\alpha$, $\beta$, $\gamma$, and $\delta$ are positive parameters. A numerical solution to this set of equations may be obtained with
standard solvers, e.g., see Ref.~\cite{ptvf} (Chapter 17).
In the absence of predators ($y=0$), the first equation yields an exponentially growing prey population (which, of course, is unrealistic as vegetation will become scarce at some time, resulting in the
decrease of the population). The decrease of the prey population due to predation is assumed to be proportional to the probability at which predators and prey occupy the same regions of spacetime; this
probability involves the product $x y$. On the other hand, in the absence of prey ($x=0$), the predator population is bound to decrease exponentially with time; this is taken into account by the negative
sign of the term $\gamma y$ in the second of Eqs.~(\ref{eq:EQ027}). Finally, the reproduction rate of the predators is related to the availability of food, i.e., to the probability (once again) of the
encounters between predators and prey (term $\delta x y$).
One example of the temporal evolution of a two-species ecosystem is illustrated in Fig.~\ref{fig:PreyPredator}. As expected, the two populations are intricately interrelated. With abundant prey, the predator
population rises; this leads to the decrease of the prey population; this leads to the decrease of the predator population; this leads to the increase of the prey population, and so on. One observes that the
two scatter plots $x(t)$ and $y(t)$ are somewhat shifted (in time) with respect to each other, the prey-population maximum preceding that of the predators.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{PreyPredator.eps}
\caption{\label{fig:PreyPredator}One example of the temporal evolution of a two-species ecosystem. The parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ were fixed to $1.0$, $0.2$, $5.0$, and $0.3$,
respectively. The initial conditions for the populations were: $x(0)=y(0)=10$.}
\vspace{0.35cm}
\end{center}
\end{figure}
From Eqs.~(\ref{eq:EQ027}), one obtains the approximate expressions:
\begin{align} \label{eq:EQ028}
x_k &= x_{k-1} + x_{k-1} (\alpha - \beta y_{k-1}) \Delta t_{k-1}\nonumber\\
y_k &= y_{k-1} + y_{k-1} (-\gamma + \delta x_{k-1}) \Delta t_{k-1} \, \, \, ,
\end{align}
yielding the Jacobian matrix:
\begin{equation} \label{eq:EQ029}
F_{x_k/x_{k-1}} =
\left(
\begin{array}{cc}
1 + \alpha \Delta t_{k-1} - \beta y_{k-1} \Delta t_{k-1} & - \beta x_{k-1} \Delta t_{k-1}\\
\delta y_{k-1} \Delta t_{k-1} & 1 - \gamma \Delta t_{k-1} + \delta x_{k-1} \Delta t_{k-1}
\end{array} \right)
\end{equation}
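To make the procedure concrete, a minimal sketch of the EKF for this system is given below (in Python with NumPy): the function $f$ implements the discretised Eqs.~(\ref{eq:EQ028}), the Jacobian is that of Eq.~(\ref{eq:EQ029}), and the state-to-measurement function is the identity ($H_k=I_2$); the noise levels are those quoted in the next paragraph, whereas the time step and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

a, b, c, dd = 1.0, 0.2, 5.0, 0.3           # alpha, beta, gamma, delta
dt = 0.01                                  # assumed time step
Q = np.diag([0.2**2, 0.2**2])              # process-noise covariance
R = np.diag([1.0**2, 1.0**2])              # measurement-noise covariance

def f(s):                                  # Euler step, Eqs. (28)
    x, y = s
    return np.array([x + x * (a - b * y) * dt,
                     y + y * (-c + dd * x) * dt])

def jac(s):                                # Jacobian matrix, Eq. (29)
    x, y = s
    return np.array([[1 + (a - b * y) * dt, -b * x * dt],
                     [dd * y * dt, 1 + (-c + dd * x) * dt]])

rng = np.random.default_rng(0)
truth = np.array([10.0, 10.0])
x_hat, P = truth.copy(), np.eye(2)
for k in range(1000):
    truth = f(truth) + rng.multivariate_normal(np.zeros(2), Q)
    z = truth + rng.multivariate_normal(np.zeros(2), R)     # H_k = I_2
    F = jac(x_hat)                         # linearisation about the estimate
    x_hat = f(x_hat)                       # prediction, Eq. (25)
    P = F @ P @ F.T + Q
    K = P @ np.linalg.inv(P + R)           # optimal gain for H_k = I_2
    x_hat = x_hat + K @ (z - x_hat)        # correction
    P = (np.eye(2) - K) @ P
\end{verbatim}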
Using the initial conditions of $x_0 = y_0 = 10$, adding white Gaussian noise to the data (process: $0.2$; measurement: $1.0$), and applying the filter to $1\,000$ measurements of the two populations
($K=2$) for $H_k=I_2$, one obtains Figs.~\ref{fig:Prey} and \ref{fig:Predator}.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{Prey.eps}
\caption{\label{fig:Prey}Difference of the solution for the prey population to the numerical solution of the set of Eqs.~(\ref{eq:EQ027}), used in the modelling of the predator-prey problem of Section
\ref{sec:EKFExample}: the raw data are represented by the points, the filtered data by the black zigzag line. The green line marks the level of the exact, noise-free solution.}
\vspace{0.35cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{Predator.eps}
\caption{\label{fig:Predator}Same as Fig.~\ref{fig:Prey} for the predator population.}
\vspace{0.35cm}
\end{center}
\end{figure}
\section{\label{sec:UKF}The Unscented Kalman Filter}
It has been demonstrated that the EKF performs poorly when the functions $f$ and $h$ of Eqs.~(\ref{eq:EQ023},\ref{eq:EQ024}) are highly non-linear \cite{ju1,ju2,wm,kfi}. The selective (deterministic)
sampling of these functions, rather than the use of single points in the derivation of predictions, has been put forth as the `treatment plan' for the linearisation problem: the UKF was thus established.
One frequently advertised feature of this algorithm is that no Jacobian matrices need to be evaluated. In my opinion, the emphasis which many authors place on this `advantage' is misleading and
counter-productive, as it diverts attention away from the main feature of the UKF, which is the inclusion of higher-order effects in the Taylor expansions of the non-linear functions $f$ and $h$.
I will next outline the procedure of applying the UKF, starting from the estimates $\hat{x}_{k-1 \vert k-1}$ and $P_{k-1 \vert k-1}$. Compact implementations of the UKF, inspired by Refs.~\cite{ju1,ju2,wm},
may be found in the literature, e.g., see Ref.~\cite{mw} (and several other later works); most of these implementations rest upon operations with enhanced state vector and covariance matrices, after the
incorporation of the effects (i.e., of the expectation values and covariance matrices) of the process and/or of the measurement noise. However, I refrain from sacrificing straightforwardness for elegance,
and describe herein only what is known as `standard' implementation of the UKF. A commendable effort towards explaining the application of the UKF to continuous-time filtering problems may be found in
Ref.~\cite{sarkka}. A schematic overview of the UKF is given in Fig.~\ref{fig:UKFScheme}.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{UKFScheme.eps}
\caption{\label{fig:UKFScheme}The adaptation of Fig.~\ref{fig:BKFScheme} for the Unscented Kalman Filter. The essential differences between the two figures pertain to the use of the functions $f$ and $h$
in the mappings $x_{k-1} \to x_k$ and $x_k \to z_k$, respectively, as well as to the introduction of $2 N + 1$ representative points (i.e., of the sigma points) in the derivation of the predictions.}
\vspace{0.35cm}
\end{center}
\end{figure}
In the prediction phase, one first selects $2 N + 1$ representative points around $\hat{x}_{k-1 \vert k-1}$ (e.g., the so-called sigma points), propagates these points through the non-linear function $f$,
and obtains estimates for $\hat{x}_{k \vert k-1}$ and $P_{k \vert k-1}$ as weighted averages over the transformed points. These estimates are expected to perform better than their EKF counterparts (which
are solely based on one single point, namely $\hat{x}_{k-1 \vert k-1}$), the improvement originating in the inclusion of the higher-order effects in the mapping of the function $f$ (and, in the correction
phase, of $h$). There is no unique prescription for drawing the sigma points. One of the possibilities, utilising two sigma points per component of the vector $\hat{x}_{k-1 \vert k-1}$ (as well as the
central value $\hat{x}_{k-1 \vert k-1}$), is outlined as follows.
\begin{align} \label{eq:EQ030}
\bar{x}^0_{k-1 \vert k-1} &= \hat{x}_{k-1 \vert k-1}\nonumber\\
\bar{x}^i_{k-1 \vert k-1} &= \hat{x}_{k-1 \vert k-1} + \mathcal{X}_i \text{, for $1 \leq i \leq N$}\nonumber\\
\bar{x}^i_{k-1 \vert k-1} &= \hat{x}_{k-1 \vert k-1} - \mathcal{X}_{i-N} \text{, for $N + 1 \leq i \leq 2 N$}
\end{align}
The quantity $\mathcal{X}_j$ in Eqs.~(\ref{eq:EQ030}) represents the $j^{\rm th}$ column of the `square root' of the matrix
\begin{equation*}
(N + \lambda) P_{k-1 \vert k-1} \, \, \, ,
\end{equation*}
where $\lambda$ will be dealt with shortly. The Cholesky decomposition is a robust method for obtaining the `square root' matrix, e.g., see Ref.~\cite{ptvf} (Chapter 2.9). The sigma points come with two
types of weights: one set ($w^i_x$) for the determination of state and measurement predictions, another ($w^i_p$) for the various covariance matrices relating to the method. These weights are defined as
follows.
\begin{align} \label{eq:EQ031}
w^0_x &= \frac{\lambda}{N + \lambda}\nonumber\\
w^0_p &= w^0_x + 1 - \alpha^2 + \beta\nonumber\\
w^i_x &= w^i_p = \frac{1}{2(N + \lambda)}
\end{align}
The weights $w^i_x$ are normalised:
\begin{equation*}
\sum_{i=0}^{2 N} w^i_x = 1 \, \, \, .
\end{equation*}
The scaling parameter $\lambda$ is expressed in terms of two parameters, $\alpha \in (0,1]$ and $\kappa$, according to the formula:
\begin{equation} \label{eq:EQ032}
\lambda = \alpha^2 (N + \kappa) - N \, \, \, .
\end{equation}
Obviously, three parameters need adjustment before applying the UKF: $\alpha$, $\beta$, and $\kappa$. The parameters $\alpha$ and $\kappa$ regulate the range of values of the sigma points: the distance
between corresponding sigma points widens with increasing values of these two parameters. A general method for obtaining the optimal values of $\alpha$, $\beta$, and $\kappa$ is still to be established;
these parameters may be `fine-tuned', aiming at the optimisation of the results in each particular problem. In case of Gaussian distributions, the choice $\beta = 2$ is optimal \cite{wm}.
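For clarity, a minimal sketch of the construction of the sigma points and weights of Eqs.~(\ref{eq:EQ030}-\ref{eq:EQ032}) is given below (in Python with NumPy, using the Cholesky decomposition for the `square root' matrix); the default parameter values correspond to those used later in Section \ref{sec:UKFExample}, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def sigma_points_and_weights(x_hat, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma points of Eqs. (30) and weights of Eqs. (31), (32)."""
    N = x_hat.size
    lam = alpha**2 * (N + kappa) - N           # Eq. (32)
    S = np.linalg.cholesky((N + lam) * P)      # 'square root' matrix
    pts = np.vstack([x_hat,                    # central point
                     x_hat + S.T,              # columns of S, added ...
                     x_hat - S.T])             # ... and subtracted
    w_x = np.full(2 * N + 1, 0.5 / (N + lam))
    w_p = w_x.copy()
    w_x[0] = lam / (N + lam)                   # Eqs. (31)
    w_p[0] = w_x[0] + 1.0 - alpha**2 + beta
    return pts, w_x, w_p
\end{verbatim}
One may verify numerically that the weights $w^i_x$ returned by this function sum up to unity.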
In the correction phase, one selects a new set of $2 N + 1$ representative points around $\hat{x}_{k \vert k-1}$ (taking the updated covariance matrix $P_{k \vert k-1}$ into account, in accordance with
Eq.~(\ref{eq:EQ005})), propagates these points through the non-linear function $h$, and obtains estimates for $\hat{x}_{k \vert k}$ and $P_{k \vert k}$ as weighted averages over the transformed points.
Two sigma points per component of the vector $\hat{x}_{k \vert k-1}$ (as well as the central value), are selected.
\begin{align} \label{eq:EQ033}
\bar{y}^0_{k \vert k-1} &= \hat{x}_{k \vert k-1}\nonumber\\
\bar{y}^i_{k \vert k-1} &= \hat{x}_{k \vert k-1} + \mathcal{Y}_i \text{, for $1 \leq i \leq N$}\nonumber\\
\bar{y}^i_{k \vert k-1} &= \hat{x}_{k \vert k-1} - \mathcal{Y}_{i-N} \text{, for $N + 1 \leq i \leq 2 N$}
\end{align}
The quantity $\mathcal{Y}_j$ represents the $j^{\rm th}$ column of the `square root' of the matrix
\begin{equation*}
(N + \lambda) P_{k \vert k-1} \, \, \, .
\end{equation*}
Apart from the replacement of a single point by a set of appropriately selected points, one additional modification over the steps outlined by Eqs.~(\ref{eq:EQ005}-\ref{eq:EQ010}) is worth mentioning:
in the UKF, the optimal gain matrix $K_k$ involves the so-called state-measurement cross-covariance matrix; the product $P_{k \vert k-1} H_k^T$ of Eq.~(\ref{eq:EQ008}) is replaced by this matrix.
{\bf Prediction equations}
The two prediction equations take the form:
\begin{itemize}
\item \emph{A priori} state estimate ($N$-dimensional ket)
\begin{equation} \label{eq:EQ034}
\hat{x}_{k \vert k-1} = \sum^{2 N}_{i=0} w^i_x f(\bar{x}^i_{k-1 \vert k-1})
\end{equation}
\item \emph{A priori} covariance matrix ($N \times N$ matrix)
\begin{equation} \label{eq:EQ035}
P_{k \vert k-1} = \sum^{2 N}_{i=0} w^i_p \left( f(\bar{x}^i_{k-1 \vert k-1}) - \hat{x}_{k \vert k-1} \right) \left( f(\bar{x}^i_{k-1 \vert k-1}) - \hat{x}_{k \vert k-1} \right)^T + Q_k
\end{equation}
\end{itemize}
{\bf Correction equations}
The correction equations take the form:
\begin{itemize}
\item Predicted measurements ($K$-dimensional ket)
\begin{equation} \label{eq:EQ036}
\hat{z}_k = \sum^{2 N}_{i=0} w^i_x h(\bar{y}^i_{k \vert k-1})
\end{equation}
\item Residuals ($K$-dimensional ket)
\begin{equation} \label{eq:EQ037}
\tilde{z}_k = z_k - \hat{z}_k
\end{equation}
\item Innovation matrix ($K \times K$ matrix)
\begin{equation} \label{eq:EQ038}
S_k = \sum^{2 N}_{i=0} w^i_p \left( h(\bar{y}^i_{k \vert k-1}) - \hat{z}_k \right) \left( h(\bar{y}^i_{k \vert k-1}) - \hat{z}_k \right)^T + R_k
\end{equation}
\item State-measurement cross-covariance matrix ($N \times K$ matrix)
\begin{equation} \label{eq:EQ039}
C_k = \sum^{2 N}_{i=0} w^i_p \left( f(\bar{x}^i_{k-1 \vert k-1}) - \hat{x}_{k \vert k-1} \right) \left( h(\bar{y}^i_{k \vert k-1}) - \hat{z}_k \right)^T
\end{equation}
\item Optimal gain ($N \times K$ matrix)
\begin{equation} \label{eq:EQ040}
K_k = C_k S_k^{-1}
\end{equation}
\item \emph{A posteriori} state estimate ($N$-dimensional ket)
\begin{equation} \label{eq:EQ041}
\hat{x}_{k \vert k} = \hat{x}_{k \vert k-1} + K_k \tilde{z}_k
\end{equation}
\item \emph{A posteriori} covariance matrix ($N \times N$ matrix)
\begin{equation} \label{eq:EQ042}
P_{k \vert k} = P_{k \vert k-1} - K_k S_k K_k^T
\end{equation}
\end{itemize}
The weights $w^i_x$ and $w^i_p$ (identical in both phases) have been detailed in Eqs.~(\ref{eq:EQ031}).
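Putting the pieces together, one UKF iteration may be sketched as follows (in Python with NumPy); the sigma points are drawn as in Eqs.~(\ref{eq:EQ030},\ref{eq:EQ033}), whereas the functions $f$ and $h$, the covariance matrices $Q$ and $R$, the weights, and $\lambda$ are supplied by the user, and all names are illustrative rather than part of any established library.
\begin{verbatim}
import numpy as np

def sigma_points(x_hat, P, lam):
    """Central point plus two points per component, Eqs. (30)/(33)."""
    S = np.linalg.cholesky((x_hat.size + lam) * P)
    return np.vstack([x_hat, x_hat + S.T, x_hat - S.T])

def ukf_step(x_hat, P, z, f, h, Q, R, w_x, w_p, lam):
    # prediction phase
    X = sigma_points(x_hat, P, lam)
    FX = np.array([f(p) for p in X])
    x_pred = w_x @ FX                                      # Eq. (34)
    P_pred = (w_p * (FX - x_pred).T) @ (FX - x_pred) + Q   # Eq. (35)
    # correction phase
    Y = sigma_points(x_pred, P_pred, lam)
    HY = np.array([h(p) for p in Y])
    z_pred = w_x @ HY                                      # Eq. (36)
    S = (w_p * (HY - z_pred).T) @ (HY - z_pred) + R        # Eq. (38)
    C = (w_p * (FX - x_pred).T) @ (HY - z_pred)            # Eq. (39)
    K = C @ np.linalg.inv(S)                               # Eq. (40)
    x_new = x_pred + K @ (z - z_pred)                      # Eqs. (37), (41)
    P_new = P_pred - K @ S @ K.T                           # Eq. (42)
    return x_new, P_new
\end{verbatim}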
\subsection{\label{sec:UKFExample}Example of an application of the Unscented Kalman Filter}
The re-entry problem (i.e., the motion of an Earth-bound spacecraft, impacting the Earth's atmosphere), tailored to the case of the Space Shuttle, is described in Appendix \ref{App:AppB}. The position and
the direction of motion of the incoming spacecraft are monitored by terrestrial radars, providing the measurements (for the filters). Mehra \cite{mehra} compared the performance of several non-linear filters
on such data, including two EKFs (formulated in two different coordinate systems), whereas Chang, Whiting, and Athans \cite{caw} addressed the modelling accuracy and complexity, as well as the real-time
processing requirements, also exploring techniques compensating for modelling inaccuracies by increasing the process noise. Austin and Leondes \cite{al} developed a robust filter, based on statistical
linearisation. More recently, Crassidis and Markley \cite{cm}, as well as a number of other authors with contributions in the Proceedings of various AIAA and IEEE Conferences, dealt with the re-entry problem,
which is generally regarded as stressful for filters and trackers \cite{ju1,ju2,mehra,al}.
One simplification of the problem is achieved by considering the $3$-dimensional motion of the vehicle as planar; in this approximation, the object remains on the plane defined by the velocity vector at
the beginning of the re-entry and the centre of gravity, see Fig.~\ref{fig:ReentryGeometry}.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{ReentryGeometry.eps}
\caption{\label{fig:ReentryGeometry}The coordinate system used in the re-entry problem. Point $A$ corresponds to the position of the vehicle at time $t=0$; as in Ref.~\cite{ju1}, an initial altitude of
$131.632$ km is assumed. Figure (a), drawn to scale, demonstrates the smallness of this altitude in comparison to the radius of the Earth. Figure (b), also drawn to scale, is a close-up of Fig.~(a) around
points $A$ and $B$; the axis $x^\prime_2$ is parallel to $x_2$. The green vector indicates the initial velocity of the vehicle ($-1.8093$ km/s,$-6.7967$ km/s) \cite{ju1}. The coordinate system ($x_1$,$x^\prime_2$),
hence also ($x_1$,$x_2$), may be chosen at will; however, it makes sense to place the origin of the system in the vicinity of the region of interest, i.e., of the points $A$ and $B$.}
\vspace{0.35cm}
\end{center}
\end{figure}
The dynamical model, followed in Ref.~\cite{ju1}, attempts the determination of the motion on the basis of three forces.
\begin{itemize}
\item The aerodynamic drag. This velocity- and altitude-dependent force is the dominant one.
\item The gravitational force.
\item Forces associated with random buffeting effects.
\end{itemize}
The state of the system is assumed to be a $5$-dimensional vector, comprising the coordinates $x_1$ and $x_2$ of the position vector, the corresponding velocities $x_3=\dot{x}_1$ and $x_4=\dot{x}_2$, as
well as one term associated with the aerodynamic aspects of the motion ($x_5$). The rate of change of the two velocity components and of $x_5$, including the process noise $q$ of Eq.~(\ref{eq:EQ001}), is
given in Ref.~\cite{ju1} as:
\begin{align} \label{eq:EQ043}
\dot{x}_3 &= A x_3 + B x_1 + q_3\nonumber\\
\dot{x}_4 &= A x_4 + B x_2 + q_4\nonumber\\
\dot{x}_5 &= q_5 \, \, \, ,
\end{align}
where
\begin{align} \label{eq:EQ044}
A &= -\gamma \exp \left( \frac{R - r}{r_c} \right) v\nonumber\\
B &= -\frac{G_N M}{r^3}\nonumber\\
\gamma &= \gamma_0 \exp(x_5)\nonumber\\
r &= \sqrt{x^2_1+x^2_2}\nonumber\\
v &= \sqrt{x^2_3+x^2_4} \, \, \, .
\end{align}
For the sake of brevity, all dependences on the appropriate variables, as well as on time, are not explicitly given in Eqs.~(\ref{eq:EQ043},\ref{eq:EQ044}). The constants are fixed as follows: $r_c=13.406$
km \cite{ju1}, $G_N=6.6738 \cdot 10^{-11}$ m$^3$ kg$^{-1}$ s$^{-2}$, $M=5.9726 \cdot 10^{24}$ kg, and $R=6\,378.137$ km \cite{pdg}.
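As an illustration of the model, the deterministic part of Eqs.~(\ref{eq:EQ043},\ref{eq:EQ044}) (i.e., excluding the process noise) may be coded as follows (in Python with NumPy, with lengths in km and $G_N$ converted accordingly; the function name is illustrative); the returned vector is the time derivative of the full state and may be handed to any standard ODE integrator. The positive sign of $\gamma_0$ anticipates the discussion below.
\begin{verbatim}
import numpy as np

G_N = 6.6738e-20                             # km^3 kg^-1 s^-2 (converted from SI)
M, R0, r_c = 5.9726e24, 6378.137, 13.406     # kg, km, km
gamma0 = 0.59783                             # km^-1, positive sign

def state_derivative(s):
    """Deterministic part of Eqs. (43), (44); s = (x1, x2, x3, x4, x5)."""
    x1, x2, x3, x4, x5 = s
    r = np.hypot(x1, x2)                     # distance from the centre of gravity
    v = np.hypot(x3, x4)                     # speed
    A = -gamma0 * np.exp(x5) * np.exp((R0 - r) / r_c) * v
    B = -G_N * M / r**3
    return np.array([x3, x4, A * x3 + B * x1, A * x4 + B * x2, 0.0])
\end{verbatim}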
Regarding this problem, the reader should be warned that Refs.~\cite{ju1,ju2} contain a number of mistakes; a subsequent short communication \cite{ju3} attempted corrections to some of these flaws, but did
not cover all the issues raised below. The corrected flaws are as follows.
\begin{itemize}
\item A representative value of the ballistic coefficient $\gamma_0$ (this parameter being denoted as $\beta_0$ therein) was given in Refs.~\cite{ju1,ju2} as $-0.59783$ km$^{-1}$ (in both papers, most of
the physical units were omitted; SI base units were generally assumed, save for the lengths which were expressed in km). However, given the overall sign of the function $A$ (denoted as $D$ therein), a
negative $\gamma_0$ value would inevitably result in a drag force pointing toward the centre of gravity, i.e., in the direction of motion; therefore, when adopting the negative sign of Refs.~\cite{ju1,ju2}
(see first of Eqs.~(\ref{eq:EQ044})), the correct $\gamma_0$ value is $+0.59783$ km$^{-1}$. Reference \cite{ju3} provided a correction by reverting the overall sign of the function $A$ (and retaining the
$\gamma_0$ value given in Refs.~\cite{ju1,ju2}).
\item In the formula of the gravitational force, the quantity $r$ in Refs.~\cite{ju1,ju2} was replaced by $R$ \cite{ju3}.
\item The covariance matrix of the process noise $Q_k$, given in Refs.~\cite{ju1,ju2}, corresponds to the input in the Monte-Carlo simulation; the matrix driving the filter (to be given shortly) contains
a non-zero diagonal element for the state-vector component $x_5$ \cite{ju3}.
\item Figures 5 \cite{ju1} and 9 \cite{ju2} are confusing. The caption of the figures refers to one solid line, yet two solid curves appear in each of these figures; the dots of the dotted curves cannot
be discerned, and these curves also appear as solid (thicker, however, than the curves which were intended to be `solid lines'). Reference \cite{ju3} provided a set of improved figures.
\end{itemize}
For some inexplicable reason, the reviewers of these two works missed all these obvious problems. Two additional flaws in Refs.~\cite{ju1,ju2} remain uncorrected: one is an obvious `slip of the pen',
whereas the second one is rather serious. Regarding the former, the statistical term `variance' is used in Section 4 of Ref.~\cite{ju1}, at the point where the covariance matrix of the measurement noise
is introduced, rather than `standard deviation'; that passage was obviously copied-and-pasted to part B of Section VI of Ref.~\cite{ju2}, which contains the same mistake. The serious flaw will be discussed
shortly.
Assumed in Ref.~\cite{ju1} is that the motion of the vehicle is monitored by a radar positioned at ($x_1$,$x_2$)=($x^r_1$,$x^r_2$); I will use $x^r_1=R$ and $x^r_2=0$, i.e., a radar positioned at point $B$
in Fig.~\ref{fig:ReentryGeometry}. The radar of Ref.~\cite{ju1} provides measurements of the distance $d$, as well as of the elevation angle $\theta$, at the sampling rate of $10$ Hz. Evidently,
\begin{align} \label{eq:EQ045}
d &= \sqrt{(x_1-x^r_1)^2+ (x_2-x^r_2)^2} + r_1\nonumber\\
\theta &= \arctan \left( \frac{x_2-x^r_2}{x_1-x^r_1} \right) + r_2 \, \, \, ,
\end{align}
where $r_1$ and $r_2$ represent the two components of the measurement noise; herein, the root-mean-square (rms) of the measurement noise was set to $1$ m for $d$ and $0.17$ mrad for $\theta$ \cite{caw}.
In Fig.~\ref{fig:ReentryGeometry}, the distance $d$ of the first of Eqs.~(\ref{eq:EQ045}) is the magnitude of the position vector of the vehicle in the coordinate system ($x_1$,$x^\prime_2$) with origin
at point $B$; $\theta$ is the angle of the position vector with the $x^\prime_2$ axis.
The second uncorrected flaw of Refs.~\cite{ju1,ju2} relates to the rms of the noise in the angle measurement. On page 102 of Ref.~\cite{caw}, Chang, Whiting, and Athans wrote in 1977: ``The angle measurement
standard deviation is assumed to be $0.17$ mrad.'' Quoting that paper in Refs.~\cite{ju1,ju2}, Julier and Uhlmann explain: ``\dots $w_1(k)$ and $w_2(k)$ [equivalent to the quantities $r_1$ and $r_2$ of
Eqs.~(\ref{eq:EQ045})] are zero mean uncorrelated noise processes with variances [\emph{sic}] of $1$ m and $17$ mrad, respectively.'' Up to the present time, it remains a mystery why the noise level in the
measurement of the angle in Refs.~\cite{ju1,ju2} was increased by two orders of magnitude (that is, if the value quoted in Refs.~\cite{ju1,ju2} is not one additional mistype).
The actual state of the system at $t=0$ is given in Ref.~\cite{ju1} as
\begin{equation} \label{eq:EQ046}
x_0
= \left(
\begin{array}{c}
(6\,500.4000 \pm 0.0010) \text{ km}\\
(349.1400 \pm 0.0010) \text{ km}\\
(-1.8093 \pm 0.0010) \text{ km/s}\\
(-6.7967 \pm 0.0010) \text{ km/s}\\
0.6932
\end{array} \right)
\end{equation}
and the process noise as
\begin{equation} \label{eq:EQ047}
Q_k
= \left(
\begin{array}{ccccc}
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & \sigma_3^2 & 0 & 0\\
0 & 0 & 0 & \sigma_4^2 & 0\\
0 & 0 & 0 & 0 & \sigma_5^2
\end{array} \right)
\end{equation}
where $\sigma^2_3=\sigma^2_4=2.4064 \cdot 10^{-5}$ km$^2$/s$^4$ \cite{ju1,ju2,ju3} and $\sigma_5^2=10^{-6}$ \cite{ju3}.
On the basis of this input, one may obtain series of simulated radar measurements ($d$,$\theta$) and filter the data using the UKF. One representative example of residuals (i.e., of the differences between
the simulated and the filtered data) is displayed in Figs.~\ref{fig:ResidualsReentryDistance} and \ref{fig:ResidualsReentryAngle}. The reduced $\chi^2$ (i.e., the overall $\chi^2$ value divided by the number
of degrees of freedom), obtained on the basis of these residuals for the measurement uncertainties of Ref.~\cite{caw}, comes out equal to about $0.66$. This value suggests that the filtering of the raw
measurements is successful~\footnote{The reduced $\chi^2$ value is significantly smaller than $1$; this is the result of the obvious double-counting of the uncertainties in the simulation and in the processing
of the simulated data.}. Furthermore, the residuals come out independent of the free variable in the problem (i.e., the time $t$); this is the expected result in all successful optimisation schemes. Plots
\ref{fig:ResidualsReentryDistance} and \ref{fig:ResidualsReentryAngle} have been obtained using the $\alpha$, $\beta$, and $\kappa$ values of $10^{-3}$, $2$, and $0$, respectively \cite{wm}. It has been
confirmed that the sensitivity of the present results to the variation of the parameters~\footnote{Kandepu, Foss, and Imsland \cite{kfi} favour larger $\alpha$ values (e.g., they use $\alpha=1$), as well as
the choice $\lambda=3-N$ \cite{ju1}. In Ref.~\cite{sarkka}, S\"arkk\"a also used a `large' $\alpha$ value (namely, $0.5$); however, the comparison of his parameter values with those of other works might not
be appropriate, as it is not clear whether the second of his Eqs.~(10) - i.e., the expression yielding his weight $W^{(c)}_0$, denoted as $w^0_p$ in this work, see the second of Eqs.~(\ref{eq:EQ031}) - is a
mistype or represents the actual formula employed in S\"arkk\"a's work.} $\alpha$ (fixed at $10^{-4}$, $10^{-3}$, $0.1$, $0.5$, and $1$) and $\kappa$ (fixed at $-2$ and $0$) is very low. For $\alpha \geq 10^{-3}$,
the reduced $\chi^2$ varied between $0.66008$ and $0.66016$; somewhat smaller values were obtained for $\alpha = 10^{-4}$, but the temporal dependence of the diagonal elements of the covariance matrix $P$
was rather noisy.
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{ResidualsReentryDistance.eps}
\caption{\label{fig:ResidualsReentryDistance}The time series of the residuals between the simulated and filtered data of the distance $d$ for $0 \leq t \leq 200$ s.}
\vspace{0.35cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15.5cm]{ResidualsReentryAngle.eps}
\caption{\label{fig:ResidualsReentryAngle}The time series of the residuals between the simulated and filtered data of the elevation angle $\theta$ for $0 \leq t \leq 200$ s.}
\vspace{0.35cm}
\end{center}
\end{figure}
The effort notwithstanding, I have not been able to reproduce the temporal dependence of the variance of the state-vector components $x_1$, $x_3$, and $x_5$, given in Refs.~\cite{ju1,ju2,ju3}; unfortunately,
it has not been possible to resolve this issue with the first author of these works. The inability to reproduce these plots may be due to my misinterpretation of the process and measurement covariance
matrices, given in these three papers. S\"arkk\"a, who also studied the re-entry problem in Ref.~\cite{sarkka}, did not report discrepancies with the results of Refs.~\cite{ju1,ju2,ju3}.
The figures in this paper have been produced either with CaRMetal, a free dynamic-geometry software package (GNU-GPL license), first developed by Ren\'e Grothmann and more recently by Eric Hakenholz \cite{CMl}, or
with MATLAB.
\section{Introduction}
The objective of low-rank approximation is to approximate a given matrix by one with low-rank structure. It is an essential tool in many applications like computer vision \cite{turk1991eigenfaces}, signal processing \cite{parker2005signal}, recommender systems \cite{drineas2002competitive}, natural language processing \cite{ren2019label}, machine learning \cite{murphy2012machine}, principal component analysis (PCA) \cite{wold1987principal} and data mining \cite{skillicorn2007understanding}, to name a few examples. The rationale is that if we know in advance that the given matrix possesses low-rank structure, then computing a low-rank approximation is a neat way to strip off meaningless noise and obtain a more compact representation. The distance between the original matrix and its approximation is usually measured by the Frobenius norm. On this basis, the optimal low-rank approximation can be obtained by truncated singular value decomposition (SVD). However, for a square matrix $A$ of dimension $n$, the computational complexity of the SVD is up to $O(n^3)$. This is prohibitive for large-scale datasets, especially when data are collected sequentially or in parallel; for instance, various applications receive data on the fly over time, including online advertising \cite{zhang2008detecting}, sensor networks \cite{bonnet2001towards} and network traffic \cite{gilbert2001quicksand}.
One popular solution to remedy the computational burden for processing large data matrices is the so-called matrix sketching technique. The main idea is to construct a sketch matrix $B$ which is much smaller than the original matrix $A$ but can retain most of the information of $A$, and then use $B$ instead of $A$ to do the subsequent operations, such as SVD. More precisely, given an $n \times d$ matrix $A$, the goal is to find an $\ell \times d$ matrix $B$ with $\ell \ll n$ such that $A^{T} A \approx B^{T} B$. The efficiency of doing operations on the concisely representable sketch matrix makes this technique widely used in various applications, including dimension reduction~\cite{drineas2006fast}, online learning~\cite{luo2016efficient}, clustering~\cite{yoo2016streaming}, among many others.
For getting an approximate sketch matrix, many randomized techniques have drawn great attention, such as random sampling and random projection. The random sampling technique \cite{bhojanapalli2014tighter,mahoney2011randomized,bach2013sharp} obtains a precise representation of $A$ by sampling a small number of rows or columns and reweighting them. The most well-known random sampling technique is leverage score sampling, in which the sampling probability is proportional to the leverage score of each column. This poses an obvious difficulty: the leverage scores involve the calculation of the singular vectors of $A$, and are thus hard to obtain for streaming and large-scale data. Therefore, one may pay more attention to the random projection technique \cite{halko2011finding}, whose key is to find a random matrix $X$ used to project the original matrix $A$ onto a much smaller matrix $B$. The construction of $X$ should guarantee that $B$ captures the principal subspace of $A$. In addition, even a single pass over the data is sufficient \cite{tropp2017practical}; this enables the approximation of dense matrices that cannot be loaded completely into memory.
\begin{figure*}[htp]
\centering
\subfigure[]{
\includegraphics[width=0.9\columnwidth]{ex22.eps}
}
\subfigure[]{
\includegraphics[width=0.9\columnwidth]{ex12.eps}
}
\caption{The approximation of two synthetic datasets with differently decaying singular spectra: (a) the distribution of singular values; (b) the covariance error calculated as $\|A^T A - B^T B\|_2$ by SpFD10, a variant of the SpFD method \cite{teng2018fast}.}
\label{fig:com}
\end{figure*}
Besides the aforementioned randomized methods for constructing the sketch, a deterministic method named Frequent Directions (FD) \cite{liberty2013simple, ghashami2016frequent} was proposed recently. The inspiration behind this method comes from estimating item frequencies in a stream \cite{misra1982finding}. Precisely, for a given $n \times d$ matrix $A$, FD processes the rows of $A$ in a stream and maintains an $\ell \times d$ sketch $B$. By setting $\ell = k + \frac{k}{\varepsilon}$, it achieves a $(1+\varepsilon)$ best rank-$k$ approximation while costing $O(d\ell)$ space and $O(nd\ell)$ running time. Hence, when facing massive data, such as surveillance video data, the efficiency is still limited due to the space restriction. Several attempts have been made to tackle this issue by combining ideas from random projection \cite{teng2018fast, chen2017frosh, chen2019making}. For example, Teng and Chu \cite{teng2018fast} proposed a method named SpFD that uses the CountSketch matrix to capture more information from the original matrix, and thus can accelerate the computation significantly. However, there is still room for further improvement, considering that the projection procedure is often incapable of compressing the information accurately, which deteriorates the performance. As a typical case shown in Fig.~\ref{fig:com}, when the singular values decay rapidly, SpFD can obtain a good approximation when the sketch size is over 50. However, when the decrease is slow, SpFD cannot provide a satisfactory result even for a larger sketch size. This is mainly because the efficiency of random projection relies on the gap between the $k$-th and the $(k+1)$-th singular values~\cite{musco2015randomized}. Therefore, to improve the accuracy of the randomized variants of FD, there is an urgent need for finding a more accurate projection subspace.
The Krylov subspace method was first introduced for solving eigenvalue problems \cite{lanczos1950iteration}, then generalized by Golub et al. \cite{golub1977block}, and more recently, Musco et al.~\cite{musco2015randomized} brought it into the randomized SVD and established the first gap-free theoretical analysis for low-rank approximation. Compared with another popular technique for improving the precision of low-rank approximation, i.e., subspace iteration~\cite{Gu2014SubspaceIR, halko2011finding}, the Krylov subspace method requires fewer iterations to achieve the same precision, both theoretically and experimentally. This motivates us to apply the Krylov subspace method to accelerate the FD algorithm while maintaining its precision.
To sum up, we propose in this work a fast and accurate Frequent Directions algorithm by incorporating the power of Krylov subspace and random projection techniques. In brief, each block of the matrix $A$ is compressed by a non-oblivious projection matrix constructed from a Krylov subspace, and the result is then embedded into the FD procedure to update the sketch matrix $B$. Our main contributions are summarized as follows:
\begin{itemize}
\item[1)] We propose a new FD algorithm named r-BKIFD that skillfully integrates Block Krylov Iteration into randomized FD, so that a more accurate subspace is obtained during the projection procedure. Taking the dense Gaussian random matrix and the sparse CountSketch matrix as examples, we demonstrate the merits of the proposed r-BKIFD in terms of approximation error and computational speed.
\item[2)] The theoretical analysis shows that our method has comparable theoretical guarantees to the original FD. Specifically, we derive error bounds in terms of both the covariance and projection errors. It is also shown that such error bounds can be made arbitrarily small by choosing an appropriate number of iterations.
\item[3)] Extensive experiments are carried out on both synthetic and real data to show that the proposed r-BKIFD outperforms the traditional FD and its variants in most cases.
\end{itemize}
\noindent \textbf{Notations.} For an $n \times d$ matrix $A$, $a_{i}$ denotes its $i$-th row and $a_{ij}$ denotes its $(i, j)$-th element. The matrix $I_n$ represents the $n$-dimensional identity matrix and $0^{n \times d}$ is the all-zero valued matrix of size $n \times d$. The Frobenius norm of $A$ is defined as $\|A\|_{F}=\sqrt{\sum_{i}\sum_{j}a_{ij}^2}$, and the spectral norm of $A$ is $\|A\|_{2}=\sup _{x \neq 0}\|A x\| /\|x\|$. The rank-$k$ approximation of $A$ is expressed as $A_{k}=U_{k}\Sigma_{k}V_{k}^{T}$, where $A=U\Sigma V^T$ represents the SVD of $A$. Let $\sigma_{i}$ denote the $i$-th singular value of $A$. The notation $nnz(A)$ denotes the number of non-zero entries of $A$, and we use $\widetilde{O}(n)$ to hide logarithmic factors in $n$.
\section{Related work}
As a basic dimension-reduction method, low-rank approximation of large-scale matrices is a ubiquitous tool in scientific computing, machine learning, numerical analysis, and a number of other areas \cite{drineas2006fast, markovsky2012low, ye2005generalized}. It can be formulated as follows: for a given matrix $M$ and an input parameter $k$, one would like to find a matrix $M^{\prime}$ of rank at most $k$ that minimizes the Frobenius norm of the discrepancy between $M$ and $M^{\prime}$, i.e.,
$$
\min _{\operatorname{rank}(M^{\prime}) \leq k}\|M-M^{\prime}\|_{F}.
$$
The classic Eckart-Young-Mirsky theorem shows that the best low-rank matrix approximation can be obtained from the truncated SVD. However, for a matrix $M \in \mathbb{R}^{n \times d}$, the computational complexity of calculating the truncated SVD is $O (nd^{2})$, which is unacceptable for large-scale matrix data. Sketching algorithms have been proposed to alleviate this heavy computational cost by mapping the input matrix to a smaller surrogate matrix called a sketch, so that one can perform the low-rank approximation on such a sketch instead.
\subsection{Randomized Sketching Techniques}
Popular matrix sketching techniques include many randomized algorithms, such as random sampling \cite{boutsidis2014near} and random projection \cite{bingham2001random}. Random sampling forms a sketch by finding the small subset of rows or columns based on a pre-defined probability distribution. Random projection allows the original matrix to be efficiently processed in a lower dimensional space by using a random matrix, such as Gaussian \cite{dasgupta1999elementary}, CountSketch \cite{clarkson2017low} and subsampled randomized Hadamard transform (SRHT) \cite{tropp2011improved}. The Johnson-Lindenstrauss (JL) Lemma \cite{lindenstrauss1984extensions} shows that such random matrices can preserve the pairwise distance between any two data points. It is known that the $d\times m$ Gaussian random matrix $S$ is defined in the form of $S=\frac{1}{\sqrt{m}}G$, where each entry of $G$ is sampled i.i.d. from $\mathcal{N}(0,1)$. And the CountSketch matrix stems from estimating the most frequent items in a data stream \cite{charikar2002finding}, further applied in performing low-rank approximation \cite{clarkson2017low}. Mathematically, it is constructed as $X = D \Phi^{T}\in \mathbb{R}^{d \times m}$, where
\begin{itemize}
\item $\Phi \in \mathbb{R}^{m \times d}$ is a binary matrix with $\Phi_{h(i), i } = 1$ and $\Phi_{j, i } = 0$ for all $j \ne h(i)$. Here $h$ is a uniformly random map from $[d] \rightarrow [m]$;
\item $D$ is a $d\times d$ diagonal matrix with each diagonal element chosen from $\{-1,1\}$ with equal probability.
\end{itemize}
Note that the CountSketch matrix is extremely sparse owing to its structure of one non-zero element per row. Thus, given the input matrix $A\in\mathbb{R}^{n \times d}$ and the CountSketch matrix $X\in\mathbb{R}^{d \times m}$, the computational cost of obtaining $AX$ is $O(nnz(A))$, which is superior to the costs of $O(ndm)$ for Gaussian and $O(nd\log(m))$ for SRHT \cite{tropp2011improved, ailon2009fast}. Clarkson $\&$ Woodruff \cite{clarkson2017low} further illustrated that CountSketch is the fastest known procedure for low-rank approximation. It is therefore well suited to sparse data and has been frequently applied in various applications, including differential privacy \cite{balu2016differentially} and deep learning \cite{cui2017kernel}, among others.
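To make the construction concrete, a minimal sketch of building $X = D \Phi^{T}$ as a sparse matrix and applying it is given below (in Python with NumPy and SciPy); the dimensions, seeds, and the function name are illustrative.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def countsketch(d, m, rng):
    """Sparse d x m CountSketch matrix X = D * Phi^T."""
    rows = np.arange(d)                      # one non-zero entry per row
    cols = rng.integers(0, m, size=d)        # the uniform map h: [d] -> [m]
    signs = rng.choice([-1.0, 1.0], size=d)  # the diagonal of D
    return csr_matrix((signs, (rows, cols)), shape=(d, m))

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 200))
X = countsketch(200, 50, rng)
SA = X.T @ A.T                               # sparse-times-dense, O(nnz(A))
AX = SA.T                                    # the 1000 x 50 sketch A X
\end{verbatim}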
\subsection{Frequent Directions}
Different from the above randomized techniques, the Frequent Directions (FD) algorithm was first proposed by Liberty as a deterministic matrix sketching technique in \cite{liberty2013simple}. Instead of projecting or sampling the whole matrix at once, it processes the matrix in a row-update approach. That is, given an input matrix $A \in \mathbb{R}^{n \times d}$, the goal is to construct a sketch matrix $B \in \mathbb{R}^{(\ell-1) \times d}$ which is much smaller than $A$ but is still a good estimate. Precisely, the matrix $B$ is initialized to an all-zero valued matrix. We insert the rows of $A$ into $B$ until it is full. Then a shrinkage procedure is conducted by computing the SVD and subtracting the squared $\ell$-th singular value from all squared singular values. Considering that the last row of $B$ is always all-zero valued after the shrinkage procedure, we can continue inserting rows until all rows are processed. The running time is dominated by the SVD, which takes $O(d \ell^2)$ time, so the total time cost is $O(n d \ell^2)$. Later, Ghashami et al. \cite{ghashami2016frequent} modified the original FD by doubling the space of $B$ to reduce the running time to $O(n d \ell)$. See more details in Algorithm $\ref{Fast-FD}$. It has been shown that FD satisfies the following error bound for any $k< \ell$,
\begin{align} \label{FD-property}
\left\|A^{ T} A-B^{T} B\right\|_{2}
\leq \frac{ \left\|A-A_{k}\right\|_{F}^{2}}{\ell-k}.
\end{align}
Note that setting \begin{small}$\ell=\lceil k+1/\varepsilon\rceil$\end{small} yields an error of \begin{small}$\varepsilon \left\|A-A_{k}\right\|_{F}^{2}$\end{small}, that is, the sketch matrix $B$ is a good low-rank approximation.
\begin{algorithm}[!htb]
\caption{Fast Frequent Directions (Fast-FD) \cite{ghashami2016frequent}}
\label{Fast-FD}
\begin{algorithmic}[1]
\REQUIRE $\mathrm{A} \in \mathbb{R}^{n \times d}, \text { sketch size } \ell$\\
\ENSURE $\mathrm{B} \in \mathbb{R}^{(\ell-1) \times d}$\\
\STATE $B \leftarrow 0^{2\ell \times d}$
\FOR{$i\in1, \ldots, n$}
\STATE Insert $a_{i}$ into a zero valued row of $B$\\
\IF{$B$ has no zero valued rows}
\STATE $\left[U, \Sigma, V\right] \leftarrow \operatorname{svd}(B)$\\
\STATE $\delta \leftarrow \sigma_{\ell}^{2}$\\
\STATE $B \leftarrow \sqrt{\max(\Sigma^{2}-\delta I_{2\ell},0)} \cdot V^{T}$
\ENDIF
\ENDFOR
\STATE return $B\leftarrow B(1 :\ell-1, :)$
\end{algorithmic}
\end{algorithm}
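A minimal implementation sketch of Algorithm \ref{Fast-FD} is given below (in Python with NumPy, assuming $d \geq \ell$; function and variable names are illustrative).
\begin{verbatim}
import numpy as np

def fast_fd(A, ell):
    """Fast Frequent Directions: returns an (ell-1) x d sketch of A."""
    n, d = A.shape
    B = np.zeros((2 * ell, d))
    nz = 0                                   # index of the first zero row
    for i in range(n):
        B[nz] = A[i]                         # insert a_i into a zero row
        nz += 1
        if nz == 2 * ell:                    # no zero valued rows remain
            _, sig, Vt = np.linalg.svd(B, full_matrices=False)
            delta = sig[ell - 1] ** 2        # squared ell-th singular value
            sig = np.sqrt(np.maximum(sig**2 - delta, 0.0))
            B = np.zeros((2 * ell, d))
            B[: sig.size] = sig[:, None] * Vt
            nz = ell - 1                     # rows ell-1 onwards are now zero
    return B[: ell - 1]
\end{verbatim}
Each shrinkage step zeroes at least $\ell+1$ rows, so the SVD is invoked only $O(n/\ell)$ times, which is why the total cost drops to $O(nd\ell)$.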
Since FD offers a high accuracy guarantee and is well suited to streaming settings, several studies have embedded it into online learning. Boutsidis et al. \cite{boutsidis2014online} proposed the online version of PCA (OPCA) and embedded FD in it to reduce both the time and space complexities. Leng et al. \cite{leng2015online} utilized FD to learn hash functions in an online fashion with low computational complexity and storage space. Recently, many improvements have been considered for increasing the accuracy and efficiency. Luo et al. \cite{luo2019robust} proposed Robust Frequent Directions (RFD) by introducing an adaptive regularizer to improve the approximation error bound by a factor of 1/2. Not only sketching the primary matrix $B$, Huang \cite{huang2019near} considered using the random sampling technique to sketch the part removed in the singular value shrinkage procedure, and proved that the resulting algorithm is space-optimal with a faster running time. Besides, several studies considered random projection techniques to improve the efficiency. Ghashami et al. \cite{ghashami2016efficient} exploited the sparsity of the original matrix and combined FD with randomized SVD to accelerate it. Chen et al. \cite{chen2017frosh} proposed the so-called Faster Frequent Directions by applying the subsampled randomized Hadamard transform (SRHT) to each data batch and then performing FD on the compact small matrix. Teng et al. \cite{teng2018fast} combined FD with the CountSketch matrix to achieve comparable accuracy at low computational cost. Considering that the integration of random projection techniques may deteriorate the accuracy, this work aims at finding a more precise projection method that keeps the algorithm running efficiently without losing much accuracy.
\subsection{Block Krylov Iteration}
Historically, the classical Lanczos algorithm was first proposed by Lanczos \cite{lanczos1950iteration} to compute the extremal eigenvalues and corresponding eigenvectors of symmetric matrices, and then generalized by Golub and Kahan \cite{golub1965calculating} to compute the singular value pairs of $m\times n$ non-symmetric matrices. The basic idea is to construct a Krylov subspace for an initial vector $v$, project the original matrix onto this subspace, and then use the eigenvalue pairs of the projected matrix to approximate those of the original matrix. The Krylov subspace is described as
$$K :=\left[v, A v,A^{2}v, \ldots,A^{q-1}v\right].$$
Note that the involved $v$ is a single vector; the works \cite{golub1977block, cullum1974block} modified the Lanczos algorithm from the single vector $v$ to a block of $b$ vectors $V =\left[v_1, \ldots,v_b\right]$ and built the Krylov subspace as
$$K :=\left[V, A V,A^{2}V, \ldots,A^{q-1}V\right].$$
Compared to classical Lanczos, block Lanczos is more efficient in terms of memory and cache. Recently, with the explosive development of randomized algorithms, the randomized block Krylov algorithm has emerged; it can be seen as an integration of the classical block Lanczos algorithm with the randomized starting matrix $AX$, where $X$ is a random matrix typically chosen as a Gaussian random matrix. The detailed procedure is presented in Algorithm $\ref{BKI}$.
The key idea is to take the random projection $V=A X$ as the initial matrix, instead of an arbitrary set of vectors $V$ that may result in poor convergence. The convergence analysis proposed by Musco and Musco \cite{musco2015randomized} reveals that this method has a faster convergence rate with respect to the number of iterations, and can capture a more accurate range space, compared with the popular simultaneous iteration (also known as power iteration) technique, which is defined as
$$ K :=\left(AA^{T}\right)^{q} A X.$$
The requirement of $q$ is just $\Theta(\frac{\log d}{\sqrt{\varsigma}})$ for getting the $(1+\varsigma)$ relative-error bound for Block Krylov Iteration, instead of $\Theta(\frac{\log d}{{\varsigma}})$ for power iteration. This shows that Block Krylov Iteration can get the accuracy guarantee with fewer iterations. A more detailed theoretical analysis could be found in \cite{drineas2018structural, yuan2018superlinear}.
\begin{algorithm}[!htb]
\caption{Block Krylov Iteration \cite{musco2015randomized}}
\label{BKI}
\begin{algorithmic}[1]
\REQUIRE $A \in \mathbb{R}^{n \times d} ,\text { error } \varsigma \in(0,1), \text { rank } k \leq n$\\
\ENSURE $Z \in \mathbb{R}^{n \times k}$\\
\STATE $q :=\Theta\left(\frac{\log d}{\sqrt{\varsigma}}\right), X \sim \mathcal{N}(0,1)^{d \times k}$
\STATE $K :=\left[A X,\left(A A^{T}\right) A X, \ldots,\left(AA^{T}\right)^{q} A X\right]$
\STATE Orthonormalize the columns of $K $ to obtain $Q$
\STATE Compute $ M :=Q^{T} A A^{T} Q $
\STATE Set $ \overline{U}_{k} $ to the top $ k$ singular vectors of $M$
\STATE return $Z=Q \overline{U}_{k}$
\end{algorithmic}
\end{algorithm}
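A compact sketch of Algorithm \ref{BKI} is given below (in Python with NumPy; the number of iterations $q$ is passed explicitly rather than derived from $\varsigma$, and all names are illustrative).
\begin{verbatim}
import numpy as np

def block_krylov(A, k, q, seed=0):
    """Block Krylov Iteration with a Gaussian starting matrix."""
    n, d = A.shape                           # assumes (q + 1) * k <= n
    X = np.random.default_rng(seed).standard_normal((d, k))
    blocks, V = [], A @ X
    for _ in range(q + 1):                   # K = [AX, (AA^T)AX, ..., (AA^T)^q AX]
        blocks.append(V)
        V = A @ (A.T @ V)
    Q, _ = np.linalg.qr(np.hstack(blocks))   # orthonormalize K
    M = (Q.T @ A) @ (A.T @ Q)                # M = Q^T A A^T Q
    U_bar, _, _ = np.linalg.svd(M)
    return Q @ U_bar[:, :k]                  # Z = Q * U_k
\end{verbatim}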
\section{The proposed algorithm}
For general large-scale streaming datasets, although many randomized FD variants achieve low computational cost, the algorithmic accuracy is sacrificed to a certain extent when computing the low-rank approximation. Considering that Block Krylov Iteration gives a nearly optimal low-rank approximation with the fastest known theoretical runtime, we present a new algorithm named r-BKIFD, which incorporates the Block Krylov Iteration technique into FD to reduce the computational cost with better accuracy guarantees.
The classical Block Krylov Iteration is limited to the Gaussian random matrix as the starting guess; we extend it to the CountSketch random matrix by observing that some real-world datasets are extremely sparse, such as hyperspectral data \cite{ul2011fast}, recommendation data \cite{tang2012dynamic}, speech spectrograms \cite{kameoka2009robust} and so on. When computing $AX$ with $X$ being the CountSketch matrix, the computational complexity is only $O(nnz(A))$ instead of $O(ndk)$ for the Gaussian matrix. We can thus sequentially transform the input matrix without explicitly generating the CountSketch matrix in the case where the input matrix cannot fit in memory. These excellent properties make CountSketch perform well when constructing the Krylov subspace, especially when the data matrices are sparse. Therefore, our subsequent analysis is based on both Gaussian random and CountSketch matrices.
We now give a detailed description of the proposed r-BKI algorithm. For a more accurate approximation, we set $m\ge\ell$. At first, we apply $\left(AA^{T}\right)^{i} A\ (i=0,1,\ldots,q)$ to form the Krylov matrix $K$, which contains all the information accumulated along the projection process; then we employ an orthonormalization procedure to obtain the compressed matrices $Z$ and $P$. See Algorithm $\ref{r-BKI}$ for a detailed description of r-BKI.
\begin{algorithm}[!htb]
\caption{r-BKI}
\label{r-BKI}
\begin{algorithmic}[1]
\REQUIRE $A \in \mathbb{R}^{n \times d} ,\text { error } \varsigma \in(0,1), \text { integers } m, \ell$\\
\ENSURE $Z \in \mathbb{R}^{n \times \ell},P \in \mathbb{R}^{\ell \times d}$\\
\STATE $q :=\Theta\left(\frac{\log d}{\sqrt{\varsigma}}\right),$ let $ X \in \mathbb{R}^{d \times m}$ be a randomized matrix
\STATE $ K :=\left[A X,\left(A A^{T}\right) A X, \ldots,\left(AA^{T}\right)^{q} A X\right]$
\STATE Orthonormalize the columns of $ K $ to obtain $Q \in \mathbb{R}^{n \times (q+1) m} $
\STATE Compute $ M :=Q^{T} A A^{T} Q \in \mathbb{R}^{(q+1) m \times (q+1) m} $
\STATE Set $ \overline{U}_{\ell} $ to the top $ \ell$ singular vectors of $M$
\STATE $Z=Q\overline{U}_{\ell}$
\STATE return $P=Z^{T}A$
\end{algorithmic}
\end{algorithm}
Then we illustrate how to integrate r-BKI into FD. Given the streaming data $A =\left[A_{(1)} ; \cdots ; A_{(s)}\right]\in \mathbb{R}^{n \times d}$ with each $A_{(i)}\in \mathbb{R}^{\frac{n}{s} \times d}\ (i=1,2,\ldots,s)$, our goal is to obtain a small sketch $B\in \mathbb{R}^{(\ell-1) \times d}$ that offers a good performance in preserving crucial information of the original matrix $A$. We assume that $n/s$ is an integer; otherwise we can make it one by appending zero rows to $A$. Compared with the classical FD, which directly performs the singular value shrinkage procedure on every $\ell$ rows of $A$, we embed the r-BKI technique for each batch $A_{(i)}\ (i=1,2,\ldots,s)$ to obtain an intermediate sketch matrix with a more compact representation that preserves accuracy, and then perform Fast-FD on the intermediate sketch matrix. Here r-BKI is used to find a more accurate subspace representation with less computational complexity during the projection procedure.
\begin{figure*}[htp]
\centering
\includegraphics[width=2\columnwidth]{rBKIFD.pdf}
\caption{Illustration of r-BKIFD.}
\label{fig:illu}
\end{figure*}
\begin{algorithm}[!htb]
\caption{r-BKIFD}
\label{r-BKIFD}
\begin{algorithmic}[1]
\REQUIRE $A \in \mathbb{R}^{n \times d} ,\text { error } \varsigma \in(0,1), \text { integers } m, \ell$\\
\ENSURE $\mathrm{B} \in \mathbb{R}^{(\ell-1) \times d}$\\
\STATE $B \leftarrow \text{r-BKI}\left( A\left(1 : \frac{n}{s}, :\right),\varsigma, m, \ell \right)$\\
\FOR{$i\in1, \ldots, s-1$}
\STATE $P \leftarrow \text{r-BKI}\left( A\left(i \frac{n}{s}+1 :(i+1) \frac{n}{s}, :\right),\varsigma, m, \ell \right)$\\
\STATE $B=\left[B;P\right]$
\STATE $\left[U, \Sigma, V\right] \leftarrow \operatorname{svd}(B)$\\
\STATE $\delta \leftarrow \sigma_{\ell}^{2}$\\
\STATE $B \leftarrow \sqrt{\max(\Sigma^{2}-\delta I,0)} \cdot V^{T}$\\
\STATE $B \leftarrow B(1:\ell-1,:)$
\ENDFOR
\STATE return $B$
\end{algorithmic}
\end{algorithm}
The detailed procedure is listed in Algorithm \ref{r-BKIFD}. More precisely, for each batch $A_{(i)}$ we apply the r-BKI algorithm to compress it into a relatively small intermediate sketch matrix $P$. The sketch matrix $B$ is initialized as the first intermediate matrix $P$. For the remaining data, each time we append the sketch $P$ to $B$. As in traditional FD, we perform the singular value shrinkage procedure and keep the first $\ell-1$ rows of $B$; the remaining rows of $B$ are thus set to zero and replaced by the next intermediate sketch matrix $P$. This iterative process continues until all batches are processed. The procedure is illustrated in Fig. \ref{fig:illu}.
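A minimal sketch of this streaming loop (our own illustration, assuming the hypothetical \texttt{r\_bki} helper above and that $s$ divides $n$) shows how the shrinkage step interacts with the intermediate sketches.
\begin{verbatim}
import numpy as np

def r_bkifd(A, s, ell, m, q, seed=None):
    """Return the sketch B of shape (ell - 1) x d."""
    rng = np.random.default_rng(seed)   # one stream shared by all batches
    n, d = A.shape
    b = n // s                          # batch size
    _, B = r_bki(A[:b], ell, m, q, rng=rng)
    for i in range(1, s):
        _, P = r_bki(A[i * b:(i + 1) * b], ell, m, q, rng=rng)
        B = np.vstack([B, P])           # append the intermediate sketch
        U, sig, Vt = np.linalg.svd(B, full_matrices=False)
        delta = sig[ell - 1] ** 2       # squared ell-th singular value
        shrunk = np.sqrt(np.maximum(sig ** 2 - delta, 0.0))
        B = (shrunk[:, None] * Vt)[:ell - 1]  # keep first ell - 1 rows
    return B
\end{verbatim}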
\section{Error Bounds}
In this section, we theoretically analyze the accuracy of the proposed algorithm r-BKIFD. To this end, we shall first introduce some useful lemmas.
\begin{lemma}[Theorem 2.3 of \cite{drineas2018structural}]\label{lemma1}
Given data matrix $A\in \mathbb{R}^{n \times d}$ and the
random matrix $X\in \mathbb{R}^{d \times m}$, let the sketch
$Z\in \mathbb{R}^{n \times \ell}$ be constructed by Algorithm $\ref{r-BKI}$.
Let $\ell< \operatorname{rank}(A)$ and write the best rank-$\ell$ approximation to $A$ as $A_{\ell} = U_{\ell} \Sigma_{\ell} V_{\ell}^{T}$.
If $\operatorname{rank}\left(V_{\ell}^{T} X\right)=\ell$, then
\begin{small}
$$ \label{choose}
\left\|A-Z Z^{T} A\right\|_{2} \leq\left\|A-A_{\ell}\right\|_{2}+\left\|\phi\left(\Sigma_{\ell, \perp}\right)\right\|_{2}\left\|V_{\ell, \perp}^{T} X\left(V_{\ell}^{T} X\right)^{\dagger}\right\|_{F}.
$$
\end{small}
\end{lemma}
Lemma \ref{lemma1} characterizes the distance between the input matrix $A$ and its projection $ZZ^{T}A$ in the Block Krylov Iteration step. Note that the upper bound on $\left\|A-Z Z^{T} A\right\|_{2}$ is closely related to the properties of the random matrix $X$; the lemma is therefore helpful for choosing an appropriate random matrix so that the original matrix is approximated accurately. A well-behaved random matrix $X$ tightens the second factor $\left\|V_{\ell, \perp}^{T} X\left(V_{\ell}^{T} X\right)^{\dagger}\right\|_{F}$. In the following lemma, we bound $\left\|\phi\left(\Sigma_{\ell, \perp}\right)\right\|_{2}$.
\begin{lemma}[Lemma 2.4 of \cite{drineas2018structural}]\label{lemma2}
If $\frac{\sigma_{\ell}-\sigma_{\ell+1}}{\sigma_{\ell+1}} \geq \gamma>0$ holds, then there exists a polynomial $\phi(x)$ of degree $2 q+1$
with odd powers only, such that $\phi\left(\sigma_{i}\right) \geq \sigma_{i}>0$ for $1 \leq i \leq \ell,$ and
\begin{small}
$$
\left|\phi\left(\sigma_{i}\right)\right| \leq \frac{4 \sigma_{\ell+1}}{2^{(2 q+1) \min \{\sqrt{\gamma}, 1\}}}, \quad i \geq \ell+1
$$
\end{small}
Hence
\begin{small}
$$
\left\|\phi\left(\Sigma_{\ell}\right)^{-1}\right\|_{2} \leq \sigma_{\ell}^{-1} \quad \text { and } \quad\left\|\phi\left(\Sigma_{\ell, \perp}\right)\right\|_{2} \leq \frac{4 \sigma_{\ell+1}}{2^{(2 q+1) \min \{\sqrt{\gamma}, 1\}}}.
$$
\end{small}
\end{lemma}
It is not hard to see that as the number of iterations $q$ and the singular value gap $\gamma$ increase, $\left\|\phi\left(\Sigma_{\ell, \perp}\right)\right\|_{2}$ decays exponentially. Moreover, as the sketch size $\ell$ increases, $\sigma_{\ell+1}$ gets smaller, which further tightens the bound.
\begin{remark}
As $q$ increases, the error bound decreases drastically. The essential reason is the introduction of the Krylov subspace. Unlike the power method, which aims only at computing the dominant eigenspace, the Krylov subspace retains the information accumulated along the iterations. This construction of the Krylov matrix makes full use of the original matrix, so that less information is lost during projection.
\end{remark}
\begin{lemma}[Matrix Bernstein inequality, \cite{tropp2015introduction}] \label{Matrix-Bernstein-inequality}
Let $\left\{A_{i}\right\}_{i=1}^{s} \in \mathbb{R}^{n \times d}$ be independent random matrices with $\mathbb{E}\left[A_{i}\right]=0^{n \times d}$ and $\left\|A_{i}\right\|_{2} \leq$
$R$ for all $i \in[s] .$ Define a variance parameter as $\sigma^{2}=$ $\max \left\{\left\|\sum_{i=1}^{s} \mathbb{E}\left[A_{i} A_{i}^{T}\right]\right\|_{2},\left\|\sum_{i=1}^{s} \mathbb{E}\left[A_{i}^{T} A_{i}\right]\right\|_{2}\right\}$. Then, for all $\epsilon \geq 0$ we have
$$
\mathbb{P}\left(\left\|\sum_{i=1}^{s} A_{i}\right\|_{2} \geq \epsilon\right) \leq(d+n) \exp \left(\frac{-\epsilon^{2} / 2}{\sigma^{2}+R \epsilon / 3}\right).
$$
\end{lemma}
\begin{lemma}[Courant-Fischer min-max theorem, \cite{van1996matrix}]\label{minmax}
Let $A$ be an $n \times n$ Hermitian matrix with eigenvalues $\lambda_{1} \leq \ldots \leq \lambda_{k} \leq \ldots \leq \lambda_{n}$, then $$\lambda_{k}=\min _{U}\left\{\max _{x}\left\{R_{A}(x) \mid x \in U \text{ and } x \neq 0\right\} \mid \operatorname{dim}(U)=k\right\}$$
where the Rayleigh-Ritz quotient $R_{A}: \mathbb{C}^{n} \backslash\{0\} \rightarrow \mathbb{R}$ is defined by
$$
R_{A}(x)=\frac{(A x, x)}{(x, x)}
$$
and $(\cdot, \cdot)$ denotes the Euclidean inner product on $\mathbb{C}^{n} .$
\end{lemma}
\subsection{Error Bounds for GA-BKIFD}
It is known that the Gaussian random matrix yields high-quality sketches and is easy to implement \cite{wang2015practical}. In this subsection, we apply it to the r-BKIFD algorithm and call the resulting algorithm GA-BKIFD. Its theoretical performance is guaranteed by the following theorem.
\begin{theorem}[Covariance error of GA-BKIFD]\label{thm2}
Given data $A =\left[A_{(1)} ; \cdots ; A_{(s)}\right]\in \mathbb{R}^{n \times d}$, where each $A_{(i)}\in \mathbb{R}^{\frac{n}{s} \times d}$, let the small sketch
$B \in \mathbb{R}^{(\ell-1) \times d}$ be constructed by Algorithm $\ref{r-BKIFD}$, where $X$ is a Gaussian random matrix. For any $\eta \in(0,1)$ and $\varepsilon\ge0$, if $\frac{\sigma_{\ell}-\sigma_{\ell+1}}{\sigma_{\ell+1}} \geq \gamma>0$, then with probability at least $1-2s\exp(-\varepsilon^{2}/2)-\eta$, we have
\begin{small}
\begin{align}
&\left\|A^{ T} A-B^{T} B\right\|_{2} \notag\\
\le&\left(s\left(1+\delta\right)+\log \left(\frac{2 d}{\eta}\right) \frac{4 \left(1+\delta\right)}{3}+\sqrt{2 s \left(1+\delta\right)^2 \log \left(\frac{2 d}{\eta}\right)}\right)\notag\\
&\times\left\|A-A_{\ell}\right\|_{2}^{2}+\frac{ \left\|A-A_{k}\right\|_{F}^{2}}{\ell-k}, \label{snega}
\end{align}
\end{small}where $1+\delta=\left(1+\frac{4 }{2^{(2 q+1) \min \{\sqrt{\gamma}, 1\}}}\frac{\sqrt{d-\ell}(\sqrt{d}+\sqrt{m}+\varepsilon)}{\sqrt{m}-\sqrt{\ell}-\varepsilon}\right)^{2}$ and $\sigma_{i}$ denotes the $i$-th singular value of $A$ in descending order.
\end{theorem}
To explore the behavior of the error bound with respect to each variable more conveniently and clearly, we hide the logarithmic factor in $(d, \eta)$. Note that $\frac{ \left\|A-A_{k}\right\|_{F}^{2}}{\ell-k}$ clearly decreases as $\ell$ increases. We thus focus on analyzing the effect of $\widetilde{O}\left(s(1+\delta)\right)\left\|A-A_{\ell}\right\|_{2}^2$. Firstly, a small increase in $q$ or $\gamma$ leads to an exponential decay in $\delta$; therefore, for a fixed singular value gap $\gamma$, $\delta$ can be made arbitrarily small by choosing an appropriate $q$. Secondly, as the sketch size $\ell$ increases, the $(\ell+1)$-th singular value (i.e., $\left\|A-A_{\ell}\right\|_{2}$) becomes smaller; we stress that this advantage is even more significant when the singular value gap $\gamma$ is large.
Note that the above analysis is based on the covariance error; we now introduce a key lemma that relates the covariance error to the projection error.
\begin{lemma}[Covariance error to projection error, \cite{huang2019near}] \label{lemma4}
\begin{align}\label{sne-fne}
\left\|A-\pi^k_{B}(A)\right\|_{F}^{2} \leq\left\|A-A_{k}\right\|_{F}^{2}+2 k \cdot\left\|A^{T} A-B^{T} B\right\|_{2}
\end{align}
where $\pi^k_{B}(A)$ is the projection of $A$
onto the top-$k$ singular vectors of $B$.
\end{lemma}
This lemma shows that once we obtain a bound on the covariance error, we immediately get a bound on the projection error. This property is very important in low-rank approximation. Many studies focus on the covariance error because it reveals the difference between two matrices more fundamentally.
The following corollary shows the projection error of GA-BKIFD, which follows by combining (\ref{snega}) and (\ref{sne-fne}).
\begin{corollary}
Given data $A =\left[A_{(1)} ; \cdots ; A_{(s)}\right]\in \mathbb{R}^{n \times d}$, where each $A_{(i)}\in \mathbb{R}^{\frac{n}{s} \times d}$, let the small sketch
$B \in \mathbb{R}^{(\ell-1) \times d}$ be constructed by Algorithm $\ref{r-BKIFD}$, where $X$ is a Gaussian random matrix. For any $\eta \in(0,1)$ and $\varepsilon\ge0$, if $\frac{\sigma_{\ell}-\sigma_{\ell+1}}{\sigma_{\ell+1}} \geq \gamma>0$, then with probability at least $1-2s\exp(-\varepsilon^{2}/2)-\eta$, we have
\begin{align}
&\left\|A-\pi^k_{B}(A)\right\|_{F}^{2}
\notag\\
\le&2k\left(s\left(1+\delta\right)+\log \left(\frac{2 d}{\eta}\right) \frac{4 \left(1+\delta\right)}{3}+\sqrt{2 s \left(1+\delta\right)^2 \log \left(\frac{2 d}{\eta}\right)}\right)\notag\\
&\times\left\|A-A_{\ell}\right\|_{2}^{2}+\frac{\ell+k }{\ell-k} \left\|A-A_{k}\right\|_{F}^{2}
, \notag
\end{align}
where $1+\delta=\left(1+\frac{4 }{2^{(2 q+1) \min \{\sqrt{\gamma}, 1\}}}\frac{\sqrt{d-\ell}(\sqrt{d}+\sqrt{m}+\varepsilon)}{\sqrt{m}-\sqrt{\ell}-\varepsilon}\right)^{2}$, and $\sigma_{i}$ denotes the $i$-th singular value of $A$ in descending order.
\end{corollary}
\subsection{Error Bounds for CS-BKIFD}
As mentioned before, the Gaussian random matrix works well in the proposed r-BKIFD algorithm. However, when the dimension of the input matrix becomes large, the time complexity of the matrix multiplication is too high, because multiplying by a dense Gaussian matrix destroys the sparsity of a sparse input matrix. To address this issue, in this subsection we introduce the CountSketch matrix, which has a sparse structure, into r-BKIFD; we call the resulting algorithm CS-BKIFD. Its error bound in terms of the covariance error is given in the following theorem.
\begin{theorem}[Covariance error of CS-BKIFD]\label{thm5}
Given data $A =\left[A_{(1)} ; \cdots ; A_{(s)}\right]\in \mathbb{R}^{n \times d}$, where each $A_{(i)}\in \mathbb{R}^{\frac{n}{s} \times d}$, let the small sketch
$B \in \mathbb{R}^{(\ell-1) \times d}$ be constructed by Algorithm $\ref{r-BKIFD}$, where $X$ is a CountSketch matrix. For any $\varepsilon, p,\eta \in(0,1)$, if $\frac{\sigma_{\ell}-\sigma_{\ell+1}}{\sigma_{\ell+1}} \geq \gamma>0$ and
$m \geq \frac{\ell^{2}+\ell}{\varepsilon^{2} p},$
then with probability at least $1-sp-\eta$, we have
\begin{small}
\begin{align}
&\left\|A^{ T} A-B^{T} B\right\|_{2} \notag \\
\le&\left(s\left(1+\delta\right)+\log \left(\frac{2 d}{\eta}\right) \frac{4 \left(1+\delta\right)}{3}+\sqrt{2 s \left(1+\delta\right)^2 \log \left(\frac{2 d}{\eta}\right)}\right)\notag\\
&\times\left\|A-A_{\ell}\right\|_{2}^{2}+\frac{ \left\|A-A_{k}\right\|_{F}^{2}}{\ell-k}, \label{snecs}
\end{align}
\end{small}where $1+\delta=\left(1+\frac{4 }{2^{(2 q+1) \min \{\sqrt{\gamma}, 1\}}}\sqrt{\frac{d(d-\ell)}{1-\varepsilon}}\right)^{2}$ and $\sigma_{i}$ denotes the $i$-th singular value of $A$ in descending order.
\end{theorem}
Similar to the above analysis, the projection error bound of CS-BKIFD can be obtained immediately by combining (\ref{sne-fne}) and (\ref{snecs}).
\begin{remark}
The core analysis of the error bound is consistent with that of GA-BKIFD. In addition, as the sketch size $\ell$ increases, the accuracy of the algorithm improves, which will be verified in the experimental study.
\end{remark}
\begin{corollary}
Given data $A =\left[A_{(1)} ; \cdots ; A_{(s)}\right]\in \mathbb{R}^{n \times d}$, where each $A_{(i)}\in \mathbb{R}^{\frac{n}{s} \times d}$, let the small sketch
$B \in \mathbb{R}^{(\ell-1) \times d}$ be constructed by Algorithm $\ref{r-BKIFD}$, where $X$ is a CountSketch matrix. For any $\varepsilon, p,\eta \in(0,1)$, if $\frac{\sigma_{\ell}-\sigma_{\ell+1}}{\sigma_{\ell+1}} \geq \gamma>0$ and
$m \geq \frac{\ell^{2}+\ell}{\varepsilon^{2} p},$
then with probability at least $1-sp-\eta$, we have
\begin{align}
&\left\|A-\pi^k_{B}(A)\right\|_{F}^{2}
\notag\\
\le&2k\left(s\left(1+\delta\right)+\log \left(\frac{2 d}{\eta}\right) \frac{4 \left(1+\delta\right)}{3}+\sqrt{2 s \left(1+\delta\right)^2 \log \left(\frac{2 d}{\eta}\right)}\right)\notag\\
&\times\left\|A-A_{\ell}\right\|_{2}^{2}+\frac{\ell+k }{\ell-k} \left\|A-A_{k}\right\|_{F}^{2}
, \notag
\end{align}
where $1+\delta=\left(1+\frac{4 }{2^{(2 q+1) \min \{\sqrt{\gamma}, 1\}}}\sqrt{\frac{d(d-\ell)}{1-\varepsilon}}\right)^{2}$, and $\sigma_{i}$ denotes the $i$-th singular value of $A$ in descending order.
\end{corollary}
\subsection{Comparison of GA-BKIFD and CS-BKIFD}
We now compare GA-BKIFD and CS-BKIFD in terms of accuracy and running time. Regarding accuracy, we observe from Theorems \ref{thm2} and \ref{thm5} that the covariance error bound can achieve $\widetilde{O}\left(s(1+\delta)\right)\left\|A-A_{\ell}\right\|_{2}^2+\frac{ \left\|A-A_{k}\right\|_{F}^{2}}{\ell-k}$ when $q$ is large. However, the random size $m$ must satisfy $m\ge\ell$ for GA-BKIFD but $m \geq \frac{\ell^{2}+\ell}{\varepsilon^{2} p}$ for CS-BKIFD. Therefore, GA-BKIFD achieves almost the same accuracy guarantees with a smaller sketching dimension. Regarding running time, the matrix multiplications in the construction of the Krylov subspace dominate the cost. Fortunately, owing to the sparse structure of the CountSketch matrix, CS-BKIFD runs faster in this step; that is, it has a lower computational complexity, as shown in the following subsection, and thus works well in many practical situations.
\subsection{Comparison with Existing Algorithms}
TABLE \ref{comparison} shows a detailed comparison in terms of the projection and covariance errors. For an easy and intuitive comparison, we rewrite the original error bounds and use $\widetilde{O}$ to hide logarithmic factors.\\
\indent First of all, we observe that the error bounds of the traditional deterministic algorithm FD are sharper than those of all these randomized FD variants. This is mainly because the techniques used to derive the error bounds of randomized FD variants still rely on the properties of FD. Besides, we emphasize that the error bounds of the proposed r-BKIFD are superior to those of FFD and SpFD, according to the following detailed comparative analysis. \\
\indent In terms of covariance error, our bound is tighter than that of FFD in two respects. Firstly, the first term of our bound depends on the $(\ell+1)$-th singular value of $A$ rather than on the largest singular value as in FFD, which is more advantageous when the sketch size $\ell$ is large. Secondly, a small increase in $q$ or $\gamma$ leads to an exponential decay in $\delta$; as a result, for a fixed singular value gap $\gamma$, the $\delta$ in our bound can be made arbitrarily small by choosing an appropriate $q$, whereas the $\Delta_t$ in FFD cannot. The reason is that $\Delta_t=\Theta\left(\sqrt{\frac{\min(\frac{n}{s},d) \log \left(2 \min(\frac{n}{s},d)/ \delta\right)}{\ell/2}}\right)$, which cannot decay exponentially in any of the variables. We stress that these two advantages come from the incorporation of the Block Krylov Iteration technique.\\
\indent In terms of projection error, our bound is much tighter than that of SpFD in most cases. Indeed, it is easy to check that when $\delta_1\le \frac{1}{2}$ and $\ell \le \frac{k}{8s\delta_1 (1+\delta)\log\left(\frac{2 d}{\eta}\right)}+k$, r-BKIFD achieves a tighter upper bound than SpFD; these two assumptions are satisfied in most cases, because the failure probability $\delta_1$ should be small. We also note that although \cite{teng2018fast} aims to analyze the low-rank approximation error $\left\|A-[AV]_kV^T\right\|_F^2$, it essentially analyzes the projection error, owing to the inequality $\left\|A-[AV]_kV^T\right\|_F^2\le \left\|A-AV_kV_k^T\right\|_F^2$ used in our analysis.
\begin{table*}[!htbp]
\begin{center}
\caption{Comparison with FD and its randomized variants}\label{comparison}
\begin{threeparttable}
\begin{tabular}{ccc}
\toprule
Algorithm & Covariance error & Projection error \\
\midrule
FD \cite{ghashami2016frequent} &$\frac{1 }{\ell-k} \left\|A-A_{k}\right\|_{F}^{2}$ &$\left(1+\frac{k }{\ell-k}\right) \left\|A-A_{k}\right\|_{F}^{2}$ \\
FFD \cite{chen2017frosh} &$\max\limits_i\widetilde{O}\left(\sqrt{s}(1+\Delta_t)\left\|A_{(i)}\right\|_{2}^2+\varepsilon \left\|A-A_{k}\right\|_{F}^{2}\right)$ &N/A \\
SpFD \cite{teng2018fast} &N/A &$\frac{k}{(\ell-k)\delta_1}\left\|A_k\right\|_{F}^{2}+(1+\frac{k}{(\ell-k)\delta_1})\left\|A-A_{k}\right\|_{F}^{2}$ \\
r-BKIFD (this work) &$\max\limits_i\widetilde{O}\left(s(1+\delta)\left\|A_{(i)}-[A_{(i)}]_{\ell}\right\|_{2}^2+\varepsilon \left\|A-A_{k}\right\|_{F}^{2}\right)$ &$8ks\left(1+\delta\right)\log \left(\frac{2 d}{\eta}\right)
\left\|A-A_{\ell}\right\|_{2}^{2}+\frac{\ell+k }{\ell-k} \left\|A-A_{k}\right\|_{F}^{2}$ \\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table*}
\subsection{Complexity Analysis}
The running time of r-BKIFD is dominated by the r-BKI step, that is, the procedure computing the sketch matrix $P$. For a batch $A_i \in \mathbb{R}^{b \times d}$, if the random matrix $X$ is Gaussian, constructing the Krylov matrix $K$ takes $O(bdmq)$ time; if $X$ is a CountSketch matrix, obtaining $K$ takes only $O(nnz(A_i)q)$ time. The QR decomposition needed to obtain $Q$ requires $O(b (mq)^2)$ time, and the truncated SVD costs $O((mq)^2 \ell)$ to get $\overline{U}_{\ell}$. After that, the main cost lies in performing the SVD on $B$, which costs $O(d \ell^2)$. Summing up, the computational cost of each iteration is about $O(bdmq + bm^2q^2+ m^2q^2 \ell + d \ell^2)$. Since each batch contains $b=\frac{n}{s}$ rows and there are $s$ batches, the total cost of GA-BKIFD is $O(ndmq + nm^2 q^2 + sm^2q^2 \ell + s d \ell^2)$. Further noting that $\sum_i nnz(A_i) = nnz(A)$, the total cost of CS-BKIFD is $O(nnz(A)q + nm^2 q^2 + sm^2q^2 \ell+ s d \ell^2)$. Therefore, CS-BKIFD is a better choice for large-scale sparse datasets.
\section{Experiments}
In this section, the proposed r-BKIFD is compared with three popular algorithms, namely FD \cite{ghashami2016frequent}, SFD \cite{ghashami2016efficient}, and SpFD10 \cite{teng2018fast}, through a series of experiments on synthetic and real data. All algorithms are implemented in MATLAB 2018a on a 56-core CPU (2.20 GHz) with 128 GB of RAM; we run each method 30 times and report the average result. Detailed information on the compared methods is listed as follows:
\begin{itemize}
\item[1] FD: Algorithm 1.
\item[2] SFD \cite{ghashami2016efficient}: a randomized FD algorithm based on a Gaussian random matrix and power iteration.
\item[3] SpFD10: a randomized FD algorithm utilizing the sparse subspace embedding method; $q=10$ is chosen to balance the accuracy and running time of the SpFDq algorithm proposed in \cite{teng2018fast}.
\item[4] GA-BKIFD: Algorithm \ref{r-BKIFD} with a standard Gaussian random matrix in the Block Krylov Iteration step.
\item[5] CS-BKIFD: Algorithm \ref{r-BKIFD} with a CountSketch matrix in the Block Krylov Iteration step.
\end{itemize}
For measuring the accuracy of these algorithms, we consider both the covariance and projection errors. The covariance error is defined as $\|A^TA - B^TB\|_2/\|A\|_F^2$, which measures the difference in singular values. The projection error is defined by projecting $A$ onto the top-$k$ singular vectors of $B$, i.e., $\|A - \pi^k_{B}(A)\|_F^2/\|A - A_k\|_F^2$. Moreover, we also measure the computational cost as the sketch size $\ell$ varies.
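For concreteness, the two error measures can be computed as follows (a hypothetical helper of ours that mirrors the definitions above).
\begin{verbatim}
import numpy as np

def cov_err(A, B):
    """Covariance error ||A^T A - B^T B||_2 / ||A||_F^2."""
    return (np.linalg.norm(A.T @ A - B.T @ B, 2)
            / np.linalg.norm(A, "fro") ** 2)

def proj_err(A, B, k):
    """Projection error ||A - pi_B^k(A)||_F^2 / ||A - A_k||_F^2."""
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    Vk = Vt[:k].T                      # top-k right singular vectors of B
    num = np.linalg.norm(A - (A @ Vk) @ Vk.T, "fro") ** 2
    sig = np.linalg.svd(A, compute_uv=False)
    return num / np.sum(sig[k:] ** 2)  # denominator: ||A - A_k||_F^2
\end{verbatim}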
\begin{figure*}[htp]
\centering
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_1.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_2.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_3.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_21.eps}
}
\\
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_1.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_2.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_3.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_21.eps}
}
\\
\subfigure[\small{ k = 50 (dense)}]{
\includegraphics[width=0.42\columnwidth]{time_1.eps}
}
\subfigure[\small{k = 200 (dense)}]{
\includegraphics[width=0.42\columnwidth]{time_2.eps}
}
\subfigure[\small{k = 500 (dense)}]{
\includegraphics[width=0.42\columnwidth]{time_3.eps}
}
\subfigure[\small{k = 50 (sparse)}]{
\includegraphics[width=0.42\columnwidth]{time_21.eps}
}
\\
\centering
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_22.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_23.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_9.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_42.eps}
}
\\
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_22.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_23.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_9.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_42.eps}
}
\\
\subfigure[\small{k = 200 (sparse)}]{
\includegraphics[width=0.42\columnwidth]{time_22.eps}
}
\subfigure[\small{k = 500 (sparse)}]{
\includegraphics[width=0.42\columnwidth]{time_23.eps}
}
\subfigure[\small{w8a}]{
\includegraphics[width=0.42\columnwidth]{time_9.eps}
}
\subfigure[\small{CIFAR-10}]{
\includegraphics[width=0.42\columnwidth]{time_42.eps}
}
\caption{Comparison results on synthetic datasets and real datasets: w8a and CIFAR-10}
\label{fig:exp}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_43.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_41.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_45.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_11.eps}
}
\\
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_43.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_41.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_45.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_11.eps}
}
\\
\subfigure[\small{Sido0}]{
\includegraphics[width=0.42\columnwidth]{time_42.eps}
}
\subfigure[\small{MovieLens-10M}]{
\includegraphics[width=0.42\columnwidth]{time_43.eps}
}
\subfigure[\small{MovieLens-20M}]{
\includegraphics[width=0.42\columnwidth]{time_41.eps}
}
\subfigure[\small{Protein}]{
\includegraphics[width=0.42\columnwidth]{time_11.eps}
}
\caption{Comparison results on real datasets: Sido0, MovieLens 10M, MovieLens 20M and Protein}
\label{fig:real1}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_6.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_7.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_71.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{cov_err_72.eps}
}
\\
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_6.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_7.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_71.eps}
}
\subfigure{
\includegraphics[width=0.42\columnwidth]{proj_err_72.eps}
}
\\
\subfigure[\small{MNIST}]{
\includegraphics[width=0.42\columnwidth]{time_6.eps}
}
\subfigure[\small{rcv1-small}]{
\includegraphics[width=0.42\columnwidth]{time_7.eps}
}
\subfigure[\small{Newsgroups}]{
\includegraphics[width=0.42\columnwidth]{time_71.eps}
}
\subfigure[\small{amazon7-small}]{
\includegraphics[width=0.42\columnwidth]{time_72.eps}
}
\caption{Comparison results on real datasets: MNIST, rcv1-small, Newsgroups, amazon7-small}
\label{fig:real}
\end{figure*}
\subsection{Synthetic Data Experiments}
We consider both dense and sparse synthetic data. The generation of dense data follows the setting in \cite{ghashami2016frequent}; that is, we generate $A = SDU + N/\zeta \in \mathbb{R}^{n \times d}$, where $S \in \mathbb{R}^{n \times k}$ is the coefficient matrix with $S_{i,j} \sim \mathcal{N}(0,1)$, $D \in \mathbb{R}^{k \times k }$ is a diagonal matrix with $D_{ii} = 1-(i-1)/k$ giving linearly decaying singular values, $U \in \mathbb{R}^{k \times d}$ is the row space matrix with $U U^T = I_k$, and $N \in \mathbb{R}^{n \times d}$ is a noise matrix with $N_{ij} \sim \mathcal{N}(0,1)$. The parameter $\zeta$ determines whether the noise dominates the signal. The sparse data are generated by random sampling: each row contains roughly $0.1\%$ nonzero entries chosen uniformly from $[0,1]$, with the remaining entries set to 0.
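For reproducibility, a possible implementation of these two generators (our own sketch; names are ours, and SciPy's default data sampler for sparse random matrices is uniform on $[0,1)$) is given below.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def make_dense_data(n, d, k, zeta, seed=0):
    """Dense data A = S D U + N / zeta, as described above."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((n, k))                   # coefficient matrix
    D = np.diag(1.0 - np.arange(k) / k)               # D_ii = 1 - (i-1)/k
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))  # U := Q^T, U U^T = I_k
    noise = rng.standard_normal((n, d))
    return S @ D @ Q.T + noise / zeta

def make_sparse_data(n, d, density=0.001, seed=0):
    """Sparse data: uniform [0,1) values at random positions."""
    return sp.random(n, d, density=density, format="csr", random_state=seed)
\end{verbatim}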
In our method, we fix the batch size to the dimension $d$ for dense data and to $2d$ for sparse data, and set the iteration number $q = 2$. Empirically, setting $m = \ell + p$, where $p$ is a small nonnegative integer, suffices to estimate an accurate subspace. For the other methods, we follow the parameter settings of their original papers. In our experiments, we consider $n = 60000$, $d = 5000$, $k \in \{50, 200, 500\}$, and $\zeta = 10$, and vary the sketch size $\ell$ to measure the performance. The average results are presented in Fig. \ref{fig:exp}.
Several observations can be drawn from Fig. \ref{fig:exp}. Firstly, for all compared methods both the projection and covariance errors tend to decrease as the sketch size $\ell$ grows. Secondly, the proposed GA-BKIFD and CS-BKIFD algorithms attain much lower errors than the two randomized FDs, as well as FD, in most cases. SFD performs close to ours but has a much higher computational cost in the dense matrix cases. FD and SpFD10 both yield worse estimates, especially for the sparse matrix. Thirdly, all randomized methods generally require less running time than FD. In the dense matrix cases, although SpFD10 is the fastest among all methods, it fails to produce an accurate estimate. In addition, when the input matrix is extremely sparse, by exploiting the structure of the CountSketch matrix, CS-BKIFD takes at most one sixth of the time of GA-BKIFD and has the least running time among all methods. All these observations verify the effectiveness and efficiency of the proposed Algorithm \ref{r-BKIFD}, and thus further support the performance guarantees provided by Theorems \ref{thm2} and \ref{thm5}.
\subsection{Real Data Experiments}
In this section, we evaluate the performance on ten real-world datasets: "w8a", "CIFAR-10", "sido0", "MovieLens-10M", "MovieLens-20M", "Protein", "MNIST", "rcv1-small", "Newsgroups", and "amazon7-small". The sparsity of the datasets varies from $0.017\%$ to $99.76\%$, and the detailed information is listed in TABLE I.
\begin{table}[!htbp]
\centering
\begin{threeparttable}
\begin{tabular}{cccccc}
\toprule
dataset& n& d &nnz\% &k &$\ell$\\
\midrule
w8a \cite{platt1999fast} & 64700 & 300 & 3.88&20 & 20:10:120 \\
\midrule
CIFAR-10 \cite{krizhevsky2009learning} & 60000 & 3072 & 99.76&20&20:10:120 \\
\midrule
sido0 \cite{guyon2008design} &12678 & 4932&9.84&20&20:10:120 \\
\midrule
MovieLens-10M\textsuperscript{1} \cite{harper2015movielens} & 71567 & 3000&3.24&50&50:10:150 \\
\midrule
MovieLens-20M\textsuperscript{1} \cite{harper2015movielens} & 138493 & 3000&3.99&50&50:10:150 \\
\midrule
Protein \cite{wang2002application} & 24387 & 357 & 28.2 & 50 & 50:10:120 \\
\midrule
MNIST \cite{lecun1998gradient} & 70000& 784 & 19.14 & 50 & 50:10:150\\
\midrule
rcv1-small \cite{teng2018fast}& 47236& 3000 &0.14& 100 & 100:20:300\\
\midrule
Newsgroups\textsuperscript{2} \cite{ghashami2016efficient}& 130107& 3000 & 0.12& 100 & 100:20:300\\
\midrule
amazon7-small \cite{teng2018fast} & 262144 & 1500 & 0.017 & 100 & 100:20:300 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1] For the MovieLens datasets, the original data matrix has 10681 and 27278 columns separately for 10M and 20M. We extract the first 3000 columns.
\item[2] For this dataset, the original data matrix has 11314 rows and 130107 columns. We extract the first 3000 columns and use the transpose.
\end{tablenotes}
\end{threeparttable}
\caption{Summary of real datasets}
\end{table}
For the three datasets with small feature dimension, i.e., w8a, Protein, and MNIST, we use a larger batch size of $1000$ in our method; for the other datasets, we set the batch size as in the synthetic data experiments. For the other competing methods, we keep the settings of the original papers. According to the results shown in Figs. \ref{fig:exp}-\ref{fig:real}, our algorithm outperforms the others in terms of accuracy on nearly all the datasets. For the last three extremely sparse datasets, our algorithm still achieves the lowest covariance error among nearly all methods, and a projection error comparable to the best one. The performance of SpFD10 is unstable: even with a larger sketch size, the covariance error may be higher. This may be because the sparse subspace embedding in SpFD10 fails to capture the important information underlying the input matrix. Additionally, SFD attains accuracy comparable to ours but at a higher computational cost. All these results, together with the previous results on synthetic data, demonstrate that the proposed r-BKIFD provides a great improvement over the other randomized and traditional FD algorithms, in terms of both computational efficiency and accuracy.
\section{Conclusion}
In this paper, we proposed a novel algorithm named r-BKIFD to alleviate the inaccuracy issue of randomized FD variants for low-rank approximation. Different from the existing methods, which embed random projection techniques directly and may therefore lose important information of the original matrix during the projection process, the basic idea of r-BKIFD is to incorporate into randomized FD the Block Krylov Iteration technique, which captures a more accurate projection subspace. In the new algorithmic framework, we consider two types of random matrices, i.e., Gaussian and CountSketch matrices. Our rigorous theoretical analysis reveals that the proposed r-BKIFD attains an error bound comparable to that of traditional FD. Extensive experiments on both synthetic and real data further demonstrate that r-BKIFD outperforms traditional FD and its randomized variants in most cases in terms of computational efficiency and accuracy.
Noting that much real-world data comes in the form of multi-dimensional arrays (that is, tensors), such as videos and hyperspectral images, how to extend the proposed procedure to obtain efficient low-rank approximations of such tensor data is the focus of our future study.
\section{Introduction}
\label{Intr}
We study discretization of $L_q$ norms of functions from finite dimensional subspaces. In the case of $1\le q<\infty$ this problem can be formulated as a problem
of numerical integration. Let us return to the question of discretization of $L_q$ norm after a general discussion of the numerical integration problem.
Numerical integration seeks good ways of approximating an integral
$$
\int_\Omega f(\mathbf x)d\mu(\mathbf x)
$$
by an expression of the form
\begin{equation}\label{1.1ni}
\Lambda_m(f,\xi) :=\sum_{j=1}^m\lambda_jf(\xi^j),\quad \xi=(\xi^1,\dots,\xi^m),\quad \xi^j \in \Omega,\quad j=1,\dots,m.
\end{equation}
It is clear that we must assume that $f$ is integrable and defined at the points
$\xi^1,\dots,\xi^m$. Expression~\eqref{1.1ni} is called a {\it cubature formula} $(\xi,\Lambda)$ (if $\Omega \subset {\mathbb R}^d$, $d\ge 2$) or a {\it quadrature formula} $(\xi,\Lambda)$ (if $\Omega \subset {\mathbb R}$) with nodes $\xi =(\xi^1,\dots,\xi^m)$ and weights $\Lambda:=(\lambda_1,\dots,\lambda_m)\in{\mathbb R}^m$. We do not impose any {\it a priori} restrictions on nodes and weights. Some nodes may coincide and both positive and negative weights are allowed.
Some classes of cubature formulas are of special interest. For instance, the Quasi-Monte Carlo cubature formulas, which have equal weights $1/m$, are important in applications. We use a special notation for these cubature formulas
$$
Q_m(f,\xi) :=\frac{1}{m}\sum_{j=1}^mf(\xi^j).
$$
Other examples include cubature formulas with positive weights and formulas with weights satisfying the stability constraint $\sum_{j=1}^m|\lambda_j|<const$.
Typically, one is interested in {\it good cubature formulas} for a given function class. The term {\it good} can be understood in different ways. Cubature formulas providing
exact numerical integration for functions from a given class can be considered ``best''. If a cubature formula is not exact on a given class then we need to introduce a concept of error. Following the standard approach, for a function class $\mathbf W$ we introduce the concept of error of the cubature formula $\Lambda_m(\cdot,\xi)$ by
\[
\Lambda_m(\mathbf W,\xi):= \sup_{f\in \mathbf W} \left|\int_\Omega fd\mu -\Lambda_m(f,\xi)\right|.
\]
The quantity $\Lambda_m(\mathbf W,\xi)$ is a classical characteristic of the quality of a given cubature formula $\Lambda_m(\cdot,\xi)$. This setting is called {\it the worst case setting} in
the Information Based Complexity (see, e.g., \cite{Wo}). Notice that the above error characteristic provides
an absolute error independent of an individual function from the class.
Recently, in a number of papers (see \cite{VT158}, \cite{VT159}, \cite{VT160}) a systematic study of the problem of discretization of the $L_q$ norms of elements of finite dimensional subspaces has begun. The first results in this direction were obtained by Marcinkiewicz and by Marcinkiewicz--Zygmund (see \cite{Z}) for discretization of the $L_q$ norms of univariate trigonometric polynomials in the 1930s. This is why we call discretization results of this kind Marcinkiewicz-type theorems. We discuss here the way of discretization which uses function values at a fixed finite set of points; therefore, it can also be called {\it sampling discretization}.
We discuss this problem in a rather general setting. Let $\Omega$ be a compact subset of ${\mathbb R}^d$ and $\mu$ be a probability measure on $\Omega$. We consider the space $L_q(\Omega)=L_q(\Omega,\mu)$, $1\le q< \infty$, of functions satisfying
$$
\|f\|_q := \left(\int_\Omega |f|^qd\mu\right)^{1/q} <\infty.
$$
In the case $q=\infty$ we define $L_\infty(\Omega)=\mathcal C(\Omega)$ as the space of continuous functions on $\Omega$ with
$$
\|f\|_\infty := \max_{\mathbf x\in\Omega} |f(\mathbf x)|.
$$
In a special case when $\Omega$ is a discrete set $\Omega_M=\{\mathbf x^j\}_{j=1}^M$ of distinct points $\mathbf x^j$, we consider the measure
$\mu$ such that $\mu(\mathbf x^j)=1/M$, $j=1,\dots,M$.
In the Marcinkiewicz-type discretization problems we study the numerical integration
problem for the class \[\mathbf W:= X_N^q:=\{f\in L_q(\Omega)\cap X_N: \|f\|_q\le 1\},\] where $X_N$ is a finite dimensional subspace of $L_q(\Omega)$, $1\le q<\infty$. An important new feature of our approach is the measurement of the error -- we study the {\it relative} error of numerical integration. Let us now formulate explicitly the main problems of our interest.
{\bf Marcinkiewicz problem.} Let $\Omega$ be a compact subset of ${\mathbb R}^d$ with the probability measure $\mu$. We say that a linear subspace $X_N$ (usually $N$ stands for the dimension of $X_N$) of the $L_q(\Omega)$, $1\le q < \infty$, admits the Marcinkiewicz-type discretization theorem with parameters $m$ and $q$ if there exist a set $\{\xi^\nu \in \Omega: \nu=1,\dots,m\}$ and two positive constants $C_j(d,q)$, $j=1,2$, such that for any $f\in X_N$ we have
\begin{equation}\label{1.1}
C_1(d,q)\|f\|_q^q \le \frac{1}{m} \sum_{\nu=1}^m |f(\xi^\nu)|^q \le C_2(d,q)\|f\|_q^q.
\end{equation}
In the case $q=\infty$ (recall that we set $L_\infty(\Omega)=\mathcal C(\Omega)$) we
ask for
\[
C_1(d)\|f\|_\infty \le \max_{1\le\nu\le m} |f(\xi^\nu)| \le \|f\|_\infty.
\]
We will also use a brief way to express the above property: the $\mathcal M(m,q)$ theorem holds for a subspace $X_N$ or $X_N \in \mathcal M(m,q)$.
{\bf Marcinkiewicz problem with weights.} We say that a linear subspace $X_N$ of the $L_q(\Omega)$, $1\le q < \infty$, admits the weighted Marcinkiewicz-type discretization theorem with parameters $m$ and $q$ if there exist a set of nodes $\{\xi^\nu \in \Omega\}$, a set of weights $\{\lambda_\nu\}$, $\nu=1,\dots,m$, and two positive constants $C_j(d,q)$, $j=1,2$, such that for any $f\in X_N$ we have
\begin{equation}\label{1.5}
C_1(d,q)\|f\|_q^q \le \sum_{\nu=1}^m \lambda_\nu |f(\xi^\nu)|^q \le C_2(d,q)\|f\|_q^q.
\end{equation}
Then we also say that the $\mathcal M^w(m,q)$ theorem holds for a subspace $X_N$ or $X_N \in \mathcal M^w(m,q)$.
Obviously, $X_N\in \mathcal M(m,q)$ implies that $X_N\in \mathcal M^w(m,q)$.
{\bf Marcinkiewicz problem with $\varepsilon$.} We write $X_N\in \mathcal M(m,q,\varepsilon)$ if (\ref{1.1}) holds with $C_1(d,q)=1-\varepsilon$ and $C_2(d,q)=1+\varepsilon$. Respectively,
we write $X_N\in \mathcal M^w(m,q,\varepsilon)$ if (\ref{1.5}) holds with $C_1(d,q)=1-\varepsilon$ and $C_2(d,q)=1+\varepsilon$.
We also write $ X_N\in \mathcal M^w_+(m,q,\varepsilon)$ if $X_N\in \mathcal M^w(m,q,\varepsilon)$ and (\ref{1.5}) holds with nonnegative weights $\lambda_\nu$.
We note that the most powerful results are for $\mathcal M(m,q,0)$,
when the $L_q$ norm of $f\in X_N$ is discretized exactly by the formula with equal weights $1/m$. In case $X_N\in \mathcal M(m,q,0)$ we say that $X_N$ admits {\it exact discretization} with parameters $m$ and $q$. In case $X_N\in \mathcal M^w(m,q,0)$ we say that $X_N$ admits {\it exact weighted discretization} with parameters $m$ and $q$.
In the above formulations of the problems we only ask about the existence of either good $\{\xi^\nu\}$ or good $\{\xi^\nu,\lambda_\nu\}$. Certainly, it is important to have either explicit constructions of good $\{\xi^\nu\}$ ($\{\xi^\nu,\lambda_\nu\}$) or deterministic ways to construct good $\{\xi^\nu\}$ ($\{\xi^\nu,\lambda_\nu\}$). Thus, the Marcinkiewicz-type problem can be split into the following four problems: under some assumptions on $X_N$
\begin{description}
\item[(I)]
Find a condition on $m$ for $X_N \in \mathcal M(m,q)$;
\item[(II)]
Find a condition on $m$ for $X_N \in \mathcal M^w(m,q)$;
\item[(III)]
Find a condition on $m$ such that there exists a deterministic construction
of $\{\xi^\nu\}_{\nu=1}^m$ satisfying (\ref{1.1}) for all $f\in X_N$;
\item[(IV)]
Find a condition on $m$ such that there exists a deterministic construction
of $\{\xi^\nu,\lambda_\nu\}_{\nu=1}^m$ satisfying (\ref{1.5}) for all $f\in X_N$.
\end{description}
We note that the setting of the Marcinkiewicz-type problems is motivated by
applications. For instance, a typical approach to solving a continuous problem numerically -- the Galerkin method --
suggests searching for an approximate solution from a given finite dimensional subspace. A standard way to measure an error of approximation is an appropriate $L_q$ norm, $1\le q\le\infty$. Thus, the problem of discretization of the $L_q$ norms of functions from a given finite dimensional subspace arises in a very natural way.
The paper contains both new results and a brief survey.
Section \ref{survey} provides a survey of
known results on the Marcinkiewicz-type discretization. This section does not contain
new results.
In Sections \ref{Ex} and \ref{constr} we present new results with brief discussions. These results are devoted
to exact weighted discretization. In particular, Theorems~\ref{gfT1} and~\ref{gfT2}
completely solve the problem of exact weighted discretization for general finite dimensional subspaces. Theorem~\ref{thm-4-1} provides a more general version of Tchakaloff's theorem (e.g., see~\cite{P}) with a different proof.
Section \ref{hyper} is devoted to the problem of Marcinkiewicz-type discretization in $L_\infty$ on the subspace of trigonometric polynomials with frequencies from a hyperbolic cross. This problem is still open (see Open problem 5 in the last section).
Here we present new results, which complement
a phenomenon discovered earlier (see the discussion in Section \ref{survey}).
In Section \ref{ungen} we present recent results from \cite{KT168} and \cite{KT169} on sampling discretization of the uniform norm of elements of finite dimensional subspaces. These results show that the Marcinkiewicz-type inequalities in the uniform norm are different from their counterparts in $L_1$ and $L_2$.
In Section \ref{ud} we address a feature of discretization that is important from the point of view of applications -- {\it universality} (see \cite{Tem16} and \cite{TBook}). Universality means that we want to build a discretization pair $(\{\xi^\nu\},\{\lambda_\nu\})$ which is good for each subspace from a given collection instead of being good only for a single given subspace. We give there (see Subsection 6.1) a detailed survey of known results on universal discretization. Also, we present new results (see Subsection 6.2) on universal discretization.
In Section \ref{OP} we present some open problems.
\section{A brief survey}
\label{survey}
\subsection{Trigonometric polynomials}\label{sec2.1}
In this subsection we deal with the $2\pi$-periodic case of $d$-variate functions. In this case $\Omega =\mathbb T^d$ and $\mu$ is a normalized Lebesgue measure on $\mathbb T^d$.
We discuss discretization theorems of Marcinkiewicz-type for subspaces of the
trigonometric polynomials. By $Q$ we denote a finite subset of $\mathbb Z^d$, and $|Q|$ stands for the number of elements in $Q$. Let
$$
\mathcal T(Q):= \{f: f=\sum_{\mathbf k\in Q}c_\mathbf k e^{i(\mathbf k,\mathbf x)},\ \ c_{\mathbf k}\in\mathbb{C}\}.
$$
Let us start with the well-known results related to the Marcinkiewicz-type discretization theorems for the trigonometric polynomials.
We first consider the case $Q=\Pi(\mathbf N):=[-N_1,N_1]\times \cdots \times [-N_d,N_d]$, $N_j \in {\mathbb{N}}$ or $N_j=0$, $j=1,\dots,d$, $\mathbf N=(N_1,\dots,N_d)$.
We set
\begin{align*}
P(\mathbf N) := \Bigl\{\mathbf n = (n_1 ,\dots,n_d)\in\mathbb Z^d:\
0\le n_j\le 2N_j ,\ j = 1,\dots,d \Bigr\},
\end{align*}
and
$$
\mathbf x^{\mathbf n}:=\left(\frac{2\pi n_1}{2N_1+1},\dots,\frac{2\pi n_d}
{2N_d+1}\right),\qquad \mathbf n\in P(\mathbf N) .
$$
For any $t\in \mathcal T(\Pi(\mathbf N))$, one has
$
\|t\|_2^2 =\vartheta(\mathbf N)^{-1}\sum_{\mathbf n\in P(\mathbf N)}
\bigl|t(\mathbf x^{\mathbf n})\bigr|^2,
$
where $\vartheta(\mathbf N) := \prod_{j=1}^d (2N_j + 1)=\dim\mathcal T(\Pi(\mathbf N))$.
In particular, this implies that for any $\mathbf N$ one has
\begin{equation}\label{1.3}
\mathcal T(\Pi(\mathbf N)) \in \mathcal M(\vartheta(\mathbf N),2,0).
\end{equation}
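This identity is easy to check numerically. The following snippet (our own illustration in Python, for $d=1$) verifies the exact equal-weight discretization on the grid $\mathbf x^{\mathbf n}$, computing the left-hand side via Parseval's identity for the normalized measure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 7
freqs = np.arange(-N, N + 1)
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)
nodes = 2 * np.pi * np.arange(2 * N + 1) / (2 * N + 1)  # the grid x^n
t_at_nodes = np.exp(1j * np.outer(nodes, freqs)) @ c    # values t(x^n)
lhs = np.sum(np.abs(c) ** 2)            # ||t||_2^2 by Parseval
rhs = np.mean(np.abs(t_at_nodes) ** 2)  # equal-weight discretization
assert np.isclose(lhs, rhs)
\end{verbatim}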
In the case $1<q<\infty$,
the
well-known Marcinkiewicz discretization theorem (for $d=1$) is given as follows (see \cite{Z}, Ch.10, \S7 and \cite{TBook}, Ch.1, Section~2): for $t\in \mathcal{T}(\Pi(\mathbf N))$,
$$
C_1(d,q)\|t\|_q^q \le\vartheta(\mathbf N)^{-1}\sum_{\mathbf n\in P(\mathbf N)}
\bigl|t(\mathbf x^{\mathbf n})\bigr|^q \le C_2(d,q)\|t\|_q^q,\quad 1<q<\infty.
$$
This yields the following extension of~\eqref{1.3}:
\[
\mathcal T(\Pi(\mathbf N)) \in \mathcal M(\vartheta(\mathbf N),q),\quad 1<q<\infty.
\]
For $q=1$ or $q=\infty$, one needs some adjustments. Let
\begin{align*}
P'(\mathbf N) := \Bigl\{\mathbf n &= (n_1,\dots,n_d)\in\mathbb Z^d:\
1\le n_j\le 4N_j ,\ j = 1,\dots,d \Bigr\}
\end{align*}
and
$$
\mathbf x(\mathbf n) :=\left (\frac{\pi n_1}{2N_1} ,\dots,\frac{\pi n_d}{2N_d}
\right) ,\qquad \mathbf n\in P'(\mathbf N) .
$$
If $N_j = 0$, we let $x_j (\mathbf n) = 0$. Set ${\overline N} := \max (N,1)$ and $\nu(\mathbf N) := \prod_{j=1}^d {\overline N_j}$.
Then the following Marcinkiewicz-type discretization theorem
$$
C_1(d,q)\|t\|_q^q \le\nu(4\mathbf N)^{-1}\sum_{\mathbf n\in P'(\mathbf N)}
\bigl|t(\mathbf x({\mathbf n}))\bigr|^q \le C_2(d,q)\|t\|_q^q,\quad 1\le q\le \infty,
$$
implies that
\[
\mathcal T(\Pi(\mathbf N)) \in \mathcal M(\nu(4\mathbf N),q),\quad 1\le q\le \infty.
\]
We note that $\nu(4\mathbf N) \le C(d) \dim \mathcal T(\Pi(\mathbf N))$.
Let us now discuss
the Marcinkiewicz-type discretization theorems for the hyperbolic cross trigonometric polynomials (see~\cite{DTU} for a recent survey covering a variety of topics related to the hyperbolic cross approximation). For $\mathbf s\in\mathbb Z^d_+$
we define
$$
\rho (\mathbf s) := \{\mathbf k \in \mathbb Z^d : [2^{s_j-1}] \le |k_j| < 2^{s_j}, \quad j=1,\dots,d\}
$$
where $[x]$ denotes the integer part of $x$.
Denote by $Q_n$ the step hyperbolic cross, i.e.,
$$
Q_n := \bigcup_{\mathbf s:\|\mathbf s\|_1\le n} \rho(\mathbf s).
$$
Then the corresponding set of the hyperbolic cross polynomials is given by
$$
\mathcal T(Q_n) := \left\{f: f=\sum_{\mathbf k\in Q_n} c_\mathbf k e^{i(\mathbf k,\mathbf x)},\ \ c_\mathbf k\in\mathbb{C}\right\}.
$$
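For readers who wish to experiment, the following sketch (our own illustration in Python) enumerates $Q_n$ directly from the definition, so that the cardinality $|Q_n|\asymp 2^n n^{d-1}$ can be observed numerically.
\begin{verbatim}
import itertools

def step_hyperbolic_cross(n, d):
    """Enumerate the step hyperbolic cross Q_n in Z^d."""
    Q = set()
    for s in itertools.product(range(n + 1), repeat=d):  # s in Z_+^d
        if sum(s) > n:                                   # need ||s||_1 <= n
            continue
        coords = []
        for sj in s:
            lo, hi = (1 << sj) // 2, 1 << sj  # [2^{s_j-1}] and 2^{s_j}
            coords.append([k for k in range(-hi + 1, hi)
                           if lo <= abs(k) < hi])
        Q.update(itertools.product(*coords))             # rho(s)
    return Q

print([len(step_hyperbolic_cross(n, 2)) for n in range(7)])  # d = 2
\end{verbatim}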
The problem of obtaining sharp Marcinkiewicz-type discretization theorems for the hyperbolic cross trigonometric polynomials is not solved yet.
To the best of our knowledge, no sharp results on the growth of $m$ as a function of $n$ for the relation $\mathcal T(Q_n) \in \mathcal M(m,q)$ to hold for $1\le q\le \infty$, $q\neq 2$, are known.
Since $Q_n \subset \Pi(2^n,\dots,2^n)$, from the above mentioned results we have
$$
\mathcal T(Q_n)\in \mathcal M(m,q),\quad \text{provided} \quad m\ge C(d)2^{dn},\quad 1\le q\le \infty,
$$
with large enough $C(d)$.
It seems that the first nontrivial result related to this problem was obtained in \cite{VT27}, where a set of points $\{\xi^\nu\}_{\nu=1}^p$ with $p\ll 2^{2n}n^{d-1}$ was constructed such that for all $t\in \mathcal T(Q_n)$ one has
$$
\|t\|_2^2 \le \frac{1}{p} \sum_{\nu=1}^p |t(\xi^\nu)|^2.
$$
Later on, a very nontrivial and surprising negative result was obtained in the case $q=\infty$ (see \cite{KT3, KT4, KaTe03} and Section \ref{hyper} below). It was proved that in order to have
$\mathcal T(Q_n)\in\mathcal M(m,\infty)$ it is necessary that
$m\gg |Q_n|^{1+c}$ with absolute constant $c>0$.
Moreover, it is worth mentioning that some deep general results on submatrices of orthogonal matrices imply
important Marcinkiewicz-type discretization theorems for $q=2$.
For example, Rudelson's theorem \cite{Rud} yields the following result
\[
\mathcal T(Q_n)\in \mathcal M(m,2),\quad \text{provided} \quad m\ge C(d)|Q_n|n
\]
with large enough $C(d)$; see also Subsection \ref{subsection General subspaces 2}.
Let us now discuss a recent breakthrough result of J. Batson, D.A. Spielman, and N. Srivastava \cite{BSS}, which we state in our notation.
\begin{Theorem}[\cite{BSS}] \label{thm:BSS}
Let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$.
Let also
$\{u_i(x)\}_{i=1}^N$ be a real system of functions on $\Omega_M$.
Then for any number $b>1$ there exists a set of weights $w_j\ge 0$ with $|\{j: w_j\neq 0\}| \le bN$ such that for any $f\in Y_N:= \operatorname{span}\{u_1,\dots,u_N\}$ we have
\begin{equation}\label{C2'}
\|f\|_{L_2(\Omega_M)}^2 \le \sum_{j=1}^M w_jf(x^j)^2 \le \frac{b+1+2\sqrt{b}}{b+1-2\sqrt{b}}\|f\|_{L_2(\Omega_M)}^2.
\end{equation}
\end{Theorem}
As a particular case, we obtain a weighted version of
the $L_2$ Marcinkiewicz-type discretization theorem, that is,
(\ref{1.1}) holds for the above $Y_N$ with $m\ge cN$,
with the general weights $w_j$ instead of the equal weights $1/m$.
The next theorem was derived in~\cite{VT158} from the
recent paper by S.~Nitzan, A.~Olevskii, and A.~Ulanovskii~\cite{NOU}, which in turn is based on the paper of A.~Marcus, D.A.~Spielman, and N.~Srivastava~\cite{MSS}.
\begin{Theorem}[{\cite[Theorem~1.1]{VT158}}] \label{NOUth}There are three positive absolute constants $C_1$, $C_2$, and $C_3$ with the following properties: For any $d\in {\mathbb{N}}$ and any $Q\subset \mathbb Z^d$ there exists a set of $m \le C_1|Q| $ points $\xi^j\in \mathbb T^d$, $j=1,\dots,m$ such that for any $f\in \mathcal T(Q)$
we have
$$
C_2\|f\|_2^2 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)|^2 \le C_3\|f\|_2^2.
$$
\end{Theorem}
In other words, Theorem \ref{NOUth} provides a solution of the Marcinkiewicz-type discretization problem for $\mathcal T(Q)$ in the $L_2$ case for any $Q$. For more details regarding the $L_2$ case see Subsection \ref{subsection General subspaces 2}, the paper \cite{VT159}, and Kashin's paper \cite{Ka}, where the author discusses the recent spectacular progress in the area of submatrices of orthogonal matrices.
We formulate some other results of \cite{VT158}. It was proved there that for $d=2$
$$
\mathcal T(Q_n) \in \mathcal M(m,1),\quad \text{provided} \quad m\ge C |Q_n|n^{7/2}
$$
with large enough $C$,
and for $d\ge 3$
$$
\mathcal T(Q_n) \in \mathcal M(m,1),\quad \text{provided} \quad m\ge C(d) |Q_n|n^{d/2+3}
$$
with large enough $C(d)$.
The above result was improved in \cite{VT159} to the following one. For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ we have
$$
\mathcal T(Q) \in \mathcal M(m,1),\quad \text{provided} \quad m\ge C(d) |Q_n|n^{7/2}
$$
with large enough $C(d)$.
This gives rise to the following intriguing open problem (see also open problem 5 in the last section):
does the relation $\mathcal T(Q_n) \in \mathcal M(m,1)$ hold with $ m\asymp |Q_n|$?
We note that the results of \cite{VT158} and \cite{VT159} mentioned above were derived using a probabilistic technique. In more detail, the following ingredients were used: a variant of the Bernstein concentration inequality from \cite{BLM}, the chaining technique from \cite{KoTe} (see also \cite{Tbook2}, Ch.4), and the recently obtained bounds on the entropy numbers \cite{VT156}. It is worth mentioning that the application of the chaining technique was initiated by A.N. Kolmogorov in the 1930s. Later, results of this type were established in the study of the central limit theorem in probability theory (see, e.g., \cite{GZ}). See also \cite{Ta} for further results on the chaining technique.
Let us stress again that the approach used in~\cite{VT158} is based on a probabilistic technique. As a consequence, we derive the existence of good points for the Marcinkiewicz-type discretization theorems, but no algorithm for constructing these points is provided. We believe that the problem of deterministic constructions of point sets that give at least the same bounds for $m$ as the probabilistic approach is of great importance. A deterministic construction based on number-theoretical considerations was suggested in \cite{VT158}. Even though this approach can be applied to quite general finite sets $Q\subset \mathbb Z^d$, it is restricted to the case $q=2$.
Let us discuss the case $q=2$ in more detail.
First, using the probabilistic technique,
one proves the Marcinkiewicz-type discretization theorem for $m\ge C |Q_n| $ with some large enough constant $C$ (see Theorem \ref{NOUth}).
Second,
the deterministic Marcinkiewicz-type discretization theorem in $L_2$ holds (see \cite{VT158}) for $m\ge C(d) |Q_n|^2 $ with large enough constant $C(d)$
in the exact form with $C_1(d,2)=C_2(d,2)=1$ (see Sections \ref{Intr} and \ref{Ex}).
Namely, the exact discretization theorem states that for a given set $Q$ we construct a set $\{\xi^\nu\}_{\nu=1}^m$ with $m\le C(d)|Q|^2$ such that for any $t\in \mathcal T(Q)$ we have
$$
\|t\|_2^2 = \frac{1}{m}\sum_{\nu=1}^m |t(\xi^\nu)|^2.
$$
Note that the probabilistic approach requires bounds on the entropy numbers $\varepsilon_k(\mathcal T(Q)_q,L_\infty)$ of the unit $L_q$ balls of $\mathcal T(Q)$ in $L_\infty$, which is a deep and demanding question in itself. To attack this problem, an approach based on greedy approximation methods was recently developed in \cite{VT156}.
We discussed in \cite{KT168} and \cite{KT169} the following setting of the discretization problem of the uniform norm.
Let $S_m:=\{\xi^j\}_{j=1}^m \subset {\mathbb T}^d$ be a finite set of points. Clearly,
$$
\|f\|_{L_\infty(S_m)} := \max_{1\le j\le m} |f(\xi^j)| \le \|f\|_\infty.
$$
We are interested in estimating the following quantities
$$
D(Q,m):=D(Q,m,d):= \inf_{S_m}\sup_{f\in\mathcal T(Q)}\frac{\|f\|_\infty}{\|f\|_{L_\infty(S_m)}},
$$
$$
D(N,m):=D(N,m,d):= \sup_{Q,|Q|=N} D(Q,m,d).
$$
Certainly, one should assume that $m\ge N$. Then the characteristic $D(Q,m)$ guarantees that there exists a set of $m$ points $S_m$ such that for any $f\in\mathcal T(Q)$ we have
$$
\|f\|_\infty\le D(Q,m)\|f\|_{L_\infty(S_m)}.
$$
In the case $d=1$ and $Q=[-n,n]$ classical Marcinkiewicz theorem (see \cite{VTbookMA}, p. 24)
gives for $m\ge 4n$ that $D([-n,n],4n)\le C$. Similar relation holds for $D([-n_1,n_1]\times\cdots\times[-n_d,n_d], (4n_1)\times\cdots\times(4n_d))$ (see \cite{VTbookMA}, p. 102).
It was proved in \cite{KT169} that for a pair $N$, $m$, such that $m\asymp N$ we have $D(N,m)\asymp N^{1/2}$. We formulate this result as a theorem.
\begin{Theorem}[\cite{KT169}] \label{ITmain} For any constant $c\ge 1$ there exists a positive constant $C$ such that for any pair of parameters $N$, $m$, with $m\le cN$ we have
$$
D(N,m)\ge CN^{1/2}.
$$
Also, there are two positive absolute constants $c_1$ and $C_1$ with the following property: For any $d\in {\mathbb{N}}$ we have for $m\ge c_1N$
$$
D(N,m,d)\le C_1N^{1/2}.
$$
\end{Theorem}
The first part of Theorem \ref{ITmain} follows from Corollary \ref{BC1} (see (\ref{B6})) and the second part follows from Theorem \ref{CT2}.
It is interesting to compare Theorem \ref{ITmain}, which provides a result on discretization of the uniform norm, with the cited above known result -- Theorem \ref{NOUth} -- on discretization of the $L_2$ norm.
\subsection{General subspaces in $L_1$}
We begin with the definition of the entropy numbers.
Let $X$ be a Banach space and let $B_X$ denote the unit ball of $X$ with the center at $0$. Denote by $B_X(y,r)$ a ball with center $y$ and radius $r$, that is, $B_X(y,r)=\{x\in X:\|x-y\|\le r\}$. For a compact set $A$ and a positive number $\varepsilon$ we define the covering number $N_\varepsilon(A)$
as follows
$$
N_\varepsilon(A) := N_\varepsilon(A,X)
:=\min \{n : \exists y^1,\dots,y^n\in A,\ A\subseteq \cup_{j=1}^n B_X(y^j,\varepsilon)\}.
$$
It is convenient to consider along with the entropy $H_\varepsilon(A,X):= \log_2 N_\varepsilon(A,X)$ the entropy numbers $\varepsilon_k(A,X)$:
$$
\varepsilon_k(A,X) :=\inf \{\varepsilon : \exists y^1,\dots ,y^{2^k} \in A ,\ A \subseteq \cup_{j=1}
^{2^k} B_X(y^j,\varepsilon)\}.
$$
In our definition of $N_\varepsilon(A)$ and $\varepsilon_k(A,X)$ we require $y^j\in A$. In a standard definition of $N_\varepsilon(A)$ and $\varepsilon_k(A,X)$ this restriction is not imposed.
However, it is well known (see \cite{Tbook2}, p.208) that these characteristics may differ by at most a factor of $2$.
The following general conditional result
has been recently obtained in \cite{VT159}.
\begin{Theorem}[\cite{VT159}] \label{T4.10} Suppose that the $L_1$ unit ball $X^1_N:=\{f\in X_N: \|f\|_1\le 1\}$ of a subspace $X_N$ satisfies the following condition with some constant $B\ge 1$:
$$
\varepsilon_k(X^1_N,L_\infty) \le B\left\{\begin{array}{ll} N/k, &\quad k\le N,\\
2^{-k/N},&\quad k\ge N.\end{array} \right.
$$
Then for large enough absolute constant $C$ there exists a set of $$m \le CNB(\log_2(2N\log_2(8B)))^2$$ points $\xi^j\in \Omega$, $j=1,\dots,m$, such that for any $f\in X_N$
we have
$$
\frac{1}{2}\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le \frac{3}{2}\|f\|_1.
$$
\end{Theorem}
In particular, this result shows that the entropy numbers $\varepsilon_k(X^1_N,L_\infty)$ play a crucial role in proving the Marcinkiewicz discretization theorems in $L_1$. The study of the entropy numbers is a highly nontrivial and intrinsically interesting subject in its own right; let us illustrate this for trigonometric polynomials.
On the one hand, it is known \cite{VT156} that in the case $d=2$
one has
\begin{equation}\label{6.1}
\varepsilon_k(\mathcal T( Q_n)_1,L_\infty)\ll n^{1/2} \left\{\begin{array}{ll} (| Q_n|/k) \log (4| Q_n|/k), &\quad k\le 2| Q_n|,\\
2^{-k/(2| Q_n|)},&\quad k\ge 2| Q_n|,\end{array}
\right.
\end{equation}
where
$\mathcal T( Q_n)_1=\{f\in \mathcal T( Q_n) : \|f\|_1\le 1\}$.
The proof of the estimate (\ref{6.1}) relies on a version of the Small Ball Inequality for the trigonometric system, obtained via a wavelet-type system (see \cite{VT156}). This proof is strongly based on the two-dimensional structure, and its extension to the higher-dimensional case is problematic.
On the other hand, by
the trivial estimate $\log (4| Q_n|/k) \ll n$,
(\ref{6.1}) yields the following inequality
\begin{equation}\label{6.2}
\varepsilon_k(\mathcal T( Q_n)_1,L_\infty)\ll n^{3/2} \left\{\begin{array}{ll} | Q_n|/k , &\quad k\le 2| Q_n|,\\
2^{-k/(2| Q_n|)},&\quad k\ge 2| Q_n|.\end{array} \right.
\end{equation}
Even though the latter inequality is less applicable than estimate (\ref{6.1}) for obtaining new upper bounds on the entropy numbers of smoothness classes,
both estimates (\ref{6.1}) and (\ref{6.2}), when applied to the Marcinkiewicz-type discretization theorems,
give the same bound on the number of nodes, $m\ll |Q_n|n^{7/2}$.
As was mentioned above,
an extension of (\ref{6.1}) to the case
$d>2$ is not established. A somewhat straightforward technique given
in \cite{VT158} allows us to claim that for all $d$
\[
\varepsilon_k(\mathcal T( Q_n)_1,L_\infty)\ll n^{d/2} \left\{\begin{array}{ll} (| Q_n|/k) \log (4| Q_n|/k), &\quad k\le 2| Q_n|,\\
2^{-k/(2| Q_n|)},&\quad k\ge 2| Q_n|.\end{array} \right.
\]
This can be used to derive the Marcinkiewicz inequality (\ref{1.1}) in $L_1$ (see \cite{VT158}).
We stress that in the paper \cite{VT159} the proof of (\ref{6.2}) is given for all $d$ and for general sets $\mathcal T(Q)_1$ instead of $\mathcal T(Q_n)_1$.
A very interesting open question is to investigate, even in the special case of the hyperbolic cross polynomials $\mathcal T(Q_n)$, if the relation $\mathcal T(Q_n) \in \mathcal M(m,1)$ with $m\asymp |Q_n|$ is valid.
From the results of \cite{VT158} and \cite{VT159}, the above relation holds with $m \gg |Q_n|n^{7/2}$.
The extra factor $n^{7/2}$ appears as a result of applying (\ref{6.2}), which contributed $n^{3/2}$,
and of applying the chaining technique, which contributed $n^2$.
Let $X_N=\operatorname{span}(u_1,\dots,u_N)$ be a real subspace of $L_1(\Omega)$.
Let us impose several assumptions on the system $\{u_i\}_{i=1}^N$ of real functions, which are needed to state the
discretization result in the case $q=1$ (\cite{VT159}).
{\bf A.} There exist $\alpha>0$, $\beta$, and $K_1$ such that for all $i\in\{1,\dots,N\}$ we have
\[
|u_i(\mathbf x)-u_i(\mathbf y)| \le K_1N^\beta\|\mathbf x-\mathbf y\|_\infty^\alpha,\quad \mathbf x,\mathbf y \in \Omega.
\]
{\bf B.} There exists a constant $K_2$ such that $\|u_i\|_\infty^2 \le K_2$, $i=1,\dots,N$.

{\bf C.} Denote $X_N:= \operatorname{span}(u_1,\dots,u_N)$. There exist two constants $K_3$ and $K_4$ such that the following Nikol'skii-type inequality holds for all $f\in X_N$
\[
\|f\|_\infty \le K_3N^{K_4/p}\|f\|_p,\quad p\in [2,\infty).
\]
Now we are in a position to formulate the main result of \cite{VT159}.
\begin{Theorem}[\cite{VT159}] \label{T6.1} Suppose that a real orthonormal system $\{u_i\}_{i=1}^N$ satisfies conditions {\bf A}, {\bf B}, and {\bf C}. Then for large enough $C_1=C(d,K_1,K_2,K_3,K_4,\Omega,\alpha,\beta)$ there exists a set of $m \le C_1N(\log N)^{7/2}$ points $\xi^j\in \Omega$, $j=1,\dots,m$, such that for any $f\in X_N$
we have
$$
\frac{1}{2}\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le \frac{3}{2}\|f\|_1.
$$
\end{Theorem}
\subsection{General subspaces in $L_2$}\label{subsection General subspaces 2}
In this subsection we consider some known results related to the discretization theorems and, in particular, we discuss applications of the recent results on random matrices to derive the Marcinkiewicz-type theorem in $L_2$.
We start with an important result on submatrices of an orthogonal matrix obtained by M. Rudelson.
Let us formulate it in our notations.
\begin{Theorem}[\cite{Rud}]
Let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Let also
$\{u_i(x)\}_{i=1}^N$ be a real orthonormal system on $\Omega_M$ satisfying the following condition: for all $j$
\begin{equation}\label{5.1}
\sum_{i=1}^Nu_i(x^j)^2 \le Nt^2
\end{equation}
with some $t\ge 1$.
Then for every $\epsilon>0$ there exists a set $J\subset \{1,\dots,M\}$ of indices with cardinality
\begin{equation}\label{5.1a}
m:=|J| \le C\frac{t^2}{\epsilon^2}N\log\frac{Nt^2}{\epsilon^2}
\end{equation}
such that for any $f=\sum_{i=1}^N c_iu_i$ we have
$$
(1-\epsilon)\|f\|_{L_2(\Omega_M)}^2 \le \frac{1}{m} \sum_{j\in J} f(x^j)^2 \le (1+\epsilon)\|f\|_{L_2(\Omega_M)}^2.
$$
\end{Theorem}
As a corollary,
this result yields that if an orthonormal system $\{u_i\}_{i=1}^N$ on $\Omega_M$ satisfies (\ref{5.1}), one has
$$
{\mathcal U}_N := \operatorname{span}(u_1,\dots,u_N) \in \mathcal M(m,2)\quad \text{provided}\quad m\ge CN\log N
$$
with large enough $C$.
We remark that condition (\ref{5.1}) is fulfilled if the system $\{u_i\}_{i=1}^N$ is uniformly bounded: $\|u_i\|_{L_\infty(\Omega_M)}\le t$, $i=1,\dots,N$.
To state the next result,
we need the following condition on
the system $\{u_j\}_{j=1}^N$, cf.~\eqref{5.1}.
{\bf Condition E.} There exists a constant $t$ such that
\begin{equation}\label{ud5}
w(x):=\sum_{i=1}^N u_i(x)^2 \le Nt^2, \quad x\in\Omega.
\end{equation}
\begin{Theorem}[\cite{VT159}] \label{T5.4} Let $\{u_i\}_{i=1}^N$ be a real orthonormal system, satisfying condition {\bf E}.
Then for every $\epsilon>0$ there exists a set $\{\xi^j\}_{j=1}^m \subset \Omega$ with
$$
m \le C\frac{t^2}{\epsilon^2}N\log N
$$
such that for any $f=\sum_{i=1}^N c_iu_i$ we have
$$
(1-\epsilon)\|f\|_2^2 \le \frac{1}{m} \sum_{j=1}^m f(\xi^j)^2 \le (1+\epsilon)\|f\|_2^2.
$$
\end{Theorem}
Let us compare this theorem with the Rudelson result. First, Theorem~\ref{T5.4} establishes the Marcinkiewicz-type discretization theorem for a general domain $\Omega$ instead of a discrete set $\Omega_M$.
Second, in Theorem \ref{T5.4} we have the $\log N$ term in place of $\log\frac{Nt^2}{\epsilon^2}$ in (\ref{5.1a}).
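Although Theorem \ref{T5.4} is an existence result, its proof is probabilistic, and the phenomenon behind it is easy to observe numerically. The following Python sketch (ours; the trigonometric system and the oversampling factor are chosen for illustration) draws $m\asymp N\log N$ i.i.d.\ uniform nodes and computes the sharpest constants in the two-sided $L_2$ discretization inequality as the extreme eigenvalues of $\frac1m\sum_{j}u(\xi^j)u(\xi^j)^T$.
\begin{verbatim}
# Random nodes for L2 discretization of the trigonometric system (sketch).
import numpy as np

def trig_features(x, n):
    # real orthonormal system {1, sqrt(2)cos(kx), sqrt(2)sin(kx)}, k <= n
    cols = [np.ones_like(x)]
    for k in range(1, n + 1):
        cols += [np.sqrt(2) * np.cos(k * x), np.sqrt(2) * np.sin(k * x)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(1)
n = 20
N = 2 * n + 1
m = int(10 * N * np.log(N))                  # m ~ N log N nodes
xi = rng.uniform(0.0, 2.0 * np.pi, size=m)
U = trig_features(xi, n)                     # m x N matrix [u_i(xi^j)]
G = U.T @ U / m                              # (1/m) sum_j u(xi^j) u(xi^j)^T
eigs = np.linalg.eigvalsh(G)
print("lambda_min, lambda_max:", eigs[0], eigs[-1])   # both close to 1
\end{verbatim}
For $f=\sum_i c_iu_i$ one has $\frac1m\sum_j f(\xi^j)^2 = \mathbf c^TG\mathbf c$ and $\|f\|_2^2=\|\mathbf c\|_2^2$, so the two printed eigenvalues are exactly the best constants for the chosen nodes.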
In turn, the proof of
Theorem \ref{T5.4} rests on the following result on random matrices.
\begin{Theorem}[{\cite[Theorem 1.1]{Tro12}}] \label{T5.3} Consider a finite sequence $\{T_k\}_{k=1}^m$ of independent, random, self-adjoint matrices with dimension $N$. Assume that
each random matrix is positive semi-definite and satisfies
$$
\lambda_{\max}(T_k) \le R\quad \text{almost surely}.
$$
Define
$$
s_{\min} := \lambda_{\min}\left(\sum_{k=1}^m \mathbb E(T_k)\right) \quad \text{and}\quad
s_{\max} := \lambda_{\max}\left(\sum_{k=1}^m \mathbb E(T_k)\right).
$$
Then
$$
\mathbb P\left\{\lambda_{\min}\left(\sum_{k=1}^m T_k\right) \le (1-\eta)s_{\min}\right\} \le
N\left(\frac{e^{-\eta}}{(1-\eta)^{1-\eta}}\right)^{s_{\min}/R}
$$
for $\eta\in[0,1)$ and
$$
\mathbb P\left\{\lambda_{\max}\left(\sum_{k=1}^m T_k\right) \ge (1+\eta)s_{\max}\right\} \le
N\left(\frac{e^{\eta}}{(1+\eta)^{1+\eta}}\right)^{s_{\max}/R},
$$
for $\eta\ge 0$.
\end{Theorem}
\subsection{General subspaces in $L_\infty$ and $L_2$}
\label{gensub}
We now demonstrate how the above Theorem \ref{NOUth}, which basically solves the problem of the Marcinkiewicz-type discretization for $\mathcal T(Q)$ in the $L_2$ case, was used in \cite{KT169} for discretization of the uniform norm.
\begin{Theorem}[\cite{KT169}] \label{CT2} There are two positive absolute constants $C_1$ and $C_4$ with the following properties: For any $d\in {\mathbb{N}}$ and any $Q\subset \mathbb Z^d$ there exists a set $S_m$ of $m \le C_1|Q| $ points $\xi^j\in \mathbb T^d$, $j=1,\dots,m$, such that for any $f\in \mathcal T(Q)$
we have
$$
\|f\|_\infty \le C_4|Q|^{1/2}\|f\|_{L_\infty(S_m)}.
$$
\end{Theorem}
\begin{proof} We use the set of points provided by Theorem \ref{NOUth}. Then $m\le C_1|Q|$ and for any $f\in\mathcal T(Q)$ we have
\begin{align*}
\|f\|_\infty & \le |Q|^{1/2}\|f\|_2 \le |Q|^{1/2} C_2^{-1/2} \left(\frac{1}{m}\sum_{j=1}^m |f(\xi^j)|^2\right)^{1/2} \\ & \le |Q|^{1/2} C_2^{-1/2} \|f\|_{L_\infty(S_m)}.
\end{align*}
\end{proof}
We now present some results for more general subspaces than $\mathcal T(Q)$, which we discussed above.
\begin{Theorem}[\cite{VT158}] \label{CT3} Let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Assume that
$\{u_i(x)\}_{i=1}^N$ is an orthonormal system on $\Omega_M$ (real or complex). Assume in addition that this system has the following property: for all $j=1,\dots, M$ we have
\begin{equation}\label{C1}
\sum_{i=1}^N |u_i(x^j)|^2 = N.
\end{equation}
Then there is an absolute constant $C_1$ such that there exists a subset $J\subset \{1,2,\dots,M\}$ with the property: $m:=|J| \le C_1 N$ and
for any $f\in Y_N:= \operatorname{span}\{u_1,\dots,u_N\}$ we have
$$
C_2 \|f\|_{L_2(\Omega_M)}^2 \le \frac{1}{m}\sum_{j\in J} |f(x^j)|^2 \le C_3 \|f\|_{L_2(\Omega_M)}^2,
$$
where $C_2$ and $C_3$ are absolute positive constants.
\end{Theorem}
We note that assumption (\ref{C1}) implies the discrete Nikol'skii inequality for $f\in Y_N$
\begin{equation}\label{C2a}
\|f\|_{L_\infty(\Omega_M)} \le N^{1/2}\|f\|_{L_2(\Omega_M)}.
\end{equation}
In the same way as we derived Theorem \ref{CT2} above from Theorem \ref{NOUth} and the Nikol'skii inequality, we derive the following Theorem \ref{CT4} from Theorem~\ref{CT3} and (\ref{C2a}).
\begin{Theorem}\label{CT4} Let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Assume that
$\{u_i(x)\}_{i=1}^N$ is an orthonormal system on $\Omega_M$ (real or complex). Assume in addition that this system has the following property: for all $j=1,\dots, M$ we have
\begin{equation}\label{C1'}
\sum_{i=1}^N |u_i(x^j)|^2 = N.
\end{equation}
Then there is an absolute constant $C_1$ such that there exists a subset $J\subset \{1,2,\dots,M\}$ with the property: $m:=|J| \le C_1 N$ and
for any $f\in Y_N:= \operatorname{span}\{u_1,\dots,u_N\}$ we have
$$
\|f\|_{L_\infty(\Omega_M)} \le C_4N^{1/2}\max_{j\in J} |f(x^j)|,
$$
where $C_4$ is an absolute positive constant.
\end{Theorem}
We now comment on a recent
result by J. Batson, D.A. Spielman, and N. Srivastava \cite{BSS} stated above in Theorem~\ref{thm:BSS}. Considering the new subspace $Y'_N:=\operatorname{span}\{1,u_1,\dots,u_N\}$ and applying that theorem to it, we see that one more property of the weights $w_j$ can be guaranteed: $\sum_{j=1}^M w_j \le C(b)$. Therefore, in the same way as we proved Theorem \ref{CT2}, we can prove the following Theorem \ref{CT5} (see \cite{KT169}).
\begin{Theorem}\label{CT5}
Let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Assume that a real subspace $Y_N$ satisfies the Nikol'skii-type inequality: for any $f\in Y_N$
$$
\|f\|_{L_\infty(\Omega_M)} \le H(N)\|f\|_{L_2(\Omega_M)}.
$$
Then for any $a>1$ there exists a subset $J\subset \{1,2,\dots,M\}$ with the property: $m:=|J| \le a N$ and
for any $f\in Y_N:= \operatorname{span}\{u_1,\dots,u_N\}$ we have
$$
\|f\|_{L_\infty(\Omega_M)} \le C(a)H(N)\max_{j\in J} |f(x^j)|,
$$
where $C(a)$ is a positive constant.
\end{Theorem}
An important feature of the above Theorems \ref{CT3} -- \ref{CT5} is that the domain is a discrete set $\Omega_M$. However, the statements of those theorems do not depend on $M$. This allows us to easily generalize some of those results to the case of a general domain $\Omega$. We illustrate this by generalizing Theorem \ref{CT5}. The way to do that is based on good approximation of $\|f\|_{L_2(\Omega)}$ and $\|f\|_\infty$ by $\|f\|_{L_2(\Omega_M)}$ and $\|f\|_{\Omega_M}$, respectively. We begin with the $L_2$ case.
\begin{Proposition}\label{CP1} Let $Y_N:=\operatorname{span}(u_1(\mathbf x),\dots,u_N(\mathbf x))$, where $\{u_i(\mathbf x)\}_{i=1}^N$ is a real basis for $Y_N$, orthonormal on $\Omega$ with respect to a probability measure $\mu$. Assume that $\|u_i\|_4:=\|u_i\|_{L_4(\Omega,\mu)} <\infty$ for all $i=1,\dots,N$. Then for any $\delta>0$ there exists
a set $\Omega_M=\{\mathbf x^j\}_{j=1}^M$ such that for any $f\in Y_N$
\begin{equation}\label{C6'}
| \|f\|_{L_2(\Omega)}^2 - \|f\|_{L_2(\Omega_M)}^2| \le \delta \|f\|_{L_2(\Omega)}^2
\end{equation}
where
$$
\|f\|_{L_2(\Omega_M)}^2 := \frac{1}{M}\sum_{j=1}^M |f(\mathbf x^j)|^2.
$$
\end{Proposition}
\begin{proof}
Consider a real function $f\in L_2(\Omega):=L_2(\Omega,\mu)$ with respect to a probability measure $\mu$. Define $\Omega^M:= \Omega\times\cdots\times\Omega$ and $\mu^M:=\mu \times\cdots\times \mu$. For $\mathbf x^j\in \Omega$ denote $\mathbf z:= (\mathbf x^1,\dots,\mathbf x^M)\in \Omega^M$ and for $g\in L_1(\Omega^M, \mu^M)$
$$
\mathbb E(g):= \int_{\Omega^M}g(\mathbf z)d\mu^M.
$$
Then it is well known from the study of the Monte Carlo integration method that we have for $g\in L_2(\Omega,\mu)$
\begin{equation}\label{C2}
\mathbb E\left(\left(\int_\Omega g d\mu - \frac{1}{M}\sum_{j=1}^M g(\mathbf x^j)\right)^2\right) \le \|g\|_2^2/M.
\end{equation}
Denote $U:= \max_{1\le i\le N} \|u_i\|_4$. Then for $g=u_iu_j$, $1\le i,j\le N$ we find from (\ref{C2}) and the Markov inequality that for any $\varepsilon>0$ we have
\begin{equation}\label{C3}
\mu^M\left\{\mathbf z: \left(\int_\Omega g d\mu - \frac{1}{M}\sum_{j=1}^M g(\mathbf x^j)\right)^2 \ge \varepsilon\right\} \le
\frac{U^4}{\varepsilon M}.
\end{equation}
Therefore, for any $\varepsilon>0$ we can find large enough $M=M(\varepsilon,N)$ such that there exists a set
$\Omega_M=\{\mathbf x^j\}_{j=1}^M$ such that for all $g$ of the form $g=u_iu_j$, $1\le i,j\le N$
we have
\begin{equation}\label{C4}
\left|\int_\Omega g d\mu - \frac{1}{M}\sum_{j=1}^M g(\mathbf x^j)\right| \le \varepsilon^{1/2}.
\end{equation}
Consider $f= \sum_{i=1}^N b_iu_i$. Then $\|f\|_{L_2(\Omega)}^2=\sum_{i=1}^N b_i^2$.
Inequality (\ref{C4}) implies
\begin{equation}\label{C5}
| \|f\|_{L_2(\Omega)}^2 - \|f\|_{L_2(\Omega_M)}^2| \le \varepsilon^{1/2} N \|f\|_{L_2(\Omega)}^2.
\end{equation}
Now, for a $\delta>0$ choosing $\varepsilon= \delta^2 N^{-2}$ we obtain from (\ref{C5}) that
\begin{equation}\label{C6}
| \|f\|_{L_2(\Omega)}^2 - \|f\|_{L_2(\Omega_M)}^2| \le \delta \|f\|_{L_2(\Omega)}^2.
\end{equation}
\end{proof}
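The Monte Carlo bound (\ref{C2}) driving this proof is elementary to check empirically. In the Python sketch below (ours; the particular product $g=u_iu_j$ is an arbitrary choice) the mean squared error of the $M$-point equal-weight rule matches $\|g\|_2^2/M$.
\begin{verbatim}
# Empirical check of the Monte Carlo error bound (C2) (illustration only).
import numpy as np

rng = np.random.default_rng(2)
# g = u_i u_j for u_i = sqrt(2)cos(3x), u_j = sqrt(2)cos(5x) on [0, 2pi)
g = lambda x: 2.0 * np.cos(3 * x) * np.cos(5 * x)
exact = 0.0                                  # int g dmu = 0 by orthogonality
M, trials = 400, 5000
err2 = np.empty(trials)
for t in range(trials):
    x = rng.uniform(0.0, 2.0 * np.pi, size=M)
    err2[t] = (g(x).mean() - exact) ** 2
# here ||g||_2^2 = 1, since g = cos(2x) + cos(8x)
print("empirical MSE:", err2.mean(), " bound ||g||_2^2/M:", 1.0 / M)
\end{verbatim}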
Proposition \ref{CP1} and the above mentioned fundamental result (\ref{C2'}) imply the following discretization result.
\begin{Theorem}\label{CT5'} Let $Y_N:=\operatorname{span}(u_1(\mathbf x),\dots,u_N(\mathbf x))$, where $\{u_i(\mathbf x)\}_{i=1}^N$ is a real basis for $Y_N$, orthonormal on $\Omega$ with respect to a probability measure $\mu$. Assume that $\|u_i\|_4:=\|u_i\|_{L_4(\Omega,\mu)} <\infty$ for all $i=1,\dots,N$.
Then for any
number $a>1$ there exist a set of points $S_m=\{\xi^j\}_{j=1}^m$ and a set of positive weights $\{w_j\}_{j=1}^m$ with $m \le aN$ so that for any $f\in Y_N:= \operatorname{span}\{u_1,\dots,u_N\}$ we have
\begin{equation}\label{C2''}
\frac{1}{2}\|f\|_2^2 \le \sum_{j=1}^m w_jf(\mathbf x^j)^2 \le C(a)\|f\|_2^2.
\end{equation}
\end{Theorem}
Clearly, the assumptions of Theorem \ref{CT5'} can be written in a shorter form: $Y_N$ is an $N$-dimensional subspace of $L_4(\Omega,\mu)$.
Let us now consider the case $L_\infty$, i.e. the case of the uniform norm. Assume that
$\Omega \subset {\mathbb R}^d$ is a compact set and $Y_N:=\operatorname{span}(u_1(\mathbf x),\dots,u_N(\mathbf x))$ with $\{u_i(\mathbf x)\}_{i=1}^N$ being an orthonormal basis of continuous functions for $Y_N$. Then it is easy to see that for an $\varepsilon>0$ we can find $\Omega_M=\{\mathbf x^j\}_{j=1}^M$ such that for all $g$ of the form $g=u_i$, $1\le i\le N$, we have
\begin{equation}\label{C7}
\left|\|g\|_\infty - \|g\|_{\Omega_M}\right| \le \varepsilon.
\end{equation}
We derive from here the following analog of (\ref{C6}): for any $\delta>0$ there exists $M=M(\delta)$ such that for all $f\in Y_N$
\begin{equation}\label{C8}
\left|\|f\|_\infty - \|f\|_{\Omega_M}\right| \le \delta \|f\|_\infty.
\end{equation}
\begin{Remark}\label{CR1}
We will need a set $\Omega_M$ such that both (\ref{C6}) and (\ref{C8}) are satisfied.
It is easy to see that, under the assumptions that the functions in $Y_N$ are continuous on $\Omega=[0,1]^d$ and that $\mu$ is the Lebesgue measure on $\Omega$, we can
achieve both (\ref{C6}) and (\ref{C8}) by dividing $[0,1]^d$ into small enough cubes of equal volume.
\end{Remark}
We now prove the following result from \cite{KT169}.
\begin{Theorem}\label{CT6}
Let $\Omega := [0,1]^d$. Assume that a real subspace $Y_N\subset {\mathcal C}(\Omega)$ satisfies the Nikol'skii-type inequality: for any $f\in Y_N$
\begin{equation}\label{C9}
\|f\|_\infty \le H(N)\|f\|_2,\quad \|f\|_2 := \left(\int_\Omega |f(\mathbf x)|^2d\mu\right)^{1/2},
\end{equation}
where $\mu$ is the Lebesgue measure on $\Omega$.
Then for any $a>1$ there exists a set $S_m=\{\xi^j\}_{j=1}^m\subset \Omega$ with the property: $m \le a N$ and
for any $f\in Y_N$ we have
$$
\|f\|_\infty \le C(a)H(N)\max_{1\le j\le m} |f(\xi^j)|,
$$
where $C(a)$ is a positive constant.
\end{Theorem}
\begin{proof} The proof consists of two steps. First, using (\ref{C6}) with $\delta=1/2$, we find a discrete set $\Omega_M$ such that for any $f\in Y_N$ we have
\begin{equation}\label{C10}
| \|f\|_{L_2(\Omega)}^2 - \|f\|_{L_2(\Omega_M)}^2| \le \|f\|_{L_2(\Omega)}^2/2.
\end{equation}
Second, we consider a new space $Y_N(\Omega_M)$ which consists of all $f\in Y_N$ restricted to the set $\Omega_M$. Introduce a probability measure $\nu$ on $\Omega_M$ by
$\nu(\mathbf x^j)=1/M$, $j=1,\dots,M$. Our assumption that $Y_N$ satisfies the Nikol'skii inequality
(\ref{C9}) and the relation (\ref{C10}) imply that $Y_N(\Omega_M)$ also satisfies the Nikol'skii inequality. Applying Theorem \ref{CT5} we find a subset $S_m \subset \Omega_M$
with $m\le aN$ such that
\begin{equation}\label{C11}
\|f\|_{L_\infty(\Omega_M)} \le C'(a)H(N)\|f\|_{L_\infty(S_m)}.
\end{equation}
By Remark \ref{CR1} we can claim that $\Omega_M$ guarantees simultaneously (\ref{C6}) and (\ref{C8}). Then by (\ref{C8}) with $\delta=1/2$ we obtain from (\ref{C11})
$$
\|f\|_\infty \le C''(a)H(N)\|f\|_{L_\infty(S_m)}.
$$
This completes the proof of Theorem \ref{CT6}.
\end{proof}
\subsection{The Marcinkiewicz theorem and sparse approximation}\label{sec2.4}
We now give some general remarks on the case $q=2$, which illustrate the problem. We describe the properties of the subspace $X_N$ in terms of a system $\mathcal U_N:=\{u_i\}_{i=1}^N$ of functions such that
$X_N = \operatorname{span}\{u_i, i=1,\dots,N\}$. In the case $X_N \subset L_2$ we assume that
the system is orthonormal on $\Omega$ with respect to measure $\mu$. In the case of real functions we associate with $x\in\Omega$ the matrix $G(x) := [u_i(x)u_j(x)]_{i,j=1}^N$. Clearly, $G(x)$ is a symmetric positive semi-definite matrix of rank $1$.
It is easy to see that for a set of points $\xi^k\in \Omega$, $k=1,\dots,m$, and $f=\sum_{i=1}^N b_iu_i$ we have
\begin{equation}\label{Gb}
\sum_{k=1}^m\lambda_k f(\xi^k)^2 - \int_\Omega f(x)^2 d\mu = {\mathbf b}^T\left(\sum_{k=1}^m \lambda_k G(\xi^k)-I\right){\mathbf b},
\end{equation}
where ${\mathbf b} = (b_1,\dots,b_N)^T$ is the column vector and $I$ is the identity matrix. Therefore,
the $\mathcal M^w(m,2)$ problem is closely connected with a problem of approximation (representation) of the identity matrix $I$ by an $m$-term approximant with respect to the system $\{G(x)\}_{x\in\Omega}$. It is easy to understand that under our assumptions on the system $\mathcal U_N$ there exist a set of nodes $\{\xi^k\}_{k=1}^m$ and a set of weights $\{\lambda_k\}_{k=1}^m$, with $m\le N(N+1)/2$ such that
$$
I = \sum_{k=1}^m \lambda_k G(\xi^k)
$$
and, therefore, we have for any $X_N \subset L_2$ that
\[
X_N \in \mathcal M^w\big(\tfrac{N(N+1)}2,2,0\big).
\]
For the alternative proof see Theorem \ref{gfT1} in case $q=2$.
As we have seen the Marcinkiewicz-type
discretization problem in $L_2$ is closely connected with approximation of the identity matrix $I$ by an $m$-term approximant of the form $\frac{1}{m}\sum_{k=1}^m G(\xi^k)$ in the operator norm from $\ell^N_2$ to $\ell^N_2$ (spectral norm).
In a similar way, the Marcinkiewicz-type discretization problem with weights (in $L_2$) is closely connected with approximation of the identity matrix $I$ by an $m$-term approximant of the form $ \sum_{k=1}^m \lambda_k G(\xi^k)$ in the operator norm from $\ell^N_2$ to $\ell^N_2$.
Hence, one can study the following sparse approximation problem.
Let the system $\{u_i(x)\}_{i=1}^N$ be orthonormal and bounded. Then
\begin{equation}\label{5.2b'}
w(x):=\sum_{i=1}^N u_i(x)^2 \le B.
\end{equation}
Consider
the dictionary
$$
{\mathcal D}^u := \{g_x\}_{x\in\Omega},\quad g_x:= G(x)B^{-1},\quad G(x):=[u_i(x)u_j(x)]_{i,j=1}^N.
$$
Then condition (\ref{5.2b'}) ensures that for the Frobenius norm of $g_x$ one has
\[
\|g_x\|_F = w(x)B^{-1} \le 1.
\]
Our assumption on the orthonormality of the system $\{u_i\}_{i=1}^N$ gives
$$
I = \int_\Omega G(x)d\mu = B\int_\Omega g_x d\mu,
$$
which implies that $I/B \in A_1({\mathcal D}^u)$, where $A_1({\mathcal D}^u)$ is the closure
of the convex hull of the dictionary ${\mathcal D}^u$.
We now comment on the use of the greedy approximation approach to obtain a deterministic construction
of $\{\xi^\nu,\lambda_\nu\}_{\nu=1}^m$ providing exact discretization for all $f\in X_N$.
We use the Weak Orthogonal Greedy Algorithm (Weak Orthogonal Matching Pursuit) for $m$-term approximation, which is defined as follows
(see \cite{Tbook2}).
{\bf Weak Orthogonal Greedy Algorithm (WOGA).} Let $t\in (0,1]$ be a weakness parameter. We define
$f^{o,t}_0 :=f$. Then for each $m\ge 1$ we inductively define:
(1) $\varphi^{o,t}_m \in {\mathcal D}$ is any element satisfying
$$
|\langle f^{o,t}_{m-1},\varphi^{o,t}_m\rangle | \ge t
\sup_{g\in {\mathcal D}} |\langle f^{o,t}_{m-1},g\rangle |.
$$
(2) Let $H_m^t := \operatorname{span} (\varphi_1^{o,t},\dots,\varphi^{o,t}_m)$ and let
$P_{H_m^t}(f)$ denote an operator of orthogonal projection onto $H_m^t$.
Define
$$
G_m^{o,t}(f,{\mathcal D}) := P_{H_m^t}(f).
$$
(3) Define the residual after the $m$th iteration of the algorithm:
$$
f^{o,t}_m := f-G_m^{o,t}(f,{\mathcal D}).
$$
In the case $t=1$ the WOGA is called the Orthogonal
Greedy Algorithm (OGA).
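A Python sketch of the algorithm in the simplest finite-dimensional setting (our illustration; the dictionary is random and the inner product is the Euclidean one) may be helpful.
\begin{verbatim}
# Orthogonal Greedy Algorithm (WOGA with t = 1) in R^n, finite dictionary.
import numpy as np

def oga(f, D, steps):
    # D: n x K array whose columns are dictionary elements of norm 1
    residual, chosen = f.copy(), []
    for _ in range(steps):
        k = int(np.argmax(np.abs(D.T @ residual)))    # greedy step, t = 1
        chosen.append(k)
        H = D[:, chosen]                              # span of selected elements
        coef, *_ = np.linalg.lstsq(H, f, rcond=None)  # orthogonal projection
        residual = f - H @ coef
    return residual

rng = np.random.default_rng(3)
n, K = 40, 300
D = rng.normal(size=(n, K))
D /= np.linalg.norm(D, axis=0)
f = rng.normal(size=n)
for m in (1, 5, 10, 20, 40):
    print(m, np.linalg.norm(oga(f, D, m)))   # residual is ~0 by m = n
\end{verbatim}
The residual vanishes once the selected elements span the whole space, in agreement with the remark that follows.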
It is clear from the definition of the WOGA that in the case of a finite-dimensional Hilbert space $H$ it terminates after at most $M:=\dim H$ iterations. Consider the Hilbert space
$H^u$ to be a closure in the Frobenius norm of $\operatorname{span}\{g_x, x\in\Omega\}$ with the inner product generated by the Frobenius norm: for $A=[a_{i,j}]_{i,j=1}^N$ and
$B=[b_{i,j}]_{i,j=1}^N$
$$
\langle A,B\rangle = \sum_{i,j=1}^N a_{i,j}b_{i,j}
$$
in case of real matrices (with standard modification in case of complex matrices).
We apply the WOGA in
the Hilbert space $H^u$ with respect to the dictionary ${\mathcal D}^u$. The above remark shows that this
provides us with a constructive proof of Theorem \ref{gfT1} in the case $q=2$.
Under additional assumptions on the system $\{u_i\}$ we can obtain some constructive results for the Marcinkiewicz-type
discretization problem in $L_2$. We use the following greedy algorithm.
{\bf Relaxed Greedy Algorithm (RGA).} Let $f^r_0:=f$
and $G_0^r(f):= 0$. For a function $h$ from a real Hilbert space $H$, let $g=g(h)$ denote the function from ${\mathcal D}$ that
maximizes $\langle h,g\rangle$ (we assume the existence of such an element). Then, for each $m\ge 1$, we inductively define
$$
G_m^r(f):=
\left(1-\frac{1}{m}\right)G_{m-1}^r(f)+\frac{1}{m}g(f_{m-1}^r), \quad
f_m^r:= f-G_m^r(f).
$$
We make use of the following known bound on the approximation error of the RGA (see \cite{Tbook2}, p.90).
For a dictionary ${\mathcal D}$ in a Hilbert space $H$ with an inner product $\<\cdot,\cdot\>$, $A_1({\mathcal D})$ denotes the closure
of the convex hull of the dictionary ${\mathcal D}$.
\begin{Theorem}\label{T4.1g} For the Relaxed Greedy Algorithm we have, for each $f\in
A_1({\mathcal D})$, the estimate
$$
\|f-G_m^r(f)\|\le \frac{2}{\sqrt{m}},\quad m\ge 1.
$$
\end{Theorem}
We impose the following restriction on the system $\{u_i\}$: $w(x) \le Nt^2$, i.e., $B=Nt^2$.
Using the RGA, we apply
Theorem \ref{T4.1g} for any $m\in {\mathbb{N}}$ to constructively find points $\xi^1,\dots,\xi^m$ such that
\[
\left\|\frac{1}{m}\sum_{k=1}^m G(\xi^k)-I\right\|_F \le 2Nt^2 m^{-1/2}.
\]
Therefore, using the inequality $\|A\|\le \|A\|_F$ and relation (\ref{Gb}) we arrive at the following result.
\begin{Theorem}[{\cite[Proposition~5.1]{VT159}}] \label{P5.1g} Let $\{u_i\}_{i=1}^N$ be an orthonormal system satisfying condition {\bf E}. Then one can construct a set $\{\xi^j\}_{j=1}^m \subset \Omega$ with $m\le C(t)N^2$
such that for any $f=\sum_{i=1}^N c_iu_i$ we have
$$
\frac{1}{2}\|f\|_2^2 \le \frac{1}{m} \sum_{j=1}^m f(\xi^j)^2 \le \frac{3}{2}\|f\|_2^2.
$$
\end{Theorem}
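To see how the RGA produces the nodes of Theorem \ref{P5.1g} in practice, the Python sketch below (ours; $\Omega$ is replaced by a fine grid, which is a simplification of the construction) runs the algorithm on $f=I/B$ with the dictionary ${\mathcal D}^u$ built from the trigonometric system, and compares $\bigl\|I-\frac1m\sum_{k}G(\xi^k)\bigr\|_F$ with the guaranteed bound $2Nt^2m^{-1/2}$.
\begin{verbatim}
# RGA applied to f = I/B over the dictionary g_x = G(x)/B (sketch).
import numpy as np

def trig_features(x, n):
    cols = [np.ones_like(x)]
    for k in range(1, n + 1):
        cols += [np.sqrt(2) * np.cos(k * x), np.sqrt(2) * np.sin(k * x)]
    return np.stack(cols, axis=1)

n = 5
N = 2 * n + 1
B = float(N)                                  # w(x) = N here, so t = 1
grid = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
U = trig_features(grid, n)                    # row x gives the vector u(x)
target = np.eye(N) / B
Gm = np.zeros((N, N))
for m in range(1, 201):
    h = target - Gm                           # residual f - G_{m-1}^r(f)
    # <h, g_x>_F = u(x)^T h u(x) / B for all grid points at once
    scores = np.einsum("xi,ij,xj->x", U, h, U) / B
    x = int(np.argmax(scores))                # maximizer g(h) over the grid
    gx = np.outer(U[x], U[x]) / B
    Gm = (1.0 - 1.0 / m) * Gm + gx / m        # RGA update
err = np.linalg.norm(np.eye(N) - B * Gm)      # ||I - (1/m) sum G(xi^k)||_F
print(err, "  guaranteed bound:", 2.0 * B / np.sqrt(200))
\end{verbatim}
Note that after $m$ iterations the RGA approximant is exactly the average $\frac1m\sum_{k=1}^m g_{\xi^k}$ of the selected dictionary elements, which is what makes the equal-weight discretization of Theorem \ref{P5.1g} possible.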
\section{ Exact weighted discretization}
\label{Ex}
\subsection{A general result on exact recovery and numerical integration}
We begin with a simple useful result which is well-known in many special cases.
\begin{Proposition}\label{gfP1} Suppose $\{u_i(x)\}_{i=1}^N$ is a linearly independent system of functions on $\Omega$. Then there exist a set of points $\{\xi^j\}_{j=1}^N\subset \Omega$ and a set of functions $\{\psi_j(x)\}_{j=1}^N$ such that for any $f\in X_N:=\operatorname{span}(u_1(x),\dots,u_N(x))$ we have
$$
f(x) = \sum_{j=1}^N f(\xi^j)\psi_j(x).
$$
\end{Proposition}
\begin{proof} For points $x^1,\dots,x^k$ consider the matrix $U(x^1,\dots,x^k) := [u_i(x^j)]_{i,j=1}^k$ with elements $u_i(x^j)$.
\begin{Lemma}\label{gfL1} Under the conditions of Proposition \ref{gfP1} there exists a set of points $\{\xi^j\}_{j=1}^N\subset \Omega$ such that $D(\xi^1,\dots,\xi^N):= \det U(\xi^1,\dots,\xi^N) \neq 0$.
\end{Lemma}
\begin{proof}
We prove this lemma by induction.
Indeed, by the linear independence assumption we find $\xi^1$ such that $u_1(\xi^1)\neq 0$. Suppose $2\le k\le N$ and we found a set $\{\xi^j\}_{j=1}^{k-1}$ such that
$D(\xi^1,\dots,\xi^{k-1})\neq 0$. Consider the function $D(\xi^1,\dots,\xi^{k-1},x)$, $x\in \Omega$. This function is a nontrivial linear combination of $u_1(x),\dots,u_k(x)$.
Therefore, there exists $\xi^k\in\Omega$ such that $D(\xi^1,\dots,\xi^{k})\neq 0$.
This completes the proof of the existence of points $\{\xi^j\}_{j=1}^N\subset \Omega$ such that $D(\xi^1,\dots,\xi^N) \neq 0$.
\end{proof}
Let $f \in X_N$. Then $f$ has a unique
representation $f(x) = \sum_{i=1}^N b_iu_i(x)$. The set of coefficients $\mathbf b:=(b_1,\dots,b_N)$
is uniquely determined from the linear system
$$
(f(\xi^1),\dots,f(\xi^N)) = \mathbf b U(\xi^1,\dots,\xi^N)
$$
and each $b_i$ is a linear combination of $f(\xi^j)$, $j=1,\dots,N$.
This completes the proof of Proposition \ref{gfP1}.
\end{proof}
As a direct corollary of Proposition \ref{gfP1} we obtain the following result on exact
numerical integration.
\begin{Proposition}\label{gfP2} Suppose $\{u_i(x)\}_{i=1}^N$ is a linearly independent system of integrable functions with respect to the measure $\mu$ on $\Omega$. Then there exist a set of points $\{\xi^j\}_{j=1}^N\subset \Omega$ and a set of weights $\{\lambda_j\}_{j=1}^N$ such that for any $f\in X_N:=\operatorname{span}(u_1(x),\dots,u_N(x))$ we have
$$
\int_\Omega f(x)d\mu = \sum_{j=1}^N \lambda_j f(\xi^j) .
$$
\end{Proposition}
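For a concrete instance of Proposition \ref{gfP2}, the Python sketch below (ours; the monomial system and the nodes are an ad hoc choice) computes the weights by solving the $N\times N$ linear system $\sum_j \lambda_j u_i(\xi^j)=\int_\Omega u_i\,d\mu$, $i=1,\dots,N$.
\begin{verbatim}
# Exact cubature with N nodes for span{1, x, ..., x^(N-1)} on [0, 1].
import numpy as np

N = 6
xi = np.linspace(0.05, 0.95, N)              # any nodes with det U != 0 work
U = np.vander(xi, N, increasing=True).T      # U[i, j] = xi_j ** i
a = 1.0 / np.arange(1, N + 1)                # int_0^1 x^i dx = 1/(i+1)
lam = np.linalg.solve(U, a)                  # exact weights (possibly signed)

# sanity check on a random element of X_N
c = np.random.default_rng(4).normal(size=N)
quad = lam @ np.polynomial.polynomial.polyval(xi, c)
exact = (c / np.arange(1, N + 1)).sum()
print(abs(quad - exact))                     # ~ machine precision
\end{verbatim}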
Proposition \ref{gfP2} shows that for any $N$-dimensional subspace of integrable functions we can find an exact cubature formula with $N$ nodes. However, this is not true
for numerical integration by Quasi-Monte Carlo methods, i.e., by methods with equal weights. Let $\alpha \in (0,1)$ be an irrational number. The trivial example of $\Omega=[0,1]$ with $\mu$ the Lebesgue measure and $f(x)=1/\alpha$ for $x\in [0,\alpha)$, $f(x) = -(1-\alpha)^{-1}$ for $x\in [\alpha,1]$, shows that there is no Quasi-Monte Carlo quadrature formula that integrates $f$ exactly. Indeed, an equal-weight formula with $m$ nodes, $k$ of which lie in $[0,\alpha)$, produces the value $\frac{1}{m}\left(\frac{k}{\alpha}-\frac{m-k}{1-\alpha}\right)$, which equals $\int_0^1 f\,d\mu=0$ only if $k=m\alpha$ -- impossible for irrational $\alpha$.
\subsection{General exact weighted discretization results}
For simplicity, we shall use the notation $L_q(\Omega, \mu)$, or $L_q(\Omega)$ or simply $L_q$ to denote the Lebesgue $L_q$-space defined with respect to the measure $\mu$ on $\Omega$, whenever it does not cause any confusion from the context.
We begin with a general result establishing an exact weighted discretization theorem in $L_q(\Omega, \mu)$ for a general measure space $(\Omega,\mu)$ and even exponent $q$ with at most
$$
M(N,q):= {N+q-1 \choose q}=\frac{(N+q-1)!}{q!(N-1)!}\asymp N^q
$$
nodes, where $N$ is the dimension of the space. We also give an example of a space with discrete $\Omega$ showing that the number ${N+q-1 \choose q}$ cannot be improved.
\begin{Theorem}\label{gfT1}
Let $q$ be an even positive integer, $N\in{\mathbb{N}}$, and $M:=M(N,q)$. For every $N$-dimensional real subspace $X_N\subset L_q(\Omega,\mu)$ we have that
$X_N \in \mathcal M^w\left(M,q,0\right)$.
\end{Theorem}
We point out that under some extra conditions on $\Omega$ a stronger result ensuring positivity of the weights will be given in Corollary~\ref{thm:positive general} of Section~\ref{sec:3-1}.
\begin{proof} For $\mathbf k=(k_1,\dots,k_N)\in \mathbb Z^N_+$ denote $u_\mathbf k:= u_1^{k_1}\cdots u_N^{k_N}$, where $X_N=\operatorname{span}\{u_1,\dots,u_N\}$. Consider the linear space
$$
X_N(q) :=\operatorname{span}\{u_\mathbf k:\, \mathbf k\in K(N,q)\},
$$
where $K(N,q):=\{(k_1,\dots,k_N)\in\mathbb Z_+^N:k_1+\dots+k_N=q\}$.
Then $\dim(X_N(q)) \le M:= M(N,q)$ and $X_N(q) \subset L_1(\Omega,\mu)$.
Proposition \ref{gfP2} implies that there exist a set of points $\{\xi^j\}_{j=1}^M\subset \Omega$ and a set of weights $\{\lambda_j\}_{j=1}^M$ such that for any $f\in X_N(q)$ we have
$$
\int_\Omega f(x)d\mu = \sum_{j=1}^M \lambda_j f(\xi^j) .
$$
In particular, this implies that for any $f\in X_N$ we have
$$
\int_\Omega f(x)^qd\mu = \sum_{j=1}^M \lambda_j f(\xi^j)^q .
$$
\end{proof}
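The proof is easy to turn into a computation. In the Python sketch below (ours; $q=2$, $N=3$, and random nodes are an ad hoc choice) the moment system on the product space $X_N(q)$ is solved for the weights; the products span only a $5$-dimensional space here, so the system is solved in the least-squares sense, and the residual vanishes.
\begin{verbatim}
# Exact weighted discretization in L_2 via Theorem gfT1, N = 3, q = 2.
import numpy as np

def u(x):   # orthonormal system {1, sqrt(2)cos x, sqrt(2)sin x} on [0, 2pi)
    return np.stack([np.ones_like(x),
                     np.sqrt(2) * np.cos(x), np.sqrt(2) * np.sin(x)])

rng = np.random.default_rng(5)
M = 6                                        # M(3, 2) = binom(4, 2) = 6
xi = rng.uniform(0.0, 2.0 * np.pi, size=M)
ux = u(xi)                                   # 3 x M array of values u_i(xi^j)
pairs = [(i, j) for i in range(3) for j in range(i, 3)]
P = np.stack([ux[i] * ux[j] for i, j in pairs])             # products u_k
mom = np.array([1.0 if i == j else 0.0 for i, j in pairs])  # their integrals
lam = np.linalg.lstsq(P, mom, rcond=None)[0]                # P lam = mom
c = rng.normal(size=3)                       # random f = sum_i c_i u_i
print(abs(lam @ (c @ ux)**2 - c @ c))        # int f^2 dmu = |c|^2;  ~ 0
\end{verbatim}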
Let $\Omega_M=\{\xi^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(\xi^j)=1/M$, $j=1,\dots,M$.
\begin{Theorem}\label{gfT2} Let $q$ be an even positive integer, $N\in{\mathbb{N}}$ and $M:=M(N,q)$.
There exist a discrete set $\Omega_M$ and an $N$-dimensional real subspace $X_N\subset L_q(\Omega_M)$ such that $X_N\notin \mathcal M^w(m,q,0)$ for any $m<M$.
\end{Theorem}
\begin{proof}
We begin with some preliminaries.
Suppose $\{u_j\}_{j=1}^N$ is a basis of $X_N$, where $X_N\subset L_q( \Omega_M,\mu)$. We have $X_N \in \mathcal M^w\left(m,q,0\right)$ if and only if there exist $\xi^\nu\in \Omega_M$ and $\lambda_\nu\in{\mathbb R}$, $\nu=1,\dots,m$, such that for any $b_j\in{\mathbb R}$, $j=1,\dots,N$, with $f=\sum_{j=1}^N b_j u_j$ we have
\begin{gather} \nonumber
0=\int_{ \Omega_M}|f|^qd\mu - \sum_{\nu=1}^m\lambda_\nu|f(\xi^\nu)|^q
= \int_{ \Omega_M}f^qd\mu - \sum_{\nu=1}^m\lambda_\nu f(\xi^\nu)^q \\
= \sum_{(k_1,\dots,k_N)\in K(N,q)}\frac{q!}{k_1!\dots k_N!}
\left(\int_{ \Omega_M} \prod_{j=1}^N u_j^{k_j} d\mu -\sum_{\nu=1}^m \lambda_\nu \prod_{j=1}^N u_j(\xi^\nu)^{k_j} \right)
\prod_{j=1}^N b_j^{k_j}, \label{eqn:q diff}
\end{gather}
where, as above $K(N,q):=\{(k_1,\dots,k_N)\in\mathbb Z_+^N:k_1+\dots+k_N=q\}$. Due to linear independence of multivariate monomials, \eqref{eqn:q diff} holds for any $b_j$ if and only if
\begin{equation}\label{eqn:K reduction1}
\sum_{\nu=1}^m \lambda_\nu \prod_{j=1}^N u_j(\xi^\nu)^{k_j}=\int_ {\Omega_M} \prod_{j=1}^N u_j^{k_j} d\mu,
\quad (k_1,\dots,k_N)\in K(N,q).
\end{equation}
Denote by $\mathcal P(N,q)$ the space of homogeneous algebraic polynomials in $N$ variables $x_1,\dots,x_N$ of degree $q$:
$$
\mathcal P(N,q):=\text{span} \{ x_1^{k_1}\cdots x_N^{k_N}:\, \mathbf k= (k_1,\dots,k_N)\in K(N,q)\}.
$$
Then $\dim(\mathcal P(N,q)) = M$ and by Lemma \ref{gfL1} with $u_\mathbf k(\mathbf x):= x_1^{k_1}\cdots x_N^{k_N}$, $\mathbf x=(x_1,\dots,x_N)$, $\mathbf k= (k_1,\dots,k_N)$, $\mathbf k\in K(N,q)$ there exists a set of points $\{\xi^\nu\}_{\nu=1}^M$ such that $D(\xi^1,\dots,\xi^M):= \det U(\xi^1,\dots,\xi^M) \neq 0$. Define $Y_N$ as a restriction of $\mathcal P(N,q)$ onto the
set $\Omega_M:= \{\xi^\nu\}_{\nu=1}^M$. Introduce a probability
measure $\mu$ on $\Omega_M$ by $\mu(\{\xi^\nu\})=1/M$, $\nu=1,\dots,M$.
Then on one hand from the definition of $\mu$ we obtain for any $f\in Y_N$ that
\begin{equation}\label{gf1'}
\int_{\Omega_M} fd\mu = \frac{1}{M} \sum_{\nu=1}^M f(\xi^\nu).
\end{equation}
On the other hand, define the space $X_N:= \operatorname{span}\Bigl\{u_j\Bigl|_{\Omega_M}:\ \ j=1,2,\cdots, N\Bigr\}$ of functions on $\Omega_M$ with measure $\mu$, where $u_j(\mathbf{x})=x_j$, $j=1,\dots, N$.
Assume that for any $f\in X_N$,
\begin{equation}\label{gf2'}
\int_{\Omega_M} f^qd\mu = \sum_{\nu=1}^M \lambda_\nu f(\xi^\nu)^q.
\end{equation}
Then by (\ref{eqn:K reduction1}) and the choice of $\{\xi^\nu\}_{\nu=1}^M$ the set of weights satisfying (\ref{gf2'}) is unique. Relation (\ref{gf1'}) shows that $\lambda_\nu =1/M$,
$\nu=1,\dots,M$. Therefore, none of these $\lambda_\nu$ is equal to zero.
This argument completes the proof.
\end{proof}
Now we show that one cannot obtain an exact weighted Marcinkiewicz-type theorem in $L_q$ when $q$ is not an even integer.
\begin{Proposition}
Consider $X_2:=\{\alpha \sin t+\beta \cos t: \alpha,\beta\in{\mathbb R}\}$ as a subspace of $L_q(\mathbb T)$, where $\mathbb T$ is the unit circle and $q$ is not an even integer, $1\le q<\infty$. Then $X_2\notin \mathcal M^w(m,q,0)$ for any $m\in{\mathbb{N}}$.
\end{Proposition}
\begin{proof}
Suppose to the contrary that for some distinct $\xi^\nu\in[0,2\pi)$ and non-zero $\lambda_\nu$, $\nu=1,\dots,m$, we have
\begin{equation}\label{eqn:circle example1}
\frac1{2\pi}\int_0^{2\pi} |f(t)|^qdt=\sum_{\nu=1}^m \lambda_\nu |f(\xi^\nu)|^q
\end{equation}
for every $f\in X_2$. For any $\theta\in{\mathbb R}$ we define $f_\theta(t):=\sin(\theta-t)\in X_2$ and note that
\[
\frac1{2\pi}\int_0^{2\pi} |f_\theta(t)|^qdt=\frac1{2\pi}\int_0^{2\pi} |\sin t|^qdt=c(q),
\]
where $c(q)>0$ depends only on $q$ and does not depend on $\theta$. Therefore, by~\eqref{eqn:circle example1} we have
\[
c(q)=\sum_{\nu=1}^m \lambda_\nu |f_\theta(\xi^\nu)|^q=\sum_{\nu=1}^m \lambda_\nu |\sin(\theta-\xi^\nu)|^q=:G(\theta),
\]
i.e., $G$ is a constant function. However, if $n-1<q\le n$, $n\in{\mathbb{N}}$ and $q$ is not an even integer, then the function $g(t):=|\sin t|^q$ is infinitely smooth when $t\in(0,\pi)$ while $g^{(n)}(0)$ does not exist. As $g$ is a $\pi$-periodic function and
$G(\theta)=\sum_{\nu=1}^m\lambda_\nu g(\theta-\xi^\nu)$, without loss of generality we may assume that $|\xi^\nu-\xi^{\nu'}|\neq \pi$ for $1\leq \nu, \nu'\leq m$.
It then follows that $G^{(n)}(\xi^1)$ does not exist, which contradicts the fact that $G$ is constant.
\end{proof}
\subsection{Relation between exact weighted discretization and recovery problem}
We now discuss a connection between exact weighted discretization theorem in $L_2$ and exact recovery.
\begin{Proposition}\label{gfP3} Let $X_N$ be an $N$-dimensional real subspace of $L_2(\Omega)$. Suppose sets of points $\{\xi^j\}_{j=1}^m$ and of weights $\{\lambda_j\}_{j=1}^m$ are such that for any $f\in X_N$ we have
\begin{equation}\label{gf1}
\|f\|_2^2 = \sum_{j=1}^m \lambda_j f(\xi^j)^2.
\end{equation}
Then for any orthonormal basis $\{u_i\}_{i=1}^N$ of $X_N$ we have for any $f\in X_N$
\begin{equation}\label{gf2}
f(x) = \sum_{j=1}^m \lambda_j f(\xi^j)D(x,\xi^j),\quad D(x,y):= \sum_{i=1}^N u_i(x)u_i(y).
\end{equation}
\end{Proposition}
\begin{proof} For $f,g\in X_N$ denote
$$
\langle f,g\rangle := \int_\Omega f(x)g(x)d\mu.
$$
Using the identity $4\langle f,g\rangle= \|f+g\|_2^2 - \|f-g\|_2^2$, we obtain from (\ref{gf1}) that for any $f,g\in X_N$
$$
\langle f,g\rangle = \sum_{j=1}^m \lambda_j f(\xi^j)g(\xi^j).
$$
This implies
$$
f(x) = \langle f,D(x,\cdot)\rangle = \sum_{j=1}^m \lambda_j f(\xi^j)D(x,\xi^j),
$$
which completes the proof.
\end{proof}
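A concrete illustration (ours; the uniform grid is one convenient choice of an exact formula): for the degree-$n$ trigonometric space, the uniform grid of $m=2n+1$ points with equal weights $1/m$ satisfies (\ref{gf1}), so every $f$ is recovered from its samples through the kernel $D(x,y)$.
\begin{verbatim}
# Exact recovery via the kernel D(x, y) = sum_i u_i(x) u_i(y) (sketch).
import numpy as np

def u(x, n):
    cols = [np.ones_like(x)]
    for k in range(1, n + 1):
        cols += [np.sqrt(2) * np.cos(k * x), np.sqrt(2) * np.sin(k * x)]
    return np.stack(cols, axis=1)

n, m = 4, 9                                   # m = 2n+1 uniform nodes suffice
xi = 2.0 * np.pi * np.arange(m) / m
rng = np.random.default_rng(6)
c = rng.normal(size=2 * n + 1)                # random f = sum_i c_i u_i
x = rng.uniform(0.0, 2.0 * np.pi, size=50)    # evaluation points
f_xi = u(xi, n) @ c                           # samples f(xi^j)
D = u(x, n) @ u(xi, n).T                      # kernel values D(x, xi^j)
f_rec = (1.0 / m) * D @ f_xi                  # sum_j (1/m) f(xi^j) D(x, xi^j)
print(np.max(np.abs(f_rec - u(x, n) @ c)))    # ~ machine precision
\end{verbatim}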
Note that if for some orthonormal basis $\{u_i\}_{i=1}^N$ of $X_N$ there exist
sets of points $\{\xi^j\}_{j=1}^m$ and of weights $\{\lambda_j\}_{j=1}^m$ such that
(\ref{gf2}) holds for all $f\in X_N$ then also (\ref{gf1}) holds for all $f\in X_N$.
Indeed,
$$
\sum_{j=1}^m \lambda_j f(\xi^j)^2 = \sum_{j=1}^m \lambda_j f(\xi^j)\int_\Omega f(y)D(\xi^j,y)d\mu
$$
$$
= \int_\Omega f(y)\left(\sum_{j=1}^m \lambda_j f(\xi^j) D(\xi^j,y)\right)d\mu = \int_\Omega f(y)^2d\mu = \|f\|_2^2.
$$
Given a finite subset $Q$ of $\mathbb Z^d$, we denote
$$
\mathcal T(Q):= \Bigl\{f: f=\sum_{\mathbf k\in Q}c_\mathbf k e^{i(\mathbf k,\mathbf x)},\ \ c_{\mathbf{k}}\in \mathbb{C},\ \ \mathbf k\in Q\Bigr\}.
$$
Also, given $\mathbf N=(N_1,\dots,N_d)\in\mathbb{Z}_+^d$, we write $\mathcal T(\mathbf N)$ for the set $\mathcal T(\Pi(\mathbf N))$ with $$\Pi(\mathbf N):=\bigl\{ \mathbf k=(k_1, \cdots, k_d)\in\mathbb Z_+^d:\ \ |k_j|\leq N_j,\ \ j=1,\cdots, d\bigr\}. $$
\begin{Proposition}\label{gfP4} Let $\mathbf N=(N_1,\dots,N_d)\in\mathbb{Z}_+^d$. Suppose a cubature formula
$\Lambda_m(\cdot,\xi)$ is exact for $\mathcal T(2\mathbf N)$.
Then $m\ge \prod_{j=1}^d (2N_j+1)$.
\end{Proposition}
\begin{proof} The proof is by contradiction. Suppose $m< \prod_{j=1}^d (2N_j+1)$.
Then, using the fact that $\dim \mathcal T(\mathbf N) = \prod_{j=1}^d (2N_j+1)$, we find a non-zero $f\in \mathcal T(\mathbf N)$ such that $f(\xi^\nu)=0$, $\nu=1,\dots,m$. Then $|f|^2\in \mathcal T(2\mathbf N)$ and
$\int_{\mathbb T^d}|f(\mathbf x)|^2d\mu \neq 0$, while $\Lambda_m(|f|^2,\xi)=0$. This contradiction proves Proposition~\ref{gfP4}.
\end{proof}
\subsection{Exact weighted discretization for spaces of spherical harmonics}
Theorems \ref{gfT1} and \ref{gfT2} solve the problem of optimal behavior of $m$ for
exact weighted discretization in the general setting. Theorem \ref{gfT1} shows that in case of even natural number $q$ we always have $X_N\in \mathcal M^w(M(N,q),q,0)$.
Theorem \ref{gfT2} shows that the parameter $m=M(N,q)$ is the best possible one in a general setting. However, it is well known that for specific subspaces $X_N$ the growth of $m$ allowing exact weighted discretization may be much slower than
$M(N,q)\asymp N^q$. In this subsection we show that the subspaces of spherical harmonics are as bad (in the sense of order) as the worst subspaces.
Let $ \mathcal{H}_n^2$ denote the space of spherical harmonics of degree $n$ on the unit sphere $\mathbb{S}^2$ of $\mathbb{R}^3$. It is known that $\text{dim}\ (\mathcal{H}_n^2)=2n+1$. Let $\{ Y_{n,j}\}_{j=1}^{2n+1}$ denote an orthonormal basis in $ \mathcal{H}_n^2$. Denote by $d\sigma$ the surface Lebesgue measure on $\mathbb{S}^2$ normalized by $\int_{\mathbb{S}^2} d\sigma=1$.
\begin{Theorem}\label{thm-1-1}
Assume that there exist distinct points $\xi_1, \cdots, \xi_m\in\mathbb{S}^2$ and real numbers $\lambda_1,\cdots, \lambda_m$ such that
\begin{equation}\label{1}\int_{\mathbb{S}^2} |f(x)|^2 \, d\sigma(x) =\sum_{j=1}^m \lambda_j |f(\xi_j)|^2,\ \ \ \forall f\in \mathcal{H}_n^2.\end{equation}
Then
$m\ge \frac {n(n+1)}2$. \end{Theorem}
For the proof of Theorem \ref{thm-1-1}, we need the following identity on ultraspherical polynomials, which can be found in \cite[p. 39, (5.7)]{A1}.
\begin{Lemma} For each positive integer $n$ and every $\lambda>0$,
\begin{equation}\label{2} \Bigl|C_n^\lambda (t)\Bigr|^2 =\sum_{j=0}^n b^{\lambda}_{n,j}\frac {2j+\lambda}{\lambda} C_{2j}^{\lambda}(t),\end{equation}
where
$$ b_{n,j}^{\lambda} =\frac{\lambda}{n+\lambda+j}\frac{ (\lambda)_{n-j} ( (\lambda)_j)^2 (2\lambda)_{n+j} (2j)!} {(n-j)! (j!)^2 (\lambda)_{n+j} (2\lambda)_{2j} },\ \ 0\leq j\leq n,$$
and
$(a)_j=a(a+1) \cdots (a+j-1)$.
\end{Lemma}
\begin{proof}[Proof of Theorem \ref{thm-1-1}]
First, we show that
\begin{equation}\label{1-3-0}\sum_{j=1}^m \lambda_j =1.\end{equation}
Recall that the function $(2n+1) C_n^{1/2}(x\cdot y)$, $x, y\in\mathbb{S}^2$, is the reproducing kernel of the space $ \mathcal{H}_n^2$.
Thus, for each $x\in\mathbb{S}^2$, we have
\begin{align*}
\int_{\mathbb{S}^2} |C_n^{1/2} (x\cdot y)|^2 \, d\sigma(y) =\sum_{j=1}^m \lambda_{j} |C_n^{1/2}(x\cdot \xi_j)|^2 .
\end{align*}
Integrating over $x\in\mathbb{S}^2$ then gives
\begin{align*}
\int_{\mathbb{S}^2} \int_{\mathbb{S}^2} |C_n^{1/2} (x\cdot y)|^2 \, d\sigma(y)d\sigma(x) & =\sum_{j=1}^m \lambda_{j} \int_{\mathbb{S}^2}|C_n^{1/2}(x\cdot \xi_j)|^2\, d\sigma(x) \\
&=\bigl(\sum_{j=1}^m \lambda_j\bigr)\int_{\mathbb{S}^2}\int_{\mathbb{S}^2} |C_n^{1/2} (x\cdot y)|^2 \, d\sigma(x)d\sigma(y).
\end{align*}
This implies
\eqref{1-3-0}.
Next, we show that
\begin{equation}\label{1-4-0}\int_{\mathbb{S}^2} f(x)\, d\sigma(x) =\sum_{j=1}^m \lambda_{j} f(\xi_j),\ \ \ \forall f\in \bigoplus_{j=0}^{n} \mathcal{H}^2_{2j}.\end{equation}
Indeed, using \eqref{1}, we have
\begin{align*}
\int_{\mathbb{S}^2} \int_{\mathbb{S}^2} |C^{1/2}_n(x\cdot y)|^2\, d\sigma(x) d\sigma(y)& =\sum_{j=1}^m \lambda_{j} \int_{\mathbb{S}^2}| C^{1/2}_n(x\cdot \xi_j)|^2 \, d\sigma(x) \\
&=\sum_{j=1}^m \sum_{k=1}^m \lambda_{j}\lambda_{k} |C^{1/2}_n(\xi_j\cdot \xi_k)|^2, \end{align*}
which, using \eqref{2} and \eqref{1-3-0}, equals
\begin{align*} & =b_{n,0}^{\frac12}+ \sum_{i=1}^n b_{n,i}^{\frac12}\sum_{j=1}^m \sum_{k=1}^m \lambda_{j}\lambda_{k} (4i+1) C_{2i}^{1/2} (\xi_j\cdot \xi_k).\end{align*}
It then follows by the addition formula for spherical harmonics that
\begin{align*}\int_{\mathbb{S}^2} \int_{\mathbb{S}^2} |C^{1/2}_n(x\cdot y)|^2\, d\sigma(x) d\sigma(y) &=b_{n,0}^{\frac12}+ \sum_{i=1}^n b_{n,i}^{\frac12}\sum_{\ell=1}^{4i+1} \sum_{j=1}^m \sum_{k=1}^m \lambda_{j}\lambda_{k} Y_{2i,\ell}(\xi_j) Y_{2i,\ell}(\xi_k)\\
&=b_{n,0}^{\frac12}+ \sum_{k=1}^n b_{n,k}^{\frac12}\sum_{j=1}^{4k+1}\Bigl| \sum_{i=1} ^m \lambda_{i} Y_{2k,j}(\xi_i) \Bigr|^2.
\end{align*}
Note that by \eqref{2},
$$ b_{n,0}^{\frac12} = \int_{\mathbb{S}^2} \int_{\mathbb{S}^2} |C^{1/2}_n(x\cdot y)|^2\, d\sigma(x) d\sigma(y).$$
It follows that for $1\leq k\leq n$ and $1\leq j\leq 4k+1$,
$$\sum_{i=1}^m \lambda_{i} Y_{2k,j}(\xi_i)=0.$$
This together with \eqref{1-3-0} implies \eqref{1-4-0}.
Finally, we show that
$$
m \ge (n_0+1) (2n_0+1),
$$
where $n_0$ is the integer part of $n/2$.
To see this, note that
for each $f\in V_n: =\bigoplus_{0\leq j\leq n/2}\mathcal{ H}_{2j}^2$, we have
$ |f|^2 \in \bigoplus_{0\leq j\leq n}\mathcal{ H}_{2j}^2.$
Thus, using \eqref{1-4-0}, we obtain
$$ \int_{\mathbb{S}^2} |f(x)|^2 \, d\sigma(x) =\sum_{i=1}^m \lambda_{i} |f(\xi_i)|^2,\ \ \forall f\in V_n.$$
In particular, this implies that
$$m \ge \text{dim} ( V_n)=\sum_{0\leq j\leq n/2} (4j+1)=(n_0+1) (2n_0+1).$$
\end{proof}
\subsection{Exact weighted discretization for trigonometric polynomials}\label{Ex--}
In this subsection we show that some subspaces of trigonometric polynomials are as bad (in the sense of order) as the worst subspaces. Recall that given a finite subset $Q$ of $\mathbb Z^d$,
$$
\mathcal T(Q):= \Bigl\{f: f=\sum_{\mathbf k\in Q}c_\mathbf k e^{i(\mathbf k,\mathbf x)},\ \ c_{\mathbf{k}}\in \mathbb{C},\ \ \mathbf k\in Q\Bigr\}.
$$
We begin with univariate trigonometric polynomials.
\begin{Theorem} Let $N$ be a given positive integer and let
$$
Q:=\Bigl\{j^2:\ \ j=1,2,\cdots, N\Bigr\}\cup\Bigl\{0, 1, 2, \cdots, 2N\Bigr\}.
$$
Assume that there are points $ x_1, \cdots, x_m\in [0,2\pi)$ and real numbers $\lambda_1,\cdots, \lambda_m$ such that
\begin{equation}\label{5-2}\frac 1{2\pi} \int_0^{2\pi} |f( x)|^2\, dx = \sum_{j=1}^m \lambda_j |f(x_j)|^2,\ \ \forall f\in \mathcal{T}(Q).\end{equation}
Then
$$
m\ge N^2\ge \frac {(|Q|-1)^2}9.
$$
\end{Theorem}
\begin{proof}
Note first that since $\text{Re}(a\overline{b})=\frac1{4}\big(|a+b|^2-|a-b|^2\big)$, \eqref{5-2} implies that
\begin{equation}\label{5-2'}
\frac 1{2\pi}\int_0^{2\pi} f(x) \overline{g(x)}\, dx =\sum_{j=1}^m \lambda_j f(x_j) \overline{g(x_j)},\ \ \forall f,g\in \mathcal{T}(Q).
\end{equation}
Applying this last formula to $f(x)=e^{ijx}$ and $g(x)=e^{ikx}$ with $j,k\in Q$, and using linearity of the integral, we then conclude that
\begin{equation}\label{5-3}
\frac 1{2\pi}\int_0^{2\pi} h(x)\, dx =\sum_{j=1}^m \lambda_j h(x_j),\ \ \forall h\in \mathcal{T}(Q-Q).
\end{equation}
Next, we note that $3N+1-\sqrt{2N}\leq |Q| \leq 3N+1$, and
\begin{eqnarray*} Q-Q &\supset&
\Big(\bigcup_{k=1}^N \{ k^2, k^2-1, \cdots, k^2-2N\}\Big)
\\
&&\cup\Big(\{1^2, 2^2, \cdots, N^2\}\Big).
\end{eqnarray*}
Since
$ k^2-(k-1)^2=2k-1<2N$ for $1\leq k\leq N$,
this together with symmetry implies that
\begin{equation}\label{5-4}
\Bigl\{ \pm j:\ \ j=0,1,2,\cdots, N^2\Bigr\}\subset Q-Q.
\end{equation}
Finally, \eqref{5-4} combined with \eqref{5-3} implies that the cubature formula in \eqref{5-3} is exact for every $f\in\mathcal{T}(N^2)$.
Thus,
by Proposition \ref{gfP4} we obtain the required lower bound.
\end{proof}
It is worth mentioning that the construction of the set $Q$ satisfying the conditions
$|Q| \asymp N$ and \eqref{5-4}
is closely related to so-called Sidon sets; see, e.g., \cite{sidon1, sidon2}.
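The combinatorics of $Q-Q$ is easy to verify directly; the short Python check below (illustration only) confirms that $|Q|\asymp 3N$ and that $Q-Q$ contains all integers $j$ with $|j|\le N^2$, which is what drives the lower bound.
\begin{verbatim}
# Check |Q| ~ 3N and {-N^2, ..., N^2} inside Q - Q (illustration).
N = 12
Q = {j * j for j in range(1, N + 1)} | set(range(0, 2 * N + 1))
diff = {a - b for a in Q for b in Q}
print(len(Q), all(j in diff for j in range(-N * N, N * N + 1)))
\end{verbatim}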
We now give one more example of a subspace of multivariate trigonometric polynomials, which is ``difficult'' for exact weighted discretization. Let
$\mathcal R\mathcal T(n)$ denote the set of real trigonometric polynomials of degree at most $n$.
For $q=2^s$, $s\in{\mathbb{N}}$, consider the following subspace
$$
X_N:=\{f(x_1,\dots,x_q)=f_1(x_1)+\cdots+f_q(x_q):\, f_j\in \mathcal R\mathcal T(n),\, j=1,\dots,q\}.
$$
Then $N=\dim(X_N) = (2n+1)q$. Assume that sets $\{\xi^\nu\}$ and $\{\lambda_\nu\}$, $\nu=1,\dots,m$ are such that for any $f\in X_N$ we have
$$
(2\pi)^{-q}\int_{\mathbb T^q}f(\mathbf x)^qd\mathbf x = \sum_{\nu=1}^m \lambda_\nu f(\xi^\nu)^q.
$$
Using the form $q=2^s$ and applying $s$ times the argument used above to derive (\ref{5-2'}) from (\ref{5-2}), we obtain that for any $f_j\in \mathcal R\mathcal T(n)$, $j=1,\dots,q$ we have
$$
(2\pi)^{-q}\int_{\mathbb T^q}f_1(x_1)\cdots f_q(x_q)d\mathbf x = \sum_{\nu=1}^m \lambda_\nu f_1(\xi^\nu_1)\cdots f_q(\xi^\nu_q).
$$
In particular, this implies that the cubature formula with nodes $\{\xi^\nu\}$ and weights $\{\lambda_\nu\}$, $\nu =1,\dots,m$, is exact for $\mathcal T(\mathbf N)$, $\mathbf N=(n,\dots,n)$. Therefore, by
Proposition \ref{gfP4} we get $m\ge n^q$.
\section{Exact weighted discretization with constraints on the weights}
\label{constr}
In Section \ref{Ex} we discussed the problem of exact weighted discretization and
related problems of recovery and numerical integration. In that setting we did not impose any restrictions on the weights $\{\lambda_\nu\}$. In this section we consider numerical integration and exact weighted discretization with additional constraints on the
weights $\{\lambda_\nu\}$. We only consider two natural types of constraint.

{\textsc{Positivity.}} We assume that $\lambda_\nu \ge 0$, $\nu=1,\dots,m$.

{\textsc{Stability.}} We assume that $\sum_{\nu=1}^m |\lambda_\nu| \le B$.

In Subsection \ref{section stable exact} we will also consider a more general stability property than the one above.
\subsection{ Exact weighted discretization with positive weights}
\label{sec:3-1}
In this section, we shall prove that given an $N$-dimensional subspace of continuous and integrable functions on a sequentially compact space, one can always find an exact positive cubature formula with at most $N$ nodes.
\begin{Theorem}\label{thm-4-1}
Let $\Omega$ be a sequentially compact topological space with the probability Borel measure $\mu$. Then for each given $N$-dimensional real linear subspace $X_N$ of $L_1(\Omega,\mu)\cap C(\Omega)$, there exist a set of $N$ points $\{\xi^1, \cdots, \xi^N\}\subset \Omega$ and a set of nonnegative real numbers $\lambda_1,\cdots, \lambda_N$ such that
\begin{equation}\label{4-1-0}
\int_{\Omega} f(x)\, d\mu(x) =\sum_{j=1}^{N}\lambda_j f(\xi^j),\quad \ \forall f\in X_N.
\end{equation}
\end{Theorem}
This theorem guarantees the existence of an exact positive cubature formula with at most $N$ nodes. One can observe that we actually have $2N$ parameters, as both the nodes and the weights can be chosen, while the dimension of the subspace is $N$. Therefore, in many concrete situations a reduction of the number of nodes is possible. Perhaps the simplest example is the classical Gaussian quadrature of the highest algebraic degree of exactness; see, e.g.,~\cite{Krylov}.
In the case when $\Omega$ is a compact subset of ${\mathbb R}^d$, and $X_N$ is the space of all real algebraic polynomials in $d$ variables of total degree at most $n$, Theorem~\ref{thm-4-1} is known as the Tchakaloff theorem, and its proof can be found in \cite{P} (see also \cite{Wi}). It is worthwhile to point out that
Theorem~\ref{thm-4-1} here is applicable in a more general setting, and our proof is different from that of the Tchakaloff theorem in \cite{P}.
Theorem \ref{thm-4-1} has two useful corollaries, the first of which provides an exact positive cubature formula with one more node (i.e., $N+1$ nodes instead of $N$ nodes) and the additional property that the sum of all the weights $\lambda_\nu$ is $1$.
\begin{Corollary}\label{cor-3-1-FD} Under the conditions of Theorem~\ref{thm-4-1}, there
exist a set of $N+1$ points $\{\xi^1, \cdots, \xi^{N+1}\}\subset \Omega$ and a set of nonnegative real numbers $\lambda_1,\cdots, \lambda_{N+1}$ such that $\sum_{j=1}^{N+1} \lambda_j=1$ and
\begin{equation}\label{4-1-0c}
\int_{\Omega} f(x)\, d\mu(x) =\sum_{j=1}^{N+1}\lambda_j f(\xi^j),\quad \ \forall f\in X_N.
\end{equation}
\end{Corollary}
The proof of Corollary \ref{cor-3-1-FD} is almost identical to that of Theorem \ref{thm-4-1}. The only difference is that one uses the Carath\'eodory theorem instead of Lemma~\ref{lem:carath positive} below.
The second corollary guarantees the existence of an exact weighted discretization theorem
with positive weights and at most $M(N,q)$ nodes for each even positive integer $q$ and each given $N$-dimensional real linear subspace of $L_q$, where
$$
M(N,q):= {N+q-1 \choose q}=\frac{(N+q-1)!}{q!(N-1)!}\asymp N^q.
$$
Following the proof of Theorem \ref{gfT1}, one can easily deduce from Theorem~\ref{thm-4-1} the following corollary, which in particular improves Theorem \ref{gfT1} in the sense that all the weights $\lambda_\nu$ are nonnegative.
\begin{Corollary}\label{thm:positive general}
Assume that the conditions of Theorem~ \ref{thm-4-1} are satisfied, and
$X_N\subset L_q(\Omega,\mu)$ for some positive integer $q$. Let $M:={N+q-1 \choose q}$. Then there exist $\xi^\nu\in\Omega$ and $\lambda_\nu\ge0$, $\nu=1,\dots,M$, such that
\[
\int_{\Omega}f^qd\mu = \sum_{\nu=1}^M\lambda_\nu f(\xi^\nu)^q,\quad \ \forall f\in X_N.
\]
In particular, if $q$ is even, then $X_N \in \mathcal M^w_+(M,q,0)$.
\end{Corollary}
Note that according to Theorem \ref{gfT2}, the lower bound ${N+q-1 \choose q}$ for the number of nodes in the exact weighted discretization theorem
is sharp even without the positivity assumption.
Now we turn to the proof of Theorem \ref{thm-4-1}. We will use the notation $\operatorname{conv}(E)$ to denote the convex hull of a set $E\subset{\mathbb R}^M$, while $\overline{\text{conv} (E)}$ will denote the closure of $\operatorname{conv}(E)$. We need the following lemma:
\begin{Lemma}\label{lem:carath positive}
Suppose that $E\subset{\mathbb R}^M$ and $z\in\operatorname{conv}(E)$. Then one can find $ y_\nu\in E$ and $\lambda_\nu\ge0$, $\nu=1,\dots,M$, satisfying $z=\sum_{\nu=1}^M\lambda_\nu y_\nu$.
\end{Lemma}
\begin{proof}
This is a corollary from a generalization of the Carath\'eodory theorem. For instance, one can use~\cite[Corollary~17.1.2, p.~156]{Ro} for one-element sets $C_y:=\{y\}$, $y\in E=:I$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm-4-1}] Let $u_1,\cdots, u_N$ be a basis of the linear space $X_N$.
Define the mapping
$\Phi:\ \ \Omega\to {\mathbb R}^N$ by $\Phi(x)=(u_1(x),\cdots, u_N(x))$, $x\in\Omega$.
By linearity, it is easily seen that relation \eqref{4-1-0} is equivalent to the system of equations:
\begin{equation}\label{3-3-FD}
\sum_{j=1}^N \lambda_j \Phi(x_j)=\mathbf{a},\ \ x_1,\dots, x_N\in\Omega,\ \ \lambda_1,\dots, \lambda_N\ge 0,
\end{equation}
where $$\mathbf{a}=\Bigl( \int_{\Omega} u_1(x) \, d\mu, \int_{\Omega} u_2(x)\, d\mu, \dots, \int_{\Omega} u_N(x)\, d\mu\Bigr)=\int_{\Omega} \Phi(x) \, d\mu(x).$$
For the proof of \eqref{3-3-FD}, by Lemma \ref{lem:carath positive}, it suffices to prove that $\mathbf{a} \in\text{conv} (E)$, where $E=\Phi(\Omega)\subset \mathbb{R}^N$.
To this end, we first prove that $\mathbf{a} \in \overline {\text{conv} (E)}$. Assume to the contrary that this is not true. Then by the convex separation theorem in $\mathbb{R}^N$, we can find $\alpha\in\mathbb{R}^N$ and $t\in\mathbb{R}$ such that $\alpha \cdot \mathbf{a} >t\ge \sup_{x\in \Omega} \alpha \cdot \Phi(x)$. This gives a contradiction since $\alpha\cdot\mathbf{a}=\int_\Omega \alpha\cdot \Phi(x) d\mu(x)$.
Next, we show that $\mathbf{a} \in {\text{conv} (E)}$. Since $\mathbf{a} \in \overline {\text{conv} (E)}$, it follows by the Carath\'eodory theorem that for any positive integer $n$, there exist $\lambda_{n,\nu}\ge 0$ and $\xi_{n,\nu}\in\Omega$, $\nu=1,\cdots, N+1$ such that
$\sum_{\nu=1}^{N+1} \lambda_{n,\nu}=1$ and
$$ \Bigl\| \mathbf{a} - \sum_{\nu=1}^{N+1} \lambda_{n,\nu} \Phi(\xi_{n,\nu})\Bigr\|\leq n^{-1}.$$
Since $\Omega$ is sequentially compact, without loss of generality, we may assume that
$\lim_{n\to\infty}\lambda_{n,\nu}=\lambda_\nu\ge 0$ and $\lim_{n\to\infty} \xi_{n,\nu} =\xi_\nu \in \Omega$, $\nu=1, \dots, N+1$. Then $\sum_{\nu=1}^{N+1}\lambda_{\nu} =1$ and
by continuity of the mapping $\Phi: \Omega \to \mathbb{R}^N$,
$\mathbf{a} =\sum_{\nu=1}^{N+1} \lambda_\nu \Phi(\xi_\nu)$. This proves that $\mathbf{a} \in \text{conv}(E)$.
\end{proof}
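Although Theorem \ref{thm-4-1} is an existence result, positive cubature weights are easy to find numerically. The Python sketch below (ours; replacing $\Omega$ by a fine candidate grid and invoking nonnegative least squares are simplifications, not the argument above) solves the moment equations with the constraint $\lambda_j\ge0$; the solution returned by NNLS is automatically supported on at most $N$ nodes.
\begin{verbatim}
# Positive cubature weights via nonnegative least squares (sketch).
import numpy as np
from scipy.optimize import nnls

N = 7                                  # X_N: algebraic polynomials, degree < 7
grid = np.linspace(0.0, 1.0, 200)      # candidate nodes in Omega = [0, 1]
A = np.vander(grid, N, increasing=True).T     # A[i, j] = grid_j ** i
a = 1.0 / np.arange(1, N + 1)                 # moments of the Lebesgue measure
lam, resid = nnls(A, a)
support = np.flatnonzero(lam > 1e-12)
print("residual:", resid, " positive nodes used:", len(support))   # <= N
\end{verbatim}
For a sufficiently fine grid the residual should be negligible: by the proof above, the moment vector $\mathbf a$ lies in the conic hull of $\{\Phi(x): x\in\Omega\}$, which the grid approximates.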
\subsection{Stable exact weighted discretization}\label{section stable exact}
In this subsection we prove that the one-sided Marcinkiewicz-type estimate implies the existence of an exact cubature formula with the same nodes.
Let $\Omega$ be a subset of ${\mathbb R}^d$ equipped with a probability Borel measure $\mu$.
Let $1\leq p\leq \infty$ and $X_N\subset L_p(\Omega, \mu)\cap C(\Omega)$ be an $N$-dimensional real subspace.
\begin{Theorem} Assume that there exist a finite subset $W\subset \Omega$ and a set $\{\mu_{\omega}:\ \ \omega\in W\}$ of positive numbers such that
\begin{equation}\label{3-1-0}
\|f\|_{L_p(\Omega, \mu)} \leq C_1 \Bigl(\sum_{\omega\in W} \mu_{\omega} |f(\omega)|^p\Bigr)^{1/p},\quad \ \forall f\in X_N,
\end{equation}
if $p<\infty$, and
\[
\|f\|_{L_\infty(\Omega, \mu)} \leq C_1\sup_{\omega\in W} |f(\omega)|,\quad \ \forall f\in X_N,
\]
if $p=\infty$. Then there exists a sequence of real numbers $\{ \lambda_\omega:\ \ \omega\in W\}$ such that
$$ \int_{\Omega} f(x)\, d\mu(x) =\sum_{\omega\in W} \lambda_{\omega} f(\omega),\ \ \ \forall f\in X_N,$$
and
\begin{equation}\label{3-1}
\left(\sum_{\omega\in W} \left|\frac{\lambda_\omega}{\mu_{\omega}}\right|^{p'} \mu_{\omega}\right)^{1/p'}\leq C_1,
\end{equation}
where $\tfrac1p+\tfrac1{p'}=1$ if $p\ne1$, while in the case of $p=1$ we obtain
$$|\lambda_\omega|\leq C_1\mu_\omega \quad \mbox{for all}\quad \omega\in W$$
instead of~\eqref{3-1}.
\end{Theorem}
\begin{proof}
We give the proof for the case $p\ne1$ only, the required modifications for $p=1$ are obvious. Denote by $X_N^\ast$ the dual space of $X_N$. Define $$ E:=\Bigl\{ \sum_{\omega\in W} \lambda_\omega \delta_\omega:\ \ \lambda_\omega\in{\mathbb R},\ \ \Bigl(\sum_{\omega\in W} |\lambda_\omega|^{p'} \mu_{\omega}^{1-p'}\Bigr)^{1/p'}\leq C_1\Bigr\},$$
where $\delta_x$ denotes the linear functional in $ X_N^\ast$ given by $\delta_x(g)=g(x)$, $g\in X_N$, $x\in\Omega$. Clearly, it is sufficient to show that the linear functional $\ell\in X_N^\ast$ given by
$$\ell(g):=\int_{\Omega} g(x)\, d\mu(x),\ \ g\in X_N,$$
lies in the set $E$. Assume to the contrary that $\ell\notin E$. It then follows by the convex separation theorem that there exists a nonzero function $f\in X_N$ such that
$$ \Bigl\langle \sum_{\omega\in W} \lambda_\omega \delta_{\omega}, f\Bigr\rangle =\sum_{\omega\in W} \lambda_{\omega} f(\omega) <1< \int_{\Omega} f(x)\, d\mu(x)\leq \|f\|_p$$
for every sequence $\{\lambda_\omega\}_{\omega\in W}$ of real numbers satisfying \eqref{3-1}. Taking the supremum over all real sequences $\{\lambda_\omega\}_{\omega\in W}$ satisfying \eqref{3-1}, we obtain
$$C_1\Bigl(\sum_{\omega\in W} \mu_{\omega} |f(\omega)|^p\Bigr)^{1/p} < \|f\|_{L_p(\Omega, d\mu)},$$
which contradicts the condition \eqref{3-1-0}.
\end{proof}
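In the case $p=2$ the weights from this theorem can be computed in closed form. The Python sketch below (our computation, consistent with the duality argument above; the monomial system and random nodes are an ad hoc choice) finds, among all exact weight vectors, the one minimizing the weighted norm in \eqref{3-1}; it is a weighted least-norm solution of the moment equations.
\begin{verbatim}
# Minimal weighted-norm exact weights for p = 2 (sketch).
import numpy as np

N, M = 5, 40
rng = np.random.default_rng(7)
w = np.sort(rng.uniform(0.0, 1.0, size=M))   # nodes W in Omega = [0, 1]
mu = np.full(M, 1.0 / M)                     # weights mu_w of the MZ inequality
U = np.vander(w, N, increasing=True).T       # U[i, j] = u_i(w_j), monomials
a = 1.0 / np.arange(1, N + 1)                # int u_i dmu, Lebesgue measure
D = np.diag(mu)
# argmin of sum_w lam_w^2 / mu_w subject to U lam = a:
lam = D @ U.T @ np.linalg.solve(U @ D @ U.T, a)
print(np.linalg.norm(U @ lam - a))           # exactness, ~ 0
print(np.sqrt(np.sum(lam**2 / mu)))          # the controlled dual norm
\end{verbatim}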
\section{Marcinkiewicz-type inequality for the hyperbolic cross polynomials for $q=\infty$ }
\label{hyper}
Recall that the set of hyperbolic polynomials is defined as
$$
\mathcal{T}(N):= \mathcal T(N,d) := \Bigl\{ f:\ \ f=\sum_{\mathbf{k} \in \Gamma(N)} c_{\mathbf{k}} e^{i (\mathbf{k},\mathbf x)}\Bigr\},$$
where $ \Gamma(N)$ is the hyperbolic cross
$$ \Gamma(N):=\Gamma(N,d) :=\Bigl\{\mathbf{k}\in\mathbb Z^d:\ \ \ \prod_{j=1}^d \max\{ |k_j|, 1\} \leq N\Bigr\}.$$
Throughout this section, we define
$$\alpha_d:=\sum_{j=1}^d \frac 1j\qquad\mbox{ and}\qquad
\beta_d:=d-\alpha_d.$$ We use the following notation here. For $\mathbf x\in\mathbb T^d$ and $j\in\{1,\dots,d\}$ we denote $\mathbf x^j:=(x_1,\dots,x_{j-1},x_{j+1},\dots,x_d)$.
Our main result in this section can be stated as follows.
\begin{Theorem}\label{thm-2-1} For each $d\in{\mathbb{N}}$ and each $N\in{\mathbb{N}}$ there exists a set $W(N,d)$ of at most $C_d N^{\alpha_d}(\log N)^{\beta_d}$ points in $[0, 2\pi)^d$ such that for all $f\in\mathcal{T}(N)$,
$$ \|f\|_\infty \le C(d) \max_{\mathbf w\in W(N,d) }|f(\mathbf w)|.$$
\end{Theorem}
Theorem \ref{thm-2-1} for $d=1$ is well known (see, for instance, Subsection 6.1 for a detailed discussion). We prove Theorem \ref{thm-2-1} by induction on $d$. For the reader's convenience, we first demonstrate in Subsection \ref{subsection2:1} the step from $d=1$ to $d=2$, and then demonstrate in Subsection \ref{subsection2:2} the general step from $d-1$ to $d$. An important ingredient in the proof is the following Bernstein inequality (see, for instance, \cite{Tmon}):
\begin{Lemma}\label{lem-2-1}
For each $f\in \mathcal{T}(N)$,
$$\|f^{(1,\cdots, 1)}\|_\infty \leq C(d) N (\log N)^{d-1} \|f\|_\infty.$$
\end{Lemma}
\subsection{Step from $d=1$ to $d=2$}\label{subsection2:1}
This subsection is devoted to the proof of Theorem \ref{thm-2-1} for $d=2$. As we already pointed out above, Theorem \ref{thm-2-1} is known in the case $d=1$.
For $M\in{\mathbb{N}}$ define
$$
V_M:=\Bigl\{ \frac {2\pi j }{M}:\ \ j=0,1,\cdots, M-1\Bigr\} .
$$
For natural numbers $M$ and $N$ we set
$$
V(M,N,2,j) := \{\mathbf x\in \mathbb T^2: x_j \in V_M,\, \mathbf x^j \in W(N,1)\}, \quad j=1,2,
$$
where the index $2$ stands for the dimension.
Finally, define
$$
W:=W_{M,N}:= V(M,N,2,1)\cup V(M,N,2,2).
$$
Let $\varepsilon\in (0, 1/8)$ be a small positive number. In our further argument we specify $M\in{\mathbb{N}}$ to be the smallest number satisfying the inequality
\begin{equation}\label{4.1b}
C_0 M^{-2} N \log N \leq \varepsilon,
\end{equation}
with a sufficiently large positive constant $C_0$.
It is easily seen that then $|W_{M,N}| \le C(\varepsilon) N^{3/2}(\log N)^{1/2}$.
In this subsection, we show that for each $f\in \mathcal{T}(N)$,
\begin{equation}\label{2-1}
\|f\|_\infty \leq C(\varepsilon) \max_{\mathbf w\in W} |f(\mathbf w)|.
\end{equation}
Assume that $\mathbf x \in [0,2\pi)^2$ is such that $\|f\|_\infty =|f(\mathbf x)|$. Let $\mathbf a = (a_1,a_2)$, $a_j\in V_M$, $j=1,2$ be such that $0\le x_j-a_j\leq 2\pi M^{-1}$, $j=1,2$.
Then using Lemma \ref{lem-2-1}, we obtain
$$
\Bigl| f(a_1,a_2)-f(a_1,x_2) - f(x_1,a_2)+f(x_1,x_2)\Bigr|=\Bigl| \int_{a_1}^{x_1} \int_{a_2}^{x_2} f^{(1,1)}(u,v) \, dvdu\Bigr|
$$
$$
\leq C_1 N( \log N) M^{-2}\|f\|_\infty \leq \varepsilon \|f\|_\infty
$$
provided $C_1\le C_0$.
In particular, this implies that
\begin{align*}
\max\Bigl\{|f(a_1,a_2)|, |f(a_1,x_2)|, | f(x_1,a_2)| \Bigr\}\ge \frac {1-\varepsilon} 3 \|f\|_\infty.
\end{align*}
Suppose we have (the other two cases are treated in the same way)
\begin{equation}\label{4.2n}
|f(a_1,x_2)| \ge \frac{1-\varepsilon}{3} \|f\|_\infty.
\end{equation}
Then by Theorem \ref{thm-2-1} with $d=1$ we obtain
$$
|f(a_1,x_2)| \le \max_{u\in\mathbb T} |f(a_1,u)|
$$
\begin{equation}\label{4.3n}
\le C(1) \max_{w\in W(N,1)}|f(a_1,w)| \le C(1)\max_{\mathbf w\in W }|f(\mathbf w)|.
\end{equation}
Inequalities (\ref{4.2n}) and (\ref{4.3n}) imply (\ref{2-1}) with $W=W_{M,N}$, where $M$ satisfies condition (\ref{4.1b}).
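The construction of this subsection is easy to carry out numerically. In the sketch below we take, as illustrative assumptions, $C_0=4$, $\varepsilon=0.1$, and $W(N,1)$ an equispaced grid of $4N$ points (an equispaced grid of size $CN$ suffices in the univariate case, as noted above); the printed ratios stay roughly constant, confirming the cardinality bound $|W_{M,N}|\le C(\varepsilon)N^{3/2}(\log N)^{1/2}$.
\begin{verbatim}
import numpy as np

# Sketch of the d = 2 point set W_{M,N}; C0, eps and the choice of
# W(N,1) as an equispaced grid are illustrative assumptions.
def W_MN(N, C0=4.0, eps=0.1):
    W1 = 2 * np.pi * np.arange(4 * N) / (4 * N)          # W(N,1)
    M = int(np.ceil(np.sqrt(C0 * N * np.log(N) / eps)))  # (4.1b)
    VM = 2 * np.pi * np.arange(M) / M
    pts = {(v, w) for v in VM for w in W1}               # V(M,N,2,1)
    pts |= {(w, v) for w in W1 for v in VM}              # V(M,N,2,2)
    return pts

for N in (16, 64, 256):
    P = W_MN(N)
    print(N, len(P), len(P) / (N ** 1.5 * np.log(N) ** 0.5))
\end{verbatim}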
\subsection{Step from $d-1$ to $d$ } \label{subsection2:2}
In this subsection we prove Theorem \ref{thm-2-1} for all $d\ge 2$.
We use induction on the dimension $d$.
Assume that Theorem \ref{thm-2-1} has been proved for the case of $d-1$. That is, there exists a set $W(N,d-1)\subset [0, 2\pi)^{d-1}$ of at most $C_{d-1} N^{\alpha_{d-1}}(\log N)^{\beta_{d-1}}$ points such that
$$
\|f\|_\infty \leq C(d-1) \max_{\mathbf w\in W(N,d-1)} |f(\mathbf w)|,\ \ \ \forall f\in \mathcal{T}(N,d-1).
$$
For natural numbers $M$ and $N$ define
$$
V(M,N,d,j) := \{\mathbf x\in \mathbb T^d: x_j \in V_M,\, \mathbf x^j \in W(N,d-1)\}, \quad j=1,\dots,d.
$$
Finally, define
$$
W(d):=W_{M,N}(d):= \cup_{j=1}^d V(M,N,d,j).
$$
Let $\varepsilon\in (0, 1/8)$ be a small positive number. In our further argument we specify $M\in {\mathbb{N}}$ to be the smallest number satisfying the inequality
\[
C_0(d) M^{-d} N (\log N)^{d-1} \leq \varepsilon,
\]
with a sufficiently large positive constant $C_0(d)$.
It is easily seen that then
\begin{equation}\label{4.6n}
|W_{M,N}(d)| \leq C_{d-1} M N^{\alpha_{d-1}}(\log N)^{\beta_{d-1}}\leq C(d,\varepsilon) N^{\alpha_d} (\log N)^{\beta_d},
\end{equation}
where
$\alpha_d=\alpha_{d-1}+\frac 1d =\sum_{j=1}^d \frac 1j$ and
$\beta_d=\beta_{d-1} + 1-\frac 1d =d-\alpha_d$.
Assume that $\mathbf x \in [0,2\pi)^d$ is such that $\|f\|_\infty =|f(\mathbf x)|$. Let $\mathbf a = (a_1,\dots,a_d)$, $a_j\in V_M$, $j=1,\dots,d$ be such that $0\le x_j-a_j\leq 2\pi M^{-1}$, $j=1,\dots,d$.
By a straightforward calculation we have
\begin{align*}
& \Bigl|\int_{a_1}^{x_1} \dots \int_{a_d}^{x_d} f^{(1,\dots, 1)}(u_1, \dots, u_d)\, du_d \dots du_1 \Bigr|
=\Bigl| \sum_{\mathbf{y}\in \mathbf{A}} (-1)^{n_{\mathbf{y}}} f(\mathbf{y})\Bigr|,
\end{align*}
where
$$
\mathbf{A}:=\Bigl\{(y_1,\dots, y_d):\ \ y_j=a_j\ \ \text{or}\ \ x_j\ \ \text{for $j=1,\dots, d$}\Bigr\}
$$
and
$$n_{\mathbf{y}} =\left|\Bigl\{ j:\ \ y_j =a_j,\ \ 1\leq j\leq d\Bigr\}\right|.
$$
It follows by Lemma \ref{lem-2-1} that
\begin{align*}
\sum_{\mathbf{y}\in \mathbf{A}\setminus \{\mathbf{x}\} } |f(\mathbf {y})| \ge (1-C(d) N(\log N)^{d-1} M^{-d})\|f\|_\infty \ge (1-\varepsilon)\|f\|_\infty,
\end{align*}
provided $C(d)\le C_0(d)$.
This implies
$$
\max_{\mathbf{y}\in \mathbf{A}\setminus \{\mathbf{x}\}} |f(\mathbf{y})|\ge \frac{1-\varepsilon}{2^d-1} \|f\|_\infty.
$$
Let $\mathbf y^0\in \mathbf{A}\setminus \{\mathbf{x}\}$ be the one for which the inequality
\begin{equation}\label{4.4n}
|f(\mathbf y^0)| \ge \frac{1-\varepsilon}{2^d-1}\|f\|_\infty
\end{equation}
holds. Then there exists $j:=j_{\mathbf{y^0}}\in\{1,\dots, d\}$ such that $y_j^0=a_j$. For simplicity of notation assume that $j=1$. By the induction assumption and the definition of $W_{M,N}(d)$ we have
$$
|f(\mathbf{y^0})|\leq \sup_{\mathbf{u}\in [0, 2\pi)^{d-1}}|f(a_1, \mathbf{u})|\leq C(d-1)\max_{\mathbf{w}\in W(N,d-1)} |f(a_1, \mathbf{w})|
$$
\begin{equation}\label{4.5n}
\leq C(d-1) \max_{\mathbf{w}\in W_{M,N}(d)}|f(\mathbf w)| .
\end{equation}
Combining inequalities (\ref{4.4n}), (\ref{4.5n}) and taking into account bound (\ref{4.6n}) we
complete the proof of Theorem \ref{thm-2-1} with $W(N,d) = W_{M,N}(d)$.
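The cardinality recursion behind (\ref{4.6n}) can be checked numerically. The sketch below uses the same illustrative constants as before; the factor $d\,M\,|W(N,d-1)|$ counts the union $\cup_j V(M,N,d,j)$ with multiplicity and is therefore an upper bound. The printed ratio $|W_{M,N}(d)|/\bigl(N^{\alpha_d}(\log N)^{\beta_d}\bigr)$ stays bounded in $N$ for each fixed $d$.
\begin{verbatim}
import numpy as np

# Cardinality of the recursive construction W_{M,N}(d) compared with
# the target N^{alpha_d} (log N)^{beta_d}; C0 and eps are illustrative.
def card(N, d, C0=4.0, eps=0.1):
    if d == 1:
        return 4 * N                           # |W(N,1)| <= C N
    M = int(np.ceil((C0 * N * np.log(N) ** (d - 1) / eps) ** (1.0 / d)))
    return d * M * card(N, d - 1)              # upper bound for the union

for d in (2, 3):
    a = sum(1.0 / j for j in range(1, d + 1))  # alpha_d
    b = d - a                                  # beta_d
    for N in (2 ** 8, 2 ** 12, 2 ** 16):
        print(d, N, card(N, d) / (N ** a * np.log(N) ** b))
\end{verbatim}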
\subsection{Some historical remarks and an application to Remez inequalities}
It is well known (see Subsection 6.1 for a detailed discussion) that
$$
\mathcal T(\Pi(N)) \in \mathcal M(C(d)N^d,\infty),\quad \Pi(N):= \{\mathbf k\in\mathbb Z^d: |k_j| \le N, j=1,\dots,d\}.
$$
In particular, this implies that
$$
\mathcal T(N) \in \mathcal M(C(d)N^d,\infty).
$$
Theorem \ref{thm-2-1} shows that we can improve the above relation to
$$
\mathcal T(N) \in \mathcal M(C(d)N^{\alpha_d}(\log N)^{\beta_d},\infty).
$$
Note that $\alpha_d \asymp \ln d$.
A trivial lower bound for $m$ in the inclusion $\mathcal T(N) \in \mathcal M(m,\infty)$ is $m\ge \dim(\mathcal T(N)) \asymp N(\log N)^{d-1}$. The following nontrivial lower bound was obtained in \cite{KT3} -- \cite{KaTe03}.
\begin{Theorem}\label{C2.5.1} Let a set $W\subset \mathbb T^2$ have the following property:
$$
\forall t \in \mathcal T(N) \qquad \|t\|_\infty \le b(\log N)^\alpha \max_{\mathbf w\in W} |t(\mathbf w)|
$$
with some $0\le \alpha <1/2$. Then
$$
|W| \ge C_1 N \log N e^{C_2b^{-2}(\log N)^{1-2\alpha}}.
$$
\end{Theorem}
In particular, Theorem \ref{C2.5.1} with $\alpha=0$ implies that a necessary condition on $m$ for the inclusion $\mathcal T(N) \in \mathcal M(m,\infty)$ is $m\ge \dim(\mathcal T(N)) N^c$ with a positive absolute constant $c$.
An operator $T_N$ with the following properties was constructed in \cite{T93}.
The operator $T_N$ has the form
$$
T_N(f) = \sum_{j=1}^m f(\mathbf x^j) \psi_j(\mathbf x),\quad m\le c(d)N (\log N)^{d-1},\quad \psi_j \in \mathcal T(N 2^d)
$$
and
\begin{equation}\label{3.21n}
T_N(f) =f,\quad f\in \mathcal T(N),
\end{equation}
\begin{equation}\label{3.22n}
\|T_N\|_{L_\infty\to L_\infty} \asymp (\log N)^{d-1}.
\end{equation}
The points $\{\mathbf x^j\}$ form the Smolyak net.
Properties (\ref{3.21n}) and (\ref{3.22n}) imply that all $f\in\mathcal T(N)$ satisfy the discretization inequality (see \cite{KaTe03})
\[
\|f\|_\infty \le C(d)(\log N)^{d-1} \max_{1\le j\le m} |f(\mathbf x^j)|.
\]
The general form of the Remez inequality for a function $f\in X_N\subset L_p(\Omega)$, $0<p\le \infty$, reads as follows:
for any Lebesgue measurable $B\subset \Omega$ with the measure $\operatorname{meas}(B)\le b<1$
$$
\|f\|_{L_p(\Omega)} \le C (N, \operatorname{meas}(B), p)\|f\|_{L_p(\Omega\setminus B)}.
$$
Applications of Remez type inequalities include many different results in approximation theory and harmonic analysis; see \cite{remezpaper} for more details and references.
For trigonometric polynomials $\mathcal T(Q)$ with frequencies from $Q\subset \mathbb Z^d$ (here $X_N=\mathcal T(Q)$ and $\Omega=\mathbb T^d$) the following result is well known \cite{remez}.
For $d\ge 1$ and
$$
Q=\Pi(\mathbf N):= \{\mathbf k \in \mathbb Z^d : |k_j| \le N_j, \quad j=1,\dots,d\},
$$
where $N_j\in{\mathbb{N}}$, for any $p\in (0,\infty]$,
we have that $C (N, \operatorname{meas}(B), p)= C (d, p)$ provided that $$\operatorname{meas}(B)\le \frac{C}{\prod_{j=1}^dN_j}.$$
The investigation of the Remez-type inequalities for the
hyperbolic cross trigonometric polynomials
with
\begin{equation}\label{qq} Q =\Gamma(N)=\Bigl\{\mathbf{k}\in\mathbb Z^d:\ \ \ \prod_{j=1}^d \max\{ |k_j|, 1\} \leq N\Bigr\}
\end{equation}
has been recently initiated in \cite{remezpaper}.
It turns out that for such polynomials
the problem to obtain the optimal Remez inequalities
has different solutions when $p<\infty$ and $p=\infty$.
If $p<\infty$, then $C (N, \operatorname{meas}(B), p)= C (d, p)$ provided that $$\operatorname{meas}(B)\le \frac{C}{N}.$$
The case $p=\infty$ was also studied in \cite{remezpaper}.
\begin{Theorem}\label{T3.1}
There exist two positive constants $C_1(d)$ and $C_2(d)$ such that for any set $B\subset \mathbb T^d$ of normalized measure $$\operatorname{meas}(B)\le \frac{C_2(d)}{N(\log N)^{d-1}}$$ and for any
$f\in \mathcal T(Q)$, where $Q$ is given by (\ref{qq}), we have
\begin{equation}\label{qqq}\|f\|_\infty \le C_1(d)(\log N)^{d-1} \sup_{{\mathbf u}\in \mathbb T^d \setminus B} |f({\mathbf u})|.
\end{equation}\end{Theorem}
It is worth mentioning that this result is sharp with respect to the logarithmic factor.
This is because
the following statement is false (see \cite{remezpaper}).
{\it There exist $\delta>0$, $A$, $c$, and $C$ such that for any $f\in \mathcal T(N)$ and any set $B\subset \mathbb T^d$ of measure
$\operatorname{meas}(B) \le (cN(\log N)^A)^{-1}$ the Remez-type inequality holds
$$\|f\|_\infty \le C(\log N)^{(d-1)(1-\delta)} \sup_{{\mathbf u}\in \mathbb T^d\setminus B} |f({\mathbf u})|.
$$}
Let us now give a nontrivial Remez inequality with no logarithmic factor in (\ref{qqq}).
It follows from \cite[Th.~2.4]{remezpaper} that the discretization inequality implies the Remez inequality in $L_\infty$.
Together with Theorem \ref{thm-2-1} this implies
\begin{Theorem}\label{thm-remez}
Let $d\ge 2$, $\alpha_d=\sum_{j=1}^d \frac 1j$, and
$\beta_d=d-\alpha_d$.
There exist two positive constants $C_1(d)$ and $C_2(d)$ such that for any set $B\subset \mathbb T^d$ of normalized measure $$\operatorname{meas}(B)\le \frac{C_2(d)}{N^{\alpha_d}(\log N)^{\beta_d}}$$ and for any
$f\in \mathcal T(Q)$, where $Q$ is given by (\ref{qq}), we have
$$\|f\|_\infty \le C_1(d) \sup_{{\mathbf u}\in \mathbb T^d \setminus B} |f({\mathbf u})|.
$$
\end{Theorem}
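A simple univariate experiment illustrates the Remez-type inequalities discussed here. The sketch below removes an arc $B$ of normalized measure $c/N$ and compares $\|f\|_\infty$ with the supremum over the complement, for random $f\in\mathcal T(N)$ with $d=1$; random coefficients are of course not the extremal ones, so this only illustrates the boundedness of the Remez constant, and all parameter values are our illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Npol, c = 32, 0.5
x = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
k = np.arange(-Npol, Npol + 1)
E = np.exp(1j * np.outer(x, k))             # evaluation matrix, degree Npol
ratios = []
for _ in range(100):
    coef = rng.standard_normal(2 * Npol + 1) \
        + 1j * rng.standard_normal(2 * Npol + 1)
    f = np.abs(E @ coef)
    b = rng.uniform(0.0, 2.0 * np.pi)       # center of the removed arc B
    dist = np.abs(x - b)
    dist = np.minimum(dist, 2.0 * np.pi - dist)
    mask = dist > np.pi * c / Npol          # meas(B) = c/Npol (normalized)
    ratios.append(f.max() / f[mask].max())
print(max(ratios))   # the Remez constant stays modest for meas(B) ~ c/N
\end{verbatim}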
\section{Marcinkiewicz-type inequality for general trigonometric polynomials for $q=\infty$}
\label{ungen}
In this section we present results from \cite{KT168} and \cite{KT169}. More specifically, we
demonstrate how to obtain the first part of Theorem \ref{ITmain}.
\subsection{Small ball inequality}
\label{A}
In this section we consider special subspaces of the univariate trigonometric polynomials. For $n\in {\mathbb{N}}$ let $\mathcal K :=\{k_j\}_{j=n}^{2n-1}$ be a finite set of $n$ natural numbers such that $k_{j+1}>k_j$, $j=n,\dots,2n-2$. For $\nu\in{\mathbb{N}}$ define
$$
\mathcal T(\mathcal K,\nu) := \left\{f\, :\, f=\sum_{j=n}^{2n-1} p_j(x) e^{ik_jx}\right\},
$$
where $p_j\in \mathcal T(\nu):=\mathcal T([-\nu,\nu])$, $j=n,\dots, 2n-1$, $n=1,2,\dots$.
We prove some results for a set $\mathcal K$ and a number $\nu$ satisfying the following condition.
{\bf Condition L.} Suppose that all $k_j$, $j=n,\dots,2n-1$, are divisible by $k_n$ and that there exists a number $b>1$ such that $k_{j+1}\ge bk_j$, $j=n,\dots,2n-2$. Moreover, there is a constant $K$ such that we have $\nu\le (b-1)k_n/3$ and $\nu n \le Kk_n$.
\begin{Theorem}\label{AT1} Suppose that the pair $\mathcal K$, $\nu$ satisfies Condition L. Then there exists a constant $C=C(K,b)$, which may depend only on $K$ and $b$, such that for any
\begin{equation}\label{A1}
f=\sum_{j=n}^{2n-1} p_j(x) e^{ik_jx}
\end{equation}
we have for all $x\in [0,2\pi)$
\begin{equation}\label{A2}
\sum_{j=n}^{2n-1} |p_j(x)| \le C\|f\|_\infty.
\end{equation}
\end{Theorem}
\begin{proof} Take a point $x_0\in [0,2\pi)$ and prove (\ref{A2}) for this point. First of all,
considering a convolution of $f(x)$ with $\mathcal V_{\nu}(x) e^{ik_jx}$, where $\mathcal V_N(x)$ is the
de la Vall{\'e}e Poussin kernel (see \cite{VTbookMA}, p. 10), we obtain
\begin{equation}\label{A3}
\|p_j\|_\infty \le C_1\|f\|_\infty =: A,\quad j=n,\dots, 2n-1.
\end{equation}
Second, consider $f(x)$ with $x=x_0+ y/k_n$, $y\in [0,2\pi)$. By the Bernstein inequality
we get from (\ref{A3}) for $y\in [0,2\pi)$
\begin{equation}\label{A4}
|p_j(x_0+y/k_n)-p_j(x_0)| \le A\nu 2\pi/k_n.
\end{equation}
We now use a well known fact from the theory of lacunary series (see \cite{Z}, Ch.6). For a function
\begin{equation}\label{A5}
g(y):=\sum_{j=n}^{2n-1} c_je^{ik_jy}
\end{equation}
we have
\begin{equation}\label{A6}
\sum_{j=n}^{2n-1} |c_j| \le C_2(b)\|g\|_\infty
\end{equation}
with a constant $C_2(b)$, which may only depend on $b$.
Consider a function
$$
g(y) := \sum_{j=n}^{2n-1} p_j(x_0) e^{i(k_jx_0+k_jy/k_n)}.
$$
Then, $\{k_j/k_n\}_{j=n}^{2n-1}$ is a lacunary set and (\ref{A4}), (\ref{A6}) imply
$$
\sum_{j=n}^{2n-1} |p_j(x_0)| \le C_2(b)\|g\|_\infty \le C_2(b)\Bigl(\max_{y\in[0,2\pi)} |f(x_0+y/k_n)| +
nA\nu 2\pi/k_n\Bigr) \le C(K,b)\|f\|_\infty.
$$
This completes the proof of Theorem \ref{AT1}.
\end{proof}
Theorem \ref{AT1} implies immediately the following result.
\begin{Theorem}\label{AT2} Suppose that the pair $\mathcal K$, $\nu$ satisfies Condition L. Then there exists a constant $C=C(K,b)$, which may depend only on $K$ and $b$, such that for any
\begin{equation}\label{A1'}
f=\sum_{j=n}^{2n-1} p_j(x) e^{ik_jx}
\end{equation}
we have
\begin{equation}\label{A2'}
\sum_{j=n}^{2n-1} \|p_j\|_1 \le C\|f\|_\infty.
\end{equation}
\end{Theorem}
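Theorem \ref{AT2} is easy to probe numerically. In the sketch below we take $k_j=4n\,2^{j-n}$ and $\nu=1$, so that Condition L holds with $b=2$ and $\nu n=k_n/4$ (our illustrative choices), and we approximate $\|p_j\|_1$ with respect to the normalized Lebesgue measure on a fine grid; the printed ratio $\sum_j\|p_j\|_1/\|f\|_\infty$ stays bounded over random trials, as (\ref{A2'}) predicts.
\begin{verbatim}
import numpy as np

# Numerical look at Theorem AT2 under Condition L, with k_j = 4n 2^{j-n}
# and nu = 1 (so b = 2, nu*n = k_n/4); these choices are illustrative.
rng = np.random.default_rng(4)
n, nu = 6, 1
ks = 4 * n * 2 ** np.arange(n)             # lacunary frequencies
x = np.linspace(0.0, 2.0 * np.pi, 1 << 16, endpoint=False)
E = np.exp(1j * np.outer(x, np.arange(-nu, nu + 1)))
ratios = []
for _ in range(100):
    f = np.zeros_like(x, dtype=complex)
    s = 0.0
    for kj in ks:
        c = rng.standard_normal(2 * nu + 1) \
            + 1j * rng.standard_normal(2 * nu + 1)
        pj = E @ c                          # p_j in T(nu)
        f += pj * np.exp(1j * kj * x)
        s += np.mean(np.abs(pj))            # ||p_j||_1, normalized measure
    ratios.append(s / np.abs(f).max())
print(max(ratios))                          # bounded, as (A2') predicts
\end{verbatim}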
As above, for a finite set $\Lambda \subset \mathbb Z^d$ we denote by $\mathcal T(\Lambda)$ the set of trigonometric polynomials with frequencies in $\Lambda$. Denote
$$
\mathcal T(\Lambda)_p :=\{f\in\mathcal T(\Lambda): \|f\|_p\le 1\}.
$$
For a finite set $\Lambda$ we assign to each $f = \sum_{\mathbf k\in \Lambda} \hat f(\mathbf k) e^{i(\mathbf k,\mathbf x)}\in \mathcal T(\Lambda)$ a vector
$$
A(f) := \{(\text{Re}(\hat f(\mathbf k)), \text{Im}(\hat f(\mathbf k))),\quad \mathbf k\in \Lambda\} \in {\mathbb R}^{2|\Lambda|}
$$
where $|\Lambda|$ denotes the cardinality of $\Lambda$ and define
$$
B_\Lambda(L_p) := \{A(f) : f\in \mathcal T(\Lambda)_p\}.
$$
The volume estimates of the sets $B_\Lambda(L_p)$ and related questions have been studied in a number of papers: the case $\Lambda=[-n,n]$, $p=\infty$ in \cite{K1}; the case $\Lambda=[-N_1,N_1]\times\cdots\times[-N_d,N_d]$, $p=\infty$ in \cite{TE2}, \cite{T3}. In the case $\Lambda = \Pi(\mathbf N,d) := [-N_1,N_1]\times\cdots\times[-N_d,N_d]$, $\mathbf N:=(N_1,\dots,N_d)$, the following estimates follow from results of \cite{K1}, \cite{TE2}, and \cite{T3} (see also \cite{VTbookMA}, p.333).
\begin{Theorem}\label{AT3} For any $1\le p\le \infty$ we have
$$
(vol(B_{\Pi(\mathbf N,d)}(L_p)))^{(2|\Pi(\mathbf N,d)|)^{-1}} \asymp |\Pi(\mathbf N,d)|^{-1/2},
$$
with constants in $\asymp$ that may depend only on $d$.
\end{Theorem}
Denote
$$
\Lambda(\mathcal K,\nu):= \cup_{j=n}^{2n-1}\Lambda_j(\nu),\qquad \Lambda_j(\nu):= [k_j-\nu,k_j+\nu].
$$
We now estimate from above $vol(B_{\Lambda(\mathcal K,\nu)}(L_\infty))$ under Condition L.
\begin{Theorem}\label{AT4} Suppose that the pair $\mathcal K$, $\nu$ satisfies Condition L. Then
$$
(vol(B_{\Lambda(\mathcal K,\nu)}(L_\infty)))^{(2|\Lambda(\mathcal K,\nu)|)^{-1}} \le C'(n|\Lambda(\mathcal K,\nu)|)^{-1/2}.
$$
\end{Theorem}
\begin{proof} Let $f\in \mathcal T(\Lambda(\mathcal K,\nu))$ and $\|f\|_\infty \le 1$. Then $f$ has the form (\ref{A1'}) and by Theorem \ref{AT2} we get
\begin{equation}\label{A9}
\sum_{j=n}^{2n-1} \|p_j\|_1 \le C.
\end{equation}
Inequality (\ref{A9}) guarantees that there exist numbers $a_j:= [n\|p_j\|_1/C]+1\in {\mathbb{N}}$ such that
\begin{equation}\label{A10}
\|p_j\|_1 \le \frac{Ca_j}{n},\qquad \sum_{j=n}^{2n-1} a_j \le 2n.
\end{equation}
Denote $A(n):= \{\mathbf a=(a_n,\dots,a_{2n-1})\in {\mathbb{N}}^n: a_n+\dots+a_{2n-1}\le 2n\}$. Then
\begin{equation}\label{A11}
vol(B_{\Lambda(\mathcal K,\nu)}(L_\infty)) \le \sum_{\mathbf a \in A(n)} \prod_{j=n}^{2n-1} vol(B_{\Lambda_j(\nu)}(L_1))(Ca_j/n)^{2(2\nu+1)}.
\end{equation}
For $\{a_j\}$ satisfying inequality (\ref{A10}) we obtain
\begin{equation}\label{A12}
a_n\cdots a_{2n-1} \le ((a_n+\cdots+a_{2n-1})/n)^n \le 2^n.
\end{equation}
It is known that
$$
|\{(b_1,\dots,b_n)\in \mathbb Z_+^n: b_1+\cdots+b_n =q\}| = \binom{n+q-1}{q}.
$$
Therefore, for the number of summands in (\ref{A11}) we have
\begin{equation}\label{A13}
|A(n)| \le \sum_{q=0}^n \binom{n+q-1}{q} \le 2^{2n}.
\end{equation}
Combining (\ref{A11}) -- (\ref{A13}), using Theorem \ref{AT3} and taking into account that $|\Lambda(\mathcal K,\nu)|= n(2\nu+1)$ we obtain
\begin{equation}\label{A14}
(vol(B_{\Lambda(\mathcal K,\nu)}(L_\infty)))^{(2|\Lambda(\mathcal K,\nu)|)^{-1}} \le C'(n|\Lambda(\mathcal K,\nu)|)^{-1/2}.
\end{equation}
\end{proof}
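The counting estimate (\ref{A13}) can be confirmed directly: by the hockey-stick identity, $\sum_{q=0}^n\binom{n+q-1}{q}=\binom{2n}{n}\le 4^n$. A short check in Python:
\begin{verbatim}
from math import comb

# Check of the counting bound (A13): |A(n)| <= 4^n.
for n in range(1, 13):
    A = sum(comb(n + q - 1, q) for q in range(n + 1))
    assert A == comb(2 * n, n)   # hockey-stick identity
    assert A <= 4 ** n
    print(n, A, 4 ** n)
\end{verbatim}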
\subsection{Discretization}
\label{B}
The above Theorem \ref{AT4} implies an interesting and surprising result on discretization for polynomials from $\mathcal T(\Lambda(\mathcal K,\nu))$. We derive from Theorem \ref{BT3}, which is a corollary of Theorem \ref{AT4},
that there is no analog of the Marcinkiewicz theorem in $L_\infty$ for polynomials from $\mathcal T(\Lambda(\mathcal K,\nu))$.
We present here some results from \cite{KaTe03} (see also \cite{VTbookMA}, pp. 344--345).
We begin with the following conditional statement.
\begin{Theorem}\label{BT1} Assume that a finite set $\Lambda\subset \mathbb Z^d$ has the following properties:
\begin{equation}\label{B1}
(vol(B_\Lambda(L_\infty)))^{1/D} \le K_1D^{-1/2},\quad D:=2|\Lambda|,
\end{equation}
and a set $\Omega_M =\{\mathbf x^1,\dots,\mathbf x^M\}$ satisfies the condition
\begin{equation}\label{B2}
\forall f \in \mathcal T(\Lambda) \qquad \|f\|_\infty\le K_2\|f\|_{\Omega_M},\quad
\|f\|_{\Omega_M}:=\max_{\mathbf x\in \Omega_M}|f(\mathbf x)|.
\end{equation}
Then there exists an absolute constant $c>0$ such that
$$
M\ge \frac{|\Lambda|}{e}e^{c(K_1K_2)^{-2}}.
$$
\end{Theorem}
\begin{proof} We use the following result of E. Gluskin \cite{G}.
\begin{Theorem}\label{BT2} Let $Y=\{\mathbf y_1,\dots,\mathbf y_S\} \subset {\mathbb R}^D$, $\|\mathbf y_i\|=1$, $i=1,\dots,S$, $S\ge D$, and
$$
W(Y) := \{\mathbf x\in {\mathbb R}^D:|(\mathbf x,\mathbf y_i)| \le 1,\quad i=1,\dots,S\}.
$$
Then
$$
(vol(W(Y)))^{1/D} \ge C(1+\ln (S/D))^{-1/2}.
$$
\end{Theorem}
By our assumption (\ref{B2}) we have
\begin{equation}\label{B1p}
\forall f \in \mathcal T(\Lambda) \qquad \|f\|_\infty\le K_2\|f\|_{\Omega_M}.
\end{equation}
Thus,
\begin{equation}\label{B2p}
\{A(f):f\in \mathcal T(\Lambda),\quad |f(\mathbf x)|\le K_2^{-1},\quad \mathbf x\in \Omega_M\} \subseteq B_\Lambda(L_\infty).
\end{equation}
Further
$$
|f(\mathbf x)|^2 = |\sum_{\mathbf k\in\Lambda}\hat f(\mathbf k) e^{i(\mathbf k,\mathbf x)}|^2 =
$$
$$
\left(\sum_{\mathbf k\in\Lambda}\text{Re}\hat f(\mathbf k) \cos(\mathbf k,\mathbf x) - \text{Im}\hat f(\mathbf k) \sin(\mathbf k,\mathbf x)\right)^2
$$
$$
+\left(\sum_{\mathbf k\in\Lambda}\text{Re}\hat f(\mathbf k) \sin(\mathbf k,\mathbf x) + \text{Im}\hat f(\mathbf k) \cos(\mathbf k,\mathbf x)\right)^2 .
$$
We associate with each point $\mathbf x\in \Omega_M$ two vectors $\mathbf y^1(\mathbf x)$ and $\mathbf y^2(\mathbf x)$ from ${\mathbb R}^D$:
$$
\mathbf y^1(\mathbf x) := \{(\cos(\mathbf k,\mathbf x),-\sin(\mathbf k,\mathbf x)),\quad \mathbf k\in \Lambda\},
$$
$$
\mathbf y^2(\mathbf x) := \{(\sin(\mathbf k,\mathbf x),\cos(\mathbf k,\mathbf x)),\quad \mathbf k\in \Lambda\}.
$$
Then
$$
\|\mathbf y^1(\mathbf x)\|^2 =\|\mathbf y^2(\mathbf x)\|^2 = |\Lambda|
$$
and
$$
|f(\mathbf x)|^2 = (A(f),\mathbf y^1(\mathbf x))^2 +(A(f),\mathbf y^2(\mathbf x))^2.
$$
It is clear that the condition $|f(\mathbf x)| \le K_2^{-1}$ is satisfied if
$$
|(A(f),\mathbf y^i(\mathbf x))| \le 2^{-1/2}K_2^{-1}, \quad i=1,2.
$$
Let now
$$
Y:=\{\mathbf y^i(\mathbf x)/\|\mathbf y^i(\mathbf x)\|:\quad \mathbf x\in \Omega_M,\quad i=1,2\}.
$$
Then $S=2M$ and by Theorem \ref{BT2}
\begin{equation}\label{B3p}
(vol(W(Y)))^{1/D} \ge C(1+\ln (S/D))^{-1/2} .
\end{equation}
Using that the condition
$$
|(A(f),\mathbf y^i(\mathbf x))|\le 1
$$
is equivalent to the condition
$$
|(A(f),\mathbf y^i(\mathbf x)/\|\mathbf y^i(\mathbf x)\|)| \le (D/2)^{-1/2}
$$
we get from (\ref{B2p}) and (\ref{B3p})
$$
(vol(B_\Lambda(L_\infty)))^{1/D} \ge C'D^{-1/2}K_2^{-1}(1+\ln (S/D))^{-1/2}.
$$
We now use our assumption (\ref{B1}) and obtain
\begin{equation}\label{B4p}
K_1K_2 \ge C'(\ln (eM/|\Lambda|))^{-1/2}.
\end{equation}
Rewriting (\ref{B4p}) as $M\ge \frac{|\Lambda|}{e}\,e^{c(K_1K_2)^{-2}}$ with an absolute constant $c>0$, we complete the proof of Theorem \ref{BT1}.
\end{proof}
We now give some corollaries of Theorem \ref{BT1}.
\begin{Theorem}\label{BT3} Assume that a finite set $\Omega\subset \mathbb T$ has
the following property.
\begin{equation}\label{B4}
\forall t\in \mathcal T(\Lambda(\mathcal K,\nu)) \qquad \|t\|_\infty \le K_2\|t\|_{\Omega}.
\end{equation}
Then
$$
|\Omega| \ge \frac{|\Lambda(\mathcal K,\nu)|}{e}e^{Cn/K_2^2}
$$
with an absolute constant $C>0$.
\end{Theorem}
\begin{proof} By Theorem \ref{AT4} we have with $D:=2|\Lambda(\mathcal K,\nu)|$
$$
(vol(B_{\Lambda(\mathcal K,\nu)}(L_\infty)))^{1/D} \le C(n|\Lambda(\mathcal K,\nu)|)^{-1/2} \le Cn^{-1/2}D^{-1/2}
$$
with a constant $C>0$, which may depend on $K$ and $b$. Using Theorem \ref{BT1} we obtain
$$
|\Omega|\ge \frac{|\Lambda(\mathcal K,\nu)|}{e}e^{Cn/K_2^2}.
$$
This proves Theorem \ref{BT3}.
\end{proof}
\begin{Corollary}\label{BC1} Denote $N:= |\Lambda(\mathcal K,\nu)|$. Theorem \ref{BT3} implies for $m\ge N$
\begin{equation}\label{B5}
D(\Lambda(\mathcal K,\nu),m) \ge Cn^{1/2} \left(\ln\frac{em}{N}\right)^{-1/2}.
\end{equation}
In particular, (\ref{B5}) with $\nu=0$ implies for $m\le c'N$ that
\begin{equation}\label{B6}
D(\Lambda(\mathcal K,0),m) \ge C'N^{1/2} \quad \Rightarrow\quad D(N,m)\ge C'N^{1/2}.
\end{equation}
\end{Corollary}
\begin{Remark}\label{BR1} In a particular case $K_2=Bn^\alpha$, $0\le \alpha\le 1/2$, Theorem \ref{BT3} gives
$$
|\Omega|\ge \frac{|\Lambda(\mathcal K,\nu)|}{e}e^{CB^{-2}n^{1-2\alpha}}.
$$
\end{Remark}
\begin{Corollary}\label{BC2} Let a set $\Omega\subset \mathbb T$ have the following property:
$$
\forall t \in \mathcal T(\Lambda(\mathcal K,\nu)) \qquad \|t\|_\infty \le Bn^\alpha\|t\|_{\Omega}
$$
with some $0\le \alpha <1/2$. Then
$$
|\Omega| \ge C_3|\Lambda(\mathcal K,\nu)|e^{CB^{-2}n^{1-2\alpha}} \ge C_1(K,b,B,\alpha)|\Lambda(\mathcal K,\nu)|e^{C_2(K,b,B,\alpha)n^{1-2\alpha}}.
$$
\end{Corollary}
In the case $\nu=0$ we have $|\Lambda(\mathcal K,\nu)|=n$ and, therefore, Corollary \ref{BC2} with $\alpha=0$
claims that for $D(\Lambda(\mathcal K,0),m)\le B$ we need $m\ge Ce^{cn}$ points for discretization.
\section{Universal discretization}
\label{ud}
{\bf Universal discretization problem.} This problem is about finding (or proving the existence of) a set of points which is good, in the sense of the above Marcinkiewicz-type discretization, for a collection of linear subspaces. We formulate it in an explicit form. Let $\mathcal X_N:= \{X_{N,j}\}_{j=1}^k$ be a collection of linear subspaces $X_{N,j}$ of $L_q(\Omega)$, $1\le q \le \infty$. We say that a set $\{\xi^\nu \in \Omega, \nu=1,\dots,m\}$ provides {\it universal discretization} for the collection $\mathcal X_N$ if, in the case $1\le q<\infty$, there are two positive constants $C_i(d,q)$, $i=1,2$, such that for each $j\in\{1,\dots,k\}$ and any $f\in X_{N,j}$ we have
\[
C_1(d,q)\|f\|_q^q \le \frac{1}{m} \sum_{\nu=1}^m |f(\xi^\nu)|^q \le C_2(d,q)\|f\|_q^q.
\]
In the case $q=\infty$ for each $j\in\{1,\dots,k\}$ and any $f\in X_{N,j}$ we have
\begin{equation}\label{1.2u}
C_1(d)\|f\|_\infty \le \max_{1\le\nu\le m} |f(\xi^\nu)| \le \|f\|_\infty.
\end{equation}
\subsection{Anisotropic trigonometric polynomials}
\label{udsub1}
The problem of universal discretization for some special subspaces of the
trigonometric polynomials was studied in \cite{VT160}. Recall that for a finite subset $Q$ of $\mathbb Z^d$,
$$
\mathcal T(Q):= \Bigl\{f: f=\sum_{\mathbf k\in Q}c_\mathbf k e^{i(\mathbf k,\mathbf x)},\ \ c_{\mathbf{k}}\in \mathbb{C},\ \ \mathbf k\in Q\Bigr\}.
$$
For $\mathbf s\in\mathbb Z^d_+$
define
$$
R(\mathbf s) := \{\mathbf k \in \mathbb Z^d : |k_j| < 2^{s_j}, \quad j=1,\dots,d\}.
$$
Clearly, $R(\mathbf s) = \Pi(\mathbf N)$ with $N_j = 2^{s_j}-1$. Consider the collection ${\mathcal C}(n,d):= \{\mathcal T(R(\mathbf s)), \|\mathbf s\|_1=n\}$.
The following result was proved in \cite{VT160}.
\begin{Theorem}\label{udT1} For every $1\le q\le\infty$ there exists a large enough constant $C(d,q)$, which depends only on $d$ and $q$, such that for any $n\in {\mathbb{N}}$ there is a set $\xi:=\{\xi^\nu\}_{\nu=1}^m\subset \mathbb T^d$, with $m\le C(d,q)2^n$ that provides universal discretization in $L_q$ for the collection ${\mathcal C}(n,d)$.
\end{Theorem}
Theorem \ref{udT1}, basically, solves the universal discretization problem for the collection ${\mathcal C}(n,d)$. It provides the upper bound $m\le C(d,q)2^n$ with $2^n$ being of the order of the dimension of each $\mathcal T(R(\mathbf s))$ from the collection ${\mathcal C}(n,d)$.
Obviously, the lower bound for the cardinality of a set, providing the Marcinkiewicz discretization theorem for $\mathcal T(R(\mathbf s))$ with $\|\mathbf s\|_1=n$, is $\ge C(d)2^n$.
It was observed in \cite{VT160} that the universal discretization problem in $L_\infty$
for the collection ${\mathcal C}(n,d)$ is, in a certain sense, equivalent to the minimal {\it dispersion} problem. Let us describe this phenomenon in detail. Let $d\ge 2$ and $[0,1)^d$ be the $d$-dimensional unit cube. For $\mathbf x,\mathbf y \in [0,1)^d$ with $\mathbf x=(x_1,\dots,x_d)$ and $\mathbf y=(y_1,\dots,y_d)$ we write $\mathbf x < \mathbf y$ if this inequality holds coordinate-wise. For $\mathbf x<\mathbf y$ we write $[\mathbf x,\mathbf y)$ for the axis-parallel box $[x_1,y_1)\times\cdots\times[x_d,y_d)$ and define
$$
\mathcal B:= \{[\mathbf x,\mathbf y): \mathbf x,\mathbf y\in [0,1)^d, \mathbf x<\mathbf y\}.
$$
For $n\ge 1$ let $T$ be a set of points in $[0,1)^d$ of cardinality $|T|=n$. The volume of the largest empty (from points of $T$) axis-parallel box, which can be inscribed in $[0,1)^d$, is called the dispersion of $T$:
$$
\text{disp}(T) := \sup_{B\in\mathcal B: B\cap T =\emptyset} vol(B).
$$
An interesting extremal problem is to find (estimate) the minimal dispersion of point sets of fixed cardinality:
$$
\text{disp*}(n,d) := \inf_{T\subset [0,1)^d, |T|=n} \text{disp}(T).
$$
It is known that
\begin{equation}\label{ud1}
\text{disp*}(n,d) \le C^*(d)/n.
\end{equation}
Inequality (\ref{ud1}) with $C^*(d)=2^{d-1}\prod_{i=1}^{d-1}p_i$, where $p_i$ denotes the $i$th prime number, was proved in \cite{DJ} (see also \cite{RT}). The authors of \cite{DJ} used the Halton-Hammersly set of $n$ points (see \cite{Mat}). Inequality (\ref{ud1}) with $C^*(d)=2^{7d+1}$ was proved in
\cite{AHR}. The authors of \cite{AHR}, following G. Larcher, used $(t,r,d)$-nets.
\begin{Definition}\label{udD1} A $(t,r,d)$-net (in base $2$) is a set $T$ of $2^r$ points in
$[0,1)^d$ such that each dyadic box $[(a_1-1)2^{-s_1},a_12^{-s_1})\times\cdots\times[(a_d-1)2^{-s_d},a_d2^{-s_d})$, $1\le a_j\le 2^{s_j}$, $j=1,\dots,d$, of volume $2^{t-r}$ contains exactly $2^t$ points of $T$.
\end{Definition}
A construction of such nets for all $d$ and $t\ge Cd$, $r\ge t$ is given in \cite{NX}.
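Both the brute-force computation of the dispersion and the advantage of net-type constructions over random points are easy to see in small dimension. The following sketch is our illustration: it uses a Hammersley-type set built from the base-2 van der Corput radical inverse, and brute force restricted to candidate boxes with edges at point coordinates, which realizes the supremum over half-open empty boxes.
\begin{verbatim}
import itertools
import numpy as np

# Brute-force dispersion in [0,1)^2; random points vs a Hammersley-type
# set (illustrative sketch, small n only).
def dispersion(T):
    xs = sorted({0.0, 1.0, *(p[0] for p in T)})
    ys = sorted({0.0, 1.0, *(p[1] for p in T)})
    best = 0.0
    for x0, x1 in itertools.combinations(xs, 2):
        for y0, y1 in itertools.combinations(ys, 2):
            if not any(x0 < px < x1 and y0 < py < y1 for px, py in T):
                best = max(best, (x1 - x0) * (y1 - y0))
    return best        # sup over half-open empty boxes

def vdc(i, r):         # van der Corput radical inverse in base 2
    return int(format(i, f"0{r}b")[::-1], 2) / 2 ** r

rng = np.random.default_rng(1)
for r in (4, 5):
    n = 2 ** r
    R = [tuple(p) for p in rng.random((n, 2))]
    H = [((i + 0.5) / n, vdc(i, r)) for i in range(n)]
    print(n, n * dispersion(R), n * dispersion(H))
# n*disp grows (like log n) for random points, stays O(1) for the net,
# consistent with (ud1).
\end{verbatim}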
The following conditional theorem, based on the concept of dispersion, was proved in \cite{VT160}.
\begin{Theorem}\label{udT2} Let a set $T$ with cardinality $|T|= 2^r=:m$ have dispersion
satisfying the bound disp$(T) < C(d)2^{-r}$ with some constant $C(d)$. Then there exists
a constant $c(d)\in {\mathbb{N}}$ such that the set $2\pi T:=\{2\pi\mathbf x: \mathbf x\in T\}$ provides the universal discretization in $L_\infty$ for the collection ${\mathcal C}(n,d)$ with $n=r-c(d)$.
\end{Theorem}
The following Theorem \ref{udT3} (see \cite{VT160}) can be seen as an inverse to Theorem~\ref{udT2}.
\begin{Theorem}\label{udT3} Assume that $T\subset [0,1)^d$ is such that the set $2\pi T$ provides universal discretization in $L_\infty$ for the collection
${\mathcal C}(n,d)$ with a constant $C_1(d)$ (see (\ref{1.2u})). Then there exists a positive constant $C(d)$ such that disp$(T) \le C(d)2^{-n}$.
\end{Theorem}
\subsection{Arbitrary trigonometric polynomials}
\label{udsub2}
For $n\in {\mathbb{N}}$ denote $\Pi_n :=\Pi(\mathbf N)\cap \mathbb Z^d$ with $\mathbf N =(2^{n-1}-1,\dots,2^{n-1}-1)$, where, as above, $\Pi(\mathbf N) := [-N_1,N_1]\times\cdots\times[-N_d,N_d]$.
Then $|\Pi_n| = (2^n-1)^d <2^{dn}$. Let $v\in{\mathbb{N}}$ and $v\le |\Pi_n|$. Consider
$$
{\mathcal S}(v,n):= \{Q\subset \Pi_n : |Q|=v\}.
$$
Then it is easy to see that
\[
|{\mathcal S}(v,n)| =\binom{|\Pi_n|}{v}<2^{dnv}.
\]
We are interested in solving the following problem of universal discretization.
For a given ${\mathcal S}(v,n)$ and $q\in [1,\infty)$ find a condition on $m$ such that there exists a set
$\xi = \{\xi^\nu\}_{\nu=1}^m$ with the property: for any $Q\in {\mathcal S}(v,n)$ and each
$f\in \mathcal T(Q)$ we have
\[
C_1(q,d)\|f\|_q^q \le \frac{1}{m}\sum_{\nu=1}^m |f(\xi^\nu)|^q \le C_2(q,d)\|f\|^q_q.
\]
We present results for $q=2$ and $q=1$.
{\bf The case $q=2$.} We begin with a general construction. Let $X_N = \operatorname{span}(u_1,\dots,u_N)$, where $\{u_j\}_{j=1}^N$ is a real orthonormal system on $\mathbb T^d$.
With each $\mathbf x\in\mathbb T^d$ we associate the matrix $G(\mathbf x) := [u_i(\mathbf x)u_j(\mathbf x)]_{i,j=1}^N$. Clearly, $G(\mathbf x)$ is a symmetric matrix. For a set of points $\xi^k\in \mathbb T^d$, $k=1,\dots,m$, and $f=\sum_{i=1}^N b_iu_i$ we have
$$
\frac{1}{m}\sum_{k=1}^m f(\xi^k)^2 - \int_{\mathbb{T}^d} f(x)^2 d\mu = {\mathbf b}^T\left(\frac{1}{m}\sum_{k=1}^m G(\xi^k)-I\right){\mathbf b},
$$
where ${\mathbf b} = (b_1,\dots,b_N)^T$ is the column vector. Therefore,
\[
\left|\frac{1}{m}\sum_{k=1}^m f(\xi^k)^2 - \int_{\mathbb{T}^d} f(x)^2 d\mu \right| \le
\left\|\frac{1}{m}\sum_{k=1}^m G(\xi^k)-I\right\|\|{\mathbf b}\|_2^2.
\]
We recall that
the system $\{u_j\}_{j=1}^N$ satisfies
{Condition~{\bf E}} (see~\eqref{ud5}) if there exists a constant $t$ such that
$
w(x):=\sum_{i=1}^N u_i(x)^2 \le Nt^2.
$
Let points $\mathbf x^k$, $k=1,\dots,m$, be independent random variables uniformly distributed on $\mathbb T^d$. Then, with the help of deep results on random matrices (see Theorem~\ref{T5.3} or~\cite[Theorem 1.1]{Tro12}), it was proved in~\cite{VT159} that
\[
\mathbb P\left\{\left\|\sum_{k=1}^m (G(\mathbf{x}^k)-I) \right\|\ge m\eta\right\} \le N\exp\left(-\frac{m\eta^2}{ct^2N}\right)
\]
with an absolute constant $c$.
Consider real trigonometric polynomials from the collection ${\mathcal S}(v,n)$. Using the union bound for the probability we get that the probability of the event
$$
\left\|\sum_{k=1}^m (G_Q(\mathbf{x}^k)-I) \right\|\le m\eta \quad\text{for all}\quad Q\in {\mathcal S}(v,n)
$$
is bounded from below by
$$
1- |{\mathcal S}(v,n)|v\exp\left(-\frac{m\eta^2}{cv}\right).
$$
For any fixed $\eta\in(0,1/2]$ the above number is positive provided $m \ge C(d)\eta^{-2}v^2n$ with large enough $C(d)$.
The above argument proves the following result.
\begin{Theorem}\label{udT4} There exist three positive constants $C_i(d)$, $i=1,2,3$,
such that for any $n,v\in{\mathbb{N}}$ and $v\le |\Pi_n|$ there is a set $\xi =\{\xi^\nu\}_{\nu=1}^m \subset \mathbb T^d$, with $m\le C_1(d)v^2n$, which provides universal discretization
in $L_2$ for the collection ${\mathcal S}(v,n)$: for any $f\in \cup_{Q\in {\mathcal S}(v,n)} \mathcal T(Q)$
$$
C_2(d)\|f\|_2^2 \le \frac{1}{m} \sum_{\nu=1}^m |f(\xi^\nu)|^2 \le C_3(d)\|f\|_2^2.
$$
\end{Theorem}
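A quick Monte Carlo experiment shows this universality mechanism at work already in a one-dimensional toy version (all parameters below are illustrative choices of ours): a single random node set discretizes $\|f\|_2^2$ with small relative error simultaneously for many random sparse frequency sets $Q$.
\begin{verbatim}
import numpy as np

# Monte Carlo look at universal L2 discretization (one-dimensional toy
# version in the spirit of Theorem udT4; parameters are illustrative).
rng = np.random.default_rng(2)
n, v, m = 8, 4, 2000                    # frequencies in (-2^{n-1}, 2^{n-1})
x = rng.uniform(0.0, 2.0 * np.pi, m)    # random nodes, fixed once
worst = 0.0
for _ in range(300):                    # random Q in S(v, n)
    Q = rng.choice(np.arange(-2 ** (n - 1) + 1, 2 ** (n - 1)), v,
                   replace=False)
    c = rng.standard_normal(v) + 1j * rng.standard_normal(v)
    f = np.exp(1j * np.outer(x, Q)) @ c
    disc = np.mean(np.abs(f) ** 2)      # (1/m) sum_nu |f(xi^nu)|^2
    true = np.sum(np.abs(c) ** 2)       # ||f||_2^2 by orthonormality
    worst = max(worst, abs(disc / true - 1.0))
print(worst)    # modest deviation: one node set serves all sampled Q
\end{verbatim}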
The classical Marcinkiewicz-type result for $\mathcal T(\Pi_n)$ provides a universal set $\xi$ with cardinality $m\le C(d)2^{dn}$. Thus, Theorem \ref{udT4} gives a non-trivial result
for $v$ satisfying $v^2n\le C(d)2^{dn}$.
{\bf The case $q=1$.} Similar to the case $q=2$, a result on the universal discretization
for the collection ${\mathcal S}(v,n)$ will be derived from a probabilistic result on the Marcinkiewicz-type theorem for $\mathcal T(Q)$, $Q\subset \Pi_n$. However, the probabilistic technique used in the case $q=1$ is different from that used in the case $q=2$. The proof of Theorem 3.1 from \cite{VT159} gives the following result.
\begin{Theorem}\label{udT5} Let points $\mathbf x^j\in\mathbb T^d$, $j=1,\dots,m$, be independently and uniformly distributed on $\mathbb T^d$. There exist positive constants $C_1(d)$, $C_2$, $C_3$, and $\kappa\in (0,1)$ such that for any $Q\subset \Pi_n$ and $m \ge yC_1(d)|Q|n^{7/2}$, $y\ge 1$,
$$
\mathbb P\left\{\text{For any}\quad f\in\mathcal T(Q), \quad C_2\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\mathbf x^j)| \le C_3\|f\|_1\right\} \ge 1-\kappa^y.
$$
\end{Theorem}
Therefore, using the union bound for the probability we obtain the Marcinkiewicz-type inequalities
for all $Q\in {\mathcal S}(v,n)$ with probability at least $1-|{\mathcal S}(v,n)|\kappa^y$. Choosing $y= y(v,n):= C(d)vn$ with large enough $C(d)$ we get
$$
1-|{\mathcal S}(v,n)|\kappa^{y(v,n)}>0.
$$
This argument implies the following result on universality in $L_1$.
\begin{Theorem}\label{udT6} There exist three positive constants $C_1(d)$, $C_2$, $C_3$,
such that for any $n,v\in{\mathbb{N}}$ and $v\le |\Pi_n|$ there is a set $\xi =\{\xi^\nu\}_{\nu=1}^m \subset \mathbb T^d$, with $m\le C_1(d)v^2n^{9/2}$, which provides universal discretization
in $L_1$ for the collection ${\mathcal S}(v,n)$: for any $f\in \cup_{Q\in {\mathcal S}(v,n)} \mathcal T(Q)$
$$
C_2\|f\|_1 \le \frac{1}{m} \sum_{\nu=1}^m |f(\xi^\nu)| \le C_3\|f\|_1.
$$
\end{Theorem}
The classical Marcinkiewicz-type result for $\mathcal T(\Pi_n)$ provides a universal set $\xi$ with cardinality $m\le C(d)2^{dn}$. Thus, Theorem \ref{udT6} gives a non-trivial result
for $v$ satisfying $v^2n^{9/2}\le C(d)2^{dn}$.
\section{Open problems}
\label{OP}
We collect a number of open problems in this section. Probably, some of them are
rather simple and others are very difficult. By listing these problems we want to illustrate
that there are many interesting and important directions to explore.
\subsection{Exact}
Results of Section \ref{Ex} solve the problem of exact weighted discretization.
The problem of exact discretization, that is the problem with equal weights $1/m$,
is still open.
\begin{description}
\item[Open problem 1.]Find necessary and sufficient conditions on $X_N$ for
$X_N \in \mathcal M(c(d)N^2,2,0)$.
\end{description}
Theorem 4.4 from \cite{VT158} gives the following relation for the trigonometric polynomials $\mathcal T(Q)$ with frequencies from $Q\subset \mathbb Z^d$, satisfying some extra conditions,
\begin{equation}\label{2.2}
\mathcal T(Q) \in \mathcal M(c(d)|Q|^2,2,0).
\end{equation}
Results of Subsection \ref{Ex--} show that (\ref{2.2}) cannot be improved by replacing $|Q|^2$ by a slower-growing function of $|Q|$.
\begin{description}
\item[Open problem 2.]
Does (\ref{2.2}) hold for all $Q$?
\end{description}
\begin{description}
\item[Open problem 3 (conjecture).]
For a real subspace $X_N\subset L_2(\Omega,\mu)$ define
$$
m(X_N,w) := \min\{m: \, X_N \in \mathcal M^w(m,2,0)\}.
$$
Let $m=m(X_N,w)$ and let $\{\xi^\nu\}$, $\{\lambda_\nu\}$, $\nu=1,\dots,m$, be such that for any $f\in X_N$ we have
$$
\int_{\Omega}f^2d\mu = \sum_{\nu=1}^m\lambda_\nu f(\xi^\nu)^2.
$$
Then $\lambda_\nu>0$, $\nu=1,\dots,m$.
\end{description}
\subsection{$\mathcal M(m,q)$, $1\le q\le \infty$}
\label{M}
For the trigonometric polynomials the problem is basically solved in the case $q=2$ (see Theorem \ref{NOUth} above and Theorem 1.1 from \cite{VT158}):
\begin{equation}\label{3.1}
\mathcal T(Q) \in \mathcal M(c(d)|Q|,2).
\end{equation}
\begin{description}
\item[Open problem 4.]
Find conditions (necessary and sufficient) for $\mathcal T(Q) \in \mathcal M(m,q)$ in the case $q\in [1,\infty]\setminus\{2\}$.
\end{description}
Here is a particular case of open problem 4, which is of special interest.
\begin{description}
\item[Open problem 5.]
Solve Open problem 4 for $\mathcal T(Q_n)$ -- the set of trigonometric polynomials with frequencies from a step hyperbolic cross $Q_n$.
\end{description}
A very interesting and very difficult problem is an analog of open problem~4 for general
subspaces $X_N$:
\begin{description}
\item[Open problem 6.]
Find conditions (necessary and sufficient) for $X_N \in \mathcal M(m,q)$ in the case $q\in [1,\infty]$. This problem includes conditions on both $X_N$ and $m$.
\end{description}
All the above problems, especially in the case of general $X_N$, are of interest for
$\mathcal M^w(m,q)$. Open problem 6 contains interesting subproblems. We discuss some of them.
\begin{description}
\item[Open problem 6a.]
Let $\Omega:=[0,1]^d$ be a unit $d$-dimensional cube and $\mu$ be a probability measure on $\Omega$. Take $q\in [1,\infty)$. Is the following statement true? There exists $C(d,q)$ such that for any $N$-dimensional
subspace $X_N \subset L_q(\Omega,\mu)$ we have $X_N\in \mathcal M(C(d,q)N,q)$.
\end{description}
\begin{description}
\item[Open problem 6b.]
Let $\Omega:=[0,1]^d$ be a unit $d$-dimensional cube and $\mu$ be a probability measure on $\Omega$. Take $q\in [1,\infty)$. Is the following statement true? There exists $C(d,q)$ such that for any $N$-dimensional
subspace $X_N \subset L_q(\Omega,\mu)$ we have $X_N\in \mathcal M^w(C(d,q)N,q)$.
\end{description}
It turns out that results for the Marcinkiewicz discretization problems in $L_q$, $1\le q<\infty$ and in $L_\infty$
are different. We demonstrate this phenomenon on the above Open problems~6a and~6b. In analogy with
Open problems~6a and~6b one could formulate the following version of them in the case of $L_\infty$.
\begin{description}
\item[Open problem 6c.]
Let $\Omega:=[0,1]^d$ be a unit $d$-dimensional cube. Is the following statement true? There exists $C(d)$ such that for any $N$-dimensional
subspace $X_N \subset L_\infty(\Omega)$ of continuous functions we have $X_N\in \mathcal M(C(d)N,\infty)$.
\end{description}
Open problem 6c is actually not an open problem: the negative answer follows from the first part of Theorem \ref{ITmain}. Moreover, the answer is negative even if
we restrict ourselves to subspaces $\mathcal T(Q)$ of trigonometric polynomials. The reader can find results in this paper which give
partial progress on Open problems 6a and 6b. The most progress is made in the case $q=2$. In the case when $q=2$ and $\mu$ is a discrete measure concentrated on $\Omega_M=\{x^j\}_{j=1}^M$ with $\mu(x^j)=1/M$, $j=1,\dots,M$, the answer to Open problem 6b is positive. This follows directly from (\ref{C2'}). It is clear that this can be generalized to many other probability measures $\mu$, for instance, to the Lebesgue measure on $\Omega$. It is likely that the answer to Open problem 6b is ``yes''. Probably, the best progress on Open problem 6b for arbitrary $\mu$
is given in Theorem \ref{CT5'}. Certainly, the above two open problems are of interest in the case of trigonometric polynomials as well. We formulate them explicitly.
\begin{description}
\item[Open problem 6at.]
Let $\Omega:=\mathbb T^d$ and $q\in [1,\infty)$. Is the following statement true? There exists $C(d,q)$ such that for any $N$-dimensional
subspace $X_N = \mathcal T(Q)$ we have $X_N\in \mathcal M(C(d,q)N,q)$.
\end{description}
\begin{description}
\item[Open problem 6bt.]
Let $\Omega:=\mathbb T^d$ and $q\in [1,\infty)$. Is the following statement true? There exists $C(d,q)$ such that for any $N$-dimensional
subspace $X_N = \mathcal T(Q)$ we have $X_N\in \mathcal M^w(C(d,q)N,q)$.
\end{description}
Theorem \ref{NOUth} gives a positive answer to Open problems 6at and 6bt in the case $q=2$. In all other cases of $q$ we do not have an answer.
\begin{description}
\item[Open problem 7.]
In the case $q=\infty$ there is the Kashin-Temlyakov phenomenon,
which says that for $\mathcal T(Q_n) \in \mathcal M(m,\infty)$ it is necessary to have $m\ge c(d)|Q_n|^{1+c}$, $c>0$. Is it true that for all dimensions $\mathcal T(Q_n) \in \mathcal M(m,\infty)$ provided $m\ge C(d)|Q_n|^2$?
\end{description}
The following is a weaker form of open problem 7.
\begin{description}
\item[Open problem 8.] Theorem~\ref{thm-2-1} shows
that
$$
\mathcal T(Q_n) \in \mathcal M(C_d2^{n\alpha_d}n^{\beta_d},\infty)
$$
with $\alpha_d \asymp \ln d$.
Does there exist an absolute constant $c$ such that
$$
\mathcal T(Q_n) \in \mathcal M(C_d2^{cn},\infty) ?
$$
\end{description}
Assume that $X_N =\operatorname{span}\{u_1(x),\dots,u_N(x)\}$ where
$\{u_i(x)\}_{i=1}^N$ is a real orthonormal system on $\Omega$. The condition {\bf E} (see (\ref{ud5})) is a typical sufficient condition for some results.
For instance, let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Then it is known (Rudelson for $\Omega_M$, see \cite{VT159} for general $\Omega$) that
\begin{equation}\label{3.3}
X_N \in \mathcal M(CN\log N, 2).
\end{equation}
It would be interesting to understand how important condition {\bf E} is for the Marcinkiewicz-type discretization theorems.
\subsection{$\mathcal M^w(m,q)$, $1\le q\le \infty$}
\label{Mw}
For $q=2$ there is a strong result from \cite{BSS} (see a discussion in Subsection \ref{sec2.1} and at the end of Section 6 of \cite{VT159})
\begin{equation}\label{3.4}
X_N(\Omega_M) \in \mathcal M^w(m,2,\epsilon)\quad \text{provided} \quad m \ge CN\epsilon^{-2}
\end{equation}
with large enough $C$.
\begin{description}
\item[Open problem 9.]
For which $X_N$ do we have different conditions on $m$ for
$X_N\in\mathcal M(m,q)$ and $X_N\in\mathcal M^w(m,q)$?
\end{description}
\subsection{Constructive proofs}
Theorem \ref{gfT1} establishes the following inclusion for even positive integers $q$
$$
X_N \in \mathcal M(M(N,q),q,0).
$$
The proof of Theorem \ref{gfT1} is not constructive. In Subsection \ref{sec2.4} we give a constructive proof of Theorem \ref{gfT1} in case $q=2$.
\begin{description}
\item[Open problem 10.]Give a constructive proof of Theorem \ref{gfT1} for all even positive integers $q$.
\end{description}
We pointed out in Section \ref{survey} that the main technique used for proving the Marcinkiewicz-type discretization theorems is a probabilistic technique.
\begin{description}
\item[Open problem 11.] Give a constructive proof of Theorem \ref{NOUth}.
\item[Open problem 12.] Give a constructive proof of Theorem \ref{T6.1}.
\item[Open problem 13.] Give a constructive proof of Theorem \ref{T5.4}.
\end{description}
\section{Introduction}
The most fascinating aspect of the Casimir effect is that it attributes a physical significance to the vacuum zero-point energy of electromagnetic radiation~\cite{milonni}. This is all the more surprising as electromagnetic zero-point energy seems initially to be a theoretical malfunction, a spurious divergence that should be ignored. An infinite zero-point energy for quantum fields is a simple consequence of their description as a continuum of quantum harmonic oscillators, where all oscillation frequencies may occur~\cite{milonni,loudon}. Casimir theory attributes a real existence to a finite part of the electromagnetic zero-point energy and pressure, a part that is determined by macroscopic objects and that in turn exerts forces on those objects~\cite{lif55,dzy61,LLsp2,dalvit,simpson}. Experimental confirmation of Casimir forces allows us to take seriously an electromagnetic vacuum state whose energy density and pressure have a spatial and frequency structure that is determined by the electromagnetic susceptibilities of macroscopic materials. But the detailed structure of the electromagnetic vacuum state is not usually described in Casimir calculations because only the total vacuum energy, or total vacuum pressure at material boundaries, is required to find the forces. Moreover the sum over all frequency contributions to the force is invariably carried out as a sum over imaginary frequencies, which obscures the spectrum of the vacuum state since each imaginary-frequency component has contributions from all real frequencies. Here we study the electromagnetic vacuum state in the simplest non-trivial case and show that individual modes of light can have zero-point energies that are damped (decreased) by coupling to macroscopic materials. These damped vacuum states of light are an interesting addition to the single-mode states routinely discussed in the quantum-optics textbooks~\cite{loudon}.
Most treatments of Casimir energy are focussed on the total energy rather than on the energy of individual modes~\cite{mil14,gra03,kli15}. Moreover idealized boundary conditions are often assumed~\cite{mil14,gra03}. In~\cite{for88,ell08} the Casimir energy and stress of individual frequencies was studied but only by making use of idealized material properties that violate Kramers-Kronig relations. Here we consider dielectric functions that exhibit dispersion and absorption consistent with Kramers-Kronig relations, as is required for accurate theoretical predictions of the Casimir effect.
Casimir energy quantifies forces between objects through its derivative with respect to the separation distances of the objects. The Casimir force is also determined by the zero-point electromagnetic stress tensor, which is the most common method for computing the force~\cite{lif55,dzy61,LLsp2}. It is important to note that the Casimir energy can be calculated separately from the stress tensor, by means of the conserved quantity associated with time-translation symmetry (for non-moving materials)~\cite{phi11}. The Casimir energy obtained in this manner gives a force that agrees with that obtained from the stress tensor~\cite{phi11}, but part of the energy does not contribute to the force. This last fact is due to a self-energy contribution from material inhomogeneities (including sharp boundaries) that does not cause a force between separated objects. A complete description of Casimir energy must include such self-energy contributions, as they are necessarily present if one adopts the standard view that time-translation symmetry gives energy through Noether's theorem. The role of self-energy contributions has been recognised in discussions of the gravitational effects of Casimir energy~\cite{mil14}. Here we will consider the zero-point energy of individual (real) frequencies, including self-energy contributions. A regularization of the zero-point energy is required to obtain the physical Casimir energy as the total zero-point energy always diverges. As we consider realistic materials that obey Kramers-Kronig relations, the regularization is the standard one employed in the prediction of Casimir forces through the stress tensor~\cite{lif55,dzy61,LLsp2}.
Casimir forces are intimately connected to thermal radiation~\cite{lif55,dzy61,LLsp2} and our approach here is similar to investigations of the effect of material boundaries on the thermal spectrum~\cite{wil06,jou05}. Also closely related are studies of the spatial variation in the local electromagnetic density of states caused by materials, an effect that can be probed by spontaneous emission~\cite{pur46,dre68,bar98,Novotny}. In those investigations, it is the properties of light outside the materials, including close to the boundaries, that are relevant. Our interest here however is in the energy density of each mode throughout space, which crucially includes the regions inside the materials.
We are also motivated by the developing subject of nanomechanical systems~\cite{oco10,teu11,cha11,poo12,gro13}. The physics of such systems may be modelled as quantum oscillators that are damped by coupling to reservoirs representing the environment. Although the link to nanomechanical systems is not often made, the quantum theory of macroscopic electromagnetism is also a theory of quantum damped harmonic oscillators and can be formulated exactly as a quantized theory of light coupled to a reservoir~\cite{bha06,khe06,sut07,amo08,khe10,phi10}. Results for the behavior of quantum light modes in macroscopic electromagnetism are therefore instructive for studies of nanomechanical systems.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=9cm]{figure1.pdf}
\caption{A block of material with electric permittivity $\varepsilon(\omega)$ and magnetic permeability $\mu(\omega)$. The block extends infinitely in the $y$ and $z$ directions and has boundaries at $x=0$ and $x=L$. The surrounding region is vacuum, written as the vacuum limit $\varepsilon_2(\omega)\to 1$ of another material. We consider light linearly polarized in the $y$-direction and propagating in the $x$-direction perpendicular to the block faces at $x=0$ and $x=L$.}
\label{fig:block}
\end{center}
\end{figure}
\section{Set-up}
We consider a block with electric permittivity $\varepsilon(\omega)$ and magnetic permeability $\mu(\omega)$ (see Fig.~\ref{fig:block}). The block has boundaries at $x=0$ and $x=L$ and is surrounded by vacuum, but for technical reasons (see below) the vacuum region is represented as the vacuum limit $\varepsilon_2(\omega)\to 1$ of a different dielectric. Our interest is in the effect of this block on the quantum vacuum state of light, and to simplify the analysis as much as possible we consider only light modes linearly polarized in the $y$-direction and propagating in the $x$-direction. The set-up is therefore essentially one-dimensional and we treat it as such. The input and output modes in this arrangement are analysed in~\cite{gru96}.
The macroscopic Maxwell equations for light interacting with an arbitrary inhomogeneous material whose dielectric functions obey the Kramers-Kronig relations can be formulated exactly as a closed system of electromagnetic fields coupled to a reservoir (see~\cite{phi10}, for example). By quantizing this system and diagonalizing its Hamiltonian~\cite{phi10} one obtains the following expression for the electric-field operator in our case of interest, where the general result is specialized to one polarization and to the case where both propagation and material inhomogeneity are in the $x$-direction:
\begin{eqnarray}
\fl
\hat{E}(x,t)=\sqrt{\frac{\hbar\mu_0}{\pi}}\int_0^\infty \rmd\omega\int_{-\infty}^\infty \rmd x' &\, g(x,x',\omega) \left[-\frac{\omega^2}{c}\sqrt{ \varepsilon_\mathrm{I}(x',\omega)} \hat{C}_\mathrm{e}(x',\omega) \right. \nonumber \\
& \left. +\rmi \omega \partial_{x'}\left( \sqrt{- \kappa_\mathrm{I}(x',\omega)} \hat{C}_\mathrm{m}(x',\omega) \right) \right] e^{-\rmi \omega t} +\mathrm{h.c.} \label{Eop}
\end{eqnarray}
Here the notation is as follows: $\varepsilon_\mathrm{I}(x,\omega)$ is the imaginary part of the inhomogeneous permittivity $\varepsilon(x,\omega)$, $\kappa_\mathrm{I}(x,\omega)$ is the imaginary part of $1/\mu(x,\omega)$, h.c.\ means hermitian conjugate, $\hat{C}_\mathrm{e}(x,\omega)$ and $\hat{C}_\mathrm{m}(x,\omega)$ are the annihilation operators for the normal modes that diagonalize the Hamiltonian and obey
\begin{equation}
\left[ \hat{C}_\mathrm{\lambda}(x,\omega),\hat{C}^\dagger_\mathrm{\lambda'}(x',\omega') \right] =\delta_{ \lambda\lambda'} \delta(x-x') \delta(\omega-\omega'), \qquad \lambda,\lambda'=\{\mathrm{e},\mathrm{m}\},
\end{equation}
and $g(x,x',\omega)$ is the retarded Green function satisfying
\begin{equation} \label{geqn}
\left(\partial_{x}\frac{1}{\mu(x,\omega)}\partial_{x}+k_0^2\varepsilon(x,\omega)\right)g(x,x',\omega)= \delta(x-x') , \qquad k_0=\frac{\omega}{c} .
\end{equation}
In the arrangement of Fig.~\ref{fig:block} the material is piecewise homogeneous and the solution of (\ref{geqn}) is well known (see~\cite{gru96} for example). The Green function determines the electric-field operator (\ref{Eop}) and the magnetic-field operator (which points in the $z$-direction) is found from $\hat{B}(x,\omega)=-\rmi \partial_{x}\hat{E}(x,\omega)/\omega$. Because of the imaginary parts of the dielectric functions in (\ref{Eop}), it is necessary to consider vacuum as the limit of a permittivity going to $1$; this limit is to be taken in final observable quantities, or earlier if this will not affect the final results.
In the absence of materials, the field operators for one-dimensional propagation can be decomposed into independent left- and right-going modes of each frequency. In our case, reflections from the block boundaries couple the left- and right-going modes. Nevertheless, in the vacuum regions outside the block the field operators and their algebra still take a relatively simple form (see also~\cite{gru96}). In the region to the right of the block ($x>L$) the electric-field operator (\ref{Eop}) can be written
\begin{equation}
\fl
\hat{E}(x,t)=\int_0^\infty \rmd\omega \sqrt{\frac{\hbar\omega}{4\pi c\varepsilon_0}} \left[ e^{\rmi \sqrt{\varepsilon_2(\omega)}\, k_0 x} \hat{a}_+(x,\omega) + e^{-\rmi \sqrt{\varepsilon_2(\omega)}\, k_0 x} \hat{a}_-(x,\omega) \right] e^{-\rmi \omega t} +\mathrm{h.c.} \label{Eopr}
\end{equation}
The operators for the right-going ($+$) and left-going ($-$) modes in (\ref{Eopr}) have the following simple algebra in the vacuum limit $\varepsilon_2(\omega)\to 1$ outside the block, where we denote this limit by an arrow:
\begin{eqnarray}
\fl
\left[ \hat{a}_+(x,\omega) , \hat{a}^\dagger_+(x,\omega') \right] \to \delta(\omega-\omega'), \qquad \left[ \hat{a}_-(x,\omega) , \hat{a}^\dagger_-(x,\omega') \right] \to \delta(\omega-\omega'), \label{aal1} \\
\fl
\left[ \hat{a}_+(x,\omega) , \hat{a}_-(x,\omega') \right] \to 0, \qquad \left[ \hat{a}_+(x,\omega) , \hat{a}^\dagger_-(x,\omega') \right] \to \rmi \zeta(\omega,L) \delta(\omega-\omega'), \label{aal2} \\
\zeta(\omega,L) = \frac{\rmi e^{-2 \rmi k_0 L } (\varepsilon -\mu )}{ 2 \rmi n \cot \left( n
k_0L \right) +\varepsilon+\mu}. \label{gamma}
\end{eqnarray}
Here $n=\sqrt{\varepsilon\mu}$ is the refractive index of the block and we have suppressed the frequency dependence of $\varepsilon$, $\mu$ and $n$. The failure of all the right ($+$) operators to commute with all the left ($-$) operators in (\ref{aal1}) and (\ref{aal2}) shows the coupling of these modes due to reflection from the block boundaries. Independent modes with commuting operators can be defined as follows, where $\hat{b}_1$ and $\hat{b}_2$ are the associated annihilation operators:
\begin{eqnarray}
\hat{b}_1(x,\omega)=\frac{1}{2}\left[ \left( \delta_+ + \delta_- \right) e^{-\rmi\phi_\zeta/2} \hat{a}_+(x,\omega) + \rmi \left( \delta_+ - \delta_- \right) e^{\rmi\phi_\zeta/2} \hat{a}_-(x,\omega) \right] , \\
\hat{b}_2(x,\omega)=\frac{1}{2}\left[ \left( \delta_+ + \delta_- \right) e^{\rmi\phi_\zeta/2} \hat{a}_-(x,\omega) - \rmi \left( \delta_+ - \delta_- \right) e^{-\rmi\phi_\zeta/2} \hat{a}_+(x,\omega) \right] , \\
\delta_+=\left(1+|\zeta|\right)^{-1/2}, \qquad \delta_-=\left(1-|\zeta|\right)^{-1/2},
\end{eqnarray}
where the function (\ref{gamma}) is decomposed as $\zeta=|\zeta|e^{\rmi \phi_\zeta}$. Inside the block the algebra of field operators is more complicated but is straightforwardly obtained from (\ref{Eop}) and the Green function $g(x,x',\omega)$.
\section{Vacuum-state field uncertainties}
The electromagnetic vacuum state is defined by $\hat{C}_\mathrm{e}(x,\omega)|0\rangle=0$ and $\hat{C}_\mathrm{m}(x,\omega)|0\rangle=0$, which imply $\hat{a}_\pm(x,\omega)|0\rangle=0$ for the mode operators in (\ref{Eopr}). It is then straightforward to calculate the zero-point uncertainties of the electric ($\Delta E(x)$) and magnetic ($\Delta B(x)$) fields in the region to the right of the block ($x>L$) using the algebra (\ref{aal1}) and (\ref{aal2}):
\begin{eqnarray}
\left[\Delta E(x)\right]^2=\langle0|[\hat{E}(x,t)]^2|0\rangle=\frac{\hbar c\mu_0}{2\pi} \int_0^\infty \rmd\omega \, \omega\left[1-|\zeta| \sin\left(2k_0x+\phi_\zeta\right)\right], \label{DEvac} \\
\left[\Delta B(x)\right]^2=\langle0|[\hat{B}(x,t)]^2|0\rangle=\frac{\hbar \mu_0}{2\pi c} \int_0^\infty \rmd\omega \, \omega\left[1+|\zeta| \sin\left(2k_0x+\phi_\zeta\right)\right]. \label{DBvac}
\end{eqnarray}
These field uncertainties oscillate with distance from the block boundary. The electric-field uncertainty (\ref{DEvac}) can be probed by measuring spontaneous emission at different positions (though in general modes for all angles of incidence on the block need to be included)~\cite{pur46,dre68,bar98,Novotny}.
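As a numerical illustration (not taken from the text: the single-resonance Lorentz permittivity, its parameters, the block thickness and the frequency cutoff are all our assumptions), the sketch below evaluates $\zeta(\omega,L)$ of (\ref{gamma}) and the position-dependent part of (\ref{DEvac}); a crude quadrature with a finite cutoff suffices here because $\varepsilon(\omega)\to1$ at high frequency, so $|\zeta|\to0$, and the integrand oscillates.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sketch: zeta(omega, L) of (gamma) and the oscillating part of the
# electric-field variance (DEvac) outside the block, for a Lorentz
# permittivity with mu = 1. All model parameters are illustrative.
c = 299792458.0

def eps(w, w0=2.0e15, wp=1.5e15, g=1.0e13):
    return 1.0 + wp ** 2 / (w0 ** 2 - w ** 2 - 1j * g * w)

def zeta(w, L):
    e, mu = eps(w), 1.0
    n, k0 = np.sqrt(e * mu), w / c
    return 1j * np.exp(-2j * k0 * L) * (e - mu) / (
        2j * n / np.tan(n * k0 * L) + e + mu)

L = 1.0e-6
for w in (5e14, 1e15, 2e15, 4e15):
    print(f"|zeta({w:.1e}, L)| = {abs(zeta(w, L)):.3f}")
# the printed values stay below 1 for these parameters, as needed for
# the commutators (aal1)-(aal2)

def dE2_osc(x):
    # x-dependent part of (DEvac) in units of hbar c mu_0 / (2 pi),
    # computed up to a finite cutoff (the oscillatory tail is small)
    f = lambda w: -w * abs(zeta(w, L)) * np.sin(2 * w * x / c
                                                + np.angle(zeta(w, L)))
    return quad(f, 1e12, 2e16, limit=5000)[0]

for x in (1.1e-6, 1.5e-6, 3.0e-6):
    print(x, dE2_osc(x))   # oscillations wash out away from the boundary
\end{verbatim}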
The vacuum-state field uncertainties $\Delta E(x)$ and $\Delta B(x)$ for a general inhomogeneous block have a simple expression in terms of the Green function:
\begin{eqnarray}
\left[\Delta E(x)\right]^2=\langle0|[\hat{E}(x,t)]^2|0\rangle=-\frac{\hbar \mu_0}{\pi} \mathrm{Im} \int_0^\infty \rmd\omega \, \omega^2 g(x,x,\omega) , \label{DEgen} \\
\left[\Delta B(x)\right]^2=\langle0|[\hat{B}(x,t)]^2|0\rangle=-\frac{\hbar \mu_0}{\pi} \mathrm{Im} \int_0^\infty \rmd\omega \, \lim_{x' \to x} \partial_{x}\partial_{x'} g(x,x',\omega), \label{DBgen}
\end{eqnarray}
which follow from (\ref{Eop}) and (\ref{geqn}) (see~\cite{phi11} for example). These expressions are exactly what would be expected as the extension to the quantum vacuum of the classical results of Rytov for the field variances of thermal radiation~\cite{rytov,lif55,LLsp2}. The results (\ref{DEvac}) and (\ref{DBvac}) can also be obtained from the general expressions (\ref{DEgen}) and (\ref{DBgen}), as the Green function to the right of the block ($x>L$, $x'>L$) with $\varepsilon_2(\omega)\to 1$ is
\begin{equation}
g(x,x',\omega)=-\frac{\rmi}{2 k_0}e^{\rmi k_0|x-x'|}+\frac{1}{2k_0}\zeta e^{\rmi k_0(x+x')},
\end{equation}
where $\zeta$ is again (\ref{gamma}).
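As a numerical consistency check, inserting this Green function into (\ref{DEgen}) reproduces, frequency by frequency, the spectral integrand of (\ref{DEvac}). A minimal sketch (with $\hbar=\mu_0=c=1$ and, as an assumption, the Drude parameters of the Examples section) is:
\begin{verbatim}
import numpy as np

w, L, x = 2.0, 5.068, 7.0                       # eV, eV^-1; x > L (outside)
eps = 1.0 - 8.45**2 / (w**2 + 1j * 0.047 * w)   # Drude permittivity
n = np.sqrt(eps)
z = 1j * np.exp(-2j * w * L) * (eps - 1) / (
    2j * n / np.tan(n * w * L) + eps + 1)       # zeta, eq. (gamma)
g = -1j / (2 * w) + z / (2 * w) * np.exp(2j * w * x)   # g(x, x, w)
lhs = -(w**2 / np.pi) * np.imag(g)              # integrand of (DEgen)
rhs = (w / (2 * np.pi)) * (1 - abs(z) * np.sin(2 * w * x + np.angle(z)))
print(lhs, rhs)                                 # the two values agree
\end{verbatim}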
To obtain the field uncertainties inside the block we can use (\ref{DEgen}) and (\ref{DBgen}) together with the Green function in that region ($0<x<L$, $0<x'<L$). The Green function here is ($\varepsilon_2(\omega)\to 1$):
\begin{eqnarray}
\fl
g(x,x',\omega)= -\frac{\rmi\mu}{2n k_0} \left[ e^{\rmi n k_0|x-x'|}+\alpha(\omega,L)\left( e^{\rmi nk_0(x-x')} +e^{-\rmi nk_0(x-x')}\right) \right. \nonumber \\
\qquad\quad \left.+\beta(\omega,L) \left( e^{\rmi nk_0(x+x')} + e^{-\rmi nk_0(x+x'-2L)}\right) \right], \\
\alpha(\omega,L)=\left[ \left(\frac{n+\mu}{n-\mu}\right)^2 e^{-2\rmi n k_0 L} -1\right]^{-1} , \label{alpha} \\
\beta(\omega,L)=\frac{\varepsilon-\mu}{2n+\varepsilon+\mu+(2n-\varepsilon-\mu)e^{2\rmi n k_0 L} }, \label{beta}
\end{eqnarray}
giving the following field uncertainties inside the block:
\begin{eqnarray}
\left[\Delta E(x)\right]^2=\frac{\hbar c \mu_0}{2\pi} \mathrm{Im} \int_0^\infty \rmd\omega \, \frac{\rmi\mu \omega }{n}\left[ 1+2\alpha +\beta\left(e^{2\rmi nk_0x} +e^{-2\rmi nk_0(x-L)} \right) \right], \label{DEmat} \\
\left[\Delta B(x)\right]^2=\frac{\hbar \mu_0}{2\pi c} \mathrm{Im} \int_0^\infty \rmd\omega \, \rmi\mu n \omega \left[ 1+2\alpha -\beta\left(e^{2\rmi nk_0x} +e^{-2\rmi nk_0(x-L)} \right) \right]. \label{DBmat}
\end{eqnarray}
\section{Vacuum-state energy} \label{sec:energy}
When macroscopic electromagnetism is formulated as a closed system of electromagnetic fields coupled to a reservoir, the total energy-density operator follows from time-translation symmetry and Noether's theorem~\cite{phi11}. The total energy has ``free'' electromagnetic and reservoir terms, and also terms involving the coupling functions~\cite{phi11}. For the ground state and thermal equilibrium, the electromagnetic part of the energy can be defined using the Hamiltonian of mean force~\cite{kir35,jar04,Campisi09,sub12,HSAL11,phi15}. The resulting electromagnetic energy is the total energy minus the energy the reservoir would have if it alone were present (for the vacuum state this is also the expectation value of the Hamiltonian of mean force). This definition of energy gives the same answer for Casimir forces as obtained by using the electromagnetic stress tensor~\cite{phi11}. In our case of one polarization and propagation in the $x$-direction, the electromagnetic energy per unit length $\rho(x)$ of the vacuum state in an arbitrary inhomogeneous material is~\cite{phi11}
\begin{equation}
\fl
\rho(x)=\frac{\varepsilon_0}{2} \mathrm{Im} \int_0^\infty \rmd\omega \left\{ \frac{\rmd[\omega\varepsilon(x,\omega)]}{\rmd \omega}\left[\Delta E(x)\right]^2 +\frac{c^2}{[\mu(x,\omega)]^2} \frac{\rmd[\omega\mu(x,\omega)]}{\rmd \omega}\left[\Delta B(x)\right]^2 \right\}, \label{rhogen}
\end{equation}
where $\Delta E(x)$ and $\Delta B(x)$ are the vacuum-state field uncertainties given by (\ref{DEgen}) and (\ref{DBgen}).
Using (\ref{rhogen}), we obtain the zero-point electromagnetic energy per unit length in the presence of the block. In the region to the right of the block ($x>L$), (\ref{DEvac}) and (\ref{DBvac}) together with $\varepsilon(x,\omega)=\mu(x,\omega)=1$ in (\ref{rhogen}) give
\begin{equation} \label{rhoempty}
\rho(x)=\frac{\hbar}{2\pi c} \mathrm{Im} \int_0^\infty \rmd\omega \,\rmi \omega,
\end{equation}
which is exactly the (diverging) zero-point energy per unit length in the absence of the block. This is also the result in the region to the left of the block ($x<0$). Thus, although the electric and magnetic field uncertainties outside the block are affected by its presence, this material dependence cancels out in the energy per unit length outside the block. It should be noted that this cancellation only happens for modes propagating perpendicular to the block boundaries. As our interest is in how the block alters the electromagnetic zero-point energy (of $x$-propagating modes), we see that only the energy inside the block matters. Inserting (\ref{DEmat}) and (\ref{DBmat}) in (\ref{rhogen}) and putting $\varepsilon(x,\omega)=\varepsilon(\omega)$, $\mu(x,\omega)=\mu(\omega)$, we find the energy per unit length inside the block ($0<x<L$). Integration from $x=0$ to $x=L$ then gives the zero-point energy $\mathcal{E}$ in the block, and the result is
\begin{eqnarray}
\mathcal{E} = \int_0^\infty \rmd\omega \, W(\omega), \label{entot} \\
W(\omega)=\frac{\hbar \omega}{2\pi c} \mathrm{Im} \left[ \rmi L(1+2\alpha)\frac{\rmd (\omega n)}{\rmd \omega}
+c \beta\left(e^{2\rmi nk_0L}-1\right)\frac{\mu}{n} \frac{\rmd}{\rmd \omega} \left(\frac{n}{\mu} \right) \right], \label{W}
\end{eqnarray}
where $\alpha$ and $\beta$ are again (\ref{alpha}) and (\ref{beta}) and we have defined the spectral energy $W(\omega)$, i.e. the energy per unit frequency. The zero-point energy (\ref{entot}) of $x$-propagating modes contained in the block diverges because of the integral over mode frequencies, whereas the spectral energy (\ref{W}) is finite for each frequency. (If we integrated over all angles of incidence to the boundaries then the spectral energy would diverge; this divergence is due to the fact that spatial dispersion is not included in our dielectric functions~\cite{hor14}.)
For the modes considered here our results show that, for each frequency, the block causes a finite change in the zero-point energy. The only change in the zero-point energy occurs inside the block, where modes with a small frequency spread $\Delta\omega$ around $\omega$ have a zero-point energy $\Delta\omega W(\omega)$, whereas without the block their energy in this region would be $\Delta\omega\hbar \omega L/(2\pi c)$ (the spectral energy per unit length of empty space is, from (\ref{rhoempty}), $\hbar \omega /(2\pi c)$). Thus $W(\omega)-\hbar \omega L/(2\pi c)$ quantifies the (finite) change in zero-point energy for each frequency. This finite result at each frequency is obtained without having to regularize any diverging quantities. Because the total zero-point energy diverges (even for the limited set of modes we consider), regularization is needed to compute finite Casimir energies and forces (see below). Regularization changes the zero-point energy attributed to each mode, but the change in zero-point energy caused by the block is finite for each mode both before and after regularization.
The zero-point energy (\ref{entot}) contains a diverging part $\mathcal{E}_\mathrm{bulk}$ that is independent of the block boundaries, i.e. it is $L$ times the energy per unit length in an infinite material of refractive index $n$:
\begin{equation} \label{enbul}
\mathcal{E}_\mathrm{bulk}=\frac{\hbar L}{2\pi c} \mathrm{Im} \int_0^\infty \rmd\omega \, \rmi\omega \frac{\rmd (\omega n)}{\rmd \omega} .
\end{equation}
(The zero-point energy per unit length in an infinite homogeneous material differs from (\ref{rhoempty}) by having a factor $\rmd (\omega n)/\rmd \omega$ in the integrand, as follows from (\ref{rhogen}).) In Casimir theory, all such diverging bulk quantities are dropped, and only the finite quantities that remain are taken as physically significant~\cite{lif55,dzy61,LLsp2}. The rationale for this regularization procedure is that only macroscopic material inhomogeneities (smooth or discontinuous changes in the dielectric functions) give physically meaningful contributions to electromagnetic zero-point quantities~\cite{LLsp2}. A notable feature of the regularization is that \emph{different} infinite quantities are dropped at different points of space (the diverging bulk contribution at any point depends on the values of the dielectric functions at that point)~\cite{LLsp2}. Thus, to obtain the Casimir energy $\mathcal{E}_C$ of the modes considered here, in the presence of the block, we drop the purely bulk quantity (\ref{rhoempty}) outside the block and remove (\ref{enbul}) from (\ref{entot}), i.e. $\mathcal{E}_C=\mathcal{E}-\mathcal{E}_\mathrm{bulk}$, so that
\begin{eqnarray}
\mathcal{E}_C = \int_0^\infty \rmd\omega \, W_C(\omega), \label{encas} \\
W_C(\omega)=\frac{\hbar \omega}{2\pi c} \mathrm{Im} \left[ 2\rmi L\alpha \frac{\rmd (\omega n)}{\rmd \omega}
+c \beta\left(e^{2\rmi nk_0L}-1\right)\frac{\mu}{n} \frac{\rmd}{\rmd \omega} \left(\frac{n}{\mu} \right) \right], \label{WC}
\end{eqnarray}
where we have defined the Casimir spectral energy $W_C(\omega)$, the Casimir energy per unit frequency. As noted in the Introduction, the regularization employed in obtaining (\ref{WC}) is the standard one used to predict experimentally measured Casimir forces~\cite{lif55,dzy61,LLsp2}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=9cm]{figure2a.pdf}
\vspace{2mm}
\includegraphics[width=9cm]{figure2b.pdf}
\caption{Electromagnetic zero-point energy per unit frequency (\ref{W}) inside a metal of length $L$ (blue curves). The material has permittivity (\ref{osc}) with $\omega_0=0$, $\Omega=8.45\,\mathrm{eV}$ and $\gamma=0.047\,\mathrm{eV}$. The dashed red lines are $\hbar \omega L/(2\pi c)$, the value of $W(\omega)$ in the same spatial region but without the block. In the top plot $L=1\,\mu\mathrm{m}$, in the bottom plot $L=10\,\mu\mathrm{m}$. The zero-point energy is less than the free-space value for frequencies $\omega\lesssim\Omega$.}
\label{fig:gold1}
\end{center}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=9cm]{figure3a.pdf}
\vspace{2mm}
\includegraphics[width=9cm]{figure3b.pdf}
\caption{Casimir energy per unit frequency (\ref{WC}) of the metal blocks described in Fig.~\ref{fig:gold1}. The total Casimir energy (\ref{encas}) is positive for all lengths $L$.}
\label{fig:gold2}
\end{center}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=9cm]{figure4a.pdf}
\vspace{2mm}
\includegraphics[width=9cm]{figure4b.pdf}
\caption{Electromagnetic zero-point energy per unit frequency (\ref{W}) inside a non-metallic dielectric (blue curves) of length $L$. The material has permittivity (\ref{osc}) with $\omega_0=5\,\mathrm{eV}$, $\Omega=8\,\mathrm{eV}$ and $\gamma=0.5\,\mathrm{eV}$. The dashed red lines are $\hbar \omega L/(2\pi c)$, the value of $W(\omega)$ in the same spatial region but with the block replaced by empty space. In the top plot $L=1\,\mu\mathrm{m}$, in the bottom plot $L=10\,\mu\mathrm{m}$.}
\label{fig:diel1}
\end{center}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=9cm]{figure5a.pdf}
\vspace{2mm}
\includegraphics[width=9cm]{figure5b.pdf}
\caption{Casimir energy per unit frequency (\ref{WC}) of the dielectric blocks described in Fig.~\ref{fig:diel1}. The total Casimir energy (\ref{encas}) is positive for all lengths $L$ of the dielectric.}
\label{fig:diel2}
\end{center}
\end{figure}
\section{Examples}
We now investigate the spectral energies $W(\omega)$ and $W_\mathrm{C}(\omega)$ for the cases where the block is a metal or a non-metallic dielectric. In both cases we choose $\mu(\omega)=1$ and a permittivity of the form
\begin{equation} \label{osc}
\varepsilon(\omega)=1-\frac{\Omega^2}{\omega^2-\omega_0^2+\rmi\gamma\omega} .
\end{equation}
We put $\hbar=c=1$, measuring frequency in eV and length in $\mathrm{eV}^{-1}$, so that the spectral energies $W(\omega)$ and $W_\mathrm{C}(\omega)$ are dimensionless.
For a metal we use a Drude-model approximation for gold~\cite{olm12}, namely (\ref{osc}) with $\omega_0=0$, $\Omega=8.45\,\mathrm{eV}$ and $\gamma=0.047\,\mathrm{eV}$. Figure~\ref{fig:gold1} shows the zero-point energy per unit frequency $W(\omega)$ contained in the block, for block lengths $L=1\,\mu\mathrm{m}$ ($5.068\,\mathrm{eV}^{-1}$) and $L=10\,\mu\mathrm{m}$ ($50.68\,\mathrm{eV}^{-1}$), together with the free-space value $\hbar \omega L/(2\pi c)$ of $W(\omega)$ (the spectral energy in the same spatial region without the block). We see that the block lowers the zero-point energies for frequencies less than the plasma frequency ($\omega\lesssim\Omega$), compared to the free-space values. (Recall that spatial regions outside the block do not contribute to changing the zero-point energy of the modes.) Above the plasma frequency the zero-point energies are larger than the free-space values. The Casimir energy per unit frequency $W_C(\omega)$ of the block is shown in Fig.~\ref{fig:gold2}, for block lengths $L=1\,\mu\mathrm{m}$ and $L=10\,\mu\mathrm{m}$. Above the plasma frequency the Casimir energy of the modes oscillates between positive and negative values, these oscillations in frequency becoming more rapid as $L$ increases. The free-space value of $W_C(\omega)$ is of course zero, so negative values of $W_C(\omega)$ correspond to a decrease of the regularized zero-point energy below that of empty space. The total Casimir energy (\ref{encas}) turns out to be positive for all $L$.
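For reference, the spectral energies plotted in Figs.~\ref{fig:gold1} and \ref{fig:gold2} can be evaluated directly from (\ref{W}) and (\ref{WC}). The sketch below is an illustration only; the finite-difference derivative step is an ad hoc choice:
\begin{verbatim}
import numpy as np

def eps(w, w0=0.0, Omega=8.45, gamma=0.047):
    """Permittivity (osc); the Drude metal corresponds to w0 = 0."""
    return 1.0 - Omega**2 / (w**2 - w0**2 + 1j * gamma * w)

def spectral_energies(w, L, dw=1e-6):
    """W and W_C of eqs. (W), (WC); hbar = c = 1 and mu = 1."""
    mu = 1.0
    n, k0 = np.sqrt(eps(w)), w
    alpha = 1.0 / (((n + mu) / (n - mu))**2 * np.exp(-2j*n*k0*L) - 1.0)
    beta = (eps(w) - mu) / (2*n + eps(w) + mu
                            + (2*n - eps(w) - mu) * np.exp(2j*n*k0*L))
    n2 = np.sqrt(eps(w + dw))              # finite-difference derivatives
    dwn = ((w + dw) * n2 - w * n) / dw     # d(w n)/dw
    dnmu = (n2 - n) / dw                   # d(n/mu)/dw, with mu = 1
    common = beta * (np.exp(2j*n*k0*L) - 1.0) * (mu / n) * dnmu
    pref = w / (2.0 * np.pi)
    W = pref * np.imag(1j * L * (1 + 2*alpha) * dwn + common)
    WC = pref * np.imag(2j * L * alpha * dwn + common)
    return W, WC

L = 5.068                                  # 1 micron in eV^-1
for w in (2.0, 8.45, 12.0):
    W, WC = spectral_energies(w, L)
    print(w, W, WC, w * L / (2.0 * np.pi)) # last: free-space value of W
\end{verbatim}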
For a non-metallic dielectric we use (\ref{osc}) with $\omega_0=5\,\mathrm{eV}$, $\Omega=8\,\mathrm{eV}$ and $\gamma=0.5\,\mathrm{eV}$ (these are not chosen to model a specific dielectric). Figures~\ref{fig:diel1} and~\ref{fig:diel2} show results for this dielectric that correspond to Figs.~\ref{fig:gold1} and~\ref{fig:gold2} for the metal. From Fig.~\ref{fig:diel1} we see that the zero-point energy of a range of frequencies is damped below the free-space value. The real part of the permittivity goes through zero at $\omega\approx5\,\mathrm{eV}$ and $\omega\approx10\,\mathrm{eV}$, and the damping of the zero-point energy occurs between these two frequencies. For parameters such that the real part of the permittivity (\ref{osc}) is positive for all frequencies, the decrease of zero-point energy below the free-space value occurs for frequencies around the resonance $\omega\approx\omega_0$. Figure~\ref{fig:diel2} shows that the Casimir energy per unit frequency $W_C(\omega)$ of the block oscillates between positive and negative values. As in the case of the metal, the total Casimir energy (\ref{encas}) is positive for all $L$.
The effect of magnetic permeability and of negative refraction can also be investigated. Again, it is found that zero-point energy is suppressed below the free-space value for some frequencies and increased for others, while the Casimir energy per unit frequency oscillates between positive and negative values with the total Casimir energy being positive.
\section{Conclusions}
In quantum optics the electromagnetic vacuum (zero-point) field uncertainty of a subset of modes is given an experimental meaning through balanced homodyne detection~\cite{loudon} and spontaneous emission~\cite{pur46}. In such considerations the issue of regularization of the total (diverging) zero-point field uncertainty can be avoided, since the zero-point quantities for a single mode, or for a single frequency outside materials, are finite. Here we considered quantum optics in the presence of a block of material, taking full account of dispersion and absorption. We showed that the material decreases the zero-point energy of some modes, while increasing that of others. When the zero-point modes are regularized (in the same manner as is used to predict the Casimir force between real materials), certain modes have negative Casimir energy while others have positive Casimir energy. The total Casimir energy is positive.
The physics considered here is closely connected to that of nanomechanical systems, because both involve quantum damped oscillators. Nanomechanical systems can be modelled using a Hamiltonian that differs from that of macroscopic electromagnetism in having one or more oscillators coupled to the reservoir, instead of the infinite number of oscillators of the electromagnetic field~\cite{phi12,bar15}. In the simplest case of a single quantum damped oscillator, similar modifications of zero-point energy can occur~\cite{phi12}.
\section*{Acknowledgements}
I thank S.A.R. Horsley for helpful discussions.
\section*{References}
\section{Introduction}
\label{sec:introduction}
Pedestrians are vulnerable road users and accounted for 23\% of world traffic deaths in 2018 according to the WHO\citep{RN261}. China naturally has the most pedestrians in the world owing to its large population and developing traffic system, and pedestrian safety has long been a problem there: 26.1\% of Chinese traffic deaths in 2013 were pedestrians, compared with 16.1\% in America\citep{RN180}. This grim result partly stems from the unsafe road behaviors of Chinese pedestrians, such as red-light running; one report (N=31,649) showed that 18.54\% of pedestrians ran red lights in Changsha\citep{RN263}. These behaviors make the Chinese traffic environment very challenging. Fig.~\ref{figure 1} shows a common scene at Chinese crosswalks. \\
\begin{figure}[pos=!t]
\centering
\includegraphics[width=3.5in]{figure/figure1.jpg}
\caption{The challenging Chinese pedestrian environment}
\label{figure 1}
\end{figure}
AVs are deemed promising solutions for safer road transportation in the future\citep{RN94,RN6998}, and China is expected to become one of the largest markets for AVs\citep{RN14}. It is therefore meaningful to analyze the adaptability of AVs to Chinese pedestrians, both to better protect pedestrians and to give corporations and governments useful guidance.\\
Driverless technologies are developing rapidly, and according to the level of self-driving ability, the Society of Automotive Engineers (SAE) divides automation into six levels (L0-L5: from no automation to full automation)\citep{RN288}, corresponding to driving in pedestrian traffic of different difficulty levels. However, the adaptability of driverless technologies to pedestrians is largely unknown. Previous research has paid attention to pedestrian detection\citep{RN127, RN146,Guofa2019Deep}, interaction\citep{RN478}, receptivity\citep{RN74, RN78}, behavior prediction\citep{RN479}, pose estimation\citep{RN480,RN7668}, etc. These studies are related to adaptability but did not directly address it. \\
Similarly, review articles have focused more on technology as well. Pedestrian detection is generally reviewed by summarizing and comparing detection algorithms according to the sensors used \citep{RN470, RN469, RN465, RN468, RN472, RN474}. Additionally, Deb et al. summarized the factors that influence pedestrians' behaviors, public acceptance of fully automated vehicles, and current interfaces for interaction between pedestrians and autonomous vehicles\citep{RN464}. Cao et al. reviewed different methods of modeling crowds of pedestrians\citep{RN466}. Daniela et al. explored the ways pedestrians' intention estimation has been studied, evaluated, and evolved\citep{RN467}. Kardi et al. provided a review of microscopic pedestrian simulation models\citep{RN475}. Sarker et al. reviewed human factors that influence user acceptance of AVs, including user comfort, trust, reliability, and preferences \citep{RN497}. In these reviews, detection methods, influencing factors, and interaction interfaces were carefully summarized, and these are closely related to adaptability. Nevertheless, there is a lack of systematic and direct analyses of the adaptability of AVs to pedestrians. Therefore, such adaptability analyses are valuable and need to be supplemented. \\
We make three contributions. Firstly, this paper is the first to summarize the phenomena of Chinese pedestrians through abundant data and comparison with foreign countries, and to analyze the key safety demands for AVs. Secondly, we conduct a comprehensive literature review of the newest driverless technologies related to pedestrians, including detection, interaction and receptivity. Thirdly, we make the first attempt to assess the adaptability of AVs to Chinese pedestrians and sum up the challenges as well as opportunities. The paper offers guidance and research directions both to driverless researchers who care about pedestrian safety and to researchers from other areas, such as traffic safety and public policy, who want to conduct AV-related research. \\
The rest of the paper is organized as follows. Section 2 describes the methods used for our review and analyses. Section 3 analyzes three typical phenomena of Chinese pedestrians. A comprehensive literature review of driverless technologies for pedestrians, together with adaptability analyses, is provided in Section 4. In Section 5, emerging challenges and opportunities for future pedestrian research are proposed. Section 6 concludes the paper.
\section{Methods}
The objective of this article is to determine the adaptability of AVs to Chinese pedestrians by reviewing the literature. The first question is how to evaluate this adaptability logically. To analyze it step by step, we divided it into two main questions: what are the characteristics of Chinese pedestrians and their technical demands for AVs, and what are the current abilities of AVs when facing pedestrians? Answering the first gives the safety demands of Chinese pedestrians, which serve as our criteria for evaluating adaptability. Then, through the literature review, the current abilities of AVs are summarized. Combining the two answers yields the adaptability of AVs to Chinese urban pedestrians. The methodology is shown in Fig.~\ref{fig.2}.\\
Firstly, pedestrian behavior data was collected from the Chinese government, research articles and websites, as the left side of Fig.~\ref{fig.2} shows. In the search process, we used terms like 'Chinese urban pedestrians', 'pedestrian bad behaviors', 'pedestrian accidents' and 'pedestrian safety' in search engines (Baidu, Baidu Scholar, Google, Google Scholar) and filtered the results by reading the abstracts. From the collected data, several typical behaviors of Chinese pedestrians were identified. Then, based on our technical background in AVs, the influence of these pedestrian behaviors on AVs is carefully discussed. More importantly, technical demands on AVs are proposed to protect pedestrian safety; in our research, these demands serve as the criteria to evaluate adaptability. To make the evaluation concrete, we rate the adaptability using three classes: excellent, OK and bad. \\
Secondly, according to the demands, the corresponding driverless technologies were searched and summarized through scholarly search engines and authoritative websites. In this process, we combined terms like 'autonomous vehicles', 'driverless technologies', 'pedestrians' and 'pedestrian safety' to find professional articles and filtered them according to our demands. The abilities of AVs were then categorized into groups, summarized in tables and figures, and evaluated against the corresponding demands. \textbf{One important caveat} is that the reviewed technologies are the newest research results, and it is hard to tell which automation level they belong to. Therefore, in this article we discuss adaptability on the basis of individual technologies rather than automation levels. The adaptability summary of each individual technology constitutes our analysis of adaptability. \\
Thirdly, during this process we found some challenging but promising problems; we analyze them briefly after the adaptability analyses, to give some guidance to practitioners.\\
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure2.jpg}
\caption{The content structure of this paper}
\label{fig.2}
\end{figure*}
\section{Phenomena \& Analyses}
Pedestrians are vulnerable road users around the world and recognized as the worst victims because they are directly exposed to the impact of traffic crashes compared to vehicle passengers\citep{RN282,RN261}.
In developed countries, pedestrians are relatively safer. According to Road Trauma Australia: 2018 Statistical Summary\citep{RN269}, the deaths-to-injuries ratio of Australian pedestrians is 4\%. In America, 5,977 pedestrians were killed and 71,000 injured in 2017 according to road traffic statistics\citep{RN267}, while in England roughly one pedestrian accident in 50 is fatal\citep{RN278}. However, as the biggest developing country, China has more severe pedestrian safety problems. According to the National Bureau of Statistics of China\citep{RN454}, the deaths-to-accidents ratio of pedestrians is about 50\%, far worse than that of car passengers and cyclists, as Table \ref{table 1} shows; in other words, one person dies in every two pedestrian accidents. Therefore, protecting pedestrians is an essential problem of Chinese traffic safety. \\
\begin{table*}[]
\centering
\caption{The ratio of deaths to accidents of Chinese traffic participants\citep{RN454}}
\label{table 1}
\begin{tabular}{c|ccccccccc} \hline
Traffic participants & data & 2017 & 2016 & 2015 & 2014 & 2013 & 2012 & 2011 & 2010 \\\hline
\multirow{3}{*}{All} & Accidents & 203049 & 212846 & 187781 & 196812 & 198394 & 204196 & 210812 & 219521 \\
& Deaths & 63772 & 63093 & 58022 & 58523 & 58539 & 59997 & 62387 & 65225 \\
& \textbf{Deaths/accidents} & \textbf{0.31} & \textbf{0.30} & \textbf{0.31} & \textbf{0.30} & \textbf{0.30} & \textbf{0.30} & \textbf{0.29} & \textbf{0.30}\\\hline
\multirow{3}{*}{Cyclists} & Accidents & 1576 & 1460 & 1369 & 1393 & 1304 & 1433 & 1522 & 1978 \\
& Deaths & 350 & 341 & 304 & 289 & 300 & 279 & 315 & 447 \\
& \textbf{Deaths/accidents} & \textbf{0.22} & \textbf{0.23} & \textbf{0.22} & \textbf{0.21} & \textbf{0.23} & \textbf{0.19} & \textbf{0.21} & \textbf{0.23} \\\hline
\multirow{3}{*}{Car passengers} & Accidents & 139412 & 145820 & 129155 & 136386 & 138113 & 142995 & 145338 & 148367 \\
& Deaths & 46817 & 45990 & 42388 & 42847 & 42927 & 44679 & 46100 & 46878 \\
& \textbf{Deaths/accidents} & \textbf{0.34} & \textbf{0.32} & \textbf{0.33} & \textbf{0.31} & \textbf{0.31} & \textbf{0.31} & \textbf{0.31} & \textbf{0.32} \\\hline
\multirow{3}{*}{Pedestrians} & Accidents & 2470 & 2443 & 2137 & 2242 & 2088 & 2063 & 2277 & 2565 \\
& Deaths & 1322 & 1304 & 1192 & 1247 & 1185 & 1075 & 1134 & 1222 \\
& \textbf{Deaths/accidents} & \textbf{0.54} & \textbf{0.53} & \textbf{0.56} & \textbf{0.56} & \textbf{0.57} & \textbf{0.52} & \textbf{0.50} & \textbf{0.48} \\\hline
\end{tabular}
\end{table*}
\begin{table}[]
\centering
\caption{Statistics of red-light running by pedestrians in Hangzhou (the first row gives the stated likelihood of running red lights and the second row the corresponding percentages of pedestrians, N=200) \citep{RN457}}
\label{table 2}
\begin{tabular}{ccccc}\hline
Possibility of running red light & 80\% & 50\% & 10\% & 0\% \\\hline
Percentage & 2.5\% & 17.5\% & 88\% & 1\% \\\hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{The impact of groups on red-light running by pedestrians in Yulin (Single denotes pedestrians running red lights alone and Group denotes pedestrians running red lights together, N=3460)\citep{RN458}}
\label{table 3}
\begin{tabular}{m{3cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}}\hline
Pedestrians state & Obey the rules & Run red lights \\\hline
Single & 76.7\% & 23.3\% \\
Group & 70.2\% & 29.8\% \\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Pedestrians' attitudes toward running red lights in Xi'an (N=425)\citep{RN459}}
\label{table 4}
\begin{tabular}{m{2cm}<{\centering}m{3cm}<{\centering}m{2cm}<{\centering}}\hline
Attitudes & Agree to run red lights & Disagree \\\hline
Percentage & 60\% & 40\% \\\hline
\end{tabular}
\end{table}
However, due to the low average education level and weak awareness of rule compliance, Chinese pedestrians often violate traffic rules and cause accidents, adding complexity to the Chinese driving environment.
The driverless car is a potential solution for improving traffic safety; however, whether AVs are suited to the Chinese pedestrian environment is largely unknown. To assess the adaptability, three typical behaviors of Chinese pedestrians are summarized below from the literature and open databases, and we find that China is among the countries where these behaviors are most serious. After that, the characteristics of Chinese pedestrians are analyzed, and on this basis the key technical demands on AVs are put forward as the evaluation criteria of adaptability. The following subsections analyze the three typical behaviors. \\
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure3phenomenon}
\caption{Three typical bad behaviors of Chinese pedestrians: a), b), c) running red lights; d), e), f) jaywalking; g), h), i) distraction (phone utilization)}
\label{fig.3}
\end{figure*}
\subsection{Red-light running}
Red-light running greatly endangers pedestrian safety. In Lille, France, one third of pedestrians were reported to run red lights in 2015\citep{RN265}. In America, 4.5\% of pedestrian traffic deaths in 2017 happened when pedestrians failed to obey traffic signals, according to national statistics\citep{RN267}. However, red-light running is even more serious in China, where it is nicknamed the Chinese style of road crossing. Tables \ref{table 2}, \ref{table 3}, \ref{table 4} and \ref{table 5} present statistics on red-light running collected in four major Chinese cities. These data show that running red lights is a common behavior at crosswalks, with up to about 70\% of pedestrians reporting having done so. What is more, pedestrians' attitudes toward obeying signal rules are negative, since 60\% of pedestrians approve of running red lights, as Table \ref{table 4} shows. Table \ref{table 3} indicates that the number of pedestrians influences violating behavior: walking in a group increases the likelihood of running red lights, which is related to the psychological phenomenon of conformity.\\
\begin{table*}[]
\centering
\caption{Frequency of running red lights of pedestrians in Shanghai (N=500)\citep{RN456}}
\label{table 5}
\begin{tabular}{m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}}\hline
Behaviors & \multicolumn{3}{c}{Run red lights} & \multicolumn{3}{c}{See others run red lights} \\\hline
Frequency & Usually & Occasionally & Never & Usually & Occasionally & Never \\
Percentage & 12\% & 65\% & 23\% & 67\% & 30\% & 3\% \\\hline
\end{tabular}
\end{table*}
Figs.~\ref{fig.3}a, b and c show common scenes at Chinese crosswalks, illustrating three levels of red-light running according to the number of pedestrians involved. The conflict in these scenes is that cars have to cross the crosswalk while keeping pedestrians safe. When only a few people violate the lights, their bodies, trajectories, tendencies and intent are clear, and cars can slow down to avoid them at some cost in efficiency. To make matters worse, under the influence of conformity, pedestrians violate the lights in groups, as Fig.~\ref{fig.3}b shows, leading to pedestrian occlusion. Occlusion causes misunderstanding of pedestrian intent and wrong detection of pedestrians, possibly resulting in accidents. In rush hours the pedestrian flow may even spill onto the vehicle road; in this case it is difficult for cars to do anything but wait.\\
Analyzing these behaviors, we conclude that detection plays an essential role in red-light running scenarios. Good detection means that cars know the correct positions and features of pedestrians and can predict their intent. Based on detection, cars can decide whether to yield or keep moving when pedestrians run red lights. Additionally, special care can be given to special pedestrians, such as the elderly, the disabled, the distracted and wheelchair users, according to the detected features. In the context of traditional cars, drivers can detect almost all pedestrians at the first level of red-light running thanks to human intelligence; when more pedestrians violate the lights, drivers cannot possibly detect all of them as a result of limited attention and occlusion. AVs can understand the world through multiple sensors, so can they detect pedestrians well in red-light running scenarios? \\
Additionally, intent communication between pedestrians and vehicles is of vital importance in this process. Generally, intent communication covers the moving states of pedestrians and cars and their observation of each other. Through detection, drivers know pedestrian positions and features; however, who should take the right of way still needs to be negotiated, which is handled by communication. In the current driving environment, there already exist nonverbal methods for pedestrians and cars to communicate, such as car lights (headlights, turn signals, rear lights), distance, and the sound of the cars. In addition, drivers play an important role in communication: they can interact with pedestrians using eye contact, gestures and voice to assign the right of way \citep{RN262}, as shown in Fig.~\ref{fig.4}a, and Lee et al.\citep{RN281} concluded that the presence of drivers strengthens the safety-in-numbers effect that protects pedestrians by increasing the interaction between pedestrians and vehicles. More importantly, pedestrians place more trust in communicating with humans than with machines. Compared with traditional cars, there will be no real driver in AVs, as Fig.~\ref{fig.4}b shows, so how do AVs communicate with pedestrians, and will pedestrians trust AVs?
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure4}
\caption{Intent communication methods between pedestrians and vehicles: a) traditional vehicles and pedestrians b) AVs and pedestrians }
\label{fig.4}
\end{figure*}
\subsection{Jaywalking}
Jaywalking means pedestrians crossing a road illegally for convenience where there is no crosswalk or guiding sign, possibly even climbing over barriers. In the UK, nearly 50\% of pedestrians crossed the road ignoring traffic signals according to a YouGov poll in 2013\citep{RN270}. In America, a similar YouGov poll in 2014 found that only 13\% of Americans said they had never jaywalked\citep{RN271}, and American national statistics showed that 21.2\% of pedestrian traffic deaths came from improper crossing of the roadway\citep{RN267}. The Singapore government fined 2,049 jaywalkers in the first quarter of 2013\citep{RN266}. Similarly, jaywalking is very common in China because of unreasonable crosswalk placement and a lack of awareness of obeying rules. From Table \ref{table 6}, we can see that jaywalking is very dangerous, since 68\% of pedestrian accidents are related to it. What is more, the research of \citep{RN460} showed that in urban, suburban and rural areas alike, jaywalking is the biggest cause of pedestrian accidents, according to Fig.~\ref{fig.5}. \\
Figs.~\ref{fig.3}d, e and f show common scenes of jaywalking in urban China, from which we can observe that jaywalking often occurs in areas without signs, even where there is a fence in the middle of the road. In these scenes, the trajectories of pedestrians are irregular and difficult to predict because no signs regulate their movements, and they walk faster than on crosswalks. Drivers, for their part, are partly lacking in concentration because they do not expect pedestrians; additionally, cars travel faster due to the higher speed limits on these sections. As for the road itself, on some special sections, such as curves, cars and pedestrians naturally have difficulty detecting each other. \\
These features contribute to most jaywalking accidents, which can be divided into three types. The first type is failure to detect each other: drivers do not observe pedestrians crossing the road and crash into them directly, because they do not expect pedestrians or because of special road sections. The second is untimely detection: in this case it is difficult for drivers to avoid a collision because of the vast inertia that comes with high speed. The third is poor communication between drivers and pedestrians: at high speed, both have very little time to negotiate the right of way, often leading to miscommunication.\\
Similarly, detection is essential to keep pedestrians safe in jaywalking scenarios. Here cars must detect pedestrians from a long distance, because they travel so fast that drivers need more time and a longer distance to stop. Moreover, sufficient communication is required to convey the moving states. With high speed and long distance, the communication methods between pedestrians and drivers are limited and easily misunderstood, so more communication channels are needed to better transmit the states. Traditional cars struggle with remote communication and small-pedestrian detection. As a high technology, how do AVs interact with remote pedestrians, and can they detect remote, small pedestrians?
\begin{table*}[]
\centering
\caption{The behaviors of pedestrians just before pedestrian accidents took place (N=181)\citep{RN461}}
\label{table 6}
\begin{tabular}{m{1.5cm}<{\centering}m{1cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{1.5cm}<{\centering}m{1.5cm}<{\centering}m{1.5cm}<{\centering}m{1.5cm}<{\centering}}\hline
Behaviors & \textbf{Jaywalk} & Walk on crosswalk in the section & Walk on crosswalk in the intersection & Stand on the road & Work by the road & Walk by the road & others \\\hline
Percentage & \textbf{68\%} & 5.5\% & 7.2\% & 2.8\% & 6.6\% & 7.7\% & 2.2\% \\\hline
\end{tabular}
\end{table*}
\begin{figure}[pos=!t]
\centering
\includegraphics[width=3.5in]{figure/figure5Jaywalking}
\caption{The causes of pedestrian accidents in three areas in Chongqing\citep{RN460}}
\label{fig.5}
\end{figure}
\subsection{Distraction}
Distraction refers to pedestrians doing other things and not paying full attention to the environment while crossing the road; possible causes are reading, eating, using smartphones, talking, and the influence of alcohol, drugs, or medication. As Judith et al. showed\citep{RN275}, phone usage has recently become the major cause of pedestrian distraction; therefore, the following mainly discusses phone usage.
\\
\begin{table*}[]
\centering
\caption{The attitudes of pedestrians in Hefei toward phone use on crosswalks (questionnaire, N=405)\citep{RN388}}
\label{table 7}
\begin{tabular}{cm{5cm}<{\centering}m{5cm}<{\centering}c}\hline
Attitudes & Use phone when crossing the road this week & Involved in accidents because of phone utilization in crosswalk & Whether to be punished \\\hline
Positive&162(40\%) & 22(5.4\%)& 206(51.7\%) \\
Negative & 243(60\%) & 383(94.6\%) & 199(48.3\%) \\\hline
\end{tabular}
\end{table*}
\begin{table*}[]
\centering
\caption{Statistics of phone use on crosswalks in Wuhan (empirical research, N=2901)\citep{RN389}}
\label{table 8}
\begin{tabular}{m{2cm}<{\centering}m{3cm}<{\centering}m{3cm}<{\centering}m{3cm}<{\centering}m{3cm}<{\centering}} \hline
Behaviors & \multicolumn{3}{c}{Use phones when crossing the road} & \begin{tabular}[c]{@{}c@{}}Not use phones \\when crossing the road\end{tabular} \\\hline
\multirow{2}{*}{Percentage} & Watch phone & Have a call & Listen to music & \multirow{2}{*}{88.24\%} \\
& 6.65\% & 2.83\% & 2.28\% & \\\hline
\end{tabular}
\end{table*}
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure6phonestatistics}
\DeclareGraphicsExtensions.
\caption{The phone usage of pedestrians in different scenarios: a) Beijing (empirical research)\citep{RN462} b) Chongqing (questionnaire)\citep{RN463}}
\label{fig.6}
\end{figure*}
In America, a 2013 report by Liberty Mutual Insurance surveyed 1,004 adults and found that 60\% of them had used phones when crossing crosswalks\citep{RN274}. Moreover, the Consumer Product Safety Commission of America estimated that 3.76\% of American injuries are related to phone usage, and Nasar et al. \citep{RN276} argued that the number of injuries caused by phone usage is likely much higher. In England, a 2019 report found that 31.37\% of adolescent pedestrians crossed crosswalks using phones\citep{RN277}. Similarly, phone usage by pedestrians has become an important problem for Chinese pedestrian safety. Tables \ref{table 7} and \ref{table 8} present statistics on phone usage at crosswalks in Hefei and Wuhan respectively, while Fig.~\ref{fig.6} shows data collected in Beijing and Chongqing. From these data, it can be concluded that about 12\% of pedestrians actually use phones when crossing the road. What is more, about half of pedestrians thought they should not be punished for using phones\citep{RN388}, which reflects Chinese pedestrians' weak awareness of obeying traffic rules. As can be seen from Table \ref{table 8}, watching the screen, having a call and listening to music are the three aspects of phone usage, and watching the screen is the most common. During these activities the attention of pedestrians is occupied, so they can hardly perceive their surroundings or negotiate the right of way with vehicles. \\
Ling et al. emphasized that the time needed to cross a road increases and the frequency with which pedestrians look around decreases when they use phones\citep{RN388}. According to one study\citep{RN286}, the proportion of pedestrians killed while using phones increased by more than 3.5\% in 2010. Additionally, pedestrians using phones are more likely to conflict with vehicles than those not using phones. \\
There are two scenarios in which pedestrians use mobile phones at the crosswalk: when the pedestrian light is green and when it is red. In the first case, pedestrians might be relatively safe, since they have the right of way while cars have to wait; however, a driver might run the red light while a phone-using pedestrian is crossing, and a distracted pedestrian cannot then avoid a collision. In the second case, the situation is extremely unsafe for pedestrians. Firstly, cars may not realize pedestrians are using phones and may assume they will behave like attentive people. Furthermore, the distraction hinders intent communication between pedestrians and vehicles. As a result, drivers do not know how to respond to the behavior, and accidents occur. \\
From the analyses above, we conclude that the danger comes from scarce communication and poor detection when pedestrians are using phones, which again highlights the importance of communication and detection. In the context of traditional cars, drivers can partially recognize who is using a phone and slow down to stay safe; nevertheless, intent communication remains a problem. In the context of AVs, can AVs detect phone usage and communicate with pedestrians well?
\subsection{The summary of pedestrians' technical demands for AVs}
Based on the above phenomena and analyses, pedestrian safety requires good detection and good communication between cars and pedestrians. In the following, the technical demands of pedestrians on AVs are summarized, and criteria are proposed to evaluate the ability of each technology. \\
In the detection part, to protect the safety of normal pedestrians (those exhibiting no bad behaviors), vehicles should know the locations and features of pedestrians. Moreover, occluded-pedestrian detection is required to handle occlusion, which often occurs during red-light running. Additionally, remote (small) pedestrian detection is essential to protect jaywalking pedestrians. Last but not least, phone-usage detection is required to distinguish normal pedestrians from distracted ones. To evaluate detection ability, the precision of localization and classification will be reviewed and summarized. \\
In terms of communication, beyond the existing physical transfer of information, more interfaces should be designed for interaction. For example, more external interfaces are needed to convey the intent of cars to pedestrians in occlusion and jaywalking scenarios. As mentioned above, the state of the AV should be well conveyed, and this serves as our evaluation metric. To make it concrete, we evaluate communication ability by the transfer of intent, comprising stopping, accelerating and slowing down, as well as the vehicle's observation of pedestrians. \\
Importantly, there is a hidden factor, receptivity, that plays an important role in the relationship between AVs and pedestrians. If pedestrians cannot accept AVs, communication is meaningless, because pedestrians will not trust the information; furthermore, there will be significant obstacles to bringing AVs to market. Since this article focuses on Chinese pedestrians, we take the receptivity of Chinese pedestrians into account. \\
In summary, the adaptability of AVs depends on the adaptability of communication, detection and receptivity, and hence we evaluate the adaptability of these three technologies to determine the adaptability of AVs; Section 4 discusses this in detail. At the same time, this adaptability relationship will change the current mentality of pedestrians and call for adjustments to government regulations. Based on the adaptability, we briefly analyze the influence that AVs will have on the current mentality of pedestrians and on government regulations.
\section{The Adaptability of Driverless Technologies to Pedestrians}
China has a challenging pedestrian driving environment in which red-light running, jaywalking and phone usage regularly occur. These behaviors lead to crowding, occlusion and poor communication, greatly harming pedestrian safety. From the analysis of these scenarios, detection, communication and receptivity were summed up as the key technical demands for safety. As a promising technology, AVs can perceive the world through multiple sensors, but can they solve these problems, and what are the essential driverless technologies?\\
The adaptability can be divided into two parts: AVs adapting to pedestrians and pedestrians adapting to AVs. Corresponding to the demands, the driverless technologies dedicated to these problems are divided into detection, interaction and receptivity. Detection applies various sensors and algorithms to determine whether pedestrians are present and to find their locations; it shows the ability of AVs to perceive the world and hence describes the adaptability of AVs to pedestrians. According to the demands, normal, occluded, small and distracted pedestrian detection are reviewed and analyzed respectively. Interaction technology uses various external interfaces to convey the state of the AV to pedestrians; it shows the ability of pedestrians to understand AVs and hence describes the adaptability of pedestrians to AVs. Research on receptivity seeks the factors that influence pedestrians' acceptance of AVs; this reflects pedestrians' trust in AVs and also describes the adaptability of pedestrians to AVs. \\
The following subsections examine these technologies by reviewing the literature. After that, we derive the adaptability by combining the analyses of the phenomena with the technologies. To obtain an intuitive result, three levels of description are applied: bad denotes poor research status and terrible adaptability; OK denotes good research status with room to improve; excellent denotes splendid work and good adaptability. \\
Furthermore, after summarizing the adaptability, the influence of these driverless technologies on the mentality of pedestrians and on government regulations is discussed briefly, to offer some guidance to practitioners.
\subsection{AVs adapt to pedestrians: detection}
As discussed above, detection is important to guarantee pedestrian safety. As a promising technology, AVs pay much attention to the detection task of recognizing and localizing pedestrians. However, the occlusion, small size and distraction of pedestrians make this a challenging task in China. To assess the capabilities, the detection task is divided into four subtasks: normal, occluded, small and distracted pedestrian detection.
To better review these abilities, we first summarize the pedestrian datasets.
\subsubsection{Pedestrian dataset}
The popularity of data-based methods has raised the importance of datasets, and the evolution of datasets also reflects the technical trends of pedestrian detection. The authoritative pedestrian datasets are summarized in Table \ref{table 9}. \\
\begin{table*}[]
\centering
\begin{threeparttable}
\caption{Comparison of pedestrian detection datasets}
\label{table 9}
\begin{tabular}{m{2.6cm}<{\centering}m{1cm}<{\centering}m{1.8cm}<{\centering}m{1.8cm}<{\centering}m{1.8cm}<{\centering}m{1cm}<{\centering}m{3cm}<{\centering}m{1cm}<{\centering}} \hline
Dataset & Pedestrian number & \multicolumn{3}{c}{Occlusion labels} & Evaluation Metrics\tnote{1} & Best detector \& performances & Year \\\hline
MIT\citep{RN362} & 924 & \multicolumn{3}{c}{YES} & MR-FPPW & (HOG,100\%) & 2000 \\
USC\citep{RN363} & 816 & \multicolumn{3}{c}{NO} & ROC\tnote{2} & - & 2005 \\
INRIA\citep{RN370} & 1774 & \multicolumn{3}{c}{NO} & MR-FPPW & (F-DNN, 7\%) & 2005 \\
Daimler\citep{RN365} & 4000 & \multicolumn{3}{c}{NO} & ROC & (MLS,28\%) & 2006 \\
CVC\citep{RN364} & 1000 & \multicolumn{3}{c}{NO} & ROC & - & 2007 \\
ETH\citep{RN367} & 12,000 & \multicolumn{3}{c}{NO} & R-FPPI & (RPN+BF, 30\%) & 2007 \\
TUD-Brussels\citep{RN368} & 3247 & \multicolumn{3}{c}{NO} & P-R & (SpatialPooling,47\%) & 2009 \\
\multirowcell{2}{Caltech\\\citep{RN366}\\\citep{ RN372}} & \multirow{2}{*}{350,000} & \multicolumn{3}{c}{YES} & \multirow{2}{*}{MR-FPPI} & \multirow{2}{*}{(AR\-Ped,6\%)} & \multirow{2}{*}{2009} \\\cline{3-5}
& & \begin{tabular}[c]{@{}c@{}}No\\ occlusion(0\%)\end{tabular} & Partial (1\%-35\%) & Heavy (35\% - 80\%) & & & \\
\multirowcell{2}{KITTI\\\citep{RN371}} & \multirow{2}{*}{80,000} & \multicolumn{3}{c}{YES} & \multirow{2}{*}{P-R} & \multirow{2}{*}{(FichaDL, 81.73\%)} & \multirow{2}{*}{2012(2015)} \\\cline{3-5}
& & Easy(0-15\%) & Moderate (15\%-30\%) & Hard(35\%-50\%) & & & \\\hline
\end{tabular}
\begin{tablenotes}
\item[1] MR, FPPW, FPPI, R and P respectively refer to miss rate, false positives per window, false positives per image, recall and precision. The higher, the better for the P-R curve, whereas the lower, the better for the ROC, MR-FPPI, MR-FPPW and R-FPPI curves.
\item[2] The ROC curve takes true positive rate and false negative rate as coordinates.
\end{tablenotes}
\end{threeparttable}
\end{table*}
These datasets are mainly used to evaluate the capabilities of detectors, among which the Caltech and KITTI datasets have been popular in recent years. The number of pedestrians is related to the number of ground-truth annotations and frames, and represents the difficulty of detection; the Caltech dataset ranks first with 350,000 pedestrians across 1,000,000 frames. Additionally, datasets released later tend to contain more pedestrians, because data-based methods dominate the mainstream. Over the years, occlusion labels have increasingly been added to datasets, because occlusion is common in real scenarios; this is why the KITTI and Caltech datasets are currently popular. The occlusion level can be evaluated by the occlusion fraction, calculated as one minus the fraction of visible area over total area. However, KITTI and Caltech use different occlusion labels: KITTI divides scenes into three levels (easy, moderate and hard), whereas Caltech groups them into no, partial and heavy occlusion based on the occlusion ratio, as shown in the third column. Naturally, both datasets take occlusion into account when evaluating detectors; a minimal sketch of the occlusion computation is given below.\\
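For concreteness, the occlusion fraction and the resulting Caltech grouping can be computed from the two bounding boxes in a few lines; the $(x, y, w, h)$ box format and function names below are our own illustrative assumptions:
\begin{verbatim}
def occlusion_fraction(visible, full):
    """1 - visible area / total area, boxes given as (x, y, w, h)."""
    return 1.0 - (visible[2] * visible[3]) / (full[2] * full[3])

def caltech_group(frac):
    """Caltech grouping: no (0%), partial (1%-35%), heavy (35%-80%)."""
    if frac <= 0.0:
        return "no occlusion"
    return "partial" if frac <= 0.35 else "heavy"

# a pedestrian whose lower half is hidden: fraction 0.5 -> heavy
f = occlusion_fraction(visible=(100, 50, 40, 60), full=(100, 50, 40, 120))
print(f, caltech_group(f))
\end{verbatim}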
To emphasize different aspects, the metrics used to evaluate detectors differ. The ROC curve evaluates detectors in terms of true positive rate and false negative rate in the early datasets. The MR-FPPI curve used by Caltech and ETH reflects the relationship between miss rate (MR) and false positives per image (FPPI), with the miss rate at an FPPI of $10^{-1}$ used as the evaluation metric. To emphasize precision and recall, the KITTI dataset uses the P-R curve and takes the area under it as the evaluation metric. Recently, the P-R and MR-FPPI curves have been used more often in the detection task owing to the popularity of the KITTI and Caltech datasets. Each dataset has an optimal detector, shown in the best-detector column; a dash is used where no specific optimal detector could be identified. On early datasets, detectors achieve splendid detection rates, even around 100\%, whereas performance is worse on recent datasets as a result of their increasing difficulty. The popularity of neural-network-based methods is also reflected in the names of the optimal detectors. More importantly, the websites of these datasets rank detection methods to help researchers follow the latest developments, and this is our approach to evaluating the capabilities; a sketch of the two metrics follows. \\
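The two metrics can likewise be sketched briefly; this is an illustration only, assuming the detector outputs have already been aggregated into curve points:
\begin{verbatim}
import numpy as np

def miss_rate_at_fppi(fppi, mr, target=0.1):
    """Read the MR-FPPI curve at a target FPPI (Caltech uses 10^-1)."""
    f = np.asarray(fppi, dtype=float)
    m = np.asarray(mr, dtype=float)
    order = np.argsort(f)
    return float(np.interp(target, f[order], m[order]))

def average_precision(recall, precision):
    """Area under the P-R curve (trapezoidal rule), as used by KITTI."""
    r = np.asarray(recall, dtype=float)
    p = np.asarray(precision, dtype=float)
    order = np.argsort(r)
    r, p = r[order], p[order]
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * (r[1:] - r[:-1])))

print(miss_rate_at_fppi([0.01, 0.1, 1.0], [0.40, 0.15, 0.06]))  # 0.15
print(average_precision([0.0, 0.5, 1.0], [1.0, 0.9, 0.6]))      # 0.85
\end{verbatim}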
Last but not least, the KITTI dataset collected pictures from campus, city, road, residential and person scenes, which is closer to real autonomous-driving scenarios. However, since only urban Chinese pedestrians are discussed in this paper, the Caltech dataset is more suitable here. \\
\subsubsection{Normal pedestrians detection}
Normal pedestrians are pedestrians who exhibit no bad behaviors; we set their occlusion level, size and distraction to normal values. Normal pedestrian detection assesses basic detection capability, because normal pedestrians make up the largest share of pedestrians. To check this ability, the performance of detectors on four popular pedestrian datasets is summarized in Fig.~\ref{fig.7}. The moderate (KITTI) and reasonable (Caltech) occlusion settings are selected to match the demands of normal pedestrians. \\
It is observed that detectors on Caltech are the best, with the lowest miss rate of 6\% among all four datasets. The best detector on INRIA is slightly behind with a 7\% miss rate. However, the optimal detector on the ETH dataset, F-DNN2+SS, achieves a miss rate of only 30\%, possibly because the dataset is so old-fashioned that the latest detectors no longer validate their performance on it. Interestingly, the difference between the Caltech training and testing results is huge, because data-based methods validate on the testing set while feature-based methods use the training set; this also reflects the superiority of data-based methods.\\
On the KITTI dataset, the area under the curve is used to evaluate detection performance, and the best detector achieves an average moderate precision of 81.73\%. Roughly speaking, the best detector on Caltech, with a 6\% miss rate, is better than the best detector on KITTI, with 81.73\% precision. As discussed above, the scenarios in the Caltech dataset better match the setting of Chinese urban pedestrians, so the Caltech result is more reliable for this article. In conclusion, the capability of normal pedestrian detection is excellent, with a miss rate of 6\%, and the driverless technology of normal pedestrian detection is adaptive to urban China.\\
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure8performancesofdetecors}
\caption{The performances of the 10 top detectors on five popular datasets: a) Caltech training \citep{RN366, RN372} b) Caltech testing \citep{RN366, RN372} c) ETH \citep{RN367} d) INRIA \citep{RN370} e) KITTI \citep{RN371} (data summarized as of October 1, 2019)}
\label{fig.7}
\end{figure*}
\subsubsection{Occluded pedestrians detection}
Pedestrian occlusion commonly occurs in driving scenarios, and about 70\% of pedestrians are occluded according to the Caltech dataset \citep{RN366, RN372}, making occlusion one of the toughest problems in pedestrian detection. Since Chinese pedestrians behave badly on crosswalks, occlusion has always been a serious problem. Unlike an unoccluded pedestrian, an occluded pedestrian needs two bounding boxes for joint labeling: one for the visible portion and one for the full extent. Furthermore, occlusion can be divided into two categories: pedestrians occluded by other objects, which often causes missing information and leads to false negatives, and pedestrians occluded by other pedestrians, which introduces interfering information and leads to false positives \citep{RN373}. \\
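As a minimal illustration of how the two-box annotation quantifies occlusion (a sketch under our own conventions; the box format and the example interpretation are assumptions, not the exact Caltech protocol):
\begin{verbatim}
def occlusion_fraction(visible_box, full_box):
    # Boxes are (x, y, w, h); the visible box is assumed to
    # lie inside the full-extent box of the two-box labeling.
    vis_area = visible_box[2] * visible_box[3]
    full_area = full_box[2] * full_box[3]
    return 1.0 - vis_area / full_area

frac = occlusion_fraction((5, 10, 20, 40), (0, 0, 25, 80))
print(frac)            # 0.6 -> a heavily occluded pedestrian
\end{verbatim}
Annotations of this kind let a benchmark bin pedestrians into, for example, partial and heavy occlusion levels by thresholding the occluded fraction.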
To figure out the capabilities, the top 10 detectors on the KITTI and Caltech datasets are summarized by occlusion level in Table \ref{table 10}. The hard level of KITTI and the heavy level of Caltech are the most occluded scenarios.
\begin{table*}[]
\centering
\begin{threeparttable}
\caption{The performances of the 10 top detectors of the KITTI \& Caltech datasets at different occlusion levels (data summarized as of October 1, 2019)}
\label{table 10}
\begin{tabular}{m{2.5cm}<{\centering}m{1cm}<{\centering}m{2.5cm}<{\centering}m{2cm}<{\centering}m{2.5cm}<{\centering}m{1cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}} \hline
\multirow{2}{*}{Ranking \& Methods} & \multicolumn{3}{c}{KITTI dataset\tnote{1}} & \multirow{2}{*}{Ranking \& Methods} & \multicolumn{3}{c}{Caltech dataset\tnote{2}} \\
 & Easy & Moderate/Change & Hard/Change\tnote{3} & & No & Partial/Change & Heavy/Change \\\hline
1. FichaDL & 88.27\% & 81.73\% /6.54\% & 75.29\%/12.98\% & 1. AR-Ped\citep{RN286} & 5.00\% & 12.00\%/7\% & 49.00\%/44\% \\
2. Alibaba-CityBrain & 88.13\% & 80.90\%/7.23\% & 74.08\%/14.05\% & 2. SDS-RCNN\citep{RN391} & 6.00\% & 15.00\%/9\% & 59.00\%/53\% \\
3. ExtAtt & 87.95\% & 79.63\%/8.32\% & 74.78\%/13.17\% & 3. F-DNN+SS\citep{RN392} & 7.00\% & 15.00\%/8\% & 54.00\%/47\% \\
4. DGIST-CellBox & 87.77\% & 79.54\%/8.23\% & 75.70\%/12.07\% & 4. F-DNN\citep{RN392} & 7.00\% & 15.00\%/8\% & 55.00\%/48\% \\
5. DH-ARI & 87.43\% & 78.29\%/8.32\% & 69.91\%/17.52\% & 5. PCN\citep{RN394} & 7.00\% & 16.00\%/9\% & 56.00\%/49\% \\
6. EM-FPS & 84.93\% & 77.61\%/9.14\% & 72.52\%/12.41\% & 6. F-DNN2+SS\citep{RN393} & 6.00\% & 16.00\%/10\% & 40.00\%/34\% \\
7. F-PointNet\citep{RN374} & 87.81\% & 77.25\%/7.32\% & 74.46\%/13.35\% & 7. GDFL\citep{RN395} & 6.00\% & 17.00\%/11\% & 43.00\%/37\% \\
8. TuSimple\citep{RN376, RN375} & 86.78\% & 77.04\%/10.56\% & 72.40\%/14.38\% & 8. ADM\citep{RN397} & 7.00\% & 18.00\%/11\% & 30.00\%/23\% \\
9. THICV-YDM & 87.27\% & 76.91\%/9.74\% & 69.02\%/18.25\% & 9. TLL-TFA\citep{RN398} & 6.00\% & 18.00\%/12\% & 29.00\%/23\% \\
10. Argus-detection-v1 & 83.49\% & 75.51\%/10.36\% & 71.24\%/12.25\% & 10. MS-CNN\citep{RN396} & 8.00\% & 19.00\%/11\% & 60.00\%/52\% \\\hline
\end{tabular}
\begin{tablenotes}
\item[1] KITTI dataset uses P-R curve to evaluate detectors and the higher, the better.
\item[2] Caltech dataset uses MR-FPPI curve to evaluate detectors and the lower, the better.
\item[3] Change denotes the value of the current level minus the value of the first column of each dataset.
\end{tablenotes}
\end{threeparttable}
\end{table*}
It is reasonable that accuracy decreases significantly as the occlusion level increases. The detectors at the easy level of KITTI perform excellently, detecting 88.27\% of pedestrians (the same holds for no occlusion in Caltech). However, performance becomes particularly poor under heavy occlusion in Caltech (hard in KITTI), where 49\% of pedestrians are missed. More importantly, the change rate is large when the difficulty moves from easy to hard, especially in the Caltech dataset with a change of about 40\%, because Caltech has more complicated occlusion patterns. Occlusion is common, and heavy occlusion can occur in jaywalking or crosswalk-crossing scenarios; therefore, both easy- and heavy-occlusion detection matter in the Chinese urban environment. Overall, driverless technologies detect easily occluded pedestrians excellently but perform very poorly at the heavy level. Considering the importance of both, we judge occluded pedestrian detection to be poor overall, owing to the unacceptable results under heavy occlusion.
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure9smallpedestrians}
\caption{Small pedestrian detection at three scales on the Caltech dataset (data summarized as of October 1, 2019)}
\label{fig.8}
\end{figure*}
\subsubsection{Small pedestrians detection}
Small pedestrians are pedestrians who occupy only a small number of pixels as seen by AVs. As analyzed above, jaywalking is very common and has caused a number of accidents in the Chinese environment. In such scenarios vehicles move fast and must detect pedestrians at long distances to take timely measures. Therefore, small pedestrian detection is important for protecting pedestrians. However, it is complicated by low resolution and noisy appearance \citep{RN382}.\\
The authoritative COCO dataset from Microsoft \citep{RN381} defines objects smaller than $32\times32$ pixels as small objects. The KITTI dataset sets a minimum bounding-box height of 25 pixels for the moderate and hard levels, while the Caltech dataset categorizes pedestrians into near, medium, and far scales. Additionally, Dollar et al. \citep{RN366} argued that pedestrians from 30 to 80 pixels are most important for automotive settings, corresponding to roughly 60 m and 20 m from the pedestrian at an urban speed of 15 m/s. Accordingly, we focus on pedestrians in the 30--80 pixel range, which includes small pedestrians in autonomous settings. \\
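Under a pinhole camera model, the link between pixel height and distance can be made explicit (a back-of-envelope sketch; the focal length below is an assumed round value, not one reported by the cited datasets):
\[
h_{\mathrm{px}} = \frac{fH}{Z} \quad\Longrightarrow\quad Z = \frac{fH}{h_{\mathrm{px}}},
\]
so with pedestrian height $H\approx 1.8$ m and an assumed focal length $f=1000$ pixels, $h_{\mathrm{px}}=30$ gives $Z=60$ m and $h_{\mathrm{px}}=80$ gives $Z\approx 22$ m; at an urban speed of 15 m/s, these leave roughly 4 s and 1.5 s to react.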
On the COCO dataset, general small object detection reaches an average precision of 0.343, compared with 0.556 for medium objects and 0.660 for large objects, which illustrates the difficulty of small object detection. For small pedestrian detection, the Caltech researchers summarized detector performance at three scales, shown in Fig. \ref{fig.8}. Reasonably, detectors at the near scale perform best, with the lowest miss rate of nearly 0\%. At the medium scale, however, performance drops dramatically, with the lowest miss rate at 23\%. Worse still, detectors at the far scale have a terrible lowest miss rate of 60\%. As analyzed above, medium-scale (30--80 pixel) pedestrian data best matches Chinese urban pedestrians; the best performer there is the TLL-TFA algorithm with a 23\% miss rate. In summary, driverless technology is acceptable at detecting small pedestrians but has room to improve. \\
\subsubsection{Distracted pedestrians detection}
Distraction usually happens when pedestrians are reading, talking to others, or using phones, with phone use being the main cause. As discussed above, distraction can seriously harm pedestrian safety. In the context of AVs, it is essential to detect pedestrians using phones and to decide how to respond accordingly. \\
Although drivers' phone use has received much research attention, there is a lack of research on pedestrians using phones. Akshay et al. proposed a vision-based framework to classify whether a pedestrian is using a phone, with a best accuracy of 91.20\% \citep{RN383}. They also released a dataset of phone-using pedestrians captured at high resolution. However, little research has built on this study. \\
Yet, surprisingly, phone use is not always bad for AVs. Vehicle-to-pedestrian (V2P) communication is regarded as a method to guarantee active safety, with the smartphone serving as the receiving device. Ahmed et al. \citep{RN127} proposed a V2P-based application that broadcasts alert information, including possible collision distance and time, to both phone users and AVs. Pooya et al. \citep{RN342} proposed a model that uses phones to send warnings to pedestrians near crosswalks. Furthermore, He et al. \citep{RN343} proposed a V2P model using WiFi, Bluetooth, and DSRC technology to establish interaction between AVs and pedestrians. In these cases the smartphone plays an important role in avoiding collisions rather than causing accidents.\\
All in all, one study achieved excellent phone-use detection of 91.20\%, but no other research has been conducted to support and extend it. Moreover, the smartphone has taken on an entirely new role of interacting with pedestrians rather than undermining their safety, and this method performs well. In conclusion, there is no systematic work supporting accurate detection of distracted pedestrians, but the phone has become another way to protect safety. Hence, we judge driverless technology here to be acceptable, with room to improve.
\subsubsection{The adaptability of detection}
To summarize the above conclusions: normal pedestrian detection is excellent; occluded pedestrian detection is bad; distracted pedestrian detection is OK; and small pedestrian detection is OK.
\subsection{Pedestrians adapt to AVs: interaction}
According to Gunnar \citep{RN280}, human-machine interaction (HMI) transfers communication information between human users and machines via human-machine interfaces. In the traffic context, interaction takes place between vehicles and road users such as pedestrians, with human-machine interfaces as the interacting medium. When pedestrians enter the road network, they begin a constant information exchange with the traffic environment \citep{RN68}. Traditionally, drivers interact with pedestrians to negotiate road rights. Under the context of AVs, however, drivers no longer control the car (above Level 4) and cannot convey its state; sometimes the person in the seat may be distracted, leaving pedestrians to infer the state of the AV alone. In this case pedestrians can obtain only limited information by observing speed and distance. There is thus a special need for alternative communication techniques for AVs, which must be able to substitute for the driver's gaze \citep{RN157}. Furthermore, interaction helps boost acceptance and develop proper mental models when AVs first enter the market. More importantly, it is crucial to inform pedestrians when an AV is in a failure state \citep{RN134}.\\
\subsubsection{Human Machine Interfaces}
Recently, a great deal of research has been conducted on interaction, focusing on human-machine interfaces. The interfaces used and the information transferred are summarized in Table \ref{table 11}. Google developed a patent using LED displays to notify pedestrians of the state of AVs \citep{RN451}. Tobias et al. \citep{RN135} used LED strips for interaction, in which lights are laid out in a line and different positions and numbers of lit LEDs convey different information: the middle four lights indicate that autonomous mode is on; lights expanding to the sides indicate that the AV has noticed pedestrians; lights shrinking from the sides to the middle mean the AV is about to start; and all lights lit indicate the AV is resting. \\
Karthik et al. proposed a fused interface to communicate with pedestrians; in their studies, infrastructure and electronic devices can also serve as important interaction interfaces \citep{RN65}. They proposed four prototypes, of which prototype 1 used a speaker coupled with LED strips on the car. To convey intent, the LED strip mounted on the vehicle exhibited four states: solid red lights indicate the pedestrian should not cross because the vehicle will not stop; blinking blue lights mean the vehicle is aware of the pedestrian; green lights moving from left to right indicate the vehicle has fully halted and it is safe to cross; and purple lights moving from right to left mean the vehicle will start soon. \\
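The light vocabulary of prototype 1 is essentially a small lookup table; encoded directly it might read as follows (our paraphrase of the four states, not the authors' implementation):
\begin{verbatim}
LED_STATES = {
    "solid_red":            "do not cross: vehicle will not stop",
    "blinking_blue":        "vehicle is aware of the pedestrian",
    "green_left_to_right":  "vehicle fully halted: safe to cross",
    "purple_right_to_left": "vehicle is about to start moving",
}

def pedestrian_message(pattern):
    # Unknown patterns default to a cautious reading.
    return LED_STATES.get(pattern, "unknown pattern: act cautiously")

print(pedestrian_message("green_left_to_right"))
\end{verbatim}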
Milecia et al. proposed a method fusing a speaker, LED word displays, and strobe lights to convey intent, and the corresponding experiments showed that pedestrians' trust in AVs was greatly improved \citep{RN132}. Furthermore, drawing on human-robot interaction, Nicole et al. proposed three novel methods \citep{RN71}. First, the gaze and gestures of conventional drivers are logged and projected onto windshield screens to interact with pedestrians. Second, elements at the front of the car, such as headlights, the radiator grill, and the side mirrors, are used to make driver-like gestures to inform pedestrians. Third, a robot driver is used to imitate the eye contact and gestures of human drivers. \\
In the experiments of Rahimian et al. \citep{RN342}, pedestrians crossing the road received an auditory alarm from the AVs, alerting them not to use phones and to pay attention to the driving environment. However, phones are not always detrimental to the interaction process. Ahmed et al. \citep{RN127} used phones to interact with pedestrians: based on GPS data, the phone application and the AV compute the distance and time to the collision point as well as danger indexes, so the application can warn pedestrians and AVs by displaying a warning message or vibrating. Additionally, He et al. \citep{RN343} proposed a V2P model using WiFi, Bluetooth, and DSRC communication technology to build interaction between AVs and pedestrians.
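A minimal sketch of the kind of computation such a V2P application performs (the function, threshold, and constant-velocity assumption are ours for illustration, not taken from the cited papers):
\begin{verbatim}
import math

def time_to_collision(p_ped, p_av, v_av, v_ped=1.4):
    # p_ped, p_av: (x, y) positions in metres from GPS fixes;
    # v_av: AV speed in m/s towards the pedestrian;
    # v_ped: walking speed, ~1.4 m/s by default.
    dist = math.hypot(p_ped[0] - p_av[0], p_ped[1] - p_av[1])
    closing = v_av + v_ped
    return dist / closing if closing > 0 else math.inf

ttc = time_to_collision((30.0, 4.0), (0.0, 0.0), v_av=10.0)
if ttc < 4.0:   # assumed warning threshold in seconds
    print("warn pedestrian and AV: %.1f s to collision" % ttc)
\end{verbatim}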
Last but not least, it is essential to convey information to pedestrians when an AV fails, so Aaron et al. \citep{RN134} used text on an iPad together with LED strips to send fault information to pedestrians. \\
Through these external interfaces, pedestrians reported knowing more about the state of AVs \citep{RN68, RN69, RN78, RN361, RN65, RN132, RN360} and consequently trusting AVs more \citep{RN357, RN74, RN68, RN69, RN127, RN78, RN132}. To illustrate the interaction process, the interfaces and the information they transfer are partially plotted in Fig. \ref{fig.9}. \\
\subsubsection{Shortages and suggestions on HMI design}
Nevertheless, every interface has its inherent shortcomings. Visual interfaces are the most commonly used but do not work for people with color blindness, visual impairment, or distraction. Auditory interfaces are perceived as commands and are therefore disliked by pedestrians; in addition, pedestrians become confused when there are many AVs and other sounds. Phone vibration is not preferred by pedestrians because other phone functions also cause vibration \citep{RN65}. \\
To tackle these shortcomings, fusing multiple interfaces is a promising way to transfer more information and achieve higher robustness \citep{RN65}; in Karthik's experiments, mixed interfaces scored best. However, this does not mean that the more interfaces are mixed, the better the interaction: information overload occurs when there are too many interfaces \citep{RN65}. In that situation pedestrians tend to check all interfaces before allowing themselves a go-ahead, leading to inefficiency. Researchers should therefore consider possible information overload when fusing different interfaces. Furthermore, interaction needs a standard language governing which interfaces to use and how to use them. As noted above, different researchers adopt different interfaces and different information-transfer methods; in a future with AVs, pedestrians would be confused when interacting with AVs of different brands \citep{RN133}. A standard interaction language is therefore sorely needed to better welcome AVs.\\
\subsubsection{The adaptability of pedestrians-AVs interaction}
In conclusion, equipped with external interfaces, AVs can convey states such as about to start, about to stop, slowing down, AV mode engaged, and pedestrian observed. Furthermore, phone use can trigger warnings, and pedestrians can be informed of AV failure states. Through these channels pedestrians can know the driving state of AVs and decide what to do and how. Compared with traditional interaction methods, the external interfaces of AVs appear richer and more intuitive. Yet these studies still have shortcomings, and more effort should go into creating a uniform interaction language and fusing multiple external interfaces. Overall, we conclude that driverless technology is acceptable at interacting with pedestrians but still has room to improve. Moreover, we boldly predict that AVs could achieve even better interaction than traditional cars by overcoming the current limitations.
\subsection{Pedestrians adapt to AVs: receptivity}
Receptivity was originally defined as the willingness to accept uncertain, unfamiliar, or paradoxical ideas \citep{RN384}. The receptivity of pedestrians to AVs is therefore their willingness to accept AVs, similar to the concepts of trust and acceptance. \\
High pedestrian receptivity is important. On the one hand, as Vahidi concluded, the main obstacle to achieving a place in the market is not only technical issues but also the lack of acceptance of new ideas, which matters for gradually pushing AVs into markets \citep{RN291}. On the other hand, receptive pedestrians are more willing to interact with AVs, pushing the technology to evolve. \\
\subsubsection{Factors that influence receptivity}
Recent research has examined the factors that influence receptivity, summarized in Table \ref{table 12}. Demography (including age and gender) reflects basic personal attributes and is thought to influence receptivity. Shuchisnigdha found that males and younger people tend to trust AVs owing to greater interest in new technologies \citep{RN74}. However, Reig found that demographic variables were not meaningfully related to beliefs and perceptions \citep{RN40}. \\
Moreover, understanding of AVs has a strong relationship with receptivity. Samantha et al. \citep{RN40} found that insufficient understanding of AVs leads to mistrust and made-up explanations for the behaviors of AVs. Monika et al. \citep{RN287} showed that people's understanding of driverless algorithms improves receptivity and that pedestrians now know a little about how AVs conduct their movements,
\clearpage
\onecolumn
\begin{landscape}
\begin{longtable}{m{1.5cm}<{\centering}m{2cm}<{\centering}m{1.5cm}<{\centering}m{4cm}<{\centering}m{4cm}<{\centering}m{6cm}<{\centering}}
\caption{Interacting methods of driverless technologies to communicate with pedestrians}
\label{table 11}\\\hline
Reference & Experimental type & Date & Interfaces & Information & Conclusion \\\hline
Karthik et al.\citep{RN65} & Empirical experiment \& questionnaire & 2018 & 1. Speaker + LED strip 2. Speaker + LED lights (in street) 3. Animated face + phone haptic(pedestrians) 4. Printed hand + phone audio + LED lights (street) & (AVs) about to start; about to stop; fully stop; notice the pedestrians;
& (1) Interfaces help pedestrians attempt to cross; (2) Interfaces can exist in the environment; (3) AVs should use a combination of visual, auditory, and physical interfaces. \\
Milecia et al.\citep{RN132} & Empirical experiment \& simulating experiments & 2017 & LED word display + speakers + strobe lights & (Pedestrians): cross now; stop; wait to cross & (1) Humans react positively and more predictably when the intent of the vehicle is communicated; (2) Pedestrians trust AVs more when there are interfaces or they have prior knowledge of AVs \\
Aaron et al.\citep{RN134} & Empirical experiment \& simulating experiments \& interview & 2018 & LED strobe lights + iPad display & (Pedestrians): please wait; safe to cross & (1) There exists possible confusion of interfaces; (2) Pedestrians want to know the state of AVs rather than what they should do \\
Nicole et al.\citep{RN71} & N/A & 2017 & 1. Windshield screen 2. Head lights, radiator grill, the side mirrors 3. Robot driver & Human-like gaze and gestures & N/A \\
Tobias et al. \citep{RN135} & A Wizard of Oz approach & 2015 & LED light strip &In AV mode; about to yield; about to start; is resting &Interfaces can promote the interaction between pedestrians and vehicles. \\
Ahmed et al. \citep{RN127} & User study \& simulating experiments & 2016 & Smartphones & Time to collision; velocity; distance to collision etc. & External interfaces achieve good performance, high detection rates and user satisfaction \\
Urmson et al. \citep{RN451} & N/A (patent) & 2015 & An electronic sign or lights, a speaker & What AVs are doing or going to do & The interfaces could replace the interaction between pedestrians and drivers \\
Pooya et al.\citep{RN342} & Simulating experiments & 2018 & Smartphones & Warn pedestrians to take care because there are AVs around & Informing pedestrians by phone could improve safety \\
He et al. \citep{RN343} & Simulating experiment \& case study & 2016 & Smartphones &The location of AVs and pedestrians & Bluetooth technology combined with DSRC could be workable for active pedestrian protection.\\
Evelyn et al. \citep{RN69} & Empirical experiment & 2016 & Speaker + LED display & 1. Whether pedestrians’ presence has been perceived by the autonomous vehicle 2. Broadcast the detection result of LiDAR & The audio cues and LED strips are useful for interaction and promote trust. \\
Shuchisnigdha et al. \citep{RN68} & VR experiment & 2018 & Speaker + LED display & 1. AVs are braking 2. Pedestrians are safe to cross & (1) A familiar sign for pedestrians, clear text and a clear verbal message are preferred (2) The inclusion of interfaces increases pedestrians' receptivity \\
Azra et al. \citep{RN133} & N/A & 2018 & LED light strips & In AV or manual mode; about to start; about to yield & External interfaces help to build trust in AVs; interfaces call for standardization \\\hline
\end{longtable}
\end{landscape}
\clearpage
\twocolumn
\begin{figure*}[pos=!t]
\centering
\includegraphics[width=7in]{figure/figure7interactionofAVs}
\caption{External interfaces and transferring information for interaction in driverless technology}
\label{fig.9}
\end{figure*}
agreeing with previous research by Rogers \citep{RN385}. Furthermore, Monika et al. emphasized that usability and trialability bring pedestrians closer to AVs. The experiments of Samantha et al. \citep{RN40} found that the more interest pedestrians have, the more they trust AVs. In effect, usability, trialability, and interest improve receptivity by promoting pedestrians' understanding.\\
Interaction between AVs and pedestrians also promotes receptivity \citep{RN133}. In the context of AVs, interaction is conducted through external interfaces. Samantha et al. \citep{RN40} emphasized the importance of external interfaces for promoting receptivity by conveying the intent and current state of AVs. Saleh et al. highlighted intent communication, calling it the most critical signal for winning trust \citep{RN289}. Ahmed et al. \citep{RN127} adopted a V2P application to warn people and found improved trust toward AVs. Azra et al. discovered that after interacting with AVs once, pedestrians gain a closer understanding of AVs, promoting trust and acceptance. Similarly, pedestrians trusted AVs more in \citep{RN68, RN69, RN132} after interacting with external interfaces. \\
Interestingly, company brands influence receptivity to AVs. Samantha et al. \citep{RN40} found that some pedestrians trust Uber's AVs because of the company's size and engineering quality, while others doubt Uber because of the speed at which it develops AVs. \\
Accidents related to AVs harm receptivity because negative events are more visible than positive ones \citep{RN386}. Small incidents, even with no responsibility on the AV's part, can set off considerable resistance \citep{RN387}. In fact, most accidents should not be blamed on AVs: Favarò et al. \citep{RN292} showed that contributing factors come more from the manoeuvres of other vehicles. \\
\begin{table*}[]
\centering
\caption{Influencing factors of pedestrians’ receptivity to AVs}
\label{table 12}
\begin{tabular}{m{6cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}}\hline
Title & Reference & Method & Year & Factors \\\hline
A Field Study of Pedestrians and Autonomous Vehicles & \citep{RN40} & Questionnaire & 2018 & Understanding \& Ability \& Brand \&Demography \\
Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices & \citep{RN287} & Case study & 2016 & Understanding \\
Trust in AV: An Uncertainty Reduction Model of AV-Pedestrian Interactions & \citep{RN78} & VR & 2018 & Ability \\
Towards Trusted Autonomous Vehicles from Vulnerable Road Users Perspective & \citep{RN183} & Model (trust model) & 2017 & Ability \\
Development and validation of a questionnaire to assess pedestrian receptivity toward fully autonomous vehicles & \citep{RN74} & Questionnaire & 2017 & Interaction \& Demography \\
P2V and V2P Communication for Pedestrian Warning on the basis of Autonomous Vehicles & \citep{RN127} & Model (application) & 2016 & Interaction \\
External Vehicle Interfaces for Communication with Other Road Users? & \citep{RN133} & Model (external interfaces) & 2018 & Interaction \\
Intent Communication between Autonomous Vehicles and Pedestrians & \citep{RN132} & Model (external interfaces) & 2017 & Interaction \\
Pedestrian Notification Methods in Autonomous Vehicles for Multi-Class Mobility-on-Demand Service & \citep{RN69} & Model (external interfaces) & 2018 & Interaction \\
Investigating pedestrian suggestions for external interfaces on fully autonomous vehicles: A virtual reality experiment & \citep{RN68} & VR & 2018 & Interaction \\\hline
\end{tabular}
\end{table*}
\subsubsection{The adaptability of receptivity}
In summary, demography, understanding, the ability of AVs, interaction, company brands, and accidents influence receptivity to AVs. Among these factors, interaction accounts for a large share, which again underlines the importance of interaction and interaction experience. Understanding and ability also contribute to receptivity. More importantly, the ability of AVs, knowledge, and interaction reinforce one another, and what these factors have in common is an emphasis on real experience with AVs, which occurs in close contact with pedestrians. However, we seldom see research on Chinese pedestrians; we hardly see interaction experiments carried out in China; and we never see Waymo (Google) or Cruise (General Motors) test their AVs in the Chinese environment. Therefore, the receptivity of Chinese pedestrians to AVs is bound to be very low. In conclusion, the receptivity of Chinese pedestrians is bad and not adaptive to China.
\subsection{The adaptability of driverless technologies}
According to the analyses above, we summarize the adaptability of AVs to pedestrians in Table \ref{table 13}. Interaction interfaces can convey the state of AVs to pedestrians but need a standard language to govern external interfaces. Driverless technologies detect normal pedestrians well. In small and distracted pedestrian detection, precision is acceptable with room to improve. The worst scenes involve heavily occluded pedestrians, where detectors perform very badly. Additionally, the influencing factors of receptivity underline the importance of real experience with AVs, yet little such work has been conducted in China; hence the receptivity of Chinese pedestrians is poor.
\begin{table*}[]
\centering
\caption{The adaptability of AVs to pedestrians in urban China}
\label{table 13}
\begin{tabular}{m{2.5cm}<{\centering}m{1.5cm}<{\centering}m{2cm}<{\centering}m{1.5cm}<{\centering}m{2cm}<{\centering}m{3cm}<{\centering}m{2.5cm}<{\centering}}\hline
\multirow{2}{*}{Technical demands} & \multicolumn{4}{c}{AVs adapt to pedestrians: detection} &\multicolumn{2}{c}{Pedestrians adapt to AVs} \\\cline{2-7}
 & Normal pedestrians & Occluded pedestrians & Small pedestrians & Distracted pedestrians & {Interaction(HMI)} & {Receptivity} \\\hline
Evaluation & Excellent & Bad & OK & OK & OK & Bad \\
Limitations and suggestions & N/A & Greatly improve precision under heavy occlusion (lowest miss rate: 49\%) & Improve detection precision (lowest miss rate: 23\%) & More research on phone distraction detection; more research on other distraction factors & Call for a standard HMI design language; avoid information overload of HMI; fuse different interface modalities
 & Receptivity research in China is totally lacking; conduct driverless experiments involving pedestrians \\\hline
\end{tabular}
\end{table*}
\subsection{Influence on pedestrian mentality and government regulations}
AVs will greatly change the traditional picture of pedestrian traffic and bring changes to pedestrian mentality and regulations. In the scholarly literature, some articles discuss the mentality of drivers in AVs and the legal obstacles to pushing AVs into mass production, such as issues of compliance, liability, and information governance \citep{RN181, RN284, RN285, RN182, RN283}. However, there is a lack of research on how pedestrian mentality and government regulation of pedestrians will change when AVs enter the road. Therefore, based on the analyzed pedestrian behaviors and the adaptability analyses, the influence of AVs on pedestrian mentality and government regulations is discussed below. \\
Firstly, the mentality behind these bad pedestrian behaviors will change. Currently, AVs struggle with occluded pedestrian detection, which means AVs will not proceed when even a few pedestrians are present. Furthermore, the human-machine interface will accurately convey to pedestrians that the AV is going to stop. In this situation pedestrians gain road rights with no fear of accidents and learn from the experience, which could make red-light running more serious. Similarly, jaywalking becomes safer for pedestrians because they can see directly that the AV is about to stop, instead of having to read the intentions of a human driver. As for distraction, pedestrians will have to pay attention to AVs because they are unfamiliar with this new technology and must read the human-machine interfaces themselves, which may reduce the number of distracted pedestrians. In summary, pedestrians will pay more attention to traffic but become bolder in violating the rules. \\
Under the context of AVs, pedestrian regulations should be adjusted accordingly. We concluded above that pedestrians may become bolder in violating the rules; pedestrians would then be safe, but AVs would lose their ability to make traffic efficient. Under most existing regulations, offending pedestrians are hard to catch and the punishments are light, which does not prevent bad behavior efficiently. The government must therefore take stronger and more efficient measures to deter pedestrians from violating the rules, for example by recording violations in personal credit reports or building pedestrian bridges. Only strong and efficient measures can reduce pedestrian violations and let AVs make traffic more efficient.
\section{Challenges \& Opportunities }
Based on the above analyses, we summarize some challenging but promising research directions for the Chinese pedestrian environment.
\subsection{Standard interaction language}
As discussed above, much research has emerged on external interfaces that let AVs interact with pedestrians, but there is no common language: what information to transfer and how to transfer it depends on the individual researcher. One can imagine the mess if different AVs used different ways of informing pedestrians, causing possible misunderstandings. We therefore call for a standard interaction language with two parts. First, the kinds of information transferred should be standardized: all AVs would transfer only specific kinds of information, such as starting up or observing the pedestrian. Second, the transfer methods should be standardized: for example, AVs would indicate that they are starting up only by lighting red lights. Under such a system, we believe pedestrians would understand driverless technologies better, pushing AVs to become real traffic participants in the long run.
\subsection{Interfaces fusion of HMI design}
A single-modality interface, such as an LED strip, may offer limited interaction information and has inherent shortcomings. Fusing interfaces of different modalities is a promising solution that would improve the precision and robustness of interaction. For example, pedestrians can observe all the interfaces before deciding what to do; in particular, pedestrians can still judge the state of an AV when some interfaces fail, which is essential for protecting pedestrian safety in a future with AVs. We suggest that researchers combine the advantages of different interfaces and offset the shortcomings of one interface by adding others.
\subsection{Information overload avoidance on HMI design}
Interface fusion has become a trend in HMI design; however, too many interfaces lead to interaction information overload. In that situation pedestrians tend to check all interfaces before deciding what to do, which confuses them and causes traffic inefficiency because observing the interfaces takes time. We therefore suggest avoiding information overload in HMI design; the reviewed articles indicate that three interfaces are enough.
\subsection{Small pedestrian detection}
Small pedestrian detection is important for safety around the world. In the Chinese driving environment there are many small pedestrians because of jaywalking. However, the optimal detector still has a 23\% miss rate on small pedestrians, which is far from practical use. We suggest that more effort be devoted to improving this performance.
\subsection{Heavily occluded pedestrian detection}
Occlusion often happens when pedestrians are crossing at crosswalks and is a challenging problem in the Chinese driving environment. Nevertheless, detectors perform poorly on heavily occluded pedestrians. Hence, we call for efforts to tackle heavily occluded pedestrian detection.
\subsection{More distracted pedestrian research}
We discussed distracted pedestrian detection above with a focus on smartphone use. However, only one paper has been implemented to detect pedestrians using phones, and further research is required to push phone-use detection forward. Moreover, other causes of distraction also call for research in the context of AVs. We therefore suggest more research on distracted pedestrians.
\subsection{Nation-based receptivity research}
In the receptivity section, we identified factors that influence pedestrian receptivity, such as demography, knowledge, and interaction, which highlight real experience between AVs and pedestrians. Research based on American pedestrians is therefore unlikely to be adaptive to other countries. Yet, although China is the largest car market, little receptivity research has been conducted there. We therefore suggest nation-based receptivity research, especially in China, to gain the close knowledge needed to develop AVs.
\subsection{Driverless experiments involving pedestrians}
We concluded that real interaction experience greatly promotes pedestrians' trust in AVs, yet few studies involve pedestrians. To achieve receptivity, why not invite more pedestrians to join empirical experiments? In doing so, researchers gain practical feedback while pedestrians get to interact with AVs, learn more about them, and hence become more receptive. Overall, it is wise for researchers to conduct experiments with pedestrians.
\section{Conclusions}
The objective of this paper is to survey the adaptability of autonomous vehicles to pedestrians in urban China. China has a complicated pedestrian environment, yet little is known about the adaptability of future driverless technology. To this end, we analyzed three typical pedestrian behaviors in urban China and summarized the key technical demands on AVs. Then, by reviewing the latest driverless technologies, we assessed the adaptability of autonomous vehicles to pedestrians in each respect. Finally, we summarized the challenging problems and opportunities in the Chinese pedestrian environment.\\
As discussed above, the adaptability of autonomous vehicles depends on the adaptability of interaction, detection, and receptivity. In conclusion, driverless technologies perform well in normal pedestrian detection and acceptably in small pedestrian detection, distracted pedestrian detection, and interaction. However, occluded pedestrian detection and pedestrian receptivity are not adaptive to China.
We noted some challenging but promising areas for the Chinese pedestrian environment, such as standard interaction languages and nation-based receptivity research. These aspects could form our future research or be pursued by other practitioners.
\section*{Acknowledgment}
This research was funded by the National Natural Science Foundation of China (Grant No. 51605054), State Key Laboratory of Vehicle NVH and Safety Technology (NVHSKL-202008 and NVHSKL-202010), The Science and Technology Research Program of Chongqing Education Commission of China (KJQN201800517 and KJQN201800107), Fundamental Research Funds for the Central Universities (No: 2019CDXYQC003), Chongqing Social Science Planning Project (No: 2018QNJJ16), and Key Technical Innovation Projects of Chongqing Artificial Intelligent Technology (cstc2017rgzn-zdyfX0039).
\bibliographystyle{cas-model2-names}
|
1,108,101,563,128 | arxiv | \section{Introduction}
Over the past two decades, several high-stakes decision-making domains such as the child-welfare system (CWS), criminal justice system, education, and medical services have increasingly turned towards risk assessment algorithms as a means to standardize and improve decision-making. Facing severely limited resources and new dilemmas in the form of burdensome workloads and high staff turnover, most human services agencies have also turned towards algorithms as they purportedly promise to reduce costs and provide greater efficiencies in public policy and social services delivery. CWS has also been the center of public and media scrutiny because of the harm caused to children who are removed from the care of their parents \cite{camasso2013decision}. On the other hand, CWS also receives severe criticism and media attention for child abuse tragedies where the system failed to remove and protect a child \cite{gajanan_2020}. This has further mounted the pressure on CWS in several states in the United States (U.S.) to employ structured decision-making tools (and more recently, algorithmic decision-making) to prove that they are employing evidence-based, consistent, and objective decision-making processes \cite{saxena2020conducting, saxena2020child}. Decades of research in clinical psychology and medicine shows that statistical decision-making outperforms human experts in prediction tasks \cite{grove2000clinical, aegisdottir2006meta}, and this is often cited as a justification for introducing algorithms in the public sector. However, as illustrated by my CHI 2020 literature review \cite{saxena2020human}, CWS poses its own challenges with respect to the \textit{\textbf{technical}} (i.e., quality of data, reliability/validity of constructs), \textit{\textbf{social and cultural}} (i.e., workers' interactions with algorithms, impact of systemic constraints), \textit{\textbf{theoretical}} (i.e., what is empirical risk vs. theoretical risk?), and \textit{\textbf{societal}} (i.e., impact of algorithms on communities and decision-making ecosystem) implications of algorithmic decision-making.
\vspace{0.15cm}
Abebe et al. \cite{abebe2020roles} highlight that much of the computational research that focuses on fairness, bias, and accountability in algorithmic systems continues to formulate “fair” technical solutions while failing to address deeper systemic and structural injustices. Through my dissertation work, I bring attention back to the \textit{sociotechnical} and highlight social problems in child-welfare and how these problems become embedded in algorithmic systems. Through the studies discussed below, my dissertation assumes the dual roles of \textit{computing as rebuttal}, where I highlight the technical limitations and feasibility of risk assessment algorithms, and of \textit{computing as synecdoche}, by uncovering systemic complexities and social problems that directly impact families. This dissertation also seeks to make contributions at the intersection of the gaps highlighted by the literature review and to recommend solutions centered on strength- and asset-based approaches \cite{bronfenbrenner1975reality, zimmerman2013resiliency, badillo2018chibest} that will improve the state of current algorithmic interventions, enhance child-welfare practice, and improve street-level decisions mediated through algorithms.
Therefore, my dissertation answers the following overarching research questions:
\begin {itemize} [leftmargin=*]
\item \textbf{RQ1:} (a) How do caseworkers interact with algorithms in their daily lives, and (b) How does the implementation of a given algorithm impact algorithmic decision-making, human discretion, and bureaucratic processes?
\item \textbf{RQ2:} (a) Can computational text analysis help uncover invisible patterns of human discretionary work conducted within the constraints of bureaucracy, and (b) can these theoretical signals derived from casenotes help contextualize algorithmic decisions?
\item \textbf{RQ3:} (a) How is ``risk'' quantified empirically within algorithmic systems as compared to how it is understood theoretically within the domain, and (b) how do risk factors fluctuate and mediate each other throughout the child-welfare process, and what are the implications for algorithmic decision-making?
\end{itemize}
To answer these questions, I will conduct the four studies described below. Examining the nature of practice and street-level discretionary work as well as the impact of systemic and policy-related barriers on decision-making (human or algorithmic) will allow us to develop technical solutions that operate within these constraints and augment the quality of human discretionary work.
\section{Research Overview}
In the following sections, I provide a short overview of my four dissertation studies.
\subsection{\large{Study 1: Theoretical Framework for Algorithmic Decision-Making in the Public Sector Developed through an Ethnography of Child-Welfare}}
This study constitutes an in-depth ethnographic case study that I conducted at a child-welfare agency in Milwaukee, Wisconsin \cite{saxena2021framework}. It was published and presented at CSCW 2021. It contributes to the \textbf{\textit{theoretical}} and \textbf{\textit{social and cultural}} gaps highlighted by the literature review. Algorithms in the public sector is a domain in its own right and requires a cohesive framework that explains how algorithms interact with bureaucracy and human discretion. First, drawing upon theories from Human-Computer Interaction (HCI), Science and Technology Studies (STS), and Public Administration (PA), we propose a theoretical framework for algorithmic decision-making for the public sector (ADMAPS) which accounts for the interdependencies between human discretion, bureaucratic processes, and algorithmic decision-making. The framework is then validated through a case study of algorithms in use at the agency. Second, the ethnography uncovers the daily algorithmic practices of caseworkers, what causes them to (dis)trust an algorithm, and how they navigate through different algorithmic systems, especially when those systems do not account for policy and systemic barriers or resource constraints at the agency.
\subsection{\large{Study 2: Examining Invisible Patterns of Street-level Discretionary Work in Child Welfare embedded in Caseworker Narratives}}
This study seeks to utilize sources of information that have been hard to quantify so far, namely, caseworker narratives. Child-welfare caseworkers are trained to write detailed casenotes about their interactions with families and case progress throughout the life of the case. This study contributes to the \textbf{\textit{technical}} and \textbf{\textit{theoretical}} gaps illustrated by the literature review by deriving rich qualitative signals from casenotes using natural language processing techniques such as topic modeling. We specifically analyze casenotes written by the Family Preservation Services (FPS) team that works closely with birth parents in their efforts to achieve reunification. Casenotes offer a rich description of decisions, relationships, conflicts, personas, as well as policy-related and systemic barriers. Analyzing these casenotes offers a unique lens towards understanding the workings of a child-welfare team trying to achieve reunification; one of the primary policy-mandated goals of CWS. Theoretical signals derived from casenotes will also help contextualize the quantitative structured assessments \cite{saxena2022train} and highlight patterns of invisible labor conducted by caseworkers as well as systemic constraints and power asymmetries that impact all decisions. This study was published at CHI 2022 \cite{saxena2022unpacking}.
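As a sketch of this kind of pipeline (illustrative only: the casenote strings are fabricated placeholders, since real casenotes are confidential, and the study's actual preprocessing and model settings may differ):
\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

casenotes = [
    "worker met birth mother to discuss the visitation schedule",
    "transportation barrier delayed the supervised visit this week",
]   # fabricated placeholders

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(casenotes)
lda = LatentDirichletAllocation(n_components=2,
                                random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-5:][::-1]  # heaviest words per topic
    print(k, [vocab[i] for i in top])
\end{verbatim}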
\subsection{\large{Study 3: Algorithms in the Child-Welfare Ecosystem: Impact on Practice, Organization, and Street-Level Decision-Making}}
Drawing upon findings from a two-year ethnography conducted at a child-welfare agency, we highlight how algorithmic systems are embedded within a complex decision-making ecosystem at critical points of the child-welfare process. In our prior study \cite{saxena2021framework}, we focused on the micro-interactions between the dimensions of human discretion, algorithmic decision-making, and bureaucratic processes to understand why algorithms failed (or succeeded) to offer utility to child-welfare staff and their impact on the quality of human discretionary work. In this study, we critically investigate the macro-interactions between these three elements to assess the impact of algorithmic decision-making on the nature of practice, the organization, as well as the interactions between human discretion and bureaucratic processes, to understand how the nature of street-level decision-making is changing and whether algorithms are living up to the promises of cost-effective, consistent, and fair decision-making. This study contributes to the \textbf{\textit{social and cultural}} and \textbf{\textit{societal}} gaps highlighted by the literature review by unpacking how the decision-making ecosystem within the public sector is changing. It also presents the case study of an algorithm that offers higher utility to caseworkers but required significant investments from the agency leadership to bring about an ecological change in decision-making in which the algorithmic system plays an essential role. This manuscript is under review at the ACM Journal on Responsible Computing (as of October 2022).
\subsection{\large{Study 4: Rethinking "Risk" in Algorithmic Systems Through A Computational Narrative Analysis of Casenotes in Child-Welfare}}
Risk assessment algorithms have been adopted by several public sector agencies to make high-stakes decisions about human lives. However, there is a mismatch between how risk is quantified empirically based on administrative data versus how it is understood theoretically within the domain. Public servants such as caseworkers are essentially risk workers who are tasked with assessing and managing risks, translating risk in different contexts, and conducting care work in the context of risk \cite{gale2016towards}. However, this risk work is increasingly mediated through algorithmic systems, with a mismatch between \textit{empirical risk} and \textit{theoretical risk} that leads to unreliable decision-making and conflicts in practice. This study contributes to the \textbf{\textit{theoretical}} and \textbf{\textit{societal}} gaps highlighted by the literature review. Algorithms model ``risk'' based on individual client characteristics to identify clients most in need. However, this understanding of risk is primarily based on easily quantifiable risk factors that present an incomplete and biased perspective of clients. In this study, I conduct a computational narrative analysis of child-welfare casenotes and draw attention toward deeper systemic risk factors that are hard to quantify but directly impact families and street-level decision-making. Beyond individual risk factors, the system itself poses a significant amount of risk to families, where parents are over-surveilled by caseworkers and experience a lack of agency in decision-making. I also problematize the notion of risk as a static construct by highlighting the temporality and mediating effects of different risk and protective factors, and show that any temporal point estimate of risk will produce biased predictions. I also draw caution against using casenotes in NLP-based algorithms by unpacking their limitations and the biases embedded within them. This study is currently under submission at CHI 2023.
\vspace{0.2cm}
\section{Research Progress \& GROUP 2022 DC Participation}
All four studies have been completed, with \textbf{Study 1} and \textbf{Study 2} published and presented at their respective conferences. The manuscript for \textbf{Study 3} is currently under review at the ACM Journal on Responsible Computing. \textbf{Study 4} is currently under review at CHI 2023.
\vspace{0.2cm}
\section{EXPECTED OUTCOMES}
My dissertation assumes the dual roles of \textit{computing as rebuttal} and \textit{computing as synecdoche} and will make three contributions. First, I highlight the technical limitations and feasibility of risk assessment algorithms and draw attention to the systemic complexities and structural issues that directly impact families. Second, I develop a theoretical framework for algorithmic decision-making in the public sector that accounts for the complex interdependencies between human discretion, bureaucratic processes, and algorithmic decision-making. Third, I show how computational narrative analysis can help uncover patterns of invisible labor, systemic constraints, and power asymmetries, and I problematize the empirical notion of risk by highlighting the temporality of risk as well as systemic risk factors that are hard to quantify but directly impact street-level decision-making.
\bibliographystyle{ACM-Reference-Format}
|
1,108,101,563,129 | arxiv | \section{Introduction}
Discrete valued time series are increasingly of practical importance with applications in diverse fields such as analysis of crime statistics, econometric modelling, high frequency financial data, animal behaviour, epidemiological assessments and disease outbreak monitoring, and modern biology including DNA sequence analysis -- see \cite{dunsmuir2008assessing}. In this paper we focus on time series of binomial counts.
Two broad classes of models for time series of counts, based on the categorization of \cite{cox1981statistical}, are generally discussed in the literature: observation driven models, in which the serial dependence relies on previous observations and residuals, and parameter driven models, in which the serial dependence is introduced through an unobserved latent process. Estimation of parameter driven models is significantly challenging, especially when the latent process is correlated. Therefore methods that provide preliminary information about the regression parameters without requiring a heavy computational load are appealing. For example, the use of generalized linear model (GLM) estimation for obtaining estimates of the regression parameters is discussed in \cite{davis2000autocorrelation} and \cite{davis2009negative} for Poisson and negative binomial observations respectively. GLM estimation is consistent and asymptotically normal for these two types of response distribution even when there is a latent process inducing serial dependence. However, as recently pointed out by \cite{wu2014parameter} and discussed in more detail below, use of GLM for binary or binomial data leads to asymptotically biased estimates. \cite{wu2014parameter} propose a semiparametric estimation method for binary response data in which the marginal probability of success is modelled non-parametrically. This paper takes a different approach and suggests using estimation based on one-dimensional marginal distributions, which accounts for the variance of the latent process but not the serial dependence. Such a procedure is easy to implement using standard software for fitting generalized linear mixed models (GLMM). We show that this method leads to estimates of the regression parameters and the variance of the latent process which are consistent and asymptotically normal even if the latent process includes serial dependence. Additionally, the method extends easily to other response distributions such as the Poisson and negative binomial, and in these cases it will improve the efficiency of regression parameter estimates relative to GLM estimates.
Suppose $Y_t$ represents the number of successes in $m_t$ trials observed at time $t$. Assume that there are $n$ observations $\{y_1,\ldots,y_n\}$ from the process $\{Y_t\}$ and that $x_{nt}$ is an observed $r$-dimensional vector of regressors, which may depend on the sample size $n$ to form a triangular array, and whose first component is unity for an intercept term. Then given $x_{nt}$ and a latent process $\left\{\alpha_{t}\right\}$, the $Y_{t}$ are independent with density
\begin{equation}
f_{Y_{t}}(y_{t}|x_{nt},\alpha_{t};\theta)=\exp\left\{ y_{t}W_{t}- m_{t}b(W_{t})
+ c(y_{t})\right\} \label{eq: EFDensity}%
\end{equation}
in which
\begin{equation}
W_{t}=x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta+\alpha_{t}, \label{eq: LinPredWt}%
\end{equation}
with $b(W_t)=\log(1+\exp(W_t))$ and $c(y_t)=\log \binom{m_t}{y_t}$. Then
\[
E(Y_{t}|x_{nt},\alpha_{t})= m_{t}\dot{b}(W_{t}), \quad \mathrm{Var}(Y_{t}|x_{nt},\alpha_{t})= m_{t}\ddot{b}(W_{t}).
\]
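For the binomial-logit case considered here, these conditional moments have an explicit form:
\[
\dot{b}(W_{t})=\frac{e^{W_{t}}}{1+e^{W_{t}}}, \qquad
\ddot{b}(W_{t})=\dot{b}(W_{t})\left\{1-\dot{b}(W_{t})\right\},
\]
so that, given $x_{nt}$ and $\alpha_{t}$, $Y_{t}$ has mean $m_{t}p_{t}$ and variance $m_{t}p_{t}(1-p_{t})$ with $p_{t}=\dot{b}(W_{t})$.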
The process $\{\alpha_t\}$ is not observed and because of this is referred to as a latent process. Often $\{\alpha_t\}$ is assumed to be a stationary Gaussian linear process with zero mean and auto-covariances
\begin{equation*}
\mathrm{Cov}\left(\alpha_t, \alpha_{t+h}\right) = \tau R(h;\psi)
\end{equation*}
where $\tau$ is the marginal variance of $\alpha_t$ and $\psi$ contains the parameters governing the serial dependence of $\alpha_t$. The specification of a stationary Gaussian linear process covers many practical applications and we will assume it for the remainder of the paper. However, Gaussianity is not required for the main asymptotic results presented here, and in general $\alpha_t$ can be assumed to be a stationary strongly mixing process. We will discuss this extension further in Section \ref{Sec: discussion}.
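For example, if $\{\alpha_t\}$ is a stationary first-order autoregression, the covariance structure above is obtained with a single dependence parameter:
\[
\alpha_{t} = \phi\,\alpha_{t-1} + \epsilon_{t}, \qquad
\epsilon_{t} \sim \mathrm{N}\!\left(0,\,\tau(1-\phi^{2})\right), \quad |\phi|<1,
\]
so that $\mathrm{Var}(\alpha_{t})=\tau$ and $R(h;\psi)=\phi^{|h|}$ with $\psi=\phi$.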
We let $\theta=(\beta,\tau,\psi)$ denote the collection of all parameters and let $\theta_0$ be the true parameter vector. For the above model the likelihood is defined in terms of an integral of dimension $n$ as follows,
\begin{equation}\label{eq: fullLiklihood}%
L(\theta): = \int_{ \mathbb{R}^{n}}\prod_{t=1} ^{n}\exp\left\{y_{t}W_{t} - m_t b(W_{t})+ c(y_{t})\right\} g(\alpha;\tau,\psi) d\alpha
\end{equation}
where $g(\alpha;\tau,\psi)$ is the joint density of $\alpha=(\alpha_1,\ldots,\alpha_n)$ given the parameters $\tau$ and $\psi$.
Maximization of the likelihood \eqref{eq: fullLiklihood} is computationally expensive. Methods for estimating the high dimensional integrals in \eqref{eq: fullLiklihood} using approximations, Monte Carlo methods, or both are reviewed in \cite{DunsDVTS2015}. However, simple-to-implement methods that provide asymptotically normal unbiased estimators of $\beta$ and $\tau$ without the need to fit the full likelihood are useful for constructing the statistics needed to investigate the strength and form of the serial dependence. They can also provide initial parameter values for the maximization of the full likelihood \eqref{eq: fullLiklihood}.
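To indicate the source of the computational burden, note that the simplest (and typically impractical) Monte Carlo approximation of \eqref{eq: fullLiklihood} averages over $S$ simulated latent paths,
\[
\hat{L}(\theta) = \frac{1}{S}\sum_{s=1}^{S} \prod_{t=1}^{n}
\exp\left\{ y_{t}W_{t}^{(s)} - m_{t} b(W_{t}^{(s)}) + c(y_{t})\right\},
\qquad W_{t}^{(s)} = x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta + \alpha_{t}^{(s)},
\]
where the $\alpha^{(s)}$ are independent draws from $g(\cdot\,;\tau,\psi)$; the variance of $\hat{L}(\theta)$ grows rapidly with $n$, which motivates the importance sampling and analytic approximations reviewed in \cite{DunsDVTS2015}.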
For practitioners, GLM estimation has strong appeal as it is easy to fit with standard software packages. GLM estimators of the regression parameters $\beta$ are obtained by treating the observations $y_t$ as being independent with $W_t=x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta$ and using the GLM log-likelihood
\begin{equation}\label{eq: loglikglm}%
l_0(\beta): =\log L_0(\beta) = \sum_{t=1} ^{n}\left[y_{t}(x_{nt} ^{{ \mathrm{\scriptscriptstyle T} }}\beta) - m_{t}b(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta) + c(y_{t})\right]
\end{equation}
We let $\tilde \beta$ denote the value of $\beta$ which maximises \eqref{eq: loglikglm}. This GLM estimate assumes that there is no additional unexplained variation in the responses beyond that due to the regressors $x_{nt}$.
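For concreteness, the following minimal \textbf{R} sketch (ours, for illustration only; the data generating values $\beta_0=(1,2)$, $\tau_0=1$ anticipate the simulations of Section \ref{Sec: Simulations}) fits this GLM to a series generated under model \eqref{eq: LinPredWt} with a latent AR(1) process:
\begin{verbatim}
## Sketch: GLM fit that ignores the latent process, i.e. maximises
## l_0(beta); the response is cbind(successes, failures).
set.seed(1)
n <- 500; m <- 3; phi <- 0.2
u <- (1:n) / n                              # regressor x_nt = (1, t/n)
alpha <- as.numeric(arima.sim(list(ar = phi), n,
                              sd = sqrt(1 - phi^2)))   # Var(alpha_t) = 1
y <- rbinom(n, size = m, prob = plogis(1 + 2 * u + alpha))
fit0 <- glm(cbind(y, m - y) ~ u, family = binomial)
beta_tilde <- coef(fit0)   # asymptotically biased for beta_0 (see below)
\end{verbatim}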
However, as recently noted by \cite{wu2014parameter}, GLM does not provide consistent estimates of $\beta$ when $W_t$ contains a latent autocorrelated component. To be specific, for deterministic regressors $x_{nt}=h(t/n)$ for example, $n^{-1}l_{0}(\beta)$ has limit
\begin{equation*}
Q(\beta)= \bar{m}\int_{0}^{1}\left( \left(\int_{\mathbb{R}} \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+\alpha) g(\alpha;\tau_0)d\alpha\right) (h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta) - b(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta)\right)du
+ \overset{M}{\underset{m=1}\sum} \kappa_{m} \overset{m}{\underset{j=0}\sum} \int_{0}^{1}\pi^0(j)c(j)du
\end{equation*}
where $\bar{m} = E(m_{t})$, $\kappa_{m}=P(m_{t}=m)$ and $\pi^0(j)=P(Y_t=j|x_{nt}, \theta_0)$. We show below that $\tilde\beta$ converges to $\beta'$, which maximizes $Q(\beta)$. Equivalently $\beta'$ is the unique vector that solves
\begin{equation} \label{eq: betaprime equation for 2a}
\bar{m} \int_0^1 \left(\int_{\mathbb{R}} \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0 + \alpha)g(\alpha;\tau_0)d\alpha - \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime)\right)h(u) du = 0
\end{equation}
In the Poisson or negative binomial cases, $m_t\equiv 1$ and $E(Y_t)=E(\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+\alpha_t))=\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0 + \frac{\tau}{2})$, in which $\tau/2$ only modifies the regression intercept but does not influence the response to the other regression terms. Such an identity does not usually hold for binomial observations. When $\tau_{0}>0$, the relationship between $\beta'$ and $\beta_0$ in binomial logit regression models has been investigated by several researchers. For example, \cite{neuhaus1991comparison} proved that the logit of the marginal probability $\int (1+e^{-(x^T\beta_0+\alpha)})^{-1}g(\alpha)d\alpha$ can be approximated by $x^T\beta^\ast$, where $\vert\beta^\ast\vert \le \vert\beta_0\vert$ for a single covariate $x$, with equality attained only when $\tau=0$ or $\beta_0=0$; \cite{wang2003matching} proved that the logit of $\int (1+e^{-\alpha-x^T\beta_0})^{-1} g(\alpha)d\alpha$ equals $x^T\beta_0$ only if $g(\cdot)$ is the ``bridge" distribution; and \cite{wu2014parameter} proposed their MGLM method because GLM\ estimates for binomial observations generated under model \eqref{eq: LinPredWt} are inconsistent.
To overcome the inconsistency observed in GLM\ estimation, in this paper we propose the use of marginal likelihood estimation, which maximises the likelihood constructed under the assumption that the process $\{\alpha_t\}$ consists of independent identically distributed random variables. Under this assumption the full likelihood \eqref{eq: fullLiklihood} is replaced by the ``marginal" likelihood
\begin{equation}
L_1(\delta) = \prod_{t=1}^{n}f(y_{t}|x_{t}, \delta) = \prod_{t=1}^{n} \int_{ \mathbb{R}} \exp\left( y_{t}W_{t}-m_t b(W_{t}) + c(y_{t})\right) g(\alpha_t;\tau) d\alpha_t, \label{eq: marginal likelihood}%
\end{equation}
and the corresponding ``marginal" log-likelihood function is
\begin{equation}
l_1(\delta) = \sum_{t=1}^{n}\log f(y_{t}|x_{t},\delta)=\sum_{t=1} ^{n}\log \int_{ \mathbb{R}} \exp\left( y_{t}W_{t}- m_tb(W_{t}) + c(y_{t}) \right) g(\alpha_t;\tau) d\alpha_t, \label{eq: marginal log likelihood}%
\end{equation}
where $\delta = (\beta,\tau)$ and $g(\cdot,\tau)$ is the density for a mean zero variance $\tau$ normal random variable. Let $\hat\delta$ be the estimates obtained by maximising
\eqref{eq: marginal log likelihood} over the compact parameter space $\Theta:=\{\beta \in \mathbb{R}^r: \|\beta - \beta_0\| \le d_1\}\bigcap \{\tau\ge 0: |\tau - \tau_0| \le d_2\}$, where $d_1<\infty$, $d_2<\infty$.
Marginal likelihood estimates $\hat\delta$ can be easily obtained with standard software packages for fitting generalized linear mixed models. Since these estimates are consistent, they can be used as starting values for maximization of the full likelihood \eqref{eq: fullLiklihood}. Additionally, the asymptotic distribution of $\hat\delta$, and the standard deviations derived from the asymptotic covariance matrix, can be used to assess the significance of the regression parameter estimates $\hat\beta$. Moreover, in another paper we have developed a two-step score-type test to first detect the existence of a latent process and then, if one is present, whether there is serial dependence. The asymptotic results of this paper are needed in order to derive the large sample chi-squared distribution of the second step, the score test for detecting serial dependence.
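As a sketch of the computation (ours, continuing the illustrative data generated in the GLM sketch above), the marginal likelihood \eqref{eq: marginal log likelihood} corresponds to a GLMM with an independent observation-level random intercept, which \textsf{lme4} fits by adaptive Gauss--Hermite quadrature:
\begin{verbatim}
## Sketch: marginal likelihood fit via an observation-level random
## intercept; y, m, u as in the earlier GLM sketch.
library(lme4)
obs  <- factor(seq_along(y))                # one level per time point
fit1 <- glmer(cbind(y, m - y) ~ u + (1 | obs),
              family = binomial, nAGQ = 9)
beta_hat <- fixef(fit1)                     # estimate of beta
tau_hat  <- as.numeric(VarCorr(fit1)$obs)   # estimate of tau; may be 0
\end{verbatim}
The event $\hat\tau=0$ in such a fit corresponds to the degeneration to the GLM estimate discussed in Section \ref{Sec: Pile up probability}.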
Large sample properties of the marginal likelihood estimates $\hat\delta$ are provided in Section \ref{Sec: Asympototic Theory for Marginal Likelihood Estimates}. The simulations of Section \ref{Sec: Simulations} show that marginal likelihood estimation leads to a high probability of $\hat\tau=0$ when the number of trials, $m_t$, is small; in particular $P(\hat\tau=0)$ can be almost 50\% for binary data. Hence Section \ref{Sec: Pile up probability} focuses on obtaining asymptotic approximations to an upper bound for $P(\hat\tau=0)$, which is useful for quantifying the proportion of times the marginal likelihood procedure `degenerates' to the GLM procedure. Also in Section \ref{Sec: Pile up probability} we derive a theoretical mixture distribution which provides a better approximation in this situation. Section \ref{Sec: Simulations} presents simulation evidence to demonstrate the accuracy of the asymptotic theory and of the covariance matrix of the marginal likelihood estimates. Section \ref{Sec: MGLM est} discusses the difference between marginal likelihood estimation and the MGLM estimation of \cite{wu2014parameter}. Section \ref{Sec: discussion} concludes.
\section{Asymptotic Theory for Marginal Likelihood Estimates} \label{Sec: Asympototic Theory for Marginal Likelihood Estimates}
We present the large sample properties for the marginal likelihood estimates of $\beta$ and $\tau$ obtained by maximizing \eqref{eq: marginal log likelihood}. We begin by presenting the required conditions on the latent process $\{\alpha_{t}\}$, the regressors $\{x_{nt}\}$ and the sequence of binomial trials $\{m_{t}\}$.
A process $\{\alpha_t\}$ is strongly mixing if
\[
\nu(h)=\sup_{t} \sup_{A\in \mathcal{F}_{-\infty}^{t}, B\in \mathcal{F}_{t+h}^\infty}|P(A\cap B)-P(A)P(B)|\to 0
\]
as $h\to \infty$, where $\mathcal{F}_{-\infty}^{t}$ and $\mathcal{F}_{t+h}^\infty$ are the $\sigma$-fields generated by $\{\alpha_s, s\leq t\}$ and $\{\alpha_s, s\geq t+h\}$ respectively.
In practice, the number of trials $m_{t}$ may vary with time. To allow for this we introduce:
\newtheorem{cond}{Condition} \begin{cond}\label{Cond: mt}
The sequence of trials $\{m_t: 1\le m_t\le M\}$ is a stationary strongly mixing process, independent of the regressors, with mixing coefficients satisfying $\overset{\infty}{\underset{h=0}\sum}\nu(h)< \infty$. Let $\kappa_j=P(m_t=j)$; assume $\kappa_M>0$ and $\overset{M}{\underset{j=1}\sum} \kappa_j=1$.
\end{cond}
\noindent An alternative would be to take $m_t$ as deterministic and asymptotically stationary, in which case the $\kappa_j$ would be limits of finite sample frequencies of the occurrences $m_t = j$. Both specifications include the case where $m_t=M$ for all $t$, with $M=1$ yielding binary responses.
As in the previous literature (\cite{davis2000autocorrelation}, \cite{davis2009negative} and \cite{wu2014parameter}), we allow for both deterministic and stochastic regressors:
\begin{cond}\label{Cond: Reg Trend Type}
The regression sequence is specified in one of two ways:
\begin{description}
\item (a) Deterministic covariates defined with functions: $x_{nt}=h(t/n)$ for some specified piecewise continuous vector function $h: [0,1]\to \mathbb{R}^r$.
\item (b) Stochastic covariates which are a stationary vector process: $x_{nt}=x_t$ for all $n$ where $\{x_t\}$ is an observed trajectory of a stationary process for which $E(e^{s^T X_t}) <\infty$ for all $s\in \mathbb{R}^r$.
\end{description}
\end{cond}
\begin{cond}\label{Cond: Reg Full Rank}
Let $r=\dim(\beta)$ and let $\mathbb{X}=\{x_{nt}: t\ge 1\}$ denote the regressor space. Assume $\texttt{rank}(\texttt{span}(\mathbb{X}))=r$.
\end{cond}
The full rank condition on the space spanned by the regressors required by Condition \ref{Cond: Reg Full Rank} holds in many examples. For deterministic regressors generated by functions as in Condition 2a, $r$ linearly independent vectors $X_i\in\mathbb{X}$, $i=1,\ldots,r$, exist if there are $r$ values $u_1,\ldots, u_r$ such that the corresponding vectors $h(u_1),\ldots,h(u_r)$ are linearly independent. For stochastic regressors generated by a stationary process as in Condition 2b, linearly independent $X_i$, $i=1,\ldots,r$, can be found almost surely if $\mathrm{Cov}(X_t)$ is positive definite.
\begin{cond} \label{Cond: Mixing} The latent process $\{\alpha_t\}$, is strictly stationary, Gaussian and strongly mixing with the mixing coefficients satisfying $\sum_{h=0}^\infty \nu(h)^{\lambda/(2+\lambda)} < \infty$ for some $\lambda>0$.
\end{cond}
Conditions for a unique asymptotic limit of the marginal likelihood estimators are also required. Denote the marginal probability of $j$ successes in $m_t$ trials at time $t$ as
\begin{equation*}
\pi_{t}(j)=\int_{ \mathbb{R}} e^{jW_t - m_tb(W_t)+c(j)}\phi(z_t) dz_t, \quad j=0,1,\ldots,m_t,
\end{equation*}
where $W_t=x_{nt}^T\beta + \tau^{1/2} z_t$, and $z_t = \alpha_t/\tau^{1/2}$ has unit variance. If $\{\alpha_t\}$ is Gaussian, so is the process $\{z_t\}$, and $z_t\sim N(0,1)$ with density function $\phi(\cdot)$. Similarly let $\pi^0_{t}(j)$ be the marginal probability evaluated at the true values $\beta_0$ and $\tau_0$ at time $t$.
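These probabilities are one-dimensional integrals and are cheap to evaluate numerically; an illustrative \textbf{R} sketch (ours) is:
\begin{verbatim}
## Sketch: pi_t(j) by integrating the binomial density against phi(z);
## dbinom(j, m, p) equals exp(j*W - m*b(W) + c(j)) for the logit link.
pi_marg <- function(j, eta, m, tau) {       # eta = x_nt' beta
  f <- function(z)
    dbinom(j, size = m, prob = plogis(eta + sqrt(tau) * z)) * dnorm(z)
  integrate(f, -Inf, Inf)$value
}
pi_marg(j = 1, eta = 1.5, m = 3, tau = 1)
\end{verbatim}
Returning to the asymptotic development, define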
\begin{equation} \label{eqn: Qn delta}
Q_n(\delta)=\frac{1}{n} l_1(\delta)
\end{equation}
so that, conditional on $m_t$ and $x_{nt}$,
\begin{equation}\label{eqn: E Qn delta}
E(Q_n(\delta))=\frac{1}{n}\sum_{t=1}^n \sum_{j=0}^{m_t}\pi^0_{t}(j)\log \pi_{t}(j), \quad \delta\in \Theta.
\end{equation}
Under Conditions \ref{Cond: mt} and \ref{Cond: Reg Trend Type}, $E(Q_n(\delta))\overset{a.s.}\to Q(\delta)$. Let $\pi(j,\cdot)=P(Y=j|\cdot, \delta)$ and let $\pi^0(j,\cdot)$ denote the corresponding probability evaluated at $\delta_0$. Then, under Condition 2a,
\begin{equation} \label{eqn: lim Q delta Cond 2a}
Q(\delta) = \sum_{m=1}^M \kappa_m \int_0^1 \sum_{j=0}^m \pi^0(j,h(u))\log \pi(j,h(u)) du
\end{equation}
and, under Condition 2b,
\begin{equation}\label{eqn: lim Q delta Cond 2b}
Q(\delta) = \sum_{m=1}^M \kappa_m \int_{\mathbb{R}^r} \sum_{j=0}^m \pi^0(j,x)\log \pi(j,x) dF(x).
\end{equation}
The proof of these limits is contained in the proof of Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}.
\begin{cond}\label{Cond: Ident and Const}
$Q(\delta)$ has a unique maximum at $\delta_0 = (\beta_0, \tau_0)$, the true value.
\end{cond}
We now establish the consistency and asymptotic normality of the marginal likelihood estimator.
\newtheorem{thm}{Theorem} \begin{thm}[Consistency and asymptotic normality of marginal likelihood estimators] \label{Thm: Consist&Asym Marginal Like Estimator}%
Assume $\tau_{0}>0$ and that Conditions \ref{Cond: mt} to \ref{Cond: Ident and Const} hold. Then $\hat \delta \overset{\textrm{a.s.}}{\to} \delta_0$ and $\sqrt{n}(\hat\delta - \delta_0)
\overset{d}\rightarrow N(0, \Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1})$
as $n \to \infty$, in which
\begin{equation}\label{eq: PD InfMat}%
\Omega_{1,1} = \underset{n\to \infty}\lim \frac{1}{n}\sum_{t=1}^{n}E(\dot{l}_{t}(\delta_0) \dot{l}_{t}^{{ \mathrm{\scriptscriptstyle T} }}(\delta_0)) > 0
\end{equation}
\begin{equation}\label{eq: PD CovMat}%
\Omega_{1,2}=\underset{n\to \infty}\lim \frac{1}{n}\sum_{t=1}^{n}\sum_{s=1}^{n} \mathrm{Cov} (\dot{l}_{t}(\delta_0), \dot{l}_{s}(\delta_0))
\end{equation}
where
\begin{equation}\label{eq: deriv1 loglik t}%
\dot{l}_{t}(\delta_0) = \frac{\partial \log \pi_{t}(y_{t})}{\partial\delta}\vert_{\delta_0}= f^{-1}(y_{t}|x_{nt},\delta_0)\int (y_{t}- m_{t}\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+\tau_{0}^{1/2}z_{t})) \binom{x_{nt}}{\frac{z_t}{2\sqrt{\tau_{0}}}}f(y_{t}|x_{nt},z_t,\delta_0)\phi(z_t)dz_t
\end{equation}
\end{thm}
To use this theorem in practice requires at least that the identifiability condition holds and that the covariance matrix can be estimated from a single series. We address these aspects in detail in Sections \ref{SSc: Ident} and \ref{SSc: CovMat Est}. In addition, particularly for binary responses, marginal likelihood estimation produces a high probability of $\hat\tau=0$. We address this in detail in Section \ref{Sec: Pile up probability}, where we propose an improved asymptotic distribution based on a mixture.
\subsection{Asymptotic identifiability} \label{SSc: Ident}%
We now discuss circumstances under which Condition \ref{Cond: Ident and Const} holds. For any $\delta\in \Theta$, $Q(\delta)\le Q(\delta_0)$, since for any $x$, $\sum_{j=0}^{m}\pi^0(j,x)\log \pi(j,x) \le \sum_{j=0}^{m}\pi^0(j,x)\log \pi^0(j,x)$. Thus the model is identifiable if and only if $Q(\delta) < Q(\delta_0)$ for every $\delta \in \Theta$ with $\delta \ne \delta_0$.
\newtheorem{lem}{Lemma} \begin{lem}\label{lem: Identifiable Binom}
Assume $M\ge 2$ and Condition \ref{Cond: Reg Full Rank}, then Condition \ref{Cond: Ident and Const} holds for marginal likelihood \eqref{eq: marginal log likelihood}.
\end{lem}
\noindent{\small The proof is outlined in Appendix A.}
For binary data, $M=1$, and $Q(\delta)=Q(\delta_0)$ if $\pi(1,x) = \pi^0(1,x)$ for all $x\in \mathbb{X}$. Hence model \eqref{eq: marginal log likelihood} is not identifiable if there exists $\delta\ne \delta_0$ such that $\pi(1) = \pi^0(1)$ everywhere on $\mathbb{X}$, that is, if for each distinct value $X_i \in\mathbb{X}$ some $(\beta,\tau)\ne (\beta_0,\tau_0)$ can be found to establish
\begin{equation}\label{eq: pi1 to pi01}%
\pi(1,X_i)= \int \frac{e^{X_i^T\beta+\sqrt{\tau} z}}{1+e^{X_i^T\beta +\sqrt{\tau} z}}\phi(z)dz =
\int \frac{e^{X_i^T\beta_0 +\sqrt{\tau_0} z}}{1+e^{X_i^T\beta_0+\sqrt{\tau_0} z}}\phi(z)dz = \pi^0(1,X_i).
\end{equation}
If $\tau=\tau_0$, then \eqref{eq: pi1 to pi01} implies $X_i^{{ \mathrm{\scriptscriptstyle T} }}\beta=X_i^{{ \mathrm{\scriptscriptstyle T} }}\beta_0$. Under Condition \ref{Cond: Reg Full Rank}, $r$ linearly independent $X_i$, $\mathrm{X}=(X_1,\ldots,X_r)$, can be found in $\mathbb{X}$ to establish $\mathrm{X}^{{ \mathrm{\scriptscriptstyle T} }}\beta=\mathrm{X}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0$, so that $\beta-\beta_0=\textbf{0}_r$ is the unique solution. Hence if $\tau=\tau_0$, \eqref{eq: pi1 to pi01} holds if and only if $\beta=\beta_0$, and Condition
\ref{Cond: Ident and Const} holds.
If $\tau\ne \tau_0$, then for each $X_i$ a unique solution $a_i = X_i^{{ \mathrm{\scriptscriptstyle T} }}\beta \ne X_i^{{ \mathrm{\scriptscriptstyle T} }}\beta_0$ of \eqref{eq: pi1 to pi01} can be found. Assume the regressor space $\mathbb{X}$ is a set of discrete vectors, $\mathbb{X}=\{X_i: 1\le i\le L\}$, where $X_i\ne X_j$ if $i\ne j$, and let $\mathrm{X}=(X_1,\ldots,X_L)$ be an $r\times L$ matrix. Then \eqref{eq: pi1 to pi01} holds for each $X_i\in\mathbb{X}$ if there exists a solution $\beta$ of $\mathrm{X}^T\beta=\mathrm{A}$, where $\mathrm{A}=(a_1, \ldots, a_L)$. Since $\tau\ne \tau_0$, $\beta=\beta_0$ is excluded from the possible solutions. If $L=r$, a unique solution of $\beta$ exists, hence there exists $\delta\ne \delta_0$ that establishes \eqref{eq: pi1 to pi01} for all $X_i\in \mathbb{X}$, and therefore \eqref{eq: marginal log likelihood} is not identifiable. If $L>r$, then since $\texttt{rank}(\mathrm{X})=r$, the system $\mathrm{X}^{{ \mathrm{\scriptscriptstyle T} }}\beta=\mathrm{A}$ is overdetermined. Thus a solution of $\beta$ does not always exist, and in these situations Condition \ref{Cond: Ident and Const} holds; however, a general proof without further conditions on the regressors is difficult. Instead, we provide a rigorous proof that Condition \ref{Cond: Ident and Const} holds for binary data when the regressor space $\mathbb{X}$ is connected.
\begin{lem}\label{lem: Identifiable Binary}
Let $M=1$. In addition to Condition \ref{Cond: Reg Full Rank}, $\mathbb{X}$ is assumed to be a connected subspace of $\mathbb{R}^r$, then Condition \ref{Cond: Ident and Const} holds.
\end{lem}
\noindent{\small Proof: see Appendix A.}
\subsection{Estimation of the Covariance matrix}\label{SSc: CovMat Est}
To use Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}, the asymptotic covariance matrix $\Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1}$ needs to be estimated from a single observed time series. Now $\Omega_{1,1}$ can be estimated by replacing $\delta_0$ with the marginal likelihood estimate $\hat\delta$. However, estimation of $\Omega_{1,2} = \underset{n\to\infty}\lim n^{-1}E\left[\sum_{t=1}^n \sum_{s=1}^n \dot{l}_t(\delta_0)\dot{l}_s^{{ \mathrm{\scriptscriptstyle T} }}(\delta_0)\right]$ is challenging, as it involves cross terms $E\left(\dot{l}_t(\delta_0) \dot{l}_s^{{ \mathrm{\scriptscriptstyle T} }}(\delta_0)\right)$, $s\ne t$, which cannot be estimated without knowledge of $\psi_0$. We therefore use the modified subsampling methods reviewed in \cite{wu2012variance} and \cite{wu2014parameter} to estimate $\Omega_{1,2}$.
Let $Y_{i,k_n}= (y_i,\ldots,y_{i+k_n-1})$ denote the subseries of length $k_n$ starting at the $i$th observation, where $i=1,\ldots, m_n$ and $m_n=n-k_n+1$ is the total number of subseries. Define
\[
\hat{q}_{n,t}= \frac{1}{\sqrt{n}}\dot{l}_{t}(\hat\delta)
\]
by replacing $\delta_0$ by $\hat\delta$ in \eqref{eq: deriv1 loglik t}. Under similar conditions to those given above, we show that as $k_n\to\infty$ and $m_n\to\infty$, $\hat\Gamma_{1,n}^{-1}\hat\Gamma^\dag_n\hat\Gamma_{1,n}^{-1}$ is a consistent estimator of the asymptotic covariance matrix of $\hat\delta$, where
\[
\hat\Gamma_{1,n} = \sum_{t=1}^n \hat{q}_{n,t}\hat{q}_{n,t}^{{ \mathrm{\scriptscriptstyle T} }}; \quad
\hat\Gamma^\dag_n = \frac{1}{m_n}\sum_{i=1}^{m_n}\left(\sum_{t=i}^{i+k_n-1} \sum_{s=i}^{i+k_n-1}\hat{q}_{k_n,t}\hat{q}_{k_n,s}^{{ \mathrm{\scriptscriptstyle T} }}\right)
\]
The performance of subsampling estimators relies to a large extent on the choice of $k_{n}$. Following the guidance of \cite{heagerty2000window} on optimal selection of $k_{n}$, we use $k_{n}=C[n^{1/3}]$, $C=1,2,4,8$, in the simulations. The one-dimensional integrals in $\hat{q}_{n,t}$ can be easily computed using the \textbf{R} function \textsf{integrate}.
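In outline, the subsampling computation can be coded as follows (a sketch, ours, which assumes the $n\times(r+1)$ matrix of per-observation scores $\dot{l}_{t}(\hat\delta)$ has already been evaluated, e.g.\ with \textsf{integrate}):
\begin{verbatim}
## Sketch: subsampling estimate of the asymptotic covariance of
## delta-hat; 'score' has t-th row l-dot_t(delta-hat), k_n the window.
subsample_cov <- function(score, k_n) {
  n   <- nrow(score)
  G1  <- crossprod(score) / n               # Gamma-hat_{1,n}
  m_n <- n - k_n + 1
  Gd  <- matrix(0, ncol(score), ncol(score))
  for (i in 1:m_n) {                        # Gamma-hat-dagger_n
    s  <- colSums(score[i:(i + k_n - 1), , drop = FALSE]) / sqrt(k_n)
    Gd <- Gd + tcrossprod(s) / m_n
  }
  solve(G1, Gd) %*% solve(G1)               # G1^{-1} Gd G1^{-1}
}
\end{verbatim}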
\section{Degeneration of Marginal Likelihood Estimates}\label{Sec: Pile up probability}%
Even when the identifiability conditions are satisfied, in finite samples the marginal likelihood can be maximised at $\hat\tau=0$, in which case $\hat\beta$ degenerates to the ordinary GLM estimate $\tilde\beta$. The simulation evidence of Section \ref{Sec: Simulations} suggests that the chance of this occurring, even for moderate to large sample sizes, is large (up to $50$\% for binary data, but decreasing rapidly as the number of trials $m$ increases). In this section we derive two approximations for this probability. In both approximations we conclude that $P(\hat\tau=0)$ will be high whenever the range of $x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta$ is such that $\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta)\approx a_0 + a_1(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta)$, where $a_{0}, a_{1}$ are constants. When this linear approximation is accurate, the covariance matrix for the marginal likelihood estimates is obtained as the inverse of a near singular matrix, resulting in $\textrm{var}(\hat \tau)$ being very large, so that $P(\hat\tau=0)$ is close to $50$\%. When there is a nontrivial probability of $\hat\tau=0$, the finite sample distribution of $\hat\beta$ is better approximated by a mixture of two multivariate distributions weighted by $P(\hat\tau=0)$ and $P(\hat\tau >0)$.
\subsection{Estimating the probability of $\hat\tau=0$}
One approximation for the probability of $\hat\tau=0$ can be obtained using the asymptotic normal distribution provided in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}. Define
$\kappa_{2}= P(\sqrt{n}(\hat\tau-\tau_0)\le -\sqrt{n}\tau_0) = P(\hat\tau \le 0)$; then in the limit,
\begin{equation}\label{eq: kappa2}%
\bar\kappa_{2}= \Phi( -\sqrt{n}\tau_0/\sigma_{\tau}(\delta_0)); \quad \sigma_{\tau}^2(\delta_0) = (\Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1})_{\tau\tau}
\end{equation}
where $\Phi(\cdot)$ is the standard normal distribution function.
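In code this approximation is a one-line helper (ours, for illustration; $\sigma_{\tau}$ must be supplied, either from the theoretical calculation or from an estimate):
\begin{verbatim}
## Sketch: normal approximation to P(tau-hat = 0).
kappa2_bar <- function(n, tau0, sigma_tau) pnorm(-sqrt(n) * tau0 / sigma_tau)
\end{verbatim}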
An alternative approximation to $P(\hat \tau=0)$ can be based on the score function evaluated at $\hat\tau=0$. Consider the scaled score function $S_{1,n}(\tilde\beta) = 2 n^{-1}\partial l_{1}(\beta, \tau)/\partial \tau|_{\beta=\tilde\beta,\tau=0}$, which, using integration by parts, is
\begin{equation}\label{eq: ST GLM}%
S_{1,n}(\tilde\beta)= \frac{1}{n}\sum_{t=1}^{n}\left[(y_{t}- m_t\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }} \tilde\beta))^{2} - m_t \ddot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\tilde\beta)\right].
\end{equation}
Now, $\hat\tau=0$ implies $S_{1,n}(\tilde\beta)\le 0$ but not conversely, hence $P(\hat\tau=0)$ is bounded above by $P(S_{1,n}(\tilde\beta)\le 0)$.
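The statistic \eqref{eq: ST GLM} is immediate from a fitted GLM; a sketch (ours, reusing \texttt{fit0}, \texttt{y} and \texttt{m} from the earlier GLM sketch) is:
\begin{verbatim}
## Sketch: the scaled score S_{1,n}(beta-tilde); for the logit link
## m * bddot(W) = m * p * (1 - p).
p_tilde <- fitted(fit0)                     # b-dot(x' beta-tilde)
S1n <- mean((y - m * p_tilde)^2 - m * p_tilde * (1 - p_tilde))
S1n <= 0                                    # necessary for tau-hat = 0
\end{verbatim}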
In order to derive a large sample approximation to this probability we show, in Section \ref{SSc: Asymptotic Theory GLM Estimation}, that the large sample distribution of $\sqrt{n}(S_{1,n}(\tilde\beta)-c_S)/\sigma_S$ is standard normal,
where $c_S= \underset{n\to \infty} \lim E(S_{1,n}(\beta^\prime))$ and $\sigma_{S}^2= \underset{n\to\infty}\lim \mathrm{Var}(\sqrt{n}S_{1,n}(\tilde\beta))$. Define $\kappa_{1} = P(S_{1,n}(\tilde\beta)\le 0)$, which can then be approximated by
\begin{equation}\label{eq: kappa 1}%
\bar\kappa_{1}= \Phi( -\sqrt{n}c_S/\sigma_S).
\end{equation}
The quantities $c_S$ and $\sigma_S$ can be expressed analytically for some regression specifications. In simulations, the limits are computed using numerical integration. For the binary case in particular, the ratio $c_S/\sigma_S$ can be quite small resulting in a large value for $\bar \kappa_1$. We compare how well $P(\hat \tau =0)$ is estimated by $\bar\kappa_{1}$ and $\bar\kappa_{2}$ via simulations in Section
\ref{Sec: Simulations}, and conclude that $\bar \kappa_{1}$ is slightly more accurate in the situation covered there.
\subsection{Asymptotic Theory for GLM Estimates and Marginal Score}
\label{SSc: Asymptotic Theory GLM Estimation}
To develop the asymptotic distribution of $S_{1,n}(\tilde\beta)$, the asymptotic normality of $\sqrt{n}(\tilde{\beta}-\beta')$ is required.
\begin{thm}[Asymptotic normality of GLM\ estimators]\label{Thm: GLM asymptotics}%
Under Conditions \ref{Cond: mt} to \ref{Cond: Mixing}, the estimator $\tilde\beta$ maximising the likelihood \eqref{eq: loglikglm} satisfies $\tilde\beta\overset{p}\to \beta'$ and $\sqrt{n}(\tilde\beta -\beta') \overset{d}\to \textrm{N}(0, \Omega_{1}^{-1} \Omega_{2}\Omega_{1}^{-1})$ as
$n \to \infty$, in which
\[
\Omega_{1} = \underset{n\rightarrow\infty}\lim \frac{1}{n}\sum_{t=1}^{n}m_{t}\ddot{b}
(x_{nt}^{T}\beta^{\prime})x_{nt}x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}
\]
\begin{align*}
\Omega_{2} =& \underset{n\rightarrow \infty}\lim \frac{1}{n}\sum_{t=1}^{n}\sum_{s=1}^n m_t m_s \left( \int (\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0 + \alpha_t) - \dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta')) (\dot{b}(x_{ns}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+ \alpha_s)- \dot{b}(x_{ns}^{{ \mathrm{\scriptscriptstyle T} }}\beta')
)g(\alpha_t,\alpha_s;\tau_0,\psi_0) d\alpha\right) x_{nt}x_{ns}^{{ \mathrm{\scriptscriptstyle T} }}\\
+ & \underset{n\to \infty}\lim \frac{1}{n}\sum_{t=1}^n m_t\left(\int \ddot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0 + \alpha_t)g(\alpha_t;\tau_0)d\alpha \right)x_{nt}x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}
\end{align*}
\end{thm}
The proof of this theorem is given in Appendix B. It relies on concavity of the GLM log-likelihood with respect to $\beta$. Standard functional limit theorem results are used to establish the above result, in a similar way to \cite{davis2000autocorrelation}, \cite{davis2009negative} and \cite{wu2014parameter}.
In order to use Theorem \ref{Thm: GLM asymptotics} for practical purposes, first $\beta'$ needs to be determined and then $\Omega_1$, $\Omega_2$. Estimation of $\beta'$ would require knowledge of $\tau$, and the estimation of $\Omega_2$ would require both $\tau$ and $\psi$, neither of which can be estimated using the GLM\ procedure. Hence the theorem is of theoretical value only.
Based on Theorem \ref{Thm: GLM asymptotics} we can now derive the large sample distribution of the score function of the marginal likelihood evaluated at $\tilde \delta= (\tilde \beta,0)$.
Because all derivatives of $b(\cdot)$ are uniformly bounded and $\tilde\beta\overset{p}\to\beta'$, we have
\begin{equation*}
\sqrt{n}\left(S_{1,n}(\tilde\beta) - E(S_{1,n}(\beta')) \right) = \sqrt{n}\left( S_{1,n}(\beta^\prime) - E(S_{1,n}(\beta^\prime)) \right) - J_{S}^{{ \mathrm{\scriptscriptstyle T} }}\sqrt{n}(\tilde\beta - \beta^\prime) + o_p(1).
\end{equation*}
Since $n^{-1/2}\sum_{t=1}^n (y_t-m_{t}\dot{b}(x_{nt}^T\tilde\beta))x_{nt} = 0$ by definition of $\tilde\beta$, a Taylor expansion gives
\begin{equation*}
\sqrt{n}(\tilde \beta-\beta')=\Omega_{1}^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{n} e_{t,\beta'}x_{nt} + o_p(1), \quad e_{t,\beta'}= y_t-m_t\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta').
\end{equation*}
Then it follows that
\begin{equation}\label{eq: ScoreVec tau=0}%
\sqrt{n}\left(S_{1,n}(\tilde\beta)-E(S_{1,n}(\beta^\prime))\right) - \left(U_{1,n} - J_{S}^{{ \mathrm{\scriptscriptstyle T} }}U_{2,n}\right) \overset{p}\to 0
\end{equation}
where
\begin{equation}\label{eq: Js}%
J_{S} = \underset{n\to\infty}\lim \frac{1}{n}\sum_{t=1}^n \left[2 m_t^2 (\pi^0(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }} \beta_0) - \dot{b}(x_{nt}^T\beta')) \ddot{b}(x_{nt}^T\beta^\prime)+ m_t b^{(3)}(x_{nt}^T \beta^\prime)\right] x_{nt}
\end{equation}
\begin{equation}\label{eq: U1 U2}%
U_{1,n}:= \sqrt{n}\left(S_{1,n}(\beta^\prime) - E(S_{1,n}(\beta^\prime)) \right)= \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \left( e_{t,\beta^\prime}^2 - E( e_{t,\beta'}^2) \right); \quad
U_{2,n}:= \frac{1}{\sqrt{n}}\sum_{t=1}^{n} e_{t,\beta^\prime}c_{nt}
\end{equation}
Note that $\pi^0(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0) = \int \dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+\alpha_t)g(\alpha_t,\tau_0)d\alpha_t$ and that $c_{nt} = \Omega_{1}^{-1}x_{nt}$ is a non-random vector.
The CLT for $\sqrt{n}\left(S_{1,n}(\tilde\beta)- E(S_{1,n}(\beta'))\right)$ then follows from the CLT for the joint vector $(U_{1,n}, U_{2,n})$. Note that the summand sequences in $U_{1,n}$ and $U_{2,n}$ are strongly mixing by Proposition 1 in \cite{blais2000limit}. The CLT for mixing processes of \cite{davidson1992central} can therefore be applied to show that $(U_{1,n}, U_{2,n})$ is asymptotically normally distributed with mean zero and covariance matrix
\[
\begin{pmatrix}
V_S & \Omega_{1}^{-1}K_{S}\\
K_{S}^{{ \mathrm{\scriptscriptstyle T} }}\Omega_{1}^{-1} & \Omega_{1}^{-1}\Omega_{2}\Omega_{1}^{-1}
\end{pmatrix}
\]
where $\Omega_{1}, \Omega_{2}$ are given in Theorem \ref{Thm: GLM asymptotics}, and
\[
V_S:=\underset{n\to\infty}\lim \left\{ \sum_{h=0}^{(n-1)}\left(\frac{1}{n} \sum_{t=1}^{n}\textrm{Cov}(e_{t,\beta^\prime}^2, e_{t+h, \beta^\prime}^2)\right)
+ \sum_{h=1}^{(n-1)}\left(\frac{1}{n} \sum_{t=h+1}^{n}\textrm{Cov}(e_{t,\beta^\prime}^2,
e_{t-h, \beta^\prime}^2)\right)\right\}
\]
\[
K_{S}: = \underset{n\to\infty}\lim \left\{ \sum_{h=0}^{(n-1)}\left(\frac{1}{n}\sum_{t=1}^{n}
E (e_{t,\beta'} e_{t+h,\beta'}^{2}) x_{nt}\right) + \sum_{h=1}^{(n-1)}\left(\frac{1}{n} \sum_{t=h+1}^{n}E (e_{t,\beta'} e_{t-h,\beta'}^{2}) x_{nt}\right)\right\}
\]
\begin{thm} \label{Thm: score dist}
Under the assumptions of Theorem \ref{Thm: GLM asymptotics}, as $n\to \infty$,
$\sqrt{n}\left(S_{1,n}(\tilde\beta)- E(S_{1,n}(\beta'))\right)/\sigma_{S}\overset{d}\to N(0, 1)$.
\end{thm}
\subsection{An approximate mixture distribution for $\hat\beta$} \label{SSc: mixture}%
\begin{thm}[Mixture distribution under finite samples] \label{Thm: Prob mixdensity sigma=0}%
Assume $\tau_{0}>0$ and Conditions \ref{Cond: mt} to \ref{Cond: Ident and Const}. In finite samples, the distribution of $\sqrt{n}(\hat\beta-\beta_{0})$ can be approximated by the mixture
\[
\kappa F_{1}(c,\delta_0) + (1-\kappa)F_{2}(c,\delta_0),\quad \kappa=P(\hat\tau=0)
\]
in which $F_{1}(c,\delta_0)$ is the $r$-dimensional multivariate distribution obtained from
$\sqrt{n}(\hat\beta-\beta')$, namely the skew normal distribution of $U_{2,n}$ conditional on $U_{1,n} + \sqrt{n} E(S_{1,n}(\beta')) - 2 J_{S}^{{ \mathrm{\scriptscriptstyle T} }} U_{2,n}\le 0$, based on the joint normality of
$(U_{1,n}, U_{2,n})$ given in Theorem \ref{Thm: score dist}; and $F_{2}(c,\delta_0)$ is the $r$-dimensional skew normal distribution of $\sqrt{n}(\hat\beta-\beta_{0})$ conditional on $\hat\tau > 0$, based on the joint normality $N(0,\Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1})$ of Theorem
\ref{Thm: Consist&Asym Marginal Like Estimator}. Moreover, $\kappa\to0$ as $n\to\infty$. %
\end{thm}
Remarks
\begin{enumerate}
\item The skew normal distribution is defined in \cite{gupta2004multivariate}.
\item If $\tau_0=0$, then $\beta'=\beta_0$ and the value $\kappa=0.5$ in the above mixture is similar to that in \citet[Theorem I]{moran1971maximum}; when $\tau_{0}=a/\sqrt{n}$, $a\ge0$, the above results parallel those in \citet[Theorem IV]{moran1971maximum} and are based on the same reasoning. However, Moran's results are for independent observations, whereas ours require the serial dependence to be accounted for in the asymptotic results.
\item While the mixture provides a better theoretical description of the asymptotic distribution of the marginal likelihood estimates when $m$ is small, in practice the mixture distribution cannot be estimated without knowing the true values of $\beta_0$, $\tau_0$ and $\psi_0$. In simulations, the covariance matrix for the joint distribution of $\hat\beta$ and $\hat\tau$ is approximated by $\Sigma(\delta_0)=n^{-1}\Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1}$, and based on $F_2(c,\delta_0)$ we calculate (see also the sketch following this list)
\begin{equation}\label{eq: beta cond}%
E(\hat\beta-\beta_0|\hat\tau>0) = \Sigma_{\beta\tau}(\delta_0)\Sigma^{-1}_{\tau\tau}(\delta_0) E(\hat\tau-\tau_0|\hat\tau >0)
\end{equation}
\begin{equation}\label{eq: Vbeta cond}%
\mathrm{Var}(\hat\beta|\hat\tau>0) = \Sigma_{\beta\beta}(\delta_0) - \Sigma_{\beta\tau}(\delta_0) \Sigma^{-1}_{\tau\tau}(\delta_0)\Sigma_{\tau\beta}(\delta_0) + \Sigma_{\beta\tau}(\delta_0) \Sigma^{-2}_{\tau\tau}(\delta_0)\Sigma_{\tau\beta}(\delta_0)\mathrm{Var}(\hat\tau-\tau_0|\hat\tau>0)
\end{equation}
\end{enumerate}
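A small helper implementing \eqref{eq: beta cond} and \eqref{eq: Vbeta cond} (ours; the ordering of $\Sigma$ with the $\tau$ component last is our convention) is:
\begin{verbatim}
## Sketch: conditional mean shift and variance of beta-hat given
## tau-hat > 0. Sigma = Cov(beta-hat, tau-hat) with tau last;
## e_tau = E(tau-hat - tau_0 | tau-hat > 0), v_tau its variance analogue.
cond_beta <- function(Sigma, e_tau, v_tau) {
  r   <- nrow(Sigma) - 1
  Sbt <- Sigma[1:r, r + 1, drop = FALSE]    # Sigma_{beta tau}
  Stt <- Sigma[r + 1, r + 1]                # Sigma_{tau tau}
  list(mean_shift = drop(Sbt / Stt * e_tau),
       var = Sigma[1:r, 1:r] - tcrossprod(Sbt) / Stt +
             tcrossprod(Sbt) / Stt^2 * v_tau)
}
\end{verbatim}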
\section{Simulation Results} \label{Sec: Simulations}
In this section we summarize results of several simulation studies to illustrate the key theoretical results derived above as well as to indicate circumstances under which $P(\hat \tau=0)$ is large in which case the mixture distribution of Theorem \ref{Thm: Prob mixdensity sigma=0} would provide a more accurate description.
For all examples we consider the simple linear trend with latent process $W_{0,t} = \beta_1 + \beta_2 (t/n) + \alpha_t$, in which $\alpha_t$ is assumed to follow the AR(1) process $\alpha_t=\phi\alpha_{t-1} + \epsilon_t$, $\epsilon_t\overset{i.i.d}\sim N(0,\sigma^2_{\epsilon})$, where $\sigma^2_{\epsilon}$ is chosen to maintain $\textrm{Var}(\alpha_t)=1$. In all cases the true values are $\beta_0 = (1,2)$ and $\tau_0=1$, and $\phi$ varies over the interval $(-1,1)$. While simple, this example provides substantial insight into the behaviour of the marginal likelihood estimates as well as into problems that can arise. Its simplicity also allows analytical calculation of key quantities and some heuristic explanation of the non-standard distributional results which can arise, particularly for binary time series.
In all simulations reported later, the number of replications was $10,000$. The marginal likelihood estimates were obtained using the \textbf{R} package \textsf{lme4}. The frequency with which $\hat \tau =0$ is not package dependent other than in occasional cases; this was checked using our own implementation based on adaptive Gaussian quadrature and by comparing the results with those from SAS PROC MIXED. The first simulation (Section \ref{Sec: Example 1 Sim}) focuses on binary responses and illustrates that the distribution of the marginal likelihood estimates $\hat\delta$ for this kind of data converges towards a mixture as proposed in Theorem \ref{Thm: Prob mixdensity sigma=0}, in which $P(\hat\tau=0)$ can be approximated to good accuracy using the result of Theorem \ref{Thm: score dist}. The second experiment (Section \ref{Sec: Example 2 Sim}) studies the finite sample performance of $\hat\delta$ for binomial cases and shows that $P(\hat\tau=0)$ vanishes as $m_{t}$ increases or as $n\to\infty$, so that the distribution of $\hat\delta$ is multivariate normal as developed in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}. Finally, the method for estimating the covariance matrix of $\hat\delta$ proposed in Section \ref{SSc: CovMat Est} is evaluated in Section \ref{Sec: Example 3 sim}.
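As an indication of how one cell of these experiments can be reproduced, the following skeleton (ours, with a much reduced replication count; the reported results use $10,000$ replications) estimates $P(\hat\tau=0)$ by Monte Carlo:
\begin{verbatim}
## Sketch: Monte Carlo estimate of P(tau-hat = 0) for one (n, m, phi).
library(lme4)
sim_ar1 <- function(n, phi) {               # stationary AR(1), Var = 1
  a <- numeric(n); a[1] <- rnorm(1)
  e <- rnorm(n, sd = sqrt(1 - phi^2))
  for (t in seq_len(n)[-1]) a[t] <- phi * a[t - 1] + e[t]
  a
}
p_tau_zero <- function(n, m, phi, nrep = 200) {
  u <- (1:n) / n; obs <- factor(1:n)
  zero <- logical(nrep)
  for (r in 1:nrep) {
    y <- rbinom(n, m, plogis(1 + 2 * u + sim_ar1(n, phi)))
    fit <- glmer(cbind(y, m - y) ~ u + (1 | obs),
                 family = binomial, nAGQ = 9)
    zero[r] <- as.numeric(VarCorr(fit)$obs) <= 1e-6
  }
  mean(zero)
}
p_tau_zero(200, 1, 0)  # binary case: roughly 45% (cf. the tables below)
\end{verbatim}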
In order to implement the simulations, in Section \ref{Sec: Analytical Exp Key Asym Quantities} we first derive theoretical expressions for key quantities used to define the large sample distributions of Theorems \ref{Thm: Consist&Asym Marginal Like Estimator}, \ref{Thm: GLM asymptotics} and \ref{Thm: score dist}, as well as for the approximations $\bar \kappa_1$ and $\bar \kappa_2$ to $P(\hat \tau = 0)$.
\subsection{Analytical Expressions for Asymptotic Quantities} \label{Sec: Analytical Exp Key Asym Quantities}
Key quantities required for the implementation and explanation of the simulation results to follow are:
\begin{enumerate}
\item The limit point $\beta'$ for the GLM estimate $\tilde \beta$.
\item The quantities $c_S$ and $\sigma_S$ appearing in Theorem \ref{Thm: score dist} and used to obtain the approximation $\bar \kappa_1$ for $P(\hat \tau = 0)$.
\item The asymptotic variance of $\hat \tau$ in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}, used to obtain the approximation $\bar \kappa_2$ for $P(\hat \tau = 0)$.
\item Various quantities defining the mixture distribution in Theorem \ref{Thm: Prob mixdensity sigma=0}.
\end{enumerate}
Throughout, the derivations are given for the case of deterministic regressors specified as $x_{nt} = h(t/n)$ for a suitably defined vector function $h$ as in Condition \ref{Cond: Reg Trend Type}a, in which the first component is unity in order to include an intercept term. Also, to reduce notational clutter, we assume $m_t\equiv m$ (the number of binomial trials is the same at all time points). The analytical expressions involve various integrals which are computed numerically, either with the \textbf{R} function \textsf{integrate} or by grid evaluation in $u$ with mesh $0.0001$ over the interval $[0,1]$. Calculation of $\Omega_{1,2}$ in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}, and of $K_S$, $V_S$ and $\Omega_2$ in Theorem \ref{Thm: score dist} needed for the variance $\sigma_{S}^2$ in the theoretical upper bound for $P(\hat\tau=0)$, requires evaluation of two-dimensional integrals of the form
\begin{equation*}
\mathrm{Cov}(\dot{l}_{t}(\delta_0), \dot{l}_{t+h}(\delta_0)) = \sum_{y_t=0}^{m_{t}} \sum_{y_{t+h}=0}^{m_{t+h}}\pi^0(y_t,y_{t+h})\dot{l}_{t}(\delta_0)\dot{l}_{t+h}(\delta_0).
\end{equation*}
For these, the integral expression for $\pi^0(y_{t}, y_{t+h})$ is approximated using adaptive Gaussian quadrature (AGQ) with 9 nodes in each of the two dimensions.
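A plain (non-adaptive) Gauss--Hermite product rule conveys the flavour of this calculation; the sketch below (ours, using the \textsf{statmod} package for the nodes) evaluates $\pi^0(y_t,y_{t+h})$ for a latent Gaussian pair with variance $\tau$ and correlation $\rho=R(h;\psi_0)$:
\begin{verbatim}
## Sketch: pi^0(y1, y2) by a 9-point Gauss-Hermite product rule.
library(statmod)
gh <- gauss.quad(9, kind = "hermite")       # weight function exp(-x^2)
pi0_pair <- function(y1, y2, eta1, eta2, m, tau, rho) {
  out <- 0
  for (i in 1:9) for (j in 1:9) {
    z1 <- sqrt(2) * gh$nodes[i]             # (z1, z2) iid N(0,1) after
    z2 <- sqrt(2) * gh$nodes[j]             # the change of variables
    a1 <- sqrt(tau) * z1                    # Cholesky: Corr(a1,a2) = rho
    a2 <- sqrt(tau) * (rho * z1 + sqrt(1 - rho^2) * z2)
    out <- out + gh$weights[i] * gh$weights[j] / pi *
      dbinom(y1, m, plogis(eta1 + a1)) * dbinom(y2, m, plogis(eta2 + a2))
  }
  out
}
\end{verbatim}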
\subsubsection{Limit Point of GLM Estimation}\label{SSSc: Lim GLM Est}%
By numerically solving the non-linear system \eqref{eq: betaprime equation for 2a} with Newton--Raphson iteration, the limiting value of the GLM estimates is found to be $\beta'=(0.8206,1.7574)$.
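A sketch of this computation (ours; for simplicity it minimises the squared norm of the moment equations with \textsf{optim} rather than coding Newton--Raphson explicitly) is:
\begin{verbatim}
## Sketch: solving for beta' with h(u) = (1, u), beta_0 = (1, 2),
## tau_0 = 1; pbar(eta) = E{ b-dot(eta + alpha) }, alpha ~ N(0, tau).
pbar <- function(eta, tau = 1)
  sapply(eta, function(e)
    integrate(function(z) plogis(e + sqrt(tau) * z) * dnorm(z),
              -Inf, Inf)$value)
moment_eq <- function(b) {
  d <- function(u) pbar(1 + 2 * u) - plogis(b[1] + b[2] * u)
  c(integrate(d, 0, 1)$value,
    integrate(function(u) d(u) * u, 0, 1)$value)
}
bp <- optim(c(1, 2), function(b) sum(moment_eq(b)^2))$par
round(bp, 4)   # should be close to (0.8206, 1.7574)
\end{verbatim}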
\subsubsection{Quantities needed for $\bar \kappa_1$}
The analytical expression for the limiting expectation of the scaled score with respect to $\tau$, evaluated at the limiting point $\beta'$, is
\begin{align}\label{eq: Expect Covg}%
c_{S}: =& \underset{n\to \infty} \lim E(S_{1,n}(\beta^\prime)) \nonumber\\
= & \underset{n\to\infty}\lim \frac{1}{n}\sum_{t=1}^n \left[ m(m-1) \int \left(\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_{0}+\alpha_t) - \dot{b}(x_{nt}^{T}\beta^\prime) \right)^{2}g(\alpha_{t},\tau_{0})d\alpha_t +
m(\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime) - \pi^0(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta_0))(2\dot{b}(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime) -1) \right] \nonumber\\
= & m(m-1)\int_{0}^{1}\int \left(\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_{0}+\alpha) - \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime) \right)^{2}g(\alpha,\tau_{0})d\alpha du\nonumber \\
+ & m\int_{0}^{1}(\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime) - \pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)) (2\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime) -1)du \nonumber\\
= & m(m-1)c_{1}+ mc_{2}.
\end{align}
Note that $c_{1}$ in \eqref{eq: Expect Covg} is strictly positive but makes no contribution for binary responses ($m=1$), in which case $c_{2}$ is the only term contributing to $c_S$. We have observed in the simulations that $c_2$ is non-negative, although we do not have a general proof of this; in that case $c_S$ is also non-negative for all $m$. We have also observed in the simulations that $c_1$ is substantially larger than $c_2$, and as a result $\bar \kappa_1$ is small for non-binary responses ($m>1$) but can be large for binary responses because $c_{2}\approx 0$.
Recall that $\sigma_{S}^2= \underset{n\to\infty}\lim \mathrm{Var}(\sqrt{n}S_{1,n}(\tilde\beta))$. For the case where the latent process is i.i.d.\ we have
\begin{align}\label{eq: Var Covg}%
\sigma_{S}^2= & \int_{0}^{1} m\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)(1-\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0))\left[1+ 2(m-3)\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)(1-\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0))\right] \nonumber \\
+ & 4m^3\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)(1-\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0))(\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0) - \dot{b}(h(u)^T\beta'))^2 \nonumber \\
+ & 4 m^2\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)(1-\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0))(1-2\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)) (\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)-\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta'))du \nonumber \\
- & 2 J_S^T\Omega_{1}^{-1} K_S + J_S^T \Omega_1^{-1}\Omega_{2}\Omega_{1}^{-1} J_S
\end{align}
in which
\begin{align*}
K_S = & \int_{0}^{1} \left[ m\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)(1-\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)) (1-2\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)) \right. \\
+ & \left. 2 m^2\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)(1-\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0))(\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0) - \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta'))\right] h(u)du\\
J_S = & \int_{0}^{1} \left[ 2 m^2(\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0) - \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime)) \ddot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime) + m b^{(3)}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\prime)\right] h(u)du
\end{align*}
A similar expression for $\sigma_{S}^2$ can be obtained in the dependent case where $\psi \ne 0$. However, serial dependence does not appear to make much difference to the chance of $\hat\tau=0$, at least in the simulations of Section \ref{Sec: Example 2 Sim}.
For the simulation example, the expressions \eqref{eq: Expect Covg} and \eqref{eq: Var Covg} can be evaluated using numerical integration to give:
\begin{table}[h]\centering
\begin{tabular}{lccc}
\multicolumn{4}{l}{$c_{1}=0.0168$, $c_{2}=1.65\times 10^{-5}$}\\ \hline
& $c_{S}$ & $\sigma_S$ & $c_{S}/\sigma_{S}$ \\ \hline
$m=1$ & $1.65\times 10^{-5}$ & $7.17\times 10^{-3}$ & 0.0023 \\
$m=2$ & 0.034 & 0.303 & 0.1110 \\
$m=3$ & 0.101 & 0.525 & 0.1921\\ \hline
\end{tabular}
\end{table}
Since $c_{S}$ is observed to be strictly positive, $\sqrt{n}c_{S}\to \infty$ as $n\to \infty$, and by Theorem \ref{Thm: score dist} the probability of $S_{1,n}(\tilde\beta)\le 0$ vanishes as $n\to \infty$. Because $P(\hat\tau=0)$ is bounded above by $P(S_{1,n}(\tilde\beta)\le0)$, the marginal likelihood estimate will satisfy $\hat\tau=0$ with vanishing probability as $n\to\infty$. However, notice that for binary data $c_S=c_2$ and $c_S/\sigma_S = 0.0023$. Hence, even for the largest sample size ($n=5000$) reported below, $\bar \kappa_1$ is $44\%$; it would require a sample size of order $10^6$ to reduce this to $1\%$. Clearly this has substantial implications for the use of marginal likelihood with binary data. For binomial responses the $c_1$ term dominates, and hence even for small values of $m>1$ the chance of $\hat \tau =0$ decreases rapidly.
We conclude this subsection with a heuristic explanation of why $c_2 \approx 0$. If $h(u)^{ \mathrm{\scriptscriptstyle T} } \beta'$ appearing in the definition of $c_2$ is such that $\dot b (h(u)^{ \mathrm{\scriptscriptstyle T} } \beta')$ is well approximated linearly in $h(u)^{ \mathrm{\scriptscriptstyle T} } \beta'$ then $c_2$ could be approximated by
\begin{equation}
c_2^*=\int_0^1 [\dot b(h(u)^{{ \mathrm{\scriptscriptstyle T} }} \beta')-\pi^0(h(u)^{ \mathrm{\scriptscriptstyle T} } \beta_0)](a_0+a_1 h(u)^{ \mathrm{\scriptscriptstyle T} } \beta')du.
\end{equation}
But, because the first element of $h(u)$ is equal to $1$, $a_0+a_1 h(u)^{ \mathrm{\scriptscriptstyle T} } \beta'$ can be rewritten as $h(u)^{ \mathrm{\scriptscriptstyle T} } \beta^{*}$ for some vector $\beta^*$. Then $c_2^*$ can be written as
\begin{equation}
c_2^*=\int_0^1 [\dot b(h(u)^{ \mathrm{\scriptscriptstyle T} }\beta')-\pi^0(h(u)^{ \mathrm{\scriptscriptstyle T} } \beta_0)]h(u)^{ \mathrm{\scriptscriptstyle T} } du\beta^* = 0
\end{equation}
by the definition of $\beta'$ in \eqref{eq: betaprime equation for 2a}. Note that $\dot b(x)$ is the probability of success for a logit response, and hence if $x$ ranges over moderately large values then $\dot b (x)$ may be nearly linear. In the example used for the simulations, $h(u)^{ \mathrm{\scriptscriptstyle T} } \beta'$ ranges over the interval $(0.8206, 2.578)$, and $\dot b (x)$ for $x$ in this interval is well approximated by a straight line in $x$.
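This near-linearity is easily checked numerically (an illustrative two-line check, ours):
\begin{verbatim}
## How close is b-dot (= plogis) to linear over (0.8206, 2.578)?
x <- seq(0.8206, 2.578, length.out = 200)
max(abs(resid(lm(plogis(x) ~ x))))          # small => c_2 approx 0
\end{verbatim}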
\subsubsection{Asymptotic Covariance matrix for marginal estimates} \label{SSc: asym CovMat}%
Although positive definite, the matrix $\Omega_{1,1}$ in the asymptotic covariance of Theorem
\ref{Thm: Consist&Asym Marginal Like Estimator} can be near singular, and this results in an overall covariance matrix for $\sqrt{n}(\hat \delta - \delta_0)$ with very large elements; in particular, the variance $\sigma_{\tau}^2(\delta_0)$ of $\hat \tau$ in \eqref{eq: kappa2} is very large and, as a result, $\bar \kappa_2$ given in \eqref{eq: kappa2} will also be close to $50\%$. The reason for this is analyzed in this section by calculating various components of the asymptotic covariance for the marginal estimates. To keep the discussion manageable, deterministic regressors, assumed to be generated by functions $h(\cdot)$ satisfying Condition \ref{Cond: Reg Trend Type}a, are used for the derivations. In this case, the summation on the left of \eqref{eq: PD InfMat} has limit given by the integral
\begin{equation}\label{eq: PD InfMat Lim}%
\Omega_{1,1} = \int_{0}^{1} \sum_{y=0}^{m}f(y|h(u),\delta_0)\dot{l}(y|h(u);\delta_0) \dot{l}^{{ \mathrm{\scriptscriptstyle T} }}(y|h(u);\delta_0)du
\end{equation}
in which
\begin{equation*}
\dot{l}(y|h(u);\delta) = f^{-1}(y|h(u),\delta)\begin{pmatrix}\int (y - m \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta + \sqrt{\tau}z))f(y|h(u),z,\delta)\phi(z)dz \cdot h(u)\\
\int \left[(y- m \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta +\sqrt{\tau}z))^2-m\ddot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta + \sqrt{\tau}z) \right] f(y|h(u),z,\delta)\phi(z)dz /2
\end{pmatrix}
\end{equation*}
where the first dimension corresponds to $\partial l(y|h(u);\delta)/\partial \beta$ and the second dimension corresponds to $\partial l(y|h(u);\delta)/\partial \tau$.
For binary responses, using the facts that $m\equiv 1$ and $y^2=y$, the integrand in $\partial l(y|h(u);\delta)/\partial \tau$ satisfies
\begin{equation}\label{eq: Trans tau binary}%
(y- m \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta + \sqrt{\tau}z))^2-m\ddot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta + \sqrt{\tau}z) =
(y-\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta+\sqrt{\tau}z))(1-2\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta+ \sqrt{\tau}z)).
\end{equation}
Let $W=h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta+\sqrt{\tau}z$, and note that for binary responses $(y-\dot{b}(W))f(y|h(u),z,\delta) =
\ddot{b}(W)$ if $y=1$ and $-\ddot{b}(W)$ if $y=0$. Define the conditional density $\rho(z) = \ddot{b}(W)\phi(z)/\int \ddot{b}(W) \phi(z)dz$. Then, if $h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta$ appearing in $\dot{l}(y|h(u);\delta)$ is such that the linear approximation
\begin{equation}\label{eq: d1 tau cond}%
\frac{\partial l(y|h(u);\delta)/\partial \tau}{\int \ddot{b}(W)\phi(z)dz } = \int (1-2\dot{b}(W))\rho(z)dz\approx a_0^\ast + a_1^\ast (h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta) = h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta^\ast
\end{equation}
holds to good accuracy, then for each fixed $u\in [0,1]$ and $\delta$, $\dot{l}(y|h(u);\delta)$ is a nearly linearly dependent vector. Consequently $\Omega_{1,1}$ becomes nearly singular, with a large inverse. In this example $h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0$ takes all values in the interval $(1,3)$, over which the left side of
\eqref{eq: d1 tau cond} is approximately a straight line; hence, in view of the above discussion, $\Omega_{1,1}$ has eigenvalues $(0.129,6.784\times 10^{-3}, 1.903\times10^{-6})$, with $\sigma_{\tau}^2(\delta_0)=698$.
For binomial data, $m > 1$ and $y^2 \ne y$, so the left side of \eqref{eq: Trans tau binary} is no longer a near linear function of $h(u)$. As a result $\Omega_{1,1}$ is not nearly singular and $\sigma_{\tau}^2(\delta_0)$ is of moderate size.
\subsubsection{Quantities needed for mixture distribution}
To assess the accuracy of the asymptotic mixture distribution, the theoretical mean vector and covariance matrix of the distributions $F_1(\cdot,\delta_0)$ and $F_2(\cdot,\delta_0)$ are required. For $F_1(\cdot,\delta_0)$ these are approximately those of the normal distribution in Theorem \ref{Thm: GLM asymptotics} for the GLM estimates. The mean is $\beta' = (0.82, 1.76)$, given in Section \ref{SSSc: Lim GLM Est}. The covariance matrix
\begin{equation*}
\Omega_{1}^{-1}\Omega_{2}\Omega_{1}^{-1} = \begin{pmatrix}
23.636 & -39.8212\\
-39.8212 & 98.13 \end{pmatrix}
\end{equation*}
is calculated using $\tau_0=1$ and $\beta'$ in
\begin{equation*}
\Omega_{1}=\int_{0}^{1} \ddot{b}(h^{T}(u)\beta')h(u)h^{T}(u)du
\end{equation*}
and
\begin{equation*}%
\Omega_{2} = \int_{0}^{1} \left[\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0)- 2\pi^0(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0) \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta') + \dot{b}^2(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta')\right]h(u)h(u)^{{ \mathrm{\scriptscriptstyle T} }}du.
\end{equation*}
For $F_{2}(\cdot,\delta_0)$, the mean is evaluated with the conditional mean \eqref{eq: beta cond} and the covariance matrix with \eqref{eq: Vbeta cond}. In this example $\Sigma(\delta_0)= n^{-1}\Omega_{1,1}^{-1}$, where $\Omega_{1,1}$ is given above. For binary data, $\mathrm{E}(\hat\tau-\tau_0|\hat\tau>0)$ and $\mathrm{Var}(\hat\tau-\tau_0|\hat\tau>0)$ are obtained empirically using $\hat\Omega_{1,1}$ as follows:
\begin{align*}
\hat\Omega_{1,1}= & \frac{1}{n}\sum_{t=1}^n\left\{ \dot{l}_t(\delta^\ast)\dot{l}_t^T(\delta^\ast) + f(y_t|x_{nt};\delta^\ast)^{-1}\int (y_t-m_t\dot{b}(W_t^\ast))
\begin{pmatrix} 0 & 0\\
0 & \frac{1}{4\tau^\ast}
\end{pmatrix}
f(y_t|x_{nt},z_t,\delta^\ast)\phi(z)dz \right. \\
- & \left. f(y_t|x_{nt};\delta^\ast)^{-1}\int \left[ (y_t-m_t\dot{b}(W_t^\ast))^2 - m_t\ddot{b}(W_t^\ast)\right]\binom{x_{nt}}{\frac{z_t}{2\sqrt{\tau^\ast}}}
(x_{nt}^T,\frac{z_t}{2\sqrt{\tau^\ast}})f(y_t|x_{nt},z_t,\delta^\ast)\phi(z)dz \right\}
\end{align*}
where $\Vert \delta^\ast -\hat\delta\Vert\le \Vert\hat\delta-\delta_0\Vert$, but in finite samples $\delta^\ast\ne \delta_0$. Note that although for binary data $\dot{l}_{t}(\delta)$ can be almost linearly dependent, as analyzed above, the last term in $\hat\Omega_{1,1}$ is nontrivial and nonsingular for $\delta^\ast\ne \delta_{0}$, which allows $\hat\Omega_{1,1}$ to be nonsingular. As a result a large difference between the theoretical and empirical covariance is observed for binary data, as will be shown in Example 2 of the simulations.
\subsection{Example 1: binary data, independent latent process} \label{Sec: Example 1 Sim}
This example considers the simplest case of independent observations obtained when $\tau_0=1$ and $\phi_0=0$. For each replication, an independent binary sequence of $Y_t|\alpha_t,x_{nt}\sim B(1,p_t)$, $p_t=1/(1+\exp(-W_{0,t}))$, is generated. Table \ref{tb: conv mixture coef} reports the empirical values $\hat\kappa_{1}$ (the empirical proportion of $S_{1,n}(\tilde\beta)\le 10^{-6}$) and $\hat\kappa_{2}$ (the empirical proportion of $\hat\tau\le 10^{-6}$). Also shown are the empirical mean and standard deviation (in parentheses) of $\hat\beta$ conditional on $\hat\tau=0$ and $\hat\tau>0$ obtained from the simulations along with the theoretical values of these obtained from \eqref{eq: kappa 1}, \eqref{eq: kappa2}, and \eqref{eq: beta cond}, \eqref{eq: Vbeta cond} associated with Theorem \ref{Thm: Prob mixdensity sigma=0}.
Table \ref{tb: conv mixture coef} clearly demonstrates that for this data generating mechanism there is a very high proportion of replicates for which $\hat \tau = 0$, and that this proportion does not decrease rapidly as the sample size increases. This is as predicted by the theory, and the theoretical values $\bar \kappa_1$ and $\bar \kappa_2$ provide good approximations to $\hat \kappa_1$ and $\hat \kappa_2$. As explained above, this high proportion of zero estimates of $\tau$, even for large sample sizes, is as expected for the regression structure used in this simulation. Note that $\bar\kappa_{1}\le \bar\kappa_{2}$ and $\bar\kappa_{1}$ is closer to the probability of $\hat\tau=0$; the empirical estimates reverse the theoretical ordering $\kappa_{1}\ge \kappa_{2}$. For binary data, $P(\hat\tau=0)$ and both theoretical approximations $\bar{\kappa}_1$ and $\bar{\kappa}_2$ decrease slowly, and a very large sample is required to attain $P(\hat\tau=0)\approx 0$.
It is also clear from Table \ref{tb: conv mixture coef} that the empirical mean and standard deviation of $\hat\beta|\hat\tau=0$ and $\hat\beta|\hat\tau>0$ show good agreement with the theoretical results predicted by Theorem \ref{Thm: Prob mixdensity sigma=0}. Overall, the theory we derive for $P(\hat\tau=0)$ and the use of a mixture distribution for $\hat\beta$ is quite accurate for all sample sizes, and with relatively large samples the mixture is the better representation. However, estimation of the component distributions in the mixture requires $\beta_0$, $\tau_0$ and $\psi_0$ and therefore cannot be implemented in practice.
\begin{table}[ptb]\centering
\begin{tabular}{l|llll|l*{4}{c}r}\hline
& \multicolumn{4}{c}{Theoretical} & \multicolumn{5}{c}{Empirical from simulations}\\
& $\hat\beta|\hat\tau=0$ & $\hat\beta|\hat\tau>0$ & $\bar\kappa_1$ & $\bar\kappa_2$ & $\hat\beta|\hat\tau=0$ & $\hat\beta|\hat\tau>0$ & $\hat\kappa_{1}$ & $\hat\kappa_{2}$ &\\ \hline
\multirow{2}{*}{$n=200$} & 0.82(0.344)&1.10(0.421)& \multirow{2}{*}{48.70\%}& \multirow{2}{*}{49.20\%} & 0.83(0.354)& 1.07(0.431) & \multirow{2}{*}{45.56\%}&\multirow{2}{*}{45.54\%}\\
& 1.76(0.700)&2.15(0.819)& & &1.79(0.731) & 2.26(1.020) \\ \hline
\multirow{2}{*}{$n=500$}&0.82(0.217)&1.05(0.269)& \multirow{2}{*}{47.95\%} & \multirow{2}{*}{48.72\%} & 0.82(0.219) &1.04(0.261) &\multirow{2}{*}{46.27\%} &\multirow{2}{*}{46.26\%}\\
&1.76(0.443)&2.07(0.521) & & & 1.78(0.451)& 2.10(0.621)\\ \hline
\multirow{2}{*}{$n=10^3$}&0.82(0.154)& 1.02(0.193)& \multirow{2}{*}{47.10\%} & \multirow{2}{*}{48.20\%}& 0.82(0.156)& 1.01(0.182) &\multirow{2}{*}{45.52\%}
&\multirow{2}{*}{45.51\%} \\
&1.76(0.313) & 2.03(0.371)& & & 1.77(0.315)& 2.06(0.415)\\ \hline
\multirow{2}{*}{$n=5\cdot10^3$}& 0.82(0.069)&0.97(0.093)& \multirow{2}{*}{43.54\%} & \multirow{2}{*}{45.96\%}& 0.82(0.069) &0.96(0.089) &\multirow{2}{*}{42.93\%}
&\multirow{2}{*}{42.93\%}\\
& 1.76(0.140)&1.95(0.174)& & &1.76(0.139)& 1.96(0.184) \\ \hline
\multicolumn{4}{l}{\small Standard deviation in ``()''}
\end{tabular}
\caption{Mixture distribution of marginal likelihood estimates for binary independent time series} \label{tb: conv mixture coef}%
\end{table}
\subsection{Example 2: binomial data, correlated latent process}
\label{Sec: Example 2 Sim}
This simulation investigates the bias and standard deviation of the marginal estimates for $m= 1, 2, 3$, $n = 200, 500$ and a range of serial dependence given by $\phi = -0.8, -0.2, 0, 0.2, 0.8$. The observed standard deviation of the estimates over the replications is compared to that given by the asymptotic covariance matrix in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}. We also give the empirical proportion of the event $\hat \tau = 0$, computed as the proportion of replicates with $\hat\tau\le10^{-6}$, together with the theoretical upper bound approximated by $\bar{\kappa}_1$ defined in \eqref{eq: kappa 1}.
Table \ref{tb: asym MGLMM binomial} summarizes the results. For binomial series ($m\ge 2$), the empirical values are in good agreement with the asymptotic mean and standard deviation, with the bias of the estimates generally improving as $m$ or $n$ increases. Also observed for binomial series is that, both theoretically and empirically, the probability of $\hat\tau=0$ decreases quickly as either the number of trials $m$ or the sample size $n$ increases. However, for binary responses, large asymptotic standard deviations are obtained for the reason explained in Section
\ref{SSc: asym CovMat}. For binary data the probability of $\hat\tau=0$ is close to $50\%$, and although theoretically this ($\bar\kappa_1$) converges to zero as $n\to\infty$, it does so slowly. This can be explained by Theorem \ref{Thm: score dist}. Under the settings of this example, using AGQ, $\sigma_{S}$ is calculated to vary from $0.72\times10^{-2}$ to $1.55\times10^{-2}$, and the values of $c_{1}$ and $c_{2}$ are the same as those in Example 1; hence, for binary responses, $\sqrt{n}c_{S}/\sigma_S$ is at most $2\sqrt{n}\times 10^{-3}$ across the range of autocorrelation considered here. Thus the large values of $\bar\kappa_{1}=\Phi(-\sqrt{n}c_{S}/\sigma_{S})$ across the range of autocorrelations are to be expected for this regression structure. However, for binomial series with, for instance, $m=2$, $n=200$ and $\phi=0.8$, $\sigma_S\approx 0.351$ and $\sqrt{n}c_{S}$ is dominated by $\sqrt{n}m(m-1)c_{1}=0.475$, which explains why $P(\hat\tau=0)$ decreases rapidly for $m>1$ and increasing $n$.
Interestingly, for binary series, as $n$ increases from 200 to 500 the bias of $\hat\tau$ worsens, which seems somewhat counterintuitive. A plausible explanation is that the distribution of $\hat\tau$ is a mixture with weights $P(\hat\tau=0)$, which is approximately $45\%$ across sample sizes and levels of serial dependence, and $P(\hat\tau>0)$. When $n=200$ the conditional distribution of $\hat\tau|\hat\tau>0$ has larger variance than when $n=500$, resulting in an inflated overall mean for $n=200$ relative to $n=500$.
In summary, the theoretical upper bound $\bar\kappa_1$ is above or close to the empirical proportion of $\hat\tau=0$, a pattern consistent with Theorem
\ref{Thm: score dist}. For binomial series ($m \ge 2$) the marginal estimates have good bias properties and standard deviations explained by the large sample distribution of Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}, and these conclusions are not severely impacted by the level or direction of serial dependence. For binary series, the high proportion of $\hat \tau= 0$ persists regardless of whether $n=200$ or $500$ and of the level of serial dependence, and this is explained by the theory presented above.
\begin{table}[ptb]\small\centering
\begin{tabular}[c]{l*{12}{c}r}
\multicolumn{10}{l}{$n=200$} \\ \hline
&&\multicolumn{3}{c}{$m=1$} & \multicolumn{3}{c}{$m=2$} & \multicolumn{3}{c}{$m=3$}\\
$\phi$ & & Mean & SD & ASD & Mean & SD & ASD & Mean & SD &ASD \\ \hline
\multirow{4}{*}{0.8} & $\hat\beta_1$&0.984&0.617&10.22&0.994&0.525&0.516&0.991&0.484&0.483\\
& $\hat\beta_2$&2.102&1.207&15.19&2.046&0.975&0.942&2.023&0.890&0.878\\
&$\hat\tau$&0.983&1.177&65.20&0.960&0.790&0.821&0.909&0.541&0.557\\
&$\hat\kappa(\bar\kappa_1)$ & \multicolumn{3}{c}{45.41\%(49.44\%)} &\multicolumn{3}{c}{11.06\%(8.85\%)} & \multicolumn{3}{c}{2.52\%(3.00\%)}\\ \hline
\multirow{4}{*}{0.2} & $\hat\beta_1$ &0.953&0.433&7.90&0.996&0.346&0.342&0.992&0.293&0.292\\
&$\hat\beta_2$&2.071&0.949&11.84&2.047&0.667&0.648&2.027&0.563&0.553\\
&$\hat\tau$&0.898&1.026&50.55&1.052&0.793&0.790&0.993&0.518&0.510\\
&$\hat\kappa(\bar\kappa_1)$& \multicolumn{3}{c}{45.08\%(49.38\%)}&
\multicolumn{3}{c}{8.08\%(7.15\%)} & \multicolumn{3}{c}{0.86\%(1.30\%)}\\ \hline
\multirow{4}{*}{0} & $\hat\beta_1$ &0.955&0.414&7.71&1.007&0.331&0.327&1.001&0.278&0.273 \\
& $\hat\beta_2$ &2.048&0.926&11.56&2.032&0.634&0.623&2.013&0.527&0.523\\
& $\hat\tau$ &0.869&1.015&49.35&1.063&0.786&0.789&1.003&0.512&0.509 \\
&$\hat\kappa(\bar\kappa_1)$& \multicolumn{3}{c}{46.72\%(49.37\%)} & \multicolumn{3}{c}{7.25\%(7.08\%)} & \multicolumn{3}{c}{0.78\%(1.24\%)}\\ \hline
\multirow{4}{*}{-0.2}& $\hat\beta_1$ &0.958&0.408&7.58&1.003&0.317&0.316&0.999&0.261&0.261\\
& $\hat\beta_2$&2.049&0.913&11.39&2.036&0.620&0.606&2.014&0.511&0.504\\
& $\hat\tau$ &0.880&1.017&48.57&1.059&0.794&0.789&1.004&0.516&0.510\\
&$\hat\kappa(\bar\kappa_1)$& \multicolumn{3}{c}{45.78\%(49.37\%)} & \multicolumn{3}{c}{7.27\%(7.07\%)} &\multicolumn{3}{c}{0.8\%(1.23\%)} \\ \hline
\multirow{4}{*}{-0.8} &$\hat\beta_1$ &0.953&0.405&7.53&0.995&0.306&0.303&0.992&0.247&0.245\\
& $\hat\beta_2$ &2.045&0.932&11.34&2.038&0.623&0.603&2.023&0.512&0.500\\
& $\hat\tau$ &0.863&1.006&48.30&1.047&0.794&0.812&1.002&0.547&0.542\\
& $\hat\kappa(\bar\kappa_1)$ & \multicolumn{3}{c}{46.01\%(49.40\%)}& \multicolumn{3}{c}{7.92\%(7.88\%)} & \multicolumn{3}{c}{1.43\%(1.91\%)} \\ \hline
&\\
\multicolumn{10}{l}{$n=500$} \\ \hline
&& \multicolumn{3}{c}{$m=1$} & \multicolumn{3}{c}{$m=2$} &\multicolumn{3}{c}{$m=3$} \\
$\phi$ & & Mean & SD & ASD & Mean & SD & ASD & Mean & SD &ASD \\ \hline
\multirow{4}{*}{0.8} & $\hat\beta_1$ &0.948&0.373&6.58&0.998&0.335&0.330&0.998&0.311&0.309\\
&$\hat\beta_2$&1.992&0.737&9.77&2.013&0.615&0.604&2.001&0.573&0.563\\
&$\hat\tau$&0.785&0.866&41.96&0.974&0.515&0.519&0.956&0.350&0.352\\
&$\hat\kappa(\bar\kappa_1)$ & \multicolumn{3}{c}{44.77\%(49.36\%)}& \multicolumn{3}{c}{1.45\%(1.68\%)} & \multicolumn{3}{c}{0.07\%(0.14\%)}\\ \hline
\multirow{4}{*}{0.2} & $\hat\beta_1$ &0.936&0.273&5.00&1.000&0.217&0.216&1.002&0.182&0.184\\
&$\hat\beta_2$ &1.953&0.589&7.50&2.012&0.412&0.410&2.002&0.349&0.350\\
&$\hat\tau$ & 0.694&0.772&31.99&1.011&0.492&0.499&0.998&0.325&0.322\\
&$\hat\kappa(\bar\kappa_1)$ & \multicolumn{3}{c}{46.18\%(49.32\%)} & \multicolumn{3}{c}{0.93\%(1.06\%)} & \multicolumn{3}{c}{0\%(0.02\%)}\\ \hline
\multirow{4}{*}{0} &$\hat\beta_1$ &0.938&0.263&4.87&0.997&0.205&0.206&0.999&0.173&0.172\\
&$\hat\beta_2$&1.959&0.572&7.32&2.011&0.395&0.394&2.005&0.333&0.331\\
&$\hat\tau$&0.708&0.767&31.21&1.011&0.502&0.499&0.996&0.321&0.322\\
&$\hat\kappa(\bar\kappa_1)$& \multicolumn{3}{c}{45.18\%(49.32\%)} & \multicolumn{3}{c}{1.04\%(1.04\%)} & \multicolumn{3}{c}{0\%(0.017\%)}\\ \hline
\multirow{4}{*}{-0.2} &$\hat\beta_1$ &0.934&0.257&4.79&1.000&0.202&0.200&0.996&0.164&0.164\\
&$\hat\beta_2$ &1.957&0.555&7.21&2.013&0.389&0.383&2.010&0.315&0.318\\
&$\hat\tau$ &0.689&0.763&30.71&1.014&0.493&0.499&0.998&0.329&0.322\\
&$\hat\kappa(\bar\kappa_1)$&\multicolumn{3}{c}{46.26\%(49.32\%)} &
\multicolumn{3}{c}{0.7\%(1.04\%)} &\multicolumn{3}{c}{0.01\%(0.017\%)}\\ \hline
\multirow{4}{*}{-0.8} &$\hat\beta_1$ &0.929&0.253&4.76&0.996&0.190&0.191&0.996&0.156&0.155\\
& $\hat\beta_2$ &1.962&0.570&7.18&2.012&0.386&0.381&2.011&0.319&0.317\\
& $\hat\tau$ &0.681&0.768&30.57&1.008&0.509&0.513&0.997&0.344&0.343\\
&$\hat\kappa(\bar\kappa_1)$& \multicolumn{3}{c}{47.10\%(49.34\%)} & \multicolumn{3}{c}{1.01\%(1.31\%)} & \multicolumn{3}{c}{0.02\%(0.05\%)}\\ \hline
\end{tabular}
\caption{Marginal likelihood estimates for binomial observations under various values of $\phi$; the true values are $\beta^0_1=1$, $\beta^0_2=2$ and $\tau_0=1$.} \label{tb: asym MGLMM binomial}%
\end{table}
\subsection{Example 3: Estimate of Covariance Matrix}\label{Sec: Example 3 sim}
The subsampling method of estimating the covariance of the marginal likelihood estimates is of limited practical value for binary data for two reasons. Firstly, when $\hat\tau=0$, which occurs nearly 50\% of the time, the method does not provide estimates of the covariance matrix in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}. Secondly, because of the high proportion of $\hat\tau=0$, $\hat\delta$ has the mixture distribution given in Theorem \ref{Thm: Prob mixdensity sigma=0}, the covariance of which requires $\beta'$ and $\delta_0$, both of which are unknown and cannot be estimated from a single sequence.
In the binomial case ($m \ge 2$) the subsampling method is likely to be useful for a range of serial dependence.
Table \ref{tb: cov binom subsampling 2} presents simulation results under various levels of serial correlation. The table summarizes the estimates of the standard deviation of $\hat\delta$ using the subsampling method described in Section \ref{SSc: CovMat Est}. The column ``$\textrm{ASD}$" contains the asymptotic standard deviation calculated with the covariance matrix in Theorem \ref{Thm: Consist&Asym Marginal Like Estimator} and the column ``$\textrm{SD}$" contains the empirical standard deviation. The table shows that the subsampling estimates of the standard deviations are of the same magnitude as the theoretical standard deviations even for the moderate sample size $n=200$, but are biased downwards, increasingly so as $C$ increases. Values of $C=1,2$ provide the least biased estimates of the standard errors for both sample sizes. The downward bias is greater for large positive values of $\phi$, as might be expected.
\begin{table}[ptb]\centering
\begin{tabular}[c]{l*{8}{c}r}
\multicolumn{2}{l}{\small $m=2$, $n=200$} \\\hline
&& \textrm{ASD} & \textrm{SD} & $k_{n}=5$ & $k_{n}=11$ & $k_{n}=23$ &$k_{n}=46$ \\
\multirow{3}{*}{$\phi=0.8$} & $\hat\beta_1$ & 0.515 & 0.525 & 0.392 & 0.406 & 0.387 &0.325\\
& $\hat\beta_2$ & 0.942 & 0.975 & 0.734 & 0.755& 0.719 & 0.606\\
& $\hat\tau$ & 0.821 & 0.790 & 0.824& 0.811& 0.784 & 0.733\\ \hline
\multirow{3}{*}{$\phi=0.2$} & $\hat\beta_1$ & 0.342 & 0.346 & 0.333&0.316 &
0.284&0.235\\
& $\hat\beta_2$ & 0.648 & 0.667& 0.637 & 0.605 & 0.545& 0.447\\
& $\hat\tau$ & 0.790 & 0.793 &0.829& 0.814 & 0.786& 0.729\\ \hline
\multirow{3}{*}{$\phi=-0.2$} & $\hat\beta_1$ & 0.316 & 0.322&0.313&0.296&0.265 &0.221\\
& $\hat\beta_2$ & 0.606 & 0.616&0.604& 0.570 &0.511& 0.420\\
& $\hat\tau$ & 0.789 & 0.785& 0.831& 0.816& 0.786& 0.729\\ \hline
\multirow{3}{*}{$\phi=-0.8$} &$\hat\beta_1$& 0.303 & 0.305&0.300&0.283&0.255&0.214\\
&$\hat\beta_2$ & 0.603 & 0.619& 0.592 & 0.562& 0.508& 0.419\\
&$\hat\tau$& 0.812 & 0.796 & 0.842& 0.834 & 0.808& 0.750\\ \hline
\\
\multicolumn{2}{l}{\small $m=2$, $n=500$} \\\hline
&& \textrm{ASD} & \textrm{SD} & $k_{n}=7$ & $k_{n}=14$ & $k_{n}=28$ &$k_{n}=56$ \\
\multirow{3}{*}{$\phi=0.8$} & $\hat\beta_1$ & 0.330&0.330& 0.261 & 0.275 & 0.271 & 0.242\\
&$\hat\beta_2$ & 0.604 &0.607& 0.487 & 0.510& 0.501& 0.451\\
&$\hat\tau$&0.519 & 0.507&0.506 &0.501 &0.490 &0.469\\ \hline
\multirow{3}{*}{$\phi=0.2$} & $\hat\beta_1$ &0.216 & 0.217 &0.210&0.204 &0.192 & 0.169\\
&$\hat\beta_2$ & 0.410 &0.418 & 0.400&0.388 & 0.364& 0.321\\
&$\hat\tau$& 0.499 &0.493 & 0.503&0.497&0.486&0.465\\ \hline
\multirow{3}{*}{$\phi=-0.2$} & $\hat\beta_1$ & 0.199 & 0.199 & 0.197& 0.190 &0.179 &0.159\\
& $\hat\beta_2$ &0.383 & 0.387 & 0.378& 0.365 &0.343&0.303 \\
& $\hat\tau$ &0.499 & 0.503&0.505& 0.500& 0.490&0.469\\ \hline
\multirow{3}{*}{$\phi=-0.8$} &$\hat\beta_1$& 0.191 & 0.191 & 0.188 & 0.182& 0.171& 0.152\\
&$\hat\beta_2$& 0.381 & 0.387 & 0.372& 0.361 &0.340 & 0.302\\
&$\hat\tau$&0.513&0.513&0.514 &0.511&0.502&0.481\\ \hline
\end{tabular}
\caption{Subsampling estimates for standard deviation of GLMM estimation, $k_{n}=C[n^{1/3}]$, $C=1,2,4,8$.}
\label{tb: cov binom subsampling 2}%
\end{table}
\section{Alternative to Marginal Likelihood Estimation} \label{Sec: MGLM est}%
We close with a discussion of the alternative approach to binary time series regression modelling proposed by \cite{wu2014parameter}. Their modified GLM (MGLM) method replaces $\exp(x_{nt}^{ \mathrm{\scriptscriptstyle T} }\beta)/(1+\exp(x_{nt}^{ \mathrm{\scriptscriptstyle T} }\beta))$ in the GLM log-likelihood \eqref{eq: loglikglm} with a function $\pi(x_{nt}^{ \mathrm{\scriptscriptstyle T} }\beta)$ representing the marginal mean, to arrive at the objective function
\begin{equation} \label{eqn: MGLM objective function}
l_2(\beta) =\sum_{t=1}^n \left[ y_t \log \pi(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta) + (1-y_t)\log (1- \pi(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta)) \right], \quad \pi(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta)= \int \frac{e^{x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta+\alpha_t}}{1+e^{x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta+\alpha_t}} g(\alpha_t) d\alpha_t.
\end{equation}
The MGLM estimate $\hat\beta_2$ is found by iterating two steps, starting with the GLM estimate of $\beta$: step 1, estimate the curve $\pi(u)=\int e^{u+\alpha}/(1+e^{u+\alpha}) g(\alpha)d\alpha$ non-parametrically on $u\in \mathbb{R}$; step 2, maximize $l_2$ with respect to $\beta$ based on the estimate of $\pi(u)$ obtained in the first step. Steps 1 and 2 are repeated until the maximum value of $l_2$ is reached, and the last update of $\beta$ is then taken as the MGLM estimator. Implementation details are provided in \cite{wu2014parameter}.
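To make the two-step scheme concrete, the following is a minimal sketch of one possible implementation in Python. The use of isotonic regression in step 1, together with the function names, tolerance and clipping constants, are our own illustrative choices and do not reproduce the implementation of \cite{wu2014parameter}; in particular, the derivative constraint $\dot\pi(u)\in[0,0.25]$ discussed below is not enforced here.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from sklearn.isotonic import IsotonicRegression

def mglm_fit(y, X, beta_init, max_iter=20, tol=1e-6):
    """Two-step MGLM iteration for a binary series y (0/1), regressors X."""
    beta = np.asarray(beta_init, dtype=float)
    ll_old = -np.inf
    for _ in range(max_iter):
        u = X @ beta
        # Step 1: nonparametric estimate of pi(u). Isotonic regression
        # enforces monotonicity; y_min/y_max keep the estimate inside (0,1).
        iso = IsotonicRegression(y_min=1e-4, y_max=1 - 1e-4,
                                 out_of_bounds="clip").fit(u, y)
        # Step 2: maximize l2(beta) holding the estimated pi(.) fixed.
        def neg_l2(b):
            p = iso.predict(X @ b)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        res = minimize(neg_l2, beta, method="Nelder-Mead")
        beta = res.x
        if abs(-res.fun - ll_old) < tol:   # stop when l2 stabilizes
            break
        ll_old = -res.fun
    return beta
\end{verbatim}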
In defining their method, \cite{wu2014parameter} do not require that $\pi(u)=\int e^{u+\alpha}/(1+e^{u+\alpha}) g(\alpha)d\alpha$ for any distribution $g$ of the latent process. Hence it is not required that $\pi(u)$ be non-negative and strictly increasing in $u$. However, their main theorem concerning consistency and asymptotic normality of $\hat \beta_2$ is stated in terms of this latent process specification. For such specifications, application of their non-parametric method for estimating $\pi(u)$ requires additional constraints which are not currently implemented. For example, taking the first derivative with respect to $u$ gives $\dot \pi(u)= \int e^{u+\alpha}/(1+e^{u+\alpha})^2 g(\alpha)d\alpha\le \pi(u)(1-\pi(u))$, so $\dot\pi(u)\in [0,0.25]$ by an application of Jensen's inequality. When applied to the Cambridge-Oxford Boat Race time series, the non-parametric estimate of $\pi(u)$ is not monotonic and produces marginal estimates, at values of $u$ between the gaps in the observed values of $x_{nt}^{ \mathrm{\scriptscriptstyle T} } \hat\beta_2$, which are zero and therefore not useful for prediction at new values of the linear predictor.
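For completeness, the bound can be derived in two short steps, writing $\dot b(w)=e^w/(1+e^w)$ for the inverse logit so that $\ddot b(w)=\dot b(w)(1-\dot b(w))$:
\begin{equation*}
\dot\pi(u)=\int \ddot{b}(u+\alpha)g(\alpha)d\alpha = E\left[\dot b(1-\dot b)\right] \le E(\dot b)\left(1-E(\dot b)\right)=\pi(u)(1-\pi(u))\le \tfrac14,
\end{equation*}
where the first inequality applies Jensen's inequality to the concave map $w\mapsto w(1-w)$ and the last uses $w(1-w)\le 1/4$ on $[0,1]$.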
Although not implemented in the R-code of \cite{wu2014parameter}, this constraint as well as that of monotonicity can be enforced in the nonparametric estimation of $\pi(u)$ using an alternative local linearization to that used in \cite{wu2014parameter}. For example, with the constraints of monotonicity and $\dot \pi(u)\in [0,0.25]$, different estimates $\hat\beta_1=0.2093$, $\hat\beta_2=0.1899$ (compared to $\hat\beta_1=0.237$, $\hat\beta_2=0.168$ in \cite{wu2014parameter}) are observed for the model for the Cambridge-Oxford Boat Race series that they analyse. In this example, the marginal likelihood estimates give $\hat\tau=0$ and hence $\hat \beta$ degenerates to the GLM estimate, which differs from that of \cite{wu2014parameter}. Somehow the MGLM method (with or without monotonicity constraints) avoids the degeneracy issue that arises with the marginal estimation method proposed in this paper. This needs to be further understood.
While MGLM is computationally much more intensive than using standard GLMM methods to obtain the marginal estimates, it appears to avoid degeneracy, for reasons that are not fully understood at this stage. It is also not clear to what extent MGLM, with or without the proper constraints implied by a latent process specification, avoids the high proportion of degenerate estimates observed with GLMM for binary data. Moreover, the extent to which MGLM reproduces the correct curve for the marginal probabilities $\pi(u)$ when the true data generating mechanism is defined in terms of a latent process (parameter driven specification) has not been investigated. The extent to which the MGLM estimate of $\pi(u)$ differs from the curve defined by a latent process specification might form the basis for a non-parametric test of the distribution of the latent process: when $\{\alpha_t\}$ is not Gaussian, the true curve of $\pi(u)$ is not the same as that evaluated under GLMM fits, and the estimates $\hat\beta_2$ and $\hat\beta$ may differ.
\section{Discussion}\label{Sec: discussion}
To overcome the inconsistency of GLM estimates of the regression parameters in parameter driven binomial time series models we have proposed the use of marginal likelihood estimation, which can be easily conducted using generalized linear mixed model fitting packages. We have shown that the estimates of the regression parameters and the latent process variation obtained from this method are consistent and asymptotically normal even if the observations are serially dependent. The distribution of the marginal estimates is required for a score test of serial dependence in the latent process, something which we will report on elsewhere. The asymptotic results and proofs thereof have assumed that the latent process is Gaussian, which has helped streamline the presentation. This is not required for any of the results except Lemma 2 (asymptotic identifiability for the binary case), which relies directly on the normal distribution. The proofs can be readily modified provided we assume that the moment generating function $m_{\alpha_t}(u)$ of $\alpha_t$ is finite for all $u < \sqrt{d_2}$, where $\tau < d_2$ defines the parameter space.
The structure of the model considered here is such that the theoretical results apply to other response distributions such as the Poisson and negative binomial with very little change in the proofs of theorems. GLM estimation in these cases is consistent and asymptotically normal regardless of serial dependence in the latent process. The same will be true of the use of marginal estimation with the advantage that the latent process variability is also estimated. While we have not yet shown this, we expect that for these other response distributions the use of marginal estimation will lead to more efficient estimates of the regression parameters.
For all response distributions and for moderate sample sizes the marginal estimation method can result in a non-zero probability of $\hat \tau = 0$. As we have observed in simulations, and explained via theoretical asymptotic arguments, this is particularly problematic for binary responses, for which very high `pile-up' probabilities for $\hat \tau$ are observed (and expected from theory). We have observed that for binomial data ($m>1$) the `pile-up' probability quickly decreases to zero and, as a result of this observation, we anticipate that this probability will not typically be large for Poisson and negative binomial responses.
For binary data we have developed a useful upper bound approximation to this probability and subsequently proposed an improved mixture distribution for $\hat\beta$ in finite samples. These theoretical derivations are well supported by the simulations presented. While this mixture distribution cannot be used based on a single time series, it nonetheless provides useful insights into the sampling properties of marginal estimation for binary time series. Additionally, the derivations suggest that regression models in which $x^{{ \mathrm{\scriptscriptstyle T} }} \beta$ varies over an interval on which the inverse logit function is approximately linear will be particularly prone to the `pile-up' problem, and that this persists even when there is strong serial dependence. Practitioners should apply the marginal likelihood method with caution in such situations.
\section{Acknowledgements}
We thank Dr Wu and Dr Cui for providing us with the R-code for their application of the MGLM method to the Boat Race Data reported in \cite{wu2014parameter}.
\section{Appendix: A} \label{Sec: Pf: A}
\begin{proof}[Proof of Lemma \ref{lem: Identifiable Binom}] \label{Pf: lem: Identifiable Binom}
We consider the deterministic regressors only in this proof but the same arguments can be extended to stochastic regressors. Now $M$ is the largest value of $m$ for which $k_M>0$, so $Q(\delta) = Q(\delta_0)$ if and only if \begin{equation*}
\int_0^1 \sum_{j=0}^M \pi^0(j,h(u))\left(\log \pi(j,h(u)) - \log \pi^0(j,h(u))
\right) du = 0.
\end{equation*}
Since the integrand is non-positive, the integral can be zero if and only if the integrand is zero almost everywhere. Hence, for a contradiction, assume there exists $\delta\ne \delta_0$ such that
\begin{equation*}
\sum_{j=0}^M \pi^0(j,h(u))\left(\log \pi(j,h(u))- \log \pi^0(j,h(u))\right) = 0,
\quad \forall u \in [0,1].
\end{equation*}
This can only happen if $\pi^0(j,h(u))=\pi(j,h(u))$ for all $j=0,\ldots,M$. Since
\[
\pi(j,h(u))=\int {{M}\choose {j}}\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta+\sqrt{\tau} z)^j (1-\dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta+\sqrt{\tau} z))^{M-j}\phi(z)dz
\]
it is straightforward to show, by iterating from $j=0, \ldots, M$, that $\pi^0(j,h(u))=\pi(j,h(u))$ for all $j=0,\ldots,M$ is equivalent to
\begin{equation}\label{eq: Binom identity Spec}%
\int \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta+\sqrt{\tau} z)^j\phi(z)dz = \int \dot{b}(h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+\sqrt{\tau_0} z)^j\phi(z)dz, \quad j=1,\ldots,M,
\end{equation}
for any $u\in [0,1]$.
We next show that the only way this can hold is if $\delta=\delta_0$. Fix $u$ and denote $a = h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta$, $a_0=h(u)^{{ \mathrm{\scriptscriptstyle T} }}\beta_0$, $\sigma=\sqrt{\tau}$ and $\sigma_0=\sqrt{\tau_0}$, and define
\[
d_j(a,\sigma) = E\left[\dot{b}(a+\sigma z)^j\right] - E\left[\dot{b}(a_0+\sigma_0 z)^j\right],
\quad j=1,\ldots,M.
\]
where expectation is with respect to the density $\phi(\cdot)$. Hence \eqref{eq: Binom identity Spec} is equivalent to $d_1(a,\sigma) = d_2(a,\sigma) = \cdots = d_M(a,\sigma) = 0$, and trivially $d_j(a_0,\sigma_0)=0$ for $j=1,\ldots,M$. Write $\eta_0 = (a_0,\sigma_0)$ and $\eta=(a,\sigma)$. If there exists $\eta\ne \eta_0$ such that \eqref{eq: Binom identity Spec} holds, then there is $\eta^\ast$ with $\Vert\eta^\ast-\eta_0\Vert \le \Vert\eta-\eta_0\Vert$ such that
\begin{equation}\label{Pf: lem: eq: JacMat}%
\begin{pmatrix}
d_1(a,\sigma) \\
d_2(a,\sigma) \\
\vdots\\
d_M(a,\sigma)
\end{pmatrix} = %
\begin{bmatrix}
E \left[\dot{b}(\eta^\ast)^{0}\ddot{b}(\eta^\ast)\right] & E \left[ \dot{b}(\eta^\ast)^{0}\ddot{b}(\eta^\ast)z\right]\\
2 E \left[\dot{b}(\eta^\ast)^{1}\ddot{b}(\eta^\ast)\right] & 2 E \left[ \dot{b}(\eta^\ast)^{1}\ddot{b}(\eta^\ast)z \right]\\
\vdots & \vdots\\
M E \left[ \dot{b}(\eta^\ast)^{M-1}\ddot{b}(\eta^\ast)\right] &
M E \left[ \dot{b}(\eta^\ast)^{M-1}\ddot{b}(\eta^\ast)z\right]
\end{bmatrix}
\begin{pmatrix}
v_1 \\
v_2
\end{pmatrix} = J(\eta^\ast)\binom{v_1}{v_2}
\end{equation}
where $v_1=a-a_0$ and $v_2=\sigma-\sigma_0$ cannot both be zero when $\eta\ne \eta_0$. Since the left-hand side of \eqref{Pf: lem: eq: JacMat} is zero when \eqref{eq: Binom identity Spec} holds, a nonzero solution $(v_1,v_2)$ can exist only if the matrix $J(\eta^\ast)$ fails to be of full rank. But $J(\eta^\ast)$ is not of full rank if and only if the ratios of the second column to the first column are the same for all $j=1,\ldots, M$. However, we now show that this ratio strictly increases as $j$ increases. Since $\dot{b}(\cdot)$ and $\ddot{b}(\cdot)$ are non-negative functions we can define probability densities
\[
g_j(z) = \frac{\dot{b}(a^\ast +\sigma^\ast z)^{j-1}\ddot{b}(a^\ast+\sigma^\ast z) \phi(z)}{\int \dot{b}(a^\ast +\sigma^\ast z)^{j-1}\ddot{b}(a^\ast+\sigma^\ast z) \phi(z)d z}, \quad j=1, \ldots, M
\]
so that
\begin{equation*}
\frac{E\left[\dot{b}(\eta^\ast)^{j}\ddot{b}(\eta^\ast)z\right]}{E\left[ \dot{b}(\eta^\ast)^{j}\ddot{b}(\eta^\ast)\right]} = \frac{\int z \dot{b}(a^\ast+
\sigma^\ast z) g_j(z)d z}{\int \dot{b}(a^\ast+\sigma^\ast z) g_j(z)d z} = \frac{E_{g_j}(z \dot{b}(a^\ast+\sigma^\ast z))}{ E_{g_j}(\dot{b}(a^\ast+\sigma^\ast z))}
\end{equation*}
where $E_{g_j}()$ denotes expectation with respect to $g_j$.
But since $\dot{b}(a^\ast+\sigma^\ast z)$ is an increasing function of $z$, $z$ and $\dot{b}(a^\ast+\sigma^\ast z)$ are positively correlated under $g_j$, that is, $E_{g_j}(z\dot{b}(a^\ast+\sigma^\ast z)) > E_{g_j}(z)E_{g_j}(\dot{b}(a^\ast+\sigma^\ast z))$. Therefore
\begin{equation*}
E_{g_{j}}(z)< E_{g_{j}}(z \dot{b}(a^\ast+\sigma^\ast z))/E_{g_{j}}(\dot{b}(a^\ast+\sigma^\ast z)),
\end{equation*}
and it follows that
\begin{equation*}
\frac{E\left[\dot{b}(\eta^\ast)^{j-1}\ddot{b}(\eta^\ast)z\right]}{E\left[ \dot{b}(\eta^\ast)^{j-1}\ddot{b}(\eta^\ast)\right]} < \frac{E\left[ \dot{b}(\eta^\ast)^{j}\ddot{b}(\eta^\ast)z\right]}{E\left[ \dot{b}(\eta^\ast)^{j}\ddot{b}(\eta^\ast)\right]}, \quad j=1,\ldots, M.
\end{equation*}
Hence, when \eqref{eq: Binom identity Spec} holds, \eqref{Pf: lem: eq: JacMat} has the unique solution $(0,0)$ for $(v_1,v_2)$, which contradicts the assumption that $\eta\ne \eta_0$. Thus \eqref{eq: Binom identity Spec} holds if and only if $\eta=\eta_0$, which implies $a=a_0$ and $\sigma=\sigma_0$. By Condition \ref{Cond: Reg Full Rank}, we can conclude that $a=a_0$ for all $u$ implies $\beta=\beta_0$. Therefore Condition \ref{Cond: Ident and Const} holds for $M\ge 2$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem: Identifiable Binary}]\label{Pf: lem: Identifiable Binary}%
This proof uses the Fourier transform method of \cite{wang2003matching}. Assume there exists $\delta\ne \delta_0$ such that, for all $x\in \mathbb{X}$,
\begin{equation*}
\pi^0(1)=\int \dot{b}(x^{{ \mathrm{\scriptscriptstyle T} }}\beta_0 + \sigma_0 z)\phi(z)dz = \int \dot{b}(x^{{ \mathrm{\scriptscriptstyle T} }}\beta+ \sigma z)
\phi(z)dz =\pi(1), \quad \sigma=\sqrt{\tau}.
\end{equation*}
For connected $\mathbb{X}$, the first derivatives with respect to $x$ of both sides are also equal, that is,
\begin{equation}\label{eq: pi0 d1 to pi d1}%
\int \ddot{b}(x^{{ \mathrm{\scriptscriptstyle T} }}\beta_0+\sigma_0 z)\phi(z)dz \cdot \beta_0 = \int \ddot{b}(x^{{ \mathrm{\scriptscriptstyle T} }}\beta+
\sigma z)\phi(z)dz \cdot \beta
\end{equation}
which implies there exists a constant $c_1>0$ such that $c_1= \beta_{k}/\beta_{0,k}$ for $k=1,\ldots,r$. With $\eta=x^{{ \mathrm{\scriptscriptstyle T} }}\beta$ and $c_2=\sigma/\sigma_0 >0$, \eqref{eq: pi0 d1 to pi d1} can be rewritten in terms of convolutions as
\begin{equation*}
\int g(u)h(\eta-u)du = c_1 \int g(u)h(c_1\eta-c_2 u)du; \quad
h(x)= \frac{e^{x}}{(1+e^{x})^2}, \quad g(x)=\frac{1}{\sigma_0\sqrt{2\pi}}\exp(-\frac{x^2}{2\sigma_0^2}).
\end{equation*}
Let $G(s)$ be the Fourier transform of the normal density $g(\cdot)$ and $H(s)$ the Fourier transform of the logistic density $h(\cdot)$. Using the fact that the Fourier transform of a convolution is the product of the Fourier transforms of the individual functions,
\begin{align*}
G(s)H(s)&=\int \left(\int c_1 \phi(u)h(c_1\eta-c_2 u)du\right) e^{-i\eta s}d\eta \\
&= c_1 \int \phi(u) \left(\int h(c_1 \eta - c_2 u) e^{-i\eta s}d\eta\right)du \\
&= c_1 \int \phi(u) \left(\int h(c_1 \eta - c_2 u) e^{-i(c_1\eta - c_2 u)(s/c_1)}
d(c_1\eta - c_2 u)\right)|c_1|^{-1}e^{-iu(c_2s/c_1)}du\\
&= \int \phi(u) e^{-iu(c_2s/c_1)}du H(s/c_1) = \frac{c_1}{|c_1|}G(c_2s/c_1)H(s/c_1),
\quad c_1>0.
\end{align*}
It follows that $G(s)H(s)= G(c_2s/c_1)H(s/c_1)$, $\forall s\in \mathbb{R}$. The Fourier transform of the mean-zero normal density is $G(s) = \exp(-\frac{1}{2}\sigma_0^2 s^2)$, and the Fourier transform of the logistic density is $H(s)= 2\pi s/(e^{\pi s}- e^{-\pi s})$; thus for any $s\ne 0$,
\begin{equation*}
\exp(-\frac{1}{2}\sigma_0^2 s^2)\frac{1}{\sinh(\pi s)} = \exp(-\frac{1}{2}\sigma_0^2 s^2 \left(\frac{c_2}{c_1}\right)^2)\frac{1}{c_1\sinh(\pi s/c_1)}
\end{equation*}
For any fixed $c_1\ne 1$, $c_2$ can be expressed as a function of $s$ that is not constant over $s$, which contradicts the definition of $c_2$ as a constant. Hence the equality holds if and only if $c_1=1$ and $c_2=1$.
\end{proof}
\section{Appendix: B} \label{Sec: Pf: B}
\begin{proof}[Proof of Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}]
\label{Pf: Thm: Consist&Asym Marginal Like Estimator}
This proof is presented in three steps: first, we show, for any $\delta \in \Theta$, that $E(Q_n(\delta))$ defined in \eqref{eqn: E Qn delta} converges to $Q(\delta)$ defined in \eqref{eqn: lim Q delta Cond 2a} and \eqref{eqn: lim Q delta Cond 2b} under Condition 2a and 2b respectively; second, that $Q_n(\delta)- E(Q_n(\delta))\overset{\textrm{a.s}}{\to} 0 $ where $Q_n(\delta)$ is defined in \eqref{eqn: Qn delta} from which, using compactness of the parameter space, it follows that $\hat\delta\overset{\textrm{a.s.}}\rightarrow \delta_0$; third, that
$\sqrt{n}(\hat\delta -\delta_0)\overset{\textrm{d}}\to N(0,\Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1})$.
\noindent \textit{Proof that} $E(Q_n(\delta)) \to Q(\delta)$: Using Jensen's inequality multiple times, we have
\begin{align}\label{eq: Min ln pi}
& - \sum_{j=0}^{m_t}\pi_{t}^0(j)\ln \pi_{t}(j) \le\left(\sum_{j=0}^{m_t}\pi_{t}^0(j)\right) \left(\sum_{j=0}^{m_t}(-\ln \pi_{t}(j)) \right) = - \sum_{j=0}^{m_t}\ln \pi_{t}(j) \nonumber\\
\le & - \sum_{j=0}^{m_t} \left(j(x_{t}^T\beta) - m_t (\ln 2 + \max(x_{t}^T\beta + \tau/2,0)) + c(j) \right) \nonumber \\
\le & \sum_{j=0}^{m_t} \left[ m_t(\ln 2 + \tau/2)+ m_t\vert x_{t}^T\beta\vert - c(j)\right] < m_t(1+m_t)\left(\ln 2 + \tau/2 + \vert x_{t}^T\beta\vert\right)
\end{align}
Then, conditional on $m_t$ and $x_{t}$, $\sum_{j=0}^{m_t}\pi_{t}^0(j)\ln \pi_{t}(j)$ is bounded for all $t$ and $\delta$. Under Condition 2a, the regressor $x_{nt}:=h(t/n)$ is nonrandom, as is the marginal density $\pi_{nt}$. The strong law of large numbers for mixing processes \citep{mcleish1975maximal} applied to $\{m_t\}$ then gives
\begin{equation*}
\lim_{n\to\infty}\frac{1}{n}\sum_{t=1}^n \sum_{j=0}^{m_t}\pi_{nt}^0(j)\log \pi_{nt}(j) =
\lim_{n \to \infty} \frac{1}{n}\sum_{t=1}^n E\left[ \sum_{j=0}^{m_t} \pi_{nt}^0(j)\log \pi_{nt}(j)\right]=Q(\delta)
\end{equation*}
defined in \eqref{eqn: lim Q delta Cond 2a}.
For Condition 2b the ergodic properties of the stationary processes $m_t$ and $X_t$ can be used to establish
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n}\sum_{t=1}^n \sum_{j=0}^{m_t}\pi_t^0(j)\log \pi_t(j)= \lim_{n\to\infty}\sum_{m=1}^M \frac{n_m}{n} \left(\frac{1}{n_m}\sum_{\{t:m_{t}=m\}}\sum_{j=0}^m \pi_{t}^0(j)\log \pi_{t}(j)\right)=
Q(\delta)
\end{equation*}
defined in \eqref{eqn: lim Q delta Cond 2b}.
\bigskip
\noindent \textit{Consistency}: We write \eqref{eqn: Qn delta} as $Q_n(\delta) = n^{-1} \sum_{t=1}^n q_t(\delta)$ where $q_{t}(\delta) = \log f(y_{t}|x_{nt},\delta)$. By \citet[Proposition 1]{blais2000limit}, $\{q_t(\delta)\}$ is strongly mixing for any $\delta \in \Theta$. To apply the strong law of large numbers for mixing processes in \cite{mcleish1975maximal} we need to show that there exists $\lambda \geq 0$ such that
\begin{eqnarray*}
\sum_{t=1} ^{\infty} \|q_{t}(\delta) - E q_{t}(\delta)\|_{2+\lambda}^{2}/t^{2} < \infty
\end{eqnarray*}
where $\|\cdot\|_p$ denotes the $L^p$ norm. By Minkowski's inequality and H\"{o}lder's inequality,
\begin{equation*}
\left\Vert q_{t}(\delta) - E q_{t}(\delta) \right\Vert_{2+\lambda} \le 2\left\Vert q_{t}(\delta)\right\Vert_{2+\lambda}
\end{equation*}
Using derivations similar to those in \eqref{eq: Min ln pi},
$|q_{t}(\delta)| \le \vert - m_t |x_{nt}^T\beta| + c(y_t) - m_t(\ln 2+\tau/2) \vert$
where $m_t$ is bounded. Hence $\sum_{t=1}^\infty \Vert q_t(\delta)\Vert_{2+\lambda}^2/t^2< \infty$ follows if $\sum_{t=1}^\infty \vert x_{nt}^{{ \mathrm{\scriptscriptstyle T} }} \beta\vert^2/t^2<\infty$. Under Condition 2a, $\{x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta\}$ is bounded for any given $\beta$ and $\sum_{t=1}^\infty 1/t^2< 2$, so the result follows. Under Condition 2b, we have $E\Vert x\Vert^2 <\infty$ and, given any $\beta$, for all $\varepsilon > 0$,
\begin{equation*}
P\left(\sum_{t=1}^n \vert x_t^{{ \mathrm{\scriptscriptstyle T} }}\beta\vert^2/t^2 \ge \emph{K}\right) \le (2/\emph{K})
E\vert x^{{ \mathrm{\scriptscriptstyle T} }}\beta\vert^2 \le \varepsilon, \quad \textit{ if } \quad \emph{K}\ge 2E\vert x^{{ \mathrm{\scriptscriptstyle T} }}\beta\vert^2/\varepsilon.
\end{equation*}
Then $n^{-1}\sum_{t=1}^{n} \left[q_{t}(\delta) - E q_{t}(\delta)\right] \overset{a.s.} \rightarrow 0$ for any $\delta \in \Theta$. Together with the first part of the proof given above we now have $Q_n(\delta) \overset{\textrm{a.s}}{\to} Q(\delta)$. Since $\Theta$ is a compact set, and $Q_{n}(\delta)$ is a continuous function of $\delta$ for all $n$, by
\citet[Theorem 3.3]{gallant1988unified}, $\hat{\delta}: = \arg \underset{\Theta}\max~Q_n(\delta)\overset{a.s.} \rightarrow \delta_{0}.$
\bigskip
\noindent \textit{Asymptotic Normality}: Using a Taylor expansion,
\begin{equation*}
\sqrt{n}(\hat\delta - \delta_0)= -\left(\frac{1}{n}\sum_{t=1}^n \ddot{l}_t(\delta^\ast)\right) ^{-1} \frac{1}{\sqrt{n}}\sum_{t=1}^n \dot{l}_t(\delta_0), \quad \delta^\ast\overset{a.s.}\to \delta_0
\end{equation*}
The asymptotic normality of $\sqrt{n}(\hat\delta-\delta_0)$ can be obtained if
\[ -\frac{1}{n}\sum_{t=1}^n \ddot{l}_t(\delta^\ast) \overset{p}\to \Omega_{1,1} \quad \textrm{and} \quad \frac{1}{\sqrt{n}}\sum_{t=1}^n \dot{l}_t(\delta_0)\overset{d}\to N(0,\Omega_{1,2}). \]
Conditional on $m_t$ and $x_{nt}$, $\{\ddot{l}_t(\delta)\}$ is strongly mixing. Then by Chebyshev's inequality and \citet[Theorem 17.2.3]{ibragimov1971independent}, $\exists \epsilon>0$ such that
\[
\frac{1}{n}\sum_{t=1}^n \ddot{l}_t(\delta)- \frac{1}{n}\sum_{t=1}^n E(\ddot{l}_t(\delta)) \overset{p}\to 0, \quad \textrm{for all } \Vert \delta-\delta_0\Vert \le \epsilon.
\]
Since $\delta^\ast\overset{a.s.}\to \delta_0$ and $\ddot{l}_{t}(\cdot)$ is continuous in $\delta$, $n^{-1}\sum_{t=1}^n \ddot{l}_{t}(\delta^\ast) - n^{-1}\sum_{t=1}^n \ddot{l}_t(\delta_0)\overset{a.s.}\to 0$, and $n^{-1}\sum_{t=1}^n \ddot{l}_t(\delta_0) - n^{-1}\sum_{t=1}^n E(\ddot{l}_t(\delta_0)) \overset{\textrm{a.s.}}{\to} 0$. Now $E(\ddot{l}_t(\delta_0))= -E(\dot{l}_t(\delta_0)\dot{l}_t^{{ \mathrm{\scriptscriptstyle T} }}(\delta_0))$, and hence it follows that $-n^{-1}\sum_{t=1}^n \ddot{l}_t(\delta^\ast) \overset{p}\to \Omega_{1,1}$.
Next we show that $\Omega_{1,1}$ is positive definite. Let $s=(s_{1},s_{2})$ be an $(r+1)$-dimensional constant vector with, without loss of generality, $s^{{ \mathrm{\scriptscriptstyle T} }} s= 1$. Define $q_{t}(\delta_0)=s^{{ \mathrm{\scriptscriptstyle T} }}\dot{l}_t(\delta_0)$, and note that $\det(\Omega_{1,1})\ge 0$, with $\det(\Omega_{1,1})=0$ only if $E\left(q_t^2(\delta_0)\vert m_t, x_{nt} \right)=0$ for all $t$.
Then under Condition \ref{Cond: Reg Full Rank}, $\Omega_{1,1}$ is positive definite. The limit $\Omega_{1,1}$ under Condition \ref{Cond: Reg Trend Type}a is given in \eqref{eq: PD InfMat Lim}. For stationary regressors such a limit can be obtained using the ergodic theorem.
Next we show $\Omega_{1,2}$ exists. Note
\begin{equation*}
\Omega_{1,2}=\underset{n\to\infty}\lim \mathrm{Var}\left(\frac{1}{\sqrt{n}}\sum_{t=1}^n q_{t}(\delta_0) \right) = \sum_{h=0}^{n-1}\left(\frac{1}{n}\sum_{t=1}^{n-h} \mathrm{Cov}(q_t(\delta_0), q_{t+h}(\delta_0))\right) + \sum_{h=1}^{n-1}\left(\frac{1}{n}\sum_{t=h+1}^n \mathrm{Cov}(q_{t}(\delta_0), q_{t-h}(\delta_0))\right)
\end{equation*}
then $\Omega_{1,2}$ exists if
\[
\underset{n\to\infty}\lim \sum_{h=0}^{n-1}\left(\frac{1}{n}\sum_{t=1}^{n-h} | \mathrm{Cov}(q_{t}(\delta_0), q_{t+h}(\delta_0)) | \right) < \infty, \quad \underset{n\to\infty}\lim \sum_{h=1}^{n-1} \left(\frac{1}{n}\sum_{t=h+1}^n | \mathrm{Cov}(q_{t}(\delta_0), q_{t-h}(\delta_0))|\right) < \infty.
\]
Since the $\{q_t(\delta_0)\}$ is strong mixing, by Theorem 17.3.2 in \cite{ibragimov1971independent},
\begin{equation*}
\sum_{h=0}^{n-1}\left(\frac{1}{n}\sum_{t=1}^{n-h} \left\vert \mathrm{Cov}(q_t(\delta_0), q_{t+h}(\delta_0)) \right\vert \right) \le 2 \sum_{h=0}^{n-1} \nu(h)^{\lambda/(2+\lambda)}W_h < \infty,
\end{equation*}
where \begin{equation*} W_h = \lim_{n\to\infty} \frac{1}{n}\sum_{t=1}^{n-h}\left[ 4+ 3(c_t c_{t+h}^{1+\lambda} + c_t^{1+\lambda}c_{t+h}) \right], \quad c_t\ge \|q_{t}(\delta_0)\|_{2+\lambda}. \end{equation*} Such a $c_t$ exists; for example, taking $\lambda=2$ and using the Cauchy-Schwarz inequality,
\begin{align*}
E\left( |q_{t}(\delta_0)| ^{4}|m_t,x_{nt}\right) & \le E\left\{ \int f(y_{t}|x_{nt},z_{t},\delta_0) \phi(z_{t})\left((y_{t}-m_t\dot{b}(W_{0,t}))(s_{1}^{{ \mathrm{\scriptscriptstyle T} }}x_{nt}+ s_{2}z_{t})\right)^{4} dz \cdot f^{-1}(y_{t}|x_{nt},\delta_0)\right\}\\
& = E\left[ m_t\ddot{b}(W_{0,t})(1+ (3m_t-6)\ddot{b}(W_{0,t}))(s_{1}^{{ \mathrm{\scriptscriptstyle T} }}x_{nt}+ s_{2}z_{t})^{4} | m_t, x_{nt}\right]
\end{align*}
Then by the CLT for mixing processes in \citet[Theorem 3.2]{davidson1992central} we have \newline $n^{-1/2}\sum_{t=1}^n \dot{l}_t(\delta_0)\overset{d}\to N(0,\Omega_{1,2})$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm: GLM asymptotics}] \label{Pf: Thm: GLM asymptotics}
Following the proof of \cite{davis2000autocorrelation} and \cite{wu2014parameter}, let $u = \sqrt{n}(\beta - \beta ^\prime)$ (note here we centre on $\beta'$ and not the true value $\beta_0$ as was done in these references). Then maximizing
\begin{equation*}
l_n(\beta)= \sum_{t=1}^n\left[ y_t(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta)- m_t b(x_{nt}^{{ \mathrm{\scriptscriptstyle T} }}\beta) + c(y_t)\right]
\end{equation*}
over $\beta$ is equivalent to minimizing $g_n(u)$ over $u$, where $g_{n}(u):= -l_{n}(\beta^\prime + u/\sqrt{n}) + l_{n}(\beta^{\prime})$. Write $g_{n}(u) = B_{n}(u) - A_{n}(u)$ where
\[
B_{n}(u): =\sum_{t=1} ^{n} m_{t} \left( b(x_{t}^{T}\beta^\prime + x_t^T u/\sqrt{n})- b(x_{t}^{T}\beta^{\prime}) - \dot{b}(x_{t}^{T}\beta^{\prime})x_{t}^{T} u/\sqrt{n}\right)
\]
and
\[
A_{n}(u): = u^T \frac{1}{\sqrt{n}}\sum_{t=1}^{n}(y_{t} - m_{t}\dot{b}(x_{t}^{T}\beta^{\prime}))
x_{t} = u^T U_n.
\]
Using procedures similar to those in the proof of Theorem 1 in \cite{wu2014parameter}, it is straightforward to show that, for each $u$,
\[
B_{n}(u) \rightarrow \frac{1}{2}u^{T} \Omega_{1} u
\]
and
\[
E\left( e^{is^{T}U_{n}}\right) \rightarrow \exp\left[-\frac{1}{2}s^T\Omega_{2}s\right].
\]
Since $g_{n}(u)$ is a convex function of $u$ and $\hat{u}_n$ minimizes $g_n(u)$, an application of functional limit theory gives $\hat{u}_n\overset{d}\to \hat{u}$, where $\hat{u}$ minimizes the limiting process $g(u)$. In conclusion,
\begin{equation*}
g_n(u)\overset{d}\to g(u)= \frac{1}{2}u^T \Omega_1 u - u^T N(0,\Omega_2)
\end{equation*}
on the space $C(\mathbb{R}^r)$, and $\hat{u}_n\overset{d}\to \hat{u}$, where $\hat{u}\sim
N(0, \Omega_{1}^{-1}\Omega_{2}\Omega_{1}^{-1}).$
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm: score dist}] \label{Pf: Thm: score dist}%
Since $S_{1,n}(\tilde\beta)$ is a linear function of $U_{1,n}$ and $U_{2,n}$, to show that $\sqrt{n}\left(S_{1,n}(\tilde\beta)- E(S_{1,n}(\beta^\prime))\right)$ is normally distributed it is sufficient to show that the joint distribution of $(U_{1,n}, U_{2,n})$ is multivariate normal. As defined in \eqref{eq: U1 U2},
\[
U_{1,n}:= S_{1,n}(\beta^\prime) - E\left(S_{1,n}(\beta^\prime)\right)= \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \left( e_{t,\beta^\prime}^2 - E e_{t,\beta'}^2 \right); \quad
U_{2,n}:= \Omega_{1}^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{n} e_{t,\beta^\prime}x_{nt}= \frac{1}{\sqrt{n}}\sum_{t=1}^{n} e_{t,\beta^\prime}c_{nt}
\]
where $c_{nt} = \Omega_{1}^{-1}x_{nt}$, defines a sequence of non-random vectors.
Let $U_{nt}= (U_{1,nt}, U_{2,nt})$ be the joint vector at time $t$; then each component of $\{U_{nt}\}$ is uniformly bounded and strongly mixing with $E(U_{nt})=0$. We need to prove that $a_{1} U_{1,n} + a_{2}^T U_{2,n}$ is asymptotically normal for an arbitrary constant vector $a = (a_1, a_2)$ with, without loss of generality, $a^T a =1$. By the SLLN for mixing processes in \cite{mcleish1975maximal}, there exists a limiting matrix $\Omega_{U}$ such that
\begin{equation*}
\mathrm{Var}(\sum_{t=1}^{n} a^T U_{nt}) = \sum_{h=0}^{n-1}\left(\sum_{t=1}^{n-h} a^T\textrm{Cov}
(U_{nt}, U_{n,t+h}) a\right) + \sum_{h=1}^{n-1}\left(\sum_{t=1+h}^{n} a^T\textrm{Cov}(U_{nt},
U_{n, t-h}) a \right) \rightarrow a^T \Omega_{U} a.
\end{equation*}
Then conditions of the CLT of \cite{davidson1992central} are satisfied, and we have $\sum_{t=1}^{n} U_{nt} \overset{d}\to N(0,\Omega_U)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm: Prob mixdensity sigma=0}]
This proof follows \cite{moran1971maximum}. Here $\delta_0$ is the true value of the parameters, $\delta'=(\beta',0)$ is the limit of the parameters that maximize \eqref{eq: marginal log likelihood} for fixed $\tau=0$, and $\hat\delta$ is the maximum likelihood estimator of \eqref{eq: marginal log likelihood}.
Consider first the distribution of $\hat\delta$ if $\hat\tau>0$. Since the unconditional joint distribution of $\hat\beta$ and $\hat\tau$ is multivariate normal, $N(0, \Omega_{1,1}^{-1} \Omega_{1,2}\Omega_{1,1}^{-1})$ (see the proof of Theorem \ref{Thm: Consist&Asym Marginal Like Estimator}), $F_2(c,\delta_0)$, the distribution of $\sqrt{n}(\hat\beta-\beta_0)\mid\hat\tau>0$, is skew normal based on $N(0,\Omega_{1,1}^{-1}\Omega_{1,2}\Omega_{1,1}^{-1})$.
When $\hat\tau=0$, a Taylor expansion of the first derivatives around $\delta'$ gives
\begin{equation*}
\sqrt{n}(\hat\beta -\beta') = \left(-\frac{1}{n}\sum_{t=1}^n \ddot{l}_{t}(\delta') \right)^{-1} \frac{1}{\sqrt{n}}\sum_{t=1}^n \frac{\partial l_{t}(\delta')}{\partial \beta} + o_p(1),
\end{equation*}
conditional on $n^{-1/2}\sum_{t=1}^n \partial l_{t}(\delta')/\partial \tau < 0$. As $n\to\infty$,
\[
\frac{1}{\sqrt{n}}\sum_{t=1}^n \frac{\partial l_{t}(\delta')}{\partial \delta} = \begin{pmatrix}
\frac{1}{\sqrt{n}}\sum_{t=1}^n e_{t,\beta'}x_{nt} \\
\frac{1}{2\sqrt{n}}\sum_{t=1}^n \left[ e_{t,\beta'}^2- m_t \ddot{b}(x_{t}^T\beta')\right]
\end{pmatrix} \overset{d}\to N(\begin{pmatrix} 0\\
E(S_{1,n}(\beta'))/2 \end{pmatrix}, \begin{pmatrix}
\Omega_{2} & K_{S}/2\\
K_{S}^T/2 & V_{S}/4 \end{pmatrix} )
\]
\end{proof}
\section{INTRODUCTION}
Thermoelectric materials have attracted a great deal of interest due to their remarkable applications in meeting the world's demand for generating electricity from waste heat and for solid-state Peltier coolers\,\cite{dis, nol1, sny}. The thermoelectric efficiency of a material is determined by the dimensionless figure of merit\,\cite{trit1, mah}, defined as $zT=S^2\sigma T/\kappa$=PF $T/\kappa$, where $S$ is the Seebeck coefficient, $\sigma$ is the electrical conductivity, $T$ is the absolute temperature, $\kappa$ is the total thermal conductivity (including the lattice contribution $\kappa_l$ and the electron contribution $\kappa_e$), and PF is the power factor (PF=$S^{2}\sigma$). $zT$=3 is needed for thermoelectric energy converters to compete with mechanical power generation and active refrigeration. However, state-of-the-art commercially available thermoelectric materials have peak $zT$ values of less than unity. As a result, a material suitable for thermoelectric applications must be optimized through its electrical conductivity, Seebeck coefficient and thermal conductivity. However, aside from the lattice thermal conductivity, which is an independent parameter, the other transport properties (electrical conductivity, Seebeck coefficient, and electronic thermal conductivity) cannot be independently tuned to increase $zT$, because they are interdependent via the carrier concentration ($n$) in a given thermoelectric material\,\cite{sny,nol2,nol3}. Therefore, the main conventional strategies for maximizing $zT$ of thermoelectric materials are carrier concentration optimization\,\cite{sny} and lattice thermal conductivity reduction\,\cite{sale,hsu,poudel}. It is well known that the optimal carrier concentration depends on temperature and on the band structure of the thermoelectric semiconductor. As a consequence, two major approaches are pursued separately or in conjunction to achieve higher $zT$: one is to find new crystalline materials with unique structure-property relationships that yield the desired combination of properties\,\cite{sny,trit2,cyu,kau,bro}, and the other is to utilize band engineering\,\cite{her1,her2}, alloying\,\cite{bis} or nanostructuring\,\cite{yang1,sak} to tune the electrical and thermal transport properties.
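As a quick illustration of how these quantities combine, the following is a minimal sketch in Python; $S$ is taken from the 1100 K value reported later in this work, while the values of $\sigma$ and $\kappa$ below are round hypothetical inputs rather than measured data.
\begin{verbatim}
def figure_of_merit(S, sigma, kappa, T):
    """zT = S^2 * sigma * T / kappa, with S in V/K, sigma in S/m,
    kappa in W/(m K) and T in K."""
    power_factor = S ** 2 * sigma      # PF = S^2 sigma, in W m^-1 K^-2
    return power_factor * T / kappa

# Illustrative inputs only; sigma and kappa are hypothetical round numbers.
zT = figure_of_merit(S=205e-6, sigma=2.0e5, kappa=8.0, T=1100.0)
print(f"zT = {zT:.2f}")                # ~1.16 for these inputs
\end{verbatim}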
Half-Heusler compounds with a valence electron count of 18 have recently been identified as promising thermoelectric materials due to their unique XYZ structure\,\cite{cgf,tjz3,jhe}. These phases are well-known semiconductors with a narrow energy gap and a sharp slope of the density of states near the Fermi level, which could potentially provide a higher Seebeck coefficient and moderate electrical conductivity\,\cite{ali,gal,yang2,sim}. Nevertheless, their lattice thermal conductivity is relatively high\,\cite{hon,uhe,xia,sek}. Among them, it is noteworthy that $p$-type FeNb$_{0.8}$Ti$_{0.2}$Sb is particularly competitive, not only because its constituent elements are inexpensive and Hf-free but also because it possesses a relatively low lattice thermal conductivity. More importantly, FeNb$_{0.8}$Ti$_{0.2}$Sb exhibits excellent thermoelectric performance at high temperatures ($>$ 900 K). Its $zT$ is superior to that of the optimized typical half-Heusler compounds\,\cite{tjz1,dow,che}, and, in particular, the maximum $zT$ (1.1 at 1100 K)\,\cite{fu} is almost twice that of the most widely used $p$-type silicon-germanium thermoelectric materials\,\cite{poudel,zeb,yu,jos,poo}. Fu \emph{et al}.\,\cite{fu} have also confirmed the good experimental repeatability and high-temperature stability of FeNb$_{0.8}$Ti$_{0.2}$Sb. Although the excellent high-temperature thermoelectric performance of FeNb$_{0.8}$Ti$_{0.2}$Sb has been presented, the underlying physical mechanisms are not clear\,\cite{Ran}. The study of material properties at low temperatures, free from thermal fluctuations, is essential for a real understanding of the physical origins of its good performance at high temperatures.
In this work, we present a series of low-temperature investigations of FeNb$_{0.8}$Ti$_{0.2}$Sb in order to uncover the physical origins of its excellent thermoelectric performance at high temperatures. The physical mechanisms governing the low-temperature electrical and thermal properties are revealed. Moreover, high coherence between the low- and high-temperature data is observed. Thus, the physical mechanisms identified at low temperatures can be extended to high temperatures.
\section{EXPERIMENTAL DETAILS}
The sample ingot with nominal composition FeNb$_{0.8}$Ti$_{0.2}$Sb used in the experiment was synthesized by levitation melting\,\cite{fu}. The obtained ingot was mechanically milled to obtain fine-grained powders. Afterwards, the powders were immediately compacted by spark plasma sintering at 1123 K for 10 minutes under 65 MPa in a vacuum; for a more detailed explanation refer to Fu \emph{et al}.\,\cite{fu}. The as-sintered samples were annealed at 1123 K for 8 days. The phase structure of the sample was investigated by X-ray diffraction on a Rigaku D/MAX-2550PC diffractometer using Cu-K{$\alpha$} radiation ($\lambda$$_{0}$=1.5406 {\AA}) and the chemical composition was checked by Energy Dispersive Spectrometry on an OXFORD X-Max$^{N}$. Magnetic susceptibility measurements were carried out in the temperature range of 1.8 - 300 K and in magnetic fields up to 5 T using a Magnetic Property Measurement System (Quantum Design). The electrical conductivity, Seebeck coefficient and thermal conductivity measurements were performed from 2 K to 400 K with the thermal transport option (TTO) of a Physical Property Measurement System (Quantum Design). The Hall coefficient and specific heat measurements were completed in the temperature range of 1.8 - 400 K, also using a Physical Property Measurement System (Quantum Design). For high temperatures (300 - 1100 K), the electrical conductivity and Seebeck coefficient were measured on a commercial Linseis LSR-3 system and the thermal conductivity was estimated by a laser flash method on a Netzsch LFA457 instrument with a Pyroceram standard.
\section{RESULTS AND DISCUSSION}
\subsection{Structural characterization}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig1.eps}}
\label{modes} \caption{(Color online) X-ray diffraction pattern of FeNb$_{0.8}$Ti$_{0.2}$Sb.}
\end{figure}
\begin{table}[b]
\centering
\caption[modes]{Atomic distribution of FeNb$_{0.8}$Ti$_{0.2}$Sb.}
\begin{tabular*}{8cm}{@{\extracolsep{\fill}}lcccc}
\hline
\textbf{Element} & \textbf{Ti} & \textbf{Fe} & \textbf{Nb} & \textbf{Sb} \\\hline
\vspace{0.05cm}
\textbf{Atomic\%} & 6.31 & 33.09 & 27.29 & 33.32 \\\hline
\end{tabular*}
\end{table}
The X-ray diffraction pattern of the sample, as shown in Fig. 1, was fully indexed within a cubic face-centered unit cell with lattice parameter $\emph{a}$=5.951 {\AA}. Comparison with the database shows that the diffraction peaks are consistent with the space group $\emph{F}\bar{4}3\emph{m}$, in agreement with the literature\,\cite{cas}. Table I shows the atomic distribution of the sample. The chemical composition of the sample, determined by Energy Dispersive Spectrometry, is FeNb$_{0.8}$Ti$_{0.2}$Sb.
\subsection{Magnetic characterization}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig2.eps}}
\label{modes} \caption{(Color online) (a) Magnetic susceptibility and the inverse susceptibility $vs$ temperature for FeNb$_{0.8}$Ti$_{0.2}$Sb. The dotted line represents the linear extrapolation of the inverse susceptibility $vs$ temperature plots. (b) Magnetization $vs$ magnetic field at 1.8 K. Inset: Arrott plot at 1.8 K of FeNb$_{0.8}$Ti$_{0.2}$Sb.}
\end{figure}
Figure 2(a) shows the temperature dependence of the zero-field-cooled (ZFC) and field-cooled (FC) magnetic susceptibility curves in an external field of 100 Oe, together with the inverse magnetic susceptibility ($\chi^{-1}$ $vs$ $T$) data in the FC run, for FeNb$_{0.8}$Ti$_{0.2}$Sb between 1.8 K and 300 K. Both the ZFC and FC magnetic susceptibilities increase with decreasing temperature, exhibiting a sharp increase at temperatures below 6 K. A peak in the ZFC curve is observed around 10 K and irreversibility between the ZFC and FC curves occurs below 200 K. A divergence between ZFC and FC curves along with a peak in the ZFC curve has been reported in systems\,\cite{vas,ber,all1,rub,all2} that possess mixed exchange interactions, such as spin glasses, superparamagnets or magnetic clusters. Taking other Heusler alloys for reference, the present results suggest the sample is magnetically disordered. The broad maximum in the ZFC curve suggests the presence of a distribution of magnetic clusters/defects\,\cite{vas}. On the other hand, superparamagnetism could also be taken into account\,\cite{vas}. With decreasing temperature, the inverse magnetic susceptibility follows a Curie-Weiss law above 135 K, indicating paramagnetic behavior; however, it deviates markedly from the Curie-Weiss law below 135 K. The Curie-Weiss law has the form $\chi(T)=\chi_{0} + C/(T-T_{C})$, where $C$=$N_{A}\mu_{eff}^2\mu_{B}^2/(3k_{B})$, $N_{A}$ is Avogadro's number, $\mu_{eff}$ is the effective moment in units of the Bohr magneton $\mu_{B}$, and $T_{C}$ is the Curie-Weiss temperature. A least-squares fit of the inverse magnetic susceptibility from 135 K to 300 K is shown in Fig. 2(a). The excellent fit indicates the onset of weak antiferromagnetism below 135 K. The antiferromagnetism is probably a result of atomic disorder\,\cite{sle1,lue,sle2}.
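A least-squares fit of this form can be carried out with a few lines of Python; the sketch below uses synthetic placeholder data rather than our measurements, and the function names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def inv_chi(T, chi0, C, Tc):
    """Inverse susceptibility implied by chi(T) = chi0 + C/(T - Tc)."""
    return 1.0 / (chi0 + C / (T - Tc))

# T (K) and chi would be the measured FC data above 135 K; the arrays
# below are synthetic placeholders, not the actual measurements.
T = np.linspace(140, 300, 80)
chi = 1e-4 + 0.02 / (T - 120.0)
popt, _ = curve_fit(inv_chi, T, 1.0 / chi, p0=[1e-4, 1e-2, 100.0])
chi0, C, Tc = popt   # C yields mu_eff via C = N_A mu_eff^2 mu_B^2 / (3 k_B)
\end{verbatim}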
In order to identify the magnetic phase at temperatures below 10 K, we investigate the magnetization ($M$) $vs$ magnetic field ($H$) at 1.8 K, shown in Fig. 2(b). Superparamagnetism is a form of magnetism which appears in small ferromagnetic or ferrimagnetic nanoparticles: in the absence of an external magnetic field, when the time used to measure the magnetization of the nanoparticles is much longer than the N$\acute{e}$el relaxation time, their magnetization appears to be zero on average. Ferromagnetism, by contrast, can exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field. As shown in Fig. 2(b), a very small amount of hysteresis exists at 1.8 K. This is a good indication of the presence of either superparamagnetism or weak ferromagnetism, since a small hysteresis also occurs in a superparamagnet below its blocking temperature\,\cite{feng,vas}. Therefore, a further investigation of the magnetic properties is needed. An Arrott plot ($M^{2}$ $vs$ $H/M$) of the $M(H)$ data for $H\leqq$5 kOe at 1.8 K is presented in the inset of Fig. 2(b). The Arrott plots are not linear and the slope is positive, further confirming the superparamagnetism and ruling out the possibility of ferromagnetism\,\cite{vas}. The occurrence of hysteresis is due to freezing of the superparamagnetism below 10 K\,\cite{vas}. The presence of an antiferromagnetic state below 135 K and of superparamagnetic clusters below 10 K in the sample can thus be explained by the existence of atomic disorder. We must emphasize that the exact magnetic identification of Heusler alloys remains unsettled\,\cite{vas,ber,all1,rub,all2,sle1,lue,sle2,feng}. The magnetic characterization of the sample is not the focus of this article; we concentrate more on the thermoelectric properties and their origins.
\subsection{Specific heat}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig3.eps}}
\caption{(Color online) Temperature dependence of specific heat of FeNb$_{0.8}$Ti$_{0.2}$Sb. The inset presents the low-temperature data as a $C_P/T$ vs $T^2$ function. The dotted line is the least-squares fit according to the equation above.}
\end{figure}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig4.eps}}
\caption{(Color online) Temperature dependence of (a) electrical conductivity and (b) Seebeck coefficient of FeNb$_{0.8}$Ti$_{0.2}$Sb. The inset of (b) is the temperature dependence of Seebeck coefficient below 50 K. The high-temperature data taken from a LSR-3 system are shown in red for comparison.}
\end{figure}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig5.eps}}
\caption{(Color online) Temperature dependence of (a) Hall coefficient, (b) carrier concentration and (c) carrier mobility of FeNb$_{0.8}$Ti$_{0.2}$Sb.}
\end{figure}
Figure 3 presents the temperature dependence of the specific heat $C_{p}$ of FeNb$_{0.8}$Ti$_{0.2}$Sb. The specific heat curve has a typical sigmoid-like shape and approaches the value expected from the Dulong-Petit law, $C_{p}$ = 3$nR$ = 74.8 J mol$^{-1}$ K$^{-1}$, where $n$ is the number of atoms per formula unit and $R$ is the gas constant. At very low temperatures, the specific heat of FeNb$_{0.8}$Ti$_{0.2}$Sb gradually diminishes to zero. The inset in Fig. 3 shows the low-temperature specific heat presented as $C_{P}/T$ $vs$ $T^{2}$ from 5 K to 10 K. It can be well described by the formula\,\cite{gof} \vspace{2mm}\newline \centerline { $C_{p}=\gamma T+\beta T^{3}$,} \vspace{1mm}\newline where $\gamma T$ and $\beta T^{3}$ are the electron and phonon contributions to the total specific heat, respectively. As a result, the coefficient $\gamma$ is 10.25 mJ mol$^{-1}$ K$^{-2}$ and $\beta$ is 0.10 mJ mol$^{-1}$ K$^{-4}$. From the value of $\beta$ one can estimate the Debye temperature $\Theta_D$=$(12R\pi^4n/5\beta)^{1/3}$ to be about 388 K. An anomalous upturn seen at low temperature is similar to that observed in several systems, including the new iron-based superconductors and other Heusler materials\,\cite{kim,gof2,dor}. For FeNb$_{0.8}$Ti$_{0.2}$Sb, this phenomenon may originate from the magnetic clusters arising from the atomic disorder.
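As a consistency check, substituting $n=3$ atoms per formula unit, $R=8.314$ J mol$^{-1}$K$^{-1}$ and $\beta=0.10$ mJ mol$^{-1}$K$^{-4}$ into this relation gives \vspace{2mm}\newline \centerline{$\Theta_D=\left(\frac{12\pi^{4}nR}{5\beta}\right)^{1/3}=\left(\frac{12\pi^{4}\times 3\times 8.314}{5\times 1.0\times 10^{-4}}\right)^{1/3}\,\mathrm{K}\approx 388\ \mathrm{K}$,} \vspace{1mm}\newline in agreement with the value quoted above.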
It is noteworthy that the thermal conductivity can be estimated theoretically using Debye theory, which reveals a relation between the thermal conductivity and the specific heat given by $\kappa=\frac{1}{3}C\nu l$, where $C$ is the specific heat per volume, $\nu$ is the average phonon velocity, and $l$ is the phonon mean free path. At very low temperatures, the small specific heat implies a low thermal conductivity. With increasing temperature, the specific heat increases rapidly and approaches a constant value; the thermal conductivity therefore increases rapidly and also reaches a maximum. At higher temperatures, with the enhancement of phonon-phonon scattering, the average phonon velocity and phonon mean free path are significantly limited and the thermal conductivity is largely reduced.
\subsection{Electrical transport properties}
Figure 4 illustrates the temperature dependence of (a) the electrical conductivity and (b) the Seebeck coefficient of FeNb$_{0.8}$Ti$_{0.2}$Sb. The high-temperature data taken from the LSR-3 system are drawn in red for comparison. As shown in the figures, the low- and high-temperature electrical transport properties measured by the different methods are consistent with each other, and the low- and high-temperature data converge at room temperature. As temperature is increased, the electrical conductivity, which is in the range of 10$^6$ $S m^{-1}$, decreases rapidly, following degenerate semiconducting behavior\,\cite{cgf2}. This implies that the electrical conductivity will follow a temperature dependence of $T^{-1.5}$ from the Debye temperature (388 K) to the intrinsic excitation temperature\,\cite{cgf3}, which agrees well with the high-temperature experimental data (388 K - 1100 K). It means that acoustic phonon scattering dominates charge transport\,\cite{shi}, which is consistent with the specific heat measurement. There is an upturn at low temperatures (below 30 K); this anomalous temperature dependence of the electrical conductivity may be due to the magnetic clusters arising from the atomic disorder\,\cite{dor}.
The Seebeck coefficient is negative below 10 K and remains positive from 10 K to 1100 K. As temperature is increased, the Seebeck coefficient increases rapidly, approaching 80 $\mu$V K$^{-1}$ in the vicinity of 400 K and a maximum of 205 $\mu$V K$^{-1}$ at 1100 K, which is typical behavior for degenerate semiconductors\,\cite{cgf2}. Thus it can be predicted that the Seebeck coefficient will increase linearly with temperature before the onset of intrinsic excitation, in accordance with the high-temperature data.
For FeNb$_{0.8}$Ti$_{0.2}$Sb, the carrier concentration is one of the most important physical parameters for thermoelectric performance. The electrical conductivity is related to the carrier concentration through the carrier mobility ($\mu$): $\sigma=ne\mu$, where $e$ is the unit charge. The carrier concentration is calculated by $n=1/eR_{H}$, where $R_{H}$ is the Hall coefficient\,\cite{fu}. Figure 5 shows the temperature dependence of (a) the Hall coefficient, (b) the calculated carrier concentration and (c) the carrier mobility of FeNb$_{0.8}$Ti$_{0.2}$Sb from 1.8 K to 400 K. The carrier concentration is rather constant and almost independent of temperature, about 10$^{21}$ $cm^{-3}$ below 400 K, which is in the optimal value range\,\cite{sny,fu}. The carrier mobility decreases slightly with temperature and becomes 25 $cm^2v^{-1}s^{-1}$ at 400 K, which is consistent with the electrical conductivity. The Hall coefficient values are positive and remain stable over the whole low-temperature range, indicating that the majority charge carriers are holes and that there is only a single type of carrier, which benefits the Seebeck coefficient. For degenerate semiconductors the Seebeck coefficient is given by\,\cite{sak} \vspace{2mm}\newline\centerline {$S = \frac{8\pi^2 k^2_BT}{3eh^2}m^*(\frac{\pi}{3n})^{2/3}$,} \vspace{1mm}\newline where $m^{*}$ is the effective mass of the carrier. Thus the large Seebeck coefficient is due to the optimal carrier concentration and to hole carriers, which have a larger effective mass than electrons. Below 10 K, the values of the Seebeck coefficient are below zero, implying that the effective mass is negative; this means the band curves downwards away from a maximum. Taking into account the magnetic properties of FeNb$_{0.8}$Ti$_{0.2}$Sb, the magnetic clusters arising from the atomic disorder in the sample may contribute to this phenomenon.
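These quantities are mutually consistent as a simple order-of-magnitude check: taking the values near 400 K read from Fig. 5, $n\approx10^{21}$ cm$^{-3}=10^{27}$ m$^{-3}$ and $\mu\approx 25$ cm$^{2}$V$^{-1}$s$^{-1}=2.5\times10^{-3}$ m$^{2}$V$^{-1}$s$^{-1}$, \vspace{2mm}\newline \centerline{$\sigma=ne\mu\approx 10^{27}\times 1.602\times10^{-19}\times 2.5\times10^{-3}\ \mathrm{S\,m^{-1}}\approx 4\times10^{5}\ \mathrm{S\,m^{-1}}$,} \vspace{1mm}\newline which is of the same order as the measured electrical conductivity in Fig. 4(a).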
\subsection{Thermal transport properties}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig6.eps}}
\caption{(Color online) Temperature dependence of the thermal conductivity of FeNb$_{0.8}$Ti$_{0.2}$Sb, as well as high-temperature data (in red) estimated by a laser flash method on a Netzsch LFA457 instrument with a Pyroceram standard. The inset is the temperature dependence of the lattice and electron components of the thermal conductivity of FeNb$_{0.8}$Ti$_{0.2}$Sb.}
\end{figure}
Figure 6 presents the temperature dependence of the thermal conductivity of FeNb$_{0.8}$Ti$_{0.2}$Sb, together with high-temperature data estimated by a laser flash method on a Netzsch LFA457 instrument with a Pyroceram standard. The low-temperature data agree well with the high-temperature data obtained by the different method. As temperature is increased, the thermal conductivity increases rapidly, reaches a maximum (approximately 8.7 W\,K$^{-1}$\,m$^{-1}$) around 126 K, and then declines gradually, in accordance with the estimate from the specific heat. The shape of the observed thermal conductivity curve can be further explained by the scattering mechanisms: the thermal conductivity is typically limited by normal three-phonon scattering, umklapp scattering, impurity scattering and boundary scattering\,\cite{cal}. At very low temperatures, boundary scattering predominates and the thermal conductivity is small. As temperature is increased, impurity scattering becomes important, because it becomes easier to create higher-frequency phonons, which are scattered efficiently by point impurities; the thermal conductivity therefore reaches a maximum and then declines. As temperature is increased further, normal three-phonon scattering and umklapp scattering gradually come to dominate, and at still higher temperatures all of these phonon-scattering processes are active.
The inset shows the temperature dependence of the lattice and electron components of the thermal conductivity of FeNb$_{0.8}$Ti$_{0.2}$Sb. The lattice thermal conductivity was obtained by subtracting the electron component from the total thermal conductivity. The electronic thermal conductivity was calculated via $\kappa_{e}$=$L\sigma T$=$Lne\mu T$, where $L$ is the Lorenz number, which can be calculated to a reasonable approximation using the single parabolic band (SPB) model\,\cite{xie}. The temperature dependence of the lattice thermal conductivity is similar to that of the total thermal conductivity, with a maximum of 5.9 W\,K$^{-1}$\,m$^{-1}$ around 77 K. In previous work it was found that the carrier mean free path in $p$-type FeNbSb is comparable to the lattice parameter, indicating that the carrier mobility of this system almost reaches the Ioffe-Regel limit\,\cite{Gurvitch}; carrier scattering has thus essentially reached its upper limit, so introducing more phonon-scattering centers will not impair the power factor while it largely suppresses the lattice thermal conductivity\,\cite{cgf2}. For typical semiconductors the electronic thermal conductivity is much less than the lattice thermal conductivity, whereas the electron contribution to the total thermal conductivity of FeNb$_{0.8}$Ti$_{0.2}$Sb is not negligible. As shown in the inset of Fig. 6, the electronic thermal conductivity increases with temperature and becomes higher than the lattice thermal conductivity above 200 K.
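A rough estimate of the electron component follows by taking the degenerate-limit Lorenz number $L_{0}=2.44\times10^{-8}$ W\,$\Omega$\,K$^{-2}$ (the SPB value is somewhat lower) together with the conductivity estimated above: \vspace{2mm}\newline\centerline{$\kappa_{e}=L\sigma T \approx (2.44\times10^{-8})(4\times10^{5})(400)~\mathrm{W\,m^{-1}\,K^{-1}} \approx 3.9~\mathrm{W\,m^{-1}\,K^{-1}}$ at 400 K,} \vspace{1mm}\newline comparable to the lattice component, in line with the crossover near 200 K seen in the inset.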
To minimize the lattice thermal conductivity, disorder within the unit cell\,\cite{sale}, superlattices\,\cite{venk}, complex unit cells\,\cite{chun} and nanostructures\,\cite{tjz2} have been widely used in thermoelectric materials over the past several years. For FeNb$_{0.8}$Ti$_{0.2}$Sb, the electronic thermal conductivity is comparable to or even higher than the lattice thermal conductivity above 200 K. This means that the lattice thermal conductivity is largely suppressed and the total thermal conductivity is mainly determined by the electronic component.
\subsection{Figure of merit zT}
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.48\textwidth]{fig7.eps}}
\caption{(Color online) Temperature dependence of $zT$ of FeNb$_{0.8}$Ti$_{0.2}$Sb. The data at high temperatures (in red) are taken from a LSR-3 system and a Netzsch LFA457 instrument.}
\end{figure}
Figure 7 shows the temperature dependence of $zT$ of FeNb$_{0.8}$Ti$_{0.2}$Sb in the temperature range 1.8 - 400 K, together with the high-temperature data measured by different methods for comparison. As the combined result of the electrical conductivity, Seebeck coefficient and thermal conductivity, the $zT$ exhibits a pronounced rise with temperature; it increases continuously to 0.14 around 400 K and reaches a maximum of 1.1 at 1100 K. Compared with other high-temperature thermoelectric materials, FeNb$_{0.8}$Ti$_{0.2}$Sb exhibits excellent thermoelectric performance for power generation. The $zT$ exceeds the industry benchmarks set by $p$-type silicon-germanium high-temperature alloys\,\cite{use}, and is even better than that of the optimized $n$-type (Hf, Zr)NiSn half-Heusler compound (maximum $zT$ of 1.0 at 1000 K)\,\cite{tjz1,dow,che}. The low-temperature $zT$ values join smoothly with the high-temperature data, converging around room temperature, and the trend demonstrated at low temperature extends to high temperature. This means that the large Seebeck coefficient, the moderate electrical conductivity and the relatively low thermal conductivity at low temperatures, which result from an optimal and temperature-independent carrier concentration and the high Ti doping content, account for the high-temperature behavior as well. It is known that the power factor is essentially determined by the electronic properties, and the thermal conductivity of FeNb$_{0.8}$Ti$_{0.2}$Sb is also mainly dominated by the electron component. Based on the above considerations, the $zT$ of FeNb$_{0.8}$Ti$_{0.2}$Sb is mainly governed by the electronic properties.
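The quoted value at 400 K can be reproduced from $zT=S^{2}\sigma T/\kappa$ with the numbers reported above ($S \approx 80$ $\mu$V\,K$^{-1}$, $\sigma \approx 4\times10^{5}$ S\,m$^{-1}$) and a total thermal conductivity of roughly 7 W\,K$^{-1}$\,m$^{-1}$ read from Fig. 6: \vspace{2mm}\newline\centerline{$zT \approx (80\times10^{-6})^{2}\times(4\times10^{5})\times 400/7 \approx 0.15$,} \vspace{1mm}\newline in good agreement with the measured curve.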
\section{CONCLUSIONS}
In conclusion, we have performed electrical and thermal transport measurements on FeNb$_{0.8}$Ti$_{0.2}$Sb at low temperatures in order to elucidate the physical origins of its high thermoelectric performance while avoiding the influence of thermal fluctuations. The low-temperature trends of the electrical conductivity, Seebeck coefficient and thermal conductivity extend to high temperature. The optimized power factor mostly results from the optimal and almost temperature-independent carrier concentration; meanwhile, a single type of hole carrier benefits the Seebeck coefficient as well. The lattice thermal conductivity is largely suppressed and the total thermal conductivity is mainly determined by the electronic thermal conductivity. Consequently, the $zT$ of FeNb$_{0.8}$Ti$_{0.2}$Sb is mainly governed by the electronic properties. As a result, the $zT$ exhibits a pronounced rise from low to high temperatures and approaches a maximum of 1.1 at 1100 K, exceeding that of state-of-the-art thermoelectric materials. These findings highlight that investigating the low-temperature physical properties of thermoelectric materials can provide a thorough understanding of their high-temperature behavior.
\vspace{8mm}
\begin{acknowledgments}
We thank Yanping Yang for help in the Energy Dispersive Spectrometer measurement. The sample preparation was supported by Natural Science Foundation of China (No. 11574267).
\end{acknowledgments}
\section{Introduction}
Within a spin torque oscillator (STO), magnetic auto-oscillations, with MHz to GHz frequencies, are driven by the spin transfer torque (STT) associated with the injection of DC spin current. Their frequency and amplitude can be tuned via either the DC electrical bias current or an applied magnetic field, while the magnetoresistance of the constituent materials leads to the generation of voltage oscillations. STOs have strong potential for magnetic sensing, signal processing, and neuromorphic computing applications \cite{stiles2006spin,Karenowska2015}. The ability to lock the frequency and phase of the STO to an injected RF signal is an important property within applications, while arrays of STOs promise increased output power through mutual synchronization. However, it is first necessary to understand the character of the underlying magnetization dynamics. More specifically, the dynamics excited by both DC and RF currents must be determined if the conditions required for phase-locking are to be fully understood.
Within a spin Hall nano-oscillator (SHNO), the spin Hall effect (SHE) \cite{Kato2004, Hoffmann2013} drives a pure spin current from a heavy metal with large spin-orbit interaction into a ferromagnetic layer \cite{Demidov2012,Brataas2012,Dumas2014}. The decoupling of charge and spin currents opens up new device geometries, for example enabling exploitation of magnetic insulators\cite{Hamadeh2014}, and, in the present study, allows optical access to the active region of the device.
The generation of magnetic auto-oscillations requires a critical spin current density to be exceeded. Within the SHNO the injected charge current is concentrated within a small region of the heavy metal, either by overlaying thick needle-shaped nano-contacts (NCs) on the heavy metal layer,\cite{Demidov2012,Demidov2014,Liu2013,Nouriddine2011,Ulrichs2014,Giordano2014} or by forming a nanoconstriction within the heavy metal/ferromagnet bilayer\cite{Demidov2014a,Awad2016,Kendziorczyk2016,Mazraati2016,Zahedinejad2017,Divinskiy2017}. SHNOs of both kinds have been studied by Brillouin light scattering (BLS), microwave spectroscopy and micromagnetic simulations. While the spectral characteristics of the dynamics have been explored, the time-dependent magnetization has not been measured directly.
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig1_device_bullets_V4h_p_LowRes.png}
\caption{(a) SEM image of a typical SHNO, where $I$ is the injected current, $\sigma$ the corresponding spin polarization, $d$ the NC separation, and $H$ the magnetic field applied at angle $\theta_H$. (b) Voltage Spectral Density (VSD) of microwave emission from a SHNO with $d$ = 240 nm at fixed magnetic field $H$ = 650 Oe and $\theta_H = 210^\circ$ for different values of $I_{DC}$. (c) Microwave emission from a SHNO with $d =$ 180 nm for $I_{DC}$ = 18 mA, with magnetic field $H$ oriented at $\theta_H = 150^\circ$. The red squares show resonance fields determined by STT-FMR measurements of the same device, while the solid black curve is a fit that is described within the main text. (d) Emission from a device with $d$ = 200 nm when $I_{DC}$ = 16 mA and $I_{RF}$ has an amplitude of 0.9 mA and frequency of 6 GHz. Squares identify the centre of peaks fitted to the emission spectra. The red dotted curve represents the line centre for free-running oscillations when $I_{RF}$ = 0 mA. The inset shows the emission spectrum at the edge of the locking region, from which a background spectrum acquired with $I_{DC}$ = 0 has been subtracted.}
\label{Fig:1}
\end{figure}
Here, time-resolved scanning Kerr microscopy (TRSKM)\cite{Gangmei2011,Valkass2016,Keatley2017,Keatley2016} is used to study NC-SHNO devices. The responses to a radio frequency (RF) current $I_{RF}$, and to a DC current $I_{DC}$ when phase-locked to an injected $I_{RF}$, are observed. The source of the contrast observed in longitudinal and polar magneto-optical Kerr effect (MOKE) measurements is first explained, before the spatial character of the magnetization dynamics induced by RF and DC currents is determined. Finally, the implications for effective injection locking and optimisation of the device geometry will be discussed.
NC-SHNOs were fabricated by a combination of sputter deposition and electron-beam lithography. A 4 $\mu$m Py(5 nm)/Pt(6 nm) bilayer disk was first defined upon an Al$_2$O$_3$ substrate, before two triangular Au(150 nm) NCs, with tip separation $d$, were overlaid as shown in Figure \ref{Fig:1}(a). The NCs are intended to concentrate electrical current within the Pt at the NC tips. The charge current generates a spin current, via the SHE, that propagates from the Pt into the Py. Once the STT compensates the damping, a self-localized non-linear spin wave ``bullet'' mode is formed. While other modes can be supported within the disk, e.g. propagating waves when the Py is magnetized normal to the plane,\cite{Giordano2014} the bullet mode is of particular interest due to its narrow linewidth and tuneable frequency.
Initial microwave electrical measurements were performed by connecting a selected device to a bias-tee. The inductive and capacitive arms were used to supply $I_{DC}$ and $I_{RF}$ respectively, while the RF signal from the device was directed into a spectrum analyzer via a circulator and a +24 dB pre-amplifier. Stroboscopic TRSKM measurements were performed with a vector quadrant bridge detector that exploits different MOKE geometries to simultaneously detect the three spatial components of the dynamic magnetization and the optical reflectance \cite{Valkass2016a}. The dynamics must be synchronized, via the injected $I_{RF}$, to an exact multiple of the 80 MHz repetition rate of the laser within the TRSKM. The phase of $I_{RF}$ is then adjusted relative to the laser pulses so that the time evolution of the magnetization dynamics can be observed. Measurements were performed with phase modulation of $I_{RF}$ to enhance the signal-to-noise ratio. The laser pulses had 800 nm wavelength, and were focused to a spot of $\sim$870 nm FWHM diameter by a microscope objective with 10.1 mm working distance and 0.55 numerical aperture \cite{Keatley2017}.
Electrical measurements were performed to identify the bullet mode and confirm locking to $I_{RF}$. Figure \ref{Fig:1}(b) shows emission from a NC-SHNO with $d$ = 240 nm when $I_{DC}$ exceeds $\sim$18 mA. The frequency red-shifts with increasing $I_{DC}$, while no emission is observed if the sign of either $H$ or $I_{DC}$ is changed, in agreement with the expected symmetry of the SHE. Figure \ref{Fig:1}(c) shows the field and frequency dependence of both the ferromagnetic resonance (FMR) frequency $f$, determined from separate STT-FMR measurements (consistent with previous work \cite{Liu2011,Liu2013}), and the microwave emission from the same device with $d$ = 180 nm. The FMR frequency has been fitted to the well-known formula $f = (\gamma/2\pi)\sqrt{[H(H+4{\pi}M)]}$ where $(\gamma/2\pi) = (g/2) \times$ 2.80 MHz/Oe with $g$ = 2, and $M$ = 604 $\pm$ 50 emu/cm$^3$. For a given frequency, microwave emission is observed at a field close to but greater than that of the FMR mode. The dependence of the frequency upon $H$ and $I_{DC}$ is consistent with previous observations of the bullet mode \cite{Ulrichs2014}.
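As an illustrative consistency check using the fitted parameters (with $4\pi M \approx 7.6$ kG), the formula yields $f \approx 2.80~\mathrm{MHz/Oe} \times \sqrt{650\times(650+7590)}~\mathrm{Oe} \approx 6.5$ GHz at $H$ = 650 Oe, so that auto-oscillations observed at 6 GHz lie slightly below the FMR frequency at this field, consistent with the red-shifted character of the bullet mode.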
Figure \ref{Fig:1}(d) shows locking of a bullet mode of 6 GHz frequency to an injected $I_{RF}$ of the same frequency for a device with $d$ = 200 nm. As $H$ is decreased from its maximum value, the frequency of emission is ``pulled'' towards 6 GHz, where it remains within the locking range. Due to the smaller signal amplitude, the pulling is not seen at the lower end of the locking range. Within the locking range, increased output power and reduced linewidth occur, although here they are masked by the residual injected $I_{RF}$ that reaches the spectrum analyzer. An intermodulation mode can also be observed, decreasing in frequency with increasing field. Locking could also be achieved by injection of $I_{RF}$ with frequency of 12 GHz (see supplementary material), but $I_{RF}$ $\sim$ 3.5 mA was required to achieve even a narrow locking range, which has been attributed to thermal noise enhanced by the spin current \cite{Demidov2014}.
Therefore TRSKM measurements were performed with the frequency of $I_{RF}$ set to 6 GHz so as to minimise the amplitude of $I_{RF}$ required to achieve locking.
Figure \ref{Fig:2}(a) shows TRSKM images of a device with $d$ = 200 nm, acquired with the bullet mode locked to $I_{RF}$, and when $I_{RF}$ is still present but $I_{DC}$ = 0 mA. In the latter case, $I_{RF}$ drives the FMR with $H$ detuned from the line centre. The addition of $I_{DC}$ leads to additional dynamics in all three magnetic channels. In the polar channel, localized precession is observed between the NC tips, with a different phase to the dynamics in the extended disk (detailed within the supplementary material). By subtracting the images acquired with and without $I_{DC}$, the dynamic response due to $I_{DC}$ may be estimated, as shown in Figure \ref{Fig:2}(b). The subtracted images for the two in-plane (horizontal and vertical) channels each exhibit a spatially antisymmetric structure centred on the peak observed in the polar contrast, but occupying a somewhat larger area of $\sim 2\,\mu$m diameter. The subtraction yields negligible residual contrast in the extended region of the disk, confirming that the bullet mode is tightly confined at the centre of the disk.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig2_v5i_p.png}
\caption{(a) TRSKM images acquired from the polar (out of plane), horizontal, vertical and reflected intensity channels of the vector bridge detector for a NC-SHNO with $d$ = 200 nm, $H$ = 650 Oe, $\theta_H=210^\circ$, and $I_{RF}$ = 1.4 mA at 6 GHz frequency. $I_{DC}=$ 16 mA and 0 mA in the upper and lower panels respectively. (b) The difference of the upper and lower images from (a) is shown, revealing contrast due to the presence of $I_{DC}$ only. Each image shows a 5 $\mu m$ square region.}
\label{Fig:2}
\end{figure}
Further measurements at different time delays confirmed that the contrast in the magnetic channels oscillates with $I_{RF}$, and with the same relative phase, which is unexpected if the magnetization undergoes a circular or elliptical precession. Furthermore, since $H$ is applied 30$^\circ$ from the horizontal axis, the amplitude of the dynamic magnetization is expected to be significantly greater in the vertical than in the horizontal direction, while in fact these two components were found to have comparable amplitude. To aid the interpretation, micromagnetic simulations were performed using MuMax3, after the current distribution and associated Oersted field had been calculated in COMSOL \cite{COMSOL,Vansteenkiste2014}. The magnetization of the permalloy disk was allowed to relax with $H =$ 1 kOe before the DC spin current and additional Oersted field were applied.
Images of the simulated bullet mode are presented in Figure \ref{Fig:3}(a) for the configuration in Figure \ref{Fig:2}. Initial simulations showed that the bullet quickly escapes the active area and is damped within the extended disk. Therefore a pinning site was introduced in the form of either a single cell discontinuity in the magnetization, or a reduction in saturation magnetization with Gaussian spatial profile of 5\% peak value and $\sim 240$ nm FWHM as in Figure \ref{Fig:3}(a). This led to a bullet mode that was stable for a finite range of $I_{DC}$ values, as well as an additional mode that was localized in the non-uniform Oersted field associated with the injected charge current. The latter mode lies at higher frequency than the bullet mode, but was not observed in the room temperature electrical measurements of Figure \ref{Fig:1}, and so is not expected to appear in TRSKM measurements. Both the bullet and field-localized modes were found to have spatial and spectral character consistent with previous simulations \cite{Ulrichs2014,Giordano2014}. The need to pin the bullet may imply that the spin current produces a local reduction of the spontaneous magnetization that is not captured by the present simulations.
\begin{figure}
\centering
\includegraphics[width=8cm]{SimDynamics_V10b_sglCol_p.png}
\caption{(a) Simulated magnetization profile at different phase values within the cycle of auto-oscillation, with field $H$ = 1 kOe applied at $\theta_{H}$ = 210$^{\circ}$. The arrows indicate the projection of the magnetization within the plane, while the grayscale represents the out of plane component. The images show the bullet mode with $\sim$ 70 nm diameter at the center of the device. The orientation of the coordinate axes is shown, while the origin lies at the center of the device. The lower right panel shows the time-averaged magnetization while the horizontal $x$ axis indicates two areas of interest, (i) the area immediately surrounding the bullet, and (ii) the core of the bullet mode. (b) Magnetization trajectories are shown for different points on the $x$ axis. Arrows show the direction of precession over 1 cycle of oscillation. (c) Schematic to illustrate the artefact that generates contrast in the horizontal and vertical channels. Arrows indicate the direction of propagation of rays.}
\label{Fig:3}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=17cm]{fig4b_CurrVar_V4b.png}
\caption{(a) Polar TRSKM images acquired for different $I_{DC}$ values with the phase of $I_{RF}$ fixed. (b) Maximum absolute values of polar Kerr rotation extracted from the images in (a). (c) TRSKM images from (a) after subtraction of the $I_{DC}= 0$ mA image. All images were recorded from a NC-SHNO with $d$ = 240 nm, with $I_{RF}$ = 0.8 mA, $H$ = 650 Oe and $\theta_H = 210^\circ$.}
\label{Fig:4}
\end{figure*}
While the bullet mode exhibits large angle precession, the images in Figure \ref{Fig:3}(a) do not reproduce the spatially antisymmetric character observed in the measured horizontal and vertical components. The core of the bullet mode undergoes the largest angle of precession. Figure \ref{Fig:3}(b) shows magnetization trajectories at different distances from the center of the disk along the $x$ axis, within two regions of interest, (i) outside and (ii) within the core of the bullet mode. The trajectories are plotted for one cycle of the bullet mode. They are not closed because the motion results from a superposition of the bullet mode with the field-localized mode that has a somewhat different frequency. Outside the core region the magnetization undergoes elliptical precession about an axis parallel to the applied field. At the edge of the core region, at $x=-35$ nm, the average magnetization is close to zero with an in-plane precession angle of $\sim270^\circ$. Within the core the precession amplitude increases further so that the trajectory crosses over itself with the magnetization effectively precessing about a direction anti-parallel to the applied field. The magnetization precesses with the same phase at all positions within the disk. Simulations performed with an additional $I_{RF}$ demonstrated slightly improved stability of the bullet, but otherwise were of similar character.
Difference images, calculated from simulated images separated by $180^\circ$ in phase, were convolved with a 870 nm FWHM Gaussian profile to more closely reproduce the experimental images. Again they did not reproduce the spatially antisymmetric contrast observed in the vertical and horizontal channels. Further tests showed that the antisymmetric contrast was observed only when the bullet mode was present, ruling out mechanisms such as polarization of the Pt by $I_{RF}$ via the SHE. Therefore the in-plane contrast must be an artefact associated with the optical probe overlapping the edge of the 150 nm thick NCs, while in proximity to the bullet mode. Figure \ref{Fig:3}(c) provides a schematic representation of the likely mechanism. As the probe passes over the NCs the beam returning to the detector is partially obstructed. Crucially the symmetry between rays propagating in opposite directions within the cone is broken. The resulting difference in intensity of the two halves of the back-reflected beam, combined with a finite polar Kerr rotation due to the bullet mode, manifests as a signal similar to that due to the longitudinal MOKE from an in-plane component of magnetization\cite{Keatley2006}. It follows from the NC geometry that a top-bottom antisymmetry is observed in the vertical channel and a left-right antisymmetry in the horizontal channel.
The polar images are unaffected by the artefact. Figure \ref{Fig:4}(a) shows polar images, of a device with $d$ = 240 nm, acquired for different $I_{DC}$ values, with the phase and amplitude of $I_{RF}$ fixed. While the FMR mode is observed throughout the disk for all $I_{DC}$ values, the amplitude of the Kerr rotation at the NC tips is observed to increase markedly for current values in the range 17 to 19 mA. By extracting the maximum absolute Kerr rotation from Figure \ref{Fig:4}(a), a clear threshold behaviour, characteristic of the bullet mode, can be observed, that is plotted in Figure \ref{Fig:4}(b). For small $I_{DC}$ values the amplitude of the FMR mode increases gradually with increasing $I_{DC}$ as the injection of DC spin current into the Py layer compensates the damping. In Figure \ref{Fig:4}(c) the images from Figure \ref{Fig:4}(a) have been replotted after subtracting the image for which $I_{DC}$ = 0. For $I_{DC}\geqslant 10$ mA a region of negative contrast appears to the right of the NCs. The asymmetry of the FMR response about the centre of the device reflects the mixed symmetry of the torques present. The STT and the torque due to the in-plane Oersted field are symmetric about the centre while the torque due to the out of plane Oersted field is antisymmetric \cite{Spicer2018a}.
The electrical data of Figure \ref{Fig:1}(b) revealed the presence of a bullet mode for $I_{DC}$ values between 18 and 20 mA. However Figure \ref{Fig:4} shows strong out of plane dynamics at the NC tips for $I_{DC}$ = 17 mA, that is still present when $I_{DC}$ = 19 mA. The reduced threshold value of $I_{DC}$ is due to the presence of the $I_{RF}$, as observed previously \cite{Demidov2014}. Figures \ref{Fig:4}(a) and (c) also show that the apparent size of the bullet mode depends upon $I_{DC}$. Comparing the images for $I_{DC}$ values between 17 and 19 mA, the bullet mode appears to occupy a larger region as $I_{DC}$ is increased, with some reduction in the maximum Kerr amplitude. This might be explained by increased phase noise at the centre of the bullet suppressing its peak amplitude and causing its apparent width to increase. However, since the diameter of this region is an order of magnitude greater than that of the simulated bullet mode in Figure {\ref{Fig:3}}(a), it seems more likely that the bullet instead develops significant translational motion when phase-locked to $I_{RF}$.
It is clear from Figures \ref{Fig:2}(a) and \ref{Fig:4}(a) that the FMR induced by $I_{RF}$ exhibits a minimum at the centre of the device. A more detailed optically-detected STT-FMR study has confirmed that the torques due to $I_{RF}$ also exhibit a minimum, which is attributed to lateral spreading of $I_{RF}$ due to the reactance of the device.\cite{Spicer2018a} The present study shows that the bullet and FMR modes exhibit weak spatial overlap. It is reasonable to expect that the bullet may be pulled towards where the STT arising from $I_{RF}$ is larger, and may exhibit translational motion relative to the centre of the device.
The bullet could either establish a stable trajectory, or escape and be damped in the extended disk (see supplementary material), allowing another bullet to form at the centre and repeat the process. Increasing $I_{DC}$ is likely to increase the mobility of the bullet, allowing it to move further from the centre.
Since the linewidth of the emission in Figure \ref{Fig:1}(b) is only weakly dependent on $I_{DC}$, formation of a stable trajectory seems more likely. Both the lack of spatial overlap of dynamics induced by $I_{DC}$ and $I_{RF}$, and the translational motion of the bullet, may impede injection locking, and contribute to the NC-SHNO being difficult to lock \cite{Demidov2014} compared to other STO devices.
In summary, time-resolved images of an injection-locked non-linear bullet mode within a NC-SHNO have been obtained. The injected $I_{RF}$ excites a FMR mode that exhibits weak spatial overlap with the bullet mode. The apparent size of the bullet increases with DC current, which is suggested as being due to increased translational motion of the bullet when the RF current is present. Further work is required to determine the trajectory of the bullet. The translational motion and the lack of spatial overlap of the bullet and FMR modes may impede injection-locking of the NC-SHNO. This illustrates a more general need to control the geometry of injection-locked oscillators so that the autonomous dynamics of the oscillator exhibit strong spatial overlap with those resulting from the injected signal.
\section*{Supplementary Material}
The supplementary material contains additional microwave spectroscopy and TRSKM datasets, and further details of micromagnetic simulations.
\begin{acknowledgments}
We acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom, via the EPSRC Centre for Doctoral Training in Metamaterials (Grant No. EP/L015331/1) and EPSRC grants EP/I038470/1 and EP/P008550/1.
The research data supporting this publication are openly available from the University of Exeter's institutional repository at: https://doi.org/10.24378/exe.923
\end{acknowledgments}
\section{Introduction}
Coherent states [1-2] are widely used in many aspects of quantum physics.
The bosonic coherent state, which describes the quantum state of the laser,
obeys the coordinate-momentum minimum uncertainty relation and possesses
non-orthonormal and over-complete properties. Generalized coherent states
have been constructed and applied by theoreticians since the 1970s, among which
the coherent state $\left\vert q,\alpha \right\rangle $ for charged bosons
[3] is a remarkable one. One introduces ``charge'' by defining two types
of quanta carrying charge $+1$ and $-1$, with corresponding annihilation
operators $a$ and $b$, so that the operator $Q=a^{\dagger }a-b^{\dagger }b$
plays the role of the charge operator; $\left\vert q,\alpha \right\rangle $ is
constructed based on $\left[ Q,ab\right] =0,$ $\left[ a,a^{\dagger }\right] =%
\left[ b,b^{\dagger }\right] =1.$ In the Fock space the charged coherent
state is
\begin{eqnarray*}
\left\vert q,\alpha \right\rangle &=&C_{q}\sum_{n=0}^{\infty }\frac{\alpha
^{n}}{\sqrt{(n+q)!n!}}\left\vert n+q,n\right\rangle , \\
Q\left\vert q,\alpha \right\rangle &=&q\left\vert q,\alpha \right\rangle ,%
\text{\ \ }ab\left\vert q,\alpha \right\rangle =\alpha \left\vert q,\alpha
\right\rangle
\end{eqnarray*}%
where $C_{q}$ is the normalization constant. In quantum optics theory $%
\left\vert q,\alpha \right\rangle $ was named the pair-coherent state by
Agarwal in [4]; $Q$ is the two-mode photon number-difference operator. By
observing $\left[ Q,a^{\dagger }b^{\dagger }\right] =0,$ in Ref. [5] Fan and
Klauder also constructed the common eigenvector of $Q$ and $a^{\dagger
}b^{\dagger }$ with use of Dirac's $\delta $-function in the contour
integral form proposed by Heitler [6]-[7]. One is then naturally led to ask:
what is the simultaneous eigenstate, denoted $\left\vert q,k\right\rangle ,$ of $%
Q$ and $\left( ab-a^{\dagger }b^{\dagger }\right) ?$ This question is full
of physical meaning in quantum optics, because most nonlinear interactions
in the parametric approximation reduce to the bilinear form
\begin{equation}
H_{I}=i\hbar \kappa \left( ab-a^{\dagger }b^{\dagger }\right) , \label{1}
\end{equation}%
since
\begin{equation}
\left[ \left( ab-a^{\dagger }b^{\dagger }\right) ,Q\right] =0, \label{2}
\end{equation}%
where $\kappa $ is related to the susceptibility in the parametric process. $%
H_{I}$ is responsible for producing two-mode squeezed states [8-11], which
are not only useful for optical communication and weak-signal detection, but
also embody quantum entanglement; i.e., the Einstein-Podolsky-Rosen (EPR)
correlations [12] for quadrature phase amplitudes are intrinsic to two-mode
squeezed light, and the idler mode and the signal mode generated by a
parametric amplifier are entangled with each other in the frequency domain.
Answering this question is quite difficult. We recall Dirac's guidance [13]:
\textquotedblleft When one has a particular problem to work out in quantum
mechanics, one can minimize the labour by using a representation in which the
representatives of the more important abstract quantities occurring in that
problem are as simple as possible\textquotedblright. At first glance, we
thought that the charged coherent state representation $\left\vert q,\alpha
\right\rangle $ was a good candidate for tackling the problem as simply as
possible, but after some tries we found that it was not. Eventually we found
that the entangled state representation [14-15] is of assistance for
searching for the desired common eigenvector $\left\vert q,k\right\rangle $
of $Q$ and $\left( ab-a^{\dagger }b^{\dagger }\right) $,
\begin{equation}
Q\left\vert q,k\right\rangle =q\left\vert q,k\right\rangle ,\text{ }
\label{3}
\end{equation}%
\begin{equation}
\left( ab-a^{\dagger }b^{\dagger }\right) \left\vert q,k\right\rangle
=\left( q-k-1\right) \left\vert q,k\right\rangle . \label{4}
\end{equation}%
because it possesses well-behaved features. Thus we shall briefly review the
properties of the entangled state $\left\vert \xi \right\rangle $ in Sec. 2.
Then in Sec. 3 we shall make full use of $\left\vert \xi \right\rangle $ to
set up a complex differential equation for the overlap $\left\langle \xi
\right\vert \left. q,k\right\rangle .$ In Sec. 4 we employ the two-variable
Hermite polynomials and the hypergeometric function to solve the
differential equation. Gauss' contiguous relations for the hypergeometric
function are essential for deriving the common eigenvector of $Q$ and $%
\left( ab-a^{\dagger }b^{\dagger }\right) $.
\section{Brief review of the bipartite entangled state representations}
In [14] and [15] we have constructed the bipartite entangled state%
\begin{equation}
\exp \left[ -\frac{|\xi |^{2}}{2}+\xi a^{\dagger }+\xi ^{\ast }b^{\dagger
}-a^{\dagger }b^{\dagger }\right] \left\vert 0\right\rangle \equiv
\left\vert \xi \right\rangle ,\text{ }\xi =|\xi |e^{i\varphi }, \label{5}
\end{equation}%
$\left\vert \xi \right\rangle $ satisfies the eigenvector equations%
\begin{equation}
\left( a+b^{\dagger }\right) \left\vert \xi \right\rangle =\xi \left\vert
\xi \right\rangle ,\ \left( a^{\dagger }+b\right) \left\vert \xi
\right\rangle =\xi ^{\ast }\left\vert \xi \right\rangle . \label{6}
\end{equation}%
or
\begin{equation}
\frac{1}{2}\left( X_{1}+X_{2}\right) \left\vert \xi \right\rangle =\frac{1}{%
\sqrt{2}}\xi _{1}\left\vert \xi \right\rangle ,\text{ \ }\left(
P_{1}-P_{2}\right) \left\vert \xi \right\rangle =\sqrt{2}\xi _{2}\left\vert
\xi \right\rangle , \label{7}
\end{equation}%
i.e., $\left\vert \xi \right\rangle $ is just the simultaneous eigenvector of
the two-particle center-of-mass position $X_{1}+X_{2}$ and the relative
momentum $P_{1}-P_{2}$ in Fock space. We name it the EPR eigenstate since
EPR were the first to use the commutative property $\left[
X_{1}-X_{2},P_{1}+P_{2}\right] =0$ to argue the incompleteness of quantum
mechanics [12]. According to Dirac's representation theory $[13]$:
\textquotedblleft To set up a representation in a general way, we take a
complete set of bra vectors, i.e. a set such that any bra can be expressed
linearly in terms of them,\textquotedblright\ we use the normal ordering
form of the two-mode vacuum state projector
\begin{equation}
\left\vert 00\right\rangle \left\langle 00\right\vert =:e^{-a^{\dagger
}a-b^{\dagger }b}:, \label{8}
\end{equation}%
and the technique of integration within an ordered product (IWOP) of
operators [16]-[17], we can prove
\begin{equation}
\int \frac{d^{2}\xi }{\pi }\left\vert \xi \right\rangle \left\langle \xi
\right\vert =1. \label{9}
\end{equation}
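The proof is a direct application of the IWOP technique: with the help of (8) one casts $\left\vert \xi \right\rangle \left\langle \xi \right\vert $ into normal ordering and integrates within the ordered product,
\[
\int \frac{d^{2}\xi }{\pi }\left\vert \xi \right\rangle \left\langle \xi
\right\vert =\int \frac{d^{2}\xi }{\pi }:\exp \left[ -|\xi |^{2}+\xi \left(
a^{\dagger }+b\right) +\xi ^{\ast }\left( a+b^{\dagger }\right) -a^{\dagger
}b^{\dagger }-ab-a^{\dagger }a-b^{\dagger }b\right] :\;=\;:e^{0}:\;=1,
\]
where we used $\int \frac{d^{2}\xi }{\pi }e^{-|\xi |^{2}+\xi A+\xi ^{\ast }B}=e^{AB}$ and the fact that, within the normal ordering symbol, the resulting exponent $\left( a^{\dagger }+b\right) \left( a+b^{\dagger }\right) $ may be rearranged as $a^{\dagger }a+a^{\dagger }b^{\dagger }+ab+b^{\dagger }b$, which cancels the remaining terms.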
\section{Complex differential equation for the new state $\left\vert
q,k\right\rangle $}
By noticing (5) we see%
\begin{eqnarray}
a\left\vert \xi \right\rangle &=&\left( \xi -b^{\dagger }\right) \left\vert
\xi \right\rangle ,\text{\ \ }b\left\vert \xi \right\rangle =\left( \xi
^{\ast }-a^{\dagger }\right) \left\vert \xi \right\rangle , \label{10} \\
\left\langle \xi \right\vert a^{\dagger } &=&\left\langle \xi \right\vert
\left( \xi ^{\ast }-b\right) ,\text{\ \ }\left\langle \xi \right\vert
b^{\dagger }=\left\langle \xi \right\vert \left( \xi -a\right) , \notag
\end{eqnarray}%
so%
\begin{equation}
\left\langle \xi \right\vert \left( ab-a^{\dagger }b^{\dagger }\right)
=\left\langle \xi \right\vert \left[ ab-\left( \xi ^{\ast }-b\right)
b^{\dagger }\right] =\left\langle \xi \right\vert \left[ b\xi -\left( \xi
-a\right) \xi ^{\ast }+1\right] . \label{11}
\end{equation}%
Then using
\begin{equation}
a^{\dagger }\left\vert \xi \right\rangle =\left( \frac{\partial }{\partial
\xi }+\frac{\xi ^{\ast }}{2}\right) \left\vert \xi \right\rangle ,\text{\ \ }%
b^{\dagger }\left\vert \xi \right\rangle =\left( \frac{\partial }{\partial
\xi ^{\ast }}+\frac{\xi }{2}\right) \left\vert \xi \right\rangle ,
\label{12}
\end{equation}%
we re-write (11) as
\begin{equation}
\left\langle \xi \right\vert \left( ab-a^{\dagger }b^{\dagger }\right)
=\left\langle \xi \right\vert \left[ \left( \frac{\overleftarrow{\partial }}{%
\partial \xi }+\frac{\xi ^{\ast }}{2}\right) \xi +\left( \frac{%
\overleftarrow{\partial }}{\partial \xi ^{\ast }}+\frac{\xi }{2}\right) \xi
^{\ast }-|\xi |^{2}+1\right] . \label{13}
\end{equation}%
Operating (13) on the state $\left\vert q,k\right\rangle $, and using (4) we
obtain a complex differential equation
\begin{eqnarray}
&&\left\langle \xi \right\vert \left( ab-a^{\dagger }b^{\dagger }\right)
\left\vert q,k\right\rangle \label{14} \\
&=&\left[ \xi \frac{\partial }{\partial \xi }+\xi ^{\ast }\frac{\partial }{%
\partial \xi ^{\ast }}+1\right] \left\langle \xi \right\vert \left.
q,k\right\rangle =\left( q-k-1\right) \left\langle \xi \right\vert \left.
q,k\right\rangle \notag
\end{eqnarray}%
On the other hand, from (5) we have
\begin{eqnarray}
\left( a^{\dagger }a-b^{\dagger }b\right) \left\vert \xi \right\rangle
&=&\left( \xi a^{\dagger }-b^{\dagger }\xi ^{\ast }\right) \left\vert \xi
\right\rangle \label{15} \\
&=&|\xi |\left( e^{-i\varphi }a^{\dagger }-e^{i\varphi }b^{\dagger }\right)
\exp \left[ -\frac{|\xi |^{2}}{2}+|\xi |\left( e^{-i\varphi }a^{\dagger
}+e^{i\varphi }b^{\dagger }\right) -a^{\dagger }b^{\dagger }\right]
\left\vert 00\right\rangle \notag \\
&=&-i\frac{\partial }{\partial \varphi }\left\vert \xi \right\rangle ,\text{%
\ \ } \notag
\end{eqnarray}%
which together with (3) leads to another differential equation%
\begin{equation}
\left\langle \xi \right\vert \left( a^{\dagger }a-b^{\dagger }b\right)
\left\vert q,k\right\rangle =i\frac{\partial }{\partial \varphi }%
\left\langle \xi \right\vert \left. q,k\right\rangle =q\left\vert
q,k\right\rangle . \label{16}
\end{equation}%
In the next section we want to solve (14) and (16).
\section{Solution to Eqs. (14) and (16)}
After many tries we find the solution of (14) and (16) is%
\begin{equation}
\left\langle \xi \right\vert \left. q,k\right\rangle =\mathfrak{C}%
(k)e^{-|\xi |^{2}/2}A, \label{17}
\end{equation}%
where $\mathfrak{C}(k)$ is the normalization constant, which remains
undisturbed throughout the following calculations, so we suppress it in the
derivation, and
\begin{equation}
A\equiv \sum_{n=0}^{\infty }\frac{1}{n!}H_{n+q,n}\left( \xi ^{\ast },\xi
\right) \text{ }_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2) \label{18}
\end{equation}%
$_{2}F_{1}$ is the hypergeometric function defined as%
\begin{equation}
_{2}F_{1}(\alpha ,\beta ;\gamma ;z)=\sum_{n=0}^{\infty }\frac{%
\left( \alpha \right) _{n}\left( \beta \right) _{n}}{\left( \gamma \right)
_{n}}\frac{z^{n}}{n!}=\frac{\Gamma (\gamma )}{\Gamma (\alpha )\Gamma (\beta )%
}\sum_{n=0}^{\infty }\frac{\Gamma (n+\alpha )\Gamma (n+\beta )}{\Gamma
(n+\gamma )}\frac{z^{n}}{n!}, \label{19}
\end{equation}
the symbol $\left( \alpha \right) _{n}$ means
\begin{equation}
\left( \alpha \right) _{n}=\frac{\Gamma (\alpha +n)}{\Gamma (\alpha )}%
=\alpha \left( \alpha +1\right) (\alpha +2)\cdot \cdot \cdot (\alpha +n-1),
\label{20}
\end{equation}%
and the two-variable Hermite polynomials are defined as [18]
\begin{equation}
H_{m,n}(\xi ,\xi ^{\ast })=\sum_{l=0}^{\min (m,n)}\frac{m!n!}{l!(m-l)!(n-l)!}%
(-1)^{l}\xi ^{m-l}\xi ^{\ast n-l}=H_{n,m}(\xi ^{\ast },\xi ), \label{21}
\end{equation}%
whose generating function is%
\begin{equation}
\sum_{m,n=0}^{\infty }\frac{t^{m}t^{\prime n}}{m!n!}H_{m,n}(\xi ,\xi ^{\ast
})=\exp \left( -tt^{\prime }+t\xi +t^{\prime }\xi ^{\ast }\right) .
\label{22}
\end{equation}%
Note that the hypergeometric series (19) converges for $|z|<1,$ $\gamma \neq
0,-1,-2,\cdot \cdot \cdot ;$ here, however, since $-n$ is a non-positive
integer each series terminates, so every $_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)$
is a well-defined polynomial value and the sum over $n$ in (18) is to be
understood as a formal power series.
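For orientation, the lowest-order ingredients of (18) follow directly from (19)-(21); for example,
\[
H_{q,0}\left( \xi ^{\ast },\xi \right) =\xi ^{\ast q},\qquad H_{q+1,1}\left(
\xi ^{\ast },\xi \right) =\xi ^{\ast q}\left( \xi ^{\ast }\xi -q-1\right)
,\qquad _{2}F_{1}(-1,\frac{k}{2}+1;q+1;2)=1-\frac{k+2}{q+1}.
\]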
Now we prove (14): Firstly, we notice%
\begin{equation}
\xi \frac{\partial }{\partial \xi }\left\langle \xi \right\vert \left.
q,k\right\rangle =\xi \left( -\frac{\xi ^{\ast }}{2}e^{-|\xi
|^{2}/2}A+e^{-|\xi |^{2}/2}\frac{\partial }{\partial \xi }A\right) ,
\label{23}
\end{equation}%
and%
\begin{equation}
\xi ^{\ast }\frac{\partial }{\partial \xi ^{\ast }}\left\langle \xi
\right\vert \left. q,k\right\rangle =\xi ^{\ast }\left( -\frac{\xi }{2}%
e^{-|\xi |^{2}/2}A+e^{-|\xi |^{2}/2}\frac{\partial }{\partial \xi ^{\ast }}%
A\right) , \label{24}
\end{equation}%
it then follows%
\begin{equation}
\left( \xi \frac{\partial }{\partial \xi }+\xi ^{\ast }\frac{\partial }{%
\partial \xi ^{\ast }}\right) \left\langle \xi \right\vert \left.
q,k\right\rangle =-|\xi |^{2}\left\langle \xi \right\vert \left.
q,k\right\rangle +e^{-|\xi |^{2}/2}\left( \xi \frac{\partial }{\partial \xi }%
+\xi ^{\ast }\frac{\partial }{\partial \xi ^{\ast }}\right) A \label{25}
\end{equation}%
so%
\begin{eqnarray}
&&\left( \xi \frac{\partial }{\partial \xi }+\xi ^{\ast }\frac{\partial }{%
\partial \xi ^{\ast }}+1\right) \left\langle \xi \right\vert \left.
q,k\right\rangle \label{26} \\
&=&e^{-|\xi |^{2}/2}\left( \xi \frac{\partial }{\partial \xi }+\xi ^{\ast }%
\frac{\partial }{\partial \xi ^{\ast }}+1-|\xi |^{2}\right)
\sum_{n=0}^{\infty }\frac{1}{n!}H_{n+q,n}\left( \xi ^{\ast },\xi \right)
\text{ }_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2). \notag
\end{eqnarray}%
Then we use the property%
\begin{equation}
\frac{\partial }{\partial \xi }H_{m,n}\left( \xi ^{\ast },\xi \right)
=nH_{m,n-1}\left( \xi ^{\ast },\xi \right) ,\text{ }\frac{\partial }{%
\partial \xi ^{\ast }}H_{m,n}\left( \xi ^{\ast },\xi \right)
=mH_{m-1,n}\left( \xi ^{\ast },\xi \right) \label{27}
\end{equation}%
and
\begin{equation}
H_{m+1,n}+nH_{m,n-1}=\xi ^{\ast }H_{m,n},\ H_{m,n+1}+mH_{m-1,n}=\xi H_{m,n},
\label{28}
\end{equation}%
which can be derived from (21) and (22), as well as%
\begin{eqnarray}
|\xi |^{2}H_{m,n} &=&\xi ^{\ast }\left( H_{m+1,n}+nH_{m,n-1}\right)
\label{29} \\
&=&H_{m+1,n+1}+nmH_{m-1,n-1}+\left( m+n+1\right) H_{m,n}, \notag
\end{eqnarray}%
we have%
\begin{equation}
\begin{array}{c}
\left( \xi \frac{\partial }{\partial \xi }+\xi ^{\ast }\frac{\partial }{%
\partial \xi ^{\ast }}+1\right) \left\langle \xi \right\vert \left.
q,k\right\rangle \\
=e^{-|\xi |^{2}/2}\sum_{n=0}^{\infty }\frac{1}{n!}[\xi nH_{n+q,n-1}+\xi
^{\ast }(q+n)H_{n+q-1,n} \\
-H_{n+q+1,n+1}-n(n+q)H_{n+q-1,n-1}-\left( q+2n\right) H_{n+q,n}]\text{ }%
_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2) \\
=e^{-|\xi |^{2}/2}\sum_{n=0}^{\infty }\frac{1}{n!}\{n\left[
H_{n+q,n}+(n+q)H_{n+q-1,n-1}\right] \\
+(q+n)[H_{n+q,n}+nH_{n+q-1,n-1}] \\
-H_{n+q+1,n+1}-n(n+q)H_{n+q-1,n-1}-\left( q+2n\right) H_{n+q,n}\}\text{ }%
_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2) \\
=e^{-|\xi |^{2}/2}\sum_{n=0}^{\infty }\frac{1}{n!}%
[(q+n)nH_{n+q-1,n-1}-H_{n+q+1,n+1}]\text{ }_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)
\\
=e^{-|\xi |^{2}/2}\sum_{n=0}^{\infty }\frac{1}{n!}H_{n+q,n}[(q+n+1)\text{ }%
_{2}F_{1}(-n-1,\frac{k}{2}+1;q+1;2) \\
-n\text{ }_{2}F_{1}(-n+1,\frac{k}{2}+1;q+1;2)].%
\end{array}
\label{30}
\end{equation}%
Then using two Gauss' contiguous relations about the hypergeometric function
[19-20]%
\begin{equation}
_{2}F_{1}(\alpha ,\beta ;\gamma ;\varepsilon )=\ _{2}F_{1}(\beta ,\alpha
;\gamma ;\varepsilon ), \label{31}
\end{equation}%
and%
\begin{equation}
\begin{array}{c}
\lbrack \gamma -2\beta +(\beta -\alpha )\varepsilon ]\text{ }%
_{2}F_{1}(\alpha ,\beta ;\gamma ;\varepsilon )+\beta (1-\varepsilon
)_{2}F_{1}(\alpha ,\beta +1;\gamma ;\varepsilon ) \\
-(\gamma -\beta )_{2}F_{1}(\alpha ,\beta -1;\gamma ;\varepsilon )=0,%
\end{array}
\label{32}
\end{equation}%
and letting $\alpha =\frac{k}{2}+1,$ $\beta =-n,$ $\gamma =q+1,$ $%
\varepsilon =2,$ we have%
\begin{eqnarray}
&&\left( q-k-1\right) \text{ }_{2}F_{1}(\frac{k}{2}+1,-n;q+1;2)+n\text{ }%
_{2}F_{1}(\frac{k}{2}+1,-n+1;q+1;2) \label{33} \\
&&-(q+n+1)\text{ }_{2}F_{1}(\frac{k}{2}+1,-n-1;q+1;2)=0 \notag
\end{eqnarray}%
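As a quick check of (33) at $n=0$: here $_{2}F_{1}(\frac{k}{2}+1,0;q+1;2)=1$ and $_{2}F_{1}(\frac{k}{2}+1,-1;q+1;2)=1-\frac{k+2}{q+1}=\frac{q-k-1}{q+1}$, so the left-hand side reduces to $\left( q-k-1\right) -\left( q+1\right) \frac{q-k-1}{q+1}=0$, as it should.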
Substituting (33) into the right-hand side of (30) we finally obtain
(recovering $\mathfrak{C}(k))$
\begin{eqnarray}
&&\left( \xi \frac{\partial }{\partial \xi }+\xi ^{\ast }\frac{\partial }{%
\partial \xi ^{\ast }}+1\right) \left\langle \xi \right\vert \left.
q,k\right\rangle \label{34} \\
&=&(q-k-1)\mathfrak{C}(k)e^{-|\xi |^{2}/2}\sum_{n=0}^{\infty }\frac{1}{n!}%
H_{n+q,n}\left( \xi ^{\ast },\xi \right) _{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)
\notag \\
&=&(q-k-1)\left\langle \xi \right\vert \left. q,k\right\rangle , \notag
\end{eqnarray}%
thus we have verified that (17) solves (14). On the other hand, from (21) we
know
\begin{equation}
H_{n+q,n}\left( \xi ^{\ast },\xi \right) =e^{-iq\varphi }H_{n+q,n}\left(
|\xi |,|\xi |\right) , \label{35}
\end{equation}%
so the solution (17) automatically satisfies (16) as well. The solution
appears to be new.
\section{The simultaneous eigenstate of $Q$ and $\left( ab-a^{\dagger
}b^{\dagger }\right) $}
Now we hope to obtain $\left\vert q,k\right\rangle $ from the information of
$\left\langle \xi \right\vert \left. q,k\right\rangle .$ Using the
completeness relation (9) of $\left\vert \xi \right\rangle $ and (17)-(18)
we can have
\begin{eqnarray}
\left\vert q,k\right\rangle &=&\int \frac{d^{2}\xi }{\pi }\left\vert \xi
\right\rangle \left\langle \xi \right\vert \left. q,k\right\rangle
\label{36} \\
&=&\mathfrak{C}(k)\int \frac{d^{2}\xi }{\pi }\left\vert \xi \right\rangle
e^{-|\xi |^{2}/2}\sum\limits_{n=0}^{\infty }\frac{1}{n!}H_{n+q,n}\left( \xi
^{\ast },\xi \right) \text{ }_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2). \notag
\end{eqnarray}%
Then using (22) we expand $\left\vert \xi \right\rangle $ in (5),%
\begin{eqnarray}
\left\vert \xi \right\rangle &=&e^{-|\xi |^{2}/2}\sum_{l,j=0}^{\infty }%
\frac{a^{\dagger l}b^{\dagger }{}^{j}}{l!j!}H_{l,j}(\xi ,\xi ^{\ast
})\left\vert 0,0\right\rangle \label{37} \\
&=&e^{-|\xi |^{2}/2}\sum_{l,j=0}^{\infty }\frac{1}{\sqrt{l!j!}}H_{l,j}\left(
\xi ,\xi ^{\ast }\right) \left\vert l,j\right\rangle , \notag
\end{eqnarray}%
where $\left\vert l,j\right\rangle $ is the two-mode Fock state.
Substituting (37) into (36) and using the integration formula%
\begin{equation}
\int \frac{d^{2}\xi }{\pi }e^{-|\xi |^{2}}H_{l,j}\left( \xi ,\xi ^{\ast
}\right) H_{m,n}^{\ast }\left( \xi ,\xi ^{\ast }\right) =\delta _{l,m}\delta
_{j,n}m!n!, \label{38}
\end{equation}%
we have
\begin{eqnarray}
\left\vert q,k\right\rangle &=&\mathfrak{C}(k)\sum\limits_{n=0}^{\infty }%
\frac{1}{n!}_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2) \label{39} \\
&&\times \int \frac{d^{2}\xi }{\pi }e^{-|\xi |^{2}}\sum_{l,j=0}^{\infty }%
\frac{1}{\sqrt{l!j!}}H_{l,j}\left( \xi ,\xi ^{\ast }\right) H_{n+q,n}^{\ast
}\left( \xi ,\xi ^{\ast }\right) \left\vert l,j\right\rangle \notag \\
&=&\mathfrak{C}(k)\sum_{n=0}^{\infty }\frac{\sqrt{\left( n+q\right) !}}{%
\sqrt{n!}}\text{ }_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)\left\vert
n+q,n\right\rangle \notag \\
&=&\mathfrak{C}(k)a^{\dagger q}\sum_{n=0}^{\infty }\frac{a^{\dagger
n}b^{\dagger n}}{n!}\text{ }_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)\left\vert
0,0\right\rangle . \notag
\end{eqnarray}%
Although each factor $_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)$ in (39) is a finite
polynomial value, it is evaluated at the argument $2$, outside the disk of
convergence of (19), and the factors $\left( n+q\right) !/n!$ grow with $n$;
thus $\left\vert q,k\right\rangle $ seems not to be normalizable to a finite
number, but to a divergent one. To see this more clearly, using (17)-(18),
(9) and (38), we formally have
\begin{eqnarray}
\left\langle q,k\right\vert \left. q,k\right\rangle &=&\int \frac{d^{2}\xi
}{\pi }\left\langle q,k\right\vert \left. \xi \right\rangle \left\langle \xi
\right\vert \left. q,k\right\rangle \label{40} \\
&=&|\mathfrak{C}(k)|^{2}\int \frac{d^{2}\xi }{\pi }\sum_{n=0}^{\infty }\frac{%
1}{n!}H_{n+q,n}\left( \xi ^{\ast },\xi \right) \text{ }_{2}F_{1}(-n,\frac{k}{%
2}+1;q+1;2) \notag \\
\times &&\sum_{n^{\prime }=0}^{\infty }\frac{1}{n^{\prime }!}H_{n^{\prime
}+q,n^{\prime }}\left( \xi ,\xi ^{\ast }\right) \text{ }\left[
_{2}F_{1}(-n^{\prime },\frac{k}{2}+1;q+1;2)\right] ^{\ast } \notag \\
&=&|\mathfrak{C}(k)|^{2}\sum_{n=0}^{\infty }\frac{\left( n+q\right) !}{n!}%
\text{ }|_{2}F_{1}(-n,\frac{k}{2}+1;q+1;2)|^{2}. \notag
\end{eqnarray}%
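For instance, for $k=-2$ every factor $_{2}F_{1}(-n,0;q+1;2)$ equals $1$, so that
\[
\left\langle q,-2\right\vert \left. q,-2\right\rangle =|\mathfrak{C}(-2)|^{2}\sum_{n=0}^{\infty }\frac{\left( n+q\right) !}{n!},
\]
which is manifestly divergent.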
Therefore, the common eigenvector of $Q$ and $\left( ab-a^{\dagger
}b^{\dagger }\right) $, like the common eigenvector of $Q$ and $a^{\dagger
}b^{\dagger }$, is normalized as a singular function, so its applications
are greatly limited. However, the exploration of the formal solution $%
\left\vert q,k\right\rangle $ has its own interest in mathematical physics.
In summary, as a continuation of the work of Bhaumik et al. [3], we have set
up a complex differential equation in the entangled state representation for
deriving the new common eigenstate $\left\vert q,k\right\rangle $ of the
parametric interaction Hamiltonian and the number-difference operator. The
complex differential equation has been solved in terms of the hypergeometric
function and the two-variable Hermite polynomials, and the solution appears
to be new. Thus this paper embodies a new use of hypergeometric functions
and has significance for theoretical mathematical physics. This work is also
an addendum to Ref. [5], in which the common eigenkets of $Q$ and the
two-mode creation operator $a^{\dagger }b^{\dagger }$ are discussed.
\section{Introduction}\label{intro}
The goal of this paper is to explain some phenomena arising in the
realization of tractor bundles in conformal geometry as associated
bundles. In order to form an associated bundle one chooses a
principal bundle with normal Cartan connection
(i.e., a normal parabolic geometry) corresponding to a given conformal
manifold. We show that different natural choices can lead to topologically
distinct
associated tractor bundles for the same inducing representation. The
nature of the choices is subtle, so we give a careful presentation of the
relevant foundational material which we hope researchers in the field will
find illuminating. The main considerations apply as well to more general
parabolic geometries.
We focus particularly on tractor bundles associated to the standard
representation of $O(p+1,q+1)$.
The paper \cite{BEG} gave a construction of a canonical tractor bundle and
connection on any conformal manifold $(M,c)$ which are now usually called
the standard tractor bundle and its normal tractor connection.
This standard tractor bundle has other characterizations and realizations;
these were studied in \cite{CG2}. One of the realizations
is as an associated bundle to a
principal bundle over the conformal manifold; the
standard tractor bundle is associated to the standard representation of
$O(p+1,q+1)$. The complications arise because there are
different ways to realize a given conformal manifold as a normal
parabolic geometry, corresponding to different choices of structure group
and lifted conformal frame bundle. Different such choices can give rise to
different
tractor bundles with connection associated to the standard representation,
and for many natural choices one does not obtain the standard tractor
bundle with its normal connection. For example, let
$\mathcal{Q}$ denote the model quadric for conformal geometry in signature
$(p,q)$, consisting of the space of null lines for a quadratic form of
signature $(p+1,q+1)$. If one takes the
homogeneous space realization $\mathcal{Q}=O(p+1,q+1)/P^{\operatorname{line}}$, where
$P^{\operatorname{line}}$ denotes the isotropy group of a fixed null line, then for $pq\neq
0$ the bundle associated to the standard representation of $P^{\operatorname{line}}$ is
not the standard tractor bundle. Moreover its holonomy
(which is trivial) is
not equal to the conformal holonomy of $\mathcal{Q}$ (which is $\{\pm I\}$).
Recall that $\mathcal{Q}$ is orientable if its dimension $n=p+q$ is even, so for
$n$ even this phenomenon is not a consequence of failure of
orientability of the conformal manifold. The
issue is that this associated bundle does not have the correct topology.
We begin by recalling in \S\ref{standard} the BEG construction of
the standard tractor bundle and its normal connection. We then formulate
a (slight variant of a)
uniqueness theorem of \cite{CG2} providing conditions on a bundle with
auxiliary data on
a conformal manifold which characterize it as the standard tractor bundle
with its normal connection. Following
\cite{CG2}, we review the construction of a tractor bundle and connection
via the ambient
construction and show using the \v{C}ap-Gover Uniqueness Theorem that this
construction also produces the standard tractor bundle.
In \S\ref{TCS} we review a fundamental prolongation result which we call
the TM\v{C}S Theorem (for Tanaka, Morimoto, \v{C}ap-Schichl), which asserts
an equivalence of categories between certain categories of parabolic
geometries and categories of underlying structures on the base manifold.
Our treatment is similar to that of \cite{CSl} except that we
parametrize parabolic geometries and underlying structures by triples
$(\mathfrak{g},P,\operatorname{Ad})$, where $\mathfrak{g}$ is a $|k|$-graded semisimple Lie algebra, $P$ is
a Lie group with Lie algebra $\mathfrak{p}=\mathfrak{g}^0$, and $\operatorname{Ad}$ is a suitable
representation of $P$ on $\mathfrak{g}$. Also we are more explicit about the
choices involved in determining an underlying structure.
We then illustrate the TM\v{C}S Theorem by showing how it can be used to
represent general conformal manifolds and oriented conformal manifolds as
parabolic geometries. In each case, in order to obtain a category of
parabolic geometries one must make a choice of a Lie group $P$ whose Lie
algebra is the usual parabolic subalgebra $\mathfrak{p}\subset \mathfrak{s}\mathfrak{o}(p+1,q+1)$,
and, depending on the
choice of $P$, also a choice of a lift of the conformal frame bundle.
There are several choices, some of which are equivalent.
In \S\ref{trac} we describe the construction of tractor bundles and
connections as associated bundles for general parabolic geometries,
and then specialize to the parabolic geometries arising from conformal
structures discussed in \S\ref{TCS}. We parametrize our associated bundles
by suitably compatible $(\mathfrak{g},P)$-modules; as for our parametrization of
parabolic geometries we find that this clarifies the dependence on the
various choices. We make some observations about general tractor bundles
as associated bundles for conformal geometry, and then we specialize to the
question of which choices from \S\ref{TCS} give rise
to the standard tractor bundle when one takes the $(\mathfrak{g},P)$-module to be
the standard representation. There are preferred choices for which one
always obtains the standard tractor bundle: for conformal manifolds one
should choose $P^{\operatorname{ray}}$, the subgroup of $O(p+1,q+1)$ preserving a null ray,
and for oriented conformal manifolds one should choose $SP^{\operatorname{ray}}$, the
subgroup of $SO(p+1,q+1)$ preserving a null ray. This is well-known and is
often taken as the definition of the standard tractor bundle. What is
novel in our discussion is the fact that so many other natural
choices give bundles associated to the standard representation which are
not the standard tractor bundle with its normal connection.
In \S\ref{trac} we also briefly discuss homogeneous models and conformal
holonomy. We follow the
usual convention of defining the conformal holonomy of a conformal manifold
to be the holonomy of the standard tractor bundle with its normal
connection, and show that for natural choices of principal bundles it often
happens that the
holonomy of the tractor bundle with normal connection associated to the
standard representation is not equal to the conformal
holonomy. We conclude
\S\ref{trac} with a brief discussion of analogous
phenomena for the parabolic geometries corresponding to generic 2-plane
fields on 5-manifolds, the consideration of which led us to become aware of
these subtleties in the first place.
Throughout, our conformal structures are of signature $(p,q)$ on
manifolds $M$ of dimension $n=p+q\geq 3$.
We are grateful to Andreas \v{C}ap and Rod Gover for useful comments and
suggestions.
\section{Standard Tractor Bundle}\label{standard}
The paper \cite{BEG} gave a concrete construction of a tractor bundle
$\mathcal{T}$ on general conformal manifolds. $\mathcal{T}$ has rank $n+2$, carries a
fiber metric $h$ of signature
$(p+1,q+1)$, and has a null rank 1 subbundle $\mathcal{T}^1$ isomorphic to the
bundle
$\mathcal{D}[-1]$ of conformal densities of weight $-1$. We denote by $\mathcal{D}[w]$ the
bundle of conformal densities of weight $w$ and by $\mathcal{E}[w]$ its space of
smooth sections. The bundle $\mathcal{T}$ was defined to be
a particular conformally invariant subbundle of the 2-jet bundle
of $\mathcal{D}[1]$. It was then shown that a choice $g$ of a representative of
the conformal class induces a splitting
$$
\mathcal{T}\cong\mathcal{D}[-1]\oplus TM[-1]\oplus \mathcal{D}[1],
$$
where
$TM[w]=TM\otimes \mathcal{D}[w]$. With respect to this splitting, a section
$U\in \Gamma(\mathcal{T})$ is represented as a triple
$$
U=
\begin{pmatrix}
\rho\\
\mu^i\\
\sigma
\end{pmatrix}
$$
with $\rho\in \mathcal{E}[-1]$, $\mu^i\in \Gamma(TM[-1])$, $\sigma\in \mathcal{E}[1]$.
Under a conformal change $\widehat{g}=e^{2\Upsilon}g$, the representations are
identified by
$$
\begin{pmatrix}
\widehat{\rho}\\
\widehat{\mu}^i\\
\widehat{\sigma}
\end{pmatrix}
=
\begin{pmatrix}
1&-\Upsilon_j&-\tfrac12 \Upsilon_k\Upsilon^k\\
0&\delta^i{}_j&\Upsilon^i\\
0&0&1
\end{pmatrix}
\begin{pmatrix}
\rho\\
\mu^j\\
\sigma
\end{pmatrix}.
$$
Indices are raised and lowered using the tautological 2-tensor
${\bf g}\in \Gamma(S^2T^*M[2])$ determined by the conformal structure.
The tractor metric $h$ is defined by
$$
h(U,U)= 2\rho\sigma + {\bf g}_{ij}\mu^i\mu^j.
$$
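As a check on the consistency of these definitions, note that $h$ does not
depend on the choice of representative: substituting the transformation law
above gives
$$
2\widehat{\rho}\,\widehat{\sigma}+{\bf g}_{ij}\widehat{\mu}^i\widehat{\mu}^j
=2\big(\rho-\Upsilon_k\mu^k-\tfrac12 \Upsilon_k\Upsilon^k\sigma\big)\sigma
+\big(\mu_i+\Upsilon_i\sigma\big)\big(\mu^i+\Upsilon^i\sigma\big)
=2\rho\sigma+{\bf g}_{ij}\mu^i\mu^j,
$$
the cross terms cancelling in pairs.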
The subbundle $\mathcal{T}^1$ is defined by $\mu^i=0$, $\sigma=0$, and the map
$$
\rho\mapsto
\begin{pmatrix}
\rho\\
0\\
0
\end{pmatrix}
$$
defines a conformally invariant isomorphism $\mathcal{D}[-1]\cong \mathcal{T}^1$.
Note for future reference that since conformal density bundles are trivial,
$\mathcal{T}$ is isomorphic to
$TM\oplus \mathbb R^2$ as a smooth vector bundle, where $\mathbb R^2$ denotes a trivial
rank 2 vector bundle. It follows that $\mathcal{T}$ is orientable if and only if
$M$ is orientable.
A connection $\nabla$ on $\mathcal{T}$ was defined in \cite{BEG} directly in terms
of the splitting and the chosen representative, and the definition was
verified to be conformally invariant and to give $\nabla h=0$. The
definition is:
$$
\nabla_i
\begin{pmatrix}
\rho\\
\mu^j\\
\sigma
\end{pmatrix}
=
\begin{pmatrix}
\nabla_i \rho-P_{ik}\mu^k\\
\nabla_i \mu^j+\delta_i{}^j\rho+P_i{}^j\sigma\\
\nabla_i\sigma-\mu_i
\end{pmatrix}.
$$
The occurrences of $\nabla_i$ on the right-hand side denote the
connection induced by the representative $g$ on the density bundles, or
that connection coupled with the Levi-Civita connection of $g$ in the case
of $\nabla_i\mu^j$.
$P_{ij}$ denotes the Schouten tensor of $g$.
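The verification that $\nabla h=0$ is short enough to record here: by the
definition above and the symmetry of $P_{ij}$,
$$
h(\nabla_iU,U)=\big(\nabla_i\rho-P_{ik}\mu^k\big)\sigma
+\mu_j\big(\nabla_i\mu^j+\delta_i{}^j\rho+P_i{}^j\sigma\big)
+\rho\big(\nabla_i\sigma-\mu_i\big)
=\sigma\nabla_i\rho+\mu_j\nabla_i\mu^j+\rho\nabla_i\sigma,
$$
which equals $\tfrac12\nabla_i\,h(U,U)$; polarization then gives $\nabla h=0$.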
A uniqueness theorem for such a tractor bundle
was proven in \S 2.2 of \cite{CG2}. We state the result
assuming a conformal manifold, whereas in \cite{CG2} the existence
of the conformal structure was part of the conclusion. Let $(M,c)$ be a
conformal manifold with tautological tensor ${\bf g}\in
\Gamma(S^2T^*M[2])$. Consider the following data. Let $\mathcal{T}$ be
a rank $n+2$ vector bundle over $M$ with metric $h$ of signature
$(p+1,q+1)$ and connection $\nabla$ such that $\nabla h=0$. Let $\mathcal{T}^1$ be
a null line subbundle of $\mathcal{T}$ equipped with an isomorphism
$\mathcal{T}^1\cong\mathcal{D}[-1]$. If $v\in T_xM$ and $U\in \Gamma(\mathcal{T}^1)$,
differentiating $h(U,U)=0$ shows that
$\nabla_vU \in (\mathcal{T}^1_x)^\perp$. The projection of $\nabla_vU$
onto $(\mathcal{T}^1_x)^\perp/\mathcal{T}^1_x$ is tensorial in $U$. Invoking
the isomorphism $\mathcal{T}^1\cong\mathcal{D}[-1]$, it follows that
$v\otimes U\mapsto \nabla_vU + \mathcal{T}^1$ induces a bundle map $\tau:TM\otimes
\mathcal{D}[-1]\rightarrow (\mathcal{T}^1)^\perp/\mathcal{T}^1$. The
metric $h$ determines a metric $h_0$ of signature $(p,q)$ on
$(\mathcal{T}^1)^\perp/\mathcal{T}^1$. The
data $(\mathcal{T},\mathcal{T}^1,h,\nabla)$ are said to be compatible with the conformal
structure if $\tau^*h_0={\bf g}$. We refer to \cite{CG2}
for the formulation of the curvature condition for $\nabla$ to be called
normal.
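We note for the reader's convenience that the two assertions used in this
construction are one-line computations. Differentiating $h(U,U)=0$ along $v$
using $\nabla h=0$ gives
$$
0=v\,h(U,U)=2h(\nabla_vU,U),
$$
so that $\nabla_vU\in(\mathcal{T}^1_x)^\perp$, and for $f\in C^\infty(M)$,
$$
\nabla_v(fU)=(vf)U+f\nabla_vU\equiv f\,\nabla_vU\mod \mathcal{T}^1_x,
$$
so that the projection of $\nabla_vU$ onto
$(\mathcal{T}^1_x)^\perp/\mathcal{T}^1_x$ depends only on $U(x)$.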
\bigskip
\noindent
{\bf \v{C}ap-Gover Uniqueness Theorem}. {\it Let $(M,c)$ be a conformal
manifold. Up to
isomorphism, there is a unique $(\mathcal{T},\mathcal{T}^1,h,\nabla)$ as above compatible
with the conformal structure with $\nabla$ normal.
}
\bigskip
Such a $\mathcal{T}$ is called a (or the) standard tractor bundle and $\nabla$ its
normal tractor connection. Even though $(\mathcal{T},\mathcal{T}^1,h,\nabla)$ is unique up
to isomorphism,
there are several different realizations. The tractor bundle and
connection constructed in \cite{BEG} satisfy the conditions and so provide
one realization.
Another realization of the standard tractor bundle discussed in \cite{CG2}
is via the ambient
construction of \cite{FG1}, \cite{FG2}. Let $\mathcal{G}\rightarrow M$ be the
metric bundle of $(M,c)$, i.e.
$\mathcal{G}=\{(x,g_x):x\in M, g\in c\}\subset S^2T^*M$. $\mathcal{G}$ carries
dilations $\delta_s$ for $s>0$ defined by $\delta_s(x,g)=(x,s^2g)$, and the
tautological 2-tensor ${\bf g}$ can be viewed as a section
${\bf g}\in\Gamma(S^2T^*\mathcal{G})$ satisfying $\delta_s^*{\bf g}=s^2{\bf g}$.
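Concretely, a choice of representative $g\in c$ identifies
$\mathcal{G}\cong M\times\mathbb R_+$ via $(x,t)\mapsto (x,t^2g_x)$, and in this
identification
$$
\delta_s(x,t)=(x,st),\qquad {\bf g}=t^2\,\pi^*g,
$$
where $\pi:\mathcal{G}\rightarrow M$ denotes the projection; the homogeneity
$\delta_s^*{\bf g}=s^2{\bf g}$ is immediate in this picture.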
An ambient metric $\widetilde{g}$ for $(M,c)$ is a metric of signature
$(p+1,q+1)$ on a dilation-invariant neighborhood $\cGt$ of $\mathcal{G}\times
\{0\}$ in $\mathcal{G}\times \mathbb R$
satisfying $\delta_s^*\widetilde{g} = s^2\widetilde{g}$, $\iota^*\widetilde{g}={\bf g}$,
and a vanishing condition on its Ricci curvature. (In order to construct
the standard tractor bundle and its normal connection, it suffices that the
tangential components of the Ricci curvature of $\widetilde{g}$ vanish when
restricted to $\mathcal{G}\times \{0\}$.) Here
$\iota:\mathcal{G}\rightarrow \mathcal{G}\times \mathbb R$ is defined by $\iota(z)=(z,0)$ and the
dilations extend to $\mathcal{G}\times \mathbb R$ acting in the $\mathcal{G}$ factor alone.
The ambient realization of $\mathcal{T}$ is defined as follows. The fiber $\mathcal{T}_x$
over $x\in M$ is
$$
\mathcal{T}_x= \left\{U\in\Gamma(T\cGt|_{\mathcal{G}_x}):\delta_s^*U=s^{-1}U\right\},
$$
where $\mathcal{G}_x$ denotes the fiber of $\mathcal{G}$ over $x$.
The homogeneity condition implies that $U$ is determined by its
value at any single point of $\mathcal{G}_x$, so $\mathcal{T}_x$ is a vector space of
dimension $n+2$.
The tractor metric $h$ and the normal tractor connection $\nabla$ can be
realized as the restrictions to $\mathcal{G}$ of $\widetilde{g}$ and its Levi-Civita
connection $\widetilde{\nabla}$. The null subbundle $\mathcal{T}^1$ is
the vertical bundle in $T\mathcal{G}\subset T\cGt|_\mathcal{G}$. The infinitesimal
dilation $T$ defines a global section of $\mathcal{T}^1[1]$, so determines the
isomorphism $\mathcal{T}^1\cong\mathcal{D}[-1]$. It can be verified that the tractor
bundle and connection defined this
way satisfy the conditions above, so the uniqueness
theorem implies that the ambient construction gives
a standard tractor bundle with its normal connection. An isomorphism
with the realization in \cite{BEG} is written down directly in \cite{GW} in
terms of a conformal representative.
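We record the homogeneities implicit in this description. A section of
$\mathcal{T}[w]$ corresponds to a field $U$ along $\mathcal{G}$ satisfying
$\delta_s^*U=s^{w-1}U$, and the generator $T$ of the dilation flow is invariant
under its own flow, $\delta_s^*T=T$, so $T$ is indeed a section of
$\mathcal{T}^1[1]$. Moreover
$$
\widetilde{g}(T,X)={\bf g}(T,X)=0,\qquad X\in T\mathcal{G},
$$
since $\iota^*\widetilde{g}={\bf g}$ and $T$ is vertical, so the vertical
bundle $\mathcal{T}^1$ is null, as asserted.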
We mention in passing that the formulation of the ambient construction in
\cite{CG2} appears to be more general than that above in that it allows an
arbitrary ambient manifold $\cGt$ with a free $\mathbb R_+$-action containing
$\mathcal{G}$ as a hypersurface. But at least near $\mathcal{G}$ there is no real gain in
generality: if $\cGt$ admits a metric $\widetilde{g}$ such that $\iota^*\widetilde{g}={\bf
g}$, then the normal bundle of $\mathcal{G}$ in $\cGt$ is trivial so that near
$\mathcal{G}$, $\cGt$ is diffeomorphic to a neighborhood of $\mathcal{G}\times \{0\}$ in
$\mathcal{G}\times \mathbb R$. This is because the 1-form dual to $T$ with respect to
$\widetilde{g}$ gives a global nonvanishing section of $(T\cGt/T\mathcal{G})^*$.
The third usual construction of the standard tractor bundle is as an
associated bundle to the Cartan bundle for the conformal structure. We
postpone discussion of this construction to \S\ref{trac}.
\section{Tanaka-Morimoto-\v{C}ap-Schichl Theorem}\label{TCS}
A fundamental result in the theory of parabolic geometries asserts
an equivalence of
categories between parabolic geometries of a particular type $(\mathfrak{g},P)$
and certain underlying structures.
There are different forms of the result due to Tanaka \cite{T},
Morimoto \cite{M}, and \v{C}ap-Schichl \cite{CSc}. We state a version
which is a slight extension of Theorem 3.1.14 in \cite{CSl} and refer
to it as the TM\v{C}S Theorem.
Let $\mathfrak{g}=\mathfrak{g}_{-k}\oplus \cdots \oplus \mathfrak{g}_k$ be a $|k|$-graded
semisimple Lie algebra with associated filtration
$\mathfrak{g}^i=\mathfrak{g}_{i}\oplus \cdots \oplus \mathfrak{g}_k$ and
subalgebras $\mathfrak{p}=\mathfrak{g}^0$ and $\mathfrak{g}_-=\mathfrak{g}_{-k}\oplus \cdots \oplus \mathfrak{g}_{-1}$.
Let $P$ be a Lie group with Lie algebra $\mathfrak{p}$ and let
$\operatorname{Ad}:P\rightarrow \operatorname{Aut}_{\operatorname{filtr}}(\mathfrak{g})$ be a representation of
$P$ as filtration-preserving Lie algebra automorphisms of $\mathfrak{g}$ such
that $p\mapsto\operatorname{Ad}(p)|_\mathfrak{p}$ is the usual adjoint representation of $P$ on
$\mathfrak{p}$.
Typically there is a Lie group $G$ with Lie algebra $\mathfrak{g}$ containing $P$ as
a parabolic subgroup with respect to the given grading, and $\operatorname{Ad}$ is the
restriction to $P$ of the adjoint representation of $G$. But we assume
neither that there exists such a $G$ nor that we have chosen one.
For fixed $|k|$-graded $\mathfrak{g}$,
two choices $(\mathfrak{g},P_1,\operatorname{Ad}_1)$ and $(\mathfrak{g},P_2,\operatorname{Ad}_2)$ will be
regarded as equivalent from the point of view of the TM\v{C}S Theorem if
there is an isomorphism $\gamma:P_1\rightarrow P_2$ of Lie groups which
induces the identity on the common Lie algebra $\mathfrak{p}$ of $P_1$ and $P_2$ and
which satisfies $\operatorname{Ad}_2\circ \gamma = \operatorname{Ad}_1$.
Given data $(\mathfrak{g},P,\operatorname{Ad})$ as above, the Levi subgroup $P_0\subset P$ is
defined by
$$
P_0=\{p\in P: \operatorname{Ad}(p)(\mathfrak{g}_i)\subset \mathfrak{g}_i, -k\leq i\leq k\}.
$$
We prefer the notation $P_0$ rather than the usual $G_0$ since we do not
choose a group $G$, and also to emphasize that $P_0$ depends on $P$.
A parabolic geometry of type $(\mathfrak{g},P,\operatorname{Ad})$ (or just $(\mathfrak{g},P)$ if the
representation $\operatorname{Ad}$ is understood) on a manifold $M$ is a
$P$-principal bundle $\mathcal{B}\rightarrow M$ together with a Cartan connection
$\omega:T\mathcal{B}\rightarrow \mathfrak{g}$. The definition of a Cartan connection depends
only on the data $(\mathfrak{g},P,\operatorname{Ad})$; see, for example, \cite{S}.
We refer to \cite{CSl} for the conditions on
the curvature of $\omega$ for the parabolic geometry to be called regular
and normal.
Next we formulate the notion of an underlying structure of type
$(\mathfrak{g},P,\operatorname{Ad})$ on a manifold $M$. The first part of the data consists of a
filtration
$TM=T^{-k}M\supset \cdots \supset T^{-1}M\supset \{0\}$ of $TM$
compatible with the Lie bracket such that at each point $x\in M$ the
induced Lie algebra structure
on the associated graded $\operatorname{gr}(T_xM)$ (the symbol algebra) is
isomorphic to $\mathfrak{g}_-$. We denote by $\mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$ the induced
frame bundle of $\operatorname{gr}(TM)$ whose structure group is the group
$\operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$ of
graded Lie algebra automorphisms of $\mathfrak{g}_-$ and whose fiber over
$x$ consists of all the graded Lie algebra isomorphisms
$\mathfrak{g}_-\rightarrow \operatorname{gr}(T_xM)$. The second part of the data is a
$P_0$-principal bundle $E\rightarrow M$ equipped with a bundle map
$\Phi:E\rightarrow \mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$ covering the identity on $M$ which
is equivariant with respect to the homomorphism
$\operatorname{Ad}:P_0\rightarrow \operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$ in the sense that
$\Phi(u.p)=\Phi(u).\operatorname{Ad}(p)$ for $p\in P_0$, $u\in E$. An underlying
structure of type $(\mathfrak{g},P,\operatorname{Ad})$ on $M$ is such a filtration of $TM$
together with such a $P_0$-principal bundle $E$ and map $\Phi$.
There are notions of morphisms of parabolic geometries and of underlying
structures of type $(\mathfrak{g},P,\operatorname{Ad})$ which make these into categories. A
morphism of parabolic
geometries $\mathcal{B}_1\rightarrow M_1$ and $\mathcal{B}_2\rightarrow M_2$ of type
$(\mathfrak{g},P,\operatorname{Ad})$ is a principal bundle morphism $\phi:\mathcal{B}_1\rightarrow \mathcal{B}_2$
such that $\phi^*\omega_2=\omega_1$. A morphism of underlying structures
$E_1\rightarrow M_1$ and $E_2\rightarrow M_2$
of type $(\mathfrak{g},P,\operatorname{Ad})$ is a principal bundle morphism $\phi:E_1\rightarrow
E_2$ which covers a filtration-preserving local diffeomorphism
$f:M_1\rightarrow M_2$ and
which is compatible with the maps $\Phi_1$, $\Phi_2$ in the sense that
$\Phi_2\circ\phi=f_*\circ\Phi_1$, where
$f_*:\mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM_1))\rightarrow \mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM_2))$ is the map on
the frame bundles induced by the differential of $f$. If $(\mathfrak{g},P_1,\operatorname{Ad}_1)$
and $(\mathfrak{g},P_2,\operatorname{Ad}_2)$ are equivalent from the point of view of the TM\v{C}S
Theorem as defined above, then composition of the principal bundle actions
with $\gamma$
induces an equivalence of categories between the categories of parabolic
geometries of
types $(\mathfrak{g},P_1,\operatorname{Ad}_1)$ and $(\mathfrak{g},P_2,\operatorname{Ad}_2)$ and between the categories of
underlying structures of types $(\mathfrak{g},P_1,\operatorname{Ad}_1)$ and $(\mathfrak{g},P_2,\operatorname{Ad}_2)$.
The simplest and most common situation is when $\operatorname{Ad}:P_0\rightarrow
\operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$ is injective. Then $\Phi$ is a bijection between $E$
and $\Phi(E)\subset\mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$, and $\Phi(E)$ is a subbundle of
$\mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$ with structure group $\operatorname{Ad}(P_0)\cong P_0$. It is not
hard to see that
this association defines an equivalence between the category of underlying
structures of type $(\mathfrak{g},P,\operatorname{Ad})$ and the category of reductions of
structure group of the frame bundle
$\mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$ of filtered manifolds of type $\mathfrak{g}_-$
to $\operatorname{Ad}(P_0)\subset\operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$.
In general, an underlying structure determines the bundle $\Phi(E)$,
which is still a reduction of $\mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$ to structure group
$\operatorname{Ad}(P_0)\subset\operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$.
But if $\operatorname{Ad}:P_0\rightarrow \operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$ is not injective then
the underlying structure contains more information. In all the cases we
will consider, the kernel of
$\operatorname{Ad}:P_0\rightarrow\operatorname{Aut}_{\operatorname{gr}}(\mathfrak{g}_-)$ is discrete. Then
$\operatorname{Ad}:P_0\rightarrow \operatorname{Ad}(P_0)$ and
$\Phi:E\rightarrow \Phi(E)$ are covering maps, and to fix an underlying
structure one
must also choose the lift $\Phi:E\rightarrow \Phi(E)$ of the reduced bundle
$\Phi(E)\subset \mathcal{F}(\mathfrak{g}_-,\operatorname{gr}(TM))$ to a $P_0$-bundle.
\bigskip
\noindent
{\bf TM\v{C}S Theorem}. {\it Let $\mathfrak{g}$ be a $|k|$-graded semisimple Lie
algebra
such that none of
the simple ideals of $\mathfrak{g}$ is contained in $\mathfrak{g}_0$, and such that
$H^1(\mathfrak{g}_-,\mathfrak{g})^1=0$. Let $P$ be a Lie group with Lie algebra $\mathfrak{p}$ and
let $\operatorname{Ad}:P\rightarrow \operatorname{Aut}_{\operatorname{filtr}}(\mathfrak{g})$ be a representation
of $P$ as filtration-preserving Lie algebra automorphisms of $\mathfrak{g}$ such
that $p\mapsto\operatorname{Ad}(p)|_\mathfrak{p}$ is the usual adjoint representation of $P$ on
$\mathfrak{p}$.
Then there is an equivalence of categories between
normal regular parabolic geometries of type $(\mathfrak{g},P,\operatorname{Ad})$ and underlying
structures of type $(\mathfrak{g},P,\operatorname{Ad})$. }
\bigskip
\noindent
Here $H^1(\mathfrak{g}_-,\mathfrak{g})^1$ denotes the 1-piece in the filtration of
the first Lie algebra cohomology group. All the examples we will consider
satisfy $H^1(\mathfrak{g}_-,\mathfrak{g})^1=0$.
The discussion in \cite{CSl} is in terms of categories of parabolic
geometries and underlying structures of type $(G,P)$ and assumes from the
outset that $P$ is a parabolic subgroup of $G$.
Our point in formulating parabolic geometries and underlying structures of
type $(\mathfrak{g},P,\operatorname{Ad})$ rather than type $(G,P)$ is not really to extend
the discussion to the case that such a $G$ might not exist. There is such
a group $G$ for all the examples we care about. Rather, the point is
to emphasize that the choice of a particular such $G$ is irrelevant as far
as the TM\v{C}S Theorem is concerned. The fact that the TM\v{C}S Theorem
holds in the generality stated above has been communicated to us by Andreas
\v{C}ap. The emphasis on $\mathfrak{g}$ rather than $G$ and further
generalization in this direction are fundamental aspects of the work of
Morimoto \cite{M}.
Consider the case of general conformal structures of signature
$(p,q)$, $p+q=n\geq 3$. The
filtration of $TM$ is trivial, the frame bundle $\mathcal{F}$ is the full frame
bundle of $M$, and a conformal structure is equivalent to a reduction of
the structure group of $\mathcal{F}$ to
$CO(p,q)=\mathbb R_+\, O(p,q)$. The Lie algebra $\mathfrak{g}$ is $\mathfrak{s}\mathfrak{o}(p+1,q+1)$ and
we will consider various possibilities for $P$. Take the quadratic
form defining $\mathfrak{s}\mathfrak{o}(p+1,q+1)$ to be $2x^0x^\infty +h_{ij}x^ix^j$ for some
$h_{ij}$ of signature $(p,q)$. Writing the matrices in terms of $1\times
n\times 1$ blocks, the Levi subgroups $P_0$ will be of the form
\begin{equation}\label{P0}
P_0=\left\{p=
\begin{pmatrix}
\lambda&0&0\\
0&m&0\\
0&0&\lambda^{-1}
\end{pmatrix}
\right\}
\end{equation}
with various restrictions on $\lambda\in \mathbb R\setminus \{0\}$ and $m\in
O(p,q)$. The matrices in $P$
will be block upper-triangular. $\mathfrak{g}_-\cong \mathbb R^n$ consists of matrices of
the form
$$
\begin{pmatrix}
0&0&0\\
x^i&0&0\\
0&-x_j&0
\end{pmatrix}
$$
with $x\in \mathbb R^n$, and $h_{ij}$ is used to lower the index. Under the
adjoint action, $p\in P_0$ acts on $\mathfrak{g}_-$ by $x\mapsto \lambda^{-1}mx$.
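This is a direct block computation: writing $X(x)$ for the displayed matrix,
one finds
$$
p\,X(x)\,p^{-1}=
\begin{pmatrix}
0&0&0\\
\lambda^{-1}mx&0&0\\
0&-(\lambda^{-1}mx)_j&0
\end{pmatrix}
=X(\lambda^{-1}mx),
$$
where the lower blocks are identified using the orthogonality relation
$x_k(m^{-1})^k{}_j=(mx)_j$ for $m\in O(p,q)$.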
A natural first choice is to take $P$ to be the subgroup $P^{\operatorname{ray}}$ of
$G=O(p+1,q+1)$ preserving the ray $\mathbb R_+ e_0$, with $\operatorname{Ad}$ the restriction of
the adjoint representation of $O(p+1,q+1)$. Then $P^{\operatorname{ray}}_0$ is given by
\eqref{P0} with the restrictions $\lambda>0$, $m\in O(p,q)$.
The map $\operatorname{Ad}:P_0^{\operatorname{ray}}\rightarrow CO(p,q)$ is an isomorphism, so an
underlying structure is exactly a conformal structure.
There is another choice of $P$ which is equivalent to $P^{\operatorname{ray}}$ from the
point of view of the TM\v{C}S Theorem. Namely, consider the subgroup
$PP$ of $PO(p+1,q+1)=O(p+1,q+1)/\{\pm I\}$ preserving the line through
$e_0$ in the projective action, with $\operatorname{Ad}$ induced by the inclusion of
$PP$ as a subgroup of $G=PO(p+1,q+1)$. The coset projection
$P^{\operatorname{ray}}\rightarrow PP$ is an isomorphism which maps one $\operatorname{Ad}$
representation to the other, so these choices of $P$ are equivalent from
the point of view of the TM\v{C}S Theorem.
If $n$ is odd, there is yet another choice of $P$ also equivalent from
the point of view of the TM\v{C}S Theorem. This is the subgroup $SP^\line$
of $SO(p+1,q+1)$ preserving the line through $e_0$, with $\operatorname{Ad}$
induced by the inclusion $SP^\line\subset O(p+1,q+1)$. Observe that
$SP^\line_0$ corresponds to $\lambda\neq 0$, $m\in SO(p,q)$. If
$\lambda<0$, $m\in SO(p,q)$, and $n$ is odd, then $\lambda^{-1}m$ has
negative determinant, and one sees easily that
$\operatorname{Ad}:SP_0^\line\rightarrow CO(p,q)$ is an isomorphism.
The map $A\mapsto \det(A)A$ is an isomorphism from $P^{\operatorname{ray}}$ to $SP^\line$
which maps the $\operatorname{Ad}$ representation of $P^{\operatorname{ray}}$ on $\mathfrak{s}\mathfrak{o}(p+1,q+1)$ to
that of $SP^\line$. Thus $P^{\operatorname{ray}}$
and $SP^\line$ are equivalent from the point of view of the TM\v{C}S
Theorem if $n$ is odd.
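That $A\mapsto \det(A)A$ takes values in $SO(p+1,q+1)$ precisely because $n$ is
odd is the parity computation
$$
\det\big(\det(A)A\big)=\det(A)^{n+2}\det(A)=\det(A)^{n+3}=1,
$$
since $\det(A)=\pm 1$ and $n+3$ is even; multiplicativity of the determinant
shows that the map is a homomorphism.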
The TM\v{C}S Theorem asserts an equivalence of categories between conformal
structures and normal parabolic geometries of type
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$. (The regularity condition is automatic for
conformal geometry since $\mathfrak{s}\mathfrak{o}(p+1,q+1)$ is $|1|$-graded.)
Thus, for each conformal manifold, there is a $P^{\operatorname{ray}}$-principal bundle
$\mathcal{B}^{\operatorname{ray}}$ carrying a normal Cartan connection, and it is unique up to
isomorphism. This is of course a classical result going back to Cartan.
We will call this parabolic geometry of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ the
canonical parabolic geometry realization of the conformal manifold $(M,c)$.
One may equally well choose to represent the canonical parabolic geometry
using the structure group $PP$ (or $SP^\line$ if $n$ is
odd), since these categories of parabolic geometries are equivalent.
It is instructive to identify the principal bundles for the homogeneous models.
For instance, suppose we take $M=S^p\times S^q$ to be the space of null
rays, with conformal structure determined by the metric $g_{S^p}-g_{S^q}$.
We have $M=O(p+1,q+1)/P^{\operatorname{ray}}$, so the canonical parabolic geometry
realization determined by the TM\v{C}S Theorem is $\mathcal{B}=O(p+1,q+1)$ with
Cartan connection
the Maurer-Cartan form. Alternately we can choose $M$ to be the
quadric $\mathcal{Q}=(S^p\times S^q)/\mathbb Z_2$ embedded in
$\P^{n+1}$ as the set of null lines. We can realize
$\mathcal{Q}=PO(p+1,q+1)/PP$ and view this as a parabolic geometry for $P^{\operatorname{ray}}$ via
the isomorphism $P^{\operatorname{ray}}\cong PP$. So the canonical parabolic geometry
realization of the quadric is
$PO(p+1,q+1)$ with its Maurer-Cartan form as Cartan connection. Thus
in this sense both
$PO(p+1,q+1)/PP$ and $O(p+1,q+1)/P^{\operatorname{ray}}$ are homogeneous models for
the category of parabolic geometries of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$.
If $n$ is odd, then there is an alternate
homogeneous space realization of $\mathcal{Q}$ using $SP^\line$; namely as
$\mathcal{Q}=SO(p+1,q+1)/SP^\line$.
The uniqueness assertion inherent in the TM\v{C}S Theorem implies that this
realization must be isomorphic to that above. Indeed,
the map $A\mapsto \det(A)A$ determines an isomorphism
$PO(p+1,q+1)\rightarrow SO(p+1,q+1)$ of the two parabolic geometry
realizations of $\mathcal{Q}$.
Next let us choose $P$ to be the subgroup $P^\line$ of $O(p+1,q+1)$
preserving the line spanned by the first basis vector $e_0$, with $\operatorname{Ad}$
induced by the inclusion in $G=O(p+1,q+1)$. The Levi
factor $P^\line_0$ corresponds to
the conditions $\lambda\neq 0$, $m\in O(p,q)$. We have
$\operatorname{Ad}(P^\line_0)=CO(p,q)$, so an underlying structure includes the data of a
conformal structure. But now $\operatorname{Ad}$ is not injective; its kernel is $\{\pm
I\}$. So to determine an underlying structure we must additionally choose
a lift $E$ of the conformal frame bundle to a $P_0^\line$-bundle.
Such a lift always exists since $P_0^\line$ is a product:
$P^\line_0\cong P_0^{\operatorname{ray}}\times \{\pm I\}$. If $\mathcal{F}_c$ denotes the
conformal frame bundle of $M$ with
structure group $CO(p,q)\cong P_0^{\operatorname{ray}}$, then we can take
$E=\mathcal{F}_c\times \{\pm I\}$ with the product action of $P^\line_0$, and can
take the map $\Phi$ in the definition of underlying structures to be the
projection onto $\mathcal{F}_c$. Since $P^\line\cong P^{\operatorname{ray}}\times \{\pm I\}$, if
$(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$ denotes the canonical parabolic geometry
realization of the conformal manifold, then the bundle $\mathcal{B}^\line$ produced
by the TM\v{C}S Theorem
for this choice of $E$ is just $\mathcal{B}^\line=\mathcal{B}^{\operatorname{ray}}\times \{\pm I\}$,
and the Cartan connection is the pullback of $\omega^{\operatorname{ray}}$ to $\mathcal{B}^\line$
under the obvious projection.
However, depending on the topology of $M$, there may be a number of other
inequivalent lifts $E$
of the conformal frame bundle to a $P^\line_0$-bundle, which will determine
inequivalent $P^\line$-principal bundles $\mathcal{B}$ with normal Cartan
connection via the TM\v{C}S Theorem. For instance, consider the quadric
$\mathcal{Q}$. The product bundle
constructed in the previous paragraph gives rise to the realization
$\mathcal{Q}=(PO(p+1,q+1)\times \{\pm I\})/P^\line$, with Cartan bundle
$\mathcal{B}=PO(p+1,q+1)\times \{\pm I\}$. On the other hand, the geometrically
obvious realization of
$\mathcal{Q}$ as a homogeneous space for $P^\line$ is as $\mathcal{Q}=O(p+1,q+1)/P^\line$.
If $p=0$ or
$q=0$, then $\mathcal{Q}=S^n$ is simply connected so there is only one lift. Indeed,
$O(n+1,1)\cong PO(n+1,1)\times \{\pm I\}$, corresponding to the
decomposition into time-preserving and time-reversing transformations. But
if $pq\neq 0$, then $O(p+1,q+1)$ and $PO(p+1,q+1)\times \{\pm I\}$ are
inequivalent as $P^\line$-principal bundles over $\mathcal{Q}$, as we will see in
the next section.
There are analogues of all these choices for oriented conformal
structures. In this case the structure group reduction is to
$CSO(p,q)=\mathbb R_+SO(p,q)\subset
CO(p,q)$. A natural choice is to take $P$ to be $SP^{\operatorname{ray}}$, the subgroup of
$SO(p+1,q+1)$ preserving the null ray, for which $SP^{\operatorname{ray}}_0$ corresponds to
$\lambda>0$, $m\in SO(p,q)$. We have that $\operatorname{Ad}:SP^{\operatorname{ray}}_0\rightarrow
CSO(p,q)$ is an isomorphism, so in all dimensions and signatures underlying
structures of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^{\operatorname{ray}})$ are the same as oriented
conformal structures. The parabolic geometry of type
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^{\operatorname{ray}})$ determined by the TM\v{C}S Theorem is a
reduction to structure group $SP^{\operatorname{ray}}$ of the canonical parabolic geometry
of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ determined by the same conformal
structure but forgetting the orientation.
If $n$ is even, for oriented conformal structures a choice of $P$
equivalent to $SP^{\operatorname{ray}}$ from the point of view of the TM\v{C}S
Theorem is the subgroup $PSP$ of $PSO(p+1,q+1)$ preserving the null line in
the projective action. In all dimensions and signatures, a homogeneous
model is $S^p\times S^q= SO(p+1,q+1)/SP^{\operatorname{ray}}$. The quadric $\mathcal{Q}$ is
orientable if $n$ is even, and in this case it provides another homogeneous
model for parabolic geometries of type
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^{\operatorname{ray}})$: $\mathcal{Q}= PSO(p+1,q+1)/PSP$.
For even $n$, a choice of $P$ for oriented conformal structures analogous
to $P^\line$ above is $SP^\line$, since $\lambda^{-1}m$ remains
orientation-preserving for $\lambda<0$ if $n$ is even.
For this choice the structure group reduction
is to $\operatorname{Ad}(SP_0^\line)=CSO(p,q)$. But $\operatorname{Ad}$ has kernel $\{\pm I\}$, so a
lift of the oriented conformal frame bundle must be chosen to determine an
underlying structure. One has the
product decomposition $SP_0^\line\cong SP_0^{\operatorname{ray}}\times \{\pm I\}$, so one
choice is always the product lift. But if $pq\neq 0$, the realization
$\mathcal{Q} = SO(p+1,q+1)/SP^\line$ corresponds to an inequivalent lift.
\section{Tractor Bundles as Associated Bundles}\label{trac}
Let $M$ be a manifold with a parabolic geometry $(\mathcal{B},\omega)$ of type
$(\mathfrak{g},P,\operatorname{Ad})$.
There is an associated vector bundle $\mathcal{V}\rightarrow M$ corresponding to
any finite-dimensional representation $\rho:P\rightarrow GL(V)$. The
sections of $\mathcal{V}$ can be
identified with the maps $f:\mathcal{B}\rightarrow V$ which are $P$-equivariant in
the sense that $R_p^*f=\rho(p^{-1})f$ for all $p\in P$.
Suppose moreover that $(V,\rho)$ is actually a $(\mathfrak{g},P)$-representation,
that is
there is an action $\rho:\mathfrak{g}\rightarrow \mathfrak{g}\mathfrak{l}(V)$ of $\mathfrak{g}$ on $V$
which is compatible with the $P$-action in the sense
that the infinitesimal action of $\mathfrak{p}$ obtained by differentiating the
action of $P$ agrees with the restriction of the action of $\mathfrak{g}$ to $\mathfrak{p}$.
We will say that the $(\mathfrak{g},P)$-module $(V,\rho)$ is $\operatorname{Ad}$-compatible if
$$
\rho\left(\operatorname{Ad}(p)(Z)\right)=\rho(p)\rho(Z)\rho(p^{-1})\qquad\quad
p\in P,\;Z\in \mathfrak{g}.
$$
In this case there is an induced linear connection $\nabla$ on $\mathcal{V}$
defined as follows. Let $f$ be a section of $\mathcal{V}$ and let $X$ be a vector
field on $M$. Choose a lift ${\bar X}$ of $X$ to $\mathcal{B}$ and set
\begin{equation}\label{connection}
\nabla_Xf={\bar X}f+\rho(\omega({\bar X}))f.
\end{equation}
The fact that $\omega$ reproduces generators of fundamental vector fields,
the equivariance of $f$,
and the compatibility of the $(\mathfrak{g},P)$-actions implies that the right-hand
side is unchanged upon adding a vertical vector field to ${\bar X}$. Thus
$\nabla_X f$ is independent of the choice of lift ${\bar X}$. So one may
as well take ${\bar X}$ to be $P$-invariant. Then ${\bar X}f$ is clearly
$P$-equivariant, and one checks easily
that the $\operatorname{Ad}$-compatibility implies that $\rho(\omega({\bar X}))f$ is
$P$-equivariant. Thus $\nabla_Xf$ is a section of $\mathcal{V}$. The
resulting map $(X,f)\mapsto \nabla_Xf$ defines a connection on $\mathcal{V}$.
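The vertical-field computation is worth displaying explicitly. If $\zeta_Z$
denotes the fundamental vector field generated by $Z\in\mathfrak{p}$, then
differentiating the equivariance of $f$ gives $\zeta_Zf=-\rho(Z)f$, while
$\omega(\zeta_Z)=Z$, so
$$
\zeta_Zf+\rho\big(\omega(\zeta_Z)\big)f=-\rho(Z)f+\rho(Z)f=0,
$$
which is the statement that the right-hand side of \eqref{connection} is
unchanged upon adding a vertical field to ${\bar X}$.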
We call $(\mathcal{V},\nabla)$ the tractor bundle and tractor connection
for the parabolic geometry $(\mathcal{B},\omega)$ associated to the
$\operatorname{Ad}$-compatible $(\mathfrak{g},P)$-module $(V,\rho)$.
As discussed in the previous section, typically one can find a Lie group
$G$ with Lie algebra $\mathfrak{g}$ which contains
$P$ as a parabolic subgroup and which induces the given $\operatorname{Ad}$.
If $(V,\rho)$ is any finite-dimensional representation of $G$, the
induced representations of $\mathfrak{g}$ and $P$ define a
$(\mathfrak{g},P)$-module structure which is automatically $\operatorname{Ad}$-compatible. So
there is a tractor bundle and connection associated to any
finite-dimensional representation of any such group $G$. We will call this
the tractor bundle and connection associated to the restriction to
$(\mathfrak{g},P)$ of the $G$-module $(V,\rho)$.
We saw in \S\ref{TCS} that one can choose different normal
parabolic geometries corresponding to a given conformal manifold $(M,c)$.
As we will see, different choices can give rise to different bundles
associated
to the same $O(p+1,q+1)$-module $(V,\rho)$. Recall that we have a
canonical parabolic geometry corresponding to $(M,c)$:
the normal parabolic geometry of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$. Applying
the associated bundle construction for this choice gives a canonical
tractor bundle and connection associated to any $O(p+1,q+1)$-module
$(V,\rho)$.
We first compare tractor bundles and connections for the product parabolic
geometry $(\mathcal{B}^\line,\omega^\line)$ of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$
with those for the canonical parabolic geometry.
Recall that the product parabolic geometry was defined as follows.
If $(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$ is the canonical parabolic geometry,
then $\mathcal{B}^\line=\mathcal{B}^{\operatorname{ray}}\times \{\pm I\}$ with the product action of
$P^\line\cong P^{\operatorname{ray}}\times \{\pm I\}$, and $\omega^\line$ is the pullback
of $\omega^{\operatorname{ray}}$ under the projection $\mathcal{B}^\line\rightarrow \mathcal{B}^{\operatorname{ray}}$. The
bundle $\mathcal{B}^\line$ may alternately be described as the $P^\line$-principal
bundle associated to the $P^{\operatorname{ray}}$-principal bundle $\mathcal{B}^{\operatorname{ray}}$ by the action
of $P^{\operatorname{ray}}$ on $P^\line$ by left translation.
Let $(V,\rho^{\operatorname{ray}})$ be an $\operatorname{Ad}$-compatible
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$-module. We will say that an $\operatorname{Ad}$-compatible
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$-module $(V,\rho^\line)$ extends $(V,\rho^{\operatorname{ray}})$
if $\rho^\line=\rho^{\operatorname{ray}}$ on $\mathfrak{s}\mathfrak{o}(p+1,q+1)$ and
$\rho^\line|_{P^{\operatorname{ray}}}=\rho^{\operatorname{ray}}$. For a given $(V,\rho^{\operatorname{ray}})$, there are
always at least two choices of such $\rho^\line$; namely those determined
by the two choices $\rho^\line(-I)=\pm I_V$. One checks easily that either
choice of $\pm$ defines an $\operatorname{Ad}$-compatible
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$-module.
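Indeed, since $P^\line\cong P^{\operatorname{ray}}\times\{\pm I\}$, the only condition to
check beyond those satisfied by $\rho^{\operatorname{ray}}$ is $\operatorname{Ad}$-compatibility at
$p=-I$, which holds for either sign:
$$
\rho^\line(-I)\,\rho^\line(Z)\,\rho^\line(-I)^{-1}
=(\pm I_V)\,\rho^\line(Z)\,(\pm I_V)
=\rho^\line(Z)=\rho^\line\big(\operatorname{Ad}(-I)(Z)\big),
$$
since $\operatorname{Ad}(-I)$ is the identity on $\mathfrak{s}\mathfrak{o}(p+1,q+1)$.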
\begin{proposition}\label{Pline}
Let $(M,c)$ be a conformal manifold. Let $(V,\rho^{\operatorname{ray}})$ be an
$\operatorname{Ad}$-compatible $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$-module and let $(V,\rho^\line)$
be an $\operatorname{Ad}$-compatible $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$-module which extends
$\rho^{\operatorname{ray}}$. Then the tractor bundle and connection
associated to the $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$-module $(V,\rho^\line)$
for the product parabolic geometry $(\mathcal{B}^\line,\omega^\line)$
are naturally isomorphic to the tractor bundle and connection
associated to the $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$-module $(V,\rho^{\operatorname{ray}})$
for the canonical parabolic geometry $(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$.
\end{proposition}
\begin{proof}
We first claim that the bundle associated to
$(V,\rho^\line)$ for $(\mathcal{B}^\line,\omega^\line)$
is isomorphic to that associated to
$(V,\rho^{\operatorname{ray}})$ for $(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$.
This is a special case of the following
general fact, the proof of which is straightforward.
Suppose that $P_1$ is a Lie subgroup of a Lie group $P_2$ and $\mathcal{B}_1$ is a
$P_1$-principal bundle over a manifold $M$. Let $\mathcal{B}_2$ be the
$P_2$-principal bundle over $M$ associated to the action of $P_1$ on $P_2$
by left
translation. Let $(V,\rho)$ be a $P_2$-module and $\mathcal{V}_2$ the vector
bundle associated to $(V,\rho)$ for $\mathcal{B}_2$. Then $\mathcal{V}_2$ is naturally
isomorphic as a smooth vector bundle to the vector bundle $\mathcal{V}_1$
associated to $(V,\rho|_{P_1})$ for $\mathcal{B}_1$.
It is clear from \eqref{connection} that the tractor connections induced by
the Cartan connections $\omega^{\operatorname{ray}}$ and $\omega^\line$ correspond under
this isomorphism, since $\mathcal{B}^{\operatorname{ray}}$ can be embedded as an open subset of
$\mathcal{B}^\line$ on which $\omega^\line$ restricts to $\omega^{\operatorname{ray}}$.
\end{proof}
The following corollary is an immediate consequence of
Proposition~\ref{Pline}.
\begin{corollary}\label{Plinecor}
If $(V,\rho)$ is an $O(p+1,q+1)$-module, then the tractor bundle and
connection for the product parabolic geometry $(\mathcal{B}^\line,\omega^\line)$
associated to the restriction to $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$ of $(V,\rho)$
are naturally isomorphic to the
tractor bundle and connection
for the canonical parabolic geometry $(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$
associated to the restriction to
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ of $(V,\rho)$.
\end{corollary}
The tractor bundle and connection associated to a given
$O(p+1,q+1)$-module for other normal parabolic geometry realizations of a
conformal manifold may be different from those for the canonical parabolic
geometry. The basic case is the standard
representation $\mathbb V$ of $G=O(p+1,q+1)$, since any finite-dimensional
$O(p+1,q+1)$-module is isomorphic to a submodule of a direct sum
of tensor powers of the standard representation. Consider first the
canonical parabolic geometry.
\begin{proposition}\label{Pray}
Let $(M,c)$ be a conformal manifold. Let $(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$ be the
canonical parabolic geometry of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$.
Let $(\mathcal{T},\nabla)$ be the bundle and connection associated to the
restriction to $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ of the standard
representation $\mathbb V$ of $O(p+1,q+1)$. Then $(\mathcal{T},\nabla)$ is a standard
tractor bundle with normal connection.
\end{proposition}
\begin{proof}
Since the action of $P^{\operatorname{ray}}\subset O(p+1,q+1)$ preserves the quadratic form
defining $O(p+1,q+1)$, it follows that the $S^2\mathbb V^*$-valued constant
function on
$\mathcal{B}$ whose value at each point is this quadratic form is
$P^{\operatorname{ray}}$-equivariant. So
it defines a metric $h$ on $\mathcal{T}$. Since the quadratic form
is annihilated by $\mathfrak{s}\mathfrak{o}(p+1,q+1)$, the formula
\eqref{connection} for the connection shows that $\nabla h=0$. Now
$P^{\operatorname{ray}}$ acts on $e_0$ by multiplication by $\lambda>0$.
It
follows that $e_0$ determines a nonvanishing global section of $\mathcal{T}[1]$.
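(In the equivariant-function picture, with the convention that $\mathcal{D}[-1]$
is the bundle associated to the character $p\mapsto\lambda$ of $P^{\operatorname{ray}}$, this
is the observation that the constant function $f\equiv e_0$ on $\mathcal{B}^{\operatorname{ray}}$
satisfies
$$
R_p^*f=e_0=\lambda\,\rho(p^{-1})e_0,
$$
where $\rho$ denotes the standard representation, and this is precisely the
equivariance defining a section of $\mathcal{T}\otimes\mathcal{D}[1]$.)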
This section determines a null rank 1 subbundle $\mathcal{T}^1$ of $\mathcal{T}$ together
with an
isomorphism $\mathcal{T}^1\cong \mathcal{D}[-1]$. The compatibility of the data with the
conformal structure and the normality of $\nabla$ follow from the fact that
$\omega$ is the normal Cartan connection for the structure;
see \cite{CG1}, \cite{CG2}. Thus $(\mathcal{T},\nabla)$ possesses the structure
defining a standard tractor bundle with normal connection.
\end{proof}
Recall that if $n$ is odd, then $P^{\operatorname{ray}}$ and $SP^\line$ are equivalent
from the point of view of the TM\v{C}S Theorem. So by composing the
principal bundle action on $\mathcal{B}^{\operatorname{ray}}$ with the inverse of the isomorphism
$A\mapsto (\det A)A$ from $P^{\operatorname{ray}}$ to $SP^\line$, the canonical parabolic
geometry $(\mathcal{B}^{\operatorname{ray}},\omega^{\operatorname{ray}})$ can be viewed
as a parabolic geometry of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^\line)$. So there is
a tractor bundle and connection associated to the restriction to
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^\line)$ of the standard representation of
$O(p+1,q+1)$. This tractor bundle can alternately be
described as the bundle associated to the restriction to
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ of the representation $\det \otimes \mathbb V$ of
$O(p+1,q+1)$ for the canonical parabolic geometry.
\begin{proposition}\label{SPline}
Let $(M,c)$ be a nonorientable odd-dimensional conformal manifold.
Let $(\mathcal{B},\omega)$ be the corresponding normal parabolic geometry of type
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^\line)$. The tractor bundle with normal connection
associated to the
restriction to $(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^\line)$ of the standard
representation $\mathbb V$ of $O(p+1,q+1)$ is not a standard tractor bundle.
\end{proposition}
\begin{proof}
The group $SP^\line$ preserves a volume form on $\mathbb V$. There is
an induced nonvanishing volume form for the associated bundle, so
the tractor bundle is orientable. But we saw in \S\ref{standard}
that the standard tractor
bundle constructed in \cite{BEG} is orientable if and only if $M$ is
orientable. Since standard tractor bundles are unique up to isomorphism,
it follows that the associated bundle is not a standard tractor bundle if
$M$ is not orientable.
\end{proof}
Recall that the quadric $\mathcal{Q}$ is not orientable if $n$ is odd and $pq\neq
0$. Therefore we conclude:
\begin{corollary}\label{quadodd}
Let $n$ be odd and $pq\neq 0$. Represent $\mathcal{Q}=SO(p+1,q+1)/SP^\line$.
The tractor bundle on $\mathcal{Q}$ associated to the standard representation of
$SP^\line$ is not a standard tractor bundle.
\end{corollary}
Proposition~\ref{Pray} and Corollary~\ref{Plinecor} imply that for the
product parabolic geometry of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$ on a general
conformal manifold, the tractor
bundle and connection associated to the standard representation are a
standard tractor bundle with its normal connection. But as we saw in the
last section, there may be other normal parabolic
geometries of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$ corresponding to the
same conformal structure, and the product parabolic geometry might
not be the most geometrically natural choice. For other choices the
associated tractor bundle need not be a standard tractor bundle.
\begin{proposition}\label{Plinequadric}
Represent $\mathcal{Q}=O(p+1,q+1)/P^\line$. If $pq\neq 0$, then
the tractor bundle on $\mathcal{Q}$ associated to the standard representation of
$P^\line$ is not a standard tractor bundle.
\end{proposition}
\begin{proof}
The null line subbundle $\mathcal{T}^1$ is associated to the action of $P^\line$ on
the invariant subspace $\operatorname{span}\{e_0\}$. It is easily
seen that this associated bundle is the tautological bundle, whose fiber at
a null line is the line itself. But the tautological bundle on $\mathcal{Q}$ is
not trivial if $pq\neq 0$, so it cannot be isomorphic to $\mathcal{D}[-1]$.
\end{proof}
\noindent
In Proposition~\ref{Plinequadric}, $\mathcal{T}^1$ is associated to the
representation of $P^\line$ in which $p$ in \eqref{P0} acts by $\lambda$,
while $\mathcal{D}[-1]$ is associated to the representation in which $p$
acts by $|\lambda|$. For this homogeneous space these associated
bundles are not equivalent.
\begin{remark}
The tractor bundle on $\mathcal{Q}$ in Proposition~\ref{Plinequadric} is trivial
since it is a bundle on a homogeneous space $G/P$ associated to the
restriction to $P$ of a representation of $G$. In particular it is
orientable. So if $n$ is odd, an alternate proof of
Proposition~\ref{Plinequadric} is to derive a contradiction to
orientability of $\mathcal{T}$ as in the proof of Proposition~\ref{SPline} and
Corollary~\ref{quadodd}.
But if $n$ is even, $\mathcal{Q}$ is orientable and there is no contradiction to
orientability of $\mathcal{T}$. In this case the contradiction concerns the
orientability (equivalently, the triviality) of $\mathcal{T}^1$, not of $\mathcal{T}$.
Observe also the following curious state of affairs in
Proposition~\ref{Plinequadric} when $n$ is odd. The
standard tractor bundle on $\mathcal{Q}$ is nontrivial but its distinguished null
line subbundle is trivial. By contrast, the associated tractor bundle
is trivial but its distinguished null line subbundle is nontrivial.
\end{remark}
One encounters the same phenomena for oriented conformal structures.
Recall
from the previous section that oriented conformal structures are equivalent
to normal parabolic geometries of type $(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^{\operatorname{ray}})$.
The same
proof as in Proposition~\ref{Pray} shows that the associated bundle for the
standard representation of $(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^{\operatorname{ray}})$ is a standard
tractor bundle. If $n$ is even, $SP^\line$ factors as
$SP^\line=SP^{\operatorname{ray}}\times \{\pm I\}$, and the same proof as in
Proposition~\ref{Pline} shows
that the associated bundle for the product
$SP^\line$-principal bundle is a standard tractor bundle. But if $n$ is
even and the quadric is realized as $\mathcal{Q}=SO(p+1,q+1)/SP^\line$, then the
proof of Proposition~\ref{Plinequadric} shows that
the associated bundle to the standard representation of $SP^\line$ is not a
standard tractor bundle if $pq\neq 0$.
In the theory of Cartan geometries one sometimes declares a particular
connected homogeneous space $G/P$ to be the model,
and a tractor bundle on $G/P$ to be a bundle associated to the restriction
to $P$ of a representation of $G$. Such tractor bundles are necessarily
trivial and the induced connection is the usual flat connection on a
trivial bundle. For definite signature conformal
structures the natural choice is to take the model to be the quadric
$\mathcal{Q}=S^n$, realized either as $O(n+1,1)/P^\line$ or as $O_+(n+1,1)/P^{\operatorname{ray}}$,
where $O_+(n+1,1)$ denotes the
time-preserving subgroup. The above discussion shows that for either
realization the bundle associated to the standard representation of $G$ is
the standard tractor bundle and its normal
connection is the induced flat connection. For indefinite signature
conformal structures, a natural
choice is to take the model to be $S^p\times S^q=O(p+1,q+1)/P^{\operatorname{ray}}$ and
again the same statements hold. (The modification is necessary for
definite signature since $O(n+1,1)/P^{\operatorname{ray}}$ is not connected.) It is also
possible to view the quadric as the homogeneous model in the case of
indefinite signature. Since $PO(p+1,q+1)$ does not admit a standard
representation, the realization $\mathcal{Q}=PO(p+1,q+1)/PP$ does not admit a
tractor bundle associated to the standard representation under
the framework of this paragraph. But the
realization $\mathcal{Q}=O(p+1,q+1)/P^\line$ does. The general results stated
above of course remain true: the
bundle associated to the standard representation of $O(p+1,q+1)$ is trivial
and inherits the usual flat connection. But this is not
the standard tractor bundle: the standard tractor bundle has no nontrivial
parallel sections on $\mathcal{Q}$. One must exercise similar care in
interpreting other results about homogeneous models. For instance, it is a
general result (\cite{CSS}) that on a homogeneous parabolic geometry $G/P$,
a BGG sequence resolves the constant sheaf determined by
the inducing representation of $G$. But one must keep in mind that the
bundles in the BGG sequences are all defined as associated bundles. For
example, for the BGG sequence associated to
the standard representation $\mathbb V$ on $\mathcal{Q}=O(p+1,q+1)/P^\line$ with $pq\neq
0$, the constant sheaf $\mathbb V$ is realized as the global kernel of the first
BGG operator
$\operatorname{tf}(\nabla^2+P)$ acting not on the bundle of densities $\mathcal{D}[1]$, but on a
twisted version thereof (the dual to the tautological bundle), and this
twisted version arises as the projecting part of the associated bundle to
$\mathbb V$, which is not the standard tractor bundle for the conformal structure
on $\mathcal{Q}$. The global kernel of $\operatorname{tf}(\nabla^2+P)$ acting on $\mathcal{D}[1]$ for
$\mathcal{Q}$ is trivial if $pq\neq 0$.
Similar issues arise in the consideration of conformal holonomy.
We follow the usual practice of defining the conformal holonomy of a
conformal manifold to be the holonomy of
a standard tractor bundle with its normal tractor connection. This is
well-defined by the \v{C}ap-Gover Uniqueness Theorem. But the above
considerations demonstrate that one must be careful if one is realizing the
standard tractor bundle as an associated bundle. The holonomy of a
tractor bundle defined as an associated bundle to a
standard representation might not equal the conformal holonomy if the
principal bundle is not chosen correctly. This
happens already for the quadric $\mathcal{Q}$ if $pq\neq 0$. If we realize
$\mathcal{Q}=O(p+1,q+1)/P^\line$, then the tractor bundle associated to the
standard representation of $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^\line)$ has trivial holonomy.
But as discussed above, the
standard tractor bundle of $\mathcal{Q}$ is the bundle associated to the standard
representation of $(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ for the realization
$\mathcal{Q}=PO(p+1,q+1)/PP$, viewed as a parabolic geometry for
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),P^{\operatorname{ray}})$ via the isomorphism $P^{\operatorname{ray}}\cong PP$.
Its holonomy is $\{\pm I\}$, since parallel
translation in $S^p\times S^q$ to the antipodal point induces $-I$ on a
fiber of the standard tractor bundle on $\mathcal{Q}$. Another instance of this
is the following.
\begin{proposition}\label{hol}
Let $(M,c)$ be a nonorientable odd-dimensional conformal manifold. Recall
from the TM\v{C}S Theorem that up to isomorphism there is a unique
normal parabolic geometry of type
$(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^\line)$ corresponding to $(M,c)$. Let $(\mathcal{T},\nabla)$
be the bundle and connection associated to the
restriction to $(\mathfrak{s}\mathfrak{o}(p+1,q+1),SP^\line)$ of the standard representation
of $O(p+1,q+1)$. Then the holonomy of $(\mathcal{T},\nabla)$ is not equal to the
conformal holonomy.
\end{proposition}
\begin{proof}
The argument is similar to the proof of
Proposition~\ref{SPline}. The standard representation of $SP^\line$
preserves a volume form, so there is an induced section of the
associated bundle. The volume form is preserved also by $\mathfrak{s}\mathfrak{o}(p+1,q+1)$,
so this section is parallel. Thus the holonomy of the associated bundle
for $SP^\line$ is contained in $SO(p+1,q+1)$. But if the holonomy of
the standard tractor bundle is contained in $SO(p+1,q+1)$, then the
standard tractor bundle is orientable, so $M$ is orientable.
\end{proof}
These issues
concerning standard tractor bundles as associated bundles arose in our work
\cite{GW} concerning conformal structures and ambient metrics of holonomy
$G_2$ (by this we mean the split real form throughout this discussion).
Nurowski \cite{N} showed that a generic 2-plane field on a 5-manifold $M$
induces a conformal structure of signature $(2,3)$ on $M$. The TM\v{C}S
Theorem implies that generic 2-plane
fields on oriented 5-manifolds are the underlying structures corresponding
to normal regular parabolic geometries of type $(\mathfrak{g}_2,SQ^{\operatorname{ray}})$, where
$SQ^{\operatorname{ray}}$ is the
subgroup of $G_2$ preserving a null ray, analogous to $SP^{\operatorname{ray}}$ above. For
generic 2-plane fields on nonorientable manifolds one must change the group
$P=SQ^{\operatorname{ray}}$ to allow
orientation-reversing transformations in $\operatorname{Ad}(P_0)$. A first guess is to
take $P$ to be $SQ^\line$, the subgroup of $G_2$ preserving a null line.
The TM\v{C}S Theorem implies that the category of generic 2-plane fields on
general 5-manifolds is equivalent to the category of normal regular
parabolic geometries of type $(\mathfrak{g}_2,SQ^\line)$. But just as in
Proposition~\ref{SPline}, the associated bundle to the restriction to
$(\mathfrak{g}_2,SQ^\line)$ of the standard representation of $G_2$ need not be the
standard tractor bundle for Nurowski's induced conformal structure; in
fact, it cannot be if $M$ is not orientable.
By analogy with the situation above for general conformal structures,
instead of $SQ^\line$ one should use the subgroup $Q^{\operatorname{ray}}$ of
$\{\pm I\}G_2$
preserving a null ray. The TM\v{C}S Theorem again gives an equivalence of
categories with normal regular parabolic geometries of type
$(\mathfrak{g}_2,Q^{\operatorname{ray}})$. And now, just as in Proposition~\ref{Pray},
the tractor bundle associated to the restriction to $(\mathfrak{g}_2,Q^{\operatorname{ray}})$ of the
standard representation of $\{\pm I\}G_2\subset O(3,4)$ is the standard
tractor bundle of Nurowski's conformal structure with its normal connection.
$Q^{\operatorname{ray}}$ is isomorphic to $SQ^\line$, but they are embedded in $O(3,4)$
differently, just as for $P^{\operatorname{ray}}$ and $SP^\line$ above. Using the
realization of the standard tractor
bundle as the associated bundle for $(\mathfrak{g}_2,Q^{\operatorname{ray}})$, the same
arguments as in \cite{HS}, \cite{GW} for the orientable case now show that
for general $M$, Nurowski's conformal structures are characterized by
having conformal
holonomy contained in $\{\pm I\}G_2$, and in the real-analytic case the
corresponding ambient metrics have metric holonomy contained in
$\{\pm I\}G_2$.
\section{Introduction}\label{sec:Introduction}
The pressing demand for high data rates in wireless communications networks, coupled with the fact that mobile devices are physically small and power-limited by batteries, has made energy harvesting (EH) a promising solution for green communications \cite{varshney2008transporting,Grover_Tesla_C10}. Among the various available resources for EH, radio-frequency (RF)-enabled wireless energy transfer (WET) has attracted considerable interest owing to its long operating range, the ubiquity of electromagnetic radiation, and its effectiveness for energy multicasting, which motivates the paradigm of simultaneous wireless information and power transfer (SWIPT), e.g., \cite{Rui_TWC_SWIPT_J13,ZhouXun_WIPT_J13,Liu2013opportunistic,Xu2013Multiuser}.
A typical SWIPT system consists of one access point (AP) that has a constant power supply and broadcasts wireless signals to a group of user terminals, among which some intend to decode information, referred to as information receivers (IRs), while others scavenge energy from the ambient radio signals, named energy receivers (ERs). This gives rise to a challenging physical (PHY)-layer security issue: the ERs may eavesdrop on the information sent to the IRs due to their close proximity to the AP. To overcome this problem, the authors of \cite{Liu2014Secrecy,Ng2014Robust,xing2014secrecySWIPT} presented various approaches, advocating the dual use of artificial noise (AN) or jamming, either to guarantee secret communication to the IRs while maximizing the energy simultaneously transferred to the ERs, or to satisfy the individual EH requirements of the ERs while maximizing the secrecy rate for the IR.
However, previous works all assumed that the ERs in SWIPT systems attempt to intercept the information intended for the IRs, which is overly protective. On the contrary, it is possible that some ERs are cooperative, especially when they are wirelessly EH-enabled. Following the recent advances in \emph{wireless powered communications networks} \cite{ju2014throughput,xing2014harvest-and-jam}, this paper proposes a self-sustaining {\em harvest-and-jam (HJ)} relaying protocol: in the first transmission phase, a single-antenna transmitter simultaneously transfers confidential information to a multiple-antenna amplify-and-forward (AF) relay and power to a group of multi-antenna EH-enabled idle helpers, while in the second phase, the relay amplifies and forwards the information to the IR under the protection of the AN generated by the helpers using the energy harvested from their received signals in the first phase.
Physical (PHY)-layer security issues in the rapidly growing cooperative networks have attracted much attention. Cooperative approaches to secure communications, such as \emph{cooperative jamming}, have been widely examined \cite{tekin2008general,dong2010improving,tang2008gaussian,Huang2011CJ}. The idea is to assist the transmitter in the secrecy transmission by generating AN to interfere with the eavesdropper via either multiple antennas or external trusted helpers \cite{Goel2008,Zheng2011CJ,chu2014secrecy,cumanan2014secrecy}. However, all schemes utilizing AN require an additional power supply and therefore incur extra system costs. Meanwhile, the collaborative use of relays to form effective beams jamming the eavesdropper, i.e., \emph{secure collaborative relay beamforming}, has been studied for relay-wiretap channels with a single eavesdropper in \cite{zhang2010relay}, and with multiple eavesdroppers for AF relays and decode-and-forward (DF) relays in \cite{yang2013cooperative} and \cite{jiangyuan2011CRB}, respectively. All of these works, however, assumed the availability of perfect channel state information (CSI). Although \cite{wang2013secure} proposed AF relay beamforming robust against uncertainty in the eavesdropper's channel, the solutions obtained were suboptimal.
The assumption of perfect CSI for the eavesdroppers appears too idealistic, because the eavesdroppers, despite being legitimate users, wish to hide from the transmitter and do not cooperate during channel estimation. Even if they are registered users bound to help the transmitter obtain their CSI to facilitate their own communication, the CSI at the transmitter side will change due to mobility and the Doppler effect, and may be outdated. Moreover, even for the legitimate users, the estimated CSI may be subject to quantization errors due to the limited capacity of the feedback channel, although this inaccuracy is reasonably assumed to be less severe than that for the eavesdroppers. To tackle this issue, state-of-the-art schemes have been developed (see \cite{he2013wireless} and the references therein), among which the \emph{worst-case secrecy rate} is commonly employed to formulate the robust secrecy rate maximization problem \cite{li2011optimal,swindlehurst2012robust,Ng2014Robust,cumanan2014secrecy,chu2015robust}. Robust transmit covariance design for secrecy rate maximization in a multiple-input single-output (MISO) channel overheard by multi-antenna eavesdroppers was considered in \cite{li2011optimal,chu2015MISOME}, while enhanced secrecy performance was achieved by introducing a friendly jammer in the same scenario in \cite{swindlehurst2012robust}, in which a joint optimization of the robust transmit covariance and the power allocation between the source and the helper was studied via geometric programming. More recently, \cite{Ng2014Robust} studied a joint robust design of the information beams, the AN, and the energy signals for SWIPT networks with quality-of-service (QoS) constraints.
\textcolor{black}{The contribution of this paper is threefold. First, with perfect CSI, in addition to the joint optimal solutions, we propose two near-optimal schemes with much reduced complexity: one exploits the optimal structure of the relay weight matrix, and the other provides a semi-closed-form solution for the relay weight matrix given fixed \emph{null-space} jamming. Second, besides the imperfect eavesdropper's channel, the legitimate channels, i.e., those from the $K$ HJ helpers (the transmitter) to the legitimate receiver ($K$ HJ helpers) and from the AF relay to the receiver, are jointly modeled with imperfect estimation, and multiple semi-infinite non-convex constraints are judiciously replaced by linear matrix inequalities (LMIs) to fit semi-definite programming (SDP). Third, a rank-one reconstruction algorithm exploiting the structure of the semi-definite relaxation (SDR)-based solutions is proposed to provide promising performance at low computational cost.}
Of particular relevance to our work is \cite{li2015robust}, which jointly optimizes the AF matrices and AN covariances in a relay wiretap channel with multiple multi-antenna AF relays and multiple multi-antenna eavesdroppers via a worst-case robust formulation. While our network model is similar, our work differs from \cite{li2015robust} in two respects. On one hand, in this paper the AN generated by each friendly jammer is subject to its respective channel from the transmitter during WET in the first transmission phase. On the other hand, the technique in \cite[\emph{Proposition 1}]{li2015robust} cannot be applied to our problem, since in ours the AN beams and the forwarded information are transmitted via different channels. Consequently, to the best of the authors' knowledge, our proposed worst-case based {\em robust} optimization scheme, which incorporates imperfect CSI for all the channels involving the HJ helpers, has not been addressed in the literature.
It is worth noting that devising a wireless-powered friendly jammer to enhance PHY-layer security for a direct transmission protocol was studied in \cite{xiangyun2014secure}, in which ``harvesting'' blocks and ``jamming'' blocks were well exploited to compose four different types of harvesting-jamming cycles. Compared to \cite{xiangyun2014secure}, which focused on the dedicated scheduling of ``harvest'' and ``jam'' operations and its long-term performance, our work is concerned with adaptive rate/power optimization with multiple HJ helpers to achieve a higher worst-case secrecy rate. \textcolor{black}{Moreover, instead of assuming perfect channels to/from the HJ helpers, our {\em robust} optimization algorithm takes imperfect legitimate channels into account to provide robustness.}
Note that in this paper, as in \cite{wang2013secure,li2015robust}, we assume that the channel between the transmitter and the AF relay is perfectly known and there is no direct link between the transmitter and the receiver or the eavesdropper, a common assumption in the concerned AF relay wiretap channel \cite{zhang2010relay,yang2013cooperative}.
{\em Notations}---Throughout, we use upper-case boldface letters for matrices and lower-case boldface letters for vectors. The superscripts $(\cdot)^{T}$, $(\cdot)^{\dagger}$ and $(\cdot)^{H}$ represent the transpose, conjugate and conjugate transpose, respectively. Also, ${\rm tr}(\cdot)$ and $\mathbb{E}[\cdot]$ stand for the trace of a matrix and the statistical expectation of a random variable, respectively. Likewise, ${\rm vec}(\mv A)$ is defined as a column vector obtained by stacking the rows of \(\mv A\) on top of one another, and ${\rm vec}^{-1}$ is the inverse operation of ${\rm vec}$. $\mathbf{null}(\mv A)$ denotes the null space of \(\mv A\). $\otimes$ represents the Kronecker product of two matrices. In addition, the notation $\mv{A}\succeq 0$ indicates that $\mv{A}$ is a positive semi-definite matrix, and $\mv{I}$ ($\mv 0$) denotes an identity (all-zero) matrix of appropriate size. Furthermore, $\|\cdot\|$ represents the Euclidean norm of a vector, while ${\rm Pr}(\cdot)$ stands for the probability of an input random event. Finally, $[x]^{+}$ denotes $\max(0,x)$ and $(\cdot)^\ast$ stands for an optimal solution.
\section{Network Model}\label{sec:System Model}
We consider a cooperative relay wiretap channel for SWIPT over a given frequency band, as shown in Fig.~\ref{fig:subfig:channel model}. We assume that a transmitter, named Alice, sends confidential messages to the IR, Bob, in the presence of an eavesdropper \cite{chorti2013resilience}, Eve, with the aid of a multi-antenna AF relay and \(K\) ERs willing to act as HJ helpers, \(\mathcal{H}_{\rm helper}=\{{\sf H}_1,\dots,{\sf H}_K\}\). \textcolor{black}{The transmitter, the ERs, and the AF relay are deployed in the same cluster, which is relatively far away from the destination and Eve, such that there is no direct link from the transmitter to the receiver or to Eve. Moreover, the ERs are assumed to be located closer to the transmitter than the AF relay, so that they can harvest a sufficient amount of energy for jamming.} Alice, Bob and Eve are each assumed to be equipped with a single antenna, while the AF relay and each of the \(K\) helpers are assumed to have \(N_t\) antennas each.
The HJ relaying protocol operates over two slots of equal duration, as shown in \textcolor{black}{Fig.~\ref{fig:subfig:HJ protocol}}. In the first phase, Alice sends a confidential message to the relay while simultaneously transferring energy to the \(K\) helpers; in the second phase, the relay amplifies and forwards the message to Bob while the \(K\) helpers perform cooperative jamming, using their respective energy harvested in the first phase, to compromise Eve. In this paper, we assume a quasi-static fading environment and, for convenience, denote \(\mv h_0\in\mathbb{C}^{N_t\times 1}\) as the complex channel from the transmitter to the relay and \(\mv h_k\in\mathbb{C}^{N_t\times 1}\), \(k=1,\ldots,K\), as that from the transmitter to the \(k\)th helper; \(\tilde{\mv h}_0\) as the transpose of the complex channel from the relay to Bob and \(\tilde{\mv h}_k\in\mathbb{C}^{N_t\times1}\), \(k=1,\ldots,K\), as that from \({\sf H}_k\) to Bob; and \(\mv g_0\in\mathbb{C}^{N_t\times 1}\) and \(\mv g_k\in\mathbb{C}^{N_t\times 1}\), \(k=1,\ldots,K\), as those from the relay and \({\sf H}_k\) to Eve, respectively.
\begin{figure}[htb]
\centering
\epsfxsize=1\linewidth
\subfigure[AF-relaying wiretap channel with jamming.]{\label{fig:subfig:channel model}\includegraphics[width=7.0cm]{color_system_model.eps}}
\vfill
\epsfxsize=1\linewidth
\subfigure[\textcolor{black}{The {\em HJ} relaying protocol.}]
{\label{fig:subfig:HJ protocol}\includegraphics[width=7.0cm]{HJ_protocol_diagram.eps}}
\caption{\textcolor{black}{HJ-enabled cooperative relaying for secure SWIPT.}}\label{fig:system model}
\vspace{-1.5em}
\end{figure}
In the first transmission phase, the baseband received signal at the AF relay can be expressed as
\begin{equation}
\mv{y}_r=\mv{h}_0\sqrt{P_s}s+\mv{n}_r , \label{eq:received at the relay}
\end{equation}
where \(s\) is a circularly symmetric complex Gaussian (CSCG) random variable, denoted by \(s \sim \mathcal{CN}(0,1)\) and \(\mv n_r\) is the additive complex noise vector, denoted by \(\mv{n}_r \sim \mathcal{CN}(\mv{0}, \sigma^2_r\mv{I})\). Also, \(P_s\) denotes the given transmit power at Alice. Further, the received signal at each helper \({\sf H}_k\) is expressed as
\begin{equation}\label{eq:received at the CJ helpers}
\mv{y}_k=\mv{h}_k\sqrt{P_s}s+\mv{n}^\prime_k ,
\end{equation}
where \(\mv{n}^\prime_k\) is the additive noise, denoted by \(\mv{n}^\prime_k\sim\mathcal{CN}(\mv{0},\sigma_h^2\mv{I})\).
On the other hand, for WET, the harvested energy
of \({\sf H}_k\) in each unit slot is given by
\begin{equation}\label{eq:harvested at HJ helpers}
E_k=\eta \mathbb{E}[\| \mv{h}_k\sqrt{P_s}s\|^2]=\eta P_s\left\|\mv{h}_k\right\|^2,\; \forall k,
\end{equation}
where \(0<\eta\leq 1\) denotes the EH efficiency.
In the second transmission phase, the linear operation at the AF relay can be represented by
\begin{equation}\label{eq:transmitted signal at the relay}
\mv x^\prime=\mv{Wy}_r,
\end{equation}
where \(\mv x^\prime\in\mathbb{C}^{N_t\times1}\) is the retransmitted signal at the AF relay and \(\mv W\in\mathbb{C}^{N_t\times N_t}\) is the beamforming matrix. Note that the transmit power of the AF relay can be expressed as
\begin{equation}\label{eq:transmit power at the relay}
{\rm tr}\left(\mathbb{E}\left[\mv x^\prime\mv x^{\prime H}\right]\right)={\rm tr}\left(\mv W\left(P_s\mv{h}_0\mv{h}^H_0 +\sigma^2_r\mv{I}\right)\mv W^H \right),
\end{equation}
which is constrained by the maximum available power \(P_r\) at the AF relay, i.e.,
\begin{equation}\label{eq:transmit power constaint at the relay}
{\rm tr}\left(\mv W\left(P_s\mv{h}_0\mv{h}^H_0 +\sigma^2_r\mv{I}\right)\mv W^H \right )\leq P_r.
\end{equation}
In the meantime, each \({\sf H}_k\) will help generate an AN \(\mv n_k\in\mathbb{C}^{N_t\times1}\) to interfere with Eve. Similar to \cite{Goel2008}, we assume that \(\mv n_k\)'s are independent CSCG vectors denoted by \(\mv n_k\sim\mathcal{CN}(0,\mv Q_k)\), \(\forall k\), since the worst-case noise for Eve is known to be Gaussian. In addition, each \({\sf H}_k\) has a transmit power constraint due to its harvested energy in the previous transmission phase, i.e., \({\rm tr}(\mv Q_k)\le\eta P_s\|\mv h_k\|^2\) (c.f.~\eqref{eq:harvested at HJ helpers}), \(\forall k\).
The received signal at Bob can thus be expressed as
\begin{equation}\label{eq:received at Bob}
\mv{y}_b
=\sqrt{P_s}\widetilde{\mv{h}}^T_0\mv{Wh}_0s+\sum^K_{k=1}\widetilde{\mv{h}}^T_k\mv{n}_k+\widetilde{\mv{h}}_0^T\mv{Wn}_r+\mv{n}_b,
\end{equation}
where \(\mv n_b\sim\mathcal{CN}(0,\sigma_b^2\mv I)\) is the additive noise at Bob. Similarly, the received signal at Eve can be expressed as
\begin{align}
\mathbf{y}_e=\sqrt{P_s}\mv{g}^T_0\mv{Wh}_0s +\sum^K_{k=1}\mv{g}^T_k\mv{n}_k+\mv{g}_0^T\mv{Wn}_r+\mv{n}_e, \label{eq:received at Eve}
\end{align}
where \(\mv n_e\sim\mathcal{CN}(0,\sigma_e^2\mv I)\). According to \eqref{eq:received at Bob} and \eqref{eq:received at Eve}, the signal-to-interference-plus-noise ratio (SINR) at Bob and Eve can be, respectively, expressed as
\begin{equation}\label{eq:SINR at Bob}
\gamma_b=\frac{P_s\vert\widetilde{\mv{h}}^T_0\mv{Wh}_0\vert^2}{\sigma^2_r\widetilde{\mv{h}}^T_0\mv{WW}^H\widetilde{\mv{h}}_0^\dagger+\sum^K_{k=1}\widetilde{\mv{h}}^T_k\mv{Q}_k\widetilde{\mv{h}}_k^\dagger+\sigma^2_b},
\end{equation}
and
\begin{equation}\label{eq:SINR at Eve}
\gamma_e=\frac{P_s\vert\mv{g}^T_0\mv{Wh}_0\vert^2}{\sigma^2_r\mv{g}^T_0\mv{WW}^H\mv{g}_0^\dagger+\sum^K_{k=1}\mv{g}^T_k\mv{Q}_k\mv{g}_k^\dagger+\sigma^2_e}.
\end{equation}
As such, the achievable secrecy rate at Bob is \cite{Goel2008}
\begin{equation}\label{eq:achievable secrecy rate}
r_0=\frac{1}{2}\left[\log_2(1+\gamma_b)-\log_2(1+\gamma_e)\right ]^+.
\end{equation}
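To fix ideas, the following NumPy sketch evaluates the harvested budgets \eqref{eq:harvested at HJ helpers}, the two SINRs \eqref{eq:SINR at Bob}--\eqref{eq:SINR at Eve} and the secrecy rate \eqref{eq:achievable secrecy rate} for one random channel draw; the i.i.d.\ Rayleigh channels and the non-optimized placeholder designs (a matched-filter relay matrix and isotropic AN at the full harvested budget) are illustrative assumptions only, not the optimized solutions developed below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 4, 3                       # relay/helper antennas; number of helpers
Ps, Pr, eta = 1.0, 1.0, 0.5        # assumed transmit powers and EH efficiency
sr2 = sb2 = se2 = 1e-2             # noise powers at the relay, Bob and Eve

cx = lambda *s: (rng.standard_normal(s) + 1j*rng.standard_normal(s))/np.sqrt(2)
h0, ht0, g0 = cx(Nt), cx(Nt), cx(Nt)            # Alice->relay, relay->Bob/Eve
hk, htk, gk = cx(K, Nt), cx(K, Nt), cx(K, Nt)   # per-helper channels

# Placeholder designs: matched-filter relay matrix scaled to the relay power
# budget, and isotropic AN spending the whole harvested budget per helper
W = np.outer(ht0.conj(), h0.conj())
Theta = Ps*np.outer(h0, h0.conj()) + sr2*np.eye(Nt)
W *= np.sqrt(Pr/np.trace(W @ Theta @ W.conj().T).real)
Q = [eta*Ps*np.linalg.norm(hk[k])**2/Nt*np.eye(Nt) for k in range(K)]

def sinr(fwd, jam_chs):            # SINR seen through forward channel `fwd`
    sig  = Ps*abs(fwd @ W @ h0)**2
    intf = sr2*np.linalg.norm(W.conj().T @ fwd.conj())**2
    intf += sum((c @ Qk @ c.conj()).real for c, Qk in zip(jam_chs, Q))
    return sig, intf

s, i = sinr(ht0, htk); gamma_b = s/(i + sb2)
s, i = sinr(g0,  gk);  gamma_e = s/(i + se2)
r0 = 0.5*max(np.log2(1 + gamma_b) - np.log2(1 + gamma_e), 0.0)
\end{verbatim}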
\section{Joint AN-AF Beamforming with Perfect CSI}\label{sec:A Joint Optimization Based on Perfect CSI}
\subsection{Problem Formulation for Perfect CSI}\label{subsec:Problem Formulation for perfect CSI}
We aim to maximize the secrecy rate at Bob subject to the transmit power constraints at the AF relay and each individual helper \({\sf H}_k\), \(k=1,\dots,K\). Thus, our problem is to solve
\begin{subequations}
\begin{align}\mathrm{(P1)}:~\mathop{\mathtt{max}}_{\{\boldsymbol{Q}_k\},\boldsymbol{W}}
& ~~~ r_0\notag \\
\mathtt {s.t.}& ~~~\eqref{eq:transmit power constaint at the relay},\label{eq:power constraint at the relay}\\
& ~~~{\rm tr}\left(\mv{Q}_k \right )\leq \eta P_s\left\|\mv{h}_k\right\|^2, \; \forall k,\label{eq:power constraint for the AN}\\
& ~~~\mv{Q}_k\succeq \mv 0, \; \forall k\label{eq:constraint on PSD}.
\end{align}
\end{subequations}
Next, we define a new function \(\bar F(\{\mv{Q}_k\},\mv W)\) as
\begin{equation}\label{eq:def of bar F}
\bar F(\{\mv{Q}_k\},\mv W)\triangleq\frac{1+\gamma_b}{1+\gamma_e}.
\end{equation}
It can be easily shown that the optimal solution \((\{\mv Q_k^\ast\}, \mv W^\ast)\) to $({\rm P1})$ is also optimal for $({\rm P1^\prime})$, given by
\begin{equation}\mathrm{(P1^\prime)}:~\mathop{\mathtt{max}}_{\{\boldsymbol{Q}_k\},\boldsymbol{W}}
\bar F(\{\mv{Q}_k\},\mv W)
~\mathtt {s.t.} ~\eqref{eq:power constraint at the relay}-\eqref{eq:constraint on PSD}.
\end{equation}
Hence, we focus on solving problem $({\rm P1}^\prime)$ in the rest of the paper. However, since $({\rm P1}^\prime)$ is in general a non-convex problem that is hard to solve, we reformulate it as a two-stage optimization problem. First, we constrain the SINR at Eve to be \(\bar\gamma_e\); it then follows from \eqref{eq:def of bar F} that \(\bar F(\{\mv Q_k\},\mv W)\) is maximized when \(\gamma_b\) is maximized, which can be obtained by solving the following problem:
\begin{multline}\label{eq:SNR constaint at the Eve}
\mathrm{(P1^\prime.1)}: \mathop{\mathtt{max}}_{\{\boldsymbol{Q}_k,\boldsymbol{W}\}}
\frac{P_s\vert\widetilde{\mv h}_0^T{\mv W}\mv h_0\vert^2}{\sigma_r^2\widetilde{\mv h}_0^T{\mv W}{\mv W}^H\widetilde{\mv h}_0^\dagger+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dag+\sigma_b^2}\\
\mathtt{s.t.}~ \frac{P_s\vert\mv g_0^T{\mv W}\mv h_0\vert^2}{\sigma_r^2\mv g_0^T{\mv W}{\mv W}^H\mv g_0^\dagger+\sum_{k=1}^K{\mv g}_k^T\mv Q_k{\mv g}_k^\dag+\sigma_e^2}\!=\!\bar\gamma_e,\\
\eqref{eq:power constraint at the relay}-\eqref{eq:constraint on PSD}.
\end{multline}
Let \(H(\bar\gamma_e)\) denote the optimal value of $({\rm P1}^\prime.1)$ given \(\bar\gamma_e\). Then $({\rm P1}^\prime)$ can be equivalently solved by
\begin{equation}
\mathrm{(P1^\prime.2)}: \mathop{\mathtt{max}}_{\bar\gamma_e>0}~\frac{1+H(\bar\gamma_e)}{1+\bar\gamma_e}. \label{eq:(P1'.2)}
\end{equation}
\begin{lemma}\label{lemma:P1'.2 same optimal value}
Problem $({\rm P1}^\prime)$ has the same optimal value as $({\rm P1}^\prime.2)$, and the same optimal solution as $({\rm P1}^\prime.1)$ when \(\bar\gamma_e\) takes the optimal solution for $({\rm P1}^\prime.2)$.
\end{lemma}
\begin{IEEEproof}
The proof follows from \cite[\emph{Lemmas 4.1-4.2}]{Liu2014Secrecy}.
\end{IEEEproof}
Therefore, $({\rm P1}^\prime)$ can be solved in the following two steps. First, given any \(\bar\gamma_e>0\), we solve $({\rm P1}^\prime.1)$ to attain \(H(\bar\gamma_e)\); then we solve $({\rm P1}^\prime.2)$ to obtain the optimal \(\bar\gamma_e^\ast\).
\vspace{-1.0em}
\subsection{Optimal Solution to $({\rm P1}^\prime.1)$}\label{subsec:Optimal Solution}
Here, we consider solving problem $(\rm P1^\prime.1)$ by jointly optimizing the AN covariance matrices at the HJ helpers, \(\mv Q_k\)'s, and the beamforming matrix \(\mv W\). To facilitate the analysis in the sequel, we rewrite the following identities in line with our definition of \({\rm vec}(\cdot)\) \cite[Chapter 13]{laub2005matrix}:
\begin{align}
\vert\tilde{\mv h}_0^T\mv W\mv h_0\vert^2 & =\vert{\rm vec}^T(\tilde{\mv h}_0\mv h_0^T){\rm vec}(\mv W)\vert^2, \\
\tilde{\mv h}_0^T\mv W\mv W^H\tilde{\mv h}_0^\dagger & =\|(\tilde{\mv h}_0^T\otimes\mv I){\rm vec}(\mv W)\|^2,\\
\vert\mv g_0^T\mv W\mv h_0\vert^2 & =\vert{\rm vec}^T(\mv g_0\mv h_0^T){\rm vec}(\mv W)\vert^2,\\
\mv g_0^T\mv W\mv W^H\mv g_0^\dagger & =\|(\mv g_0^T\otimes\mv I){\rm vec}(\mv W)\|^2.
\end{align}
In addition, \({\rm tr}(\mv W(P_s\mv h_0\mv h_0^H+\sigma_r^2\mv I)\mv W^H)=\|\mv \Phi\mv w\|^2\), where \(\mv\Phi=(\mv I\otimes\mv\Theta^T)^{1/2}\) with \(\mv\Theta=P_s\mv h_0\mv h_0^H+\sigma_r^2\mv I\). Hence, $\rm (P1^\prime.1)$ can be rewritten as
\begin{subequations}
\begin{align}
\mathrm{(P1^\prime.1\text{-}RW)}:\nonumber\\ \mathop{\mathtt{max}}_{\boldsymbol{W},\{\boldsymbol{Q}_k\}}
&~~~\frac{P_s\vert\mv f_1^T\mv w\vert^2}{\sigma_r^2\left\|\mv Y_1\mv w\right\|^2+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\sigma_b^2}\notag\\
\mathtt{s.t.}& ~~~\frac{P_s\vert\mv f_2^T\mv w\vert^2}{\sigma_r^2\left\|\mv Y_2\mv w\right\|^2+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\sigma_e^2}=\bar \gamma_e,\\
& ~~~\left\|\Phi \mv w\right\|^2\leq P_r,\\
& ~~~\eqref{eq:power constraint for the AN}, \eqref{eq:constraint on PSD}\notag,
\end{align}
\end{subequations}
in which \(\mv w={\rm vec}(\mv W)\), \(\mv f_1={\rm vec}(\widetilde{\mv h}_0\mv h_0^T)\), \(\mv f_2={\rm vec}(\mv g_0\mv h_0^T)\), \(\mv Y_1=\widetilde{\mv h}_0^T\otimes \mv I\) and \(\mv Y_2=\mv g_0^T\otimes\mv I\).
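Since these identities hinge on the row-stacking definition of \({\rm vec}(\cdot)\) in the Notations, a quick numerical check may help avoid the column-major pitfall; a minimal NumPy sketch with arbitrary random data:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Nt = 3
cx = lambda *s: rng.standard_normal(s) + 1j*rng.standard_normal(s)
W, h0, ht0 = cx(Nt, Nt), cx(Nt), cx(Nt)
Theta = cx(Nt, Nt); Theta = Theta @ Theta.conj().T  # any Hermitian PSD matrix

vec = lambda A: A.flatten()     # row-stacking vec(.), i.e., NumPy's C order
w   = vec(W)
f1  = vec(np.outer(ht0, h0))    # vec(h~_0 h_0^T)
Y1  = np.kron(ht0[None, :], np.eye(Nt))             # h~_0^T (x) I

assert np.isclose(abs(ht0 @ W @ h0)**2, abs(f1 @ w)**2)
assert np.isclose((ht0 @ W @ W.conj().T @ ht0.conj()).real,
                  np.linalg.norm(Y1 @ w)**2)
# tr(W Theta W^H) = w^H (I (x) Theta^T) w, i.e., ||Phi w||^2
assert np.isclose(np.trace(W @ Theta @ W.conj().T).real,
                  (w.conj() @ np.kron(np.eye(Nt), Theta.T) @ w).real)
\end{verbatim}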
As problem $\rm (P1^\prime.1\text{-}RW)$ is non-convex, we define \(\mv X\triangleq\mv w\mv w^H\), \(\mv F_1\triangleq\mv f_1^\dagger\mv f_1^T\), \(\mv F_2\triangleq\mv f_2^\dagger\mv f_2^T\), \(\overline{\mv Y}_1\triangleq\mv Y_1^H\mv Y_1 \), \(\overline{\mv Y}_2\triangleq\mv Y_2^H\mv Y_2 \) and \(\overline{\mv \Phi}\triangleq\mv \Phi^H\mv\Phi\). Then by ignoring the rank-one constraint on \(\mv X\), $\rm (P1^\prime.1\text{-}RW)$ is modified as
\begin{subequations}\label{eq:constraints for P1'.1-RW-SDR-Eqv}
\begin{align}
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!~~~~~ ~~~ \mathrm{(P1^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}:\nonumber\\ \!\!\!\!\!\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\}}
&~~~\frac{P_s{\rm tr}(\mv F_1\mv X)}{\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\sigma_b^2}\nonumber\\
\mathtt{s.t.}& ~~~P_s{\rm tr}(\mv F_2\mv X)\nonumber\\
&~~~ =\bar\gamma_e\left(\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X)+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\sigma_e^2\right), \\
&~~~{\rm tr}(\overline{\mv \Phi}\mv X)\leq P_r,\\
&~~~\mathrm{tr}\left(\mv{Q}_k \right)\leq \eta P_s\left\|\mv{h}_k\right\|^2, \; \forall k,\\
&~~~\mv X\succeq \mv 0, \, \mv Q_k\succeq \mv 0, \; \forall k.
\end{align}
\end{subequations}
Problem $\rm (P1^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)$, via Charnes-Cooper transformation \cite{charnes1962programming}, can be equivalently recast as
\begin{subequations}\label{eq:transformation by C-O}
\begin{align}
&\mathrm{(P1^\prime.1\text{-}RW\text{-}SDR)}:~\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\},\tau}
~~~P_s{\rm tr}(\mv F_1\mv X)\nonumber\\
&\mathtt{s.t.}~~~\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\tau\sigma_b^2=1, \label{eq:constraint on C-O transformation}\\
&P_s{\rm tr}(\mv F_2\mv X)\notag\\
&=\bar\gamma_e\left(\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X)+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\tau\sigma_e^2\right), \label{eq:constraint on SINR of Eve}\\
&{\rm tr}(\overline{\mv \Phi}\mv X)\leq \tau P_r, \label{eq:constraint on transmit power of the relay}\\
&\mathrm{tr}\left(\mv{Q}_k \right)\leq \tau\eta P_s\left\|\mv{h}_k\right\|^2, \; \forall k, \label{eq:constraint on amount of AN for each HJ helper}\\
&\mv X\succeq \mv 0, \, \mv Q_k\succeq \mv 0, \; \forall k, \, \tau\ge 0. \label{eq:constraint on X, Qk and tau}
\end{align}
\end{subequations}
\begin{lemma}\label{lemma:inequality equivalent of (P1.1-RW-SDR)}
The constraints in \eqref{eq:constraint on C-O transformation} and \eqref{eq:constraint on SINR of Eve} can be replaced by \(\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\tau\sigma_b^2\le1\) and \(P_s{\rm tr}(\mv F_2\mv X)\le\bar\gamma_e(\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X)+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\tau\sigma_e^2)\), respectively, where both inequalities become active (i.e., hold with equality) when problem $\rm (P1^\prime)$ attains its optimal value.
\end{lemma}
\begin{IEEEproof}
See \cite[Appendix \ref{appendix:proof of prop:structure of optimal X and its rank-one reconstruction}]{xing2015robust}.
\end{IEEEproof}
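To make the procedure concrete, a minimal CVXPY sketch of $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ for a given \(\bar\gamma_e\) is shown below, using the relaxed inequality forms sanctioned by Lemma \ref{lemma:inequality equivalent of (P1.1-RW-SDR)}; the dimensions, channel realizations and parameter values are hypothetical, and a generic conic solver (here SCS) is assumed.
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
Nt, K = 2, 2
Ps, Pr, eta, gam_e = 1.0, 1.0, 0.5, 1.0    # gam_e: targeted SINR at Eve
sr2 = sb2 = se2 = 1e-2
cx = lambda *s: (rng.standard_normal(s) + 1j*rng.standard_normal(s))/np.sqrt(2)
h0, ht0, g0 = cx(Nt), cx(Nt), cx(Nt)
hk, htk, gk = cx(K, Nt), cx(K, Nt), cx(K, Nt)

vec = lambda A: A.flatten()
f1, f2 = vec(np.outer(ht0, h0)), vec(np.outer(g0, h0))
F1, F2 = np.outer(f1.conj(), f1), np.outer(f2.conj(), f2)
Y1, Y2 = np.kron(ht0[None, :], np.eye(Nt)), np.kron(g0[None, :], np.eye(Nt))
Yb1, Yb2 = Y1.conj().T @ Y1, Y2.conj().T @ Y2
Phib = np.kron(np.eye(Nt), (Ps*np.outer(h0, h0.conj()) + sr2*np.eye(Nt)).T)

X   = cp.Variable((Nt*Nt, Nt*Nt), hermitian=True)
Q   = [cp.Variable((Nt, Nt), hermitian=True) for _ in range(K)]
tau = cp.Variable(nonneg=True)
jam_b = sum(cp.real(htk[k] @ Q[k] @ htk[k].conj()) for k in range(K))
jam_e = sum(cp.real(gk[k]  @ Q[k] @ gk[k].conj())  for k in range(K))
cons  = [sr2*cp.real(cp.trace(Yb1 @ X)) + jam_b + tau*sb2 <= 1,
         Ps*cp.real(cp.trace(F2 @ X))
             <= gam_e*(sr2*cp.real(cp.trace(Yb2 @ X)) + jam_e + tau*se2),
         cp.real(cp.trace(Phib @ X)) <= tau*Pr,
         X >> 0]
cons += [cp.real(cp.trace(Q[k])) <= tau*eta*Ps*np.linalg.norm(hk[k])**2
         for k in range(K)]
cons += [Q[k] >> 0 for k in range(K)]
prob = cp.Problem(cp.Maximize(Ps*cp.real(cp.trace(F1 @ X))), cons)
prob.solve(solver=cp.SCS)      # prob.value approximates H(gam_e)
\end{verbatim}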
Since problem $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ is a standard convex optimization problem and satisfies Slater's condition, its duality gap is zero \cite{boyd2004convex}. Now, let \(\lambda\) denote the dual variable associated with the equality constraint in \eqref{eq:constraint on C-O transformation}, \(\alpha\) that associated with the other equality constraint in \eqref{eq:constraint on SINR of Eve}, \(\beta_0\) that associated with the transmit power constraint for the AF relay in \eqref{eq:constraint on transmit power of the relay}, \(\{\beta_k\}\) those associated with the transmit power constraints for each \({\sf H}_k\) in \eqref{eq:constraint on amount of AN for each HJ helper}, and \(\zeta\) that associated with \(\tau\). Then the Lagrangian of problem $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ is given by
\begin{eqnarray}\label{eq:Lagrangian of (P1.1-RW-SDR)}
L(\mv\Omega)&\!\!=&\!\!\mathrm{tr}(\mv A\mv X)+\sum_{k=1}^K\mathrm{tr}(\mv B_k\mv Q_k)+\zeta\tau+\lambda,
\end{eqnarray}
where \(\mv\Omega\) denotes the set of all primal and dual variables,
\begin{align}
\mv A&=P_s\mv F_1-\lambda\sigma_r^2\overline{\mv Y}_1-\alpha P_s\mv F_2+\alpha\bar\gamma_e\sigma_r^2\overline{\mv Y}_2-\beta_0\overline{\mv \Phi}, \label{eq:A}\\
\mv B_{k}&=-\lambda\tilde{\mv h}_k^\ast\tilde{\mv h}_k^T+\alpha\bar\gamma_e\mv g_k^\ast\mv g_k^T-\beta_k\mv I,\; \forall k, \label{eq:Bk}\\
\zeta&=-\lambda\sigma_b^2+\alpha\bar\gamma_e\sigma_e^2+\beta_0P_r+\sum_{k=1}^K\eta P_s\beta_k\|\mv h_k\|^2. \label{eq:zeta}
\end{align}
\begin{proposition}
The optimal solution, \((\mv X^\ast, \{\mv Q_k^\ast\}, \tau^\ast)\), to $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ satisfies the following conditions:
\begin{enumerate}
\item
$\mathrm{rank}(\mv Q_k)\left\{\begin{array}{ll}\ge N_t-2,\ &{\rm if}\, \beta_k^\ast=0,\\
=1, \ &{\rm if}\, \beta_k^\ast>0, \end{array}\right.\forall k$;
\item \(\mv X^\ast\) can be expressed as
\begin{align}
\mv X^\ast=\sum_{n=1}^{N_t^2-r_c}a_n\mv \eta_n\mv \eta_n^H+b\mv \xi\mv \xi^H, \label{eq:structure of optimal X}
\end{align}
where \(a_n\ge 0\) \(\forall n\), \(b>0\), \(r_c=\mathrm{rank}(\mv C^\ast)\) (c.f. \eqref{eq:A star}) and \(\mv\xi\in\mathbb{C}^{N_t^2\times1}\) is a vector orthogonal to \(\mv\Xi=\{\mv\eta_n\}_{n=1}^{N_t^2-r_c}\), which consists of orthonormal basis for \(\mathbf{null}(\mv C^\ast)\);
\item According to \eqref{eq:structure of optimal X}, if \(\mathrm{rank}(\mv X^\ast)>1\), then we have the following sufficient condition to yield an optimal solution of \(\mv X\) with rank-one:
\begin{align}
\hat{\mv X}^\ast& =b\mv \xi\mv \xi^H,\label{eq:reconstructed structure of optimal hat X}\\
\hat{\mv Q}_k^\ast&=\mv Q_k^\ast,\; \forall k,\label{eq:reconstructed Qk}\\
\hat \tau^\ast&=\tau^\ast+\Delta \tau, \label{eq:reconstructed tau}
\end{align}
is also optimal to problem $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$, if there exists \(\Delta\tau\ge 0\) such that
\begin{multline}\label{eq:sufficient condition for rank-one X}
\left[\sum_{n=1}^{N_t^2-r_c}a_n\mathrm{tr}\left(\mv\eta_n^H(\tfrac{\sigma_r^2\overline{\mv Y}_2}{\sigma_e^2}-\tfrac{P_s\mv F_2}{\bar\gamma_e\sigma_e^2})\mv\eta_n\right)\right]^+\\
\le\Delta\tau \le\tfrac{\sigma_r^2}{\sigma_b^2}\sum_{n=1}^{N_t^2-r_c}a_n\mathrm{tr}\left(\mv\eta_n^H\overline{\mv Y}_1\mv\eta_n\right).
\end{multline}
\end{enumerate} \label{prop:structure of optimal X and its rank-one reconstruction}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:proof of prop:structure of optimal X and its rank-one reconstruction}.
\end{IEEEproof}
Note from Proposition \ref{prop:structure of optimal X and its rank-one reconstruction} that if \(\mathrm{rank}(\mv X^\ast)=1\), then the optimal \(\mv w^\ast\) for $\rm (P1^\prime.1\text{-}RW)$ can be found directly from the eigenvalue decomposition (EVD) of \(\overline{\mv X}^\ast\), where \(\overline{\mv X}^\ast=\mv X^\ast/\tau^\ast\); that is, the upper bound obtained by solving $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ is tight in this case. Otherwise, \((\mv X^\ast, \{\mv Q_k^\ast\}, \tau^\ast)\) only serves as an upper-bound solution.
Now, we show that this upper bound is always achievable by a rank-one \(\mv X^\ast\). When \(\mathrm{rank}(\mv X^\ast)>1\), we first check whether the sufficient condition proposed in \eqref{eq:sufficient condition for rank-one X} is satisfied. If it is met, a direct reconstruction of \((\hat{\mv X}^\ast, \{\hat{\mv Q}_k^\ast\}, \hat\tau^\ast)\) with \(\mathrm{rank}(\hat{\mv X}^\ast)=1\) follows according to \eqref{eq:reconstructed structure of optimal hat X}--\eqref{eq:reconstructed tau}. Otherwise, assume that any optimal solution to problem $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ has no zero component, i.e., \((\mv X^\ast\neq\mv 0,\{\mv Q_k^\ast\neq\mv 0\},\tau^\ast\neq 0)\), and denote the number of optimization variables and the number of shaping constraints by \(L\) and \(M\), respectively. Since \(L=K+2\) and \(M=K+3\) for $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$, the condition \(M\le L+2\) is satisfied. Thus, according to \cite[\emph{Proposition 3.5}]{huang2010rank}, $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ has an optimal solution \(\hat{\mv X}^\ast\) that is rank-one; the detailed rank reduction procedure starting from an arbitrary-rank solution is given in \cite[Algorithm 1]{huang2010rank}. Algorithm I for solving \(\mathrm{(P1^\prime)}\) is summarized in Table \ref{table:Algorithm I}.
\begin{table}[htp]
\begin{center}
\vspace{0.025cm}
\caption{\textcolor{black}{\rm Algorithm I for \(\mathrm{(P1^\prime)}\)}} \label{table:Algorithm I}
\vspace{-0.05cm}
\hrule
\vspace{0.2cm}
\begin{itemize}
\item {\bf Initialize} \(\bar\gamma_{e\_{\rm search}}=0:\alpha:\bar\gamma_{e\max}\) and $i=0$
\item {\bf Repeat}
\begin{itemize}
\item [1)] {\bf Set} $i=i+1$;
\item [2)] Given \(\bar\gamma_e=\bar\gamma_{e\_{\rm search}}(i)\),\\
{\bf solve} \(\mathrm{(P1^\prime.1\text{-}RW\text{-}SDR)}\) and {\bf obtain} \(H(\bar\gamma_e^{(i)})\).
\end{itemize}
\item {\bf Until} \(i=L\), where \(L=\lfloor{\tfrac{\bar\gamma_{e\max}}{\alpha}}\rfloor+1\) is the length of \(\bar\gamma_{e\_{\rm search}}\)
\item {\bf Set} \(\bar\gamma_e^\ast=\bar\gamma_{e\_{\rm search}}\left(\!\arg\max\limits_{i}\!\left\{\tfrac{1+H(\bar\gamma_e^{(i)})}{1+\bar\gamma_e^{(i)}}\right\}\right)\) for \(\mathrm{(P1^\prime.2)}\)
\item Given \(\bar\gamma_e^\ast\), {\bf solve} \(\mathrm{(P1^\prime.1\text{-}RW\text{-}SDR)}\) to obtain \((\mv X^\ast, \{\mv Q_k^\ast\}, \tau^\ast)\)\\
{\bf if} \({\rm rank}(\mv X^\ast)=1\), {\bf apply} EVD on \(\mv X^\ast\) such that \(\mv X^\ast=\mv w^\ast\mv w^{\ast H}\);\\
{\bf else if} the sufficient condition in \eqref{eq:sufficient condition for rank-one X} is satisfied,\\
{\bf construct} \((\hat{\mv X}^\ast,\{\hat{\mv Q}_k^\ast\}, \hat\tau^\ast)\) following \eqref{eq:reconstructed structure of optimal hat X}--\eqref{eq:reconstructed tau} and {\bf set} \(\mv w^\ast=\sqrt{b}\mv\xi\);\\
~~~~{\bf else} {\bf construct} \(\hat{\mv X}^\ast\) using the procedure in \cite[{\rm Algorithm 1}]{huang2010rank}.\\
~~~~{\bf end}\\
{\bf end}
\item {\bf Recover} \(\mv W^\ast={\rm vec}^{-1}(\mv w^\ast)\)
\end{itemize}
\vspace{0.2cm} \hrule
\end{center}
\end{table}
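In code, the one-dimensional search of Algorithm I reduces to the short loop below, where \texttt{solve\_P11\_SDR} is a hypothetical wrapper around the SDP sketched earlier; only the plain-EVD branch of the rank-one recovery is shown, the reconstruction \eqref{eq:reconstructed structure of optimal hat X}--\eqref{eq:reconstructed tau} and the rank reduction of \cite[Algorithm 1]{huang2010rank} being needed for the remaining cases.
\begin{verbatim}
import numpy as np

def algorithm_I(solve_P11_SDR, gam_e_max, step):
    """One-dimensional search over the targeted SINR at Eve (Table I).
    `solve_P11_SDR(g)` is a hypothetical wrapper returning
    (H(g), (X, Qs, tau)) from the SDP sketched above."""
    grid = np.arange(0.0, gam_e_max + step, step)
    obj  = [(1.0 + solve_P11_SDR(g)[0])/(1.0 + g) for g in grid]
    g_star = grid[int(np.argmax(obj))]
    _, (X, Qs, tau) = solve_P11_SDR(g_star)
    Xbar = X/tau                          # undo the Charnes-Cooper scaling
    lam, U = np.linalg.eigh(Xbar)
    # EVD branch only: if rank(X*) > 1, apply the reconstruction of
    # Proposition 1 or the rank reduction of huang2010rank instead.
    w = np.sqrt(max(lam[-1], 0.0))*U[:, -1]
    m = int(round(np.sqrt(w.size)))
    return g_star, w.reshape(m, m), Qs    # W* = vec^{-1}(w*), rows stacked
\end{verbatim}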
\subsection{Suboptimal Solutions to $\rm (P1^\prime.1)$}\label{subsec:Suboptimal Solutions}
\subsubsection{Optimal Solution Structure based Scheme}\label{subsubsec:structure-based}
We propose a relay beamforming design for $\rm (P1^\prime.1)$ based on the optimal structure of \(\mv W\) \cite[\emph{Theorem 3.1}]{zhang2009optimal}. First, define \(\mv H_1\triangleq[\tilde{\mv h}_0\ \mv g_0]\) and \(\mv H_2\triangleq[\mv h_0\ \mv g_0]\). Then express the truncated singular-value decomposition (SVD) of \(\mv H_1\) and \(\mv H_2\), respectively, as
\begin{align}
\mv H_1&=\mv U_1\mv \Sigma_1\mv V_1^H,\label{eq:SVD of H1}\\
\mv H_2&=\mv U_2\mv \Sigma_2\mv V_2^H. \label{eq:SVD of H2}
\end{align}
\begin{lemma}
The optimal relay beamforming matrix \(\mv W\) for problem $\rm (P1^\prime.1)$ is of the form:
\begin{equation}\label{eq:optimal structure for W}
\mv W=\mv U_1^\dagger\mv B\mv U_2^H+\mv U_1^\dagger\mv C(\mv U_2^{\bot})^H,
\end{equation}
where \(\mv B\in\mathbb{C}^{2\times2}\) and \(\mv C\in\mathbb{C}^{2\times (N_t-2)}\) are two unknown matrices, and \(\mv U_1^{\bot}\), \(\mv U_2^{\bot} \in\mathbb{C}^{N_t\times(N_t-2)}\) satisfy \(\mv U_1^{\bot}(\mv U_1^{\bot})^H=\mv I-\mv U_1\mv U_1^H\), \(\mv U_2^{\bot}(\mv U_2^{\bot})^H=\mv I-\mv U_2\mv U_2^H\), respectively. \label{lemma:optimal structure for W}
\end{lemma}
\begin{IEEEproof}
See Appendix \ref{appendix:proof of lemma:optimal structure for W}.
\end{IEEEproof}
Denote \(\mv U_1^H\tilde{\mv h}_0\), \(\mv U_2^H\mv h_0\), \(\mv U_1^H\mv g_0\) by \(\bar{\tilde{\mv h}}_0\), \(\bar{\mv h}_0\), \(\bar{\mv g}_0\), respectively. We thus simplify \(\vert\widetilde{\mv{h}}^T_0\mv{Wh}_0\vert^2\) and \(\vert\mv g_0^T\mv{Wh}_0\vert^2\) as \(\vert\bar{\tilde{\mv h}}_0^T\mv B\bar{\mv h}_0\vert^2\) and \(\vert\bar{\mv g}_0^T\mv B\bar{\mv h}_0\vert^2\), respectively. Since \(\mv C\) has \(2(N_t-2)\) complex variables, we devise a suboptimal design for \(\mv C\) to reduce the size of variables by \((N_t-2)\). Specifically, let \(\mv C=\mv u^{\prime\bot}\mv v^T\), where \(\mv u^\prime=\bar{\tilde{\mv h}}_0^\dag/\|\bar{\tilde{\mv h}}_0\|\) such that \(\mv u^{\prime\bot}\mv u^{\prime\bot H}=\mv I-\mv u^\prime\mv u^{\prime H}\). Hence, \(\widetilde{\mv{h}}^T_0\mv{WW}^H\widetilde{\mv{h}}_0^\dagger\), \({\mv{g}}^T_0\mv{WW}^H\mv g_0^\dagger\) and \eqref{eq:transmit power at the relay} can be reduced to \(\|\mv B^H\bar{\tilde{\mv h}}_0^\dagger\|^2\), \(\|\mv B^H\bar{\mv g}_0^\dagger\|^2+\vert\mv v^\dagger\mv u^{\prime\bot H}\bar{\mv g}_0^\dagger\vert^2\) and \(P_s\|\mv B\bar{\mv h}_0\|^2+\sigma_r^2\mathrm{tr}(\mv B^H\mv B)+\sigma_r^2\|\mv v\|^2\), respectively. Then define \(\mv b={\rm vec}(\mv B)\), \(\bar{\mv f}_1={\rm vec}(\bar{\tilde{\mv h}}_0\bar{\mv h}_0^T)\), \(\bar{\mv f}_2={\rm vec}(\bar{\mv g}_0\bar{\mv h}_0^T)\), \(\mv Y_1^\prime=\bar{\tilde{\mv h}}_0^T\otimes\mv I\), \(\mv Y_2^\prime=\bar{\mv g}_0^T\otimes\mv I\), and \(\mv\Phi^\prime=(\mv I\otimes\mv\Theta^{\prime T})^{1/2}\) with \(\mv\Theta^{\prime}=P_s\bar{\mv h}_0\bar{\mv h}_0^H+\sigma_r^2\mv I\); \(\mv Z=\mv b\mv b^H\), \(\mv V=\mv v\mv v^H\), \(\overline{\mv F}_1=\bar{\mv f}_1^\dagger\bar{\mv f}_1^T\), \(\overline{\mv F}_2=\bar{\mv f}_2^\dagger\bar{\mv f}_2^T\), \(\overline{\mv Y}_1^\prime=\mv Y_1^{\prime H}\mv Y_1^\prime\), \(\overline{\mv Y}_2^\prime=\mv Y_2^{\prime H}\mv Y_2^\prime\), and \(\overline{\mv \Phi}^\prime=\mv \Phi^{\prime H}\mv\Phi^\prime\). The suboptimal design for problem $\rm (P1^\prime.1)$ by ignoring the rank constraints on \(\mv Z\) and \(\mv V\) is thus given by
\begin{subequations}
\begin{align}
&\!\!\!\! \mathrm{(P1^\prime.1\text{-}sub1\text{-}SDR)}:~ \mathop{\mathtt{max}}_{\boldsymbol{Z},\boldsymbol{V},\{\boldsymbol{Q}_k\},\tau}~~~ P_s{\rm tr}(\overline{\mv F}_1\mv Z)\notag\\
&\!\!\!\! \mathtt{s.t.}~~\sigma^2_r{\rm tr}(\overline {\mv Y}^\prime_1\mv Z)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\tau\sigma_b^2=1,\\
&\!\!\!\! P_s{\rm tr}(\overline{\mv F}_2\mv Z)\le\bar\gamma_e\nonumber\\
& \!\!\!\!\left(\sigma^2_r\left({\rm tr}(\overline {\mv Y}^\prime_2\mv Z)+\vert\bar{\mv g}_0^T\mv u^{\prime\bot}\vert^2\mathrm{tr}(\mv V)\right)+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\tau\sigma_e^2\right),\\
&~~{\rm tr}(\overline{\mv \Phi}^\prime\mv Z)+\sigma_r^2{\rm tr}(\mv Z)+\sigma_r^2\mathrm{tr}(\mv V)\leq \tau P_r,\\
&~~\mathrm{tr}\left(\mv{Q}_k \right)\leq \tau\eta P_s\left\|\mv{h}_k\right\|^2, \; \forall k,\\
&~~\tau\ge 0, \, \mv Q_k\succeq \mv 0, \; \forall k, \,\mv Z\succeq \mv 0, \, \mv V\succeq \mv 0.
\end{align}
\end{subequations}
\begin{remark}
The variables in $\rm (P1^\prime.1\text{-}sub1\text{-}SDR)$, i.e., \(\mv Z\in\mathbb{C}^{4\times4}\), \(\mv V\in\mathbb{C}^{(N_t-2)\times(N_t-2)}\), are of much reduced size. Further, the reconstruction of \(\mv v^\ast\) from \(\mv V\) can be briefly explained as follows. Given the Lagrangian of $\rm (P1^\prime.1\text{-}sub1\text{-}SDR)$, the KKT conditions with respect to (w.r.t.)~\(\mv V^\ast\) are given by
\begin{align}
(\alpha^\ast\bar\gamma_e\vert\bar{\mv g}_0^T\mv u^{\prime\bot}\vert^2-\beta_0^\ast\sigma_r^2)\mv I +\mv U^\ast &=0, \label{eq:zero derivitive wrt V} \\
\mv U^\ast\mv V^\ast &=0.\label{eq:complementary slackness wrt V}
\end{align}
\end{remark}
Post-multiplying \eqref{eq:zero derivitive wrt V} by \(\mv V^\ast\), we have \((\alpha^\ast\bar\gamma_e\vert\bar{\mv g}_0^T\mv u^{\prime\bot}\vert^2-\beta_0^\ast\sigma_r^2)\mv V^\ast=\mv 0\). As a result, if \(\tfrac{\alpha^\ast}{\beta_0^\ast}\neq\tfrac{\sigma_r^2}{\bar\gamma_e\vert\bar{\mv g}_0^T\mv u^{\prime\bot}\vert^2}\), then \(\mv V^\ast=\mv 0\); otherwise \(\mv V^\ast=\mv v^\ast\mv v^{\ast H}\), with \(\mv v^\ast=\sqrt{{\rm tr}(\mv V^\ast)}\mv v_0\), where \(\mv v_0\in\mathbb{C}^{(N_t-2)\times1}\) is an arbitrary vector with unit norm. With \(\mv V\) solved, $\rm (P1^\prime.1\text{-}sub1\text{-}SDR)$ reduces to a problem with a similar structure to $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$, and the proof of the existence of a rank-one \(\mv Z\) parallels Proposition \ref{prop:structure of optimal X and its rank-one reconstruction}.
\subsubsection{\textcolor{black}{Zero-forcing}}\label{subsubsec:ZF}
\textcolor{black}{We propose a low-complexity ZF scheme for \(\mathrm{(P1^\prime.1)}\), in which the jamming signal places a null at Bob, and then a semi-closed form solution for \(\mv W\) is derived. In line with the principle of ZF jamming \cite{Zheng2011CJ}, the jamming signal \(\mv n_k\) is designed as \(\mv n_k=\tilde{\mv V}_k\tilde{\mv n}_k\) such that \(\mv I-\tfrac{\tilde{\mv h}_k^\dag\tilde{\mv h}_k^T}{\|\tilde{\mv h}_k\|^2}=\tilde{\mv V}_k\tilde{\mv V}_k^H\), and \(\tilde{\mv n}_k\in\mathbb{C}^{(N_t-1)\times 1}\) is an arbitrary random vector, \(\tilde{\mv n}_k\sim\mathcal{CN}(\mv 0, \tilde{\mv Q}_k)\), \(k=1,\ldots,K\). Thus, given any \(\mv W\), \(\tilde{\mv n}_k\)'s can be optimized to maximize the effect of jamming at Eve by \(\max\limits_{\tilde{\boldsymbol Q}_k}\!\sum_{k=1}^K\mv g_k^T\tilde{\mv V}_k\tilde{\mv Q}_k\tilde{\mv V}_k^H\mv g_k^\dag\), which gives \(\tilde{\mv Q}_k^\ast=\zeta_k^2\tilde{\mv g}_k^\dag\tilde{\mv g}_k^T\), where \(\tilde{\mv g}_k=\tilde{\mv V}_k^T\mv g_k\), and \(\zeta_k=\sqrt{\eta P_s}\|\mv h_k\|/\|\tilde{\mv g}_k\|\) is determined by \eqref{eq:constraint on amount of AN for each HJ helper}, \(\forall k\). As such, \(\sum_{k=1}^K\mv g_k^T\tilde{\mv V}_k\tilde{\mv Q}_k^\ast\tilde{\mv V}_k^H\mv g_k^\dag\) turns out to be \(\sum_{k=1}^K\eta P_s\|\mv h_k\|^2\|\tilde{\mv g}_k\|^2\), which is denoted by \(q\).}
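A minimal NumPy sketch of this ZF design follows; the SVD-based null-space basis is one valid choice of \(\tilde{\mv V}_k\) among many.
\begin{verbatim}
import numpy as np

def zf_jamming(hk, htk, gk, Ps, eta):
    """ZF AN: helper k jams only in null(h~_k^T), so Bob sees no AN,
    and matches its covariance to Eve's effective channel g~_k."""
    Qs, q = [], 0.0
    for k in range(len(hk)):
        _, _, Vh = np.linalg.svd(htk[k][None, :])
        Vt = Vh[1:].conj().T                  # V~_k: basis of null(h~_k^T)
        g_t = Vt.T @ gk[k]                    # g~_k = V~_k^T g_k
        zeta2 = eta*Ps*np.linalg.norm(hk[k])**2/np.linalg.norm(g_t)**2
        Qt = zeta2*np.outer(g_t.conj(), g_t)  # Q~_k* = zeta_k^2 conj(g~_k) g~_k^T
        Qs.append(Vt @ Qt @ Vt.conj().T)      # n_k = V~_k n~_k
        q += eta*Ps*np.linalg.norm(hk[k])**2*np.linalg.norm(g_t)**2
    return Qs, q                              # q: total AN power seen by Eve
\end{verbatim}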
\textcolor{black}{With fixed \(q\), \(\mathrm{(P1^\prime.1\text{-}RW\text{-}SDR)}\) can be recast as
\begin{subequations}
\begin{align}
&\mathrm{(P1^\prime.1\text{-}sub2\text{-}SDR)}:~\mathop{\mathtt{max}}_{\boldsymbol {X},\tau}
~P_s{\rm tr}(\mv F_1\mv X)\nonumber\\
&\mathtt{s.t.}~\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\tau\sigma_b^2=1, \label{eq:C1 of ZF}\\
&P_s{\rm tr}(\mv F_2\mv X)\le\bar\gamma_e\left(\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X)+\tau q+\tau\sigma_e^2\right), \label{eq:C2 of ZF}\\
&{\rm tr}(\overline{\mv \Phi}\mv X)\leq \tau P_r, \label{eq:C3 of ZF}\\
&\mv X\succeq \mv 0, \, \tau\ge 0.
\end{align}
\end{subequations}}
\textcolor{black}{\begin{proposition}
\(\mathrm{(P1^\prime.1\text{-}sub2\text{-}SDR)}\) always yields a rank-one solution, i.e., \(\mv X^\ast=\mv w^\ast\mv w^{\ast H}\), where \(\mv w^\ast=\mu\mv\nu_{\max}(\mv Z^\ast)\), and
\begin{align}
\mv Z^\ast=&P_s\mv F_1-\lambda^\ast\sigma_r^2\overline{\mv Y}_1-\alpha^\ast P_s\mv F_2\nonumber\\
&+\alpha^\ast\bar\gamma_e\sigma_r^2\overline{\mv Y}_2-\beta_0^\ast\overline{\mv \Phi}, \label{eq:Z for sub2}
\end{align}
where \(\mv \nu_{\max}(\cdot)\) represents the eigenvector corresponding to the largest eigenvalue of the associated matrix, and \(\mu=\sqrt{\tfrac{P_r}{{\rm tr}\left(\overline{\mv\Phi}\,\mv\nu_{\max}(\mv Z^\ast)\mv\nu_{\max}^H(\mv Z^\ast)\right)}}\). Also, \(\lambda^\ast\), \(\alpha^\ast\) and \(\beta_0^\ast\) are the optimal dual variables associated with \eqref{eq:C1 of ZF}--\eqref{eq:C3 of ZF}, respectively.\label{prop:semi-closed form}
\end{proposition}}
\begin{IEEEproof}
See Appendix \ref{appendix:proof of prop:semi-closed form}.
\end{IEEEproof}
\textcolor{black}{The only remaining task in Proposition \ref{prop:semi-closed form} is to solve the dual problem of \(\mathrm{(P1^\prime.1\text{-}sub2\text{-}SDR)}\), which admits a much simpler structure than the primal one.}
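Given the optimal dual variables (obtained, e.g., from a numerical solver applied to the dual), the semi-closed form of Proposition \ref{prop:semi-closed form} then costs a single Hermitian eigendecomposition; a minimal sketch:
\begin{verbatim}
import numpy as np

def w_from_duals(lam, alpha, beta0, Ps, gam_e, sr2,
                 F1, F2, Yb1, Yb2, Phib, Pr):
    """Relay vector w* = mu * nu_max(Z*) of the proposition above."""
    Z = Ps*F1 - lam*sr2*Yb1 - alpha*Ps*F2 + alpha*gam_e*sr2*Yb2 - beta0*Phib
    _, vecs = np.linalg.eigh(Z)                    # Z* is Hermitian
    nu = vecs[:, -1]                               # principal eigenvector
    mu = np.sqrt(Pr/(nu.conj() @ Phib @ nu).real)  # relay power with equality
    return mu*nu
\end{verbatim}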
\section{Joint AN-AF Beamforming with Imperfect CSI}\label{sec:A Joint Optimization Based on Imperfect CSI}
\subsection{Problem Formulation for Imperfect CSI}\label{subsec:Problem Formulation for Imperfect CSI}
We use a deterministic spherical model \cite{li2011optimal,swindlehurst2012robust} to characterize the CSI uncertainties, such that
\begin{subequations}\label{eq:uncertainty regions}
\begin{align}
\mathcal{G}_0= &\{\mv g_0\vert\mv g_0=\hat{\mv {g}}_0+\Delta\mv g_0, \Delta\mv g_0^H\mv W_0\Delta\mv g_0\le 1\},\\
\mathcal{G}_k= & \{\mv g_k\vert\mv g_k=\hat{\mv {g}}_k+\Delta\mv g_k, \Delta\mv g_k^H\mv W_k\Delta\mv g_k\le 1\}, \forall k, \\
\tilde{\mathcal H}_0=&\{\tilde{\mv h}_0\vert\tilde{\mv h}_0=\hat{\tilde{\mv h}}_0+\Delta\tilde{\mv h}_0,\Delta\tilde{\mv h}_0^H{\mv W}_0^\prime\Delta\tilde{\mv h}_0\le 1\},\\
\tilde{\mathcal H}_k= &\{\tilde{\mv h}_k\vert\tilde{\mv h}_k=\hat{\tilde{\mv h}}_k+\Delta\tilde{\mv h}_k,\Delta\tilde{\mv h}_k^H\mv W_k^{\prime\prime}\Delta\tilde{\mv h}_k\le 1\}, \forall k, \\
\mathcal{H}_k= & \{\mv h_k\vert\mv h_k=\hat{\mv {h}}_k+\Delta\mv h_k, \Delta\mv h_k^H\mv W_k^{\prime}\Delta\mv h_k\le 1\}, \forall k,
\end{align}
\end{subequations}
where \(\hat{\mv g}_0\), \(\hat{\mv g}_k\)'s, \(\hat{\tilde{\mv h}}_0\), \(\hat{\tilde{\mv h}}_k\)'s and \(\hat{\mv h}_k\)'s are the estimates of the corresponding channels; \(\Delta\mv g_0\), \(\Delta\mv g_k\)'s, \(\Delta\tilde{\mv h}_0\), \(\Delta\tilde{\mv h}_k\)'s and \(\Delta\mv h_k\)'s are their respective channel errors; the matrices \(\mv W_0\), \(\mv W_k\)'s, \(\mv W_0^\prime\), \(\mv W_k^{\prime\prime}\)'s and \(\mv W_k^\prime\)'s determine the shape of each error region. Without loss of generality (w.l.o.g.), we set \(\mv W_0=\mv I/\epsilon_0\), \(\mv W_0^\prime=\mv I/\epsilon_0^\prime\), \(\mv W_k=\mv I/\epsilon_k\), \(\mv W_k^\prime=\mv I/\epsilon_k^\prime\) and \(\mv W_k^{\prime\prime}=\mv I/\epsilon_k^{\prime\prime}\) for simplicity, where \(\epsilon_0\), \(\epsilon_0^\prime\), \(\epsilon_k\), \(\epsilon_k^\prime\), and \(\epsilon_k^{\prime\prime}\) represent the respective size of the bounded error regions, \(k=1,\ldots, K\).
Accordingly, we denote the robust counterpart of \(\mathrm{(P1^\prime)}\) as
\begin{subequations}
\begin{align}
\mathrm{(P2^\prime)}:&~~~\mathop{\mathtt{max}}_{\{\boldsymbol{Q}_k\},\boldsymbol{W}}\mathop{\mathtt{min}}_{{\tilde{\boldsymbol h}_0\in\tilde{\mathcal{H}}_0,\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop{\boldsymbol g_0\in\mathcal{G}_0,\boldsymbol g_k\in\mathcal{G}_k,\forall k}}\!\!\!\!\!\!\bar F(\{\mv Q_k\},\mv W)\notag\\
\mathtt{s.t.}& ~~~{\rm tr}\left(\mv W\left(P_s\mv{h}_0\mv{h}^H_0 +\sigma^2_r\mv{I}\right)\mv W^H \right )\leq P_r,\\
& ~~~{\rm tr}\left(\mv{Q}_k \right )\le \eta P_s\!\min\limits_{\boldsymbol h_k\in\mathcal{H}_k,\forall k}\!\left\|\mv{h}_k\right\|^2, \; \forall k,\\
&~~~\mv{Q}_k\succeq \mv 0, \; \forall k.
\end{align}
\end{subequations}
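Note that the inner minimization in the helper power constraint of \(\mathrm{(P2^\prime)}\) admits a closed form over the spherical region \(\mathcal H_k\), namely \(\min_{\|\Delta\mv h_k\|^2\le\epsilon_k^\prime}\|\hat{\mv h}_k+\Delta\mv h_k\|^2=\big(\big[\|\hat{\mv h}_k\|-\sqrt{\epsilon_k^\prime}\big]^+\big)^2\), so the robust harvested-energy budget can be precomputed; a one-line sketch:
\begin{verbatim}
import numpy as np

def robust_jam_budget(hk_hat, eps_kp, Ps, eta):
    """Worst-case bound on tr(Qk) in (P2') over the spherical region H_k."""
    worst = max(np.linalg.norm(hk_hat) - np.sqrt(eps_kp), 0.0)**2
    return eta*Ps*worst
\end{verbatim}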
An equivalent robust reformulation of \(\mathrm{(P1^\prime.2)}\) is given by
\begin{align}
\mathrm{(P2^\prime.2)}: \mathop{\mathtt{max}}_{\bar\gamma_e>0}
&~~~\frac{1+\hat H(\bar\gamma_e)}{1+\hat F(\bar\gamma_e)}, \label{eq:P(2'.2)}
\end{align}
where \(\hat F(\bar\gamma_e)=\bar\gamma_e\) and \(\hat H(\bar\gamma_e)\) denotes the optimal value of problem \(\mathrm{(P2^\prime.1)}\), which is given by
\begin{subequations}
\begin{align}
&\mathrm{(P2^\prime.1)}:\notag\\
&\!\!\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\}}\!\mathop{\min}_{{\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop{\tilde{\boldsymbol h}_0\in\tilde{\mathcal{H}}_0}}
~\frac{P_s{\rm tr}(\mv F_1\mv X)}{\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\sigma_b^2}\notag\\
&\mathtt{s.t.}\max\limits_{\boldsymbol g_k\in\mathcal{G}_k,\forall k\atop\boldsymbol g_0\in\mathcal{G}_0}\frac{P_s{\rm tr}(\mv F_2\mv X)}{\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X)+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\sigma_e^2}\le\bar\gamma_e, \label{eq:epigraph reformulation on gamma_e}\\
&{\rm tr}(\overline{\mv \Phi}\mv X)\leq P_r, \label{eq:constraint on the relay power}\\
&{\rm tr}\left(\mv{Q}_k \right )\le \eta P_s\!\min\limits_{\boldsymbol h_k\in\mathcal{H}_k,\forall k}\!\left\|\mv{h}_k\right\|^2, \; \forall k, \label{eq:constraints on the jamming power}\\
& {\rm rank}(\mv X)=1, \label{eq:constraint on the rank}\\
& \mv X\succeq \mv 0, \, \mv Q_k\succeq \mv 0, \; \forall k. \label{eq:PSD constriants}
\end{align}
\end{subequations}
Similar to Lemma \ref{lemma:P1'.2 same optimal value}, \(\mathrm{(P2^\prime)}\) can be shown to have the same optimal value as \(\mathrm{(P2^\prime.2)}\) and the same optimal solution as \(\mathrm{(P2^\prime.1)}\) when \(\bar\gamma_e\) takes its optimal value. As a result, \(\mathrm{(P2^\prime)}\) can be solved in a two-stage fashion as well: given any \(\bar\gamma_e\), we first solve \(\mathrm{(P2^\prime.1)}\) to obtain \(\hat H(\bar\gamma_e)\), and then search for the optimal \(\bar\gamma_e\) for \(\mathrm{(P2^\prime.2)}\).
\subsection{Solutions to $\rm (P2^\prime.1)$}
By ignoring \eqref{eq:constraint on the rank}, \(\mathrm{(P2^\prime.1)}\) is recast as
\begin{subequations}
\begin{align}
&\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}:\notag\\
&\!\!\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\}}\!\mathop{\min}_{{\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop{\tilde{\boldsymbol h}_0\in\tilde{\mathcal{H}}_0}}
~\frac{P_s{\rm tr}(\mv F_1\mv X)}{\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\sigma_b^2}\notag\\
&\mathtt{s.t.}\eqref{eq:epigraph reformulation on gamma_e}-\eqref{eq:constraints on the jamming power}, \, \eqref{eq:PSD constriants}.
\end{align}\label{eq:(P2'.1-RW-SDR-Eqv)}
\end{subequations}
It is worth noting that, due to the rank-one relaxation, the solution provided by \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}\) in general yields an upper bound for \(\hat H(\bar\gamma_e)\), which may not be achievable. Nevertheless, we still solve \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}\) in the sequel, since it serves as an upper-bound benchmark for the problem proposed later in this subsection.
\subsubsection{\textcolor{black}{Solutions to $\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}$}}
\textcolor{black}{To make the ``max-min'' objective function of \eqref{eq:(P2'.1-RW-SDR-Eqv)} tractable, we first rewrite \eqref{eq:(P2'.1-RW-SDR-Eqv)} by the equivalent epigraph formulation as
\begin{subequations}
\begin{align}
&~\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}:\nonumber\\
&\!\!\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\},\delta}~\delta\nonumber\\
&~\mathtt{s.t.}\min\limits_{{\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop{\tilde{\boldsymbol h}_0\in\tilde{\mathcal{H}}_0}}
\frac{P_s{\rm tr}(\mv F_1\mv X)}{\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger+\sigma_b^2}\ge\delta, \label{eq:epigraph reformulation on delta}\\
&~\eqref{eq:epigraph reformulation on gamma_e}-\eqref{eq:constraints on the jamming power}, \, \eqref{eq:PSD constriants}.
\end{align}
\end{subequations}
As there are potentially infinitely many constraints in \eqref{eq:epigraph reformulation on delta}, \eqref{eq:epigraph reformulation on gamma_e}, and \eqref{eq:constraints on the jamming power}, they are semi-infinite and thus intractable. In the following, we equivalently transform these constraints into tractable ones using the {\em S-Procedure} and a generalized {\em S-Procedure}, given in Lemmas~\ref{lemma:S-Procedure} and \ref{lemma:LMI from robust block QMI}, respectively.
}
\begin{lemma}\label{lemma:S-Procedure} ({\em S-Procedure} \cite{boyd2004convex}) Let \(f_m(\mv x)\), \(m=1,2\) be defined as
\begin{equation}\label{eq:f_m(x)}
f_m(\mv x)=\mv x^H\mv A_m\mv x+2\Re\{\mv b_m^H\mv x\}+c_m,
\end{equation}
where \(\mv A_m=\mv A_m^H\in\mathbb{C}^{N\times N}\), \(\mv b_m\in\mathbb{C}^{N\times 1}\) and \(c_m\in\mathbb{R}\), and $\Re$ gives the real part of the input entity. Then the implication \(f_1(\mv x)\ge 0\Rightarrow f_2(\mv x)\ge 0\) holds if and only if there exists $\delta\ge 0$ such that
\begin{equation}\label{eq:S-Procedure}
\begin{bmatrix} \mv A_2 & \mv b_2\\
\mv b_2^H & c_2\end{bmatrix}-\delta\begin{bmatrix}\mv A_1 &\mv b_1\\ \mv b_1^H &c_1 \end{bmatrix}\succeq \mv 0,
\end{equation}
provided there exists a point \(\hat{\mv x}\) such that \(f_m(\hat{\mv x})>0\), \(m=1,2\).
\end{lemma}
\begin{lemma}\label{lemma:LMI from robust block QMI}
(\cite[\em Theorem 3.5]{luo2004multivariate}) The robust block quadratic matrix inequality (QMI),
\begin{align}\label{eq:robust block QMI}
\begin{bmatrix}
\mv H & \mv F+\mv G\mv X\\
(\mv F+\mv G\mv X)^H & \mv C+\mv X^H\mv B+\mv B^H\mv X+\mv X^H\mv A\mv X
\end{bmatrix}\succeq\mv 0, \,\nonumber\\
\!\!\!\!\!\!\!\!\!\! \mbox{for all}\, \mv I-\mv X^H\mv D\mv X\succeq \mv 0,
\end{align}
is equivalent to
\begin{equation}\label{eq:LMI from robust block QMI}
\exists t\ge 0, \, \mbox{such that} \, \begin{bmatrix}
\mv H & \mv F & \mv G\\
\mv F^H & \mv C & \mv B^H\\
\mv G^H & \mv B & \mv A
\end{bmatrix} -t\begin{bmatrix}\mv 0 & \mv 0 & \mv 0 \\\mv 0 & \mv I & \mv 0\\\mv 0 &\mv 0 &-\mv D \end{bmatrix}\succeq\mv 0.
\end{equation}
\end{lemma}
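As an illustration of how Lemma \ref{lemma:S-Procedure} is invoked, the implication can be certified numerically through the feasibility of the LMI \eqref{eq:S-Procedure}; a CVXPY sketch with arbitrary Hermitian problem data (the explicit symmetrization merely helps the parser recognize the Hermitian structure):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def s_procedure_certificate(A1, b1, c1, A2, b2, c2):
    """Searches for delta >= 0 certifying f1(x) >= 0  =>  f2(x) >= 0."""
    M1 = np.block([[A1, b1[:, None]],
                   [b1.conj()[None, :], np.array([[c1]])]])
    M2 = np.block([[A2, b2[:, None]],
                   [b2.conj()[None, :], np.array([[c2]])]])
    d = cp.Variable(nonneg=True)
    M = M2 - d*M1
    prob = cp.Problem(cp.Minimize(0), [0.5*(M + M.H) >> 0])
    prob.solve(solver=cp.SCS)
    return prob.status in ('optimal', 'optimal_inaccurate'), d.value
\end{verbatim}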
\textcolor{black}{First, by rearranging terms, \eqref{eq:epigraph reformulation on delta} can be equivalently transformed into the following linear form:
\begin{multline}\label{eq:rearranging on delta}
\min\limits_{{\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop{\tilde{\boldsymbol h}_0\in\tilde{\mathcal{H}}_0}}\!\!P_s{\rm tr}(\mv F_1\mv X)-\delta\sigma_r^2{\rm tr}(\overline{\mv Y}_1\mv X)\\
-\delta\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dag-\delta\sigma_b^2\ge 0.
\end{multline}
Recalling the following matrix equalities in line with our definition of \({\rm vec}(\cdot)\) operation:
\begin{align}
{\rm tr}(\mv A\mv B^T)&={\rm vec}^T(\mv A){\rm vec}(\mv B), \\
{\rm vec}(\mv A\mv X\mv B)&=(\mv A\otimes\mv B^T){\rm vec}(\mv X), \\
(\mv A\otimes\mv B)^T&=\mv A^T\otimes\mv B^T,
\end{align}
it follows that
\begin{align}
{\rm tr}(\mv F_1\mv X)&=\tilde{\mv h}^T(\mv h_0\otimes\mv I)\mv X(\mv h_0^H\otimes\mv I)\tilde{\mv h}^\dag,\\
{\rm tr}(\overline{\mv Y}_1\mv X)&=\tilde{\mv h}^T(\mv I\otimes\mv X)\tilde{\mv h}^\dag, \label{eq:maniuplation on Y_1}
\end{align}
where \(\tilde{\mv h}={\rm vec}(\tilde{\mv h}_0^T\otimes\mv I)\in\mathbb{C}^{N_t^3\times1}\). The equivalent channel model for \(\tilde{\mv h}\) is given by \(\tilde{\mv h}=\hat{\tilde{\mv h}}+\Delta\tilde{\mv h}\), where \(\|\Delta\tilde{\mv h}\|^2\le N_t\epsilon_0^\prime\) (c.f.~\eqref{eq:uncertainty regions}). By introducing \(\mv X^{\prime\prime}=(\mv h_0\otimes\mv I)\mv X(\mv h_0^H\otimes\mv I)\) and \(\mv X^\prime=\mv I\otimes\mv X\), \eqref{eq:rearranging on delta} can thus be recast as
\begin{multline}\label{eq:semi-indefinite form of tilde_h for S-procedure}
\!\!\!\!\!\!\!\!\min\limits_{{\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop{\tilde{\boldsymbol h}_0\in\tilde{\mathcal{H}}_0}}\!\!\Delta\tilde{\mv h}^T(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\Delta\tilde{\mv h}^\dag+2\Re\{\Delta\tilde{\mv h}^T(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag\}\\
-\delta\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dag-\delta\sigma_b^2\ge 0.
\end{multline}
Hence, according to Lemma~\ref{lemma:S-Procedure}, the implication \(\|\Delta\tilde{\mv h}\|^2\le N_t\epsilon_0^\prime\Rightarrow\eqref{eq:semi-indefinite form of tilde_h for S-procedure}\) holds if and only if there exists \(w^{(0)}\ge 0\) such that the following LMI holds:
\begin{align}
\begin{bmatrix}\mv H_1 & \mv F_1\\
\mv F_1^H & c_1 \end{bmatrix}\succeq\mv 0, \label{eq:LMI of eqv obj for S-Procedure}
\end{align}
where $\mv H_1=P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime+w^{(0)}\mv I$, $\mv F_1=(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag$ and
$c_1=\hat{\tilde{\mv h}}^T(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag-\delta\sum_{k=1}^K{\tilde{\mv h}}_k^T\mv Q_k{\tilde{\mv h}}_k^\dag-\delta\sigma_b^2-w^{(0)}N_t\epsilon_0^\prime$.
Now, \eqref{eq:epigraph reformulation on delta} has been equivalently reformulated as \eqref{eq:LMI of eqv obj for S-Procedure}. To further cope with the channel uncertainties with regard to the \(\tilde{\mv h}_k\)'s, such that \eqref{eq:LMI of eqv obj for S-Procedure} holds for all \(\tilde{\mv h}_k\in\tilde{\mathcal H}_k\), \(k=1,\ldots,K\), we need the following proposition.}
\textcolor{black}{\begin{proposition}
The semi-infinite constraint of \eqref{eq:semi-indefinite form of tilde_h for S-procedure} can be equivalently recast as the following block matrix inequality:
\begin{align}
\begin{bmatrix} \mv H_1^{(K)} & \mv F_1^{(K)} & \mv G_1^{(K)}\\ \mv F_1^{(K)H} & c_1^{(K)} & \mv B_1^{(K)H}\\ \mv G_1^{(K)H} & \mv B_1^{(K)} & \mv A_1^{(K)}\end{bmatrix}-w^{(K)}\begin{bmatrix}\mv 0 & \mv 0 & \mv 0\\ \mv 0 & 1 & \mv 0\\ \mv 0 & \mv 0 & \frac{-\mv I}{\epsilon_K^{\prime\prime}} \end{bmatrix}\succeq\mv 0,\label{eq:LMI reformulation on delta}
\end{align} where \(\mv H_1^{(K)}\), \(\mv F_1^{(K)}\) and \(c_1^{(K)}\) are recursively given by
\begin{align}
\mv H_1^{(k)}&=\left\{\begin{array}{ll}
\begin{bmatrix}\mv A_1^{(k-1)}+\frac{w^{(k-1)}}{\epsilon_{k-1}^{\prime\prime}}\mv I & \mv G_1^{(k-1)H}\\ \mv G_1^{(k-1)} & \mv H_1^{(k-1)}\end{bmatrix}, & k>1;\\
P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime+w^{(0)}\mv I, & k=1,
\end{array}\right.\\
\mv F_1^{(k)}&=\left\{\begin{array}{ll}
\begin{bmatrix}\mv B_1^{(k-1)}\\ \mv F_1^{(k-1)} \end{bmatrix},& k>1;\\
(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag,& k=1,
\end{array}\right.
\end{align}
\(c_1^{(k)}=\hat{\tilde{\mv h}}^T(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag-\delta\sum_{j=1}^k\hat{\tilde{\mv h}}_j^T\mv Q_j\hat{\tilde{\mv h}}_j^\dag-\delta\sum_{i=k+1}^K{\tilde{\mv h}}_i^T\mv Q_i{\tilde{\mv h}}_i^\dag-\delta\sigma_b^2-w^{(0)}N_t\epsilon_0^\prime-\sum_{l=1}^{k-1}w^{(l)}\), \(k=1,\ldots,K\). In addition, \(\mv G_1^{(k)}\in\mathbb{C}^{(N_t^3+(k-1)N_t)\times N_t}=\mv 0\), \(\mv B_1^{(k)}=-\delta\mv Q_k\hat{\tilde{\mv h}}_k^\dag\), \(\mv A_1^{(k)}=-\delta\mv Q_k\), \(k=1,\ldots,K\), and \(\{w^{(k)}\ge 0\}\) denote pertinent auxiliary variables. \label{prop:eqv LMI wrt tilde_h1 till tilde_hK}
\end{proposition}}
\textcolor{black}{\begin{IEEEproof}
See Appendix \ref{appendix:proof of prop:eqv LMI wrt tilde_h1 till tilde_hk}.
\end{IEEEproof}}
\textcolor{black}{Next, \eqref{eq:epigraph reformulation on gamma_e} is rewritten as
\begin{multline}\label{eq:linear epigraph reformulation on gamma_e}
\max\limits_{\boldsymbol g_k\in\mathcal{G}_k,\forall k\atop\boldsymbol g_0\in\mathcal{G}_0}\!\!\mv g^T\left(P_s\mv X^{\prime\prime}-\bar\gamma_e\sigma_r^2\mv X^\prime\right)\mv g^\dag-\bar\gamma_e\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dag\\
-\bar\gamma_e\sigma_e^2\le 0,
\end{multline}
where \(\mv g={\rm vec}(\mv g_0^T\otimes\mv I)\in\mathbb{C}^{N_t^3\times 1}\) and the equivalent imperfect channel model is given by \(\mv g=\hat{\mv g}+\Delta\mv g\) such that \(\|\Delta\mv g\|^2\le N_t\epsilon_0\).
}
\textcolor{black}{\begin{proposition}
The semi-infinite constraint of \eqref{eq:linear epigraph reformulation on gamma_e} is satisfied if and only if there exists \(v^{(k)}\ge 0\), \(k=1,\ldots,K\), such that the following block matrix inequality holds:
\begin{align}
\begin{bmatrix} \mv H_2^{(K)} & \mv F_2^{(K)} & \mv G_2^{(K)}\\ \mv F_2^{(K)H} & c_2^{(K)} & \mv B_2^{(K)H}\\ \mv G_2^{(K)H} & \mv B_2^{(K)} & \mv A_2^{(K)}\end{bmatrix}-v^{(K)}\begin{bmatrix}\mv 0 & \mv 0 & \mv 0\\ \mv 0 & 1 & \mv 0\\ \mv 0 & \mv 0 & \frac{-\mv I}{\epsilon_K} \end{bmatrix}\succeq\mv 0, \label{eq:LMI reformulation on gamma_e}
\end{align}where \(\mv H_2^{(K)}\), \(\mv F_2^{(K)}\) and \(c_2^{(K)}\) are recursively given by
\begin{align}
\mv H_2^{(k)}&=\left\{\begin{array}{ll}
\begin{bmatrix}\mv A_2^{(k-1)}+\frac{v^{(k-1)}}{\epsilon_{k-1}}\mv I & \mv G_2^{(k-1)H}\\ \mv G_2^{(k-1)} & \mv H_2^{(k-1)}\end{bmatrix}, & k>1;\\
-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime+v^{(0)}\mv I, & k=1,
\end{array}\right.\\
\mv F_2^{(k)}&=\left\{\begin{array}{ll}
\begin{bmatrix}\mv B_2^{(k-1)}\\ \mv F_2^{(k-1)} \end{bmatrix},& k>1;\\
(-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime)\hat{\mv g}^\dag,& k=1,
\end{array}\right.
\end{align}
\begin{multline}
c_2^{(k)}=\hat{{\mv g}}^T(-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime)\hat{{\mv g}}^\dag+\bar\gamma_e\sum_{j=1}^k\hat{{\mv g}}_j^T\mv Q_j\hat{{\mv g}}_j^\dag+\\
\bar\gamma_e\sum_{i=k+1}^K{{\mv g}}_i^T\mv Q_i{{\mv g}}_i^\dag+\bar\gamma_e\sigma_e^2-v^{(0)}N_t\epsilon_0-\sum_{l=1}^{k-1}v^{(l)},
\end{multline}
\(k=1,\ldots,K\). Also, \(\mv G_2^{(k)}=\mv G_1^{(k)}\), \(\mv B_2^{(k)}=\bar\gamma_e\mv Q_k\hat{\mv g}_k^\dag\), and \(\mv A_2^{(k)}=\bar\gamma_e\mv Q_k\), \(k=1,\ldots,K\). \label{prop:eqv LMI wrt g1 till gK}
\end{proposition}}
\textcolor{black}{\begin{IEEEproof}
See Appendix \ref{appendix:proof of prop:eqv LMI wrt g1 till gk}.
\end{IEEEproof}}
Last, we rewrite \eqref{eq:constraints on the jamming power} to facilitate the robust optimization against the errors introduced by \(\Delta\mv h_k\)'s. By applying Lemma \ref{lemma:S-Procedure}, \eqref{eq:constraints on the jamming power} holds if and only if there exists \(\mu_k\ge 0\), \(k=1,\ldots,K\), such that the following LMI constraint is met:
\begin{align}\label{eq:eqv LMI of available AN for S-Procedure}
\begin{bmatrix}\eta P_s\mv I+\mu_k\mv I & \eta P_s\hat{\mv h}_k\\
\eta P_s\hat{\mv h}_k^H & \eta P_s\|\hat{\mv h}_k\|_2^2-{\rm tr}(\mv Q_k)-\mu_k\epsilon_k^\prime \end{bmatrix}\succeq\mv 0, \; \forall k.
\end{align}
As such, \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}\) is now simplified as
\begin{align}
\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}:&~~~\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\},\delta}\delta\nonumber\\
\mathtt{s.t.}&~~~\eqref{eq:LMI reformulation on delta}, \eqref{eq:LMI reformulation on gamma_e}, \eqref{eq:eqv LMI of available AN for S-Procedure},
\eqref{eq:constraint on the relay power}, \eqref{eq:PSD constriants}.\nonumber
\end{align}
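For instance, the per-helper LMI \eqref{eq:eqv LMI of available AN for S-Procedure} maps to one PSD constraint per helper; a CVXPY sketch with \(\mv Q_k\) and the multiplier \(\mu_k\) as decision variables (again symmetrizing for the parser):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def an_power_lmi(hk_hat, eps_kp, Ps, eta, Qk, mu_k):
    """LMI form of the robust AN-power constraint for helper k."""
    Nt = hk_hat.size
    b = (eta*Ps*hk_hat)[:, None]
    c = (eta*Ps*np.linalg.norm(hk_hat)**2
         - cp.real(cp.trace(Qk)) - mu_k*eps_kp)
    M = cp.bmat([[(eta*Ps + mu_k)*np.eye(Nt), b],
                 [b.conj().T, cp.reshape(c, (1, 1))]])
    return 0.5*(M + M.H) >> 0

# usage: Qk = cp.Variable((Nt, Nt), hermitian=True)
#        mu = cp.Variable(nonneg=True)
#        constraints += [an_power_lmi(h_hat, eps_p, Ps, eta, Qk, mu), Qk >> 0]
\end{verbatim}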
\textcolor{black}{Because of non-convex terms such as \(\delta\mv X^\prime\) in \eqref{eq:LMI reformulation on delta}, problem \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}\) remains hard to solve directly; we therefore solve it by the bisection method \cite{boyd2004convex} w.r.t.\ \(\delta\). However, running a bisection on top of the one-dimensional search over \(\bar\gamma_e\) required by \(\mathrm{(P2^\prime.2)}\) may lead to very high complexity. As a result, we propose an alternative problem to approximate \(\hat H(\bar\gamma_e)\).}
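The bisection loop itself is trivial; the cost lies entirely in the LMI feasibility problem solved at each step. A sketch with a hypothetical feasibility oracle:
\begin{verbatim}
def bisect_delta(feasible, delta_hi, tol=1e-4):
    """Bisection over delta for (P2'.1-RW-SDR-Eqv). `feasible(delta)` is a
    hypothetical oracle checking the LMIs at fixed delta; assumes
    feasible(0.0) holds and feasible(delta_hi) fails."""
    lo, hi = 0.0, delta_hi
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo      # largest certified delta, i.e., value of the relaxation
\end{verbatim}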
\subsubsection{Solutions to $\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}$}
We propose to approximate \(\hat H(\bar\gamma_e)\) by the optimum value of the following problem.
\begin{subequations}
\begin{align}
&\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}:~
\mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\},\tau} \min\limits_{\tilde{\boldsymbol{h}}_0\in\tilde{\mathcal H}_0}P_s{\rm tr}(\boldsymbol{F}_1\boldsymbol{X})\label{eq:robust obj for C-O transfomrmation}\\
& \mathtt{s.t.} \max\limits_{{\tilde{\boldsymbol{h}}_k\in\tilde{\mathcal H}_k,\forall k}\atop {\tilde{\boldsymbol{h}}_0\in\tilde{\mathcal H}_0}}\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X)\!+\!\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dagger\!+\!\tau\sigma_b^2 \!\le\! 1, \label{eq:robust constraint on C-O transformation}\\
&\max\limits_{{\boldsymbol{g}_k\in\mathcal{G}_k,\forall k}\atop {\boldsymbol{g}_0\in\mathcal{G}_0}}\frac{P_s{\rm tr}(\mv F_2\mv X)}{\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X)+\sum_{k=1}^K\mv g_k^T\mv Q_k\mv g_k^\dagger+\tau\sigma_e^2}\le\bar\gamma_e, \label{eq:robust constraint on SINR of Eve}\\
&{\rm tr}(\overline{\mv \Phi}\mv X)\leq \tau P_r, \label{eq:robust constraint on transmit power of the relay}\\
&{\rm tr}\left(\mv{Q}_k \right )\le \tau\eta P_s\!\min\limits_{\mv h_k\in\mathcal{H}_k,\forall k}\!\left\|\mv{h}_k\right\|^2, \; \forall k, \label{eq:robust constraint on amount of AN for each HJ helper}\\
&\mv X\succeq \mv 0, \, \mv Q_k\succeq \mv 0, \; \forall k, \, \tau\ge 0.\label{eq:robust constraint on PSD}
\end{align}
\end{subequations}
\textcolor{black}{\begin{remark}
It is worth noting that as the numerator and the denominator of the objective function in \(\mathrm{(P2^\prime.1)}\) are coupled by common uncertainty \(\tilde{\mv h}_0\), Charnes-Cooper transformation, in general, cannot be applied to realize equivalent decoupling.
As a result, \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) yields a more conservative approximation for \(\hat H(\bar\gamma_e)\) than \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}\). However, considering that \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) needs to be solved only once for a given \(\bar\gamma_e\), in contrast with \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)}\) requiring bisection over \(\delta\), we exploit it in the sequel. The effectiveness of this approximation will be evaluated in Section~\ref{subsec:The Imperfect CSI Case}.
\end{remark}}
To proceed, we rewrite $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ as
\begin{subequations}
\begin{align}
&\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}: \mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\},\delta,\tau}&~\delta~\notag\\
\mathtt{s.t.}&~\min\limits_{\tilde{\boldsymbol{h}}_0\in\tilde{\mathcal H}_0}P_s{\rm tr}(\mv F_1\mv X)\ge\delta,\label{eq:robust constraint on delta}\\
&~\eqref{eq:robust constraint on C-O transformation}\text{--}\eqref{eq:robust constraint on PSD}.
\end{align}
\end{subequations}
First, by rewriting \(\mv F_1=\mv f_1^\dag\mv f_1^T\), where \(\mv f_1=\hat{\mv f}_1+\Delta\mv f_1\), in line with Lemma \ref{lemma:S-Procedure}, the implication \(\|\Delta{\mv f}_1\|^2\le\|\mv h_0\|^2\epsilon_0^\prime\Rightarrow\eqref{eq:robust constraint on delta}\) holds if and only if there exists \(s^{\prime(0)}\ge 0\) such that the following LMI constraint is satisfied:
\begin{equation}\label{eq:LMI of obj for S-Procedure}
\begin{bmatrix}P_s\mv X+s^{\prime(0)}\mv I & P_s\mv X\hat{\mv f}_1^\dag\\
P_s\hat{\mv f}_1^T\mv X & P_s\hat{\mv f}_1^T\mv X\hat{\mv f}_1^\dag-s^{\prime(0)}\epsilon_0^\prime\|\mv h_0\|_2^2-\delta \end{bmatrix}\succeq\mv 0.
\end{equation}
Next, as \({\rm tr}(\overline{\mv Y}_1\mv X)=\mv y_1^T\mv X^\prime\mv y_1^\dag\) (c.f.~\eqref{eq:maniuplation on Y_1}), where \(\mv y_1={\rm vec}(\tilde{\mv h}_0^T\otimes\mv I)\), after some manipulation, \eqref{eq:robust constraint on C-O transformation} holds if and only if there exists \(s^{\prime\prime(0)}\ge 0\) such that
\begin{equation}\label{eq:LMI of y1 for S-Procedure}
\begin{bmatrix} s^{\prime\prime(0)}\mv I-\sigma_r^2\mv X^{\prime} \! & \! -\sigma_r^2\mv X^{\prime}\hat{\mv y}_1^\dag\\
-\sigma_r^2\hat{\mv y}_1^T\mv X^\prime \! & \! c \end{bmatrix}\succeq\mv 0,
\end{equation}
where $ c = -\sigma_r^2\hat{\mv y}_1^T\mv X^\prime\hat{\mv y}_1^\dag-\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k\tilde{\mv h}_k^\dag-\tau\sigma_b^2+1-s^{\prime\prime(0)}N_t\epsilon_0^\prime $.
Then \eqref{eq:robust constraint on C-O transformation} can be rewritten as \begin{align}
\eqref{eq:LMI of y1 for S-Procedure}\ \mbox{for}\ \tilde{\mv h}_k\in\tilde{\mathcal H}_k, \; \forall k, \label{eq:robust reformulation on C-O transformation}
\end{align} which is handled by the following proposition.
\begin{proposition}
The semi-infinite constraints in \eqref{eq:robust reformulation on C-O transformation} can be replaced by the following LMI constraint:
\begin{equation}\label{eq:LMI wrt tilde_h1 till tilde_hK}
\begin{bmatrix}\bar{\mv H}^{(K)}&\bar{\mv F}^{(K)} &\bar{\mv G}^{(K)} \\
\bar{\mv F}^{(K)H} & \bar c^{(K)} & \bar{\mv B}^{(K)H} \\
\bar{\mv G}^{(K)H}& \bar{\mv B}^{(K)}& \bar{\mv A}^{(K)}
\end{bmatrix}-s^{\prime\prime(K)}\begin{bmatrix} \mv 0 & \mv 0 &\mv 0\\\mv 0 &1 &\mv 0\\ \mv 0&\mv 0 &\frac{-\mv I}{\epsilon_K^{\prime\prime}} \end{bmatrix}\succeq {\bf 0},
\end{equation}
where \(\bar{\mv H}^{(K)}\) and \(\bar{\mv F}^{(K)}\) are recursively given by
\begin{align}
\!\!\!\begin{cases}\bar{\mv H}^{(k)}\!\!=\!\!\begin{bmatrix}\bar{\mv A}^{(k-1)}\!+\!\frac{s^{\prime\prime(k-1)}\mv I}{\epsilon_{k-1}^{\prime\prime}} \!&\! \bar{\mv G}^{(k-1)H}\\ \bar{\mv G}^{(k-1)}&\bar{\mv H}^{(k-1)} \end{bmatrix}, \bar{\mv F}^{(k)}\!\!=\!\!\begin{bmatrix}\bar{\mv B}^{(k-1)} \\ \bar{\mv F}^{(k-1)}\end{bmatrix}\\
k=2,\ldots,K;\\
\bar{\mv H}^{(1)}=s^{\prime\prime(0)}\mv I-\sigma_r^2\mv X^{\prime}, \bar{\mv F}^{(1)}=-\sigma_r^2\mv X^\prime\hat{\mv y}_1^\dag,
\end{cases}\label{eq:recursive relation of bar Hk and bar Fk}
\end{align}
where
$\bar{\mv G}^{(k)}=\mv G_1^{(k)}$, $\bar{\mv B}^{(k)}=-\mv Q_k\hat{\tilde{\mv h}}_k^\dag$, $\bar{\mv A}^{(k)}=-\mv Q_k$, $\bar c^{(k)}=-\sigma_r^2\hat{\mv y}_1^T\mv X^\prime\hat{\mv y}_1^\dag-\sum_{j=1}^k\hat{\tilde{\mv h}}_j^T\mv Q_j\hat{\tilde{\mv h}}_j^\dag-\sum_{i=k+1}^K\tilde{\mv h}_i^T\mv Q_i\tilde{\mv h}_i^\dag-\tau\sigma_b^2+1-s^{\prime\prime(0)}N_t\epsilon_0^\prime-\sum_{l=1}^{k-1}s^{\prime\prime(l)}$, \(k=1,\ldots,K\), and \(\{s^{\prime\prime(k)}\ge 0\}\) denote the auxiliary variables. \label{prop:LMI wrt tilde_h1 till tilde_hK}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:proof of prop:LMI wrt tilde_h1 till tilde_hK}.
\end{IEEEproof}
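Numerically, the recursion in \eqref{eq:recursive relation of bar Hk and bar Fk} amounts to repeatedly bordering the previous blocks. A minimal numpy sketch of one recursion step is given below; all block matrices are assumed to be supplied as arrays, and the variable names are ours rather than from the formulation:
\begin{verbatim}
import numpy as np

def next_blocks(H_prev, F_prev, G_prev, B_prev, A_prev,
                s_prev, eps_prev):
    # One step of the recursion: builds H^(k), F^(k) from the
    # (k-1)-th blocks by bordering, mirroring the equation above.
    n = A_prev.shape[0]
    H_k = np.block([[A_prev + (s_prev / eps_prev) * np.eye(n),
                     G_prev.conj().T],
                    [G_prev, H_prev]])
    F_k = np.vstack([B_prev, F_prev])
    return H_k, F_k

# Shape check for N_t = 2 (random placeholder data):
Nt = 2
H1 = np.eye(Nt**2); F1 = np.ones((Nt**2, 1))
G1 = np.zeros((Nt**2, Nt)); B1 = np.ones((Nt, 1)); A1 = -np.eye(Nt)
H2, F2 = next_blocks(H1, F1, G1, B1, A1, s_prev=1.0, eps_prev=0.1)
print(H2.shape, F2.shape)  # (6, 6) (6, 1)
\end{verbatim}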
\begin{proposition}
The constraint in \eqref{eq:robust constraint on SINR of Eve} is guaranteed if and only if there exists \(s^{(k)}\ge 0\), \(k=1,\ldots,K\), such that the following LMI holds:
\begin{equation}\label{eq:LMI wrt g1 till gK}
\begin{bmatrix}
\mv H^{(K)}&\mv F^{(K)} &\mv G^{(K)} \\
\mv F^{(K)H} & c^{\prime(K)} & \mv B^{\prime(K)H} \\
\mv G^{(K)H}& \mv B^{\prime(K)}& \mv A^{\prime(K)}
\end{bmatrix}-s^{(K)}\begin{bmatrix} \mv 0 & \mv 0 &\mv 0\\\mv 0 &1 &\mv 0\\ \mv 0&\mv 0 &\frac{-\mv I}{\epsilon_K} \end{bmatrix}\succeq {\bf 0},
\end{equation}
where \(\mv H^{(k)}\) and \(\mv F^{(k)}\) are recursively given by
\begin{align}
\!\!\!\begin{cases}\mv H^{(k)}\!\!=\!\!\begin{bmatrix}
\mv A^{\prime(k-1)}\!+\!\frac{s^{(k-1)}\mv I}{\epsilon_{k-1}}\! &\!\mv G^{(k-1)H} \\
\mv G^{(k-1)}& \mv H^{(k-1)}
\end{bmatrix}, \mv F^{(k)}\!\!=\!\!\begin{bmatrix} \mv B^{\prime(k-1)}\\ \mv F^{(k-1)} \end{bmatrix}\\
k=2,\ldots,K;\\
\mv H^{(1)}=-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime+s^{(0)}\mv I,\\
\mv F^{(1)}=\left(-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime\right )\hat{\mv g}^\dag,
\end{cases}\label{eq:recursive relation of Hk and Fk}
\end{align}
in which \(\mv G^{(k)}=\mv G_1^{(k)}\), \(\mv B^{\prime(k)}=\bar\gamma_e\mv Q_k\hat{\mv g}_k^\dagger\), \(\mv A^{\prime(k)}=\bar\gamma_e\mv Q_k\), $c^{\prime(k)}=\hat{{\mv g}}^T(-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime)\hat{{\mv g}}^\dag+\bar\gamma_e\sum_{j=1}^k\hat{{\mv g}}_j^T\mv Q_j\hat{{\mv g}}_j^\dag+\bar\gamma_e\sum_{i=k+1}^K{{\mv g}}_i^T\mv Q_i{{\mv g}}_i^\dag+
\bar\gamma_e\tau\sigma_e^2-s^{(0)}N_t\epsilon_0-\sum_{l=1}^{k-1}s^{(l)}$, $k=1, \ldots, K$,
and \(\{s^{(k)}\ge 0\}\) denote the auxiliary variables. \label{prop:LMI wrt g1 till gK}
\end{proposition}
\begin{IEEEproof}
It is observed that \eqref{eq:robust constraint on SINR of Eve} differs from \eqref{eq:epigraph reformulation on gamma_e} in the only respect that \(\sigma_e^2\) is replaced by \(\tau\sigma_e^2\). Hence the proof for Proposition~\ref{prop:eqv LMI wrt g1 till gK} can be directly applied herein by substituting \(\tau\sigma_e^2\) for \(\sigma_e^2\).
\end{IEEEproof}
Last, since \eqref{eq:robust constraint on amount of AN for each HJ helper} is obtained from \eqref{eq:constraints on the jamming power} by replacing ``\(\eta P_s\)'' with ``\(\tau\eta P_s\)'', \eqref{eq:robust constraint on amount of AN for each HJ helper} can be replaced by an LMI similar to \eqref{eq:eqv LMI of available AN for S-Procedure}, denoted by $\rm (68e^\prime)$, in which the pertinent auxiliary variables are denoted by \(\{\mu_k\ge 0\}\).
Consequently, the equivalent reformulation for problem $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ can be summarized as
\begin{subequations}
\begin{align}
&\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}: \mathop{\mathtt{max}}_{\boldsymbol{X},\{\boldsymbol{Q}_k\},\delta,\tau,\atop {s^{(0)}, s^{\prime(0)},s^{\prime\prime(0)},\atop \{s^{(k)}\},\{s^{\prime\prime(k)}\},\{\mu_k\}}}\delta\notag\\
&\mathtt{s.t.}~~~\eqref{eq:LMI of obj for S-Procedure}, \eqref{eq:LMI wrt tilde_h1 till tilde_hK}, \eqref{eq:LMI wrt g1 till gK}, {\rm (68e^\prime)}, \eqref{eq:robust constraint on transmit power of the relay}, \eqref{eq:robust constraint on PSD}, \notag\\
&\, s^{(0)}\ge 0, \, s^{\prime(0)}\ge 0, \, s^{\prime\prime(0)}\ge 0, \label{eq:non negative s0}\\
&s^{(k)}\ge 0, \, s^{\prime\prime(k)}\ge 0, \, \mu_k\ge 0, \; \forall k. \label{eq:non negative sk}
\end{align}
\end{subequations}
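Since all constraints above are LMIs in the decision variables, the reformulated problem is a standard SDP. As a hedged illustration only, the following CVXPY fragment assembles a single S-procedure LMI of the form of \eqref{eq:LMI of obj for S-Procedure} on a real-valued toy instance; the constants and the extra trace bound are placeholders, and the remaining LMIs would be added analogously:
\begin{verbatim}
import numpy as np
import cvxpy as cp

n = 4                                  # stands for N_t^2
Ps, eps0p, h0_norm_sq = 1.0, 0.1, 1.0  # illustrative constants
f1 = np.random.randn(n, 1)             # stands for hat{f}_1 (real toy)

X = cp.Variable((n, n), PSD=True)
delta = cp.Variable()
s0 = cp.Variable(nonneg=True)          # stands for s'^(0)

# Schur-complement certificate assembled with cp.bmat;
# the (2,2) block is scalar.
br = cp.reshape(Ps * cp.quad_form(f1, X)
                - s0 * eps0p * h0_norm_sq - delta, (1, 1))
lmi = cp.bmat([[Ps * X + s0 * np.eye(n), Ps * X @ f1],
               [Ps * f1.T @ X, br]])

prob = cp.Problem(cp.Maximize(delta),
                  [lmi >> 0, cp.trace(X) <= 1.0])
prob.solve()
print(prob.status, float(delta.value))
\end{verbatim}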
\subsection{Proposed Rank-One Solutions to $\rm (P2^\prime.1)$}\label{subsec:Proposed Solutions}
\(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) is convex and can be solved efficiently by convex optimization tools such as CVX. Next, we derive the Lagrangian of \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\). \textcolor{black}{Note that in the following expression, for simplicity we only consider the uncertainties regarding \(\tilde{\mv h}_0\), \(\mv h_k\)'s, \(\tilde{\mv h}_k\)'s, \(\mv g_0\) and \(\mv g_k\)'s for the case of \(K=1\); the results can be easily extended to \(K>1\).} Denote the dual variables associated with \eqref{eq:robust constraint on transmit power of the relay}, \eqref{eq:LMI of obj for S-Procedure}, \eqref{eq:LMI wrt tilde_h1 till tilde_hK} and \eqref{eq:LMI wrt g1 till gK} by \(\beta_0\), \(\mv W\), \(\mv V\) and \(\mv Y\), respectively. \textcolor{black}{Then the partial Lagrangian of \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) w.r.t.~\(\mv X\) is
\begin{align}
L(\overline{\mv\Omega})=\mathrm{tr}(\bar{\mv A}\mv X), \label{eq:Lagrangian of (P1-joint-Robust-SDR)}
\end{align}
where \(\overline{\mv\Omega}\) is the set of all primal and dual variables, and
\begin{equation}\label{eq:overline A}
\begin{aligned}
&\overline{\mv A}=P_s\mv W_{1,1}+2P_s\Re\{\hat{\mv f}_1^\dag\mv W_{1,2}^T\}+P_sw_{2,2}\hat{\mv F}_1\\
&-\sigma_r^2\sum_{i=1}^{N_t}\left(\mv V_{1,1}^{(i,i)}+2\Re\{\overline{\mv V}_{1,2}^{(i,i)}\}+\overline{\mv V}_{2,2}^{(i,i)}\right )\\
&-P_s(\mv h_0^H\otimes\mv I)\mv Y_{1,1}(\mv h_0\otimes\mv I)+2\sigma_r^2\bar\gamma_e\sum_{i=1}^{N_t^2}\Re\{\overline{\mv Y}_{2,1}^{(i,i)}\}\\
&-2P_s\Re\{(\mv h_0^H\otimes\mv I)\overline{\mv Y}_{2,1}(\mv h_0\otimes\mv I)\}\\
&-P_sy_{2,2}(\mv h_0^H\otimes\mv I)\hat{\mv g}^\dag\hat{\mv g}^T(\mv h_0\otimes\mv I)+\sigma_r^2\bar\gamma_e\sum_{i=1}^{N_t^2}\overline{\mv Y}_{2,2}^{(i,i)}.
\end{aligned}
\end{equation}
In \eqref{eq:overline A}, \(\hat{\mv F}_1=\hat{\mv f}_1^\dag\hat{\mv f}_1^T\); \(\mv W_{i,j}\), \(i,j=1,2\), \(\mv V_{i,j}\), \(i,j=1,\ldots,3\) and \(\mv Y_{i,j}\), \(i,j=1,\ldots,3\) are the block submatrices of \(\mv W\in\mathbb{C}^{(N_t^2+1)\times(N_t^2+1)}\), \(\mv V\in\mathbb{C}^{(N_t^3+N_t+1)\times(N_t^3+N_t+1)}\) and \(\mv Y\in\mathbb{C}^{(N_t^3+2N_t+1)\times(N_t^3+2N_t+1)}\) with the same size as block submatrices in \eqref{eq:LMI of obj for S-Procedure}, \eqref{eq:LMI wrt tilde_h1 till tilde_hK} and \eqref{eq:LMI wrt g1 till gK}, respectively.}
Moreover, in \eqref{eq:overline A}, \(\overline{\mv V}_{1,2}=\hat{\mv y}_1^\dag{\mv V}_{1,2}^T\), \(\overline{\mv V}_{2,2}= v_{2,2}\hat{\mv y}_1^\dag\hat{\mv y}_1^T\), \(\overline{\mv Y}_{2,1}=\hat{\mv g}^\dag\mv y_{1,2}^T\) and \(\overline{\mv Y}_{2,2}=y_{2,2}\hat{\mv g}^\dag\hat{\mv g}^T\). Furthermore, \(\mv V_{1,1}^{(i,i)}\), \(\overline{\mv V}_{1,2}^{(i,i)}\) and \(\overline{\mv V}_{2,2}^{(i,i)}\) are the \(i\)th block diagonal submatrices of \(\mv V_{1,1}\in\mathbb{C}^{N_t^3\times N_t^3}\), \(\overline{\mv V}_{1,2}\in\mathbb{C}^{N_t^3\times N_t^3}\) and \(\overline{\mv V}_{2,2}\in\mathbb{C}^{N_t^3\times N_t^3}\), respectively; \(\overline{\mv Y}_{2,1}^{(i,i)}\) and \(\overline{\mv Y}_{2,2}^{(i,i)}\) are the \(i\)th block diagonal submatrices of \(\mv Y_{2,1}\in\mathbb{C}^{N_t^3\times N_t^3}\), and \(\overline{\mv Y}_{2,2}\in\mathbb{C}^{N_t^3\times N_t^3}\), respectively.
\begin{proposition}
\begin{enumerate}
\item The optimal \(\mv X^\ast\) to $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ is expressed as
\begin{align}
\mv X^\ast=\sum_{n=1}^{N_t^2-\bar r_c}\bar a_n\bar{\mv \eta}_n\bar{\mv \eta}_n^H+\bar b\bar{\mv \xi}\bar{\mv \xi}^H, \label{eq:structure of robust X}
\end{align}
where \(\bar a_n\ge 0\), \(\forall n\), \(\bar b>0\), and \(\bar{\mv\xi}\in\mathbb{C}^{N_t^2\times1}\) is a unit-norm vector orthogonal to \(\bar{\mv\Xi}\) (c.f.~\eqref{eq:structure of optimal X}).
\item According to \eqref{eq:structure of robust X}, if \({\rm rank}(\mv X^\ast)>1\), i.e., there exists at least one \(\bar a_n>0\), we reconstruct a solution to problem $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ using
\begin{align}
\hat{\mv X}^\ast & =\bar b\bar{\mv\xi}\bar{\mv\xi}^H, \label{eq:reconstructed structure of suboptimal hat X}\\
\hat\delta^\ast & =\delta^\ast, \label{eq:reconstructed suboptimal delta}\\
\hat\tau^\ast & =\tau^\ast, \label{eq:reconstructed suboptimal tau}
\end{align}
while \(\{\hat{\mv Q}_k^\ast\}\) are obtained by solving the following feasibility problem provided that \(\hat{\mv X}^\ast\), \(\hat\delta^\ast\), and \(\hat\tau^\ast\) are given by \eqref{eq:reconstructed structure of suboptimal hat X}, \eqref{eq:reconstructed suboptimal delta} and \eqref{eq:reconstructed suboptimal tau}, respectively:
\begin{align*}
&\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}sub)}:~ \mathop{\mathtt{max}}_{\{\boldsymbol{Q}_k\}, s^{\prime\prime(0)},\atop \{s^{\prime\prime(k)}\},\{\mu_k\}} 0\\
&\mathtt{s.t.}~~~\eqref{eq:LMI wrt tilde_h1 till tilde_hK}\ \mbox{given}\ \hat{\mv X}^\ast, \hat\tau^\ast, \, \, {\rm (68e^\prime)}\ \mbox{given}\ \hat\tau^\ast,\\
&\boldsymbol{Q}_k\succeq\mv 0,\, \mu_k\ge0, \; \forall k,\\
&s^{\prime\prime(0)}\ge0,\, s^{\prime\prime(k)}\ge 0, \; \forall k.
\end{align*}
\end{enumerate}\label{prop:structure of robust X and its rank-one reconstruction}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:proof of prop:structure of robust X and its rank-one reconstruction}.
\end{IEEEproof}
The scheme that solves \(\mathrm{(P2^\prime)}\) is summarized in Table \ref{table:Algorithm II}.
\begin{table}[htp]
\begin{center}
\vspace{0.025cm}
\caption{\textcolor{black}{\rm Algorithm II for \(\mathrm{(P2^\prime)}\)}} \label{table:Algorithm II}
\vspace{-0.05cm}
\hrule
\vspace{0.2cm}
\begin{itemize}
\item {\bf Initialize} \(\bar\gamma_{e\_{\rm search}}^\prime=0:\alpha^\prime:\bar\gamma_{e\max}^\prime\) and $i=0$
\item {\bf Repeat}
\begin{itemize}
\item [1)] {\bf Set} $i=i+1$;
\item [2)] Given \(\bar\gamma_e=\bar\gamma_{e\_{\rm search}}^\prime(i)\),\\
{\bf solve} \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) and {\bf obtain} \(\hat H(\bar\gamma_e^{(i)})\).
\end{itemize}
\item {\bf Until} \(i=L^\prime\), where \(L^\prime=\lfloor{\tfrac{\bar\gamma_{e\max}^\prime}{\alpha^\prime}}\rfloor+1\) is the length of \(\bar\gamma_{e\_{\rm search}}^\prime\)
\item {\bf Set} \(\bar\gamma_e^\ast=\bar\gamma_{e\_{\rm search}}^\prime\left(\!\arg\max\limits_{i}\!\left\{\tfrac{1+\hat H(\bar\gamma_e^{(i)})}{1+\bar\gamma_e^{(i)}}\right\}\right)\) for \(\mathrm{(P2^\prime.2)}\)
\item Given \(\bar\gamma_e^\ast\), {\bf solve} \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) to obtain \((\mv X^\ast, \{\mv Q_k^\ast\}, \delta^\ast,\tau^\ast)\)\\
{\bf if} \({\rm rank}(\mv X^\ast)=1\), {\bf apply} EVD on \(\mv X^\ast\) such that \(\mv X^\ast=\mv w^\ast\mv w^{\ast H}\);\\
{\bf else}
\begin{itemize}
\item {\bf construct} \((\hat{\mv X}^\ast,\hat\delta^\ast,\hat\tau^\ast)\), according to \eqref{eq:reconstructed structure of suboptimal hat X}-\eqref{eq:reconstructed suboptimal tau} and {\bf set} \(\mv w^\ast=\sqrt{\bar b}\bar{\mv\xi}\);
\item given \(\hat{\mv X}^\ast\), \(\hat\delta^\ast\) and \(\hat\tau^\ast\),\\
{\bf obtain} \(\{\hat{\mv Q}_k^\ast\}\) by solving \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR\text{-}sub)}\).
\end{itemize}
{\bf end}
\item {\bf Recover} \(\mv W^\ast={\rm vec}^{-1}(\mv w^\ast)\)
\end{itemize}
\vspace{0.2cm} \hrule
\end{center}
\end{table}
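In pseudo-Python, the outer structure of Algorithm II in Table \ref{table:Algorithm II} is a one-dimensional grid search followed by the beamformer recovery; the solver call \texttt{solve\_rw\_sdr} below is a hypothetical stand-in for solving \(\mathrm{(P2^\prime.1\text{-}RW\text{-}SDR)}\) and returning \(\hat H(\bar\gamma_e)\):
\begin{verbatim}
import numpy as np

def algorithm_ii_search(solve_rw_sdr, gamma_max, alpha):
    # Grid search over bar_gamma_e, cf. steps 1)-2) of Algorithm II.
    grid = np.arange(0.0, gamma_max + alpha, alpha)
    H = np.array([solve_rw_sdr(g) for g in grid])
    i_best = np.argmax((1.0 + H) / (1.0 + grid))
    return grid[i_best]

# Toy stand-in for the SDP: a concave "achievable SNR" curve.
g_star = algorithm_ii_search(lambda g: 10.0 * g / (1.0 + g),
                             gamma_max=20.0, alpha=0.1)
print(g_star)
\end{verbatim}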
\section{Numerical Results}\label{sec:Numerical Results}
Here we provide numerical examples to validate our results.
We assume a typical scenario where the \(K\) helpers are evenly distributed around Alice with a radius of \(\rho_k=2\)m and \(\theta_k=\tfrac{2\pi(k-1)}{K}\) (radian by default), where \(\theta_k\) is the angle of direction (w.r.t.~the Alice-relay link by default) of the \(k\)th helper, \(k=1,\ldots,K\). Alice, Bob and Eve are, w.l.o.g., assumed to be at the same distance from the AF relay, with angles of direction \(\pi\), \(\pi/6\) and \(11\pi/6\), respectively. We also assume channel models with both large-scale fading, i.e., path loss and shadowing, and small-scale fading, i.e., multi-path fading. The simplified large-scale fading model is given by \cite{wang2011general}
\begin{equation}\label{eq:shadowing+pass loss}
D=zA_0\left(\frac{d}{d_0}\right)^{-\alpha},~\mbox{for }d\geq d_0,
\end{equation}
\textcolor{black}{where \(z\) is a log-normal random variable capturing the effect of shadowing with the standard deviation \(\sigma=4\)dB, \(A_0=10^{-3}\), \(d\) is the distance, \(d_0\) is a reference distance set to be \(1\)m, and \(\alpha=2\) is the path loss exponent.} Specifically, the channels including \(\mv h_k\)'s, \(\mv h_0\), \(\tilde{\mv h}_0\) and \(\mv g_0\) are assumed to suffer from Rician fading, while the channels from the HJ helpers to Bob (\(\tilde{\mv h}_k\)'s) and Eve (\(\mv g_k\)'s) follow the Rayleigh distribution due to the absence of line-of-sight (LOS) components, with their respective average gains specified by \eqref{eq:shadowing+pass loss}. Taking \(\mv h_k\), \(\forall k\), as an example, \(\mv h_k=\sqrt{\tfrac{K_R}{K_R+1}}\bar{\mv h}_k+\sqrt{\tfrac{1}{K_R+1}}\check{\mv h}_k\), where \(\bar{\mv h}_k\) is the LOS component with \(\|\bar{\mv h}_k\|_2^2=D\) (c.f.~\eqref{eq:shadowing+pass loss}), \(\check{\mv h}_k\) is the Rayleigh fading component denoted by \(\check{\mv h}_k\sim\mathcal{CN}(0,D\mv I)\), and \(K_R\) is the Rician factor set to be \(3\). Note that for the involved LOS component, we use the far-field uniform linear antenna array to model the channels \cite{karipidis2007far}.
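As a hedged illustration, one draw of such a channel could be generated as follows (a minimal numpy sketch with the parameter values stated above; the LOS steering vector is simplified to a normalized all-ones vector rather than the far-field array response of \cite{karipidis2007far}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def large_scale_gain(d, A0=1e-3, d0=1.0, alpha=2.0, sigma_db=4.0):
    # Log-normal shadowing with 4 dB standard deviation
    # (large-scale model above).
    z = 10.0 ** (rng.normal(0.0, sigma_db) / 10.0)
    return z * A0 * (d / d0) ** (-alpha)

def rician_channel(Nt, d, K_R=3.0):
    D = large_scale_gain(d)
    h_los = np.sqrt(D) * np.ones(Nt) / np.sqrt(Nt)  # placeholder LOS
    h_nlos = np.sqrt(D / 2.0) * (rng.standard_normal(Nt)
                                 + 1j * rng.standard_normal(Nt))
    return (np.sqrt(K_R / (K_R + 1.0)) * h_los
            + np.sqrt(1.0 / (K_R + 1.0)) * h_nlos)

h_k = rician_channel(Nt=3, d=2.0)
print(np.linalg.norm(h_k) ** 2)   # on the order of D
\end{verbatim}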
In addition, unless otherwise specified, the number of HJ helpers \(K\) is set to be \(5\); the AF relay is assumed to be \(5\)m away from Alice; the EH efficiency is \(\eta=0.5\); and \(\sigma_r^2=\sigma_b^2=\sigma_e^2=-50\)dBm. The results presented in Section \ref{subsec:The Perfect CSI Case} are obtained by averaging over \(500\) independent trials.
\subsection{The Perfect CSI Case}\label{subsec:The Perfect CSI Case}
We compare the proposed optimal solutions with three suboptimal schemes in the case of perfect CSI. \textcolor{black}{One suboptimal scheme, denoted by ``Suboptimal~1'', is introduced in Section~\ref{subsubsec:structure-based} by exploiting the optimal structure of \(\mv W\). The other, described in Section~\ref{subsubsec:ZF}, is known as optimal null-space ZF and is denoted by ``Suboptimal~2''. Specifically, each jamming beam \(\mv n_k\) is restricted to lie in the orthogonal space of \(\tilde{\mv h}_k^\dag\) such that \(\mv n_k\)'s cause no interference to the IR while maximizing the jamming effect at the eavesdropper. As a benchmark, we also present the well-known \emph{isotropic jamming}, denoted by ``Suboptimal~3'', which is particularly useful when no Eve's CSI is known at each \emph{HJ} helper \({\sf H}_k\), \(\forall k\) \cite{Luo2013uncoordinated}. Note that the difference between ``Suboptimal~2'' and ``Suboptimal~3'' only lies in the design of the jamming noise: the former aligns the jamming noise to an equivalent Eve's channel to confront Eve with the most interference, while the latter transmits isotropic jamming with \(\tilde{\mv n}_k\sim\mathcal{CN}(\mv 0, \eta P_s\|\mv h_k\|^2/(N_t-1))\), \(k=1,\ldots, K\), in directions orthogonal to \(\tilde{\mv h}_k\)'s, due to the lack of knowledge of Eve's channel, and is thus expected to be less efficient than ``Suboptimal~2'' with perfect CSI.}
\begin{figure}[tp]
\centering
\epsfxsize=1\linewidth
\includegraphics[width=7.5cm]{Perfect_sec_vs_Ps.eps}
\caption{{Secrecy rate versus Alice's transmit power with perfect CSI.}}\label{fig:sec_vs_Ps}
\end{figure}
First, we study the secrecy rate at the receiver versus the transmit power of the transmitter, \(P_s\), with \(P_r=10\)dBm. Fig.~\ref{fig:sec_vs_Ps} demonstrates that for both cases of \(N_t=3\) and \(N_t=5\), the average secrecy rate increases and tends to saturate as \(P_s\) approaches \(30\)dBm. It also illustrates that ``Suboptimal 1'' and ``Suboptimal 2'' closely approach the optimal solutions, while ``Suboptimal 3'' is outperformed by a larger margin as the number of antennas at the AF relay and the HJ helpers grows. Moreover, with \(N_t\) increasing, the average secrecy rate gets larger as a result of the higher array gain of the AF relay and more available power transferred to the HJ helpers.
\begin{figure}[tp]
\centering
\epsfxsize=1\linewidth
\includegraphics[width=7.5cm]{Perfect_sec_vs_Pr.eps}
\caption{{Secrecy rate versus the relay's transmit power with perfect CSI.}}\label{fig:sec_vs_Pr}
\vspace{-1.5em}
\end{figure}
In addition, we show in Fig.~\ref{fig:sec_vs_Pr} the secrecy rate achieved by different schemes versus the transmit power of the AF relay, \(P_r\), with \(P_s=30\)dBm. It is seen that the average secrecy rate grows quickly at first and then more slowly, since when \(P_r\) increases, not only the desired signal but also the noise yielded from the first transmission phase is amplified to a larger extent. In addition, the performance gap between the optimal scheme and the suboptimal schemes is almost negligible. Similar to Fig.~\ref{fig:sec_vs_Ps}, \textcolor{black}{``Suboptimal 3'' suffers some performance loss relative to the optimal scheme but remains a promising scheme when no Eve's CSI is available at the HJ helpers.}
\subsection{The Imperfect CSI Case}\label{subsec:The Imperfect CSI Case}
Now, we consider the imperfect CSI case and compare the proposed scheme \emph{Robust with HJ}, which is obtained by solving $\rm (P2^\prime.1\text{-}RW\text{-}SDR\text{-}sub)$, against some benchmarks. \textcolor{black}{Note that there are two upper-bound benchmark schemes, namely, \emph{Robust SDR with HJ} and \emph{Robust-eqv with HJ}, as well as two lower-bound benchmarks, which are \emph{Robust w/o HJ} and \emph{Non-robust with HJ}. For \emph{Robust SDR with HJ} (\emph{Robust-eqv with HJ}), given any $\bar\gamma_e$, $\hat H(\bar\gamma_e)$ is approximated by solving the rank-constraint-relaxed problem $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ ($\rm (P2^\prime.1\text{-}RW\text{-}SDR\text{-}Eqv)$).} On the other hand, for \emph{Robust w/o HJ}, we solve $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ by setting $\mv Q_k=0$, $\forall k$, while for \emph{Non-robust with HJ}, \eqref{eq:achievable secrecy rate} is evaluated by applying the optimal solutions to $\rm (P1^\prime.1)$, which assume perfect CSI, to the actual channels including errors generated from the sets defined in \eqref{eq:uncertainty regions}.
To assess the worst-case secrecy performance, we use the metric, namely, \emph{secrecy outage probability}, defined as \cite{Liang2008}:
\begin{equation}\label{eq:sec outage prob}
p=\Pr(r\le r_0^\ast),
\end{equation}
\textcolor{black}{where \(r_0^\ast\), obtained by solving $\rm (P2^\prime)$, is termed the $100p\%$-\emph{secrecy outage rate}.}
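Empirically, the \(100p\%\)-secrecy outage rate is simply the \(p\)-quantile of the achieved secrecy-rate samples; a short numpy illustration with synthetic samples (our own toy data, not the simulated rates):
\begin{verbatim}
import numpy as np

rates = np.random.default_rng(1).gamma(2.0, 1.0, 1000)  # synthetic
p = 0.05
r0 = np.quantile(rates, p)   # Pr(r <= r0) is approximately p
print(r0)
\end{verbatim}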
The parameters are set identical to those in the perfect CSI case. Regarding the uncertainty model in \eqref{eq:uncertainty regions}, we introduce the uncertainty ratios associated with \(\epsilon_0\), \(\epsilon_0^\prime\), \(\epsilon_k\), \(\epsilon_k^\prime\) and \(\epsilon_k^{\prime\prime}\) as \(\alpha_0\), \(\alpha_0^\prime\), \(\alpha_k\), \(\alpha_k^\prime\) and \(\alpha_k^{\prime\prime}\), respectively. For instance, \(\alpha_0\) is
\begin{align}
\alpha_0^2 & =\frac{\epsilon_0}{\mathbb{E}[\|\boldsymbol{g}_0\|^2]},\label{eq:alpha0}
\end{align}
while \(\alpha_0^\prime\), \(\alpha_k\)'s, \(\alpha_k^\prime\)'s and \(\alpha_k^{\prime\prime}\)'s are similarly defined and thus omitted here for brevity. Besides, it is reasonable to assume that the channel estimates w.r.t.~Eve suffer from more errors than those for Alice and Bob. Hence, we set \(\alpha_0^{\prime2}=\alpha_k^{\prime2}=\alpha_k^{\prime\prime2}=1\%\) while \(\alpha_0^2=\alpha_k^2=10\%\), \(k=1,\ldots,K\), unless otherwise specified.
\begin{figure}[tp]
\centering
\epsfxsize=1\linewidth
\includegraphics[width=7.5cm]{Outage_sec_CDF.eps}
\caption{\textcolor{black}{CDFs of the achievable secrecy rate.}}
\label{fig:CDF}
\vspace{-1.5em}
\end{figure}
Fig.~\ref{fig:CDF} demonstrates the cumulative distribution function (CDF) of the achievable secrecy rate from \(1000\) samples of random channel errors uniformly distributed over the sets defined by \eqref{eq:uncertainty regions} given a fixed actual channel realization. We set \(P_r=20\)dBm, \(P_s=30\)dBm, \(N_t=3\), \(K=5\) and \(\alpha_0^{\prime2}=\alpha_k^{\prime2}=\alpha_k^{\prime\prime2}=2\%\), \(k=1,\ldots, K\). \textcolor{black}{Despite being suboptimal to the upper-bound schemes ``Robust SDR with HJ'' and ``Robust-eqv with HJ'', the proposed ``Robust with HJ'' scheme outperforms its non-robust counterpart ``Non-robust with HJ'', particularly in the low range of probability, and overwhelmingly surpasses ``Robust w/o HJ''. For example, ``Robust with HJ'' can achieve a secrecy rate of around \(3.5\)bps/Hz in the \(3\%\) worst case versus \(3.3\)bps/Hz and \(1.0\)bps/Hz for ``Non-robust with HJ'' and ``Robust w/o HJ'', respectively. The solutions for ``Robust SDR with HJ'' are also seen to admit a very small gap from those for ``Robust-eqv with HJ'', which suggests that approximating \(\hat H(\bar\gamma_e)\) by solving the complexity-reduced ``Robust SDR with HJ'' leads to almost no performance loss.}
\begin{figure}[tp]
\centering
\epsfxsize=1\linewidth
\includegraphics[width=7.5cm]{Outage_sec_vs_K.eps}
\caption{{Secrecy outage probability for \(K=3\) and \(K=5\) HJ helpers, respectively.}} \label{fig:outage_vs_K}
\vspace{-1.5em}
\end{figure}
Fig.~\ref{fig:outage_vs_K} illustrates the CDF of the achievable secrecy rate from \(1000\) samples of random channel errors generated in the same way as Fig.~\ref{fig:CDF}, with \(P_r=20\)dBm, \(P_s=30\)dBm and \(N_t=3\). It is observed that the proposed solutions to ``Robust with HJ'' nearly achieve their rank-constraint-relaxed upper bound, i.e., ``Robust SDR with HJ'', throughout the whole range of outage probability. Moreover, ``Robust w/o HJ'' yields the worst performance. In particular, when the outage probability falls to \(3\%\), ``Robust w/o HJ'' achieves a worst-case secrecy rate of less than \(1\)bps/Hz, while the proposed scheme can still guarantee an outage rate of roughly \(1.64\)bps/Hz and \(2.07\)bps/Hz for \(K=3\) and \(K=5\), respectively. \textcolor{black}{Also, it is observed that increasing the number of HJ helpers will improve the secrecy performance, but we do not draw conclusions on the extent to which the secrecy rate can increase, since it also depends on the level of channel estimation inaccuracy. For example, more HJ helpers may also yield larger interference to the legitimate receiver if the channels from the HJ helpers to Bob are not as well estimated as in this instance of \(\alpha_k^{\prime\prime2}=1\%\), \(\forall k\).} Hence, we suggest that in practice a moderate number of HJ helpers is sufficient in view of the trade-off between complexity and performance.
\begin{figure}[tp]
\centering
\epsfxsize=1\linewidth
\includegraphics[width=7.5cm]{Outage_sec_vs_error.eps}
\caption{{Secrecy outage rate versus the normalized channel errors.}} \label{fig:outage_vs_error}
\vspace{-1.5em}
\end{figure}
Fig.~\ref{fig:outage_vs_error} shows two different levels (\(p=0.20\) and \(p=0.30\)) of the secrecy outage rate versus the channel uncertainty ratios (assuming \(\alpha_0=\alpha_k\), \(k=1,\ldots,K\)), in which \(P_r=30\)dBm, \(P_s=30\)dBm, \(N_t=3\) and \(K=5\). It is observed that the secrecy outage rate of the proposed schemes decreases slowly with the eavesdropper's CSI error ratios, which validates the motivation for the worst-case robust optimization. It is worth noting that the advantage of the \emph{HJ} protocol is more significant when the normalized channel uncertainty of Eve's channels surpasses \(10\%\), since the \emph{HJ} scheme provides more degrees of freedom for the robust design and is thus capable of guaranteeing a larger worst-case secrecy rate against worse channel conditions compared to the scheme without \emph{HJ}. The reasonably small suboptimality of the proposed ``Robust with HJ'' is also seen, as in Figs.~\ref{fig:CDF} and \ref{fig:outage_vs_K}.
\begin{figure}[tp]
\centering
\epsfxsize=1\linewidth
\includegraphics[width=7.5cm]{Outage_sec_vs_Pr.eps}
\caption{{Secrecy outage rate versus the relay's transmit power.}} \label{fig:outage_vs_Pr}
\vspace{-1.5em}
\end{figure}
Fig.~\ref{fig:outage_vs_Pr} studies the \(100p\%\)-secrecy outage rate for \(p=0.05\) and \(p=0.20\), respectively, versus the transmit power of the AF relay. Specifically, we set \(P_s=30\)dBm, \(N_t=3\), and \(K=5\). As similarly observed in Fig.~\ref{fig:outage_vs_error}, the robust schemes with the assistance of HJ helpers perform considerably better than the solutions without HJ helpers. \textcolor{black}{Furthermore, when the transmit power is set relatively large, i.e., \(P_s=30\)dBm, it is seen that continuously increasing \(P_r\) does not contribute much to the secrecy performance, because in this situation the increased amplified noise at the AF relay compromises the performance, which provides useful insight for the practical setting of \(P_r\). In addition, the proposed ``Robust with HJ'' is observed to strike a good trade-off between optimality and complexity compared with the two upper-bound solutions.}
\section{Conclusion}\label{sec:Conclusion}
This paper considered improving secret wireless communication in a multi-antenna AF relay wiretap channel via a novel {\em harvest-and-jam (HJ)} relaying protocol.
The AN covariance matrices at the HJ helpers and the AF relay beamforming matrix were jointly optimized to maximize the achievable secrecy rate and/or worst-case secrecy rate at the legitimate receiver, subject to the transmit power constraints of the AF relay as well as the HJ helpers, under perfect and imperfect CSI, respectively, using the technique of semi-definite relaxation (SDR). The SDR was shown to be tight for the perfect CSI case, while suboptimal rank-one reconstruction algorithms for the robust formulation under imperfect CSI were presented, achieving promising trade-offs between complexity and performance. The effectiveness of the proposed schemes was also verified by numerical results.
\begin{appendix}
\subsection{Proof of Proposition \ref{prop:structure of optimal X and its rank-one reconstruction}} \label{appendix:proof of prop:structure of optimal X and its rank-one reconstruction}
The KKT conditions of $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ are given by
\begin{subequations}\label{eq:KKT of (P1'.1-joint-SDR)}
\begin{align}
\mv A^\ast\mv X^\ast &=\mv 0, \label{eq:complimentary slackness wrt X}\\
\mv B_k^\ast\mv Q_k^\ast&=\mv 0, \; \forall k, \label{eq:complimentary slackness wrt Qk}\\
\beta_k^\ast\left(\mathrm{tr}(\mv Q_k^\ast)-\tau^\ast\eta P_s\|\mv h_k\|^2\right)&=0, \; \forall k. \label{eq:complimentary slackness wrt trace(Qk)}
\end{align}
\end{subequations}
According to \eqref{eq:Bk}, if for certain \(k\), \(\beta_k^\ast=0\), then \(\mv B_k^\ast=-\lambda^\ast\tilde{\mv h}_k^\ast\tilde{\mv h}_k^T+\alpha^\ast\bar\gamma_e\mv g_k^\ast\mv g_k^T\) and thus \(\mathrm{rank}(\mv B_k^\ast)\le \mathrm{rank}(\tilde{\mv h}_k^\ast\tilde{\mv h}_k^T)+\mathrm{rank}(\mv g_k^\ast\mv g_k^T)=2\), which yields \(\mathrm{rank}(\mv Q_k^\ast)\ge N_t-2\) as a result of \eqref{eq:complimentary slackness wrt Qk}. Otherwise, when \(\beta_k^\ast>0\), we will have \(\mathrm{rank}(\mv B_k^\ast)\ge\mathrm{rank}(-\beta_k^\ast\mv I-\lambda^\ast\tilde{\mv h}_k^\ast\tilde{\mv h}_k^T)-\mathrm{rank}(\alpha^\ast\bar\gamma_e\mv g_k^\ast\mv g_k^T)=N_t-1\) \cite[\emph{Lemma~A.1}]{Liu2014Secrecy}, which implies \(\mathrm{rank}(\mv Q_k^\ast)\le 1\). However, \(\mathrm{rank}(\mv Q_k^\ast)\) cannot be \(0\), since otherwise \(\mathrm{tr}(\mv Q_k^\ast)-\tau^\ast\eta P_s\|\mv h_k\|^2<0\) and thus \(\beta_k^\ast=0\) according to \eqref{eq:complimentary slackness wrt trace(Qk)}, which contradicts \(\beta_k^\ast>0\). Hence, when \(\beta_k^\ast>0\), \(\mathrm{rank}(\mv Q_k^\ast)=1\).
Next, define \(\mv C^\ast=-\lambda^\ast\sigma_r^2\overline{\mv Y}_1-\alpha^\ast P_s\mv F_2+\alpha^\ast\bar\gamma_e\sigma_r^2\overline{\mv Y}_2-\beta_0^\ast\overline{\mv \Phi}\) and according to \eqref{eq:A}, we have
\begin{align}\label{eq:A star}
\mv A^\ast=P_s\mv F_1+\mv C^\ast.
\end{align}
Then define \(r_c\), \(\mv\Xi\) and \(\mv\eta_n\), \(n=1,\ldots,N_t^2-r_c\) (c.f.~\eqref{eq:structure of optimal X}). Similar to the approach used in \cite[Appendix B]{Liu2014Secrecy}, we discuss the structure of the optimal \(\mv X\) under two cases.
\begin{enumerate}[(1)]
\item {\bf Case I}: \(r_c=N_t^2\).
As \(\mv C^\ast\) is full-rank, \(\mathrm{rank}(\mv A^\ast)\ge r_c-1=N_t^2-1\) and hence \(N_t^2-1\le\mathrm{rank}(\mv A^\ast)\le N_t^2\). If \(\mathrm{rank}(\mv A^\ast)=N_t^2-1\), \(\mathrm{rank}(\mathbf{null}(\mv A^\ast))=1\) and it follows that \(\mv X^\ast=b\mv\xi\mv\xi^H\) by assuming \(\mv\xi\) as the only basis of \({\bf null}(\mv A^\ast)\). Otherwise, according to \eqref{eq:complimentary slackness wrt X}, we obtain \(\mv X^\ast=\mv 0\), which ceases the secrecy transmission and cannot be the optimal solution to $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$.
\item{\bf Case II}: \(r_c<N_t^2\).
If \(\mv C^\ast\) is not full-rank, \(\mathrm{rank}(\mv A^\ast)\ge r_c-1\). Then by pre-multiplying \(\mv\eta_n^H\) and post-multiplying \(\mv\eta_n\in\mv\Xi\) with both sides of \eqref{eq:A star}, we have
\begin{align}\label{eq:null(C star) in null(A star)}
\mv\eta_n^H\mv A^\ast\mv\eta_n=P_s\mv\eta_n^H\mv F_1\mv\eta_n+\mv\eta_n^H\mv C^\ast\mv\eta_n=P_s\mv\eta_n^H\mv F_1\mv\eta_n,\; \forall n.
\end{align}
According to \eqref{eq:Lagrangian of (P1.1-RW-SDR)}, it is necessary that \(\mv A^\ast\preceq\mv 0\) in order to obtain an optimal solution \(\mv X^\ast\), and therefore \(\mv\eta_n^H\mv A^\ast\mv\eta_n\le 0\), which is consistent with \(P_s\mv\eta_n^H\mv F_1\mv\eta_n\ge 0\) if and only if \(\mv A^\ast\mv\eta_n=0\) and \(\mv F_1\mv\eta_n=0\). Hence, \(\mv\Xi\subseteq{\bf null}(\mv A^\ast)\), i.e., \(N_t^2-\mathrm{rank}(\mv A^\ast)\ge N_t^2-r_c\Rightarrow\mathrm{rank}(\mv A^\ast)\le r_c\).
Next, we show \(\mathrm{rank}(\mv A^\ast)\neq r_c\) by contradiction. If \(\mathrm{rank}(\mv A^\ast)=r_c\), \(\mv\Xi={\bf null}(\mv A^\ast)\), and \(\mv X^\ast=\sum_{n=1}^{N_t^2-r_c}a_n\mv\eta_n\mv\eta_n^H\). However, in this case, since \(\mv F_1\mv\eta_n=0\), \(P_s\mathrm{tr}(\mv F_1\mv X^\ast)=0\), which is apparently not optimal. Hence, we have \(\mathrm{rank}(\mv A^\ast)=r_c-1\) and thus \(\mathrm{rank}({\bf null}(\mv A^\ast))=N_t^2-r_c+1\). This indicates that besides the basis in \(\mv\Xi\), \({\bf null}(\mv A^\ast)\) spans over an extra dimension of basis, which is denoted by \(\mv\xi\), and hence \(\mv X^\ast=\sum_{n=1}^{N_t^2-r_c}a_n\mv\eta_n\mv\eta_n^H+b\mv\xi\mv\xi^H\).
\end{enumerate}
Assume that \((\mv X^\ast,\{\mv Q_k^\ast\},\tau^\ast)\) is the optimal solution to $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ with \(\mathrm{rank}(\mv X^\ast)>1\). Then construct a new solution \(\{\hat{\mv X}^\ast,\hat{\mv Q}_k^\ast,\hat\tau^\ast\}\) according to \eqref{eq:reconstructed structure of optimal hat X}--\eqref{eq:reconstructed tau}. Now we verify that the reconstructed solution is feasible provided that \eqref{eq:sufficient condition for rank-one X} holds.
First,
\begin{align}
&\sigma^2_r{\rm tr}(\overline {\mv Y}_1\hat{\mv X}^\ast)+\sum_{k=1}^K\tilde{\mv h}_k^T\hat{\mv Q}_k^\ast\tilde{\mv h}_k^\dagger+\hat\tau^\ast\sigma_b^2\notag\\
&\le\sigma^2_r{\rm tr}\left(\overline {\mv Y}_1\left(\mv X^\ast-\sum_{n=1}^{N_t^2-r_c}a_n\mv\eta_n\mv\eta_n^H\right)\right)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k^\ast\tilde{\mv h}_k^\dagger\nonumber\\
&+\left(\tau^\ast+\tfrac{\sigma_r^2}{\sigma_b^2}\sum_{n=1}^{N_t^2-r_c}a_n\mathrm{tr}(\overline{\mv Y}_1\mv\eta_n\mv\eta_n^H)\right)\sigma^2_b\notag\\
&=\sigma^2_r{\rm tr}(\overline {\mv Y}_1\mv X^\ast)+\sum_{k=1}^K\tilde{\mv h}_k^T\mv Q_k^\ast\tilde{\mv h}_k^\dagger+\tau^\ast\sigma^2_b\stackrel{(a)}{\le}1.\label{eq:reconstructed C-O transformation constraint}
\end{align}
Moreover,
\begin{align}
&P_s{\rm tr}(\mv F_2\hat{\mv X}^\ast)=P_s{\rm tr}\left(\mv F_2(\mv X^\ast-\sum_{n=1}^{N_t^2-r_c}a_n\mv\eta_n\mv\eta_n^H)\right)\nonumber\\
&\stackrel{(b)}{\le}\bar\gamma_e\left(\sigma^2_r{\rm tr}(\overline {\mv Y}_2\mv X^\ast)+\sum_{k=1}^K\mv g_k^T\mv Q_k^\ast\mv g_k^\dagger+\tau^\ast\sigma_e^2\right)\nonumber\\
&+\bar\gamma_e\left(\sigma^2_e\Delta\tau-\sigma^2_r{\rm tr}\left(\overline{\mv Y}_2\sum_{n=1}^{N_t^2-r_c}a_n\mv\eta_n\mv\eta_n^H\right)\right)\nonumber\\
&=\bar\gamma_e\left(\sigma^2_r{\rm tr}(\overline {\mv Y}_2\hat{\mv X}^\ast)+ \sum_{k=1}^K\mv g_k^T\hat{\mv Q}_k^\ast\mv g_k^\dagger+\hat\tau^\ast\sigma^2_e\right).\label{eq:reconstructed SINR of Eve constraint}
\end{align}
In addition, \eqref{eq:constraint on transmit power of the relay}--\eqref{eq:constraint on X, Qk and tau} are easily shown to be satisfied.
In the above, \((a)\) and \( (b)\) hold due to the feasibility in \eqref{eq:constraint on C-O transformation} and \eqref{eq:constraint on SINR of Eve}, respectively. Further, \(P_s{\rm tr}(\mv F_1\hat{\mv X}^\ast)=P_s\mathrm{tr}(\mv F_1\mv X^\ast)\) shows that the reconstructed solution achieves the same optimum value as that of $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$. Hence, an optimal solution to $\rm (P1^\prime.1\text{-}RW\text{-}SDR)$ with rank-one \(\mv X\) is ensured.
\subsection{Proof of Lemma \ref{lemma:optimal structure for W}} \label{appendix:proof of lemma:optimal structure for W}
First, we construct \(\mv W\) as
\begin{align}\label{eq:decomposition of W}
\mv W & =[\mv U_1^\dag,(\mv U_1^{\bot})^\dag]\begin{bmatrix}
\mv B & \mv C \\
\mv D& \mv E
\end{bmatrix}[\mv U_2,\mv U_2^{\bot}]^H\notag\\
& =\mv U_1^\dag\mv B\mv U_2^H+\mv U_1^\dag\mv C(\mv U_2^{\bot})^H+(\mv U_1^{\bot})^\dag\mv D\mv U_2^H\nonumber\\
&+(\mv U_1^{\bot})^\dag\mv E(\mv U_2^{\bot})^H,
\end{align}
where \(\mv B\in\mathbb{C}^{2\times2}\), \(\mv C\in\mathbb{C}^{2\times (N_t-2)}\), \(\mv D\in\mathbb{C}^{(N_t-2)\times 2}\) and \(\mv E\in\mathbb{C}^{(N_t-2)\times (N_t-2)}\) are undetermined matrices. Then according to \eqref{eq:SVD of H1} and \eqref{eq:SVD of H2}, it follows that \(\vert\widetilde{\mv{h}}^T_0\mv{Wh}_0\vert^2=\vert\widetilde{\mv h}_0^T\mv U_1^\dag\mv B\mv U_2^H\mv h_0\vert^2\) and \(\widetilde{\mv{h}}^T_0\mv{WW}^H\widetilde{\mv{h}}_0^\dagger=\|\mv B^H\mv U_1^T\widetilde{\mv h}_0^\dagger\|^2+\|\mv C^H\mv U_1^T\widetilde{\mv h}_0^\dagger\|^2\). Similarly, we also have \(\vert{\mv{g}}^T_0\mv{Wh}_0\vert^2=\vert{\mv g}_0^T\mv U_1^\dag\mv B\mv U_2^H\mv h_0\vert^2\) and \({\mv{g}}^T_0\mv{WW}^H\mv g_0^\dagger=\|\mv g_0^T\mv U_1^\dagger\mv B\|^2+\|\mv g_0^T\mv U_1^\dagger\mv C\|^2\). Thus, \(\gamma_b\) (c.f.~\eqref{eq:SINR at Bob}) and \(\gamma_e\) (c.f.~\eqref{eq:SINR at Eve}) do not depend on \(\mv D\) and \(\mv E\).
Next, by substituting \eqref{eq:decomposition of W} for \(\mv W\) in \eqref{eq:transmit power at the relay}, we have \(P_r\ge P_s(\|\mv B\mv U_2^H\mv h_0\|^2+\|\mv D\mv U_2^H\mv h_0\|^2)+\sigma_r^2\mathrm{tr}(\mv B^H\mv B+\mv C^H\mv C+\mv D^H\mv D+\mv E^H\mv E )\).
Since $\rm (P1^\prime)$ is a secrecy rate maximization problem subject to the given \(P_r\), it turns out that, for the optimum secrecy rate, the required power is minimized by taking \(\mv D=\mv 0\) and \(\mv E=\mv 0\), while \(\mv B\) and \(\mv C\) cannot be determined directly. Thus, \(\mv W=\mv U_1^\dagger\mv B\mv U_2^H+\mv U_1^\dagger\mv C(\mv U_2^{\bot})^H\).
\subsection{Proof of Proposition \ref{prop:semi-closed form}}\label{appendix:proof of prop:semi-closed form}
Denoting the dual variables associated with \eqref{eq:C1 of ZF}, \eqref{eq:C2 of ZF} and \eqref{eq:C3 of ZF} by \(\lambda\), \(\alpha\) and \(\beta_0\), respectively, the Lagrangian of \(\mathrm{(P1^\prime.1\text{-}sub2\text{-}SDR)}\) is expressed as
\begin{multline}
L(\chi)=\\
{\rm tr}\left((P_s\mv F_1-\lambda\sigma_r^2\overline{\mv Y}_1-\alpha P_s\mv F_2+\alpha\bar\gamma_e\sigma_r^2\overline{\mv Y}_2-\beta_0\overline{\mv\Phi})\mv X \right )\\
+\left(-\lambda\sigma_b^2+\alpha\bar\gamma_e(q+\sigma_e^2)+\beta_0P_r\right )\tau+\lambda, \label{eq:Lagrangian of (P1'.1-sub2-SDR)}
\end{multline}
where \(\chi=\{\mv X,\tau,\lambda,\alpha,\beta_0\}\) denotes the set consisting of all the primal and dual variables. Since problem \(\mathrm{(P1^\prime.1\text{-}sub2\text{-}SDR)}\) satisfies the Slater condition, its optimum value admits zero duality gap with its dual counterpart. Furthermore, according to \eqref{eq:Lagrangian of (P1'.1-sub2-SDR)}, in order for the dual function to be bounded from above, the following constraints must hold:
\begin{align}
&\!\!\!\!\!\!\mv Z= P_s\mv F_1-\lambda\sigma_r^2\overline{\mv Y}_1-\alpha P_s\mv F_2+\alpha\bar\gamma_e\sigma_r^2\overline{\mv Y}_2-\beta_0\overline{\mv\Phi}\preceq\mv 0, \label{eq:bounded Lagrangian for Z}\\
&\!\!\!\!-\lambda\sigma_b^2+\alpha\bar\gamma_e(q+\sigma_e^2)+\beta_0P_r\le 0. \label{eq:bounded Lagrangian for tau}
\end{align}
The dual problem is therefore given by
\begin{subequations}
\begin{align}
\mathrm{(D\text{-}P1^\prime.1\text{-}sub2\text{-}SDR)}:\!\!\mathop{\mathtt{min}}_{\lambda,\alpha,\beta_0}\!\!&~\lambda\nonumber\\
\mathtt{s.t.}&~\eqref{eq:bounded Lagrangian for Z}, \eqref{eq:bounded Lagrangian for tau},\\
&~(\lambda,\alpha,\beta_0)^T\ge \mv 0.
\end{align}
\end{subequations}
It is observed that \(\mv Z\) is of the same form as the Hessian matrix with respect to \(\mv X\) without rank relaxation. According to \cite[Theorem~2.1]{Ai2009}, \(\mv Z\preceq\mv 0\) implies that the SDR problem \(\mathrm{(P1^\prime.1\text{-}sub2\text{-}SDR)}\) is tight in this case, i.e., \(\exists\mv w^\ast\) such that $\mv X^\ast=\mv w^\ast\mv w^{\ast H}$. Moreover, since the KKT conditions necessitate \(\mv Z^\ast\mv X^\ast=\mv 0\), it follows that \(\mv w^\ast\) is the eigenvector corresponding to the zero eigenvalue of \(\mv Z^\ast\). Hence, we have \(\mv w^\ast=\mu\mv\nu_{\max}(\mv Z^\ast)\), where \(\mu=\sqrt{\tfrac{P_r}{{\rm tr}(\overline{\mv\Phi}\mv\nu_{\max}(\mv Z^\ast)\mv\nu_{\max}^H(\mv Z^\ast))}}\) is due to the power constraint \eqref{eq:constraint on transmit power of the relay}, which completes the proof.
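Numerically, recovering \(\mv w^\ast\) from \(\mv Z^\ast\) is a single eigen-decomposition. A minimal numpy sketch on a synthetic negative semidefinite \(\mv Z^\ast\) (with \(\overline{\mv\Phi}\) taken as the identity purely for illustration) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Synthetic Z* <= 0 with a one-dimensional null space.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
Z = -U @ np.diag([0.0, 1.0, 2.0, 3.0]) @ U.T
Phi = np.eye(n)                # stands for bar{Phi} (illustrative)
P_r = 1.0                      # relay power budget

evals, V = np.linalg.eigh(Z)   # ascending eigenvalues
nu = V[:, np.argmax(evals)]    # eigenvector of the zero eigenvalue
mu = np.sqrt(P_r / np.trace(Phi @ np.outer(nu, nu.conj())))
w_star = mu * nu
print(w_star @ Z @ w_star)     # ~ 0
\end{verbatim}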
\subsection{Proof of Proposition \ref{prop:eqv LMI wrt tilde_h1 till tilde_hK}}\label{appendix:proof of prop:eqv LMI wrt tilde_h1 till tilde_hk}
First, given \(\tilde{\mv h}_k\), \(k=2,\ldots,K\), fixed, we consider only the uncertainty of \(\tilde{\mv h}_1\). Since \(\|\Delta\tilde{\mv h}_1\|_2^2\le\epsilon_1^{\prime\prime}\), we have \(1-\frac{(\Delta\tilde{\boldsymbol{h}}_1^\dag)^H\Delta\tilde{\boldsymbol{h}}_1^\dag}{\epsilon_1^{\prime\prime}}\ge 0\). By applying Lemma \ref{lemma:LMI from robust block QMI} to \eqref{eq:LMI of eqv obj for S-Procedure} with \({\mv H_1}^{(1)}=P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime+w^{(0)}\mv I\), \({\mv F_1}^{(1)}=(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag\), \({\mv G_1}^{(1)}=\mv 0\),
$c_1^{(1)}=\hat{\tilde{\mv h}}^T(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag-\delta\hat{\tilde{\mv h}}_1^T\mv Q_1\hat{\tilde{\mv h}}_1^\dag-\delta\sum_{i=2}^K{\tilde{\mv h}}_i^T\mv Q_i{\tilde{\mv h}}_i^\dag-\delta\sigma_b^2-w^{(0)}N_t\epsilon_0^\prime$, \({\mv B_1}^{(1)}=-\delta\mv Q_1\hat{\tilde{\mv h}}_1^\dag\), and \({\mv A_1}^{(1)}=-\delta\mv Q_1\), there exists \(w^{(1)}\ge 0\) such that the following LMI holds:{\small
\begin{equation}\label{eq:LMI of eqv tilde h1 for S-Procedure}
\begin{bmatrix} {\mv H}_1^{(1)}&{\mv F_1}^{(1)}&{\mv G_1}^{(1)}\\
{\mv F_1}^{(1)H}& c_1^{(1)} &{\mv B_1}^{(1)H}\\
{\mv G_1}^{(1)H}& {\mv B_1}^{(1)}&{\mv A_1}^{(1)}\end{bmatrix}-w^{(1)}\begin{bmatrix} \mv 0&\mv 0&\mv 0\\
\mv 0 & { 1} & \mv 0\\
\mv 0 & \mv 0 & \frac{-\mv I}{\epsilon_1^{\prime\prime}}\end{bmatrix}\succeq{\mv 0}.
\end{equation}}
Note that for \(\mv Q_1\succeq{\mv 0}\), there always exists \(w^{(1)}>0\) such that \(\tfrac{w^{(1)}\mv I}{\epsilon_1^{\prime\prime}}+{\mv A_1}^{(1)}\succ\mv 0\), and we assume that such a constraint is imposed. According to the property of Schur complements \cite[A.~5.5]{boyd2004convex}, for \eqref{eq:LMI of eqv tilde h1 for S-Procedure}, we have {\small
\begin{equation}\label{eq:array of eqv tilde_h1 for S-procedure}
\left\{\begin{aligned}
&\begin{bmatrix}
{\mv H_1}^{(1)}&{\mv F_1}^{(1)}\\
{\mv F_1}^{(1)H}& c_1^{(1)}-w^{(1)}\end{bmatrix}\\
&-\begin{bmatrix} {\mv G_1}^{(1)}\\ {\mv B_1}^{(1)H}\end{bmatrix}\left({\mv A_1}^{(1)}+\tfrac{w^{(1)}\mv I}{\epsilon_1^{\prime\prime}} \right )^{-1}\begin{bmatrix}{\mv G_1}^{(1)H}&{\mv B_1}^{(1)} \end{bmatrix}\succeq\mv 0,\\
&\tfrac{w^{(1)}\mv I}{\epsilon_1^{\prime\prime}}+{\mv A_1}^{(1)}\succ \mv 0,
\end{aligned}\right.
\end{equation}}
which can be reexpressed as {\small
\begin{align}
\begin{bmatrix} {\mv A_1}^{(1)}+\tfrac{w^{(1)}\mv I}{\epsilon_1^{\prime\prime}}&{\mv G_1}^{(1)H}&{\mv B_1}^{(1)}\\
{\mv G_1}^{(1)}& {\mv H_1}^{(1)} &{\mv F_1}^{(1)}\\
{\mv B_1}^{(1)H}&{\mv F_1}^{(1)H}&c_1^{(1)}-w^{(1)}\end{bmatrix}\succeq\mv 0. \label{eq:immediate LMI of eqv tilde_h1 for iteration}
\end{align}}
Next, assume that the robust design for \eqref{eq:LMI of eqv obj for S-Procedure} has been considered against the preceding \(k-1\) uncertainties, i.e.,{\small
\begin{align}
&\begin{bmatrix} {\mv H_1}^{(k-1)}&{\mv F_1}^{(k-1)}&{\mv G_1}^{(k-1)}\\
{\mv F_1}^{(k-1)H}& c_1^{(k-1)} &{\mv B_1}^{(k-1)H}\\
{\mv G_1}^{(k-1)H}& {\mv B_1}^{(k-1)}&{\mv A_1}^{(k-1)}\end{bmatrix}
-w^{(k-1)}\begin{bmatrix} \mv 0&\mv 0&\mv 0\\
\mv 0 & 1 & \mv 0\\
\mv 0 & \mv 0 & \frac{-\mv I}{\epsilon_{k-1}^{\prime\prime}}\end{bmatrix}\nonumber\\
&\succeq\mv 0, \; k\ge2. \label{eq:LMI of eqv (k-1)s uncertainties for S-procedure}
\end{align}}
Applying a similar procedure to that used for \eqref{eq:LMI of eqv tilde h1 for S-Procedure}, \eqref{eq:LMI of eqv (k-1)s uncertainties for S-procedure} can be recast as {\small
\begin{equation}\label{eq:immediate LMI of eqv (k-1)s uncertainties for iteration}
\begin{bmatrix} \tfrac{w^{(k-1)}\mv I}{\epsilon_{k-1}^{\prime\prime}}+{\mv A_1}^{(k-1)}&{\mv G_1}^{(k-1)H}&{\mv B_1}^{(k-1)}\\
{\mv G_1}^{(k-1)}& {\mv H_1}^{(k-1)} &{\mv F_1}^{(k-1)}\\
{\mv B_1}^{(k-1)H}& {\mv F_1}^{(k-1)H}&c_1^{(k-1)}-w^{(k-1)}\end{bmatrix}\succeq\mv 0.
\end{equation}}
Then, given \(\tilde{\mv h}_i\), \(i=k+1,\ldots,K\), fixed, we accommodate the \(k\)th uncertainty, i.e., \(\tilde{\mv h}_k\in\tilde{\mathcal H}_k\), in \eqref{eq:immediate LMI of eqv (k-1)s uncertainties for iteration}. By applying Lemma \ref{lemma:LMI from robust block QMI} to the uncertainty of \(\tilde{\mv h}_k\), the implication \( \|\Delta\tilde{\mv h}_k\|_2^2\le\epsilon_k^{\prime\prime}\Rightarrow\eqref{eq:immediate LMI of eqv (k-1)s uncertainties for iteration}\) holds if and only if there exists \(w^{(k)}\ge 0\) such that {\small
\begin{equation}
\begin{bmatrix}
{\mv H_1}^{(k)}&{\mv F_1}^{(k)} &{\mv G_1}^{(k)} \\
{\mv F_1}^{(k)H} & c_1^{(k)} & {\mv B_1}^{(k)H} \\
{\mv G_1}^{(k)H}& {\mv B_1}^{(k)}& {\mv A_1}^{(k)}
\end{bmatrix}-w^{(k)}\begin{bmatrix} \mv 0 & \mv 0 &\mv 0\\\mv 0 &1 &\mv 0\\ \mv 0&\mv 0 &\frac{-\mv I}{\epsilon_{k}^{\prime\prime}} \end{bmatrix}\succeq {\mv 0},
\end{equation}}
where
{\small\begin{align}
&{\mv H_1}^{(k)}=\begin{bmatrix}{\mv A_1}^{(k-1)}+\frac{w^{(k-1)}\mv I}{\epsilon_{k-1}^{\prime\prime}}& {\mv G_1}^{(k-1)H}\\ {\mv G_1}^{(k-1)}& {\mv H_1}^{(k-1)} \end{bmatrix},\nonumber\\
&{\mv F_1}^{(k)}=\begin{bmatrix}{\mv B_1}^{(k-1)}\\ {\mv F_1}^{(k-1)} \end{bmatrix},\
{\mv G_1}^{(k)}=\mv 0,
\end{align}}
$c_1^{(k)}=\hat{\tilde{\mv h}}^T(P_s\mv X^{\prime\prime}-\delta\sigma_r^2\mv X^\prime)\hat{\tilde{\mv h}}^\dag-\delta\sum_{j=1}^k\hat{\tilde{\mv h}}_j^T\mv Q_j\hat{\tilde{\mv h}}_j^\dag-\delta\sum_{i=k+1}^K{\tilde{\mv h}}_i^T\mv Q_i{\tilde{\mv h}}_i^\dag-\delta\sigma_b^2-w^{(0)}N_t\epsilon_0^\prime-\sum_{l=1}^{k-1}w^{(l)}$, ${\mv B_1}^{(k)}=-\delta\mv Q_k\hat{\tilde{\mv h}}_k^\dag$ and ${\mv A_1}^{(k)}=-\delta\mv Q_k$, $k\ge 2$. Thus, using the method of mathematical induction, \eqref{eq:LMI of eqv obj for S-Procedure} holds for \(\tilde{\mv h}_k\in\tilde{\mathcal{H}}_k\), \(k=1,\ldots, K\), if and only if there exists \(\{w^{(k)}\ge 0\}\) such that \eqref{eq:LMI reformulation on delta} is satisfied, which completes the proof.
\subsection{Proof of Proposition \ref{prop:eqv LMI wrt g1 till gK}}\label{appendix:proof of prop:eqv LMI wrt g1 till gk}
Following a similar procedure to that used for \eqref{eq:semi-indefinite form of tilde_h for S-procedure}, the implication \(\|\Delta{\mv g}\|^2\le N_t\epsilon_0\Rightarrow\eqref{eq:linear epigraph reformulation on gamma_e}\) holds if and only if there exists \(v^{(0)}\ge 0\) such that the following LMI holds:{\small
\begin{align}
\begin{bmatrix}\mv H_2 & \mv F_2\\
\mv F_2^H & c_2 \end{bmatrix}\succeq\mv 0, \label{eq:LMI of eqv Eve's SINR for S-Procedure}
\end{align}}
where \(\mv H_2=-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime+v^{(0)}\mv I\), \(\mv F_2=(-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime)\hat{{\mv g}}^\dag\) and \(c_2=\hat{{\mv g}}^T(-P_s\mv X^{\prime\prime}+\bar\gamma_e\sigma_r^2\mv X^\prime)\hat{{\mv g}}^\dag+\bar\gamma_e\sum_{k=1}^K{{\mv g}}_k^T\mv Q_k{{\mv g}}_k^\dag+\bar\gamma_e\sigma_e^2-v^{(0)}N_t\epsilon_0\). Thus, \eqref{eq:epigraph reformulation on gamma_e} has been equivalently reformulated as \eqref{eq:LMI of eqv Eve's SINR for S-Procedure}. Then, given \({\mv g}_k\), \(k=2,\ldots,K\), fixed, applying a similar procedure to that in Appendix~\ref{appendix:proof of prop:eqv LMI wrt tilde_h1 till tilde_hk}, it follows that there exists \(v^{(1)}\ge 0\) such that the following LMI holds:{\small
\begin{equation}\label{eq:LMI of eqv g1 for S-Procedure}
\begin{bmatrix} {\mv H}_2^{(1)}&{\mv F_2}^{(1)}&{\mv G_2}^{(1)}\\
{\mv F_2}^{(1)H}& c_2^{(1)} &{\mv B_2}^{(1)H}\\
{\mv G_2}^{(1)H}& {\mv B_2}^{(1)}&{\mv A_2}^{(1)}\end{bmatrix}-v^{(1)}\begin{bmatrix} \mv 0&\mv 0&\mv 0\\
\mv 0 & { 1} & \mv 0\\
\mv 0 & \mv 0 & \frac{-\mv I}{\epsilon_1}\end{bmatrix}\succeq{\mv 0}.
\end{equation}}
Since \(\tfrac{v^{(1)}\mv I}{\epsilon_1}+{\mv A_2}^{(1)}\succ {\mv 0}\) always holds, \eqref{eq:LMI of eqv g1 for S-Procedure} is equivalent to the following LMI:
{\small
\begin{align}
\begin{bmatrix} {\mv A_2}^{(1)}+\tfrac{v^{(1)}\mv I}{\epsilon_1} &{\mv G_2}^{(1)H}&{\mv B_2}^{(1)}\\
{\mv G_2}^{(1)}& {\mv H_2}^{(1)} &{\mv F_2}^{(1)}\\
{\mv B_2}^{(1)H}&{\mv F_2}^{(1)H}&c_2^{(1)}-v^{(1)}\end{bmatrix}\succeq\mv 0. \label{eq:immediate LMI of eqv g1 for iteration}
\end{align}}
Next, applying the method of mathematical induction again as for \eqref{eq:immediate LMI of eqv tilde_h1 for iteration}, \eqref{eq:LMI of eqv Eve's SINR for S-Procedure} holds for \(\mv g_k\in\mathcal{G}_k\), \(\forall k\), if and only if there exists \(\{v^{(k)}\ge 0\}\) such that \eqref{eq:LMI reformulation on gamma_e} is satisfied, which completes the proof.
\subsection{Proof of Proposition \ref{prop:LMI wrt tilde_h1 till tilde_hK}}\label{appendix:proof of prop:LMI wrt tilde_h1 till tilde_hK}
We only sketch the proof herein since it is quite similar to that of Proposition~\ref{prop:eqv LMI wrt tilde_h1 till tilde_hK}. First, apply Lemma~\ref{lemma:LMI from robust block QMI} to \eqref{eq:LMI of y1 for S-Procedure} given \(\tilde{\mv h}_k\)'s, \(k=2,\ldots,K\), fixed and obtain an initial LMI. Next, manipulate the resulting LMI according to the property of Schur complements to facilitate using Lemma~\ref{lemma:LMI from robust block QMI}. Then, repeat this procedure until all the semi-infinite constraints w.r.t.~\(\tilde{\mv h}_k\)'s have been incorporated into an equivalent LMI.
\subsection{Proof of Proposition \ref{prop:structure of robust X and its rank-one reconstruction}}\label{appendix:proof of prop:structure of robust X and its rank-one reconstruction}
According to the KKT conditions of $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$, we have \(\bar{\mv A}^\ast\mv X^\ast={\bf 0}\), where \(\bar{\mv A}^\ast\) is given by \eqref{eq:overline A}. Define \(\bar{\mv C}^\ast=\bar{\mv A}^\ast-w_{2,2}^\ast P_s\hat{\mv F}_1\) with \({\rm rank}(\bar{\mv C}^\ast)\) denoted by \(\bar r_c\).
Then, following a similar procedure to {\bf Case I} and {\bf Case II} in Appendix \ref{appendix:proof of prop:structure of optimal X and its rank-one reconstruction}, it can be shown that \(\mv X^\ast=\sum_{n=1}^{N_t^2-\bar r_c}\bar a_n\bar{\mv \eta}_n\bar{\mv \eta}_n^H+\bar b\bar{\mv \xi}\bar{\mv \xi}^H\).
Next, we prove the second half of Proposition \ref{prop:structure of robust X and its rank-one reconstruction}. According to \eqref{eq:reconstructed structure of suboptimal hat X},
\begin{align}\label{eq:reconstructed robust constraint on obj}
\!\! P_s{\rm tr}(\hat{\mv F}_1\hat{\mv X}^\ast)
=P_s{\rm tr}(\hat{\mv F}_1\mv X^\ast)
\ge\!\!\min\limits_{\tilde{\boldsymbol{h}}_0\in\tilde{\mathcal H}_0}\!\!P_s{\rm tr}(\mv F_1\mv X)\ge\delta^\ast,\!\!
\end{align}
and thus \eqref{eq:robust constraint on delta} holds true, which implies that the same optimal value as $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$, i.e., \(\delta^\ast\), is achievable. However, since the constraint in \eqref{eq:robust constraint on SINR of Eve} is ignored, the global optimum \(\bar\gamma_e^\ast\) for $\rm (P2^\prime.2)$ obtained via solving $\rm (P2^\prime.1\text{-}RW\text{-}SDR)$ may be violated in $\rm (P2^\prime.1\text{-}RW\text{-}SDR\text{-}sub)$. For example, \(\tfrac{P_s{\rm tr}(\boldsymbol{F}_2\hat{\boldsymbol{X}}^\ast)}{\sigma^2_r{\rm tr}(\overline {\boldsymbol{Y}}_2\hat{\boldsymbol{X}}^\ast)+\sum_{k=1}^K\boldsymbol{g}_k^T\hat{\boldsymbol{Q}}_k^\ast\boldsymbol{g}_k^\dagger+\hat\tau^\ast\sigma_e^2}=\bar\gamma_e^{0}\ge\hat F(\bar\gamma_e^\ast)\), which results in the actual objective value for $\rm (P2^\prime.2)$, \(\tfrac{1+\hat H(\bar\gamma_e^\ast)}{1+\bar\gamma_e^{0}}\), being smaller than \(\tfrac{1+\hat H(\bar\gamma_e^\ast)}{1+\hat F(\bar\gamma_e^\ast)}\), and is thus suboptimal for $\rm (P2^\prime)$.
\end{appendix}
\bibliographystyle{IEEEtran}
\balance
|
1,108,101,563,135 | arxiv |
\section{Introduction}
There are some prediction tasks where the output labels are discrete and periodic. For example, consider the problem of pose estimation. Although pose can be a continuous variable, in practice it is often discretized, $e.g.$, into 5-degree intervals. Because of the periodic nature of pose, a 355-degree label is closer to the 0-degree label than a 10-degree label is. It is therefore important to account for the periodic and discrete nature of the pose classification problem.
In previous literature, pose estimation is often cast as a multi-class classification problem \cite{raza2018appearance}, a metric regression problem \cite{prokudin2018deep}, or a mixture of both \cite{mahendran2018mixed}.
In a multi-class classification formulation using the cross-entropy (CE) loss, the class labels are assumed to be independent of each other \cite{raza2018appearance,liu2019research,liu2019feature}. Therefore, the inter-class similarity is not properly exploited. For instance, in Fig. \ref{fig:1}, we would prefer the predicted probability distribution to be concentrated near the ground truth class, while the CE loss does not encourage that.
On the other hand, metric regression methods treat the pose as a continuous numerical value \cite{liu2017adaptive,liu2018adaptive,liu2019hard}, although the label of pose itself is discrete. As manifested in \cite{liu2018ordinal,liu2019unimodala,liu2019unimodalb}, learning a regression model using discrete labels will cause over-fitting and exhibit similar or inferior performance compared with classification.
Recent works either use a joint classification and regression loss \cite{mahendran2018mixed}, or divide a circle into several sectors with a coarse classification that ignores the periodicity and then apply regression networks to each sector independently as an ordinal regression problem \cite{hara2017designing}. Unfortunately, none of them fundamentally address the limitations of the CE or regression loss for angular data.
\begin{figure}[t]
\centering
\includegraphics[width=8.3cm]{fig//figx.pdf}\\
\caption{The limitation of CE loss for pose estimation. The ground truth direction of the car is $t_j^\ast$. Two possible softmax predictions (green bars) of the pose estimator have the same probability at the $t_j^\ast$ position. Therefore, both predicted distributions have the same CE loss. However, the left prediction is preferable to the right, since we desire the predicted probability distribution to be larger and closer to the ground truth class.}\label{fig:1}
\end{figure}
In this work, we employ the Wasserstein loss as an alternative for empirical risk minimization. The $1^{st}$ Wasserstein distance is defined as the cost of optimal transport for moving the mass in one distribution to match the target distribution \cite{rubner2000earth,ruschendorf1985wasserstein}. Specifically, we measure the Wasserstein distance between a softmax prediction and its target label, both of which are normalized as histograms. By defining the ground metric as class similarity, we can measure prediction performance in a way that is sensitive to correlations between the classes.
The ground metric can be predefined when the similarity structure is known a priori to incorporate the inter-class correlation, $e.g.,$ the arc length for the pose. We further extend the arc length to its increasing function from an optimization perspective. The exact Wasserstein distance in one-hot target label setting can be formulated as a soft-attention scheme of all prediction probabilities and be rapidly computed. We also propose to learn the optimal ground metric following alternative optimization.
Another challenge of pose estimation comes from low image quality ($e.g.,$ blurry, low resolution) and the consequent noisy labels. This requires 1) modeling the noise for robust training \cite{liu2019unimodala,liu2019unimodalb} and 2) quantifying the uncertainty of predictions in testing phase \cite{prokudin2018deep}.
Wrongly annotated targets may bias the training process \cite{szegedy2016rethinking,belagiannis2015robust}. We investigate two types of noise. The outlier noise corresponds to one training sample being very distant from the others by random error, and can be modeled by a uniform distribution \cite{szegedy2016rethinking}. We notice that pose data is more likely to have inlier noise, where the labels are wrongly annotated as nearby angles, and propose to model it using a unimodal distribution. Our solution is to construct a conservative target distribution by smoothing the one-hot label using a wrapped uniform-unimodal mixture model.
Unlike the one-hot setting, the conservative target distribution makes the computation of Wasserstein distance more involved because of the numerous possible transportation plans. The $\mathcal{O}(N^3)$ computational complexity for $N$ classes has long been a stumbling block in using Wasserstein distance for large-scale applications. Instead of only obtaining an approximate solution using an $\mathcal{O}(N^2)$ complexity algorithm \cite{cuturi2013sinkhorn},
we systematically analyze the fast closed-form computation of Wasserstein distance for our conservative label when our ground metric is a linear, convex, or concave increasing function $w.r.t.$ the arc length. The linear and convex cases can be solved with linear complexity of $\mathcal{O}(N)$. Our $exact$ solutions are significantly more efficient than the approximate baseline.
The main contributions of this paper are summarized as
$\bullet$ We cast the pose estimation as a Wasserstein training problem. The inter-class relationship of angular data is explicitly incorporated as prior information in our ground metric which can be either pre-defined (a function $w.r.t.$ arc length) or adaptively learned with alternative optimization.
$\bullet$ We model the inlier and outlier error of pose data using a wrapped discrete unimodal-uniform mixture distribution, and regularize the target confidence by transforming one-hot label to conservative target label.
$\bullet$ For either one-hot or conservative target label, we systematically conclude the possible fast closed-form solution when a non-negative linear, convex or concave increasing mapping function is applied in ground metric.
We empirically validate the effectiveness and generality of the proposed method on multiple challenging benchmarks and achieve the state-of-the-art performance.
\section{Related Works}
\noindent\textbf{Pose or viewpoint estimation} has a long history in computer vision \cite{murphy2009head}. It arises in different applications, such as head \cite{murphy2009head}, pedestrian body \cite{raza2018appearance}, vehicle \cite{yang2018hierarchical} and object class \cite{su2015render} orientation/pose estimation. Although these systems are mostly developed independently, they are essentially the same problem in our framework.
The current related literature using deep networks can be divided into two categories. Methods in the first group, such as \cite{rad2017bb8,grabner20183d,zhou2018starmap}, predict keypoints in images and then recover the pose using pre-defined 3D object models. The keypoints can be either semantic \cite{pavlakos20176,wu2016single,massa2016crafting} or the eight corners of a 3D bounding box encapsulating the object \cite{rad2017bb8,grabner20183d}.
The second category of methods, which are closer to our approach, estimate angular values directly from the image \cite{elhoseiny2016comparative,wang2016viewpoint}. Instead of the typical Euler angle representation for rotations \cite{elhoseiny2016comparative}, the biternion representation is chosen in \cite{beyer2015biternion,prokudin2018deep} and inherits the periodicity of its $sin$ and $cos$ operations. However, this setting is compatible only with regression. Several studies have evaluated the performance of classification and regression-based loss functions and conclude that classification methods usually outperform regression ones in pose estimation \cite{massa2016crafting,mahendran2018mixed}.
These limitations were also found in the recent approaches which combine classification with regression or even triplet loss \cite{mahendran2018mixed,yang2018hierarchical}.
\noindent\textbf{Wasserstein distance} is a measure defined between probability distributions on a given metric space \cite{kolouri2016sliced}. Recently, it has attracted much attention in generative models and other areas \cite{arjovsky2017wasserstein}. \cite{frogner2015learning} introduces it for the multi-class multi-label task with a linear model. Because of the significant amount of computation needed to solve the exact distance in general cases, these methods choose the approximate solution, whose complexity is still $\mathcal{O}(N^2)$ \cite{cuturi2013sinkhorn}. The fast computation of discrete Wasserstein distance is also closely related to the SIFT \cite{cha2002measuring} descriptor, hue in HSV or LCH space \cite{cha2002fast} and sequence data \cite{su2017order}. Inspired by the above works, we adapt this idea to pose estimation, and encode the geometry of the label space by means of the ground matrix. We show that fast algorithms exist for our pose label structure using the one-hot or conservative target label, and that the ground metric is not limited to the arc length.
\noindent\textbf{Robust training with noisy data} has long been studied for general classification problems \cite{huber2011robust}. Smoothing the one-hot label \cite{szegedy2016rethinking} with a uniform distribution or regularizing the entropy of the softmax output \cite{pereyra2017regularizing} are two popular solutions. Some works on regression-based localization model the uncertainty of a point position in a plane with a 2D Gaussian distribution \cite{szeto2017click}. \cite{zou2019confidence} proposes to regularize self-training with confidence. However, there are few studies for discrete periodic labels. Besides sampling on a Gaussian, the Poisson and the Binomial distribution are further discussed to form a unimodal-uniform distribution.
\noindent\textbf{Uncertainty quantification of pose estimation} aims to quantify the reliability of a result $e.g.,$ a confidence distribution of each class rather than a certain angle value for pose data \cite{prokudin2018deep}. A well-calibrated uncertainty is especially important for large systems to assess the consequence of a decision \cite{che2019deep,han2019unsupervised}. \cite{prokudin2018deep} proposes to output numerous sets of the mean and variation of Gaussian/Von-Mises distribution following \cite{beyer2015biternion}. It is unnecessarily complicated and is a somewhat ill-matched formulation as it assumes the pose label is continuous, while it is discrete. We argue that the $softmax$ is a natural function to capture discrete uncertainty, and is compatible with Wasserstein training.
\subsection{Wasserstein training with one-hot target}
The one-hot encoding is a typical setting for multi-class one-label datasets. The distribution of a target label probability is ${\rm\textbf{t}}=\delta_{j,j^*}$, where $j^*$ is the ground truth class and $\delta_{j,j^*}$ is a Dirac delta, which equals 1 for $j=j^*$\footnote{\noindent We use $i,j$ interlaced for ${\rm \textbf{s}}$ and ${\rm \textbf{t}}$, since they index the same group of positions in a circle.}, and $0$ otherwise.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[height=2.8cm]{fig//fig2.pdf}&\includegraphics[height=2.8cm]{fig//fig2_2.pdf}
\end{tabular}
\caption{Left: The only possible transport plan in one-hot target case. Right: the ground matrix using arc length as ground metric.}
\label{fig:2}
\end{figure}
\noindent\textbf{Theorem 1.} \textit{Assume that} $\sum_{j=0}^{N-1}t_j=\sum_{i=0}^{N-1}s_i$, \textit{and that} ${\rm{\textbf{t}}}$ \textit{is a one-hot distribution with} $t_{j^*}=1$ (\textit{or} $\sum_{i=0}^{N-1}s_i$)\footnote{We note that softmax cannot strictly guarantee the sum of its outputs to be 1 considering the rounding operation. However, the difference between setting $t_{j^*}$ to $1$ or to $\sum_{i=0}^{N-1}s_i$ is not significant in our experiments using the typical format of softmax output, which is accurate to 8 decimal places.}. \textit{Then there is only one feasible optimal transport plan.}
According to the criteria of ${\rm\textbf{W}}$, all masses have to be transferred to the cluster of the ground truth label $j^*$, as illustrated in Fig. \ref{fig:2}. Then, the Wasserstein distance between softmax prediction {\rm{\textbf{s}}} and one-hot target {\rm{\textbf{t}}} degenerates to\begin{equation}
\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{f}}({\rm{\textbf{s},\textbf{t}}})=\sum_{i=0}^{N-1} s_i f(d_{i,j^*}) \label{con:df}
\end{equation} where ${\rm\textbf{D}}_{i,j}^f=f(d_{i,j})$. Practically, $f$ can be any proper increasing function, $e.g.,$ the $p^{th}$ power of $d_{i,j}$ or the Huber function. The exact solution of Eq. \eqref{con:df} can be computed with a complexity of $\mathcal{O}(N)$. The ground metric term $f(d_{i,j^*})$ acts as a weight on $s_i$, which takes all classes into account following a soft attention scheme \cite{liu2018dependency,liu2019dependency,liu2019permutation}. It explicitly encourages the probability mass to distribute on the neighboring classes of $j^*$. Since each $s_i$ is a function of the network parameters, differentiating $\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{f}}$ $w.r.t.$ the network parameters yields $\sum_{i=0}^{N-1}s_i'f(d_{i,j^*})$.
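For illustration, a minimal NumPy sketch of Eq. \eqref{con:df} is given below (our own code, not part of the original implementation; $f(d)=d^2$ is used as an example ground metric):
\begin{lstlisting}
# Sketch of the one-hot Wasserstein loss in Eq. (con:df).
# s: softmax prediction over N classes; j_star: ground truth class.
import numpy as np

def wasserstein_onehot(s, j_star, f=lambda d: d ** 2):
    N = len(s)
    i = np.arange(N)
    d = np.minimum(np.abs(i - j_star), N - np.abs(i - j_star))
    return float(np.sum(s * f(d)))  # soft attention over all classes
\end{lstlisting}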
In contrast, the cross-entropy loss in the one-hot setting can be formulated as $-1{\rm log}s_{j^*}$, which, like a hard attention scheme \cite{liu2018dependency,liu2019dependency,liu2019permutation}, considers only a single class prediction and usually loses too much information. Similarly, the regression loss using the softmax prediction could be $f(d_{i^*,j^*})$, where $i^*$ is the class with maximum prediction probability.
In addition to the predefined ground metric, we also propose to learn ${\rm\textbf{D}}$ adaptively along with our training following an alternative optimization scheme \cite{liu2018joint}.
\noindent\textbf{Step 1:} Fixing ground matrix ${\rm\textbf{D}}$ to compute $\mathcal{L}_{{\rm\textbf{D}}_{i,j}}({\rm{\textbf{s},\textbf{t}}})$ and updating the network parameters.
\noindent\textbf{Step 2:} Fixing network parameters and postprocessing ${\rm\textbf{D}}$ using the feature-level $\ell_1$ distances between different poses.
We use the normalized second-to-last layer neural response in this round as the feature vector, since there are no subsequent non-linearities. Therefore, it is meaningful to average the feature vectors in each pose class to compute their centroids and reconstruct ${{\rm\textbf{D}}_{i,j}}$ using the $\ell_1$ distances between these centroids, $\overline{d}_{i,j}$. To avoid model collapse, we construct ${{\rm\textbf{D}}_{i,j}=\frac{1}{1+\alpha}\left\{f(\overline{d}_{i,j})+\alpha f(d_{i,j})\right\}}$ in each round, and decrease $\alpha$ from 10 to 0 gradually during training.
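A sketch of Step 2 under the description above is given below; the centroid computation and the blending of $f(\overline{d}_{i,j})$ with $f(d_{i,j})$ follow our reading, and all names are illustrative:
\begin{lstlisting}
# feats: second-to-last layer features (n x dim); labels: pose class
# of each sample; D_arc: matrix of arc lengths d_{i,j}; f vectorized.
import numpy as np

def update_ground_matrix(feats, labels, D_arc, f, alpha):
    N = D_arc.shape[0]
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    cents = np.stack([feats[labels == c].mean(0) for c in range(N)])
    d_bar = np.abs(cents[:, None] - cents[None, :]).sum(-1)  # l1
    return (f(d_bar) + alpha * f(D_arc)) / (1.0 + alpha)
\end{lstlisting}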
\subsection{Wrapped unimodal-uniform smoothing}
The outlier noise exists in most data-driven tasks and can be modeled by a uniform distribution \cite{szegedy2016rethinking}. However, pose labels are more likely to be mislabeled as a class close to the true class. It is more reasonable to construct a unimodal distribution to depict the inlier noise in pose estimation, which has a peak at class $j^*$ while decreasing its value for farther classes. We can sample from a continuous unimodal distribution ($e.g.,$ a Gaussian distribution) and then normalize, or choose a discrete unimodal distribution ($e.g.,$ a Poisson/Binomial distribution).
\noindent\textbf{Gaussian/Von-Mises Distribution} has the probability density function (PDF) $f(x)=\frac{{\rm{exp}}\left\{-(x-\mu)^2/2\sigma^2\right\}}{\sqrt{2\pi\sigma^2}}$ for $x\in [0,\small K]$, where $\mu=K/2$ is the mean, and $\sigma^{2}$ is the variance. Similarly, the Von-Mises distribution is a close approximation to the circular analogue of the normal distribution ($i.e., \small K=N-1$). We note that the geometric loss \cite{su2015render} is a special case, when we set $\xi=1,\eta=0$, $\small{K=N-1}$, remove the normalization and adopt CE loss. Since we are interested in modeling a discrete distribution for target labels, we simply apply a softmax operation over their PDF. Note that the output values are mapped to be defined on the circle.
\noindent\textbf{Poisson Distribution} is used to model the probability of the number of events, $k$ occurring in a particular interval of time. Its probability mass function (PMF) is:\vspace{-5pt}\begin{equation}
p_k=\frac{\lambda^k{\rm{exp}}(-\lambda)}{k!},~~~k= 0, 1, 2, ...,
\end{equation}\vspace{-2pt}where $\lambda\in \mathbb{R}^+$ is the average frequency of these events. We can sample $K+1$ probabilities ($i.e., 0\leq k\leq K$) on this PMF followed by normalization to obtain a discrete unimodal probability distribution. Since its mean and variance are the same ($i.e., \lambda$), it may be inflexible to adjust its shape.
\noindent\textbf{Binomial Distribution} is commonly adopted to model the probability of a given number of successes out of a given number of trails $k$ and the success probability $p$.\vspace{-5pt}\begin{equation}
p_k={n\choose k}p^k(1-p)^{n-k},~~~n\in\mathbb{N},~~k=0,1,2, ...,n
\end{equation}\vspace{-2pt}We set $n=K$ to construct a distribution with $K+1$ bins without softmax normalization. Its wrapping with $K=20$ is illustrated in Fig. \ref{fig:3}.
\begin{figure}[t]
\centering
\includegraphics[width=8.2cm]{fig//fig3.pdf}\\
\caption{Left: the wrapping operation with a Binomial distribution ($\small K+1$ is the number of involved classes of unimodal distribution). Right: the distribution of conservative target label.}\label{fig:3}
\end{figure}
The conservative target distribution ${\rm{{\overline{\textbf{t}}}}}$ is constructed by replacing $t_{j}$ in ${\rm\textbf{t}}$ with $(1-\xi-\eta)t_{j}+\xi p_j +\eta \frac{1}{N}$, which can be regarded as the weighted sum of the original label distribution ${\rm\textbf{t}}$ and a unimodal-uniform mixture distribution. When we only consider the uniform distribution and utilize the CE loss, it is equivalent to label smoothing \cite{szegedy2016rethinking}, a typical mechanism for outlier noisy label training, which encourages the model to accommodate less-confident labels.
By enforcing {\rm\textbf{s}} to form a unimodal-uniform mixture distribution, we also implicitly encourage the probabilities to distribute on the neighbor classes of $j^*$.
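For concreteness, one way to build the conservative target with a wrapped Binomial-uniform mixture is sketched below. This is a reading of the construction above, not the original implementation; SciPy is assumed only for the Binomial PMF:
\begin{lstlisting}
import numpy as np
from scipy.stats import binom

def conservative_target(j_star, N, xi, eta, K, p=0.5):
    pm = binom.pmf(np.arange(K + 1), K, p)  # unimodal component
    t_bar = np.full(N, eta / N)             # uniform (outlier) part
    for k in range(K + 1):                  # wrap peak onto j_star
        t_bar[(j_star + k - K // 2) % N] += xi * pm[k]
    t_bar[j_star] += 1.0 - xi - eta         # remaining one-hot mass
    return t_bar                            # sums to 1
\end{lstlisting}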
\subsection{Wasserstein training with conservative target}
With the conservative target label, the fast computation of Wasserstein distance in Eq. \eqref{con:df} does not apply. A straightforward solution is to regard it as a general case and solve for its closed-form result with a complexity higher than $\mathcal{O}(N^3)$, or obtain an approximate result with a complexity of $\mathcal{O}(N^2)$. The main results of this section are a series of analytic formulations, with reasonable complexity, for the case where the ground metric is a nonnegative increasing linear/convex/concave function $w.r.t.$ arc length.\\
\noindent\textbf{3.3.1 Arc length $d_{i,j}$ as the ground metric.}
When we use $d_{i,j}$ as ground metric directly, the Wasserstein loss $\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},\overline{\textbf{t}}})$ can be written as \begin{equation}
\mathcal{L}_{d_{i,j}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}=\inf_{\alpha\in\mathbb{R}}\sum_{j=0}^{N-1}\Big|{\sum_{i=0}^{j}(s_i-\overline{{t}}_i)}-\alpha\Big|\label{con:medi}
\end{equation}
To the best of our knowledge, Eq. \eqref{con:medi} was first developed in \cite{werman1986bipartite}, in which it is proved for sets of points with unitary masses on the circle. A similar conclusion for the Kantorovich-Rubinstein problem was derived in \cite{cabrelli1995kantorovich,cabrelli1998linear}, which is known to be identical to the Wasserstein distance problem when ${{\rm\textbf{D}}_{i,j}}$ is a distance. We note that this is true for $\mathcal{L}_{d_{i,j}}$ (but false for $\mathcal{L}_{{\rm\textbf{D}}^{\rho}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$ with $\rho>1$). The optimal $\alpha$ is the median of the set of values $\left\{\sum_{i=0}^{j}(s_i-\overline{{t}}_i), 0\leq j\leq {\scriptsize{N}}-1\right\}$ \cite{pele2008linear}. An equivalent distance is proposed from the circular cumulative distribution perspective in \cite{rabin2009statistical}. All of these papers note that Eq. \eqref{con:medi} can be computed in linear time ($i.e.,$ $\mathcal{O}(N)$) with a weighted median algorithm (see \cite{villani2003topics} for a review).
We note that the partial derivative of Eq. \eqref{con:medi} $w.r.t.$ $s_n$ is $\sum_{j=0}^{N-1}{\rm{sgn}}(\varphi_j)\sum_{i=0}^{j}(\delta_{i,n}-s_i),$ where $\varphi_j=\sum_{i=0}^{j}(s_i-{\overline{t}_i}),$ and $\delta_{i,n}=1$ when $i=n$. Additional details are given in Appendix B.
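A linear-time sketch of Eq. \eqref{con:medi} is given below (our own code; the median of the cumulative differences realizes the optimal $\alpha$):
\begin{lstlisting}
import numpy as np

def circular_w1(s, t_bar):
    phi = np.cumsum(s - t_bar)   # phi_j = sum_{i<=j}(s_i - t_bar_i)
    alpha = np.median(phi)       # optimal transportation constant
    return float(np.sum(np.abs(phi - alpha)))
\end{lstlisting}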
~\\
\noindent\textbf{3.3.2 Convex function $ w.r.t.$ $d_{i,j}$ as the ground metric}
Next, we extend the ground metric to a nonnegative increasing and convex function of $d_{i,j}$, and show its analytic formulation. If we compute the probability with a precision $\epsilon$, we will have $M=1/\epsilon$ unitary masses in each distribution. We define the cumulative distribution functions of ${\rm{\textbf{s}}}$ and ${\overline{\rm\textbf{t}}}$ and their pseudo-inverses as follows \begin{equation}
\begin{array}{ll}
{\rm{\textbf{S}}}(i)=\sum_{i'=0}^{i}s_{i'}; ~{\rm{\textbf{S}}}^{-1}(m)={\rm{inf}}\left\{i; {\rm{\textbf{S}}}(i)\geq m\right\}\\
{\rm\overline{\textbf{T}}}(i)=\sum_{i'=0}^{i}\overline{t}_{i'}; ~{\rm\overline{\textbf{T}}}^{-1}(m)={\rm{inf}}\left\{i; {\rm\overline{\textbf{T}}}(i)\geq m\right\}\\
\end{array}
\end{equation} where $m\in \left\{\frac{1}{M},\frac{2}{M},\cdots,1\right\}$. Following the convention ${\rm{\textbf{S}}}(i+N)={\rm{\textbf{S}}}(i)+1$, ${\rm{\textbf{S}}}$ can be extended to the whole real line, which considers ${\rm{\textbf{S}}}$ as a periodic (or modulo \cite{cha2002measuring}) distribution on $\mathbb{R}$.
\textbf{Theorem 2.} \textit{Assuming the arc length distance $d_{i,j}$ is given by} Eq. \eqref{con:d} \textit{and the ground metric} ${{\rm\textbf{D}}_{i,j}}=f(d_{i,j})$, \textit{with f a nonnegative, increasing and convex function. Then} \begin{equation}
\mathcal{L}_{{\rm\textbf{D}}^{conv}_{i,j}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}=\inf_{\alpha\in\mathbb{R}}\sum_{m=\frac{1}{M}}^{1} f\left(|{\rm\small{\textbf{S}}}^{-1}(m)-({\rm\small\overline{\textbf{T}}}-\alpha)^{-1}(m)|\right)\label{con:conv}
\end{equation} where $\alpha$ is a to-be-searched transportation constant. A proof of Eq. \eqref{con:conv} $w.r.t.$ continuous distributions was given in \cite{delon2010fast}, which shows it holds for any couple of discrete probability distributions. Although that proof involves some complex notions of measure theory, these are not needed in the discrete setting. The proof is based on the idea that the circle can always be ``cut'' somewhere by searching for an $m$, allowing us to reduce the modulo problem \cite{cha2002measuring} to the ordinal case. Therefore, Eq. \eqref{con:conv} is a generalization of the ordinal case. Actually, we can also extend the Wasserstein distance for discrete distributions on a line \cite{villani2003topics} as \begin{equation}\sum_{m=\frac{1}{M}}^{1} f(|{\rm{\textbf{S}}}^{-1}(m)-{\rm\overline{\textbf{T}}}^{-1}(m)|)\label{con:ordinal} \end{equation} where $f$ can be a nonnegative linear/convex/concave increasing function $w.r.t.$ the distance on a line. Eq. \eqref{con:ordinal} can be computed with a complexity of $\mathcal{O}(N)$ for two discrete distributions. When $f$ is a convex function, the optimal $\alpha$ can be found with a complexity of $\mathcal{O}({\rm log}M)$ using the Monge condition\footnote{${\rm\textbf{D}}_{i,j}$+${\rm\textbf{D}}_{i',j'}<{\rm\textbf{D}}_{i,j'}$+${\rm\textbf{D}}_{i',j}$ whenever $i<i'$ and $j<j'$.} (similar to binary search). Therefore, the exact solution of Eq. \eqref{con:conv} can be obtained with $\mathcal{O}(N{\rm log}M)$ complexity. In practice, ${\rm log}M$ is a constant (${\rm log}\,10^8$) according to the precision of softmax predictions, which is much smaller than $N$ (usually $N=360$ for pose data).
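As a sketch of the ordinal case in Eq. \eqref{con:ordinal}, the sum can be evaluated by sweeping the merged breakpoints of the two cumulative distributions (our own code, returning the distance with masses normalized to total 1, i.e., Eq. \eqref{con:ordinal} divided by $M$; for the circular case of Eq. \eqref{con:conv}, the same routine would be applied to the CDF shifted by the searched $\alpha$):
\begin{lstlisting}
import numpy as np

def line_wasserstein(s, t_bar, f):  # f must be vectorized
    S, T = np.cumsum(s), np.cumsum(t_bar)
    m = np.union1d(S, T)               # merged CDF breakpoints
    dm = np.diff(np.concatenate(([0.0], m)))
    iS = np.searchsorted(S, m)         # pseudo-inverse S^{-1}(m)
    iT = np.searchsorted(T, m)         # pseudo-inverse T^{-1}(m)
    return float(np.sum(dm * f(np.abs(iS - iT))))
\end{lstlisting}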
Here, we give some measures\footnote{We refer to ``measure'', since a $\rho^{th}$-root normalization is required to get a distance \cite{villani2003topics}, which satisfies three properties: positive definiteness, symmetry and triangle inequality.} using the typical convex ground metric function.
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^\rho}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$, the Wasserstein measure using $d^\rho$ as ground metric with $\rho=2,3,\cdots$. The case $\rho=2$ is equivalent to the Cram\'{e}r distance \cite{rizzo2016energy}. Note that the Cram\'{e}r distance is not a distance metric proper. However, its square root is.\begin{equation}
{\rm\textbf{D}}_{i,j}^\rho= d_{i,j}^\rho
\end{equation}
\vspace{-3pt}
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{H\tau}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$, the Wasserstein measure using a Huber cost function with a parameter $\tau$.\begin{equation}
{\rm\textbf{D}}_{i,j}^{H\tau}=\left\{
\begin{array}{ll}
d_{i,j}^2&{\rm{if}}~d_{i,j}\leq\tau\\
\tau(2d_{i,j}-\tau)&{\rm{otherwise}}.\\
\end{array}
\right.
\end{equation}
~\\
\noindent\textbf{3.3.3 Concave function $w.r.t.$ $d_{i,j}$ as the ground metric}
In practice, it may be useful to choose the ground metric as a nonnegative, concave and increasing function $w.r.t.$ $d_{i,j}$. For instance, we can use the chord length. \begin{equation}
{\rm\textbf{D}}_{i,j}^{chord}=2r~{\rm sin}(d_{i,j}/2r)
\end{equation}where $r=N/(2\pi)$ is the radius. Therefore, $f(\cdot)$ can be regarded as a concave and increasing function on the interval $[0,N/2]$ $w.r.t.$ $d_{i,j}$.
It is easy to show that ${\rm\textbf{D}}_{i,j}^{chord}$ is a distance, and thus $\mathcal{L}_{{\rm\textbf{D}}^{chord}}(\rm{\textbf{s},\overline{\textbf{t}}})$ is also a distance between two probability distributions \cite{villani2003topics}. Notice that a property of concave distances is that they do not move the mass shared by $\rm{\textbf{s}}$ and ${\rm\overline{\textbf{t}}}$ \cite{villani2003topics}. Considering that the Monge condition does not apply for concave functions, there is no corresponding fast algorithm to compute the closed-form solution. In most cases, we settle for linear programming. However, the simplex and interior-point algorithms are known to have at best $\mathcal{O}(N^{2.5}{\rm{log}}(ND_{max}))$ complexity to compare two histograms on $N$ bins \cite{orlin1993faster,burkard2009society}, where $D_{max}=f(\frac{N}{2})$ is the maximal distance between two bins.
Although the general computation speed for concave functions is not satisfactory, the step function $f(t)=\mathbbm{1}_{t\neq 0}$ (one everywhere except at 0) is a special case with significantly lower complexity \cite{villani2003topics}. Assuming that $f(t)=\mathbbm{1}_{t\neq 0}$, the Wasserstein metric between two normalized discrete histograms on $N$ bins simplifies to the $\ell_1$ distance. \begin{equation}
\mathcal{L}_{\mathbbm{1}_{d_{i,j}\neq 0}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}=\frac{1}{2}\sum_{i=0}^{N-1}{|{\rm{s}}_i-{\rm{\overline{t}}}_i|}=\frac{1}{2}||{\rm{\textbf{s}}}-{\rm{\overline{\textbf{t}}}}||_1
\end{equation}where $||\cdot||_1$ is the discrete $\ell_1$ norm.
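In this special case, the loss reduces to a one-liner (our own sketch):
\begin{lstlisting}
import numpy as np

def step_wasserstein(s, t_bar):
    # Step-function ground metric: half the l1 distance.
    return 0.5 * float(np.sum(np.abs(s - t_bar)))
\end{lstlisting}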
Unfortunately, its fast computation is at the cost of losing the ability to discriminate the difference of probability in a different position of bins.
\section{Methodology}
We consider learning a pose estimator ${h}_\theta$, parameterized by $\theta$, with an $N$-dimensional softmax output unit. It maps an image {\rm\textbf{x}} to a vector ${\rm\textbf{s}}\in\mathbb{R}^N$. We perform learning over a hypothesis space $\mathcal{H}$ of ${h}_\theta$. Given input {\rm\textbf{x}} and its target ground truth one-hot label ${\rm\textbf{t}}$, learning is typically performed via empirical risk minimization to solve $\min_{{h}_\theta\in\mathcal{H}}\mathcal{L}({h}_\theta({\rm\textbf{x}}),{\rm\textbf{t}})$, with a loss $\mathcal{L}(\cdot,\cdot)$ acting as a surrogate of the performance measure.
Unfortunately, cross-entropy, information divergence, Hellinger distance and $\mathcal{X}^2$ distance-based loss treat the output dimensions independently \cite{frogner2015learning}, ignoring the similarity structure on pose label space.
Let ${\rm\textbf{s}}=\left\{s_i\right\}_{i=0}^{N-1}$ be the output of ${h}_\theta({\rm\textbf{x}})$, $i.e.,$ the softmax prediction with $N$ classes (angles), and define ${\rm\textbf{t}}=\left\{t_j\right\}_{j=0}^{N-1}$ as the target label distribution, where $i,j\in\left\{0,\cdots,{\small N-1}\right\}$ index the dimensions (classes). Assume the label space possesses a ground metric ${\rm\textbf{D}}_{i,j}$, which measures the semantic similarity between the $i$-th and $j$-th dimensions of the output. There are $N^2$ possible ${\rm\textbf{D}}_{i,j}$ in an $N$-class dataset, which form a ground distance matrix $\textbf{D}\in\mathbb{R}^{N\times N}$. When ${\rm\textbf{s}}$ and ${\rm\textbf{t}}$ are both histograms, the discrete measure of the exact Wasserstein loss is defined as \begin{equation}
\mathcal{L}_{\textbf{D}_{i,j}}({\rm{\textbf{s},\textbf{t}}})=\inf_{\textbf{W}}\sum_{j=0}^{N-1}\sum_{i=0}^{N-1}\textbf{D}_{i,j}\textbf{W}_{i,j} \label{con:w}
\end{equation} where \textbf{W} is the transportation matrix, with \textbf{W}$_{i,j}$ indicating the mass moved from the $i^{th}$ point in the source distribution to the $j^{th}$ target position. A valid transportation matrix \textbf{W} satisfies: $\textbf{W}_{i,j}\geq 0$; $\sum_{j=0}^{N-1}\textbf{W}_{i,j}\leq s_i$; $\sum_{i=0}^{N-1}\textbf{W}_{i,j}\leq t_j$; $\sum_{j=0}^{N-1}\sum_{i=0}^{N-1}\textbf{W}_{i,j}={\rm min}(\sum_{i=0}^{N-1}s_i,\sum_{j=0}^{N-1}t_j)$.
The ground distance matrix ${\rm\textbf{D}}$ in Wasserstein distance is usually unknown, but it has clear meanings in our application. Its $i,j$-th entry ${\rm\textbf{D}}_{i,j}$ could be the geometrical distance between the $i$-th and $j$-th points in a circle. A possible choice is using the arc length ${d_{i,j}}$ of a circle ($i.e., \ell_1$ distance between the $i$-th and $j$-th points in a circle) as the ground metric $\textbf{D}_{i,j}={d_{i,j}}$.
\begin{equation}
d_{i,j}={\rm min}\left\{|i-j|,N-|i-j|\right\} \label{con:d}
\end{equation}
The Wasserstein distance is identical to the Earth mover's distance when the two distributions have the same total masses ($i.e., \sum_{i=0}^{N-1}s_i=\sum_{j=0}^{N-1}t_j$) and using the symmetric distance $d_{i,j}$ as ${\rm\textbf{D}}_{i,j}$.
This setting is satisfactory for comparing the similarity of SIFT or hue \cite{rubner2000earth}, which does not involve neural network optimization. Previous efficient algorithms usually hold only for $\textbf{D}_{i,j}={d_{i,j}}$. We propose to extend the ground metric in ${\rm\textbf{D}}_{i,j}$ to $f(d_{i,j})$, where $f$ is a positive increasing function $w.r.t.$ $d_{i,j}$.
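The full ground matrix is easy to tabulate; a small sketch (our own code) using the arc length of Eq. \eqref{con:d}:
\begin{lstlisting}
import numpy as np

def ground_matrix(N, f=lambda d: d):
    i = np.arange(N)
    d = np.abs(i[:, None] - i[None, :])
    d = np.minimum(d, N - d)    # arc length, Eq. (con:d)
    return f(d)                 # e.g., f = lambda d: d ** 2
\end{lstlisting}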
\subsection{Head pose}
Following \cite{beyer2015biternion,prokudin2018deep}, we choose the occluded version of the CAVIAR dataset \cite{fisher2005caviar} and construct the training, validation and testing sets using 10802, 5444 and 5445 identity-independent images, respectively. Since the orientation of gaze is coarsely labeled, and almost 40\% of the training samples lie within ${\pm} 4^{\circ}$ of the four canonical orientations, regression-based methods \cite{beyer2015biternion,prokudin2018deep} are inefficient.
For fair comparison, we use the same deep batch normalized VGG-style \cite{simonyan2014very} backbone as in \cite{beyer2015biternion,prokudin2018deep}. Instead of a sigmoid unit in their regression model, the last layer is set to a softmax layer with 8 ways for Right, Right-Back, Back, Left-Back, Left, Left-Front, Front and Right-Front poses.
The metric used here is the mean absolute angular deviation (MAAD), which is widely adopted for angular regression tasks. The results are summarized in Table \textcolor{red}{1}. The Wasserstein training boosts the performance significantly. Using a convex $f$ can further improve the result, while the losses with a concave $f$ are usually inferior to the vanilla Wasserstein loss with arc length as the ground metric. The adaptive ground metric learning is helpful for $\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$, but not necessary when we extend the ground metric to the square of $d_{i,j}$.
We also note that the exact Wasserstein distances are consistently better than their approximate counterparts \cite{cuturi2013sinkhorn}. More appealingly, in the training stage, $\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$ is 5$\times$ faster than $\approx\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$ and 3$\times$ faster than the conventional regression-based method \cite{prokudin2018deep} in reaching convergence.
\subsection{Pedestrians orientation}
\begin{figure}[t]
\centering
\includegraphics[width=8.4cm]{fig//fig4.pdf}\\
\caption{Normalized adaptively learned ground matrix and polar histogram $w.r.t.$ the number of training samples in TUD dataset.}\label{fig:4}
\end{figure}
The TUD multi-view pedestrians dataset \cite{andriluka2010monocular} consists of 5,228 images along with bounding boxes. Its original annotations are relatively coarse, with only eight classes. We adopt the network in \cite{raza2018appearance} and show the results in Table \textcolor{red}{2}. Our methods, especially $\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$, outperform the cross-entropy loss-based approach in all eight classes by a large margin. The improvement in the case of binomial-uniform regularization ($\xi=0.1,\eta=0.05,K=4,p=0.5$) seems limited in the 8-class setting, because each pose label covers 45$^\circ$, resulting in a relatively low noise level.
\begin{table}[t]
\renewcommand\arraystretch{1.2}
\scriptsize
\label{tab:different_nets}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Method&0$^{\circ}$&$45^{\circ}$&$90^{\circ}$&135$^{\circ}$&180$^{\circ}$&225$^{\circ}$&270$^{\circ}$&315$^{\circ}$\\\hline\hline
CE loss\cite{raza2018appearance}&0.90&0.96&0.92&\textbf{1.00}&0.92&0.88&0.89&0.95\\\hline\hline
$\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$&0.93&0.97&0.95&\textbf{1.00}&\textbf{0.96}&0.91&0.91&0.95\\\hline
A-${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&0.94&0.97&\textbf{0.96}&\textbf{1.00}&0.95&0.92&0.91&\textbf{0.96}\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&\textbf{0.95}&0.97&\textbf{0.96}&\textbf{1.00}&\textbf{0.96}&0.92&0.91&\textbf{0.96}\\\hline
CE loss${(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&{0.90}&{0.96}&{0.94}&\textbf{1.00}&{0.92}&{0.90}&{0.90}&{0.95}\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&\textbf{0.95}&\textbf{0.98}&\textbf{0.96}&\textbf{1.00}&\textbf{0.96}&\textbf{0.93}&\textbf{0.92}&\textbf{0.96}\\\hline
\end{tabular}\label{tab:2}
\end{center}
\caption{Class-wise accuracy for TUD pedestrian orientation estimation with 8 pose setting (the higher the better).}
\end{table}
\begin{table}[t]
\scriptsize
\renewcommand\arraystretch{1.2}
\label{tab:different_nets}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Method&Mean AE&{$Acc_{\frac{\pi}{8}}$}&{$Acc_{\frac{\pi}{4}}$}\\\hline\hline
RTF\cite{hara2017growing}&34.7&0.686&0.780\\\hline
SHIFT\cite{hara2017designing}&22.6&0.706&0.861\\\hline\hline
${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&19.1&0.748&0.900\\\hline
A-${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&20.5&0.723&0.874\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&18.5&0.756&0.905\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}~|$ SHIFT ${(\rm{{\textbf{s},\overline{\textbf{t}}}})}_G$&\underline{16.4} $|$ 20.1&\underline{0.764} $|$ 0.724&\underline{0.909} $|$ 0.874\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}~|$ SHIFT ${(\rm{{\textbf{s},\overline{\textbf{t}}}})}_P$&17.7 $|$ 20.8&0.760 $|$ 0.720&0.907 $|$ 0.871\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}~|$ SHIFT ${(\rm{{\textbf{s},\overline{\textbf{t}}}})}_B$&\textbf{16.3} $|$ 20.1&\textbf{0.766} $|$ 0.723&\textbf{0.910} $|$ 0.875\\\hline\hline
Human \cite{hara2017designing}&9.1&0.907&0.993\\\hline
\end{tabular}\label{tab:3}
\end{center}
\caption{Results on TUD pedestrian orientation estimation $w.r.t.$ Mean Absolute Error in degree (the lower the better) and {$Acc_{\frac{\pi}{8}}$},{$Acc_{\frac{\pi}{4}}$} (the higher the better). $|$ means ``or''. The suffix $G,P,B$ refer to Gaussian, Poison and Binomial-uniform mixture conservative target label, respectively.}
\end{table}
The adaptive ground metric learning can contribute to higher accuracy than the plain $\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$. Fig. \ref{fig:4} provides a visualization of the adaptively learned ground matrix. The learned $\overline{d}_{i,j}$ is slightly larger than ${d}_{i,j}$ when limited training samples are available in the related classes, $e.g., {d}_{225^{\circ},180^{\circ}}<{d}_{225^{\circ},270^{\circ}}$. A larger ground metric value may emphasize classes with fewer samples during training.
We also utilize the 36-pose labels provided in \cite{hara2017growing,hara2017designing}, and adapt the backbone from \cite{hara2017designing}. We report the results $w.r.t.$ mean absolute error and accuracy at $\frac{\pi}{8}$ and $\frac{\pi}{4}$ in Table \textcolor{red}{3}, which are the percentages of images whose pose error is less than $\frac{\pi}{8}$ and $\frac{\pi}{4}$, respectively. Even the plain $\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$ outperforms SHIFT \cite{hara2017designing} by 4.4\% and 3.9\% $w.r.t.$ $Acc_{\frac{\pi}{8}}$ and $Acc_{\frac{\pi}{4}}$. Unfortunately, the adaptive ground metric learning is not stable when we scale the number of classes to 36.
The disagreement of human labeling is significant in the 36-class setting. In such a case, our conservative target label is potentially helpful. The discretized Gaussian distribution ($\xi=0.1,\eta=0.05,\mu=5,\sigma^2=2.5$) and Binomial distribution ($\xi=0.1,\eta=0.05,K=10,p=0.5$) show similar performance, while the Poisson distribution ($\xi=0.1,\eta=0.05,K=10,\lambda=5$) appears less competitive. Note that the variance of the Poisson distribution is equal to its mean $\lambda$, and it approximates a symmetric distribution for large $\lambda$. Therefore, it is not easy to control the shape of the target distribution. Our $\small \mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}_B$ outperforms \cite{hara2017designing} by 6.3$^\circ$, 6\% and 4.9\% in terms of Mean AE, $Acc_{\frac{\pi}{8}}$ and $Acc_{\frac{\pi}{4}}$.
\subsection{Vehicle orientation}
The EPFL dataset \cite{ozuysal2009pose} contains 20 image sequences of 20 car types at a show. We follow \cite{hara2017designing} in choosing ResNet-101 \cite{he2016deep} as the backbone and use 10 sequences for training and the other 10 sequences for testing. As shown in Table \textcolor{red}{4}, the Huber function ($\tau=10$) can be beneficial for learning from noisy data, but the improvement becomes insignificant once the noise has been modeled in our conservative target label with the Binomial distribution ($\xi=0.2,\eta=0.05,K=30,p=0.5$). Therefore, we recommend choosing $\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}$ and the Binomial-uniform mixture distribution as a simple yet efficient combination. The model is not sensitive to the possible inequality of $\sum_{i=0}^{N-1}t_i$ and $\sum_{i=0}^{N-1}s_i$ caused by numerical precision.
Besides, we visualize the second-to-last layer representations of some sequences in Fig. \ref{fig:5} left. As shown in Fig. \ref{fig:5} right, the shape of the Binomial distribution is important for performance. It degrades to a one-hot or uniform distribution when $K=0$ or when $K$ is large. All of the hyper-parameters in our experiments are chosen via grid search. We see a 27.8\% Mean AE decrease from \cite{hara2017designing} to $\small \mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$, and 33\% for Median AE.
\begin{table}
\scriptsize
\renewcommand\arraystretch{1.2}
\label{tab:different_nets}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Method&Mean AE&Median AE\\\hline\hline
HSSR\cite{yang2018hierarchical}&20.30&3.36\\\hline
SMMR\cite{huang2017soft}&12.61&3.52\\\hline
SHIFT\cite{hara2017designing}&9.86&3.14\\\hline\hline
$\mathcal{L}_{d_{i,j}}{(\rm{{\textbf{s},{\textbf{t}}}})}|{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&6.46 $|$ 6.30&2.29 $|$ 2.18\\\hline
$\mathcal{L}_{d_{i,j}}{(\rm{{\textbf{s},{\textbf{t}}}})}|{(\rm{{\textbf{s},\overline{\textbf{t}}}})},t_j*=\sum_{i=0}^{N-1}s_i$$^\text{\dag}$&6.46 $|$ 6.30&2.29 $|$ 2.18\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}|{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&6.23 $|$ \textbf{6.04}&2.15 $|$ \underline{2.11}\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}|{(\rm{{\textbf{s},\overline{\textbf{t}}}})},t_j*=\sum_{i=0}^{N-1}s_i$$^\text{\dag}$&6.23 $|$ \textbf{6.04}&2.15 $|$ \underline{2.11}\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^3}{(\rm{{\textbf{s},{\textbf{t}}}})}|{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&6.47 $|$ 6.29&2.28 $|$ 2.20\\\hline
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{H\tau}}{(\rm{{\textbf{s},{\textbf{t}}}})}|{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&6.20 $|$ \textbf{6.04}&2.14 $|$ \textbf{2.10}\\\hline
\end{tabular}\label{tab:4}
\end{center}
\caption{Results on EPFL $w.r.t.$ Mean and Median Absolute Error in degree (the lower the better). $|$ means ``or''.$^\text{\dag}$ denotes we assign $t_j*=\sum_{i=0}^{N-1}s_i$, and $t_j*=1$ in all of the other cases.}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=8.3cm]{fig//fig5.pdf}\\
\caption{Left: The second-to-last layer feature of the 7 sequences in EPFL testing set with t-SNE mapping (not space position/angle). Right: Mean AE as a function of $K$ for the Binomial distribution showing that the hyper-parameter $K$ matters.}\label{fig:5}
\end{figure}
\begin{table*}[t]
\tiny
\label{tab:different_nets}
\begin{center}
\begin{tabular}{|c|c|c|cccccccccccc|c|}
\cline{3-16}
\multicolumn{2}{c|}{~}&backbone&aero&bike&boat&bottle&bus&car&chair&table&mbike&sofa&train&tv&mean\\\hline
\multirow{12}*{\rotatebox{90}{$Acc_{\frac{\pi}{6}}$}}&Tulsiani $et~al.$ \cite{tulsiani2015viewpoints}&AlexNet+VGG&0.81&0.77&0.59&0.93&\textbf{0.98}&0.89&0.80&0.62&{0.88}&0.82&0.80&0.80&0.8075\\
&Su $et~al.$ \cite{su2015render}&AlexNet$^\text{\dag}$&0.74&0.83&0.52&0.91&0.91&0.88&0.86&0.73&0.78&0.90&0.86&\textbf{0.92}&0.82\\
&Mousavian $et~al.$ \cite{mousavian20173d}&VGG&0.78&0.83&0.57&0.93&0.94&0.90&0.80&0.68&0.86&0.82&0.82&0.85&0.8103\\
& Pavlakos $et~al.$ \cite{pavlakos20176}&Hourglass&0.81&0.78&0.44&0.79&0.96&0.90&0.80&N/A&N/A&0.74&0.79&0.66&N/A\\
&Mahendran $et~al.$ \cite{mahendran2018mixed}&ResNet50$^\text{\dag}$&0.87&0.81&0.64&\textbf{0.96}&\underline{0.97}&\underline{0.95}&\textbf{0.92}&0.67&0.85&\textbf{0.97}&0.82&0.88&0.8588\\
&Grabner $et~.al.$ \cite{grabner20183d}&ResNet50&0.83&0.82&0.64&0.95&\underline{0.97}&0.94&0.80&0.71&0.88&0.87&0.80&0.86&0.8392\\
&Zhou $et~al.$ \cite{zhou2018starmap}&ResNet18&0.82&\textbf{0.86}&0.50&0.92&\underline{0.97}&0.92&0.79&0.62&0.88&0.92&0.77&0.83&0.8225\\
&Prokudin $et~al.$ \cite{prokudin2018deep}&InceptionResNet&0.89&0.83&0.46&\textbf{0.96}&0.93&0.90&0.80&0.76&0.90&\textbf{0.90}&0.82&\underline{0.91}&0.84\\\cline{2-16}
&${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&0.88&0.79&\underline{0.67}&0.93&0.96&\textbf{0.96}&0.86&0.73&0.86&0.91&\textbf{0.89}&0.87&0.8735\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&0.89&\underline{0.84}&\underline{0.67}&\textbf{0.96}&0.95&\underline{0.95}&0.87&0.75&0.88&0.92&\underline{0.88}&0.89&0.8832\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{H\tau}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&\underline{0.90}&0.82&\textbf{0.68}&0.95&\underline{0.97}&0.94&\underline{0.89}&\underline{0.76}&0.88&\underline{0.93}&0.87&0.88&\underline{0.8849}\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&ResNet50&\textbf{0.91}&0.82&\underline{0.67}&\textbf{0.96}&\underline{0.97}&\underline{0.95}&\underline{0.89}&\textbf{0.79}&\textbf{0.90}&\underline{0.93}&0.85&0.90&\textbf{0.8925}\\\hline\hline
\multirow{7}*{\rotatebox{90}{$Acc_{\frac{\pi}{18}}$}}&Zhou $et~al.$ \cite{zhou2018starmap}&ResNet18&0.49&0.34&0.14&0.56&\textbf{0.89}&\textbf{0.68}&0.45&0.29&0.28&\textbf{0.46}&0.58&0.37&0.4818\\\cline{2-16}
&${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&0.48&0.64&\textbf{0.20}&\textbf{0.60}&0.83&0.62&0.42&0.37&0.32&0.42&0.58&0.39&0.5020\\
&${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&ResNet50&0.48&0.65&\underline{0.19}&0.58&0.86&0.64&0.45&0.38&\underline{0.35}&0.41&0.55&0.36&0.5052\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&0.49&0.63&0.18&0.56&0.85&\underline{0.67}&\underline{0.47}&\textbf{0.41}&0.26&0.43&\textbf{0.62}&0.38&0.5086\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&ResNet50&\underline{0.51}&0.65&\underline{0.19}&\underline{0.59}&0.86&0.63&\textbf{0.48}&\underline{0.40}&0.28&0.41&0.57&\underline{0.40}&\underline{0.5126}\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{H\tau}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&\textbf{0.52}&\textbf{0.67}&0.16&0.58&\underline{0.88}&\underline{0.67}&0.45&0.33&0.25&0.44&\underline{0.61}&0.35&0.5108\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{H\tau}}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&ResNet50&0.50&\underline{0.66}&0.17&0.55&0.85&0.65&0.46&\underline{0.40}&\textbf{0.38}&\underline{0.45}&0.59&\textbf{0.41}&\textbf{0.5165}\\\hline\hline
\multirow{12}*{\rotatebox{90}{$MedErr$}}&Tulsiani $et~al.$ \cite{tulsiani2015viewpoints}&AlexNet+VGG&13.8&17.7&21.3&12.9&5.8&9.1&14.8&15.2&14.7&13.7&8.7&15.4&13.59\\
&Su $et~al.$ \cite{su2015render}&AlexNet&15.4&14.8&25.6&9.3&3.6&6.0&9.7&10.8&16.7&9.5&6.1&12.6&11.7\\
&Mousavian $et~al.$ \cite{mousavian20173d}&VGG&13.6&12.5&22.8&8.3&3.1&5.8&11.9&12.5&12.3&12.8&6.3&11.9&11.15\\
&Pavlakos $et~al.$ \cite{pavlakos20176}&Hourglass&11.2&15.2&37.9&13.1&4.7&6.9&12.7&N/A&N/A&21.7&9.1&38.5&N/A\\
&Mahendran $et~al.$ \cite{mahendran2018mixed}&ResNet50&\textbf{8.5}&14.8&20.5&7.0&3.1&5.1&\textbf{9.3}&11.3&14.2&10.2&5.6&11.7&11.10\\
&Grabner $et~al.$ \cite{grabner20183d}&ResNet50&10.0&15.6&\textbf{19.1}&8.6&3.3&5.1&13.7&11.8&12.2&13.5&6.7&11.0&10.88\\
&Zhou $et~al.$ \cite{zhou2018starmap}&ResNet18&10.1&14.5&30.3&9.1&3.1&6.5&11.0&23.7&14.1&11.1&7.4&13.0&10.4\\ &Prokudin $et~al.$ \cite{prokudin2018deep}&InceptionResNet&\underline{9.7}&15.5&45.6&\textbf{5.4}&2.9&\underline{4.5}&13.1&12.6&11.8&\textbf{9.1}&\underline{4.3}&12.0&12.2\\\cline{2-16}
&${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&9.8&13.2&26.7&6.5&\textbf{2.5}&\textbf{4.2}&\underline{9.4}&10.6&\textbf{11.0}&10.5&\textbf{4.2}&\textbf{9.8}&9.55\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&10.5&12.6&23.1&5.8&\underline{2.6}&5.1&9.6&11.2&\underline{11.5}&9.7&\underline{4.3}&\underline{10.4}&9.47\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{H\tau}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&ResNet50&11.3&\textbf{11.8}&\underline{19.2}&6.8&3.1&5.0&10.1&\textbf{9.8}&11.8&\underline{9.4}&4.7&11.2&\underline{9.46}\\
&$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$&ResNet50&10.1&\underline{12.0}&21.4&\underline{5.6}&2.8&4.6&10.0&\underline{10.3}&12.3&9.6&4.5&11.6&\textbf{9.37}\\\hline
\end{tabular}\label{tab:5}
\end{center}
\caption{Results on PASCAL 3D+ view point estimation $w.r.t.$ $Acc_{\frac{\pi}{6}}$ $Acc_{\frac{\pi}{18}}$ (the higher the better) and $MedErr$ (the lower the better). Our results are based on ResNet50 backbone and without using external training data.}
\end{table*}
\subsection{3D object pose}
The PASCAL3D+ dataset \cite{xiang2014beyond} consists of 12 common categorical rigid object images from the Pascal VOC12 and ImageNet datasets \cite{deng2009imagenet} with both detection and pose annotations. On average, about 3,000 object instances per category are captured in the wild, making it challenging to estimate object pose. We follow the typical experimental protocol of using ground truth detections for both training and testing, and choose the Pascal validation set to evaluate our viewpoint estimation quality \cite{mahendran2018mixed,prokudin2018deep,grabner20183d}.
The pose of an object in 3D space is usually defined as a 3-tuple (azimuth, elevation, cyclo-rotation), and each of them is computed separately. We note that since the range of elevation is $[0,\pi]$, the Wasserstein distance for non-periodic ordered data can be computed via Eq. \eqref{con:ordinal}. We choose the Binomial-uniform mixture distribution ($\xi=0.2,\eta=0.05,K=20,p=0.5$) to construct our conservative label. The same data augmentation and ResNet50 backbone from \cite{mahendran2018mixed} (mixture of CE and regression loss) are adopted for fair comparison; the network has 12 branches, one per category, and each branch has three softmax units for the 3-tuple.
We consider two metrics commonly applied in the literature: accuracy at $\frac{\pi}{6}$, and median error ($i.e.,$ the median of the rotation angle error). Table \textcolor{red}{5} compares our approach with previous techniques. Our methods outperform previous approaches in both testing metrics, with improvements more pronounced than those of recent works. Specifically, $\small \mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},\overline{\textbf{t}}}})}$ outperforms \cite{mahendran2018mixed} by 3.37\% in terms of $Acc_{\frac{\pi}{6}}$, and reduces $MedErr$ from 10.4 \cite{zhou2018starmap} to 9.37 (by 9.5\%).
Besides, we further evaluate $Acc_{\frac{\pi}{18}}$, which assesses the percentage of more accurate predictions. This shows that the prediction probabilities are closely distributed around the ground-truth pose.
\section{Experiments}
\begin{table}[t]
\scriptsize
\renewcommand\arraystretch{1.2}
\label{tab:different_nets}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\cline{1-2}\cline{4-5}
{Method}&MAAD&\scriptsize{~}&{Method}&MAAD\\\cline{1-2}\cline{4-5}
BIT\cite{beyer2015biternion}&25.2$^{\circ}$&\scriptsize{~}&A-${\mathcal{L}_{d_{i,j}}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&17.5$^{\circ}$\\\cline{1-2}\cline{4-5}
DDS\cite{prokudin2018deep}$^\text{\dag}$&23.7$^{\circ}$&\scriptsize{~}&A-${\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&\underline{17.3}$^{\circ}$\\\cline{1-2}\cline{4-5}
$\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$&18.8$^{\circ}$&\scriptsize{~}& $\approx\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$&19.0$^{\circ}$\\\cline{1-2}\cline{4-5}
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&\textbf{17.1}$^{\circ}$&\scriptsize{~}&$\approx\mathcal{L}_{{\rm\textbf{D}}_{i,j}^2}{(\rm{{\textbf{s},{\textbf{t}}}})}$&17.8$^{\circ}$\\\cline{1-2}\cline{4-5}
$\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{chord}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&$19.1^{\circ}$&\scriptsize{~}&$\approx\mathcal{L}_{{\rm\textbf{D}}_{i,j}^{chord}}{(\rm{{\textbf{s},{\textbf{t}}}})}$&19.5$^{\circ}$\\\cline{1-2}\cline{4-5}
\end{tabular}\label{con:1}
\end{center}
\caption{Results on CAVIAR head pose dataset (the lower MAAD the better).$^\text{\dag}$ Our implementation based on their publicly available codes. The best are in bold while the second best are underlined.}
\end{table}
In this section, we show the implementation details and experimental results on the head, pedestrian body, vehicle and 3D object pose/orientation estimation tasks. To illustrate the effectiveness of each setting choice and their combinations, we give a series of elaborate ablation studies along with the standard measures.
We use the prefixes A and $\approx$ to denote the adaptive ground metric learning (in Sec. 3.1) and the approximate computation of Wasserstein distance \cite{cuturi2013sinkhorn,frogner2015learning}, respectively. ${(\rm{{\textbf{s},{\textbf{t}}}})}$ and ${(\rm{{\textbf{s},\overline{\textbf{t}}}})}$ refer to using the one-hot or conservative target label. For instance, $\mathcal{L}_{d_{i,j}}(\rm{\textbf{s},{\textbf{t}}})$ means choosing the Wasserstein loss with arc length as the ground metric and using the one-hot target label.
\section{Conclusions}
We have introduced a simple yet efficient loss function for pose estimation, based on the Wasserstein distance. Its ground metric represents the class correlation and can be predefined using an increasing function of arc length or learned by alternative optimization. Both the outlier and inlier noise in pose data are incorporated in a unimodal-uniform mixture distribution to construct the conservative label. We systematically discussed the fast closed-form solutions in the one-hot and conservative label cases. The results show that the best performance can be achieved by choosing a convex function, a Binomial distribution for smoothing, and solving for the exact solution. Although it was originally developed for pose estimation, our method is essentially applicable to other problems with discrete and periodic labels. In the future, we plan to develop a more stable adaptive ground metric learning scheme for larger numbers of classes, and to adjust the shape of the conservative target distribution automatically.
\section{Acknowledgement}
The funding support from Youth Innovation Promotion Association, CAS (2017264), Innovative Foundation of
CIOMP, CAS (Y586320150) and Hong Kong Government General Research Fund GRF (Ref. No.152202/14E) are greatly appreciated.
\section{Introduction}
The Advanced Encryption Standard (AES) is a 128-bit block cipher with
128/192/256 - bit key, defined in the FIPS 197 standard \cite{NI01}.
AES is a mandatory building block of the TLS 1.3 \cite{Re18} security
protocol and is widely used for storage encryption, shared-secret
authentication, cryptographic random number generation, and in many
other applications.
The SM4 block cipher \cite{SA16A} fulfills a similar role to AES in
the Chinese market and is the main block cipher recommended for use
in China. It is also standardized with ISO \cite{IS18}.
SM4 also has a 128-bit block size, but only one key size, 128
bits. Even though its high-level structure differs completely from
AES, the two share significant similarities in their sole nonlinear
component, which is a single $8 \times 8$-bit ``S-Box'' substitution
table in both cases.
Cache timing attacks on AES became well known in the mid-2000s when
it was demonstrated that common table-based implementations can be
exploited even remotely \cite{OsShTr06,Be05}; very similar issues
also affect SM4. In presence of a cache, the only way to make the
execution time of these ciphers fully independent of secret data is to
eliminate the table lookup either by implementing it as bitsliced Boolean
logic or by providing a specific ISA extension for the S-Box lookup.
Consumer CPUs have had instructions to support AES for almost a
decade via the Intel AES-NI in x86 \cite{Gu09} and ARMv8-A cryptographic
extensions \cite{AR19}; these are almost universally available in PCs
and higher-end mobile devices such as phones. ARM also supports SM4 via
the ARMv8.2-SM extension. The AES instructions have been shown to make
AES much less of a throughput bottleneck for high-speed TLS communication
(servers) and storage encryption (mobile devices), thereby also extending
battery life in the latter. Both Intel and ARM cryptographic
ISAs require 128-bit (SIMD) registers and are not available on
lower-end CPUs.
In this work, we show that it is possible to create a simple AES and SM4
ISA extension that offers a significant performance improvement
and timing side-channel resistance with a minimally increased hardware
footprint. It is especially suitable for lightweight RV32 targets.
\section{A Lightweight AES and SM4 ISA Extension}
The ISA extension operates on the main register file only, using two
source registers, one destination register, and a 5-bit field
\verb|fn[4:0]| which can be seen either as an ``immediate constant''
or just as code points in the instruction encoding.
In either case, the interface to the (reference) combinatorial logic is:
\begin{lstlisting}
module saes32(
output [31:0] rd, // to output register
input [31:0] rs1, // input register 1
input [31:0] rs2, // input register 2
input [4:0] fn // 5-bit func specifier
);
\end{lstlisting}
See Section \ref{sec:interface} for encoding details of SAES32 as an
RV32 R-type custom instruction for testing purposes. For RV64 the
words are simply truncated or zero-extended.
The five bits of $\verb|fn|$ cover encryption, decryption, and key
schedule for both algorithms. Bits \verb|fn[1:0]| first select
a single byte from \verb|rs2|. Two bits \verb|fn[4:3]| indicate which
$8 \to 8$-bit S-Box is used (AES, AES$^{-1}$, or SM4), and
additionally \verb|fn[4:2]| specifies an $8 \to 32$-bit linear
expansion transformation (each of three S-Boxes has two alternative
linear transforms, indicated by \verb|fn[2]|). The expanded 32-bit
value is then rotated by 0--3 byte positions based on \verb|fn[1:0]|.
The result is finally XORed with \verb|rs1| and written to \verb|rd|.
Table \ref{tab:fnids} contains the identifiers (pseudo instructions)
that we currently use for bits \verb|fn[4:2]|. We may arrange
computation so that \verb|rd| = \verb|rs1| without increasing
instruction count, making a two-operand ``compressed'' encoding possible.
\begin{table}
\caption{High-level assembler identifiers for fn[4:2].}
\label{tab:fnids}
\begin{center}
\begin{tabular}{l c l}
{\bf Instruction} & {\bf fn[4:2]} & {\bf Description or Use} \\
\hline
\verb|saes32.encsm| & \verb|3'b000| & AES Encrypt round. \\
\verb|saes32.encs| & \verb|3'b001| & AES Final / Key sched. \\
\verb|saes32.decsm| & \verb|3'b010| & AES Decrypt round. \\
\verb|saes32.decs| & \verb|3'b011| & AES Decrypt final. \\
\verb|ssm4.ed| & \verb|3'b100| & SM4 Encrypt and Decrypt. \\
\verb|ssm4.ks| & \verb|3'b101| & SM4 Key Schedule. \\
{\it Unused} & \verb|3'b11x| & ($4 \times 6=24$ points used.)\\
\hline
\end{tabular}
\end{center}
\end{table}
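The dataflow of the instruction can be summarized with the following Python reference-model sketch. Here \verb|sbox()| and \verb|expand()| are placeholders for the three S-Boxes and their paired linear expansions; the reference implementation is authoritative for their exact logic:
\begin{lstlisting}
# Sketch of the SAES32 semantics described above (Python model).
def saes32(rs1, rs2, fn):
    bs = fn & 3                      # fn[1:0]: byte select / rotate
    x = (rs2 >> (8 * bs)) & 0xFF     # pick one byte of rs2
    x = sbox(fn >> 3, x)             # fn[4:3]: AES, AES^-1, or SM4
    x = expand(fn >> 2, x)           # fn[4:2]: 8 -> 32 bit expansion
    x = ((x << (8 * bs)) | (x >> (32 - 8 * bs))) & 0xFFFFFFFF
    return rs1 ^ x                   # XOR with rs1, result to rd
\end{lstlisting}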
For AES the instruction selects a byte from \verb|rs2|, performs a single
S-box lookup ({\it SubBytes} or its inverse), evaluates a part of the MDS
matrix ({\it MixColumns}) if that linear expansion step is selected,
rotates the result by a multiple of 8 bits
({\it ShiftRows}), and XORs the result with \verb|rs1| ({\it AddRoundKey}).
There is no need for separate instructions for individual steps of AES
as small parts of each of them have been incorporated into a single
instruction. We've found that each one of these substeps requires
surprisingly little additional logic.
For SM4 the instruction has the same data path with byte selection,
S-Box lookup, and two different linear operations, depending on whether
an encryption/decryption or a key scheduling task is being performed.
Both AES \cite{NI01} and SM4 \cite{SA16A} specifications are written
using big-endian notation, while RISC-V uses a primarily little-endian
convention \cite{_WaAs19}. To avoid endianness conversion, the
linear expansion step outputs have a flipped byte order.
This is less noticeable with AES, but the 32-bit word rotations of SM4
become less intuitive to describe (while wiring is equivalent).
We refer to the concise reference implementation discussed in Section
\ref{sec:refimpl} for details about specific logic operations required to
implement the ISA extension, and for standards-derived unit tests.
\section{Using the AES and SM4 Instructions}
AES and SM4 were originally designed primarily for 32-bit software
implementation. SAES32/SSM4 adopts the ``intended'' 32-bit
implementation structure but removes table lookups and rolls several
individual steps into the same instruction. Both AES and SM4
implementations are also realizable with the reduced ``E''
register file without major changes.
\subsection{AES Computation and Key Schedule}
The structure of an AES implementation is similar to a ``T-Table''
implementation, with sixteen invocations of \verb|saes32.encsm|
per round and not much else (apart from fetching the round subkeys).
In practice, two sets of four registers are used to store
the state, with one set being used to rewrite the other, depending
on whether an odd or even-numbered round is being processed. AES
has $r \in \{10, 12, 14\}$ rounds, depending on the key size,
which can be $128$, $192$, or $256$ bits, respectively. The final round requires
sixteen invocations of \verb|saes32.encs|. The same instructions are
also used in the key schedule which expands the secret key to
$4r+4$ subkey words.
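As an illustration only (the reference assembler listings are the
authoritative source), one encryption round can be written against the
\verb|saes32()| emulator sketch above; the byte-source pattern simply
follows the T-Table formulation of {\it ShiftRows}.
\begin{lstlisting}
// Hypothetical sketch of one AES encryption round: s[] is the old
// state, k[] the four subkey words (4 lw), t[] the new state.
void aes_enc_round(uint32_t t[4], const uint32_t s[4],
                   const uint32_t k[4])
{
    for (int j = 0; j < 4; j++) {
        uint32_t c = k[j];           // subkey seeds the column
        for (int b = 0; b < 4; b++)  // fn[4:2]=3'b000, fn[1:0]=b
            c = saes32(c, s[(j + b) & 3], b);
        t[j] = c;                    // 16 saes32.encsm per round
    }
}
\end{lstlisting}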
The inverse AES operation is structured similarly, with 16
\verb|saes32.decsm| per main body round and 16 \verb|saes32.decs| for the
final round. These instructions are also used for reversing the
key schedule.
Four precomputed subkey words must be fetched in each round, requiring
four loads (\verb|lw| instructions) in addition to their address increment
(typically every other round).
There is no need for separate {\it AddRoundKey} XORs as the
subkeys simply initialize either one of the four-register sets used
to store the state.
It is also possible to compute the round keys ``on the fly'' without
committing them to RAM. This may be helpful in certain types of security
applications. The overhead is roughly 30\%. However, if the load
operation is much slower than register-to-register arithmetic,
the overhead of on-the-fly subkey computation can become negligible.
On-the-fly keying is more challenging in the reverse (decryption) direction.
\subsection{SM4 Computation and Key Schedule}
SM4 has only one key size, 128 bits. The algorithm has 32 steps,
each using a single 32-bit subkey word. The steps are typically organized
into 8 full rounds of 4 steps each.
Due to its Feistel-like structure, SM4 does not require an inverse S-Box
for decryption, unlike AES, which is a substitution-permutation network (SPN).
The inverse SM4 cipher is equivalent to the forward cipher, but with a
reversed subkey order.
Each step uses all four state words and one subkey word as inputs,
replacing a single state word. Since the input mixing is built from XORs,
some of the temporary XOR values remain unchanged and can be shared
between steps. Each round requires ten XORs in addition to sixteen
\verb|ssm4.ed| invocations, bringing the total number of arithmetic
instructions to 26 per round -- or 6.5 per step. Therefore SM4 performance
is slightly lower than that of AES-128, despite having fewer full rounds.
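In the same emulator model a single step could be sketched as follows;
the constant \verb|0x10| corresponds to \verb|fn[4:2] = 3'b100|
(\verb|ssm4.ed|), and the three input-mixing XORs are the ones that can
be partially shared between consecutive steps.
\begin{lstlisting}
// Hypothetical SM4 step sketch: returns x0 ^ T(x1^x2^x3^rk).
uint32_t sm4_step(uint32_t x0, uint32_t x1, uint32_t x2,
                  uint32_t x3, uint32_t rk)
{
    uint32_t t = x1 ^ x2 ^ x3 ^ rk;  // mixing XORs (shareable)
    uint32_t r = x0;
    for (int b = 0; b < 4; b++)      // four ssm4.ed invocations
        r = saes32(r, t, 0x10 | b);  // fn[4:2] = 3'b100
    return r;                        // replaces one state word
}
\end{lstlisting}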
The key schedule similarly requires 16 invocations of \verb|ssm4.ks|
and 10 XORs to produce a block of four subkey words. The key schedule
uses 32 ``CK'' round constants which can be either fetched from a table
or computed with 8-bit addition operations on the fly.
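As a reminder of how cheap the on-the-fly option is, each constant byte
is defined in the SM4 specification as $ck_{i,j} = (4i+j)\cdot 7 \bmod 256$;
a small sketch follows (the byte packing order shown is an assumption
and depends on the endianness convention discussed earlier).
\begin{lstlisting}
// Sketch: generate the i-th CK word bytewise, i = 0..31.
// The packing order shown is an assumption, not normative.
uint32_t sm4_ck(unsigned i)
{
    uint32_t w = 0;
    for (unsigned j = 0; j < 4; j++)
        w |= (uint32_t)(((4 * i + j) * 7) & 0xFF) << (8 * j);
    return w;
}
\end{lstlisting}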
For SM4 each block of four consecutive invocations
of \verb|ssm4.ed| and \verb|ssm4.ks| shares the same source and
destination registers, differing only in \verb|fn[1:0]|, which steps
through $\{0,1,2,3\}$. We denote such four-SSM4 blocks by the pseudo
instructions \verb|ssm4.ed4| and \verb|ssm4.ks4|. One can reduce the
per-round instruction count of SM4 from 26 (+4 lw) to 14 (+4 lw)
by implementing \verb|ssm4.ed4| as a ``real'' instruction. In terms
of hardware area \verb|ssm4.ed4| would be almost four times larger than
\verb|ssm4.ed| as it has four parallel S-Boxes.
Note that a 32-bit ``T-Table'' type AES implementation does
{\it not} benefit from four parallel S-Boxes in encryption or decryption,
only in the key schedule.
\section{Reference Implementation}
\label{sec:refimpl}
An open-source reference implementation is available\footnote{AES/SM4
ISA Extension: \url{https://github.com/mjosaarinen/lwaes_isa}}.
The distribution contains HDL combinatorial logic for the SAES32
instruction (including the S-Boxes) and provisional assembler listings
for full AES-128/192/256 and SM4-128.
The package also has (C-language) emulator code for the instruction
logic, ``runnable pseudocode'' implementations of algorithms, and a
set of standards-derived unit tests.
This research distribution is primarily intended for obtaining data such
as instruction counts and intermediate values but can be readily
integrated into many RISC-V cores and emulators.
\subsection{About the AES, SM4 S-Boxes}
AES and SM4 can share data paths, so it makes sense to explore their
additional structural similarities and differences.
Both the SM4 and AES S-Boxes are constructed from finite field inversion
$x^{-1}$ in $\msf{GF}(2^8)$ together with linear (affine)
transformations on the input and/or output. The inversion makes them
``Nyberg S-Boxes'' \cite{Ny93} with desirable properties against
differential and linear cryptanalysis, while the linear mixing steps
are intended to break the bytewise algebraic structure.
Since $x^{-1}$ is an involution (self-inverse) and affine isomorphic
regardless of the polynomial basis, the AES, AES$^{-1}$, and SM4 S-Boxes
really differ only in their inner and outer linear layers.
Boyar and Peralta \cite{BoPe12} show how to build low-depth circuits
for AES that are composed of linear top and bottom layers and
a shared nonlinear middle stage.
Here XOR and XNOR gates are ``linear'' and the shared nonlinear layer
consists of XOR and AND gates only. We created new outer layers for SM4
that use the same middle layer as AES and AES$^{-1}$.
\begin{table}
\caption{Algebraic gate counts for Boyar-Peralta type low-depth
S-Boxes that implement SM4 in addition to AES and AES$^{-1}$.}
\label{tab:sboxgates}
\begin{center}
\begin{tabular}{l r@{\hskip3pt}c@{\hskip3pt}l z{8mm} z{8mm} z{8mm} z{8mm}}
{\bf Component} & \multicolumn{3}{c}{In, Out}
& \scriptsize{\sf XOR} & \scriptsize{\sf XNOR}
& \scriptsize{\sf AND} & {\bf Total} \\
\hline
Shared middle & 21 & $\to$ & 18 & 30 & - & 34 & 64 \\
AES top & 8 & $\to$ & 21 & 26 & - & - & 26 \\
AES bottom & 18 & $\to$ & 8 & 34 & 4 & - & 38 \\
AES$^{-1}$ top & 8 & $\to$ & 21 & 16 & 10 & - & 26 \\
AES$^{-1}$ bottom & 18 & $\to$ & 8 & 37 & - & - & 37 \\
SM4 top & 8 & $\to$ & 21 & 18 & 9 & - & 27 \\
SM4 bottom & 18 & $\to$ & 8 & 33 & 5 & - & 38 \\
\hline
\end{tabular}
\end{center}
\end{table}
Each S-Box expands an 8-bit input to 21 bits in a linear inner (``top'')
layer, uses the shared nonlinear 21-to-18 bit mapping as a middle
layer, and again compresses 18 bits to 8 bits in the outer (``bottom'')
layer. Table \ref{tab:sboxgates} gives the individual gate counts
for each layer; summing top, middle, and bottom gives the total
S-Box gate count (for example, $26+64+38=128$ gates for the forward AES S-Box).
Despite such a strict structure and limited choice of gates
(which is suboptimal for silicon but very natural for
mathematics), these are among the smallest known circuits for AES.
Note that it is possible to implement the AES S-Box with fewer gates
(113 in total), but this results in a 50\% higher circuit depth \cite{BoPe10}.
\subsection{Experimental Instruction Encoding and Synthesis}
\label{sec:interface}
For prototyping we interfaced the SAES32 logic using the {\it custom-0}
opcode and R-type instruction encoding, with \verb|fn[4:0]| occupying
the lower five bits of the funct7 field:
\begin{scriptsize}
\begin{center}
\begin{tabular}{| c | c | c | c | c | c | c | }
\multicolumn{1}{c}{[31:30]} &
\multicolumn{1}{c}{[29:25]} &
\multicolumn{1}{c}{[24:20]} &
\multicolumn{1}{c}{[19:15]} &
\multicolumn{1}{c}{[14:12]} &
\multicolumn{1}{c}{[11:7]} &
\multicolumn{1}{c}{[6:0]} \\
\cline{1-7}
00 & {\tt fn} & {\tt rs2} & {\tt rs1} & 000 & {\tt rd} & 0001011
\Tstrut\Bstrut\\
\cline{1-7}
\end{tabular}
\end{center}
\end{scriptsize}
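A test harness can emit this encoding with a small helper such as the
following (a hypothetical convenience function, not part of the proposal):
\begin{lstlisting}
// Pack the experimental R-type SAES32 encoding into a word.
uint32_t saes32_word(uint32_t fn, uint32_t rd,
                     uint32_t rs1, uint32_t rs2)
{
    return ((fn  & 0x1F) << 25) |  // [29:25] = fn, [31:30] = 00
           ((rs2 & 0x1F) << 20) |  // [24:20] = rs2
           ((rs1 & 0x1F) << 15) |  // [19:15] = rs1
           ((rd  & 0x1F) <<  7) |  // [11:7] = rd, funct3 = 000
           0x0B;                   // [6:0] = 0001011 (custom-0)
}
\end{lstlisting}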
The implementation has been tested with PQShield's ``Pluto'' RISC-V
core. We synthesized the same core on a low-end Xilinx Artix-7 FPGA target
(XC7A35TICSG324-1L) with and without the SAES32 (AES, SM4) instruction
extension and the related execution pipeline interface. Table \ref{tab:socsize}
summarizes the relative area added by SAES32. For comparison, we also
measured the size of a memory-mapped AES module ``EXTAES''. This module
implements AES encryption only.
Based on our FPGA experiments we estimate that the full (AES, AES$^{-1}$,
SM4) instruction proposal increases the amount of core logic (LUTs)
by about 5\% over a typical baseline RV32I core, but relatively much
less for more complex cores.
Table \ref{tab:yosys} contains area estimates for the SAES32 module
(excluding the additional decoding logic)
using the Yosys ``Simple CMOS'' flow, which uses a mock-up ASIC
cell library. Here GE is the gate count (NAND2 equivalents) and Longest
Topological Path (LTP) is the reported depth (delay) measure.
Implementors can experiment to determine whether it is beneficial to
multiplex the S-Box
linear layers with the shared middle layer. The required mux logic seems
large and increases the circuit depth, so our current reference
implementation does not use it.
\begin{table}
\begin{center}
\caption{RV32 SoC area with and without SAES32 (AES, AES$^{-1}$, SM4);
``Pluto'' core on an Artix-7 FPGA. EXTAES is a CPU-external
memory-mapped AES-only module, presented for comparison.}
\label{tab:socsize}
\begin{tabular}{| l | r | r@{\hskip3pt}l | r@{\hskip3pt}l |}
\hline
{\bf Resource} & {\bf Base}
& \multicolumn{2}{c|}{{\bf SAES32} ($\Delta$)}
& \multicolumn{2}{c|}{{\bf EXTAES} ($\Delta$)} \\
\hline
Logic LUTs & 7,767 & 8,202 & (+435) & 9,795 & (+2,028) \\
Slice regs & 3,319 & 3,342 & (+23) & 4,361 & (+1,042) \\
SLICEL & 1,571 & 1,864 & (+293) & 2,068 & (+497) \\
SLICEM & 734 & 737 & (+3) & 851 & (+117) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Yosys Simple CMOS Flow area estimates for SAES32.}
\label{tab:yosys}
\begin{tabular}{| l | r | r | r |}
\hline
{\bf Target} & GE (NAND2) & Transistors & LTP \\
\hline
AES Encrypt only & 642 & 2,568 & 25 \\
SM4 Full & 767 & 3,066 & 25 \\
AES Full & 1,240 & 4,960 & 28 \\
AES + SM4 Full & 1,679 & 6,714 & 28 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Performance and Security Analysis}
The hand-optimized AES implementation\footnote{Ko Stoffelen:
``RISC-V Crypto'' \cite{St19} \url{https://github.com/Ko-/riscvcrypto}}
referenced in \cite{St19} requires 80 instructions per
round. The same task can be accomplished with 16 SAES32 instructions.
Furthermore, 16 of those 80 are memory loads, which typically require
more cycles than a simple arithmetic instruction (or SAES32). Each AES
round additionally requires a few operations for loading subkeys and
managing instruction flow.
Overall, based on RV32 and RV64 instruction counts we estimate that the
performance of a SAES32 AES can be expected to be more than 500\% better
than the fastest AES implementations that use the baseline ISA only.
Much of the precise performance gain over a table-based
implementation depends on the latency of memory load operations.
SAES32-based AES and SM4 implementations are inherently constant-time and
resistant to timing attacks. Stoffelen \cite{St19} also presents a
constant-time, bitsliced AES implementation for RISC-V which requires
$2.5$ times more cycles than the optimized table-based implementation.
Hence the SAES32 speedup over a timing side-channel hardened base
ISA implementation is expected to be roughly 15-fold.
We are not aware of any definitive assembler benchmarks for SM4 on
RISC-V, but based on instruction count estimates the performance
improvement can be expected to be roughly similar or greater (over 500\%).
Without SAES32 a simple SM4 software implementation would benefit
from rotation instructions (in the proposed RISC-V bit
manipulation extension).
We have only discussed timing side-channel attacks. Since these
instructions interact with the main register file, any electromagnetic
emission countermeasures would probably have to be extended to additional
parts of the CPU core.
\section{Conclusions}
We propose a minimalistic RISC-V ISA extension for AES and SM4
block ciphers. The resulting speedup is 500\% or more for both ciphers
when compared to hand-optimized base ISA assembler implementations
that use lookup tables.
In addition to saving energy and reducing latency in secure communications
and storage encryption, the main security benefit of the instructions
is their constant-time operation and resulting resistance against cache
timing attacks. Such countermeasures are expensive in pure software
implementations.
The instructions require logic only for a single S-Box, which is combined
with additional linear layers for increased code density and performance.
The hardware footprint of the instruction is very small as a result.
If both AES and SM4 are implemented on the same target, they can share
data paths, which further simplifies the hardware. However, AES and SM4
are independent of each other, and AES$^{-1}$ is also optional.
It is not rare to implement and use the forward AES without the inverse,
as common CTR-based AES modes (such as GCM) do not require the inverse
cipher for decryption \cite{Dw01}.
This proposal is targeted towards (ultra) lightweight MCUs
and SoCs. A different type of ISA extension may provide additional
speedups on 64-bit and vectorized platforms, but at the cost of
increased implementation area. Designers may still want
to choose this minimal-footprint option if timing side-channel
resistance is their primary concern.
{\bf Postscript.} Since the original preprint distribution
of this work in February 2020, these 32-bit scalar AES and SM4
instructions have been evaluated (AES as ``$\nu_3$'' in \cite{MaNePa+20})
and adopted into the RISC-V Crypto Extension Proposal \cite{_Ma20}.
We have changed the instruction naming (from ``ENC1S'' to ``SAES32'')
in this paper to reflect that specification.
\ifhyper
\bibliographystyle{abbrvurl}
\else
\bibliographystyle{abbrv}
\fi
|
1,108,101,563,137 | arxiv | \section{Introduction}
What is the defining (or vanishing) ideal of a finite set ${\mathbb X}$ of points
in the affine space?
The standard answer is that it is the set of all the polynomials which vanish
at~${\mathbb X}$. And there are very efficient methods to compute it, based on
Buchberger-M\"oller's algorithm (see for
instance~\cite{ABKR}, \cite{AKR} and~\cite{BM}).
However, the logical and computational environment changes
completely when the coordinates of the points are perturbed by errors,
a situation which is normal when dealing with real world problems.
In that case one has to use approximation and to consider the question
of stability. Introductory material about this topic can be found in
the book~\cite{RA}, in particular in the
paper~\cite{KPR} and its bibliography.
The methods used so far share
the strategy of modifying the Buchberger-M\"oller Algorithm and computing
a Gr\"obner basis or a border basis of an ideal of polynomials which
{\it almost vanish}\/ at ${\mathbb X}$ (see for instance~\cite{F} and~\cite{FT}).
A key remark is that, whatever algorithm is used,
at a certain moment one has computed $n$ polynomials $f_1, \dots, f_n$
which generate a zero-dimensional ideal.
Since the dimension has dropped from $n$ to zero,
the $n$ polynomials form a complete intersection
which almost vanishes at~${\mathbb X}$.
Further steps in the algorithm will be used to eliminate spurious
points and to produce a Gr\"obner or border basis.
Now, a complete intersection of $n$ polynomials in $n$
indeterminates has only a finite number of zeros, and the main
question is: how do the zeros change when the coefficients of the
polynomials are perturbed? Can we devise a strategy to make
the situation reasonably stable? In other words, can we change
the generating polynomials so that the stability of their
common zeros increases?
It is well-known that for a linear system with $n$ equations and $n$
unknowns, the most stable situation occurs when the coefficient
matrix is orthonormal. Is there an analogue to orthonormality when
we deal with polynomial systems?
In numerical analysis the condition number of a problem
measures the sensitivity of the solution to small changes
in the input data, and so it
reveals how numerically well-conditioned the problem is.
There exists a huge body of results about condition
numbers for various numerical problems, for instance
the solution of a linear system, the problem of matrix inversion,
the least squares problem, and the computation of eigenvalues and
eigenvectors.
On the other hand, not very much is known about condition numbers
of polynomial systems. As a notable exception we mention the
paper~\cite{SS93} of Shub and Smale who
treated the case of zero-dimensional homogeneous polynomial
systems; later on their result was extended by
D\'egot (see~\cite{D01}) to the case of positive-dimensional
homogeneous polynomial systems.
Tackling the above mentioned problem entails a
preliminary analysis of the following question of algebraic nature.
If we are given a zero-dimensional complete intersection of
polynomials with simple zeros, how far can we perturb
the coefficients so that the zeros remain smooth
and their number does not change?
It is quite clear that smoothness and constancy
of the number of zeros are essential if we want to
consider the perturbation a good one.
Starting with the classical idea that a perturbed system is a
member of a family of systems, we describe a good subset of
the parameter space over which the members of the family
share the property that their zero sets
have the same number of smooth real points.
This is the content of Section~\ref{Families of Complete Intersections}
where we describe a free (see~Proposition~\ref{flatness}) and a smooth
(see~Theorem~\ref{jacobian}) locus in the parameter space.
Then we provide a suitable algorithm to
compute what we call an $I$-optimal subscheme of the parameter
space (see Corollary~\ref{algo-optimal}): it is a subscheme
over which the complete intersection schemes are smooth
and have the same number of complex points.
The last important result of Section~\ref{Families of Complete Intersections}
is Theorem~\ref{sturm} which proves the existence of an
open non-empty semi-algebraic subscheme of the $I$-optimal subscheme
over which the number of real zeros is constant.
Having described a good subscheme of the parameter space
over which we are allowed to move, and hence over which we
can perturb our data, we pass in Section~\ref{Condition Numbers}
to the next problem and concentrate
our investigation on a single point of the zero set.
After some preparatory results,
we introduce a local condition number
(see Definition~\ref{LocalCondNumb}) and with its help we
prove Theorem~\ref{theoremCN}
which has the merit of fully generalizing a classical result in
numerical linear algebra (see Remark~\ref{remarkCN}).
The subsequent short
Section~\ref{Optimization of the local condition number}
illustrates how to manipulate the equations in order to
lower, and sometimes to minimize, the local condition number
(see Proposition~\ref{scalingCN}). Then we concentrate on the
case of the matrix 2-norm and show how to
achieve the minimum when the polynomials involved
have equal degree (see Proposition~\ref{min2norm}).
The final Section~\ref{Experiments} describes examples
which indicate that our approach is good, in particular we see that when
the local condition number is lowered, indeed the corresponding
solution is more stable.
This paper reports on the first part of a wider investigation.
Another paper is already planned to describe how to deal with
global condition numbers and how to generalize our method
to the case where the polynomials involved have arbitrary degrees.
All the supporting computations were performed
with \cocoa\ (see~\cite{Co}).
We thank Marie-Fran\c coise Roy and Saugata Basu for
some help in the proof of Theorem~\ref{sturm}.
\section{Families of Complete Intersections}
\label{Families of Complete Intersections}
Given a zero-dimensional
smooth complete intersection ${\mathbb X}$, we want to embed it into
a family of zero-dimensional complete intersections and study
when and how~${\mathbb X}$ can move inside the family.
In particular, we study the locus of the parameter-space
over which the fibers are smooth with the same number
of points as~${\mathbb X}$, and we give special emphasis
to the case of real points.
\medskip
We start the section by recalling some definitions.
The notation is borrowed
from~\cite{KR1} and~\cite{KR2}, in particular
we let~$x_1, \dots, x_n$ be
indeterminates and let $\mathbb T^n$
be the monoid of the power products in the
symbols $x_1, \dots, x_n$.
Most of the time, for simplicity, we use the notation
${\mathbf x} = x_1, \dots, x_n$.
If~$K$ is a field, the multivariate
polynomial ring~$K[{\mathbf x}]=K[x_1,\dots,x_n]$ is
denoted by~$P$, and if $f_1({\mathbf x}), \dots, f_k({\mathbf x})$
are polynomials in $P$,
the set~$\{f_1({\mathbf x}), \dots, f_k({\mathbf x})\}$ is denoted by~${\mathbf f}({\mathbf x})$
(or simply by~${\mathbf f}$).
Finally, we denote the {\it polynomial system}\/
associated to~${\mathbf f}({\mathbf x})$ by~${\mathbf f}({\mathbf x})=0$ (or simply by ${\mathbf f}=0$),
and we say that the system is zero-dimensional if
the ideal generated by~${\mathbf f}({\mathbf x})$ is zero-dimensional
(see~\cite{KR1}, Section~3.7).
Easy examples show that, unlike in the homogeneous case,
in the inhomogeneous case regular sequences are not
independent of the order of their entries. For instance,
if $f_1 = y(x+1)$, $f_2 = z(x+1)$, $f_3 = x$,
then $(f_1, f_2, f_3)$ is not a regular sequence,
while $(f_3, f_1, f_2)$ is. However,
we prefer to avoid a distinction between these cases,
and we call them {\it complete intersections}.
In other words, we use the following definition.
\begin{definition}
Let $t$ be a positive integer, let~${\mathbf f}({\mathbf x})$ be a set
of~$t$ polynomials in $P=K[x_1, \dots, x_n]$
and let $I$ be the ideal generated by ${\mathbf f}({\mathbf x})$.
\begin{itemize}
\item[(a)] The set~${\mathbf f}({\mathbf x})$ (and the ideal~$I$)
is called a {\bf complete intersection} if the equality
$\dim(P/I) = n-t$ holds.
\item[(b)] The set~${\mathbf f}({\mathbf x})$ (and the ideal~$I$)
is called a {\bf zero-dimensional complete intersection}
if it is a complete intersection and $t=n$.
\end{itemize}
\end{definition}
\noindent
Let $n$ be a positive integer,
let~$P$ denote the polynomial
ring~$K[x_1, \dots,x_n]$,
let ${\mathbf f}({\mathbf x})=\{f_1({\mathbf x}),\ldots,f_n({\mathbf x})\}$
be a zero-dimensional complete intersection, and let~$I$
be the ideal of~$P$ generated by~${\mathbf f}({\mathbf x})$.
We let~$ m$ be a positive integer
and let ${\mathbf a} = (a_1, \dots, a_m)$
be an $m$-tuple of indeterminates which will play the role of
parameters. If $F_1({\mathbf a}, {\mathbf x}), \ldots, F_n({\mathbf a}, {\mathbf x})$ are polynomials
in~$K[{\mathbf a}, {\mathbf x}]$ we let~${F({\mathbf a}, {\mathbf x})=0}$ be the corresponding
family of systems of equations
parametrized by~${\mathbf a}$, and
the ideal generated by~$F({\mathbf a},{\mathbf x})$ in $K[{\mathbf a},{\mathbf x}]$
is denoted by $I({\mathbf a},{\mathbf x})$.
If the scheme of the ${\mathbf a}$-parameters is ${\mathcal S}$,
then there is a $K$-algebra homomorphism
$\phi: K[{\mathbf a}] \To K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x})$ or, equivalently,
a morphism of schemes
$\Phi: \mathop{\rm Spec}\nolimits(K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x})) \To {\mathcal S}$.
Although it is not strictly necessary for the theory, for our applications
it suffices to consider independent parameters. Here is
the formal definition.
\begin{definition}\label{independparams}
If ${\mathcal S}= \mathbb A^m_K$ and
${I({\mathbf a},{\mathbf x})\cap K[{\mathbf a}] = (0)}$, then the parameters~${\mathbf a}$ are
said to be {\bf independent} with respect to $F({\mathbf a},{\mathbf x})$, or simply
independent if the context is clear.
\end{definition}
The first important step is to embed the
system~${\mathbf f}({\mathbf x})=0$ into a family, but we must be careful
and exclude families of the following type.
\begin{example}\label{bad}
Consider the family ${F(a,{\mathbf x}) = \{ x_1(ax_2+1), x_2(ax_2+1)\}}$.
It is a zero-dimensional complete intersection only for $a = 0$,
while the generic member is positive-dimensional.
\end{example}
\begin{definition}
Let~${\mathbf f}({\mathbf x})$ be a set of polynomials in $P=K[x_1, \dots, x_n]$ so
that~${\mathbf f}({\mathbf x})$ is a zero-dimensional complete intersection
and let~$F({\mathbf a}, {\mathbf x})$ be a family parametrized by~$m$
independent parameters~${\mathbf a}$.
We say that $F({\mathbf a}, {\mathbf x})$ (and similarly $K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x})$
and $\mathop{\rm Spec}\nolimits(K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x}))$) is
a {\bf generically zero-dim\-ensional
family containing ${\mathbf f}({\mathbf x})$}, if~${\mathbf f}({\mathbf x})$ is a member of the
family and the generic member of the family is
a zero-dimensional complete intersection.
\end{definition}
A theorem called {\it generic flatness}\/ (see~\cite{E}, Theorem 14.4)
prescribes the existence of a non-empty Zariski-open
subscheme ${\mathcal U}$ of ${\mathcal S}$ over which the morphism
$\Phi^{-1}({\mathcal U}) \To {\mathcal U}$ is {\it flat}. In particular, it is possible
to explicitly compute a subscheme over which the morphism is free.
To do this, Gr\"obner bases reveal themselves as a fundamental tool.
\begin{definition}\label{iflat}
Let $F({\mathbf a}, {\mathbf x})$ be a generically zero-dimensional family
which contains a zero-dimensional complete intersection~${\mathbf f}({\mathbf x})$.
Let ${\mathcal S}=\mathbb A^m_K$ be the scheme of the independent
${\mathbf a}$-parameters
and let $\Phi: \mathop{\rm Spec}\nolimits(K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x})) \To {\mathcal S}$ be the associated
morphism of schemes.
A dense Zariski-open subscheme~${\mathcal U}$ of~${\mathcal S}$
such that~${\Phi^{-1}({\mathcal U}) \To {\mathcal U}}$ is
free (flat, faithfully flat), is said to be an~{\bf $I$-free
($I$-flat, $I{\bf-faithfully\ flat}$}) subscheme of~${\mathcal S}$ or
simply an~$I$-free ($I$-flat, $I$-faithfully flat) scheme.
\end{definition}
\begin{proposition}\label{flatness}
With the above assumptions and notation,
let $I({\mathbf a}, {\mathbf x})$ be the ideal
generated by~$F({\mathbf a},{\mathbf x})$ in $K[{\mathbf a},{\mathbf x}]$, let $\sigma$
be a term ordering on~$\mathbb T^n$, let~$G({\mathbf a},{\mathbf x})$ be the
reduced~$\sigma$-Gr\"obner basis of the ideal~$I({\mathbf a}, {\mathbf x})K({\mathbf a})[{\mathbf x}]$,
let~$d({\mathbf a})$ be the least common multiple of all the denominators
of the coefficients of the polynomials in $G({\mathbf a},{\mathbf x})$,
and let~$T =\mathbb T^n\setminus \mathop{\rm LT}\nolimits_\sigma(I({\mathbf a},{\mathbf x})K({\mathbf a})[{\mathbf x}])$.
\begin{itemize}
\item[(a)] The open subscheme ${\mathcal U}$ of $\mathbb A^m_K$
defined by $d({\mathbf a})\ne 0$ is~$I$-free.
\item[(b)] The multiplicity of each fiber over ${\mathcal U}$ coincides with
the cardinality of~$T$.
\end{itemize}
\end{proposition}
\begin{proof}
The assumption that $F({\mathbf a},{\mathbf x})$ is a generically zero-dimensional
family implies that
${\rm Spec}\big(K({\mathbf a})[{\mathbf x}]/I({\mathbf a},{\mathbf x})K({\mathbf a})[{\mathbf x}]\big)
\To {\rm Spec}(K({\mathbf a}))$
is finite, in other words that $K({\mathbf a})[{\mathbf x}]/I({\mathbf a},{\mathbf x})K({\mathbf a})[{\mathbf x}]$ is a
finite-dimensional~$K({\mathbf a})$-vector space.
A standard result in Gr\"obner basis theory
(see for instance~\cite{KR1}, Theorem 1.5.7)
shows that the residue classes of the elements
in~$T$ form a~$K({\mathbf a})$-basis
of this vector space. We denote by ${\mathcal U}$ the open subscheme
of~$\mathbb A^m_K$ defined by $d({\mathbf a}) \ne 0$.
For every point in ${\mathcal U}$, the given
reduced Gr\"obner basis evaluates to the reduced
Gr\"obner basis of the corresponding ideal.
Therefore the leading term ideal is the
same for all these fibers, and so is its complement~$T$.
If we denote by~$K[{\mathbf a}]_{d({\mathbf a})}$
the localization of~$K[{\mathbf a}]$ at the
element $d({\mathbf a})$ and by $I({\mathbf a},{\mathbf x})^e$ the
extension of the ideal~$I({\mathbf a},{\mathbf x})$ to the
ring~$K[{\mathbf a}]_{d({\mathbf a})}$,
then $K[{\mathbf a}]_{d({\mathbf a})}[{\mathbf x}]/I({\mathbf a},{\mathbf x})^e$ turns out
to be a free~$K[{\mathbf a}]_{d({\mathbf a})}$-module.
So claim (a) is proved.
Claim (b) follows immediately from (a).
\end{proof}
\begin{remark}\label{varie}
We collect here a few remarks about this proposition.
First of all, we observe that the term ordering $\sigma$ can be
chosen arbitrarily.
Secondly, for every $\alpha\in {\mathcal U}$ let~$L_\alpha$
be the leading term ideal
of the corresponding ideal $I_\alpha$.
If $\sigma$ is a degree-compatible term ordering,
then $L_\alpha$ is also
the leading term ideal of the
homogenization~$I_\alpha^{\rm hom}$ of $I_\alpha$
(see~\cite{KR2}, Proposition 5.6.3 and its proof).
\end{remark}
\goodbreak
\begin{example}\label{secondflat}
We consider the ideal ${I = (f_1, g)}$ of $K[x,y]$
where~$f_1 = x^3-y$,
$g = x(x-1)(x+1)(x-2)(x+2)(x-3)(x+3)(x+13)(x^2+x+1)$.
We check that $I = (f_1,f_2)$
where $f_2 = xy^3 + 504x^2y - 183xy^2 + 14y^3 - 504x^2
+ 650xy - 147y^2 - 468x + 133y$.
It is a zero-dimensional complete intersection and we
embed it into the family~$I({\mathbf a},{\mathbf x}) = (ax^3-y, g)$.
If we pick $\sigma = {\tt Lex}$ with $y>x$ and
perform the computation as suggested by the
proposition, we get the freeness of
the family for all $a$. Instead, we get the freeness of the
family~$I({\mathbf a},{\mathbf x}) = (ax^3-y, f_2)$ for~$a\ne0$ (see a further
discussion in Example~\ref{continued}).
\end{example}
\begin{example}\label{twopoints}
We let $P = {\mathbb C}[x]$, the univariate polynomial ring, and
embed the ideal~$I$ generated by the following polynomial
${x^2-3x+2}$ into the generically zero-dimensional
family~${F({\mathbf a}, x) = \{a_1x^2 - a_2x + a_3\}}$.
Such family is given by the canonical $K$-algebra
homomorphism
$$
\phi: {\mathbb C}[{\mathbf a}] \To {\mathbb C}[{\mathbf a}, x]/(a_1x^2 - a_2x + a_3)
$$
It is a zero-dimensional complete intersection~for
$\{{\bm \alpha} \in {\mathbb C}^3 \ | \ \alpha_1\ne 0\}\ \cup \
\{{\bm \alpha} \in {\mathbb C}^3 \ | \ \alpha_1=0, \ \alpha_2 \ne 0\}$.
\noindent It represents two distinct smooth points for
$\{{\bm \alpha} \in {\mathbb C}^3 \ | \ \alpha_1 \ne 0, \
\alpha_2^2-4\alpha_1\alpha_3 \ne 0\}$.
\noindent It represents
a smooth point for
$\{{\bm \alpha} \in {\mathbb C}^3 \ | \ \alpha_1=0,\ \alpha_2\ne 0\}$.
\noindent It is not a zero-dimensional complete intersection for
$\{{\bm \alpha} \in {\mathbb C}^3 \ | \ \alpha_1=0,\ \alpha_2=0\}$.
\end{example}
This kind of examples motivates the following definition.
\begin{definition}\label{ismooth}
Let $F({\mathbf a}, {\mathbf x})$ be a generically zero-dimensional family
containing a zero-dimensional complete intersection~${\mathbf f}({\mathbf x})$.
Let ${\mathcal S}=\mathbb A^m_K$ be the scheme of the
independent~${\mathbf a}$-parameters
and let $\Phi\!\!: \mathop{\rm Spec}\nolimits(K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x})) \To {\mathcal S}$ be the
associated morphism of schemes.
A dense Zariski-open subscheme~${\mathcal U}$ of~${\mathcal S}$
such that~$\Phi^{-1}({\mathcal U}) \To {\mathcal U}$ is {\bf smooth}, i.e.\ all the
fibers of~$\Phi^{-1}({\mathcal U}) \To {\mathcal U}$
are zero-dimensional smooth complete intersections,
is said to be an~{\bf $I$-smooth} subscheme of ${\mathcal S}$ or
simply an~$I$-smooth scheme.
\end{definition}
For instance, in Example~\ref{twopoints} we
have the equality~${\mathcal S} = {\mathbb A}_{\mathbb C}^3$,
and the open set
${\mathcal U}=\{{\bm \alpha} \in {\mathbb C}^3 \ | \ \alpha_1 \ne 0, \
\alpha_2^2-4\alpha_1\alpha_3 \ne 0\}$
is $I$-smooth.
\begin{remark}
We observe that a dense $I$-smooth scheme may
not exist. It suffices to consider the ideal $I = (x-1)^2$
embedded into the family $(x-a)^2$.
In any event, a practical way to find one, if there is one,
is via Jacobians, as we are going to show.
\end{remark}
\begin{theorem}\label{jacobian}
Let $F({\mathbf a}, {\mathbf x})$ be a generically zero-dimensional family
containing a zero-dimensional complete intersection~${\mathbf f}({\mathbf x})$.
We let ${\mathcal S}=\mathbb A^m_K$ be the scheme of the
independent~${\mathbf a}$-parameters, let $I({\mathbf a}, {\mathbf x})$ be the ideal
generated by~$F({\mathbf a},{\mathbf x})$ in~$K[{\mathbf a},{\mathbf x}]$,
let~$D({\mathbf a}, {\mathbf x})\!=\!\det(\mathop{\rm Jac}\nolimits_F({\mathbf a}, {\mathbf x}))$ be the determinant of
the Jacobian matrix of~$F({\mathbf a}, {\mathbf x})$ with respect to the
indeterminates~${\mathbf x}$, let
$J({\mathbf a},{\mathbf x})$ be the ideal sum~$I({\mathbf a},{\mathbf x}) +
(D({\mathbf a},{\mathbf x}))$ in~$K[{\mathbf a},{\mathbf x}]$, and
let $H$ be the ideal in~$K[{\mathbf a}]$ defined by the
equality~$H = J({\mathbf a},{\mathbf x}) \cap K[{\mathbf a}]$.
\begin{itemize}
\item[(a)] There exists an $I$-smooth subscheme
of ${\mathcal S}$ if and only if $H\ne (0)$.
\item[(b)] If $0\ne h({\mathbf a}) \in H$ then the open subscheme of ${\mathcal S}$
defined by the inequality $h({\mathbf a})\ne 0$ is $I$-smooth.
\end{itemize}
\end{theorem}
\begin{proof}
To prove one implication of claim (a), and simultaneously
claim (b), we assume that $H \ne (0)$ and let $0\ne h({\mathbf a}) \in H$.
We have an equality of type $h({\mathbf a}) =
a({\mathbf a},{\mathbf x}) f({\mathbf a},{\mathbf x}) + b({\mathbf a},{\mathbf x}) D({\mathbf a},{\mathbf x})$
with~$f({\mathbf a},{\mathbf x}) \in I({\mathbf a},{\mathbf x})$, and hence an equality
$1 = \frac{a({\mathbf a},{\mathbf x})}{h({\mathbf a})}f({\mathbf a},{\mathbf x}) +
\frac{b({\mathbf a},{\mathbf x})}{h({\mathbf a})}D({\mathbf a},{\mathbf x})$
in~$J({\mathbf a}, {\mathbf x})K({\mathbf a})[{\mathbf x}]$.
For every $\alpha\in{\mathcal S}$ such that~$h(\alpha) \ne 0$
the equality implies that
the corresponding complete intersection has no common zeros
with the determinant of its Jacobian matrix, hence it is smooth.
Conversely, assume that $H=(0)$. Then the canonical $K$-algebra
homomorphism~$K[{\mathbf a}] \To K[{\mathbf a},{\mathbf x}]/J({\mathbf a},{\mathbf x})$ is injective
and hence it induces a
morphism $\mathop{\rm Spec}\nolimits\big(K[{\mathbf a},{\mathbf x}]/J({\mathbf a},{\mathbf x})\big) \To \mathbb A^m_K$
of affine schemes which is dominant. It means that for a
generic point of~$\mathbb A^m_K$, the
scheme $\mathop{\rm Spec}\nolimits\big(K[{\mathbf a},{\mathbf x}]/J({\mathbf a},{\mathbf x})\big)$
is not empty, and hence the corresponding complete
intersection $\mathop{\rm Spec}\nolimits\big(K[{\mathbf a},{\mathbf x}]/I({\mathbf a},{\mathbf x})\big)$ is not smooth.
\end{proof}
The following example illustrates these results.
\begin{example}\label{firstflat}
Let us consider the polynomials
$f_1 = x_1^2+x_2^2-1$, ${f_2 = x_2^2+x_1}$ in~$\ {\mathbb C}[x_1,\!x_2]$
and the ideal ${I \!=\! (f_1, f_2)}$ generated by them.
It is a zero-dimensional complete intersection and we
embed it into $I({\mathbf a},{\mathbf x}) = ({x_1^2+a_1x_2^2-1},\ \ x_2^2+a_2x_1)$.
It is a free family over $\mathbb A^2_{\mathbb C}$, and the
multiplicity of each fiber is $4$.
We compute $D({\mathbf a},{\mathbf x})=\det(\mathop{\rm Jac}\nolimits_F({\mathbf a}, {\mathbf x}))$
and get $D({\mathbf a},{\mathbf x}) = -2a_1a_2x_2 + 4x_1x_2$.
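Explicitly, the Jacobian matrix with respect to the
indeterminates~${\mathbf x}$ is
$$
\mathop{\rm Jac}\nolimits_F({\mathbf a}, {\mathbf x}) =
\left(\begin{array}{cc}
2x_1 & 2a_1x_2\\
a_2 & 2x_2
\end{array}\right)
$$
whose determinant is the polynomial above.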
We let
$$J ({\mathbf a},{\mathbf x})= I({\mathbf a},{\mathbf x}) +(D({\mathbf a},{\mathbf x}) )=
(x_1^2 + a_1x_2^2 -1,\ x_2^2+a_2x_1, \ -2a_1a_2x_2 + 4x_1x_2)$$
A computation with \cocoa\ of $\ {\tt Elim}([x_1,x_2], J)$
yields~$(\tfrac{1}{2}a_1^2a_2^3+2a_2)$, and
hence $J({\mathbf a},{\mathbf x})\cap K[{\mathbf a}] = (\tfrac{1}{2}a_1^2a_2^3+2a_2)$.
According to the theorem, if $ {\mathcal U}$ is the complement
in~$\mathbb A^2_{\mathbb C}$ of the curve defined
by $\tfrac{1}{2}a_1^2a_2^3 + 2a_2=0$,
then~${\mathcal U}$ is an~\hbox{$I$-smooth}
subscheme of~$\mathbb A^2_{\mathbb C}$.
On the other hand, the curve has three components, $a_2=0$,
and ${a_1a_2\pm 2i=0}$. If $a_2=0$ then the
corresponding ideal is $(x_1^2-1,x_2^2)$ which is not smooth.
If $a_1a_2\pm 2i=0$, then the corresponding ideals
are
$(x_1^2 \mp \tfrac{2i}{a_2}x_2^2 - 1,\ x_2^2+a_2x_1)$
which can be written as
$((x_1\pm i)^2,\ x_2^2+a_2x_1)$
and hence are not smooth.
Let us now consider the zero-dimensional complete
intersection described by the ideal $I = (f_1, f_2)$
where $f_1 = x_1^2+x_2^2$, $f_2 = x_2^2+x_1$.
We embed it into the
family~$I({\mathbf a},{\mathbf x}) = (x_1^2-a_1x_2^2,\ x_2^2+a_2x_1)$.
As before, it is a free family
over $\mathbb A^2_{\mathbb C}$, and the multiplicity
of each fiber is $4$.
We compute $D({\mathbf a},{\mathbf x})=\det(\mathop{\rm Jac}\nolimits_F({\mathbf a}, {\mathbf x}))$
and get $D({\mathbf a},{\mathbf x}) = 2a_1a_2x_2 + 4x_1x_2$.
The computation of~$\ {\tt Elim}([x_1,x_2], J)$
yields~$(0)$, and hence there is no
subscheme of $\mathbb A^2_{\mathbb C}$ which is $I$-smooth.
Indeed, for $a_2\ne0$ we
have $I ({\mathbf a},{\mathbf x})= (x_1+\frac{1}{a_2}x_2^2,\
\frac{1}{a_2^2}x_2^4-a_1x_2^2)$ which is not smooth.
Incidentally, we observe that also for $a_2=0$ the
corresponding zero-dimensional complete
intersection is not smooth.
\end{example}
The following example illustrates other subtleties
related to the theorem.
\begin{example}{(\bf Example~\ref{secondflat}
continued)}\label{continued}\\
We consider the family $I({\mathbf a},{\mathbf x}) = (ax^3-y, f_2)$
for~$a\ne0$ of Example~\ref{secondflat},
compute $D({\mathbf a},{\mathbf x})=\det(\mathop{\rm Jac}\nolimits_F({\mathbf a}, {\mathbf x}))$
and get $D({\mathbf a},{\mathbf x}) = 9ax^3y^2 + 1512ax^4- 1098ax^3y + 126ax^2y^2
+ 1950ax^3 - 882ax^2y + y^3 + 399ax^2 + 1008xy
- 183y^2 - 1008x + 650y - 468$.
We let $J ({\mathbf a},{\mathbf x})= I({\mathbf a},{\mathbf x}) +(D({\mathbf a},{\mathbf x}) )$ and get
$J({\mathbf a},{\mathbf x})\cap K[{\mathbf a}] = (h({\mathbf a}))$ where {\tiny $$h({\mathbf a})=
a^9 - \frac{738170716516748}{7749152384519}a^8 +
\frac{218039463835944563500746}{91409877182005574647}a^7 -
\frac{166557011563009981474061668}{31353587873427912103921}a^6
$$
$$
-\frac{276169260891419750846552207}{31353587873427912103921}a^5
+ \frac{986809115998719019081678896}{31353587873427912103921}a^4
- \frac{63247607413926237871517952}{31353587873427912103921}a^3
$$
$$
-\frac{1316764479863922379654192128}{31353587873427912103921}a^2
+ \frac{317872550804296477704192}{13058553883143653521}a
- \frac{974975584016793600000}{266501099655992929}
$$}
Therefore, if $\ {\mathcal U}$ denotes the
complement in $\mathbb A^1_K$ of the zeros of $h(a)$, the theorem
says that it is a Zariski-open $I$-smooth subscheme. However,
we have already seen in Example~\ref{secondflat} that
$a=0$ (the origin is in~${\mathcal U}$) is not in the free locus: we observe
that the corresponding complete intersection is smooth, but it has
only two points.
The other subtlety is that the B\'ezout number of the family
is $3\times 4=12$, but if we substitute $y = ax^3$ into $f_2$
we get a univariate polynomial of degree~$10$.
The two {\it missing}\/ points are at infinity. No member of the
family represents twelve points. The final remark is that if we
move the parameter $a$ in the locus described
by~$a\!\cdot \!h(a) \ne 0$ we always get a smooth
complete intersection of~$10$ points.
If $K = {\mathbb C}$ the ten points have complex coordinates, some of
them are real, but there are no values of $a$ for which all
the $10$ points are real. The reason is that
if ${r_1 = \frac{-1+\sqrt{3}i}{2},\ r_2= \frac{-1-\sqrt{3}i}{2}}$ are the two
complex roots of $x^2+x+1 =0$, then two of the ten points are
$(r_1, r_1^3)$, $(r_2,r_2^3)$ which are not real points
(see Theorem~\ref{sturm} and Example~\ref{realroots}).
\end{example}
Combining Theorem~\ref{jacobian} and Proposition \ref{flatness}
we get a method to select a Zariski-open
subscheme of the parameter space over which all the fibers are
smooth complete intersections of constant multiplicity
(see~\cite{SW05} for similar results).
Before describing the algorithm, we need
a definition which captures this concept.
\begin{definition}\label{ioptimal}
With the above notation,
a dense Zariski-open subscheme~${\mathcal U}$ of~${\mathcal S}$
such that $\Phi^{-1}({\mathcal U}) \To {\mathcal U}$ is smooth and free is said to be
an~{\bf $I$-optimal} subscheme of~${\mathcal S}$.
\end{definition}
\goodbreak
\begin{corollary}\label{algo-optimal}
Let ${\mathcal S} = \mathbb A^m_K$ and
consider the following sequence of instructions.
\begin{itemize}
\item[(1)] Compute $D({\mathbf a},{\mathbf x}) = \det({\rm Jac}_F({\mathbf a},{\mathbf x}))$.
\item[(2)] Let $J({\mathbf a},{\mathbf x}) = I({\mathbf a},{\mathbf x}) + (D({\mathbf a},{\mathbf x}))$ and
compute $H =J({\mathbf a},{\mathbf x}) \cap K[{\mathbf a}]$.
\item[(3)] If $H = (0)$ return {\rm ``There is no $I$-smooth
subscheme of $\mathbb A^m_K\,$"} and stop.
\item[(4)] Choose $h({\mathbf a}) \in H\setminus \{0\}$ and
let ${\mathcal U}_1=\mathbb A^m_K\setminus\{{\bm \alpha} \in \mathbb A^m_K \, |\, h({\bm \alpha})=0\}$.
\item[(5)] Choose a term
ordering $\sigma$ on $\mathbb T^n$ and compute the
reduced $\sigma$-Gr\"obner basis $G({\mathbf a},{\mathbf x})$ of~$I({\mathbf a},{\mathbf x})K({\mathbf a})[{\mathbf x}]$
\item[(6)]
Let $T =\mathbb T^n\setminus \mathop{\rm LT}\nolimits_\sigma(I({\mathbf a},{\mathbf x})K({\mathbf a})[{\mathbf x}])$,
compute the cardinality of~$T$ and call it~$\mu$; then
compute the least common multiple of all the denominators of the
coefficients of the polynomials in~$G({\mathbf a},{\mathbf x})$, and call it $d({\mathbf a})$;
finally, let ${{\mathcal U}_2 = \mathbb A^m_K\setminus \{{\bm \alpha} \in
\mathbb A^m_K \, |\, d({\bm \alpha}) = 0\}}$
and let ${\mathcal U} = {\mathcal U}_1\cap {\mathcal U}_2$.
\item[(7)] Return $\ {\mathcal U}_1$, ${\mathcal U}_2$, ${\mathcal U}$, $T$, $\mu$.
\end{itemize}
\noindent This is an algorithm which returns ${\mathcal U}_1$ which is $I$-smooth,
${\mathcal U}_2$ which is $I$-free, ${\mathcal U}$ which is $I$-optimal, $T$ whose residue
classes provide a $K$-vector space basis of each fiber over ${\mathcal U}_2$,
and $\mu$ which is the multiplicity of all the fibers over ${\mathcal U}_2$.
\end{corollary}
\begin{proof}
It suffices to combine Theorem~\ref{jacobian} and
Proposition~\ref{flatness}.
\end{proof}
\goodbreak
\begin{example}
We consider the ideal ${I = (f_1, f_2)}$ of $K[x,y]$
where~$f_1 = xy-6$, $f_2 = x^2+y^2-13$.
It is a zero-dimensional complete intersection and we
embed it into the family~$I({\mathbf a},{\mathbf x}) = (a_1xy+a_2,\ a_3x^2+a_4y^2+a_5)$.
We compute the reduced {\tt DegRevLex}-Gr\"obner basis
of $I({\mathbf a},{\mathbf x})K({\mathbf a})[{\mathbf x}]$ and
get
$$
\{x^2+\tfrac{a_4}{a_3}y^2+\tfrac{a_5}{a_3},\ \ xy + \tfrac{a_2}{a_1},\ \
y^3- \tfrac{a_2a_3}{a_1a_4}x + \tfrac{a_5}{a_4}y\}
$$
According to the above results, a free locus is given by $a_1a_3a_4 \ne 0$.
Now we compute $D({\mathbf a},{\mathbf x})=\det(\mathop{\rm Jac}\nolimits_F({\mathbf a}, {\mathbf x}))$
and get $D({\mathbf a},{\mathbf x}) = -2a_1a_3x^2+2a_1a_4y^2$.
We let $J ({\mathbf a},{\mathbf x})= I({\mathbf a},{\mathbf x}) +(D({\mathbf a},{\mathbf x}) )$ and
compute $J({\mathbf a},{\mathbf x})\cap K[{\mathbf a}]$.
We get the principal ideal generated
by~$a_2^2a_3a_4 -\tfrac{1}{4}a_1^2a_5^2$.
In conclusion, an $I$-optimal subscheme
is~${\mathcal U}=\mathbb A^5_K \setminus F$ where $F$ is the closed
subscheme defined by the
equation~$a_1a_3a_4(a_2^2a_3a_4
-\tfrac{1}{4}a_1^2a_5^2)=0$, and $\mu = 4$.
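As a quick check, the original ideal~$I$ corresponds to the parameter
point ${\bm \alpha}_I=(1,-6,1,1,-13)$, for which $a_1a_3a_4 = 1 \ne 0$
and $a_2^2a_3a_4 - \tfrac{1}{4}a_1^2a_5^2 = 36 - \tfrac{169}{4} \ne 0$,
so that ${\bm \alpha}_I \in {\mathcal U}$, as expected.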
\end{example}
\goodbreak
\begin{definition}\label{realpoints}
We say that {\bf a point is complex} if its coordinates
are complex numbers, and we say that {\bf a point is real}
if its coordinates are real numbers.
\end{definition}
The following example illustrates the fact that even
if we start with a set of real points, a zero-dimensional
complete intersection which contains them may also
contain complex non-real points.
\begin{example}
Let ${\mathbb X}$ be the set of the $10$ real points
$\{(-1,-1),\, \! (2, 8),\,\! (-2,\!-8),\\
(3,27),\, (-3,-27),\, (4,64),\,
(5,125),\, (-5,-125),\, (6,216),\, (-6,-216)\}$.
A zero-dim\-ensional complete intersection
containing~${\mathbb X}$ is $\{f_1, f_2\}$
where $f_1 = y-x^3$ and
$f_2 = x^2y^2 - 1/4095y^4 + 1729/15x^2y - 74/15xy^2 + 1/15y^3 -
8832/5x^2 + 5852/15xy - 10754/315y^2
+ 2160x - 4632/5y + 250560/91$.
Let $I$ denote the vanishing ideal of the $10$ points
and let $J$ denote the ideal generated by $\{f_1, \, f_2\}$.
The colon ideal~$J:I$ defines the residual
intersection. Since $J$ is the intersection of a cubic and a
quartic curve, the residual intersection is a zero-dimensional
scheme of multiplicity~2. Indeed, a computation
(performed with \cocoa) shows that
$J:I$ is generated by $(x+\tfrac{1}{78}y-\tfrac{87}{26},\, y^2-756y+658503)$.
Since $756^2 - 4\cdot 658503 = -2062476 <0$, the two
extra points on the zero-dimensional complete
intersection are complex, non-real points.
\end{example}
\begin{theorem}\label{sturm}
Let ${\mathbf f}({\mathbf x})$ be a zero-dimensional complete intersection
in~${\mathbb R}[{\mathbf x}]$ and let ${\mathbf f}({\mathbf a},{\mathbf x}) \in {\mathbb R}[{\mathbf a},{\mathbf x}]$ be a generically zero-dimensional family
containing ${\mathbf f}({\mathbf x})$. Let~$I$ be the ideal in~${\mathbb R}[{\mathbf x}]$ generated by ${\mathbf f}({\mathbf x})$,
assume that there exists an $I$-optimal subscheme~${\mathcal U}$
of~$\mathbb A^m_{\mathbb R}$, and let~${\bm \alpha}_I\in {\mathcal U}$ be the point in the parameter
space which corresponds to~$I$. If $\mu_{{\mathbb R},I}$ is the number of distinct real
points in the fiber over~${\bm \alpha}_I$ (i.e.\ zeroes of~$I$),
then there exists an open semi-algebraic subscheme ${\mathcal V}$ of~${\mathcal U}$ such that
for every ${\bm \alpha}\in {\mathcal V}$ the number of real points in the fiber over~${\bm \alpha}$
is $\mu_{{\mathbb R},I}$.
\end{theorem}
\begin{proof}
We consider the ideal $ \mathcal{I} = I({\mathbf a},{\mathbf x}){\mathbb R}({\mathbf a})[{\mathbf x}]$. It is
zero-dimensional and the field ${\mathbb R}({\mathbf a})$ is infinite.
Since a linear change of coordinates does not change the problem,
we may assume that~$\mathcal{I}$ is in $x_n$-normal position
(see~\cite{KR1}, Section 3.7). Moreover, we have already
observed (see Remark~\ref{varie}) that in
Proposition~\ref{flatness} the choice of~$\sigma$ is arbitrary.
We choose $\sigma ={\tt Lex}$ and hence the
reduced ${\tt Lex}$-Gr\"obner basis
of $\mathcal{I}$ has the shape prescribed by the
Shape Lemma (see~\cite{KR1} Theorem 3.7.25).
Therefore there exists a univariate polynomial $h_{\mathbf a} \in {\mathbb R}({\mathbf a})[x_n]$
whose degree is the multiplicity of both the generic fiber and
the fiber over~${\bm \alpha}_I$, which is the number of
complex zeros of~$I$.
Due to the shape of the reduced Gr\"obner basis, a point
is real if and only if its $x_n$-coordinate is real. Therefore
it suffices to prove the following statement:
given a univariate square-free polynomial~$h_{\mathbf a} \in {\mathbb R}({\mathbf a})[x_n]$
such that~$h_{{\bm \alpha}_I}$ has exactly~$\mu_{{\mathbb R},I}$ real roots,
there exists an open
semi-algebraic subset of~$\mathbb A^m_{\mathbb R}$ such that
for every point ${\bm \alpha}$ in it, the polynomial $h_{\bm \alpha}$ has
exactly~$\mu_{{\mathbb R},I}$ real roots. This statement follows
from~\cite{BPR}, Theorem 5.12, where it is shown that for every root
there exists an open semi-algebraic set in~$\mathbb A^m_{\mathbb R}$
which isolates the root. Since complex non-real roots have to occur
in conjugate pairs, this implies that real roots stay real.
\end{proof}
Let us see some examples.
\begin{example}
We consider the ideal $I = (xy-2y^2+2y,\ x^2-y^2-2x)$ in~${\mathbb R}[x,y]$,
and we embed it into the family~$I({\mathbf a},{\mathbf x}) = (xy-ay^2+ay,\ x^2-y^2-2x)$.
We compute the reduced {\tt Lex}-Gr\"obner
basis of~$I({\mathbf a},{\mathbf x}){\mathbb R}({\mathbf a})[{\mathbf x}]$ and get
$$\{x^2 - 2x - y^2,\ xy - ay^2 + ay, \
y^3 - \tfrac{2a}{a-1}y^2 + \tfrac{a^2+2a}{a^2-1}y\}
$$
Applying the algorithm illustrated in Corollary~\ref{algo-optimal}
we get an $I$-smooth subscheme of~$\mathbb A^1_{\mathbb R}$
for $a(a+2) \ne 0$, and an $I$-free subscheme for $(a-1)(a+1) \ne 0$.
For $a$ different from $0, -2,\ 1, -1$ we have an $I$-optimal subscheme
and the multiplicity is~$4$.
\goodbreak
Our ideal $I$ is obtained for $a = 2$,
and hence it corresponds to a point of the optimal subscheme. It has multiplicity $4$
and the four zeros are real.
The computed {\tt Lex}-Gr\"obner basis does not have the
shape prescribed by the Shape Lemma, so we
perform a linear change of coordinates by setting $x = x+y,\ y=x-y$.
We compute the reduced {\tt Lex}-Gr\"obner basis and get
$$
\{x + 4 \tfrac{a+1}{a-1}y^3 - 2 \tfrac{a+1}{a-1}y^2 - \tfrac{3a+1}{a-1}y, \ \
y^4 - y^3 - \tfrac{1}{2}\tfrac{a}{a+1}y^2 + \tfrac{1}{2}\tfrac{a}{a+1}y\}
$$
It has the required shape, so we can use the
polynomial
$$
h_{{\mathbf a}} = y^4 - y^3 - \tfrac{1}{2}\tfrac{a}{a+1}y^2 + \tfrac{1}{2}\tfrac{a}{a+1}y
= y(y-1)(y^2- \tfrac{1}{2}\tfrac{a}{a+1})$$
We get the following result.
\begin{itemize}
\item For $a < -1,\ a\ne -2$ there are $4$ real points.
\item For $-1<a<0$ there are $2$ real points.
\item For $a>0, \ a\ne 1$ there are $4$ real points.
\end{itemize}
To complete our analysis, let us see what happens
at the {\it bad}\/ points $0, -2,\ 1, -1$.
At $0$ the primary decomposition of the ideal $I_0$ is
$(x-2,y)\cap(y^2 + 2x, xy, x^2)$, hence the
fiber consists of the simple point $(2,0)$ and a triple point at $(0,0)$.
At $-2$ we see that $(x+\tfrac{2}{3}, \ y-\tfrac{4}{3})\cap (x,y) \cap(x-2,y^2)$
is the primary decomposition of the ideal $I_{-2}$, and hence the
fiber consists of the simple point $(-\tfrac{2}{3}, \tfrac{4}{3})$, the simple
point $(0,0)$, and a double point at $(2,0)$.
At $-1$ the primary decomposition of the ideal $I_{-1}$
is $(x,y) \cap(x-2,y)$, hence the fiber consists of the two simple
real points $(0,0)$ and $(2,0)$.
At $1$ we see that $(x,y) \cap(x-2,y)\cap(x+\tfrac{1}{4},y-\tfrac{3}{4})$
is the primary decomposition of the ideal $I_{1}$, hence the fiber
consists of the three simple real
points $(0,0)$, $(2,0)$, $(-\tfrac{1}{4}, \tfrac{3}{4})$.
\end{example}
\begin{example}\label{realroots}
We consider the ideal $I = (xy+1,\ x^2+y^2-5)$ in~${\mathbb R}[x,y]$,
and we embed it into the family~$I({\mathbf a},x,y) = (xy+a_1x+1,\ x^2+y^2+a_2)$.
We compute the reduced {\tt Lex}-Gr\"obner
basis of~$I({\mathbf a},{\mathbf x})K({\mathbf a})[x,y]$ and get $G({\mathbf a},x,y)=\{g_1,g_2\}$ where
\begin{eqnarray*}
g_1 &=& x - y^3 - a_1y^2 - a_2y - a_1a_2,\\
g_2 &=& y^4 + 2 a_1y^3 + (a_1^2 + a_2)y^2
+2 a_1 a_2y + (a_1^2a_2+1)
\end{eqnarray*}
which has the shape prescribed by the Shape Lemma
(see~\cite{KR1} Theorem 3.7.25).
There is no condition for the free locus, and
$D({\mathbf a},x,y)=\det(\mathop{\rm Jac}\nolimits_F({\mathbf a}, x,y)) = -2x^2 + 2y^2 + 2a_1 y$.
We let $J ({\mathbf a},x,y)= I({\mathbf a},x,y) +(D({\mathbf a},x,y) )$ and compute $J({\mathbf a},x,y)\cap K[{\mathbf a}]$.
We get the principal ideal generated
by the following polynomial ${h({\mathbf a})=a_1^6a_2 + 3a_1^4a_2^2 + a_1^4+ 3a_1^2a_2^3+ 20a_1^2a_2 + a_2^4
- 8a_2^2 + 16}$.
An~$I$-optimal subscheme
is~$\ {\mathcal U}=\mathbb A^2_{\mathbb R} \setminus F$ where $F$ is the closed
subscheme defined by the
equation~$h({\mathbf a})=0$, and we observe that $\mu = 4$.
At this point we know that for $h({\mathbf a}) \ne 0$
each fiber is smooth and has multiplicity~$4$, hence it consists of $4$
distinct complex points. What about real points?
\begin{center}
\includegraphics[width = .8 \textwidth]{sturm.jpg}
\end{center}
The real curve defined by $h({\mathbf a})=0$ is shown in the above picture.
It is the union of two branches and the isolated point $(0,2)$.
The upper region R1 (with the exception of the point $(0,2)$)
corresponds to the ideals in the family whose zeros are four
complex non-real points. The regions R2 and R3
correspond to the ideals whose zeros are two complex
non-real points and two real points.
The region R4 corresponds to the ideals whose zeros
are four real points.
To describe the four regions algebraically, we use
the Sturm-Habicht sequence (see~\cite{GLRR}) of~$g_2 \in {\mathbb R}({\mathbf a})[y]$.
The leading terms are
$y^4,\ 4y^3,\ 4r({\mathbf a})y^2,\ -8\ell({\mathbf a})y,\ 16h({\mathbf a})$
where
$r({\mathbf a}) = a_1^2-2a_2,\
\ell({\mathbf a}) = a_1^4a_2+2a_1^2a_2^2+2a_1^2+a_2^3-4a_2$.
To get the total number of real roots we count the
sign changes in the sequence at $-\infty$ and $+\infty$;
in particular, we observe that in the parameter space the ideal~$I$
corresponds to the point $(0,-5)$ which belongs to the region R4.
We get
$$
{\rm R4} = \{{\bm \alpha} \in {\mathbb R}^2\ | \ r({\bm \alpha})>0, \ \ell({\bm \alpha}) <0,\ h({\bm \alpha})>0\}
$$
which is semi-algebraic open, not Zariski-open.
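Indeed, at the point $(0,-5)$ we have $r(0,-5) = 10 > 0$,
$\ell(0,-5) = -105 < 0$, and $h(0,-5) = 441 > 0$.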
\end{example}
\goodbreak
\section{Condition Numbers}
\label{Condition Numbers}
In this section we introduce a notion of {\it condition number}\/ for
zero-dimensional smooth complete intersections in~${\mathbb R}[{\mathbf x}]$;
the aim is to give a measure of the sensitivity of
its real roots with respect to small perturbations of the input data,
that is, small changes of the coefficients of the involved polynomials.
The section starts by recalling some well-known facts
about numerical linear algebra.
We let $m,n$ be positive integers and let ${\rm Mat}_{m \times n}({\mathbb R})$
be the set of~$m \times n$ matrices with entries in ${\mathbb R}$;
if $m=n$ we simply write ${\rm Mat}_{n}({\mathbb R})$.
\begin{definition}
Let $M=(m_{ij})$ be a matrix in ${\rm Mat}_{m \times n}({\mathbb R})$,
let $v=(v_1, \dots, v_n)$ be
a vector in ${\mathbb R}^n$, and let~$\| \cdot \|$ be a vector norm.
\begin{itemize}
\item[(a)] Let $r \ge 1$ be a real number; the {\bf $r$-norm}
on the vector space~${\mathbb R}^n$ is defined by the formula
$\|v\|_r = \left( \sum_{i=1}^n |v_i|^r \right)^{\frac{1}{r}} $
for every $v \in {\mathbb R}^n$.
\item[(b)] The {\bf infinity norm} on ${\mathbb R}^n$ is defined by the formula
$\|v\|_\infty = {\rm max}_i |v_i|$.
\item[(c)] The {\bf spectral radius} $\rho(M)$ of the matrix $M$
is defined by the formula
$\rho(M) = \max_i |\lambda_i |$, where the $\lambda_i$ are
the {\it complex}\/ eigenvalues of~$M$.
\item[(d)] The real function defined on $\rm{Mat}_{m \times n}({\mathbb R}) $ by
$M \mapsto \max_{\|v\|=1} \| Mv\|$
is a matrix norm called the {\bf matrix norm induced} by
$\| \cdot \|$. A matrix norm induced by a vector norm is
called an {\bf induced matrix norm}.
\item[(e)] The matrix norm induced by $\|\cdot\|_1$
is given by the following formula
$\|M\|_1 = {\rm max}_j(\sum_i|m_{ij}|)$.
The matrix norm induced by $\|\cdot\|_\infty$ is given
by the formula
$\|M\|_\infty = {\rm max}_i(\sum_j |m_{ij}|)$.
Finally, the matrix norm induced by $\|\cdot\|_2$
is given by the formula
$\|M\|_2 = {\rm max}_i(\sigma_i)$ where the $\sigma_i$
are the singular values of~$M$.
\end{itemize}
\end{definition}
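For instance, for
$M = \left(\begin{smallmatrix} 1 & 2\\ 3 & 4 \end{smallmatrix}\right)$
we have $\|M\|_1 = \max(1+3,\, 2+4) = 6$
and $\|M\|_\infty = \max(1+2,\, 3+4) = 7$.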
If no confusion arises, from now on we will use the
symbol $\| \cdot \|$ to denote both a vector norm and a matrix norm.
We recall some facts about matrix norms
(see for instance~\cite{BCM},~\cite{H96}).
\begin{proposition}\label{lemmaInverseNorm}
Let $M$ be a matrix in ${\rm Mat}_n({\mathbb R})$, let $I$ be the identity
matrix of size~$n$, and let~$\|\cdot \|$ be an induced matrix
norm on ${\rm Mat}_n({\mathbb R})$.
If the matrix $I+M$ is invertible, then $(1 - \|M\|) \: \|(I+M)^{-1}\| \leq 1$.
\end{proposition}
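For the reader's convenience we recall the short argument: writing
$I = (I+M)(I+M)^{-1} = (I+M)^{-1} + M(I+M)^{-1}$ and using $\|I\|=1$
together with the triangle inequality, we
get $1 \ge \|(I+M)^{-1}\| - \|M\| \, \|(I+M)^{-1}\|$, which is the
claimed inequality.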
\begin{proposition} \label{classicalIneq2}
Let $M \in {\rm Mat}_{m \times n}({\mathbb R})$ and denote by
$M_i$ the $i$-th row of~$M$.
Let $r_1 \ge 1, r_2 \ge 1$ be real numbers such that
$\frac{1}{r_1}+\frac{1}{r_2}=1$; then
\begin{eqnarray*}
\max_i \|M_i\|_{r_2} \le \|M\|_{r_1} \le m^{1/r_1} \max_i \|M_i\|_{r_2}
\end{eqnarray*}
In particular, for $r_1=r_2=2$
\begin{eqnarray*}
\max_i \|M_i\|_2 \le \|M\|_2 \le \sqrt{m} \max_i \|M_i\|_2
\end{eqnarray*}
\end{proposition}
This introductory part ends by recalling some facts
about the polynomial ring~$K[{\mathbf x}]$.
In particular, given $\eta=(\eta_1,\ldots,\eta_n) \in {\mathbb N}^n$
we denote by~$|\eta|$ the number $\eta_1+\ldots+\eta_n$,
by $\eta!$ the number~$\eta_1! \ldots \eta_n!$,
and by ${\mathbf x}^\eta$ the power
product $x_1^{\eta_1} \ldots x_n^{\eta_n}$.
\begin{definition}
Let $p$ be a point of~$K^n$; the $K$-linear map
on~$K[{\mathbf x}]$ defined by $f \mapsto f(p)$ is called
the {\bf evaluation map} associated to~$p$ and
denoted by ${\rm ev}_{p}$.
\end{definition}
\begin{definition}\label{defTaylor}
Let $d$ be a nonnegative integer, let $r \ge 1$ be a real number,
let~$p$ be a point of~${\mathbb R}^n$ and let $g({\mathbf x})$
be a polynomial in~${\mathbb R}[{\mathbf x}]$.
\begin{itemize}
\item[(a)] The formal Taylor expansion of $g({\mathbf x})$ at~$p$
is given by the following expression:
$
g({\mathbf x}) = \sum_{|\eta| \ge 0} \frac{1}{\eta!}
\frac{\partial^\eta g}{\partial {\mathbf x}^\eta}(p) ({\mathbf x}-p)^\eta
$.
\item[(b)] The polynomial $\sum_{|\eta| \ge d} \frac{1}{\eta!}
\frac {\partial^\eta g} {\partial {\mathbf x}^\eta}(p) ({\mathbf x}-p)^\eta$
is denoted by $g^{\ge d}({\mathbf x},p)$.
\item[(c)] The {\bf $r$-norm of $g({\mathbf x})$ at $p$} is
defined as the $r$-norm of the
vector $\frac{\partial g}{\partial {\mathbf x}}(p)$.
If $\|\frac{\partial g}{\partial {\mathbf x}}(p)\|_r=1$ then~$g({\mathbf x})$
is called {\bf unitary at~$p$}.
\end{itemize}
\end{definition}
We use the following formulation of Taylor's theorem.
\begin{proposition}\label{propTaylor}
Let $p$ be a point of~${\mathbb R}^n$ and let $g({\mathbf x})$ be a
polynomial in~${\mathbb R}[{\mathbf x}]$.
For every point $q \in {\mathbb R}^n$ we have
$$
g(q)= g(p) + \mathop{\rm Jac}\nolimits_g(p)(q-p) + \frac{1}{2} (q-p)^t H_g(\xi) (q-p)
$$
where $\xi$ is a point of the line connecting~$p$ to~$q$
and $H_g(\xi)$ is the Hessian matrix of~$g$ at~$\xi$.
\end{proposition}
Given ${\mathbf f}({\mathbf x})=\{f_1({\mathbf x}),\ldots,f_n({\mathbf x})\}$, a
zero-dimensional smooth complete intersection in~${\mathbb R}[{\mathbf x}]$, we
introduce a notion of admissible
perturbation of~${\mathbf f}({\mathbf x})$. Roughly speaking, the polynomial
set $\bm \varepsilon({\mathbf x})=\{\varepsilon_1({\mathbf x}),\ldots,
\varepsilon_n({\mathbf x})\} \subset {\mathbb R}[{\mathbf x}]$ is considered to be an
admissible perturbation of~${\mathbf f}({\mathbf x})$ if the real solutions
of $({\mathbf f} + \bm \varepsilon)({\mathbf x})=0$ are nonsingular and derive
from perturbations of the real solutions of ${\mathbf f}({\mathbf x})=0$.
Using the results of Section~\ref{Families of Complete Intersections}
we formalize this concept as follows.
\begin{definition}\label{admissible}
Let ${\mathbf f}({\mathbf x})=\{f_1({\mathbf x}),\ldots,f_n({\mathbf x})\}$ be a zero-dimensional
smooth complete intersection
in~${\mathbb R}[{\mathbf x}]$, let~$\mu_{{\mathbb R},I}$ be the
number of real solutions of ${\mathbf f}({\mathbf x})=0$, and
let $\bm \varepsilon({\mathbf x})=
\{\varepsilon_1({\mathbf x}),\ldots,\varepsilon_n({\mathbf x})\}$ be a set of
polynomials in ${\mathbb R}[{\mathbf x}]$.
Suppose that the assumptions of Theorem~\ref{sturm}
are satisfied, let $\mathcal V \subset \mathbb A_{\mathbb R}^m$
be an open semi-algebraic subset of~${\mathcal U}$
such that ${\bm \alpha}_I \in {\mathcal V}$ and, for every ${\bm \alpha} \in {\mathcal V}$, the
number of real roots of ${{\mathbf f}({\bm \alpha},{\mathbf x})=0}$ is equal to~$\mu_{{\mathbb R},I}$.
If there exists ${\bm \alpha} \in {\mathcal V}$ such
that $({\mathbf f}+\bm \varepsilon)({\mathbf x})={\mathbf f}({\bm \alpha}, {\mathbf x})$,
then $\bm \varepsilon({\mathbf x})$ is called
an {\bf admissible perturbation} of~${\mathbf f}({\mathbf x})$.
\end{definition}
Henceforth we
let $ \bm \varepsilon({\mathbf x})=\{\varepsilon_1({\mathbf x}),\ldots,\varepsilon_n({\mathbf x})\}$
be an admissible perturbation of~${\mathbf f}({\mathbf x})$, and let
$\mathcal Z_{\mathbb R}({\mathbf f})=\{p_1,\ldots, p_{\mu_{{\mathbb R},I}}\}$,
$\mathcal Z_{\mathbb R}({\mathbf f} + \bm \varepsilon)=\{r_1,\ldots,r_{\mu_{{\mathbb R},I}}\}$ be
the sets of real solutions of ${\mathbf f}({\mathbf x})=0$
and $({\mathbf f} + \bm \varepsilon)({\mathbf x})=0$ respectively.
We consider each~$r_i$ as a perturbation
of the root~$p_i$, hence we
write $r_i=p_i + \Delta p_i$ for $i=1, \dots,\mu_{{\mathbb R},I}$.
\medskip
Now we concentrate on a single element $p$ of $\mathcal Z_{\mathbb R}({\mathbf f})$.
\begin{corollary}\label{taylorsystem}
Let $p$ be one of the real solutions of ${\mathbf f} = 0$, and
$p + \Delta p$ the
corresponding real solution of ${\mathbf f}+ \bm \varepsilon = 0$. Then we have
\begin{eqnarray}\label{firstOrderApprox}
0=({\mathbf f}+ \bm \varepsilon)(p+ \Delta p) =
\bm \varepsilon(p) + \mathop{\rm Jac}\nolimits_{{\mathbf f}+ \bm \varepsilon}(p)\Delta p +
\left( v_1(\xi_1), \ldots, v_n(\xi_n) \right)^t
\end{eqnarray}
where $\xi_1,\ldots,\xi_n$ are points on the line which
connects the points $p$ and $p + \Delta p$, and
$v_j(\xi_j) = \frac{1}{2} \Delta p^t H_{f_j+\varepsilon_j}(\xi_j)\Delta p$
for each~$j=1,\dots,n$.
\end{corollary}
\begin{proof}
It suffices to put $q=p+ \Delta p$, apply the formula
of Proposition~\ref{propTaylor} to the
polynomial system $({\mathbf f} + \bm \varepsilon)({\mathbf x})$, and use the fact
that ${\mathbf f}(p) = 0$.
\end{proof}
\begin{example}\label{singularJacobian}
We consider the zero-dimensional smooth complete
intersection ${\mathbf f}=\{f_1, f_2\}$ where $f_1=xy-6$, $f_2=x^2+y^2-13$
and observe that $\mathcal Z_{\mathbb R}({\mathbf f})=\{(-3,-2),(3,2), (-2,-3), (2,3)\}$.
The set ${\mathbf f}({\mathbf x})$ is embedded into the
following family $F({\mathbf a},{\mathbf x})=\{xy+a_1,x^2+a_2y^2+a_3\}$.
The semi-algebraic open set
$$
{\mathcal V}=\{{\bm \alpha} \in {\mathbb R}^3 \:|\: \alpha_3^2-4\alpha_1^2 \alpha_2>0,
\alpha_2>0, \alpha_3<0\}
$$
is a subset of the $I$-optimal
scheme ${\mathcal U} = \{{\bm \alpha}\in \mathbb A_{\mathbb R}^3 \;|\;
\alpha_2(\alpha_3^2-4\alpha_1^2 \alpha_2) \ne 0\}$.
Moreover, it contains the
point ${{\bm \alpha}_I=(-6,\, 1, -13)}$, and the fiber over
each ${\bm \alpha} \in {\mathcal V}$ consists of~$4$ real points.
The set $\bm \varepsilon({\mathbf x})=\{\delta_1,
\delta_2y^2+\delta_3\}$, with $\delta_i \in {\mathbb R}$, is an
admissible perturbation of~${\mathbf f}({\mathbf x})$ if and only if the
conditions $(\delta_3-13)^2-4(\delta_1-6)^2 (\delta_2+1)>0$,
$\delta_2>-1$, and $\delta_3<13$
are satisfied.
Since the values $\delta_1=2$, $\delta_2= \tfrac{5}{4}$,
and $\delta_3=0$ satisfy the previous conditions, the polynomial
set $\bm \varepsilon({\mathbf x})=\{2,\ \tfrac{5}{4}{y^2}^{\mathstrut}\}$ is an
admissible perturbation of~${\mathbf f}({\mathbf x})$.
The real roots of $({\mathbf f} + \bm \varepsilon)({\mathbf x})=0$ are
\begin{eqnarray*}
\mathcal Z_{\mathbb R}({\mathbf f} + \bm \varepsilon)=\left\{ \left(-3,-\tfrac{4}{3}\right),
\left(3,\tfrac{4}{3}\right), (-2,-2), (2,2) \right \}
\end{eqnarray*}
For each $r_i \in \mathcal Z_{\mathbb R}({\mathbf f}+ \bm \varepsilon)$
the matrix $\mathop{\rm Jac}\nolimits_{{\mathbf f}+ \bm \varepsilon}(r_i)$ is invertible,
as predicted by the theory.
On the contrary, by evaluating $\mathop{\rm Jac}\nolimits_{{\mathbf f}+ \bm \varepsilon}({\mathbf x})$
at the first and the second point of $\mathcal Z_{\mathbb R}({\mathbf f})$ we
obtain a singular matrix. This is an obstruction to the
development of the theory which suggests further restrictions
(see the following discussion).
\end{example}
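These claims are easy to check numerically. The following sketch (Python with NumPy; our own illustration, not part of the mathematical development) evaluates the determinant of $\mathop{\rm Jac}\nolimits_{{\mathbf f}+ \bm \varepsilon}$ at the four points of $\mathcal Z_{\mathbb R}({\mathbf f})$:
\begin{verbatim}
import numpy as np

def jac(x, y, d2=1.25):
    # Jacobian of f + eps = {xy - 4, x^2 + (1 + d2)*y^2 - 13}
    return np.array([[y, x],
                     [2 * x, 2 * (1 + d2) * y]])

for pt in [(-3, -2), (3, 2), (-2, -3), (2, 3)]:
    print(pt, np.linalg.det(jac(*pt)))
# the determinant vanishes at (-3, -2) and (3, 2),
# and equals 32.5 at (-2, -3) and (2, 3)
\end{verbatim}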
Our idea is to evaluate~$\Delta p$ using
equation~(\ref{firstOrderApprox}) of Corollary~\ref{taylorsystem}.
However, while the assumption that~$\bm \varepsilon({\mathbf x})$ is an
admissible perturbation of~${\mathbf f}({\mathbf x})$ combined with the
Jacobian criterion guarantees the non singularity of the
matrix~$\mathop{\rm Jac}\nolimits_{{\mathbf f} + \bm \varepsilon}(p + \Delta p)$,
they do not imply the non singularity of the
matrix $\mathop{\rm Jac}\nolimits_{{\mathbf f}+ \bm \varepsilon}(p)$, as we have
just seen in~Example~\ref{singularJacobian}.
The next step is to find a criterion which
guarantees the non singularity
of~$\mathop{\rm Jac}\nolimits_{{\mathbf f} + \bm \varepsilon}(p)$.
\begin{lemma}\label{lemmaTau}
If $\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\|< 1$
then $\mathop{\rm Jac}\nolimits_{{\mathbf f}+ \bm \varepsilon}(p)$ is invertible.
\end{lemma}
\begin{proof}
By assumption $p$ is a nonsingular root of ${\mathbf f}({\mathbf x})=0$,
hence $\mathop{\rm Jac}\nolimits_{\mathbf f}(p)$ is invertible and
so $\mathop{\rm Jac}\nolimits_{{\mathbf f} + \bm \varepsilon}(p) $
can be rewritten as
$
\mathop{\rm Jac}\nolimits_{{\mathbf f} + \bm \varepsilon}(p) =
\mathop{\rm Jac}\nolimits_{\mathbf f}(p) + \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p) = \mathop{\rm Jac}\nolimits_{\mathbf f}(p)
\left( I + \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p) \right)
$.
Consequently, it suffices to show
that the matrix ${I + \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)}$
is invertible.
We achieve this by proving that the spectral
radius~$\rho(\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p))$
is smaller than 1. We have
$
\rho(\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p))
\le \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\| <1
$,
and the proof is now complete.
\end{proof}
Note that the
requirement~$\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\|< 1$
gives a restriction on the admissible choices
of~$ \bm \varepsilon({\mathbf x})$, as we see in the following example.
\begin{example}{\bf (Example~\ref{singularJacobian}
continued)}\label{exSingJacContinued}\\
Let $ \bm \varepsilon({\mathbf x})=\{\delta_1,\delta_2y^2+\delta_3\}$,
with $\delta_i \in {\mathbb R}$, be an admissible perturbation of the
zero-dimensional complete
intersection~${\mathbf f}({\mathbf x})$ of Example~\ref{singularJacobian}.
We consider the real solution $p_4=(2,3)$ of ${\mathbf f}=0$ and
compute $\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p_4)^{-1} \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p_4)\|^2_2
= \frac{117}{25}\delta_2^2$. From Lemma~\ref{lemmaTau}
the condition $|\delta_2| < \frac{5}{39}\sqrt{13}$ is
sufficient to have $\mathop{\rm Jac}\nolimits_{{\mathbf f}+\bm \varepsilon}(p_4)$ invertible.
\end{example}
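This computation is easy to reproduce numerically; in the following sketch (Python with NumPy; the value of $\delta_2$ is an arbitrary choice of ours, and $\delta_1$, $\delta_3$ do not enter the Jacobians) both sides of the identity above agree:
\begin{verbatim}
import numpy as np

p4 = np.array([2.0, 3.0])
d2 = 0.1                                  # delta_2 (arbitrary)
Jf   = np.array([[p4[1], p4[0]],
                 [2 * p4[0], 2 * p4[1]]])  # Jacobian of f at p4
Jeps = np.array([[0.0, 0.0],
                 [0.0, 2 * d2 * p4[1]]])   # Jacobian of eps at p4
lhs = np.linalg.norm(np.linalg.solve(Jf, Jeps), 2)
print(lhs, np.sqrt(117) / 5 * abs(d2))     # both ~0.21633
\end{verbatim}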
From now on we assume that the hypothesis of
Lemma~\ref{lemmaTau} is satisfied.
In order to deduce an upper bound
for~$\|\Delta p\|$ we consider an approximation of it.
\begin{definition}
If $\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\|$ is different from $1$,
we denote the number ${1/(1-\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\|)}$
by~$\Lambda({\mathbf f},\bm \varepsilon,p)$. Moreover, if
equation~(\ref{firstOrderApprox}) is truncated at the first order,
we get the approximate
solution~$- \mathop{\rm Jac}\nolimits_{{\mathbf f} + \bm \varepsilon}(p)^{-1} \bm \varepsilon(p)$
which we call~$\Delta p^1$.
\end{definition}
\begin{proposition}
\label{estimateFirstOrderSolution}
Assume that ${\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\| <1}$ and
let $\|\cdot\|$ be an induced matrix norm. Then we have
\begin{eqnarray}\label{DeltaP}
\|\Delta p^1\| \le \Lambda({\mathbf f},\bm \varepsilon,p) \;
\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \; \|\bm \varepsilon(p)\|
\end{eqnarray}
\end{proposition}
\begin{proof}
Lemma~\ref{lemmaTau} guarantees that the
matrix $\mathop{\rm Jac}\nolimits_{{\mathbf f} +\bm \varepsilon}(p)$ is invertible, so
\begin{eqnarray*}
\Delta p^1 &=& - \mathop{\rm Jac}\nolimits_{{\mathbf f}+\bm \varepsilon}(p)^{-1} \bm \varepsilon(p)
= -(\mathop{\rm Jac}\nolimits_{\mathbf f}(p)+\mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p))^{-1} \bm \varepsilon(p) \\
&=& - \left ( I + \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p) \right )^{-1}
\mathop{\rm Jac}\nolimits_{\mathbf f} (p)^{-1} \bm \varepsilon(p)
\end{eqnarray*}
We apply the inequality of Proposition~\ref{lemmaInverseNorm}
to $\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)$, and get
\begin{eqnarray*}
\|\Delta p^1\| &\le&
\|(I + \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p))^{-1}\| \;
\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \; \|\bm \varepsilon(p)\| \\
&\le& \Lambda({\mathbf f},\bm \varepsilon,p) \;
\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \; \|\bm \varepsilon(p)\|
\end{eqnarray*}
which concludes the proof.
\end{proof}
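As a numerical illustration (our own, not part of the proof), the following sketch compares $\|\Delta p^1\|$ with the right-hand side of~(\ref{DeltaP}) for the system of Example~\ref{singularJacobian} at $p=(2,3)$, using small admissible perturbation values chosen by us:
\begin{verbatim}
import numpy as np

p = np.array([2.0, 3.0])                  # root of f = 0
d1, d2, d3 = 0.05, 0.02, -0.03            # small admissible deltas
eps_p = np.array([d1, d2 * p[1]**2 + d3]) # eps evaluated at p
Jf   = np.array([[p[1], p[0]], [2 * p[0], 2 * p[1]]])
Jeps = np.array([[0.0, 0.0], [0.0, 2 * d2 * p[1]]])
dp1  = -np.linalg.solve(Jf + Jeps, eps_p) # first-order shift Delta p^1
Lam  = 1 / (1 - np.linalg.norm(np.linalg.solve(Jf, Jeps), 2))
bound = Lam * np.linalg.norm(np.linalg.inv(Jf), 2) * np.linalg.norm(eps_p)
print(np.linalg.norm(dp1), bound)         # 0.024... <= 0.131...
\end{verbatim}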
We introduce the local condition
number of the polynomial system ${\mathbf f}({\mathbf x})=0$.
\begin{definition}\label{LocalCondNumb}
Let ${\mathbf f}({\mathbf x})$ be a zero-dimensional smooth complete intersection
in~${\mathbb R}[{\mathbf x}]$, let~$p$ be a nonsingular real solution of ${\mathbf f}({\mathbf x})=0$,
and let $\|\cdot\|$ be a norm.
\begin{itemize}
\item[(a)]
The number $\kappa({\mathbf f},p) = \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|$
is called the {\bf local condition number} of~${\mathbf f}({\mathbf x})$ at~$p$.
\item[(b)] If the norm is an $r$-norm, the local condition
number is denoted
by $\kappa_r({\mathbf f},p)$.
\end{itemize}
\end{definition}
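Computationally, the local condition number only requires the Jacobian matrix at the root. A minimal sketch (Python with NumPy; the helper function is ours, and the sample input is the Jacobian of Example~\ref{singularJacobian} at $p_4=(2,3)$):
\begin{verbatim}
import numpy as np

def kappa(J, r=2):
    # local condition number kappa_r(f, p) from J = Jac_f(p);
    # r may be 1, 2 or np.inf (induced matrix norms)
    return np.linalg.norm(np.linalg.inv(J), r) * np.linalg.norm(J, r)

J = np.array([[3.0, 2.0], [4.0, 6.0]])    # Jac_f(p) at p = (2, 3)
print(kappa(J, 2))                        # approx 6.34
\end{verbatim}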
The following theorem illustrates the importance of the
local condition number.
It depends on ${\mathbf f}$ and $p$, not on $\bm \varepsilon$, and it
is a key ingredient in providing an upper bound for the relative
error $\tfrac{\|\Delta p^1\|}{\|p\|}$.
\begin{theorem}{\bf (Local Condition Number)} \label{theoremCN}\\
Let $\|\cdot\|$ be an induced matrix norm;
under the above assumptions and the
condition $\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{ \bm \varepsilon}(p)\|< 1$
we have
\begin{eqnarray}\label{UB1}
\frac{\|\Delta p^1\|}{\|p\|} \le \Lambda({\mathbf f},\bm \varepsilon,p) \; \kappa({\mathbf f}, p)
\left( \frac{\| \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\|}{\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|} +
\frac{\|\bm \varepsilon(0) - \bm \varepsilon^{\ge 2}(0,p)\|}
{\|{\mathbf f}(0) - {\mathbf f}^{\ge 2}(0,p)\|} \right)
\end{eqnarray}
\end{theorem}
\begin{proof}
By Definition~\ref{defTaylor} the evaluation of $\bm \varepsilon$
at $0$ can be expressed as $\bm \varepsilon(0)=
\bm \varepsilon(p) - \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)p+
\bm \varepsilon^{\ge 2} (0,p)$, and
so $\bm \varepsilon(p) =\bm \varepsilon(0)+
\mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)p- \bm \varepsilon^{\ge 2} (0,p)$.
Dividing~(\ref{DeltaP}) of
Proposition~\ref{estimateFirstOrderSolution} by~$\|p\|$
we obtain
\begin{eqnarray*}
\frac{\|\Delta p^1\|}{\|p\|} &\le&
\Lambda({\mathbf f},\bm \varepsilon,p) \; \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\|
\: \frac{\|\bm \varepsilon(p)\|}{\|p\|}\\
&\le& \Lambda({\mathbf f},\bm \varepsilon,p)
\; \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \:
\frac{\| \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\| \|p\| +
\|\bm \varepsilon(0) - \bm \varepsilon^{\ge 2} (0,p)\|}{\|p\|}\\
& = & \Lambda({\mathbf f},\bm \varepsilon,p)
\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \left ( \|\mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\| +
\frac{\|\bm \varepsilon(0) -
\bm \varepsilon^{\ge 2} (0,p)\|}{\|p\|} \right )
\end{eqnarray*}
Using again Definition~\ref{defTaylor} we
express ${\mathbf f}(0)= {\mathbf f}(p) - \mathop{\rm Jac}\nolimits_{\mathbf f}(p) p + {\mathbf f}^{\ge 2} (0,p)$;
since ${\mathbf f}(p)=0$ we have $\|{\mathbf f}(0) - {\mathbf f}^{\ge 2} (0,p)\| =
\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p) p\| \le \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\| \|p \|$ from which
$$
\frac{1}{\|p\|} \le \frac{\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|}{\|{\mathbf f}(0) -
{\mathbf f}^{\ge 2} (0,p)\|}
$$
We combine the inequalities to obtain
\begin{eqnarray*}
\frac{\| \Delta p^1\|}{\|p\|} &\le&
\Lambda({\mathbf f},\bm \varepsilon,p)\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\|
\left ( \|\mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\| + \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|
\frac{\|\bm \varepsilon(0) -
\bm \varepsilon^{\ge 2} (0,p)\|}{\|{\mathbf f}(0) - {\mathbf f}^{\ge 2} (0,p)\|} \right )\\
&\le& \Lambda({\mathbf f},\bm \varepsilon,p)
\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|
\left ( \frac{\|\mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\|}{\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|} +
\frac{\|\bm \varepsilon(0) -
\bm \varepsilon^{\ge 2} (0,p)\|}{\|{\mathbf f}(0) - {\mathbf f}^{\ge 2} (0,p)\|} \right )
\end{eqnarray*}
and the proof is concluded.
\end{proof}
\goodbreak
The following remark contains observations
about the local condition number.
\begin{remark}\label{remarkCN}
We call attention to the following observations.
\begin{itemize}
\item[(a)] The notion of local condition number
given in Definition~\ref{LocalCondNumb} is a
generalization of the classical notion of condition
number of linear systems (see~\cite{BCM}).
In fact, if ${\mathbf f}({\mathbf x})$ is linear, that
is ${\mathbf f}({\mathbf x})=A {\mathbf x}-b$ with $A \in \rm{Mat}_n({\mathbb R})$
invertible, and $\mathcal Z_{\mathbb R}({\mathbf f})=\{p\}=\{A^{-1}b\}$,
then $\kappa({\mathbf f},p)$ is the classical condition
number of the matrix~$A$. In fact $\mathop{\rm Jac}\nolimits_{\mathbf f}({\mathbf x})=A$, and so
$
\kappa({\mathbf f},p) = \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1}\| \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)\| = \|A^{-1}\| \|A\|
$.
Further, if we consider the
perturbation $\bm \varepsilon({\mathbf x}) = \Delta A {\mathbf x} - \Delta b$,
relation (\ref{UB1}) becomes
$$
\frac{\|\Delta p\|}{\|p\|} \le \frac{1}{1- \|A^{-1}\| \;
\|\Delta A\|} \|A^{-1}\| \; \|A\| \left( \frac{\|\Delta A\|}{\|A\|} +
\frac{\|\Delta b\|}{\|b\|}\right) \eqno {(4)}
$$
which is the relation that quantifies the sensitivity
of the $Ax=b$ problem (see~\cite{BCM}, Theorem~4.1).
\item[(b)] Using any induced matrix norm, the
condition number $\kappa({\mathbf f},p)$ turns out to
be greater than or equal to $1$.
In particular, using the $2$-norm we have
$\kappa_2({\mathbf f},p)=
\frac{\sigma_{\max}(\mathop{\rm Jac}\nolimits_{\mathbf f}(p))}{\sigma_{\min}(\mathop{\rm Jac}\nolimits_{\mathbf f}(p))}$;
in this case the local condition number
attains its minimum, that is $\kappa_2({\mathbf f},p)=1$,
when $\mathop{\rm Jac}\nolimits_{\mathbf f}(p)$ is orthonormal.
\item[(c)] The condition number $\kappa({\mathbf f},p)$ is
invariant under a scalar multiplication of the
polynomial system ${\mathbf f}({\mathbf x})$ by a unique
nonzero real number $\gamma$.
On the contrary, $\kappa({\mathbf f},p)$ is not invariant
under a generic scalar multiplication of each
polynomial~$f_j({\mathbf x})$ of~${\mathbf f}({\mathbf x})$.
The reason is that if we multiply each~$f_j({\mathbf x})$
by a nonzero real number $\gamma_j$ we
obtain the new polynomial
set ${{\mathbf g}({\mathbf x}) = \{\gamma_1 f_1({\mathbf x}),\ldots,\gamma_n f_n({\mathbf x})\}}$
whose condition number at $p$ is
\begin{eqnarray*}
\kappa({\mathbf g}, p) = \|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \Gamma^{-1}\| \:
\| \Gamma \mathop{\rm Jac}\nolimits_{\mathbf f}(p) \| \neq \kappa({\mathbf f},p)
\end{eqnarray*}
where $\Gamma={\rm diag}(\gamma_1,
\ldots,\gamma_n) \in \rm{Mat}_n({\mathbb R})$
is the diagonal matrix with
entries~$\gamma_1,\ldots,\gamma_n$.
\item[(d)]
It is interesting to observe that if $p$ is the origin
then Formula~(\ref{UB1}) of the theorem is not applicable.
However, one can translate $p$ away from the origin,
and the nice thing is that the local condition number
does not change.
\bigskip
\end{itemize}
\end{remark}
\section{Optimization of the local condition number}
\label{Optimization of the local condition number}
In this section we introduce a strategy to improve the
numerical stability of zero-dimensional smooth
complete intersections.
Let ${\mathbf f}({\mathbf x})=\{f_1({\mathbf x}),\ldots,f_n({\mathbf x})\}$ be a
zero-dimensional smooth complete intersection
in ${\mathbb R}[{\mathbf x}]$, and let~$I$ be the ideal of~${\mathbb R}[{\mathbf x}]$
generated by~${\mathbf f}({\mathbf x})$; our aim is to find an alternative
representation of~$I$ with minimal local condition number.
Motivated by Remark~\ref{remarkCN}, items~(b) and~(c), we consider
the strategy of resizing each polynomial of~${\mathbf f}({\mathbf x})$,
and study its effects on the condition number.
The following proposition shows that rescaling
each~$f_j({\mathbf x})$ so that $\frac{\partial f_j}{\partial {\mathbf x}}(p)$
has unitary norm is a nearly optimal, in some cases optimal, strategy.
The result is obtained by adapting the method of
Van der Sluis (see~\cite{H96}, Section 7.3) to the polynomial case.
\begin{proposition}\label{scalingCN}
Let~$p$ be a nonsingular real solution of ${\mathbf f}({\mathbf x})=0$,
let $r_1 \ge 1, r_2 \ge 1$
be real numbers such that ${\frac{1}{r_1}+ \frac{1}{r_2}=1}$,
including the pairs $(1, \infty)$ and $(\infty, 1)$,
let $\gamma =(\gamma_1,\ldots,\gamma_n)$
be an $n$-tuple of nonzero real numbers, and
let ${\mathbf g}_\gamma({\mathbf x})$, ${\bm u}({\mathbf x})$ be the polynomial systems
defined by ${\mathbf g}_\gamma({\mathbf x})=\{\gamma_1 f_1({\mathbf x}),\ldots,\gamma_n f_n({\mathbf x})\}$
and~${\bm u}({\mathbf x}) = \{ \|\frac{\partial f_1}{\partial {\mathbf x}}(p)\|_{r_2}^{-1} f_1({\mathbf x}),
\ldots,\|\frac{\partial f_n}{\partial {\mathbf x}}(p)\|_{r_2}^{-1} f_n({\mathbf x})\}$.
\begin{itemize}
\item[(a)] We have the inequality
$\kappa_{r_1} ({\bm u},p) \le n^{1/r_1} \kappa_{r_1}({\mathbf g}_\gamma,p)$.
\item[(b)] In particular, if $(r_1, r_2) = (\infty, 1)$ we have the equality
$$\kappa_\infty({\bm u},p) = {\rm min}_\gamma \kappa_\infty({\mathbf g}_\gamma,p)$$
where
${\bm u}({\mathbf x}) = \{ \|\frac{\partial f_1}{\partial {\mathbf x}}(p)\|_1^{-1} f_1({\mathbf x}),
\ldots,\|\frac{\partial f_n}{\partial {\mathbf x}}(p)\|_1^{-1} f_n({\mathbf x})\}$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $\Gamma={\rm diag}(\gamma_1,\ldots,\gamma_n)$
and $D={\rm diag}(\|\frac{\partial f_1}{\partial {\mathbf x}}(p)\|_{r_2}^{-1},
\ldots,\|\frac{\partial f_n}{\partial {\mathbf x}}(p)\|_{r_2}^{-1})$;
then $\mathop{\rm Jac}\nolimits_{{\mathbf g}_\gamma}({\mathbf x}) = \Gamma \mathop{\rm Jac}\nolimits_{\mathbf f}({\mathbf x})$
and $\mathop{\rm Jac}\nolimits_{\bm u}({\mathbf x})=D \mathop{\rm Jac}\nolimits_{\mathbf f}({\mathbf x})$.
The condition numbers of ${\mathbf g}_\gamma({\mathbf x})$ and ${\bm u}({\mathbf x})$ at~$p$
are given by
\begin{eqnarray*}
\kappa_{r_1} ({\mathbf g}_\gamma,p) &
=& \| (\Gamma \mathop{\rm Jac}\nolimits_{\mathbf f}(p))^{-1}\|_{r_1} \|\Gamma \mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|_{r_1}\\
\kappa_{r_1} ({\bm u},p) &
=& \| (D \mathop{\rm Jac}\nolimits_{\mathbf f}(p))^{-1} \|_{r_1} \|D \mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|_{r_1}
\end{eqnarray*}
From Proposition~\ref{classicalIneq2} we have
\begin{eqnarray*}
\|D \mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|_{r_1} &\le& n^{1/r_1}
\max_i \|(D \mathop{\rm Jac}\nolimits_{\mathbf f}(p))_i\|_{r_2} = n^{1/r_1}\\
\|(D \mathop{\rm Jac}\nolimits_{\mathbf f}(p))^{-1}\|_{r_1} &=&
\|\mathop{\rm Jac}\nolimits_{\mathbf f}^{-1}(p) D^{-1}\|_{r_1} = \|\mathop{\rm Jac}\nolimits_{\mathbf f}^{-1}(p)
\Gamma^{-1} \Gamma D^{-1}\|_{r_1} \\
&\le& \|\mathop{\rm Jac}\nolimits_{\mathbf f}^{-1}(p) \Gamma^{-1}\|_{r_1}
\max_i \left ( | \gamma_i | \: \left\| \frac{\partial f_i}
{\partial {\mathbf x}}(p) \right\|_{r_2} \right) \\
&\le& \|\mathop{\rm Jac}\nolimits_{\mathbf f}^{-1}(p) \Gamma^{-1}\|_{r_1}
\|\Gamma \mathop{\rm Jac}\nolimits_{\mathbf f}(p)\|_{r_1} = \kappa_{r_1}({\mathbf g}_\gamma,p)
\end{eqnarray*}
therefore $
\kappa_{r_1} ({\bm u},p) \le n^{1/r_1} \kappa_{r_1}({\mathbf g}_\gamma,p)$
and $(a)$ is proved. To prove $(b)$ it suffices to use $(a)$
and observe that $n^{1/\infty} = 1$.
\end{proof}
\begin{remark}
The above proposition implies that the strategy of
rescaling each polynomial~$f_j({\mathbf x})$ to make it unitary at~$p$
(see Definition~\ref{defTaylor}) is beneficial
for lowering the local condition number of~${\mathbf f}({\mathbf x})$ at~$p$.
This number is minimal when ${r=\infty}$, and it is within a
factor of $\sqrt{n}$ of the minimum when $r=2$.
However, for $r=2$ we can do better, at least when all the
polynomials $f_1({\mathbf x}), \dots, f_n({\mathbf x})$ have equal degree.
The idea is to use Remark~\ref{remarkCN}, item~(b)
which says that when using the matrix $2$-norm,
the local condition number attains its minimum when the
Jacobian matrix is orthonormal.
\end{remark}
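The effect of such a rescaling is easy to observe numerically. In the following sketch (Python with NumPy; the badly scaled Jacobian is toy data of ours) the rows are rescaled to unit $1$-norm, corresponding to the choice $(r_1,r_2)=(\infty,1)$ of Proposition~\ref{scalingCN}:
\begin{verbatim}
import numpy as np

def kappa(J, r):
    return np.linalg.norm(np.linalg.inv(J), r) * np.linalg.norm(J, r)

J = np.array([[10.0, 0.5],
              [0.2, 0.1]])                 # badly scaled Jacobian (toy)
D = np.diag(1.0 / np.abs(J).sum(axis=1))   # rows rescaled to unit 1-norm
print(kappa(J, np.inf), kappa(D @ J, np.inf))   # ~119 vs. ~5.7
\end{verbatim}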
\begin{proposition}\label{min2norm}
Let ${\mathbf f} = (f_1, \dots, f_n)$ be a smooth zero-dimensional complete
intersection in~${\mathbb R}[{\mathbf x}]$ such that $\deg(f_1) = \cdots = \deg(f_n)$
and let $p\in \mathcal Z_{\mathbb R}({\mathbf f})$.
Moreover, let $C = (c_{ij}) \in {\rm Mat}_n({\mathbb R})$
be an invertible matrix, and let ${\mathbf g}$ be defined
by ${\mathbf g}^{\rm tr} = C\cdot {\mathbf f}^{\rm tr}$.
Then the following conditions are equivalent:
\begin{itemize}
\item[(a)] $\kappa_2({\mathbf g}, p)=1$, the minimum possible.
\item[(b)] $C^t C = (\mathop{\rm Jac}\nolimits_{\mathbf f}(p) \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^t)^{-1}$.
\end{itemize}
\end{proposition}
\begin{proof}
We know that $\kappa_2({\mathbf g},p) =1$ if and only
if the matrix $\mathop{\rm Jac}\nolimits_{\mathbf g}(p)$ is orthonormal.
This condition can be expressed by the
equality $\mathop{\rm Jac}\nolimits_{\mathbf g}(p) \mathop{\rm Jac}\nolimits_{\mathbf g}(p)^t = I_n$,
that is $C \mathop{\rm Jac}\nolimits_{\mathbf f}(p) \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^t C^t=I_n$
and the conclusion follows.
\end{proof}
We observe that condition $(b)$ of the proposition
requires that the entries of $C$
satisfy an underdetermined system of $(n^2+n)/2$
independent quadratic
equations in $n^2$ unknowns.
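One particular solution of condition $(b)$ can be obtained from a Cholesky factorization: if $(\mathop{\rm Jac}\nolimits_{\mathbf f}(p) \mathop{\rm Jac}\nolimits_{\mathbf f}(p)^t)^{-1} = LL^t$ with $L$ lower triangular, then $C = L^t$ satisfies $C^tC = LL^t$. The following sketch (Python with NumPy; our own code, applied to the Jacobian of Example~\ref{singularJacobian} at $p_4=(2,3)$, where both polynomials have degree~$2$) illustrates this choice:
\begin{verbatim}
import numpy as np

def optimal_C(J):
    # one solution of C^t C = (J J^t)^{-1}: take the Cholesky factor
    # (J J^t)^{-1} = L L^t and set C = L^t
    L = np.linalg.cholesky(np.linalg.inv(J @ J.T))
    return L.T

J = np.array([[3.0, 2.0], [4.0, 6.0]])     # Jac_f(p) at p4 = (2, 3)
G = optimal_C(J) @ J                       # Jacobian of g at p4
print(np.linalg.cond(G, 2))                # = 1 up to rounding
\end{verbatim}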
\section{Experiments}\label{Experiments}
In numerical linear algebra it is well-known
(see for instance~\cite{BCM}, Ch. 4, Section~1) that the
upper bound given by the classical formula (4) of
Remark~\ref{remarkCN}~(a)
is not necessarily sharp.
Since our upper bound~(\ref{UB1})
generalizes the classical one, as shown in
Remark~\ref{remarkCN},
we provide some experimental evidence
that lowering the condition number not only
sharpens the upper bound, but indeed
stabilizes the solution point.
\begin{example}
We consider the ideal $I=(f_1, f_2)$ in ${\mathbb R}[x,y]$ where
\begin{eqnarray*}
f_1 &=& \tfrac{1}{4}x^2y + xy^2 + \tfrac{1}{4}y^3 + \tfrac{1}{5}x^2 -
\tfrac{5}{8}xy + \tfrac{13}{40}y^2 + \tfrac{9}{40}x - \tfrac{3}{5}y + \tfrac{1}{40}\\
f_2 &=& x^3 + \tfrac{14}{13}xy^2 + \tfrac{57}{52}x^2 - \tfrac{25}{52}xy
+ \tfrac{8}{13}y^2 - \tfrac{11}{52}x - \tfrac{4}{13}y - \tfrac{4}{13}
\end{eqnarray*}
It is a zero-dimensional smooth complete
intersection with~$7$ real roots and
we consider the point $p=(0,1) \in \mathcal Z_{\mathbb R}({\mathbf f})$.
The polynomial system ${\mathbf f}=\{f_1, f_2\}$ is unitary at~$p$
and its condition number is $\kappa_2({\mathbf f},p)=8$.
Using Proposition~\ref{min2norm} we construct a
new polynomial system~${\mathbf g}$ with minimal local
condition number at~$p$.
The new pair of generators ${\mathbf g}$ is defined
(see Proposition~\ref{min2norm}) by the following
formula ${\mathbf g}^{\rm tr}=C \cdot {\mathbf f}^{\rm tr}$,
where $C=(c_{ij}) \in \mathop{\rm Mat}\nolimits_2({\mathbb R})$ is an invertible
matrix whose entries satisfy the following system
\begin{eqnarray*}
\left \{ \begin{array}{lll}
c_{11}^2 + c_{21}^2 &=& \tfrac{4225}{256}\\
c_{11}c_{12} + c_{21}c_{22} &=& -\tfrac{4095}{256}\\
c_{12}^2 + c_{22}^2 &=& \tfrac{4225}{256}
\end{array} \right .
\end{eqnarray*}
A solution is given by $c_{11}=1$, $c_{12}
=0$, $c_{21}=\frac{63}{16}$, $c_{22}=-\frac{65}{16}$, and
we observe that the associated unitary polynomial
system ${\mathbf g}=\{f_1, \frac{63}{16} f_1 - \frac{65}{16}f_2\}$
provides an alternative representation of~$I$ with
minimal local condition
number $\kappa_2({\mathbf g},p)=1$ at the point~$p$.
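The two condition numbers stated above can be verified numerically, for instance as follows (Python with NumPy; our own check):
\begin{verbatim}
import numpy as np

J = np.array([[3/5, 4/5],
              [5/13, 12/13]])              # Jac_f(p) at p = (0, 1)
C = np.array([[1.0, 0.0],
              [63/16, -65/16]])
print(np.linalg.cond(J, 2), np.linalg.cond(C @ J, 2))   # 8.0 and 1.0
\end{verbatim}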
Now we embed the system~${\mathbf f}(x,y)$ into the
family $F(a,x,y)=\{F_1, F_2\}$ where
\begin{eqnarray*}
F_1(a,x,y) &=& \tfrac{1}{4}x^2y + xy^2
+ \tfrac{1}{4}y^3 + \tfrac{1}{5}x^2 -
\tfrac{5}{8}xy + \left(\tfrac{13}{40}-a \right)y^2\\
&& + \left(\tfrac{9}{40}+a \right)x
+ \left(- \tfrac{3}{5}+a \right)y + \tfrac{1}{40}-2a\\
F_2(a,x,y) &=& x^3 + \tfrac{14}{13}xy^2
+ \tfrac{57}{52}x^2 - \tfrac{25}{52}xy
+ \left(\tfrac{8}{13}+a \right)y^2\\
&& + \left(- \tfrac{11}{52}+a \right)x
- \left(\tfrac{4}{13}+a \right)y - \tfrac{4}{13}+a^2
\end{eqnarray*}
We denote by $I_F(a,x,y)$ the ideal
generated by~$F(a,x,y)$ in~${\mathbb R}[a,x,y]$,
compute the reduced {\tt Lex}-Gr\"obner
basis of $I_F(a,x,y){\mathbb R}(a)[x,y]$, and get
$$
\{x + \tfrac{l_1(a,y)}{d_F(a)}, \ \ y^9 + l_2(a,y)\}
$$
where $l_1(a,y),l_2(a,y) \in {\mathbb R}[a,y]$ have
degree~$8$ in~$y$ and
$d_F(a) \in {\mathbb R}[a]$ has degree~$12$.
This basis has the shape prescribed by
the Shape Lemma and a flat locus is given
by $\{\alpha\in {\mathbb R} \;|\; d_F(\alpha) \neq 0\}$.
We let $D_F(a,x,y)=\det(\mathop{\rm Jac}\nolimits_F(a,x,y))$,
$J_F(a,x,y)=I_F(a,x,y)+(D_F(a,x,y))$,
compute $J_F(a,x,y) \cap {\mathbb R}[a]$, and
we get the principal ideal generated
by a univariate polynomial $h_F(a)$ of degree~$28$.
An $I$-optimal subscheme
is ${\mathcal U}_F=\{\alpha \in {\mathbb R} \;|\; d_F(\alpha)h_F(\alpha) \neq 0 \}$.
An open semi-algebraic subset~${\mathcal V}_F$ of~${\mathcal U}_F$
which contains the point $\alpha_I=0$ and such that the
fiber over each $\alpha \in {\mathcal V}_F$
consists of~$7$ real points, is given by the open
interval $(\alpha_1,\alpha_2)$,
where $\alpha_1<0$ and $\alpha_2>0$ are the real
roots of $d_F(a)h_F(a)=0$ closest to the origin.
Their approximate values are $\alpha_1=-0.00006$
and $\alpha_2=0.01136$.
To produce similar perturbations, we embed the
system~${\mathbf g}(x,y)$ into the family $G(a,x,y)=\{G_1, G_2\}$ where
\begin{eqnarray*}
G_1(a,x,y) &=& \tfrac{1}{4}x^2y + xy^2 + \tfrac{1}{4}y^3
+ \tfrac{1}{5}x^2 - \tfrac{5}{8}xy + \left(\tfrac{13}{40}-a \right)y^2\\
&& + \left(\tfrac{9}{40}+a \right)x + \left(- \tfrac{3}{5}
+a \right)y + \tfrac{1}{40}-2a\\
G_2(a,x,y) &=& -\tfrac{65}{16}x^3
+ \tfrac{63}{64}x^2y - \tfrac{7}{16}xy^2
+ \tfrac{63}{64}y^3 - \tfrac{1173}{320}x^2
- \tfrac{65}{128}xy\\
&& + \left(- \tfrac{781}{640}+a \right)y^2
+ \left( \tfrac{1117}{640}+a \right)x
+ \left(- \tfrac{89}{80}-a \right)y + \tfrac{863}{640}+a^2
\end{eqnarray*}
We denote by $I_G(a,x,y)$ the ideal generated
by~$G(a,x,y)$ in~${\mathbb R}[a,x,y]$,
compute the reduced {\tt Lex}-Gr\"obner
basis of $I_G(a,x,y){\mathbb R}(a)[x,y]$, and get
$$
\{x + \tfrac{l_3(a,y)}{d_G(a)}, \; y^9 + l_4(a,y)\}
$$
where $l_3(a,y),l_4(a,y) \in {\mathbb R}[a,y]$ have
degree~$8$ in~$y$ and
$d_G(a) \in {\mathbb R}[a]$ has degree~$12$, therefore the basis has
the shape prescribed by the Shape Lemma.
A flat locus is given by $\{\alpha\in {\mathbb R} \;|\; d_G(\alpha) \neq 0\}$.
We let $D_G(a,x,y)=\det(\mathop{\rm Jac}\nolimits_G(a,x,y))$, $J_G(a,x,y)
=I_G(a,x,y)+(D_G(a,x,y))$
and compute $J_G(a,x,y) \cap {\mathbb R}[a]$.
We get the principal ideal generated
by a univariate polynomial $h_G(a)$ of degree~$28$.
An $I$-optimal subscheme
is ${\mathcal U}_G=\{\alpha \in {\mathbb R} \;|\; d_G(\alpha)h_G(\alpha) \neq 0 \}$.
An open semi-algebraic subset~${\mathcal V}_G$ of~${\mathcal U}_G$
containing the point $\alpha_I=0$ and such that the
fiber over each $\alpha \in {\mathcal V}_G$
consists of~$7$ real points is given by the
open interval $(\alpha_3,\alpha_4)$,
where $\alpha_3<0$ and $\alpha_4>0$ are
the real roots of $d_G(a)h_G(a)=0$
closest to the origin.
Their approximate values are $\alpha_3=-0.00009$
and $\alpha_4=0.00914$.
Let $\alpha \in (\alpha_1,\alpha_4)$.
According to Definition~\ref{admissible} the polynomial set
$\bm \varepsilon(x,y)=\{-\alpha y^2 + \alpha x
+ \alpha y - 2\alpha,\; \alpha y^2 + \alpha x
- \alpha y + \alpha^2\}$ is an admissible perturbation
of~${\mathbf f}(x,y)$ and~${\mathbf g}(x,y)$.
Further, since $\|\mathop{\rm Jac}\nolimits_{\mathbf f}(p)^{-1} \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\|_2
= \sqrt{65} |\alpha| < 1$ and
$\|\mathop{\rm Jac}\nolimits_{\mathbf g}(p)^{-1} \mathop{\rm Jac}\nolimits_{\bm \varepsilon}(p)\|_2
= \sqrt{2} |\alpha| < 1$
Theorem~\ref{theoremCN} can be applied.
We let $q \in \mathcal Z_{\mathbb R}({\mathbf f} + \bm \varepsilon)$
and $r \in \mathcal Z_{\mathbb R}({\mathbf g} +\bm \varepsilon)$
be the two perturbations of the point~$p$.
In order to compare the numerical behaviour of~${\mathbf f}$
and~${\mathbf g}$ at the real root~$p$
we compare the relative errors $\frac{\|q - p\|_2}{\|p\|_2}$
and $\frac{\|r - p\|_2}{\|p\|_2}$
for different values of~$\alpha$.
The first column of the following table contains the
values of the local condition
numbers of~${\mathbf f}$ and~${\mathbf g}$ at~$p$.
The second column contains the mean values of the
upper bounds ${\rm UB}({\mathbf f},p)$ and ${\rm UB}({\mathbf g},p)$
given by Theorem~\ref{theoremCN},
computed for $100$ random values
of~$\alpha \in (\alpha_1,\alpha_4)$.
The third column contains the mean values of
$\frac{\|q-p\|_2}{\|p\|_2}$ and $\frac{\|r-p\|_2}{\|p\|_2}$
for the same values of $\alpha$.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$\kappa_2({\mathbf f},p)$ & ${\rm UB}({\mathbf f},p)$ & $\frac{\|q-p\|_2}{\|p\|_2}$\\
\hline
$8$ & $0.1729$ & $0.000097$ \\
\hline \hline
$\kappa_2({\mathbf g},p)$ &${\rm UB}({\mathbf g},p)$ & $\frac{\|r-p\|_2}{\|p\|_2}$ \\
\hline
$1$ & $0.0275$ & $0.000023$\\
\hline
\end{tabular}
\end{table}
\noindent The fact that the mean values of $\frac{\|r - p\|_2}{\|p\|_2}$ are smaller than the
mean values of~$\frac{\|q - p\|_2}{\|p\|_2}$ suggests that~$p$ is more stable when
it is considered as a root of~${\mathbf g}$ than as a root of~${\mathbf f}$.
\end{example}
\begin{example}
We consider the ideal $I=(f_1, f_2, f_3)$ in ${\mathbb R}[x,y,z]$
where
\begin{eqnarray*}
f_1 &=& \tfrac{6}{17}x^2 + xy - \tfrac{24}{85}x
- \tfrac{8}{85}y - \tfrac{6}{85}\\
f_2 &=& \tfrac{39}{89}x^2 + \tfrac{70}{89}xy
+ yz - \tfrac{39}{89}x + \tfrac{10}{89}y\\
f_3 &=& y^2 + 2xz + z^2 - z
\end{eqnarray*}
It is a zero-dimensional smooth complete
intersection with~$6$ real roots and
we consider the point $p=(1,0,0) \in \mathcal Z_{\mathbb R}({\mathbf f})$.
The polynomial system ${\mathbf f}=\{f_1, f_2, f_3\}$ is unitary
at~$p$ and its condition number is $\kappa_2({\mathbf f},p)=123$.
Using Proposition~\ref{min2norm} we construct a new
polynomial system~${\mathbf g}$ with minimal local condition
number at~$p$. The new set ${\mathbf g}$ is defined
by ${\mathbf g}^{\rm tr}=C \cdot {\mathbf f}^{\rm tr}$,
where $C=(c_{ij}) \in \mathop{\rm Mat}\nolimits_3({\mathbb R})$ is an invertible
matrix whose entries satisfy the following system
\begin{eqnarray*}
\left \{ \begin{array}{llc}
c_{11}^2 + c_{21}^2 + c_{31}^2 &=& \tfrac{57229225}{15129}\\
c_{11}c_{12} + c_{21}c_{22} + c_{31}c_{32} &
=& -\tfrac{57221660}{15129}\\
c_{11}c_{13} + c_{21}c_{23} + c_{31}c_{33} &=& 0\\
c_{12}^2 + c_{22}^2 + c_{32}^2 &=& \tfrac{57229225}{15129}\\
c_{12}c_{13} + c_{22}c_{23} + c_{32}c_{33} &=& 0\\
c_{13}^2 + c_{23}^2 + c_{33}^2 &=& 1\\
\end{array} \right .
\end{eqnarray*}
A solution is given by $c_{11}=c_{33}=1$, $c_{12}=c_{13}
=c_{23}=c_{31}=c_{32}=0$, ${c_{21}=\frac{7564}{123}}$,
${c_{22}=-\frac{7565}{123}}$.
Therefore the associated unitary
polynomial system is the following
${\mathbf g}=\{f_1, \frac{7564}{123}f_1 -\frac{7565}{123}f_2,
f_3\}$. It provides an alternative representation of~$I$
with minimal local condition number $\kappa_2({\mathbf g},p)=1$ at the point $p$.
We embed the system~${\mathbf f}(x,y,z)$ into the
family $F(a,x,y,z)=\{F_1, F_2, F_3\}$ where
\begin{eqnarray*}
F_1(a,x,y,z) &=& \tfrac{6}{17}x^2 + (1-a^2)xy
+ (-\tfrac{24}{85}+a)x +(-\tfrac{8}{85}-a)y + (-\tfrac{6}{85}+a^2)\\
F_2(a,x,y,z) &=& \tfrac{39}{89}x^2 + (\tfrac{70}{89}+a)xy
+ yz + (-\tfrac{39}{89}+a)x + (\tfrac{10}{89}+a)y\\
F_3(a,x,y,z) &=& y^2 + 2xz + (1-2a)z^2 + (-1+a)z
\end{eqnarray*}
We denote by $I_F(a,x,y,z)$ the ideal generated
by~$F(a,x,y,z)$ in~${\mathbb R}[a,x,y,z]$,
compute the reduced {\tt Lex}-Gr\"obner basis
of $I_F(a,x,y,z){\mathbb R}(a)[x,y,z]$, and get
$$
\{x + \tfrac{l_1(a,z)}{d_F(a)}, \; y +
\tfrac{l_2(a,z)}{d_F(a)}, \; z^9 + \tfrac{l_3(a,z)}{e_F(a)}\}
$$
where $l_1(a,z),l_2(a,z),l_3(a,z) \in {\mathbb R}[a,z]$ have degrees
$\deg_z(l_1)=\deg_z(l_2)=7$ and $\deg_z(l_3)=8$ while
$d_F(a) \in {\mathbb R}[a]$ has degree~$54$,
and $e_F(a) \in {\mathbb R}[a]$ has degree~$11$.
The basis has the shape prescribed by the Shape Lemma.
A flat locus is given
by $\{\alpha\in {\mathbb R} \;|\; d_F(\alpha) e_F(\alpha) \neq 0\}$.
We let $D_F(a,x,y,z)=\det(\mathop{\rm Jac}\nolimits_F(a,x,y,z))$,
$J_F(a,x,y,z)=I_F(a,x,y,z)+(D_F(a,x,y,z))$
and compute $J_F(a,x,y,z) \cap {\mathbb R}[a]$.
We get the principal ideal generated
by a univariate polynomial $h_F(a)$ of degree~$59$.
An $I$-optimal subscheme is ${\mathcal U}_F=\{\alpha \in {\mathbb R} \;|\;
d_F(\alpha)e_F(\alpha)h_F(\alpha) \neq 0 \}$.
An open semi-algebraic subset~${\mathcal V}_F$ of~${\mathcal U}_F$
containing the point $\alpha_I=0$ and such that
the fiber over each $\alpha \in {\mathcal V}_F$
consists of~$6$ real points is given by the open
interval $(\alpha_1,\alpha_2)$,
where $\alpha_1<0$ and $\alpha_2>0$ are the
real roots of $d_F(a)e_F(a) h_F(a)=0$
closest to the origin. Their approximate values
are $\alpha_1=-0.17082$ and $\alpha_2=0.20711$.
To produce similar perturbations,
we embed the system~${\mathbf g}(x,y,z)$ into the family
$G(a,x,y,z)=\{G_1, G_2, G_3\}$ where
\begin{eqnarray*}
G_1(a,x,y,z) &=& \tfrac{6}{17}x^2 + (1-a^2)xy
+ (-\tfrac{24}{85}+a)x +(-\tfrac{8}{85}-a)y + (-\tfrac{6}{85}+a^2)\\
G_2(a,x,y,z) &=& -\tfrac{3657}{697}x^2
+ (\tfrac{538}{41}+a)xy - \tfrac{7565}{123}yz
+ (\tfrac{33413}{3485}+a)x \\
&& +(- \tfrac{44254}{3485}+a)y - \tfrac{15128}{3485}\\
G_3(a,x,y,z) &=& y^2 + 2xz + (1-2a)z^2 + (-1+a)z
\end{eqnarray*}
We denote by $I_G(a,x,y,z)$ the ideal
generated by~$G(a,x,y,z)$ in~${\mathbb R}[a,x,y,z]$,
compute the reduced {\tt Lex}-Gr\"obner
basis of $I_G(a,x,y,z){\mathbb R}(a)[x,y,z]$, and get
$$
\{x + \tfrac{l_4(a,z)}{d_G(a)}, \; y
+ \tfrac{l_5(a,z)}{d_G(a)}, \; z^9 + \tfrac{l_6(a,z)}{e_G(a)}\}
$$
where $l_4(a,z),l_5(a,z),l_6(a,z) \in {\mathbb R}[a,z]$ have degrees
$\deg_z(l_4)=\deg_z(l_5)=7$ and $\deg_z(l_6)=8$ while
$d_G(a) \in {\mathbb R}[a]$ has degree~$54$,
and $e_G(a) \in {\mathbb R}[a]$ has degree~$11$.
The basis has the shape prescribed by the Shape Lemma.
A flat locus is given by $\{\alpha\in {\mathbb R} \;|\; d_G(\alpha) e_G(\alpha) \neq 0\}$.
We let $D_G(a,x,y,z)=\det(\mathop{\rm Jac}\nolimits_G(a,x,y,z))$,
$J_G(a,x,y,z)=I_G(a,x,y,z)+(D_G(a,x,y,z))$
and compute $J_G(a,x,y,z) \cap {\mathbb R}[a]$.
We get the principal ideal generated by a
univariate polynomial $h_G(a)$ of degree~$59$.
An $I$-optimal subscheme is
${\mathcal U}_G=\{\alpha \in {\mathbb R} \;|\; d_G(\alpha)e_G(\alpha)h_G(\alpha) \neq 0 \}$.
An open semi-algebraic subset~${\mathcal V}_G$ of~${\mathcal U}_G$
containing the point $\alpha_I=0$ and such that the
fiber over each $\alpha \in {\mathcal V}_G$
consists of~$6$ real points is given by the open
interval $(\alpha_3,\alpha_4)$,
where $\alpha_3<0$ and $\alpha_4>0$ are the
real roots of $d_G(a) e_G(a) h_G(a)=0$
closest to the origin. Their approximate
values are $\alpha_3=-0.02942$
and $\alpha_4=0.03312$.
Let $\alpha \in (\alpha_3,\alpha_4)$. According to
Definition~\ref{admissible} the polynomial set
$\bm \varepsilon(x,y,z)=\{-\alpha^2xy
+ \alpha x - \alpha y + \alpha^2, \; \alpha xy + \alpha x
+ \alpha y, \; -2 \alpha z^2+ \alpha z\}$ is an
admissible perturbation of~${\mathbf f}(x,y,z)$ and~${\mathbf g}(x,y,z)$.
We let $q \in \mathcal Z_{\mathbb R}({\mathbf f} + \bm \varepsilon)$
and $r \in \mathcal Z_{\mathbb R}({\mathbf g} +\bm \varepsilon)$
be the two perturbations of the point~$p$.
In order to compare the numerical behaviour of~${\mathbf f}$
and~${\mathbf g}$ at the real root~$p$
we compare the relative errors $\frac{\|q - p\|_2}{\|p\|_2}$
and $\frac{\|r - p\|_2}{\|p\|_2}$ for different values of~$\alpha$.
The first column of the following table contains the
values of the local condition numbers of~${\mathbf f}$ and~${\mathbf g}$ at~$p$.
The second column contains the mean values of
$\frac{\|q-p\|_2}{\|p\|_2}$ and $\frac{\|r-p\|_2}{\|p\|_2}$
for $100$ random values of $\alpha \in (\alpha_3,\alpha_4)$.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|}
\hline
$\kappa_2({\mathbf f},p)$ & $\frac{\|q-p\|_2}{\|p\|_2}$\\
\hline
$123$ & $0.0436$ \\
\hline \hline
$\kappa_2({\mathbf g},p)$ & $\frac{\|r-p\|_2}{\|p\|_2}$ \\
\hline
$1$ & $0.0221$\\
\hline
\end{tabular}
\end{table}
\noindent As in the example before, the fact that the mean values of $\frac{\|r - p\|_2}{\|p\|_2}$ are smaller than the
mean values of $\frac{\|q - p\|_2}{\|p\|_2}$ suggests that~$p$ is more stable when
it is considered as a root of~${\mathbf g}$ than as a root of~${\mathbf f}$.
\end{example}
\bigbreak\bigbreak
\section{Introduction}
Variational autoencoders (VAE) (\cite{Kingma} and \cite{Rezende2014}) are described by \citet{Goodfellow2016} as an ``excellent manifold learning algorithm'' due to the fact that the model is forced ``to learn a predictable coordinate system that the encoder can capture''. VAE do so by using a regularization term in order to reach low-energy regions.
According to \citet{LeCun2020}, regularization like in the VAE case helps to keep the energy function smooth, which is desirable for the model to learn meaningful dependencies (e.g. to fill blanks). In contrast, maximum likelihood approaches push down the energy surface only at training sample regions. Therefore, their inherent objective is ``to make the data manifold an infinitely deep and infinitely narrow canyon'' (see \citealt{LeCun2020}).
Learning meaningful dependencies is a desirable concept for advancing deep learning. Hence, there exists an interest in understanding and developing VAE. Recent work aims at explaining and overcoming well-known pitfalls of VAE, such as spurious global optima (see \citealt{Dai2019}), posterior collapse (see \citealt{Lucas2019}) or prior posterior mismatch (see \citealt{Dai2019} and \citealt{Ghosh}). In these works, a Gaussian observation model distribution is assumed.
In this work, we answer the following research question:
\begin{itemize}
\item[]\textit{Is there a way to generalize the loss analysis of $\beta$-VAE based on the observation model distribution?}
\end{itemize}
For this, we establish a connection between $\beta$-VAE and generalized linear models (GLM) and provide a framework for analysing $\beta$-VAE based on the observation model distribution. By doing so, we generalize works of \citet{Dai2018}, \citet{Lucas2019} and \citet{Sicks2020}. We provide an approximation to the evidence lower bound (ELBO), which is exact in the Gaussian distribution case (see also \citealt{Dai2018} and \citealt{Lucas2019}) and a lower bound for the Bernoulli distribution case (see also \citealt{Sicks2020}). Further, we analyse the maximum likelihood estimates (MLE) of this approximation.
Given the MLE, we
\begin{itemize}
\item propose an MLE-based initialization and show that the training performance of a VAE net can be enhanced.
\item find an analytical description of the auto-pruning property of $\beta$-VAE, a reason for posterior collapse.
\item analytically calculate a statistic used for predicting the number of inactive units in a yet to be trained VAE net and show its practical applicability.
\end{itemize}
As GLM are based on exponential dispersion families (EDF), the analysis is based on the distribution assumption for the observation model. This is favourable as VAE, based on EDF, are applied in various different fields (with different distribution assumptions), as e.g.:
anomaly detection using Gaussian distribution (see \citealt{Xu2018a}), molecules representation using Bernoulli distribution (see \citealt{Blaschke2018}), image compression using Bernoulli distribution (see \citealt{Duan2019}) or multivariate spatial point processes using Poisson distribution (see \citealt{Yuan2020}).
This work is structured as follows: In Section \ref{sec:Motivation and Related Work}, we give a motivation as well as an overview of related work. In Section \ref{sec: Theoretical Background and Advancements}, we present the theoretical background and our results that are a consequence of connecting VAE and GLM. Afterwards in Section \ref{sec:Simulation results}, we provide simulations validating our theoretical results. In Section \ref{sec:Conclusion}, we summarize our contributions and point out future research directions.
\section{Motivation and Related Work} \label{sec:Motivation and Related Work}
Our main contribution is to interpret the decoder of a $\beta$-VAE as a GLM (see Section \ref{subsec:The EDF and the decoder as GLM}). This will allow us to identify well-known activation functions as the inverses of link functions of GLM. Therefore, we are able to provide a systematic generalization of the loss analysis for VAE based on the assumption that the observation model belongs to an EDF (see Section \ref{subsec:Local Behaviour for general EDF observation model}).
Given an affine transformation for a part of the decoder, we derive MLE for an approximation to the $\beta$-VAE objective. Even though the decoder architecture is arguably simple, analysing the critical points and the loss landscapes of VAE helps to understand these models better.
For example, \citet{Lucas2019} consider this approach for their analysis with a Gaussian observation model.
Given the derived MLE, we derive weight and bias initializations (see Section \ref{subsec:MLE-based initialization} and Appendix \ref{subsubsec:Initialization}), analyse the auto-pruning of $\beta$-VAE (see Section \ref{subsec:MLE and optimal solutions}) and analytically calculate a statistic used for predicting the number of inactive units (see Section \ref{subsec:MLE and optimal solutions} and Section \ref{subsec:Latent dimension activities}).
By auto-pruning, we mean that during training the net sets nodes for the latent space inactive (i.e. to zero) and is presumably not able to activate them again due to local minima or saddle points in the loss surface.
On the one hand, this property of $\beta$-VAE is desirable, as the model focusses on useful representations. On the other hand, it is considered a problem when too many units become inactive before learning a useful representation.
To weaken the effect of auto-pruning, different approaches are considered in the literature. \citet{Bowman2015}, \citet{KaaeSonderby2016} and \citet{Huang2018} use annealing of the parameter $\beta$ during training. Our results below suggest that the annealing does not influence the final number of active units, if training is conducted long enough and the loss surface is smooth. \citet{Lucas2019} make the same observation for the Gaussian observation model case.
Other approaches to tackle posterior collapse are to adjust the training objective.
\citet{Kingma2016} propose an alternative objective, in which the gradient is ignored if the KL-Divergence is below a pre-determined threshold. In \citet{Razavi2018} a variational posterior is chosen so that the posterior collapse cannot happen by design. Hence, an implicit threshold is chosen.
To reduce posterior collapse, \citet{Yeung2017} propose to use masking of groups in the latent dimension during training in the fashion of dropout layers.
\citet{He2019} find empirically that the variational approximation lags behind the true model posterior in the initial stages of training. They propose to train the inference network separately to alleviate posterior collapse.
\citet{Burda2015} consider importance weighting and show in their experiments that their proposed method yields less inactive units. For this, they calculate an activity statistic for each net after training. In this work, we provide a closed form for this statistic that can be calculated without training.
As our work focuses on analysing critical points of $\beta$-VAE, in the following, we give an overview of literature for analysing Autoencoders and VAE as well as on GLM used in the context of neural nets.
Various authors have analysed optimal points of the loss surface for squared loss (i.e. a Gaussian observation model) autoencoders.
For autoencoders with linearised activations, \citet{Bourlard1988} show that the optimal solution is given by the solution of a singular value decomposition (SVD). \citet{Baldi1989a} extend these results and analyse the squared error loss of autoencoders for all critical points.
\citet{Zhou2018} provide analytical forms for critical points and characterize the values of the corresponding loss functions as well as properties of the loss landscape for one-hidden layer ReLU autoencoders.
\citet{Kunin2019} consider regularizations on linear autoencoders and analyse the loss-landscape for different regularizations. They show that regularized linear autoencoders are capable of learning the principal directions and have connections to pPCA.
Variational Autoencoders with Gaussian observation models have been considered in \citet{Dai2018}, \citet{Dai2019}, \citet{Lucas2019} and \citet{Dai2020}.
\citet{Dai2018} analyse Gaussian VAE and show connections to pPCA and robust PCA as well as smoothing effects for local optima of the loss landscape.
\citet{Dai2019} analyse deep Gaussian VAE. Assuming the existence of an invertible and differentiable mapping between low-rank manifolds in the sample space and the latent space, they show that spurious global optima exist, which do not reflect the data-generating manifold appropriately.
\citet{Lucas2019} extend results of \citet{Dai2018} to analyse posterior collapse. They do this by analysing the difference from the true marginal to the ELBO which is possible under Gaussian assumptions, as $P_{\theta}(Z|X)$ becomes tractable in their setting. Furthermore, they provide experimental results on the posterior collapse for deep non-linear cases.
For Gaussian observation models, \citet{Dai2020} introduce a taxonomy for different types of posterior collapse. Furthermore, they show that apart from the KL-Divergence, bad local minima, inherent in deep autoencoders, can lead to posterior collapse.
In this work, we use a linearisation of the decoder.
\citet{Rolinek2019} analyse $\beta$-VAE and show that local orthogonality is promoted on the decoder.
\citet{Kumar2020a} generalize the work of \citet{Rolinek2019} to different observation models and show that diagonal covariances of the variational distribution naturally encourages orthogonal columns of the Jacobian of the decoder. We extend their work as we provide an alternative formulation as well as critical points and error bounds for their approximation.
\citet{Sicks2020} formulate a lower bound for the ELBO of a Bernoulli observation model using the linearisation of the decoder. They use the MLE to derive an initialization scheme and empirically compare it on synthetic data.
\citet{Wuthrich2020} describes connections between GLM and neural network regression models, by interpreting the last layer of a neural net as GLM. With this, he is able to use an $L^1$ regularized neural net to learn representative features to improve a standard GLM. Furthermore, favourable properties (as, for an actuarial context, the ``balance property'') are achieved by a proposed hybrid model.
\section{Analysing the \texorpdfstring{$\beta$}{TEXT}-VAE objective}\label{sec: Theoretical Background and Advancements}
For realizations $x^{(1)},\ldots, x^{(N)}$ of a random variable (r.v.) $X$, we consider a $\beta$-VAE as in \citet{Higgins2017} with the objective $\mathcal{L}$, given by
\begin{align}
\mathcal{L}(\phi,\theta) :=& \dfrac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{Z \sim q_{\phi}\left(\cdot|x^{(i)}\right)}\left[\log P_{\theta}(x^{(i)}|Z)\right] -\beta D_{KL}\left(q_{\phi}(Z|x^{(i)})|| P_{\theta}(Z) \right),\label{eq:ELBO_up_here}
\end{align}
where $\beta\geq0$. Interpreting the expectation in this expression as an autoencoder yields the encoder $q_{\phi}(Z|x^{(i)})$ and the decoder $P_{\theta}(x^{(i)}|Z)$. We make the usual assumptions (see \citealt{Kingma}): $P_{\theta}(Z) \sim \mathcal{N}(\boldsymbol{\vec{0}}, \boldsymbol{I})$, and $P_{\theta}(Z|X)$ is approximated by the recognition model with variational distribution
\[
q_{\phi}(z|X) \sim \mathcal{N}(\boldsymbol{\mu}_z,\boldsymbol{\Sigma}_z).
\]
We further assume for the encoder parameters $\bmu_z^{(i)} := f_1(x^{(i)},\phi)$ and $0 \prec \bSigma_z^{(i)} := \bS_z^{(i)}{\bS_z^{(i)}}^T$, with $\bS_z^{(i)} := f_2(x^{(i)},\phi)$, where $f_1$ and $f_2$ are arbitrary functions including affine transformations.
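As an illustration only (the theory allows arbitrary functions $f_1$ and $f_2$), a common concrete choice is an affine map for $\bmu_z$ together with a diagonal, positive $\bS_z$; the following sketch (Python with NumPy) is one such toy parametrization of ours:
\begin{verbatim}
import numpy as np

def encoder(x, W1, b1, W2, b2):
    # one possible affine choice for f_1 and f_2: mu_z is affine in x,
    # and S_z is diagonal with positive entries, so that
    # Sigma_z = S_z S_z^T is positive definite
    mu_z = W1 @ x + b1
    S_z = np.diag(np.exp(W2 @ x + b2))
    return mu_z, S_z
\end{verbatim}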
In the following, we first provide results for the special case of a Gaussian observation model. Then, we provide the theoretical background on EDF and show how the decoder can be interpreted as GLM. Finally, given this new perspective, we present an approximation to the objective in \eqref{eq:ELBO_up_here}, MLE for this approximation and use these MLE to describe the auto-pruning of $\beta$-VAE.
\subsection{Behaviour of the \texorpdfstring{$\beta$}{TEXT}-VAE objective for a Gaussian observation model}\label{subsec:Behaviour for a Gaussian observation model}
In this introductory section, we motivate the novel derivations for the EDF distribution families (see Section \ref{subsec:Local Behaviour for general EDF observation model}) by recapitulating known results for the Gaussian observation model case. We mainly follow the line of argument given in \cite{Kumar2020a}.
Consider a Gaussian observation model with independent marginals $P_{\theta}(x | z_0) \sim \mathcal{N}\left(\boldsymbol{\vartheta},\sigma^2 \boldsymbol{I}\right)$. The deterministic component\footnote{For notational reasons (see Section \ref{subsec:The EDF and the decoder as GLM}), we use $\boldsymbol{\vartheta}$ instead of the commonly used $\boldsymbol{\mu}$. } $\boldsymbol{\vartheta}: Z \rightarrow \mathbb{R}^d$ of the decoder maps to the location parameter (in this case the mean) of the observation model $\log P_{\theta}(x | z)$.
To derive their ``Gaussian Regularized Autoencoder'' \citet{Kumar2020a} use a second order Taylor series expansion of the decoder $f_x(z) = \log P_{\theta}(x | z)$ in a $z_0 \in \mathbb{R}^\kappa$
\begin{equation}\label{eq:taylor PooleKumar}
f_x(z) \approx \log P_{\theta}(x | z_0) + J_{f_x}(z_0) ( z- z_0) + \dfrac{1}{2}( z- z_0)^T H_{f_x}(z_0)( z- z_0),
\end{equation}
where $J_{f_x} \in \mathbb{R}^{1 \times \kappa}$ denotes the Jacobian and $H_{f_x} \in \mathbb{R}^{\kappa \times \kappa}$ the Hessian of $f_x$ evaluated at $z_0$. The benefit of this approximation is that we can analytically calculate the expectation term in \eqref{eq:ELBO_up_here}. Given an analytically solvable KL-Divergence, the target of the ELBO becomes deterministic, which is beneficial for the analysis of $\beta$-VAE.
\citet{Kumar2020a} show that if piecewise linear activations are considered for the deterministic component\footnote{\citet{Kumar2020a} denote this component as $g$. } $\vartheta$, we get
\begin{equation}\label{eq:pw Lin Hf PooleKumar}
H_{f_x}(z) = J_\vartheta^T \left(\nabla_\vartheta^2 \log P_{\theta}(x | z)\right)J_\vartheta,
\end{equation}
where $J_\vartheta \in \mathbb{R}^{d \times \kappa}$ is the Jacobian of $\boldsymbol{\vartheta}$. By considering piecewise linear functions, we allow for the decoder architecture to have an arbitrary amount of layers with prominent activations like the ReLU (see \citealt{nair2010rectified}) and alternations of this. \citet{Kumar2020a} choose $z_0^{(i)}=\mathbb{E}_{q_{\phi}\left(\cdot|x^{(i)}\right)}\left(Z\right)$ to remove the first order term in \eqref{eq:taylor PooleKumar}. This is a reasonable choice, but for the sake of later results, we will stick with general Taylor expansion points $z_0^{(i)} \in \mathbb{R}^\kappa$. Using \eqref{eq:taylor PooleKumar} in \eqref{eq:ELBO_up_here}, yields the deterministic objective
\begin{align}
\mathcal{L}(\phi,\theta) \approx \widehat{\mathcal{L}}(\phi,\theta) := \dfrac{1}{N} \sum_{i=1}^{N} & \log P_{\theta}(x | z_0^{(i)}) + J_{f_x}(z_0^{(i)}) ( \bmu_z^{(i)}- z_0^{(i)}) \label{eq:Tay VAE Objective}\\&+ \dfrac{1}{2}\textnormal{tr}\left(J_\vartheta^T(z_0^{(i)}) \left(\nabla_\vartheta^2 \log P_{\theta}(x | z_0^{(i)})\right)J_\vartheta(z_0^{(i)}) \bSigma_z^{(i)}\right)\nonumber\\
&-\beta D_{KL}\left(q_{\phi}(Z|x^{(i)})|| P_{\theta}(Z) \right).\nonumber
\end{align}
\citet{Kumar2020a} argue that for the deterministic approximation to be accurate either higher central moments of the variational distribution or the higher order derivatives ($\nabla^n_z \log P_{\theta}(x | z), n \geq 3$) need to be small. Based on the EDF representation, we give bounds for this approximation error (see Corollary \ref{corr: F error}). For the Gaussian case, the approximation error vanishes, regardless of the choice for $\boldsymbol{\vartheta}$. Hence, the Taylor expansion point is arbitrary in this case.
Maximizing w.r.t. $\bSigma_z^{(i)}$ and $\bmu_z^{(i)}$ yields
\begin{equation}\label{eq:optSigma_up_here}
\bhSigma_z^{(i)} = \left( I - \dfrac{1}{\beta}H_{f_x}(z_0^{(i)})\right)^{-1}
\end{equation}
and
\begin{equation}\label{eq:optMu_up_here}
\hat{\bmu}_z^{(i)} = \dfrac{1}{\beta}\bhSigma_z^{(i)}\left(J_{f_x}(z_0^{(i)})^T - H_{f_x}(z_0^{(i)}) \cdot z_0^{(i)} \right) .
\end{equation}
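For illustration, the following minimal Python sketch (our own, not code from the original implementation; the argument names for the Jacobian and Hessian of $f_x$ at $z_0$ are assumptions) evaluates \eqref{eq:optSigma_up_here} and \eqref{eq:optMu_up_here}:
\begin{verbatim}
# Minimal sketch (ours): optimal variational parameters of the
# Taylor-approximated objective; J_fx has shape (kappa,), H_fx has
# shape (kappa, kappa).
import numpy as np

def optimal_variational_params(J_fx, H_fx, z0, beta):
    kappa = z0.shape[0]
    # Eq. (optSigma): positive definite whenever H_fx is negative
    # semidefinite, which holds for the models considered here.
    Sigma_hat = np.linalg.inv(np.eye(kappa) - H_fx / beta)
    # Eq. (optMu)
    mu_hat = Sigma_hat @ (J_fx - H_fx @ z0) / beta
    return mu_hat, Sigma_hat
\end{verbatim}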
For now\footnote{The piecewise linear case is considered in section \ref{subsec:Local Behaviour for general EDF observation model}.} considering the Gaussian case and $\boldsymbol{\vartheta}(z)= Wz + b$, with $W \in \mathbb{R}^{d \times \kappa}$ and $b \in \mathbb{R}^d$, we get by substituting the expressions \eqref{eq:optSigma_up_here} and \eqref{eq:optMu_up_here}
\begin{align}
\mathcal{L}(\theta,\hat{\phi}) = \dfrac{-1}{2N}\sum_{i=1}^N\bigg[& \left(x^{(i)} - b \right)^T C^{-1} \left(x^{(i)} - b \right)+ \beta \log |C| + d \log(2\pi \sigma^{2(1-\beta)}) \bigg], \label{eq:Gaussian ELBO pPCA}
\end{align}
where $C := \left(\sigma^2 \boldsymbol{I} + \beta^{-1} WW^T\right)$. The derivation of this objective is the same as in the proof of Proposition \ref{prop:general prop for VAE target}, which can be found in Appendix \ref{app:proof of prop}.
The objective in \eqref{eq:Gaussian ELBO pPCA} is equivalent to the objective in \eqref{eq:ELBO_up_here} and reveals the connections of the VAE objective with $\beta=1$ to probabilistic PCA. As stated in \citet{Dai2018}, a solution for $W$ and $b$ can be derived analytically as given in \citet{Tipping1999}.
Solutions for the general EDF observation model can be found in Section \ref{subsec:MLE and optimal solutions}.
In the next section, we introduce the EDF, to which the Gaussian distribution belongs, and GLM. Further, we state assumptions in order to generalize the approach from this section.
\subsection{The EDF and the decoder of a VAE as GLM}\label{subsec:The EDF and the decoder as GLM}
\citet{Nelder1972} introduce GLM, providing a generalization of linear statistical models and thus of well-known statistical tools, such as analysis of variance (ANOVA), deviance statistics and MLE (see \citealt{McCullagh1989}).
GLM consist of three parts: a random component $X$ with a distribution belonging to the EDF, a systematic component given as an affine mapping of features $Z$ used to estimate $\mathbb{E}(X|Z)$, and a link function connecting these two components. The EDF is defined by the structure of the density.
\begin{defi}
We say the distribution of $X$ given $Z$ belongs to the exponential dispersion family (EDF), if the (conditional) density can be written as
\begin{equation}\label{eq:log_density_expfam}
\log P_{\vartheta,\varphi}(X| Z) = \dfrac{X \cdot \vartheta(Z) - F(\vartheta(Z))}{\varphi} + K(X,\varphi),
\end{equation}
where $F$ and $K$ are one-dimensional functions. $F: \mathbb{R} \rightarrow \mathbb{R}$ is called the log-normalizer. It ensures that integration w.r.t. the density in \eqref{eq:log_density_expfam} over the support of $X$ is equal to one.
$\vartheta(Z) \in \Theta$ is the location parameter. $\Theta$ is an open, convex space with
\[\Theta = \left\{\vartheta \in \mathbb{R}: \int_x \exp \left(\dfrac{x\vartheta }{\varphi} + K(x,\varphi)\right) dx < \infty\right\}.
\]
$\varphi > 0$ is called the dispersion parameter and is independent of $Z$.
\end{defi}
The EDF is a subset of the more general Exponential Family and differs by the fact that we can identify the dispersion parameter $\varphi$. Several well-known distributions, like the Gaussian, Bernoulli and Poisson distributions, belong to this family. See Table \ref{tab:exp_fam_dists} for the respective representations.
{\renewcommand{\arraystretch}{2}
\begin{table}[!htb]
\caption{An overview of well-known distributions that can be written as EDF distributions. The functions for the EDF representation, as well as $\vartheta$ and $\varphi$ in terms of the natural parameters, are displayed.}
\label{tab:exp_fam_dists}
\centering
\begin{tabular}{ccccc}
\toprule
Dist. of X & $F(\vartheta)$ & $K(x,\varphi)$ & $\vartheta$ & $\varphi$ \\
\midrule
\makecell[c]{$Bin(n,p)$,\\with $n$ fixed} & $n \log\left(1+\exp\left(\vartheta\right)\right)$ & $\log\binom{n}{x}$ & $\log\left(\dfrac{p}{1-p}\right)$ & $1$ \\
\hline
\makecell[c]{$Bern(p)$\\$=Bin(1,p)$}& $\log\left(1+\exp\left(\vartheta\right)\right)$ & $0$ & $\log\left(\dfrac{p}{1-p}\right)$ & $1$ \\
\hline
\makecell[c]{$\mathcal{N}(\mu,\sigma^2)$,\\with $\sigma^2$ fixed} & $\dfrac{\vartheta^2}{2}$ & $-\dfrac{x^2}{2\varphi} - \dfrac{\log\left(2\pi\varphi\right)}{2}$ & $\mu$ & $\sigma^2$ \\
\hline
$Pois(\lambda)$ & $\exp(\vartheta)$ & $- \log\left(x!\right)$ & $\log(\lambda)$ & $1$ \\
\bottomrule
\end{tabular}
\end{table}}
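As a quick consistency check of Table \ref{tab:exp_fam_dists}, the following Python snippet (our own; it relies only on standard SciPy log-densities) verifies that \eqref{eq:log_density_expfam} reproduces the familiar Gaussian, Bernoulli and Poisson log-densities for sample parameter values:
\begin{verbatim}
# Sanity check (ours): log p = (x*theta - F(theta))/phi + K(x, phi)
import numpy as np
from scipy.stats import norm, bernoulli, poisson
from scipy.special import gammaln

x, mu, sigma2 = 1.3, 0.4, 2.0
theta, phi = mu, sigma2
edf = (x * theta - theta**2 / 2) / phi \
      - x**2 / (2 * phi) - np.log(2 * np.pi * phi) / 2
assert np.isclose(edf, norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))

x, p = 1, 0.3
theta = np.log(p / (1 - p))
edf = x * theta - np.log1p(np.exp(theta))         # phi = 1, K = 0
assert np.isclose(edf, bernoulli.logpmf(x, p))

x, lam = 4, 2.5
theta = np.log(lam)
edf = x * theta - np.exp(theta) - gammaln(x + 1)  # K(x,1) = -log(x!)
assert np.isclose(edf, poisson.logpmf(x, lam))
\end{verbatim}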
The EDF is studied in \citet{BarndorffNielsen2014}, \citet{Jorgensen1986} and \citet{Jorgensen1987a}. For an EDF distribution, the expectation as well as the variance can easily be computed: the log-normalizer $F$ provides explicit forms of the conditional expectation and variance and has further desirable properties, as can be seen in the following Lemma.
\begin{lemm}\label{lemm: EDF e-wert, F}
Let the distribution of a one-dimensional r.v. $X\sim P_{\vartheta,\varphi}(X| Z)$ given $Z$ belong to the EDF. Then, it holds $\mathbb{E}(X| Z) = F'(\vartheta(Z))$ and $\mathbb{V}ar(X| Z)= \varphi \, F''(\vartheta(Z))$.
Furthermore, the log-normalizer function $F$ is convex and possesses all derivatives.
\end{lemm}
The proof for the unconditional case is performed in Theorem 7.1, Corollary 7.1 and Theorem 8.1 in \citet{BarndorffNielsen2014}. The statement for the conditional case follows analogously.
We interpret the decoder $P_{\theta}(x^{(i)}|Z)$ as GLM. Therefore, we assume that the independent identical marginal distributions of $X$ given $Z$ belong to an EDF, where they share the same $\varphi$. With $Z \sim q_{\phi}\left(\cdot|x^{(i)}\right)$ from the encoder, the parameters of the decoder $P_{\theta}(x^{(i)}| Z)$ are given by $\theta = \left\{\boldsymbol{\vartheta},\varphi\right\}$ and we have
\[
\boldsymbol{\vartheta}(Z) = \left(\vartheta_1(Z),\ldots,\vartheta_d(Z)\right)^T.
\]
In order for the neural net implementation $\boldsymbol{m} \circ \boldsymbol{\vartheta}: \mathbb{R}^{\kappa} \rightarrow \mathbb{R}^d$ of a decoder to be reasonable for the log-likelihood part in \eqref{eq:ELBO_up_here}, the decoder should approximate the expectation of $x^{(i)}$ given $Z$. According to Lemma \ref{lemm: EDF e-wert, F}, the last activation $\boldsymbol{m}: \boldsymbol{\vartheta}(Z) \rightarrow \mathbb{R}^d$ has to be $\boldsymbol{m} = F'$ (applied element-wise) to get
\begin{equation}\label{eq:decoder as GLM}
\boldsymbol{m}(\boldsymbol{\vartheta}(Z)) = F'(\boldsymbol{\vartheta}(Z)) = \mathbb{E}_{\boldsymbol{\vartheta},\varphi}\left(x^{(i)}| Z\right).
\end{equation}
We call the choice of $\boldsymbol{m}$ in \eqref{eq:decoder as GLM} ``canonical activation''. This name originates from the ``canonical link function''. As mentioned before, for GLM a link function $g$, connecting the systematic component of the model $\boldsymbol{\vartheta}(z)$ to the random component $\mathbb{E}_{\boldsymbol{\vartheta}}\left(X| z\right)$, is used. This function is called canonical if $g = (F')^{-1}$. Hence, the canonical activation is the inverse of the canonical link. In practice, various link functions, or in our case activations, are considered.
We want to emphasize that common neural net implementations depend on the choice of the last activation function to properly map to the natural parameters of the distribution, as the choice of loss function is strongly connected to this:
\begin{itemize}
\item If we use a Mean-Squared-Error loss and therefore implicitly\footnote{Furthermore, $\sigma^2$ is implicitly set to 1/2 which can result in unwanted consequences (see Section \ref{subsec:MLE and optimal solutions}).} assume a Gaussian ($\mathcal{N}(\boldsymbol{\mu},\sigma^2\boldsymbol{I})$) log-likelihood, the last activation has to be linear. In Section \ref{subsec:Behaviour for a Gaussian observation model}, we have implicitly assumed the last activation $\boldsymbol{m}$ to be the identity to ensure
\[
\boldsymbol{m}(\boldsymbol{\vartheta}) = id(\boldsymbol{\vartheta})= F'(\boldsymbol{\vartheta}) = \boldsymbol{\mu}.
\]
\item For the Binary Cross-Entropy loss, we implicitly assume a Bernoulli distribution $Bern\left(p\right)$. Hence, the last activation should be the sigmoid function to ensure \[
\boldsymbol{m}(\boldsymbol{\vartheta}) = \dfrac{1}{1+\exp(-\boldsymbol{\vartheta})} = F '(\boldsymbol{\vartheta}) = p.
\]
\end{itemize}
Actually, all activations that are equivalent to the choice in \eqref{eq:decoder as GLM} up to a scalar $\rho \in \mathbb{R}\setminus\{0\}$, i.e.
\begin{equation}\label{eq: linearly canonical mapping}
\boldsymbol{m}(\boldsymbol{\vartheta}) = F'(\rho \cdot \boldsymbol{\vartheta}),
\end{equation}
are legitimate choices. We call such activations ``linearly canonical activations'', and for canonical activations we have $\rho=1$. The following example shows that the tanh activation can be used as a linearly canonical activation.
\begin{exam}[Bernoulli distribution - tanh activation]
Assume $X \sim Bern\left(p\left(\boldsymbol{\vartheta}\right)\right)$ and set the activation as
\[
\boldsymbol{m}(\boldsymbol{\vartheta})= 1/2 \cdot \tanh(\boldsymbol{\vartheta}) + 1/2.
\]
As in the Bernoulli case above, $F(\boldsymbol{\vartheta}) = \log\left(1 + \exp\left(\boldsymbol{\vartheta}\right)\right)$ and it can be shown that
\[
\boldsymbol{m}(\boldsymbol{\vartheta}) = F '(2 \cdot \boldsymbol{\vartheta}).
\]
\end{exam}
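The example can be checked numerically; the short Python sketch below (our own illustration) confirms that the sigmoid is canonical ($\rho=1$) and the shifted tanh is linearly canonical with $\rho=2$ for the Bernoulli log-normalizer:
\begin{verbatim}
# Numeric check (ours): m(v) = F'(rho * v) for the Bernoulli EDF,
# with F(v) = log(1 + exp(v)), i.e. F' is the sigmoid function.
import numpy as np

def F_prime(v):
    return 1.0 / (1.0 + np.exp(-v))

v = np.linspace(-5, 5, 101)
sigmoid = 1.0 / (1.0 + np.exp(-v))
tanh_act = 0.5 * np.tanh(v) + 0.5

assert np.allclose(sigmoid, F_prime(1 * v))   # canonical: rho = 1
assert np.allclose(tanh_act, F_prime(2 * v))  # linearly canonical: rho = 2
\end{verbatim}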
Our theory presented in this paper applies to any linearly canonical activation. As we consider piecewise linear functions for $\boldsymbol{\vartheta}$, we can substitute $\hat{\boldsymbol{\vartheta}} := \rho \cdot \boldsymbol{\vartheta}$ and calculate
\[
\boldsymbol{m}(\boldsymbol{\vartheta}(Z)) = F'(\hat{\boldsymbol{\vartheta}}(Z)) = \mathbb{E}_{\hat{\boldsymbol{\vartheta}},\varphi}\left(x^{(i)}| Z\right).
\]
Therefore, for notational ease we will stick with the canonical activations. During our simulations, settings with either sigmoid or tanh activation were indistinguishable.
\subsection{Local Behaviour of the \texorpdfstring{$\beta$}{TEXT}-VAE objective for EDF observation models}\label{subsec:Local Behaviour for general EDF observation model}
In this section, we derive an approximation to the $\beta$-VAE objective in \eqref{eq:ELBO_up_here} similar to the way in Section \ref{subsec:Behaviour for a Gaussian observation model}, but for a more general case by considering distributions from the EDF.
In their work, \citet{Kumar2020a} consider distributions with finite first and second moments, which is even more general. Using the more restrictive class of EDF distributions, we provide an error characterization for the approximation in \eqref{eq:taylor PooleKumar}. Further, given Proposition \ref{prop:general prop for VAE target}, we derive MLE for the affine decoder case (see Section \ref{subsec:MLE and optimal solutions}). Given these, we produce weight and bias initializations (see Section \ref{subsec:MLE-based initialization} and Appendix \ref{subsubsec:Initialization}) and analyse the auto-pruning of $\beta$-VAE (see Sections \ref{subsec:MLE and optimal solutions} and \ref{subsec:Latent dimension activities}).
Apart from the assumptions in Section \ref{sec: Theoretical Background and Advancements}, for the Taylor series expansion based on the decoder $\boldsymbol{m} \circ \boldsymbol{\vartheta}$ as in Section \ref{subsec:Behaviour for a Gaussian observation model}, we further assume
\begin{itemize}
\item $\boldsymbol{\vartheta}$ to be piecewise linear,
\item a canonical activation function $\boldsymbol{m}$ and
\item the Taylor expansion points $z_0^{(i)}$ from \eqref{eq:taylor PooleKumar} belong to the null space (kernel) of $\boldsymbol{\vartheta}$: $z_0^{(i)} \in ker(\boldsymbol{\vartheta})$.
\end{itemize}
\begin{prop}\label{prop:general prop for VAE target}
Assume that the independent identical marginals of $X$ given $Z$ belong to the same EDF distribution with functions $F$ and $K$ as in \eqref{eq:log_density_expfam}. Under the assumptions stated in the beginning of this section, there exists an approximate representation for the VAE objective in \eqref{eq:ELBO_up_here},
\begin{equation}\label{eq: elbo approx helbo}
\mathcal{L}(\theta,\phi) \approx \widehat{\mathcal{L}}(\theta,\phi),
\end{equation}
that admits optimal solutions $\hat{\phi} = \left\{\hat{\bmu}_z^{(i)} ,\bhSigma_z^{(i)}; i=1,\ldots,N\right\}$ (given in \eqref{eq:optSigma_up_here} and \eqref{eq:optMu_up_here}), such that it can be written as
\begin{align}
\widehat{\mathcal{L}}(\theta) := \widehat{\mathcal{L}}(\theta, \hat{\phi}) =
\dfrac{-1}{2N} \sum_{i=1}^N \Bigg[&\left(F''(0)^{-1}(x^{(i)}-F'(0)) + J_\vartheta(z_0^{(i)})z_0^{(i)} \right)^T C(z_0^{(i)})^{-1}\nonumber\\
&\quad\quad\left(F''(0)^{-1}(x^{(i)}-F'(0)) + J_\vartheta(z_0^{(i)})z_0^{(i)} \right)\nonumber\\
&+ \beta\log \left|C(z_0^{(i)})\right|+ \beta \cdot d\log \left(\varphi^{-1} F''(0)\right) +D\left(\varphi\right)\Bigg],\label{eq:prop alt Target}
\end{align}
where $C(z_0^{(i)}) := F''(0)^{-1} \varphi \boldsymbol{I}_d + \beta^{-1} J_\vartheta(z_0^{(i)})J_\vartheta(z_0^{(i)})^T$ and $J_\vartheta(z_0^{(i)}) \in \mathbb{R}^{d\times\kappa}$ is the Jacobian of $\boldsymbol{\vartheta}$. The definition of $D(\varphi)$ can be found in equation \eqref{eq:definition of D(varphi)} of the appendix.
\end{prop}
The proof can be found in Appendix \ref{app:proof of prop}.
By choosing a common Taylor expansion point $z_0^{(i)} = z_0$ for all $i =1,\ldots,N$ (i.e. for all observations), Proposition \ref{prop:general prop for VAE target} shows that the local approximation of the $\beta$-VAE objective for different EDF admits a pPCA-fashioned representation. Furthermore, this representation belongs to the class of matrix perspective functions in \citet{Won2020}, which can be optimized using proximity operators.
See Table \ref{tab:comb_exp_act} for the parameters associated with different EDF distributions. Unfortunately, this approximation is not possible for all EDF distributions. As an example, the canonical activation of the Gamma distribution (which also belongs to the EDF) is given by $-1/x$, with support in $\mathbb{R}^{-}$. Hence, we cannot choose $z_0^{(i)} \in ker(\boldsymbol{\vartheta})$.
{\renewcommand{\arraystretch}{2}
\begin{table}[!htb]
\caption{EDF distribution associated parameters in Proposition \ref{prop:general prop for VAE target}.}
\label{tab:comb_exp_act}
\centering
\begin{tabular}{ccccc}
\toprule
Dist. of $X|Z$ & $F(0)$ & $F'(0)$ & $F''(0)$ & $\beta \cdot d\log \left(\varphi^{-1} F''(0)\right) + D(\varphi)$ \\
\midrule
Bern($p$)& $\log(2)$ & $1/2$ & $1/4$& $\left(1-\beta\right) d \log(4) - 4 N^{-1} \sum_{i=1}^{N} \left\|x^{(i)} - 1/2\right\|_2^2 $ \\
$\mathcal{N}(\mu,\sigma^2)$ & $0$ & $0$ & $1$& $d \log(2\pi \sigma^{2(1-\beta)})$ \\
$Pois(\lambda)$ & $1$ & $1$ & $1$& $2d + N^{-1} \sum_{i=1}^{N}\left[- \left\|x^{(i)} - 1\right\|_2^2 + 2 \log\left(\prod_{j=1}^{d}x^{(i)}_j!\right)\right]$\\
\bottomrule
\end{tabular}
\end{table}}
In the following Corollary, we quantify the Taylor remainder introduced in \eqref{eq:taylor PooleKumar} for different distributions.
\begin{corollary}\label{corr: F error}
Let the assumptions of Proposition \ref{prop:general prop for VAE target} be given. We denote the remainder of the second order Taylor polynomial $T(z;z_0^{(i)})$ in \eqref{eq:taylor PooleKumar} by
\[
f_x(z) - T(z;z_0^{(i)}) = R_2(z;z_0^{(i)}).
\]
\begin{itemize}
\item For a Gaussian observation model, we have $R_2(z;z_0^{(i)})=0 \quad\forall z_0^{(i)} \in Z$ and hence in \eqref{eq: elbo approx helbo}
\[
\mathcal{L}(\theta,\phi) = \widehat{\mathcal{L}}(\theta,\phi).
\]
\item For a Binomial observation model, we obtain
\[
R_2(z;z_0^{(i)}) = \dfrac{n}{8 \cdot 4!} \left\|J_\vartheta(\xi)(z-z_0^{(i)})\right\|_4^4 \cdot M,
\]
with $M \in \left[ \dfrac{-1}{3},1\right]$ and $\xi= z_0^{(i)} + c \left(z - z_0^{(i)}\right)$, where $c \in [0,1]$.
Further, if we assume $\boldsymbol{\vartheta}$ to be affine on the convex set spanned by $z$ and $z_0^{(i)}$, we have $M \in \left[0,1\right]$ and hence
\begin{equation}\label{eq:bound < L < zero}
\mathcal{L}(\theta,\phi) \geq \widehat{\mathcal{L}}(\theta,\phi).
\end{equation}
\item For a Poisson observation model, if we assume $\boldsymbol{\vartheta}$ to be affine on the convex set spanned by $z$ and $z_0^{(i)}$, it can be shown that we have
\[
\sum_{j=1}^d -\boldsymbol{\vartheta}_j(z)^3 \cdot \exp( \boldsymbol{\vartheta}_j(z))/6 \leq R_2(z,z_0^{(i)}) \leq \sum_{j=1}^d -\boldsymbol{\vartheta}_j(z)^3/6.
\]
\end{itemize}
\end{corollary}
See Appendix \ref{app:proof of corollary F error} for the proof.
If we choose $\beta=1$, the objective in \eqref{eq:ELBO_up_here} becomes the ELBO. For $\boldsymbol{\vartheta}(z)= Wz+b$, with $W \in \mathbb{R}^{d \times \kappa}$ and $b \in \mathbb{R}^d$, Corollary \ref{corr: F error} highlights how our theory generalizes the works of \citet{Dai2018}, \citet{Lucas2019} and \citet{Sicks2020}. Under the Gaussian assumption $\widehat{\mathcal{L}}$ is exact. Then, Proposition \ref{prop:general prop for VAE target} yields the objective in \eqref{eq:Gaussian ELBO pPCA} also given by \citet{Dai2018} and analysed by \citet{Lucas2019}. For the Bernoulli distribution, according to \eqref{eq:bound < L < zero} we approximate the ELBO in \eqref{eq:ELBO_up_here} from below. The result is the same lower bound as reported in \citet{Sicks2020}. Thus, the ELBO is bounded from both sides, as naturally its values have to be smaller than zero. Further, as we will see in the simulations, the expected error $\mathbb{E}_{q_{\hat{\phi}}}[R_2(\hat{\theta})]:= \dfrac{1}{N} \sum_{i=1}^N \mathbb{E}_{q_{\hat{\phi}}}[R_2(z^{(i)};z_0^{(i)})]$ evaluated at $M=1$ and $\theta = \hat{\theta}$ serves as an indicator of what to expect from training.
\subsection{MLE for the affine transformation case}\label{subsec:MLE and optimal solutions}
In this section, we derive analytical solutions for the objective in \eqref{eq:prop alt Target}, when the location parameter is given by an affine transformation $\boldsymbol{\vartheta}(z)=Wz+b$. We analyse the optimal values of $W \in \mathbb{R}^{d \times \kappa}$ and $b \in \mathbb{R}^d$ and highlight interesting implications.
First, we rewrite the objective in \eqref{eq:prop alt Target}. With $\boldsymbol{\vartheta}(z)=Wz+b$, we obtain a representation similar to that of \citet{Tipping1999} for pPCA, given by
\begin{equation}\label{eq:hELBO Tipping Bishop}
\widehat{\mathcal{L}}(W,b) = \dfrac{-1}{2} \left( \textnormal{tr}\left(C^{-1}S\right) + \beta \log|C| + \beta \cdot d\log \left(\varphi^{-1} F''(0)\right) + D\left(\varphi\right)\right),
\end{equation}
where $C := \left( F''(0)^{-1} \varphi I_d + \beta^{-1} WW^T \right)$ and
\[
S:= \dfrac{1}{N}\sum\limits_{i=1}^{N} \left(F''(0)^{-1}\left(x^{(i)}-F'(0)\right) - b \right)\left(F''(0)^{-1}\left(x^{(i)}-F'(0)\right) - b \right)^T.
\]
According to \citet{Tipping1999}, the MLE $\hat{b}$ is given by the sample mean
\begin{equation}\label{eq:b MLE sample mean}
\hat{b} = \dfrac{1}{N}\sum_{i=1}^N F''(0)^{-1}\left(x^{(i)}-F'(0)\right).
\end{equation}
Therefore, $S$ with $\hat{b}$ becomes the sample covariance, which we denote as $\hat{S}$. With $\lambda_1, \ldots,\lambda_d$ we denote the (ordered) eigenvalues of the matrix $\hat{S}$ and in a similar way to \citet{Tipping1999}, for $F''(0) \leq 1$, we can derive the MLE of $W$ as
\begin{equation}\label{eq:opt_w}
\hat{W}= U_{\kappa} \left(K_{\kappa}- \beta F''(0)^{-1}\varphi I_{\kappa}\right)^{1/2} R =: U_{\kappa} L R,
\end{equation}
where $U_{\kappa} \in \mathbb{R}^{d\times\kappa}$ is composed of $\kappa$ eigenvectors of the matrix $\hat{S}$. The eigenvectors are associated with the $\kappa$ largest eigenvalues $\lambda_1, \ldots,\lambda_\kappa$. $K_{\kappa} \in \mathbb{R}^{\kappa\times\kappa}$ is a diagonal matrix with entries
\begin{equation}\label{eq: k bigger 1/alpha}
k_j=\left\{\begin{array}{ll} \lambda_j, & \lambda_j \geq \beta F''(0)^{-1}\varphi \\
\beta F''(0)^{-1}\varphi , & \textnormal{else.}\end{array}\right.
\end{equation}
$R \in \mathbb{R}^{\kappa\times\kappa}$ is an arbitrary rotation matrix, which implies that our optimal solution is invariant to rotations. \citet{Dai2018} show this as well as invariance to permutations in their Theorem 2.
Further, for the Gaussian case, the MLE for $\varphi = \sigma^2$ is given by
\begin{equation}\label{eq:sigma-estimator}
\hat{\sigma}^2 = \dfrac{1}{(d-\beta\kappa)} \sum_{i=\kappa+1}^{d} \lambda_i,
\end{equation}
which can be interpreted as the variance lost due to the dimension reduction by the autoencoder. This expression is only well-defined for $\beta \in [0, d/\kappa)$ and we have $\hat{\sigma}^2 > 0$ if $rank(S)>\kappa$. Further, the estimator $\hat{\sigma}^2$ is increasing in $\beta$. Hence the VAE performs optimally in view of reconstruction (has the lowest variance lost) when $\beta=0$.
This observation agrees with the definition of the objective \eqref{eq:ELBO_up_here}: lower $\beta$ values emphasize the reconstruction part. As pointed out by an anonymous reviewer, our observation is in line with the analysis by \cite{Alemi2018} as well as \cite{Rezende2018}. While the results therein originate from a different point of view, the interpretations of $\beta$ are consistent.
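To make the estimators concrete, the following minimal Python sketch (our own illustration, not code from the paper; it assumes the Gaussian row of Table \ref{tab:comb_exp_act}, i.e. $F'(0)=0$, $F''(0)=1$ and $\varphi=\sigma^2$, and fixes the rotation $R=I_\kappa$) assembles \eqref{eq:b MLE sample mean}, \eqref{eq:opt_w} and \eqref{eq:sigma-estimator} from a data matrix:
\begin{verbatim}
# Minimal sketch (ours): MLE of the affine decoder for the Gaussian
# observation model; X has shape (N, d), R is fixed to the identity.
import numpy as np

def affine_decoder_mle(X, kappa, beta):
    N, d = X.shape
    b_hat = X.mean(axis=0)                      # Eq. (b MLE sample mean)
    S_hat = np.cov(X, rowvar=False, bias=True)  # sample covariance
    lam, U = np.linalg.eigh(S_hat)
    lam, U = lam[::-1], U[:, ::-1]              # sort descending
    sigma2_hat = lam[kappa:].sum() / (d - beta * kappa)  # Eq. (sigma-est.)
    cutoff = beta * sigma2_hat                  # beta * F''(0)^{-1} * phi
    k = np.maximum(lam[:kappa], cutoff)         # Eq. (k bigger 1/alpha)
    W_hat = U[:, :kappa] * np.sqrt(k - cutoff)  # Eq. (opt_w) with R = I
    return W_hat, b_hat, sigma2_hat
\end{verbatim}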
It is possible to have $\textnormal{rank}(\hat{W})< \kappa$ as the ``cut-off'' term
\begin{equation}
\label{eq:cut-off}
\beta F''(0)^{-1}\varphi
\end{equation}
controls how many columns in the matrix $ \hat{W}R^T$ are zero. We interpret this as a consequence of the auto-pruning property of VAE. If the data signal is not strong enough, it is pruned away. In common VAE implementations $\sigma^2$ is often implicitly assumed to be equal to $1/2$ (i.e. when using MSE loss without any scaling). \citet{Lucas2019} show how the stability of the estimator for $W$ is influenced by the choice of $\sigma^2$. If $\sigma^2$ (and hence the cut-off value) is chosen too large, principal components cannot be captured by the model. We agree with their conclusion that learning $\sigma^2$ is necessary for gaining a full latent representation.
For the parameter estimates of the variational distribution, we get
\begin{equation}\label{eq:optSigma with opt W}
\boldsymbol{\hat{\Sigma}}_z = \dfrac{\beta \varphi}{F''(0)}R^T K_{\kappa}^{-1}R
\end{equation}
and
\begin{equation}\label{eq:optMu with opt W}
\hat{\bmu}_z^{(i)} = \dfrac{1}{\beta\varphi}\boldsymbol{\hat{\Sigma}}_z \hat{W}^T \left(x^{(i)} - \bar{x}\right)= R^T K_{\kappa}^{-1} L U_{\kappa}^T \dfrac{1}{F''(0)} \left(x^{(i)} - \bar{x}\right).
\end{equation}
When a diagonal covariance structure is imposed, the decoder Jacobian columns are forced to be orthogonal. In \eqref{eq:optSigma with opt W} a diagonal covariance matrix means $R=I_\kappa$. As a result, we have orthogonal columns in the matrix $\hat{W}$. This result supports the findings of \citet{Kumar2020a}. They show an implicit regularization in the local behaviour of the VAE objective \eqref{eq:ELBO_up_here} for a diagonal covariance assumption, without presenting analytical solutions such as the one in \eqref{eq:optSigma with opt W}.
Next, we analyse how the parameter $\beta$ influences the optimal variational parameters and as a consequence the auto-pruning of $\beta$-VAE.
\begin{itemize}
\item For $\beta$ high enough, we get $\hat{W}=\boldsymbol{0}$ and $\boldsymbol{\Sigma}_z = \boldsymbol{I}_\kappa$ and hence $\bmu_z^{(i)} = 0$ independent of the input $x^{(i)}$. Therefore, the Kullback-Leibler Divergence part in \eqref{eq:ELBO_up_here} is amplified enough such that the variational distribution generates independent noise. The posterior collapses.
\item For smaller $\beta$ values, more and more eigenvalue dimensions covered by $U_{\kappa}^T$ are used and scaled appropriately with $K_{\kappa}^{-1} L$. Therefore, in $\bmu_z^{(i)}$ the inputs $F''(0)^{-1}\left(x^{(i)} - \bar{x}\right)$ are mapped increasingly faithfully to the latent space to guarantee a proper reconstruction.
\end{itemize}
We can further analytically compute statistics used to detect active latent dimensions in $\beta$-VAE. \citet{Burda2015} propose the statistic $A_{z_j} = Cov_x\left(\mathbb{E}_{q_{\phi}\left(z_j|x\right)}[z_j]\right)$. They define the dimension $z_j$ to be active if $A_{z_j} > 0.01$. Using the sample covariance to approximate this value with the given data points and using \eqref{eq:optMu with opt W}, we get
\begin{equation}\label{eq:activity analytical}
A_{z_j} \approx \dfrac{\left(k_j-\beta F''(0)^{-1}\varphi\right)\lambda_j}{k_j^2} = \left\{\begin{array}{ll} \dfrac{\left(\lambda_j-\beta F''(0)^{-1}\varphi\right)}{\lambda_j}, & \lambda_j \geq \beta F''(0)^{-1}\varphi \\
0 , & \textnormal{else.}\end{array}\right.
\end{equation}
So the value is either $0$ or equals the (positive) relative distance of the eigenvalue $\lambda_j$ to the cut-off value $\beta F''(0)^{-1}\varphi$. The effect of how $\beta$ controls the activity of the latent space dimensions becomes apparent: the bigger $\beta$, the fewer latent dimensions remain non-zero.
This result implies the ineffectiveness of annealing the $\beta$ parameter during training. If training is conducted long enough and the loss surface is smooth enough, the MLE will be achieved by optimization. Hence, the active units are determined by \eqref{eq:activity analytical} for the final $\beta$ value of the annealing schedule.
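Since \eqref{eq:activity analytical} depends only on the covariance eigenvalues and the cut-off term, it can be evaluated before any training; a short Python sketch (ours) reads:
\begin{verbatim}
# Minimal sketch (ours): approximate activity statistic of Eq.
# (activity analytical) from eigenvalues lam of the sample covariance.
import numpy as np

def analytical_activity(lam, beta, phi, F2_at_0=1.0):
    cutoff = beta * phi / F2_at_0        # beta * F''(0)^{-1} * phi
    return np.where(lam >= cutoff, (lam - cutoff) / lam, 0.0)

# Active dimensions in the sense of Burda et al. (2015):
# analytical_activity(lam, beta, phi) > 0.01
\end{verbatim}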
\section{Simulation results}\label{sec:Simulation results}
In this section, we provide simulation results to illustrate our theoretical results from Section \ref{sec: Theoretical Background and Advancements}.
We consider two applications.
\begin{enumerate}
\item We show that the use of the MLE derived in Section \ref{sec: Theoretical Background and Advancements} as initialization for VAE implementations yields faster training convergence.
\item We compare the analytical calculations of the activity statistics in \eqref{eq:activity analytical} with the resulting activities of $\beta$-VAE implementations. The analytical values serve as a good indicator of how many latent dimensions become inactive during training.
\end{enumerate}
\subsection{MLE-based initialization}\label{subsec:MLE-based initialization}
We focus on the Bernoulli case, popular for image data, and set $\beta=1$. According to Corollary \ref{corr: F error}, $\widehat{\mathcal{L}} = \widehat{\mathcal{L}}(\theta)$ from Proposition \ref{prop:general prop for VAE target} becomes a lower bound yielding \eqref{eq:bound < L < zero}. Therefore, we expect the ELBO of a VAE with a corresponding architecture to lie above $\widehat{\mathcal{L}}$. The essential messages of the simulations are the following:
\begin{itemize}
\item It is reasonable to use $\widehat{\mathcal{L}}$ to analyse the training performance on real life data sets.
\item The statement above is also valid for ReLU-Net decoders.
\item The MLE from Section \ref{subsec:MLE and optimal solutions}, used as initialization, enhance the training performance.
\end{itemize}
For training of the nets, we use the Adam optimizer by \citet{Kingma2015} with learning rate $0.0001$ and a batch size of $100$. Training was done for a total of $25,000$ batch evaluations. The simulations ran on a dual Intel Xeon E5-2670 with 16 CPU @ 2.6 GHz. The longest setup took about one hour of computing time.
By varying the following hyperparameters, we conduct a total of 18 different simulation setups:
\begin{itemize}
\item Architecture: ``Affine'' or ``ReLU-Net'' decoder.
\item Latent dimension $\kappa$: 2, 5 or 20.
\item Data: ``synthetic'', ``frey'' or ``mnist''.
\end{itemize}
We compare our initialization scheme (``MLE-B'') to a benchmark (``Bench'') given by \citet{He2015}. The initialization schemes and different hyperparameters are explained in detail in Appendix \ref{app:Simulation}. Figure \ref{fig:data_frey_kappa_2_act_sigmoid_plot} shows the result of training the two differently initialized VAE on the frey data set with $\kappa=2$. The curves are based on simulations with 10 different seeds. We display the average training performances with pointwise 0.95 confidence intervals.
\begin{figure*}[!htb]
\vskip -0.2in
\begin{center}
\centerline{\includegraphics[width=1.2\linewidth]{pics/review/data_frey_kappa_2_act_sigmoid_plot.pdf}}
\caption{The figure shows two different setups, ReLU-Net and Affine, with frey data, $\kappa=2$ and sigmoid activation. Displayed are the ELBOs of both initialisations MLE-B and Bench as well as the lower bound $\widehat{\mathcal{L}}$ and the expected error $\mathbb{E}_{q_{\hat{\phi}}}[R_2(\hat{\theta})]$ as provided by Corollary \ref{corr: F error}, calculated based on MLE. On the left the ELBO values are calculated with the training data and on the right with test data. The curves are based on simulations with 10 different seeds. We display the average training performances with pointwise 0.95 confidence intervals.}
\label{fig:data_frey_kappa_2_act_sigmoid_plot}
\end{center}
\vskip -0.35in
\end{figure*}
In Figure \ref{fig:data_frey_kappa_2_act_sigmoid_plot}, the bound $\widehat{\mathcal{L}}$ is reasonable, and neither architecture performs significantly better than it.
The results of all simulation schemes can be found in Appendix \ref{app:Simulation results}. Comparing these simulation results and considering Figure \ref{fig:data_frey_kappa_2_act_sigmoid_plot}, we observe the following:
\begin{itemize}
\item For the affine decoder architecture, the initialization MLE-B converges directly, whereas the Benchmark takes much more time. The end values are comparable. For the ReLU-Net decoder architecture, the performance of the two initialization methods mostly shows a small initial advantage of MLE-B which, however, is not as clear as for the affine architecture.
\item In no simulation setup did a net overfit, not even for large values of $\kappa$ with synthetic data, where a much smaller $\kappa$ is needed. This is a consequence of the auto-pruning.
\item For the MLE values, based on Corollary \ref{corr: F error} we know that $\mathcal{L}(\hat{\theta},\hat{\phi})$ lies above $\widehat{\mathcal{L}}(\hat{\theta},\hat{\phi})$. It seems that MLE-B needs a very short burn-in period to perform according to Corollary \ref{corr: F error}. We believe that the offset at the beginning originates from hidden layers (mainly of the encoder) that are not yet fully initialized. We can observe a performance above the expected error. Possible reasons for this are that the values sampled during optimization differ from the analytically calculated expectation, and that the realized values for $\theta$ and $\phi$ differ from the MLE.
\end{itemize}
\subsection{Latent dimension activities}\label{subsec:Latent dimension activities}
In this section, we show that the analytical activities in \eqref{eq:activity analytical} serve as a good indicator for the number of active nodes without conducting training. This statement also holds for ReLU-Net decoders and not just for the affine case. We consider the mnist data set. For the Gaussian observation model, Figure \ref{fig:Activities_21_26} shows histograms of the activity statistics $A_{z_j}$, proposed by \citet{Burda2015}, for the analytical case and an Affine / ReLU-Net decoder (details on the architectures can be found in Appendix \ref{app:Architecture}) after the training. Displayed are the values for different $\beta$. Table \ref{tab:Normal Histogram distance} displays the calculated distances to the analytical calculations based on 10 simulations.
The corresponding figure and table for the Bernoulli observation model can be found in Appendix \ref{app:Bernoulli Activities}.
\begin{figure*}[!htb]
\begin{center}
\centerline{\includegraphics[width=\linewidth]{pics/Activities_21_26_review.pdf}}
\caption{The figure shows histograms of the activities for 40 latent dimensions for our analytical calculation and an Affine/ReLU-Net decoder (as described in Appendix \ref{app:Architecture}) after training. We have considered the mnist data set with a Gaussian observation model. Above each Affine and ReLU-Net histogram plot, we show the distance ($\in [0,1]$, lower is better) as defined in \eqref{eq:histogramm distance} to the analytical histogram.}
\label{fig:Activities_21_26}
\end{center}
\vskip -0.35in
\end{figure*}
{\renewcommand{\arraystretch}{0.5}
\begin{table}[!htb]
\caption{We display the distances ($\in [0,1]$, lower is better) of the histograms to the corresponding analytical calculation for the Gaussian observation model. Displayed are the results of 10 simulations as ``mean$\pm$std''.}
\label{tab:Normal Histogram distance}
\centering
\begin{tabular}{ccccccc}
\toprule
& Beta 1 & Beta 2 &Beta 5 &Beta 10 &Beta 15 & Beta 20 \\
\midrule
Affine & 0.24 ($\pm$0.06) & 0.04 ($\pm$0.02) & 0.04 ($\pm$0.02) & 0.05 ($\pm$0.01) & 0.03 ($\pm$0.02) & 0 ($\pm$0) \\
ReLU & 0.18 ($\pm$0.03) & 0.3 ($\pm$0.02) & 0.49 ($\pm$0.04) & 0.55 ($\pm$0.03) & 0.23 ($\pm$0.01) & 0 ($\pm$0) \\
\bottomrule
\end{tabular}
\end{table}}
Since the analytical calculations are based on the affine decoder architecture, the first two rows look similar. Given a value of $\beta$, we can make trustworthy predictions of how many latent dimensions become inactive during training.
The ReLU-Net decoder behaves differently. It seems that the deeper structure and piecewise linear functions allow the model to use fewer latent dimensions to properly model the data distribution, and hence more latent dimensions can become inactive.
Given this point of view, we can use the analytical calculation of the statistics as a lower estimate of how many latent dimensions will turn out to be inactive after training.
Since the analytical calculation of the statistic in \eqref{eq:activity analytical} is low cost, we recommend using it in either case.
\section{Conclusion}\label{sec:Conclusion}
We have established a new framework for $\beta$-VAE, by interpreting the decoder of a $\beta$-VAE as a GLM. Given this framework, we derive and analyse an approximation to the $\beta$-VAE objective based on the EDF observation model.
We derive MLE for this approximation in the affine transformation setting. Furthermore, we present simulation results validating the theory on real world data sets, like the frey and mnist data set. The results here generalize previous work in this field.
Further, we provide an analytical description of the auto-pruning of $\beta$-VAE. We show that the parameter MLE are directly influenced by the cut-off term in \eqref{eq:cut-off}, which yields the dependence on the parameter $\beta$ for the affine decoder setting. Furthermore, the number of active units is directly affected by this term. Our simulation results suggest that the implications can be used for ReLU-Net decoders.
A possible extension is to integrate distributions like the Gamma distribution, which also belongs to the EDF.
Large-scale, highly collimated energetic plasma outflows are observed in some active galactic nuclei (AGN). Many models have been proposed for the formation of these jets \citep{Ferrari:1998p1023}, but their launching, collimation, and stability remain open issues. Recent observations indicate that the magnetic field structure in AGN jets is helical in nature \citep{Asada:2005p4636, Gabuzda:2004p952, Marscher:2008p1575}. This suggests that magnetic fields play a strong role in the collimation of AGN jets, as was proposed by Blandford and Payne \citep{Blandford:1982p1468}, and that one can use a magnetohydrodynamic (MHD) model to describe their formation and evolution. However, both theory and laboratory experiments show that helical MHD equilibria can be unstable to current-driven kink modes. Understanding the effect of the kink mode on jet morphology is therefore critical to understanding their evolution. Here, we describe a computational MHD study of the stability of plasma jets relative to the kink mode and the effect that jet rotation has on the stability properties.
Many of the earlier computational efforts to model extragalactic jets concentrate on two-dimensional MHD models in which the accretion disk is treated as a boundary condition \citep{Romanova:1997p179, Ouyed:1997p345, Ustyugova:2000p258}. Even though each of the studies cited uses a different initial magnetic field, they all observe the formation of a steady outflow. More recent three-dimensional MHD simulations study the stability of the jet far from the galactic nucleus \citep{Nakamura:2001p2683}. These calculations inject flow and torsional Alfv\'en waves into an MHD equilibrium and show that wiggled structures form in the jet due to the kink mode. Similar calculations, which consider a more realistic atmosphere into which the jet expands, also examine the effect of the kink mode on the jet \citep{Nakamura:2004p687}. These calculations show that rapid rotation of the jet can have a stabilizing effect. The study discussed here aims to further examine the effect of equilibrium rotation on the stability of an expanding jet.
The effect of equilibrium flow on current-driven MHD instabilities has been investigated both in theory and laboratory experiments. Linear MHD calculations show that a sheared axial flow has a stabilizing effect on the kink mode, while a uniform axial flow has no effect on the growth of the instability \citep{Shumlak:1995p2833}. This effect was confirmed experimentally \citep{Shumlak:2003p1156}. Later theoretical work studied the effect of sheared helical flow on the kink mode and showed that the sheared azimuthal flow stabilizes the mode by creating a phase shift in the plasma eigenfunctions \citep{Wanex:2005p1603, Wanex:2007p2684}.
The work discussed here extends the two-dimensional simulations of jet launching \citep{Romanova:1997p179, Ouyed:1997p345, Ustyugova:2000p258} to three dimensions via nonlinear MHD calculations and considers the effect of jet rotation on the current-driven kink mode. By scanning the rotation of the disk, we scan jet rotation, and similar to previous results \citep{Nakamura:2004p687}, the rotation of the jet is observed to stabilize the column. To better understand the stabilizing mechanism of the rotation, we perform linear MHD analysis for a simple cylindrical plasma equilibrium with rigid rotation. These calculations show that the Coriolis force stabilizes the non-resonant kink by distorting the eigenmode.
The paper is organized as follows. Section \ref{nonlinear_jet} discusses the results of nonlinear simulations of extragalactic jet launching and evolution. The stability with regard to the kink mode is shown to depend on the rotational velocity of the accretion disk relative to the Alfv\'en speed of the initial magnetic arcade. Motivated by this result, Section \ref{livc} examines the linear stability of the kink mode in a cylindrical equilibrium with rigid rotation via initial-value MHD calculations. The results show that rigid rotation provides a stabilizing effect. In Section \ref{levc}, ideal MHD eigenvalue calculations are used to confirm the results of Section \ref{livc} and to examine the effect of equilibrium rigid rotation on the unstable range of axial wave numbers. We also examine the physical mechanism of rotational stabilization using the eigenvalue calculations in Section \ref{levc} and show that the Coriolis force stabilizes the kink mode. Discussion of the results and conclusions are given in Section \ref{conclusions}.
\section{Nonlinear Calculations of Jet Propagation}
\label{nonlinear_jet}
To investigate jet propagation, we model the expansion of a magnetic arcade due to accretion disk rotation using a non-relativistic MHD model which ignores gravitational effects. Similar to previous studies \citep{Romanova:1997p179, Ouyed:1997p345, Ustyugova:2000p258}, the accretion disk is treated as a boundary condition on the computational domain. The simulation is initialized with an axisymmetric vacuum magnetic field that is tied to the disk and has zero net magnetic flux through the disk. Thus, both ends of all magnetic field lines are anchored to the accretion disk. The differential rotation of the accretion disk, which rotates with a Keplerian velocity profile, injects magnetic helicity and magnetic pressure into the magnetic field, causing it to coil and expand. The coiled magnetic field produces a hoop stress on the plasma that collimates it on the central axis. The effect of jet rotation on the stability of the column is explored by varying the rotation rate of the accretion disk in individual simulations.
We numerically evolve the visco-resistive nonrelativistic MHD equations,
\begin{equation}
\frac{\partial n}{\partial t} + \bm \nabla \bm \cdot (n \; \mathbf v) = \bm \nabla \bm \cdot D \bm \nabla n
\label{mhd1}
\end{equation}
\begin{equation}
\frac{\partial \mathbf B}{\partial t} = \bm \nabla \bm \times (\mathbf v \bm \times \mathbf B) - \bm \nabla \bm \times \frac{\eta}{\mu_o} (\bm \nabla \bm \times \mathbf B)
\label{mhd2}
\end{equation}
\begin{eqnarray}
\rho \frac{\partial \mathbf v}{\partial t} + \rho \; (\mathbf v \bm \cdot \bm \nabla \mathbf v) = & \frac{1}{\mu_o} (\bm \nabla \bm \times \mathbf B) \bm \times \mathbf B - \bm \nabla p \nonumber \\
& + \bm \nabla \bm \cdot \nu \; \rho \; \bm \nabla \mathbf v
\label{mhd3}
\end{eqnarray}
\begin{eqnarray}
\frac{n}{\gamma - 1} (\frac{\partial k_B T}{\partial t} + (\mathbf v \bm \cdot \bm \nabla) \; k_B T) = -\frac{1}{2} p \; \bm \nabla \bm \cdot \mathbf v \nonumber \\
+ \bm \nabla \bm \cdot n K \bm \nabla k_B T
\label{mhd4}
\end{eqnarray}
\noindent
where $n$ is the particle density, $\mathbf B$ is the magnetic field, $\mathbf v$ is the flow velocity, $p$ is the thermal pressure, $T$ is the ion and electron temperature, $K$ is the isotropic thermal diffusivity, $\nu$ is the viscosity, $\eta$ is the resistivity, and $\gamma$ is the ratio of the specific heats chosen such that $\gamma = 5/3$. The particle density, $n$, is related to the mass density, $\rho$, by a factor of the ion mass. The pressure and temperature are related by the ideal gas relation, assuming that the electrons and ions have the same temperature, $p = 2 n k_B T$. There is an extra term added to the right hand side of the continuity equation (Eq. \ref{mhd1}), given by $\bm \nabla \bm \cdot D \bm \nabla n$. This diffusive term is added for numerical smoothing and the diffusivity coefficient, $D$, is generally chosen to be small. The thermal diffusivity, $K$, is chosen to be $100$ times the electromagnetic diffusivity. The effect of the gravitational force due to the massive galactic central object has been ignored, so gravity does not appear in the momentum equation (Eq. \ref{mhd3}).
The MHD equations are evolved in time using the NIMROD code \citep{Sovinec:2004p1473}. NIMROD is well benchmarked and has been used to model a wide array of plasma experiments \citep{Sovinec:2003p4098} and magnetospheric physics \citep{Zhu:2006p4221}. A cylindrical computational domain with a cylindrical coordinate system given by $(r,\theta,z)$ is used. The spatial discretization scheme combines two numerical methods. A mesh of high-order finite elements is used in the poloidal ($r$-$z$) plane, where the degree of the polynomial basis functions is chosen by the user, and the azimuthal ($\theta$) direction is represented with finite Fourier series. The parameter $m$ is used to identify Fourier components in the azimuthal direction. Convergence studies show that a resolution of $0 \leq m \leq 5$ is sufficient for the dynamics of the expanding jet. Using logarithmic packing of the poloidal mesh on the central axis and disk boundary, we can resolve the jet dynamics in a large domain using a poloidal mesh of $48$ by $48$ fifth-order elements.
Previous studies searching for steady state outflows have treated the outer boundaries of the domain with open boundary conditions, allowing kinetic and magnetic energy to flow out of the domain \citep{Romanova:1997p179, Ouyed:1997p345, Ustyugova:2000p258}. We use closed, perfectly conducting boundary conditions on the outer boundaries to avoid inward propagating wave characteristics. While this boundary condition is certainly unphysical, the outer boundaries are placed at a distance of $r = z = 100 \; r_i$, where $r_i$ is the inner radius of the accretion disk, which is far from the dynamic region of the calculation.
The model of the accretion disk/jet system treats the accretion disk as a boundary condition at $z=0$, where a smoothed axisymmetric Keplerian velocity profile is applied to $v_\theta$:
\begin{equation}
v_\theta(r,\theta,z=0) = \frac{\sqrt{GM} \; r}{(r^2 + r_i^2)^{3/4}} \; .
\end{equation}
\noindent
The remaining components of the fluid velocity on the disk boundary at $z=0$ are chosen such that $v_r = v_z = 0$. A Dirichlet boundary condition is applied on the other, distant boundaries with $\mathbf v = 0$. On all of the boundaries, the number density is constrained to be constant. Mass is allowed to diffuse through the disk boundary (to fill in the coronal mass that is removed by the jet flow) by shaping the diffusivity parameter, $D$, in Eq. \ref{mhd1} such that it is large near the disk boundary and small in the rest of the domain. All of the boundaries are treated as perfect conductors by holding the normal component of the magnetic field constant in time.
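For illustration, a minimal Python sketch (our own; $GM$ and $r_i$ are free parameters here) of the smoothed Keplerian boundary profile reads:
\begin{verbatim}
# Minimal sketch (ours): smoothed Keplerian disk rotation applied at
# the z = 0 boundary; rigid rotation (~r) near the axis, Keplerian
# fall-off (~r^{-1/2}) far from it.
import numpy as np

def v_theta_disk(r, GM=1.0, r_i=1.0):
    return np.sqrt(GM) * r / (r**2 + r_i**2) ** 0.75
\end{verbatim}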
The initial condition is a currentless coronal magnetic field just above the accretion disk. This field is chosen such that there is zero net magnetic flux through the accretion disk boundary. The poloidal magnetic flux, $\psi$, defined by $\mathbf B = \bm \nabla \psi \bm \times \bm \nabla \theta$, is chosen to be
\begin{equation}
\psi(r, z=0) = r^2 \left[1 + \left( \frac{r}{r_i} \right)^2 \right]^{-\alpha} e^{-r^2 / r_o^2} \;
\label{ncjl_2}
\end{equation}
\noindent
on the disk boundary, where $\alpha$ is a parameter with $ 0 < \alpha < 1$. The initial magnetic field is found in terms of the magnetic potential, $\Phi$, which satisfies Laplace's equation and is given by
\begin{equation}
\mathbf B = \bm \nabla \Phi.
\label{ncjl_3}
\end{equation}
\noindent
An analytic solution for $\Phi$ in the domain is found by solving the boundary value problem $\bm \nabla^2 \Phi = 0$, where the normal component of $\bm \nabla \Phi$ on the accretion disk boundary is specified by Eq. \ref{ncjl_2},
\begin{equation}
\frac{\partial \Phi}{\partial z} = \frac{1}{r} \frac{\partial \psi}{\partial r},
\label{ncjl_4}
\end{equation}
\noindent
and $\Phi = 0$ on all of the other boundaries. The poloidal flux on the disk boundary increases from zero at $r=0$ to a maximum value at the O-point of the magnetic field, defined to be the radius where $B_z = 0$, and then exponentially decays to zero. For the calculations discussed here, the values $\alpha = 3/4$ and $r_o = 30 \; r_i$ are used. For this choice of $\alpha$, the radius of the O-point of the magnetic field is at $15.13 \; r_i$.
The initial number density and temperature are constant across the computational domain. Thus, the entire domain is initially filled with a plasma that is essentially unmagnetized away from the initial arcade, and the magnetized jet expands into this thermal plasma. The initial flow velocity is set to zero, and the accretion disk flow is ramped from zero at $t=0$ to a steady profile within one turn of the disk at $r = r_i$. The Keplerian flow of the disk acts to twist the coronal magnetic field, building magnetic pressure above the disk which launches the outflow. This twisting of the magnetic field also creates a strong $\theta$-component to the field, causing a hoop stress which pinches the plasma on the central axis and collimates the outflow.
The results discussed here are given in units of the initial field quantities. All velocities are given in units of the Alfv\'en velocity at the origin, ${v_A}_o = B_o \; \rho_o^{-1/2}$, where $B_o$ and $\rho_o$ are the magnetic field and mass density at the origin respectively. The magnetic field is given in units of $B_o$. Time is given in units of $T_i$, the rotation period of the disk at $r = r_i$.
Four dimensionless parameters are used to describe the system. The first three are commonly used to describe plasma systems: the Lundquist number, $\textrm{S} = \tau_{R} \; \tau_{A_o}^{-1}$, where $\tau_R = \mu_o \; \pi \; r_i^2 \; \eta^{-1}$ is the resistive diffusion time across the inner radius of the accretion disk and ${\tau_A}_o = r_i \; {v_A}_o^{-1}$ is the Alfv\'enic propagation time across the inner radius of the disk based on the Alfv\'en speed at the origin; the magnetic Prandtl number, $\textrm{P}_\textrm{M} = \nu \; \mu_o \; \eta^{-1}$; and the plasma beta at the origin, $\beta = P_T \; P_B^{-1}$, where $P_T$ is the thermal pressure and $P_B$ is the magnetic pressure. The last dimensionless parameter, the drive parameter, $\hat V_D$, is defined as
\begin{equation}
\hat V_D = \frac{v_\theta(r = r_i,z=0)}{v_A(r=r_i, z=0)} \; ,
\end{equation}
\noindent
where $v_A$ is the Alfv\'en velocity. This parameter can be understood as the ratio of how fast the accretion disk twists coronal magnetic field lines to how fast the information of this twisting propagates through the corona. In order to maintain a constant resistive diffusion time relative to the rotation period of the accretion disk in different simulations, the parameter $S \bm \cdot \hat V_D$ is held constant as $\hat V_D$ is varied. Three sets of parameters are considered: P$_\textrm{M}$, $\beta$, and the product $\textrm{S} \bm \cdot \hat V_D$ are fixed at $1$, $1$, and $200 \pi$ respectively, and $\hat V_D$ is varied with the values $0.5$, $1.0$, and $4.0$. The values of $\hat V_D$ are chosen to be similar to previous studies \citep{Moll:2008p4392, Nakamura:2004p687, Ouyed:2003p337} which consider sub-Alfv\'enic disk rotation, and to extend the disk rotation to the previously unstudied super-Alfv\'enic regime.
\begin{figure}[t]
\plotone{f1.pdf}
\caption{Cross section of the $z$-component of the fluid velocity, for $\hat V_D = 0.5$ and $4.0$, at times $t = 65.6$ and $121.7 \; T_i$ respectively. Velocity is shown in units of ${v_A}_o$. Note that the domain extends to $100 \; r_i$ in radius and height.}
\label{nonlinear_jet_vz}
\end{figure}
\begin{figure}[t]
\plotone{f2.pdf}
\caption{Three-dimensional contours of the magnitude of the magnetic field ($B = 0.34 \; B_o$) for $\hat V_D = 0.5$ and $1.0$ after the $m=1$ kink mode has saturated at $t = 43 \; T_i$. Note that the domain extends to $100 \; r_i$ in radius and height.}
\label{modB}
\end{figure}
The $z$-component of the fluid velocity for the $\hat V_D = 0.5$ and $4.0$ calculations at $t = 65.6$ and $121.7 \; T_i$ respectively is shown in Fig. \ref{nonlinear_jet_vz}. While a collimated outflow is produced for both values of $\hat V_D$, non-axisymmetric structure forms in the column for the $\hat V_D = 0.5$ case, due to the presence of an MHD instability. The effect of the instability on the magnetic structure of the jet can be seen in Fig. \ref{modB}. Here the magnitude of the magnetic field is shown for $\hat V_D = 0.5$ and $1.0$ at $t = 43 \; T_i$. While the jet has expanded to a similar length for both values of $\hat V_D$ at these times, the modification of the magnetic structure is more significant for $\hat V_D = 0.5$. For both cases an $m=1$ kink mode creates a helical distortion to the magnetic structure.
To confirm the source of the asymmetry in the $\hat V_D = 0.5$ simulation, we plot the energy of individual Fourier components in Fig. \ref{jet_energy_spectrum_with_m_1_removed}. The $m=1$ component is the first to become unstable, and it nonlinearly drives the $m > 1$ components when it reaches a significant level. The nonlinear drive is confirmed by artificially resetting the dependent fields in the $m=1$ component to zero during the course of a simulation. As can be seen from the dashed traces in Fig \ref{jet_energy_spectrum_with_m_1_removed}, removing the $m=1$ component causes the larger-$m$ components to decay, until the $m=1$ returns to a significant level. Thus, the $m=1$ component nonlinearly channels energy into the $m > 1$ components.
A plot of the magnetic energy of the $m=1$ Fourier component for all three jet simulations is shown in Fig. \ref{m_1_energy}. The jet is unstable to an $m=1$ mode for $\hat V_D = 0.5$ and $1.0$, while it remains nearly stable for $\hat V_D = 4.0$. We calculate the linear growth rate of the $m=1$ mode by making a linear fit to its magnetic energy during the linear growth phase. The growth rates for $\hat V_D = 0.5$, $1.0$, $4.0$ are found to be $2.98$, $1.88$, and $0.13 \; T_i^{-1}$ respectively. Thus, we see that as the accretion disk rotation increases relative to the Alfv\'en velocity of the coronal plasma, i.e. as the drive for the jet increases, the growth rate of the $m=1$ kink mode decreases.
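The growth-rate extraction can be summarized in a few lines; the Python sketch below (our own; it assumes the usual convention that the perturbed energy scales as $e^{2\gamma t}$, so the amplitude growth rate $\gamma$ is half the slope of $\log E$) fits the linear growth phase:
\begin{verbatim}
# Minimal sketch (ours): growth rate of the m = 1 mode from its
# magnetic-energy trace, fitted over the linear growth phase.
import numpy as np

def kink_growth_rate(t, E_m1, t_start, t_end):
    mask = (t >= t_start) & (t <= t_end)
    slope, _ = np.polyfit(t[mask], np.log(E_m1[mask]), 1)
    return 0.5 * slope       # energy ~ |B1|^2 ~ exp(2*gamma*t)
\end{verbatim}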
\begin{figure}[t]
\plotone{f3.pdf}
\caption{Solid lines show the energy of individual azimuthal Fourier components as a function of time for $\hat V_D = 0.5$. The dashed lines show results from a second computation where the fields in the $m=1$ component are reset to zero at $t = 8.0 \; T_i$. The magnetic energy is normalized to $V_{dom} B_o^2 / (2 \mu_o)$, where $V_{dom}$ is the volume of the computational domain.}
\label{jet_energy_spectrum_with_m_1_removed}
\end{figure}
\begin{figure}[t]
\plotone{f4.pdf}
\caption{Energy of the $m=1$ Fourier mode as a function of time for $\hat V_D = 0.5$ (solid line), $1.0$ (dashed line), and $4.0$ (dashed-dotted line). For $\hat V_D = 0.5$ and $1.0$ the jet is observed to be unstable to the $m=1$ kink mode, while for $\hat V_D = 4.0$ the jet is nearly stable. The magnetic energy is normalized to $V_{dom} B_o^2 / (2 \mu_o)$, where $V_{dom}$ is the volume of the computational domain.}
\label{m_1_energy}
\end{figure}
\section{Linear Initial Value Calculations}
\label{livc}
The stability of the kink mode for large values of $\hat V_D$ indicates that flow plays an important role in stabilizing the jet. Motivated by this observation, we perform linear initial value calculations in a simple geometry where the flow can be scanned systematically and consider both rotation and axial flow. A cylindrical domain is used with a coordinate system given by $(r, \theta, z)$. The fields are defined to be periodic in the $z$-direction, and the boundary at $r = r_a$ is treated as a perfect conductor, where $r_a$ is much smaller than the radius of the domain of the nonlinear simulations described in Sec. \ref{nonlinear_jet}. The results are given in terms of the Alfv\'en propagation time across the radius of the cylinder, $\tau_A$ = $r_a \; v_A^{-1}$, where $v_A$ is the Alfv\'en speed at $r = 0$. Here, we solve a linear version of Eqns. \ref{mhd1}-\ref{mhd4} for perturbations to MHD equilibria, with an arbitrary perturbation included in the initial velocity field. The dissipation coefficients are chosen to give a Lundquist number of $S = 1 \bm \times 10^6$ and a magnetic Prandtl number of $P_M = 1$. If an MHD equilibrium is unstable, the solution obtained will be the most unstable linear eigenmode, and the growth rate is determined from the resulting exponential growth.
Our MHD equilibria are based on the paramagnetic pinch \citep{Bickerton:1958p2343}, which is a one-dimensional Ohmic equilibrium with uniform axial electric field. The equilibrium is characterized by the parallel current profile, $\lambda(r)$, defined as
\begin{equation}
\lambda(r) = \frac{\mu_o \mathbf J_0(r) \bm \cdot \mathbf B_0(r)}{B_0(r)^2} = \frac{{E_0}_z {B_0}_z(r)}{\eta B_0(r)^2},
\label{stabalization_1}
\end{equation}
\noindent
where a subscript $0$ is used to represent equilibrium fields. The profile discussed here is defined in terms of the on-axis parallel current, $\lambda_o = \lambda(r = 0)$, and the width of the equilibrium current profile decreases with increasing $\lambda_o$. Given that the $-1/2 \int \lambda \; \delta \mathbf E^{*} \cdot \delta \mathbf B \; d \mathbf x$ term is the only potentially destabilizing term in the linear ideal potential energy that is independent of $\nabla p_o$ \citep{Freidberg:1987p2544pg259}, the parallel current is related to the free magnetic energy available to drive the kink mode. Moreover, for the paramagnetic pinch, $\lambda_o$ serves as a stability parameter for the mode. The equilibrium magnetic field is found by choosing a value for $\lambda_o$ and numerically integrating Ampere's Law, $\bm \nabla \bm \times \mathbf B_0 = \mu_o \mathbf J_0$, using Eq. \ref{stabalization_1} for the parallel component of $\mathbf J_0$.
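As an illustration of this construction, the following Python sketch integrates Ampere's Law for a force-free, zero-pressure simplification of the paramagnetic pinch with $B_z(0) = 1$, so that $\lambda(r) = \lambda_o B_z / B^2$; the finite-$\beta$ equilibria used in our calculations also carry a pressure profile, which is omitted here for brevity.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def paramagnetic_pinch(lambda_o, r_a=1.0, n=400):
    # Force-free: mu_o J = lambda(r) B, so in cylindrical coordinates
    #   dB_theta/dr = lambda*B_z - B_theta/r,  dB_z/dr = -lambda*B_theta,
    # with lambda(r) = lambda_o * B_z / (B_theta^2 + B_z^2).
    def rhs(r, B):
        Bth, Bz = B
        lam = lambda_o * Bz / (Bth**2 + Bz**2)
        return [lam * Bz - Bth / r, -lam * Bth]

    eps = 1e-6 * r_a                   # avoid the axis singularity
    B0 = [0.5 * lambda_o * eps, 1.0]   # small-r series: B_theta ~ lambda_o*r/2
    r = np.linspace(eps, r_a, n)
    sol = solve_ivp(rhs, (eps, r_a), B0, t_eval=r, rtol=1e-8)
    Bth, Bz = sol.y
    lam = lambda_o * Bz / (Bth**2 + Bz**2)
    pitch = r * Bz / Bth               # P(r), cf. Fig. 6
    return r, Bth, Bz, lam, pitch

r, Bth, Bz, lam, P = paramagnetic_pinch(lambda_o=5.0)
\end{verbatim}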
A plot of radial profiles of $\lambda$ from the nonlinear jet calculation with $\hat V_D = 4.0$ at $t = 121.7 \; T_i$ is shown in Fig. \ref{jet_lambda_slices} for $z = 20.25$, $30.14$, $40.41$, and $50.23 \; r_i$. The curves overlap since there is not a significant gradient in $\lambda$ in the $z$-direction. Thus, a one-dimensional equilibrium for the linear calculations is a good approximation of the $\lambda$ profiles in the nonlinear jet calculations. For comparison, the $\lambda$ profile for the paramagnetic pinch with $\lambda_o = 5.0$ is also plotted in Fig. \ref{jet_lambda_slices}.
\begin{figure}[t]
\plotone{f5.pdf}
\caption{Radial $\lambda$ profiles for $z = 20.25$, $30.14$, $40.41$, and $50.23 \; r_i$, from the nonlinear jet calculation with $\hat V_D = 4.0$, at $t = 121.7 \; T_i$ are shown as solid colored lines. The curves overlap since there is not a significant change in $\lambda$ in the $z$-direction. The $\lambda$ profile for the paramagnetic pinch with $\lambda_o = 5.0$ is shown as a dashed line.}
\label{jet_lambda_slices}
\end{figure}
The stability of diffuse pinches, such as the paramagnetic pinch, without equilibrium flow relative to the ideal kink mode has been well studied and is known to depend on the pitch of the magnetic field, $P(r) = r \; B_z(r) \; B_\theta(r)^{-1}$. Considering eigenfunctions of the form $e^{i m \theta - i k z}$, energy analysis shows that for $m=1$ and $\frac{dp_0}{dr} = 0$, the plasma is stable if $k P(r) > 1$ or $k P(r) < (k^2 r^2 - 1) \; (3 + k^2 r^2)^{-1}$ for $r \geq 0$ and all values of $k$ \citep{Robinson:1971p4260}. When there is a region in the plasma where $k' P(r) \leq 1$ and $k' P(r) \geq (k'^2 r^2 - 1) \; (3 + k'^2 r^2)^{-1}$, there is a source of free energy for the $m=1$, $k = k'$ kink mode, and it may be unstable. When $k' P(r) < 1$ in the entire plasma, the mode is non-resonant. If there is a radius, $r_s$, in the plasma where $k' P(r_s) = 1$, $r_s$ divides the plasma into two regions; one where there is free energy for the kink, and one where there is not; and the mode is called resonant. For the paramagnetic pinch, $P(r)$ decreases monotonically, and there is free energy for the kink in the region with $r > r_s$. Since the free energy for the kink is at radii larger than $r_s$, the stabilizing effect of the conducting boundary at $r = r_a$ affects both resonant and non-resonant modes. The magnetic pitch profile of the equilibrium used here is shown in Fig. \ref{safety_factor}. Since $P(r = 0) = 2.0 \; \lambda_o^{-1}$, $k \geq 0.5 \: \lambda_o$ modes are resonant and $k < 0.5 \: \lambda_o$ modes are non-resonant.
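The following short Python sketch applies these criteria to a sampled pitch profile; the sampling and function names are illustrative only.
\begin{verbatim}
import numpy as np

def kink_criteria(r, P, k):
    # Robinson (1971) m=1 conditions on the sampled pitch P(r).
    r = np.asarray(r)
    kp = k * np.asarray(P)
    lower = (k**2 * r**2 - 1.0) / (3.0 + k**2 * r**2)
    # Free energy exists where both inequalities hold.
    free_energy = np.any((kp <= 1.0) & (kp >= lower))
    # The mode is resonant if k*P(r) crosses unity at some r_s.
    resonant = np.any(np.diff(np.sign(kp - 1.0)) != 0)
    return free_energy, resonant

# For the paramagnetic pinch, P(0) = 2/lambda_o, so modes with
# k >= 0.5*lambda_o are resonant and k < 0.5*lambda_o are non-resonant.
\end{verbatim}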
\begin{figure}[t]
\plotone{f6.pdf}
\caption{Magnetic pitch, $P(r) = r \; B_z(r) \; B_\theta(r)^{-1}$, for the paramagnetic pinch equilibrium with $\lambda_o = 5.0$ and $\beta = 1.0$.}
\label{safety_factor}
\end{figure}
To examine the effect of jet rotation, we consider MHD equilibria with rigid rotation in the $\theta$-direction, and use $\Omega$ to denote the rotation frequency. While previous studies have shown that sheared flow is more efficient at stabilizing the kink mode \citep{Wanex:2005p1603}, our nonlinear computations show little azimuthal shear in the vicinity of the jet. Radial profiles of the jet rotation frequency from the nonlinear jet simulation with $\hat V_D = 4.0$ at various times and axial positions are shown in Fig. \ref{jet_rot_freq_slices}. As time increases, the jet rotation frequency reaches a steady state at higher axial positions along the length of the column. For all values of $z$, the rotation frequency is uniform to within $20 \%$ across the radius of the jet, which has a width of $r \leq 1$, and as the column propagates, the rotation frequency flattens. Thus, rigid rotation is a reasonable simplification.
\begin{figure}[t]
\plotone{f7.pdf}
\caption{Radial profiles of the jet rotation frequency for $z = 20.25$, $33.44$, and $50.23 \; r_i$, from the nonlinear jet calculation with $\hat V_D = 4.0$, at times $t = 80.6$, $101.2$, $121.7 \; T_i$. The rotation frequency, $\Omega$, is given by $\Omega = v_\theta \; r^{-1}$.}
\label{jet_rot_freq_slices}
\end{figure}
The paramagnetic pinch is often considered to be a force-free equilibrium in which the current is purely parallel to the magnetic field. However, equilibrium azimuthal flow breaks the force-free nature, since the centrifugal force of the flow must be balanced by another MHD force. Two choices of force balance are considered here. The first, labeled `magnetic-balance', balances the centrifugal force against the force from the perpendicular current,
\begin{equation}
\mathbf J_0 \bm \times \mathbf B_0 = - \rho_0 \Omega^2 \mathbf r,
\label{stabalization_3}
\end{equation}
\noindent
and the parallel current is unchanged. Thus, while the current profile is modified by the introduction of the rotation, the $\lambda(r)$ profile, which is related to the free-energy source for the kink mode, is unaffected. The second force-balance model, labeled `pressure-balance', balances the centrifugal force against the equilibrium pressure,
\begin{equation}
\bm \nabla p_0 = \rho_0 \Omega^2 \mathbf r.
\label{stabalization_4}
\end{equation}
\noindent
For this case the current profile is unchanged by the introduction of the rotation. However, as the rotation increases, the pressure profile becomes increasingly hollow in the sense that it peaks on the edge of the plasma, which can have a stabilizing effect \citep{Freidberg:1987p2544pg259}. The equilibrium pressure is characterized by the plasma $\beta$ on the central axis. The choice of the values for $\lambda_o$ and $\beta$ is motivated by our nonlinear jet calculations, giving $\lambda_o = 5.0$ and $\beta = 1.0$.
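For reference, assuming a uniform equilibrium density $\rho_0$, Eq. \ref{stabalization_4} integrates directly to
\begin{displaymath}
p_0(r) = p_0(0) + \int_0^r \rho_0 \Omega^2 r' \, dr' = p_0(0) + \frac{1}{2} \rho_0 \Omega^2 r^2 ,
\end{displaymath}
which increases monotonically with radius and peaks at the plasma edge, making the hollow profile described above explicit.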
Our numerically computed growth rate of the $m=1$, $k = 0.4 \; \lambda_o$ kink mode as a function of equilibrium rotation frequency, for both force balance models, is plotted in Fig. \ref{nim_lin_gr}. The results show that the growth rate of the mode decreases as rotation increases for both force balance models. The growth rate decreases somewhat faster in the pressure-balance model than in the magnetic-balance model, which we surmise is a result of the additional stabilizing effect of the hollow pressure profile in the pressure-balance model. While the results point to rotation as the important stabilizing mechanism, force-balance requires changes to the pressure profile or the perpendicular current profile as rotation is increased. To examine the effect of modifying the equilibrium forces to balance the centrifugal force from the rotation, a plasma which has the same equilibrium current as the magnetic-balance model, but without rotation, is considered. Here, the equilibrium pressure gradient replaces the centrifugal force by defining a profile which is peaked on the central axis. The resulting growth rate is also plotted in Fig. \ref{nim_lin_gr}. As the pressure gradient increases, the growth rate of the kink mode increases. This result confirms that rotation is the stabilizing influence in the $\Omega$-scans.
\begin{figure}[t]
\plotone{f8.pdf}
\caption{Growth rates from linear initial value calculations for the magnetic-balance and pressure-balance models, and with an equilibrium pressure profile which replaces the centrifugal force (peaked-pressure model). For the peaked-pressure model, there is no equilibrium rotation; instead, the rotationally equivalent pressure is given by $p_0(\Omega,r) = \beta - \Omega^2 r^2 / 2$.}
\label{nim_lin_gr}
\end{figure}
Previous theoretical and experimental studies show that sheared axial flow can stabilize the kink mode in a cylindrical plasma \citep{Shumlak:1995p2833, Shumlak:2003p1156}. Thus, we consider what effect axial flow has on jet stability in the nonlinear simulations via linear initial value calculations with equilibrium axial flow. Non-rotating force-free paramagnetic pinch equilibria with Gaussian axial flow profiles, given by $v_z(r) = v_M \: e^{-\left(2 r / w_g \right)^2}$, are considered. Motivated by the axial flow profiles in the nonlinear jet simulations, we choose $v_M = 0.3 \; v_A$ and consider a range of $w_g$ from $5.0$ to $50.0 \; \lambda_o^{-1}$, where smaller values of $w_g$ correspond to larger flow shear. Axial flow profiles from the stable $\hat V_D = 4.0$ jet simulation and the Gaussian profile used for the linear calculation with $w_g = 25.0 \; \lambda_o^{-1}$ are shown in Fig. \ref{jet_vz}. Growth rates of the kink mode as a function of $w_g$ are plotted in Fig. \ref{gamma_vs_wg}. Over a flow-shear range comparable to that considered by \citet{Shumlak:1995p2833}, the change in the kink growth rate is less than $6.0 \%$. We attribute this to the difference between the equilibria considered here and that examined by \citet{Shumlak:1995p2833}. Based on these results, we conclude that axial flow does not significantly influence the stability of the magnetic column in our nonlinear jet simulations.
\begin{figure}[t]
\plotone{f9.pdf}
\caption{Axial flow profiles from the stable nonlinear jet simulation with $\hat V_D = 4.0$ at $t = 121.7 \; T_i$ at the indicated axial locations. The black dashed line shows the Gaussian flow profile used in the linear calculations with $v_M = 0.3 \; v_A$ and $w_g = 25.0 \; \lambda_o^{-1}$.}
\label{jet_vz}
\end{figure}
\begin{figure}[t]
\plotone{f10.pdf}
\caption{Growth rates from linear initial value calculations with equilibrium axial flow as a function of the Gaussian $v_z$ profile width. The black dashed line shows the growth rate without equilibrium flow.}
\label{gamma_vs_wg}
\end{figure}
\section{Linear Eigenvalue Calculations}
\label{levc}
The results of the linear initial value calculations indicate that the nonlinear simulations remain robust to the kink mode for high rotation rates of the accretion disk because of the rotation of the jet itself. To further examine the effect of azimuthal rotation on the kink mode, we investigate the linear ideal MHD spectrum for rotating paramagnetic equilibria. This eigenmode analysis helps us develop physical insight into the effect of rotation, which is difficult to obtain from the initial value calculations. The theory considers a cylindrical domain which is periodic in the $z$-direction with the one-dimensional rigid-rotation equilibria described in Section \ref{livc}.
\subsection{Linear Eigenvalue Theory}
The simplest approach in considering an MHD equilibrium with flow is to work in a Lagrangian representation. Assuming perturbations to the equilibrium depend on time as $e^{-i \omega t}$, the linearized MHD equation of motion is given by
\begin{eqnarray}
& -\rho_0 \omega^2 \bm \xi - 2 i \rho_0 \omega \mathbf v_0 \bm \cdot \bm \nabla \bm \xi \nonumber \\
& + \rho_0 \mathbf v_0 \bm \cdot \bm \nabla (\mathbf v_0 \bm \cdot \bm \nabla \bm \xi) = \mathbf F(\bm \xi),
\label{lin_th_lag_frame_1}
\end{eqnarray}
\noindent
where from this point, fields without subscripts represent perturbations and are assumed to be small \citep{Freiman:1960p1471, Waelbroeck:1996p1472}. The plasma displacement, $\bm \xi$, is defined by
\begin{equation}
\mathbf v = \frac{\partial \bm \xi}{\partial t}.
\end{equation}
\noindent
The linear force operator, $\mathbf F( \bm \xi)$, is given by
\begin{eqnarray}
\mathbf F( \bm \xi )= -\bm \nabla p + \frac{1}{\mu_o}\mathbf J_0 \bm \times \mathbf B + \frac{1}{\mu_o}(\bm \nabla \bm \times \mathbf B) \bm \times \mathbf B_0 \nonumber \\
+ \bm \nabla \bm \cdot (\rho_0 \bm \xi \; \mathbf v_0 \bm \cdot \bm \nabla \mathbf v_0),
\label{lin_th_lag_frame_2}
\end{eqnarray}
\noindent
where the perturbed magnetic field and the perturbed pressure are given by
\begin{equation}
\mathbf B = \bm \nabla \bm \times (\bm \xi \bm \times \mathbf B_0),
\label{lin_th_lag_frame_3}
\end{equation}
\begin{equation}
p = -(\bm \xi \bm \cdot \bm \nabla p_0 + \gamma p_0 \bm \nabla \bm \cdot \bm \xi).
\label{lin_th_lag_frame_4}
\end{equation}
As a consistency check, we also evaluate the spectra derived from an Eulerian frame of reference by including equilibrium flow in the definition of the Lagrangian displacement vector, $\bm \xi$, which satisfies \citep{Chandrasekhar:1961p3747}
\begin{equation}
\mathbf v = \frac{\partial \bm \xi}{\partial t} + \bm \nabla \bm \times (\bm \xi \bm \times \mathbf v_0).
\label{lin_eu_frame_1}
\end{equation}
\noindent
Linearizing the MHD equations in the Eulerian frame with rigid equilibrium rotation gives the following momentum equation, force operator, induction equation, and pressure equation respectively,
\begin{eqnarray}
- \omega_D^2 \bm \xi - 2 i \Omega \omega_D (\bm{\hat z} \bm \times \bm \xi) + r \Omega^2 (\bm \nabla \bm \cdot \bm \xi) \nonumber \\
\left[ \left( 3 + \frac{m \Omega}{\omega_D} \right) \bm{\hat r} + i \frac{\omega_D}{\Omega} \bm{\hat \theta} \right] = \frac{1}{\rho_0} \mathbf F( \bm \xi ) ,
\label{lin_eu_frame_2.1}
\end{eqnarray}
\begin{eqnarray}
\mathbf F( \bm \xi ) = & \frac{1}{\mu_o} \left( \mathbf B_0 \bm \cdot \bm \nabla \mathbf B + \mathbf B \bm \cdot \bm \nabla \mathbf B_0 \right) - \nonumber \\
& \bm \nabla \left( p + \frac{1}{\mu_o} \mathbf B \bm \cdot \mathbf B_0 \right),
\label{lin_eu_frame_2.2}
\end{eqnarray}
\begin{equation}
\mathbf B = \bm \nabla \bm \times (\bm \xi \bm \times \mathbf B_0) + \frac{\Omega {B_0}_z}{\omega_D} (\bm \nabla \bm \cdot \bm \xi) (r k \bm{\hat \theta} - m \bm{\hat z}),
\end{equation}
\begin{equation}
p = - (\bm \xi \bm \cdot \bm{\hat r}) \frac{dp_0}{dr} - \gamma p_0 \left( 1 + \frac{m \Omega}{\omega_D} \right) (\bm \nabla \bm \cdot \bm \xi),
\label{lin_eu_frame_2}
\end{equation}
\noindent
where $\omega_D = \omega - m \Omega$ is the Doppler shifted eigenfrequency.
Generalizing the analysis in Ref. \citep{Freidberg:1987p2544pg473}, the linearized equations in either reference frame are reduced to a pair of coupled first-order differential equations for the radial plasma displacement, $\xi_r$, and the total perturbed plasma pressure, $\tilde P = p + B \; B_0 / \mu_o$. Assuming spatial dependence of the perturbed fields of the form $e^{i m \theta - i k z}$, Eqs. \ref{lin_th_lag_frame_1}-\ref{lin_th_lag_frame_4} and Eqs. \ref{lin_eu_frame_2.1}-\ref{lin_eu_frame_2} become systems of ordinary differential equations (ODEs) with respect to the $r$-coordinate. By considering the projection of Eqs. \ref{lin_th_lag_frame_1} and \ref{lin_eu_frame_2.1} in the $\bm{\hat b}$ and $\bm{\hat \eta}$ directions, where $\bm{\hat b} = \frac{\mathbf B_0}{| \mathbf B_0 |}$ and $\bm{\hat \eta} = \bm{\hat b} \bm \times \bm{\hat r}$, the $\bm{\hat b}$ and $\bm{\hat \eta}$ components of the plasma displacement can be solved for analytically. Substituting these results into Eqs. \ref{lin_th_lag_frame_1} and \ref{lin_th_lag_frame_4} and Eqs. \ref{lin_eu_frame_2.1} and \ref{lin_eu_frame_2} produces sets of coupled ODEs with the same general form,
\begin{equation}
\underline{\underline{A}}(r,\omega) \; \frac{d}{dr} \left(\begin{array}{c}r \xi_r \\ \tilde P \end{array}\right) = \underline{\underline{B}}(r,\omega) \; \left(\begin{array}{c}r \xi_r \\ \tilde P \end{array}\right),
\label{lin_th_lag_frame_5}
\end{equation}
\noindent
in both reference frames.
We consider the plasma to be surrounded by a conducting shell at the radius $r = r_a$ by defining $\xi_r(r_a) = 0$. The regularity condition at $r = 0$ is imposed by the cylindrical geometry of the domain. Expansion of $\xi_r$ in a power series for small values of $r$ shows that regular solutions satisfy $\xi_r \propto r^{m-1}$. Equation \ref{lin_th_lag_frame_5} coupled with these boundary conditions defines an eigenvalue problem with $\omega$ as the eigenvalue.
It should be noted that while the form of this eigenvalue equation is the same in both reference frames, the ODE coefficient matrices $\underline{\underline{A}}$ and $\underline{\underline{B}}$ are unique to each frame. Equation \ref{lin_th_lag_frame_5} is derived for a general equilibrium flow in a Lagrangian frame, and the coefficients can be found in \citet{Bondeson:1987p2834}. The ODE coefficients for a plasma equilibrium with rigid rotation and uniform axial flow in an Eulerian frame can be found in \citet{Appl:1992p2913}.
Due to the complexity of the ODE coefficients in Eq. \ref{lin_th_lag_frame_5}, we use a shooting method to solve the eigenvalue problem. A value is chosen for $\omega$, and Eq. \ref{lin_th_lag_frame_5} is numerically integrated from $r = 0$ to $r = r_a$ using fourth-order Runge-Kutta integration. The choice of $\omega$ is varied until the eigenfunction satisfies $\xi_r(r_a) = 0$. A Newton-Raphson method is used to search the $\omega$-parameter space for functions that satisfy this boundary condition.
In the absence of equilibrium flow, the MHD force operators in Eqs. \ref{lin_th_lag_frame_2} and \ref{lin_eu_frame_2.2} are self-adjoint, and $\omega$ is either purely real or purely imaginary \citep{Freidberg:1987p2544pg242}. With the introduction of equilibrium flow, the force operator is no longer self-adjoint, and $\omega$ and $\bm \xi(r)$ can be complex \citep{Freiman:1960p1471}. The real component of the eigenvalue, $\Re[\omega]$, gives the oscillation frequency of the eigenmode, and the imaginary component, $\Im[\omega]$, determines its growth or decay rate. The Newton-Raphson method employed here is generalized to search the complex parameter space \citep{Press:2007p3009}. While Newton-Raphson readily generalizes to multiple dimensions, it converges only if the initial guess for the root is in the vicinity of the actual root. Since $\omega$ is either purely real or purely imaginary without the equilibrium flow, Newton-Raphson is used in a one-dimensional space to find $\Im[\omega]$ with $\Omega = 0$ for a given mode. The $\Omega = 0$ result is then used as an initial guess for a nearby equilibrium with flow, and that result is used as an initial guess for a slightly larger value of $\Omega$. This process is repeated for increasing values of $\Omega$.
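A minimal Python sketch of this shooting procedure follows; the coefficient matrices $\underline{\underline{A}}$ and $\underline{\underline{B}}$ are assumed to be supplied by a user routine \texttt{coeffs(r, omega)} (their lengthy entries are given in \citet{Bondeson:1987p2834} and \citet{Appl:1992p2913}), and the small-$r$ initialization of $\tilde P$ is simplified.
\begin{verbatim}
import numpy as np

def shoot(omega, coeffs, m=1, r_a=1.0, n=2000):
    # Integrate the coupled ODE system from r ~ 0 to r_a with RK4.
    eps = 1e-4 * r_a
    # Regularity: xi_r ~ r^(m-1), so the state r*xi_r ~ r^m.
    # (The matching small-r value of P~ is model dependent; 0 here.)
    y = np.array([eps**m, 0.0], dtype=complex)
    h = (r_a - eps) / n
    r = eps
    def f(r, y):
        A, B = coeffs(r, omega)
        return np.linalg.solve(A, B @ y)
    for _ in range(n):
        k1 = f(r, y)
        k2 = f(r + h/2, y + h/2 * k1)
        k3 = f(r + h/2, y + h/2 * k2)
        k4 = f(r + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        r += h
    return y[0]          # proportional to xi_r at the wall

def find_eigenvalue(omega0, coeffs, tol=1e-10, itmax=50):
    # Newton-Raphson in the complex omega plane on xi_r(r_a) = 0,
    # with a finite-difference derivative.
    omega = omega0
    for _ in range(itmax):
        F = shoot(omega, coeffs)
        dF = (shoot(omega + 1e-7, coeffs) - F) / 1e-7
        omega = omega - F / dF
        if abs(F) < tol:
            break
    return omega

# Continuation in Omega: converge at Omega = 0 first, then reuse each
# converged eigenvalue as the initial guess for a slightly larger Omega.
\end{verbatim}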
\subsection{Linear Eigenvalue Results}
\label{linear_eigenvalue_results}
\begin{figure}[t]
\plotone{f11.pdf}
\caption{Growth rates of the non-resonant $m=1$, $k=0.4 \; \lambda_o$ kink mode as a function of the equilibrium rotation frequency, $\Omega$, from Lagrangian and Eulerian eigenvalue calculations, and from the linear initial value calculations. }
\label{gamma_vs_rotation}
\end{figure}
\begin{figure}[t]
\plotone{f12.pdf}
\caption{Growth rates of the resonant $m=1$, $k=0.6 \; \lambda_o$ kink mode as a function of the equilibrium rotation frequency, $\Omega$. }
\label{resonant_growth_rates}
\end{figure}
Results of the eigenmode analysis and growth rates from the initial value formulation of Sec. \ref{livc} for the non-resonant $m=1$, $k = 0.4 \; \lambda_o$ kink mode can be seen in Fig. \ref{gamma_vs_rotation}. Here, calculations are shown for both force-balance models in both reference frames. The curves from the two reference frames are indistinguishable in this plot, and the eigenvalue and initial value formulations of the problem are shown to be in agreement. These results show that as $\Omega$ increases, the growth rate of the kink mode decreases, and the mode is stabilized with sufficient rotation. We note that the marginal rotation period is larger than the Alfv\'en propagation time, i.e. Alfv\'enic flow within the cylinder is not required for stabilization.
We also examine the effect of rotation on resonant kink modes via the eigenvalue formulation. Growth rates for the $m=1$, $k=0.6 \: \lambda_o$ mode can be seen in Fig. \ref{resonant_growth_rates}. While equilibrium rotation fully stabilizes the non-resonant kink mode described previously, rotation only reduces the growth rate of the resonant mode and does not completely stabilize it.
The eigenmode solutions treat the radial boundary at $r = r_a$ as a solid wall by setting $\xi(r_a) = 0$. However, there is no nearby boundary surrounding the plasma column in the nonlinear jet simulations. To evaluate the influence of the wall location, we recompute the eigenvalues as $r_a$ is varied. The critical rotation frequency, $\Omega_c$, for stabilization of the $m=1$, $k = 0.4 \; \lambda_o$ kink mode as a function of $r_a$ is plotted in Fig. \ref{crit_rot_vs_rmax}. The resulting critical rotation frequency asymptotically approaches the normalized value $\Omega_c \; (k \; v_A)^{-1} = 0.24$, indicating that the stabilizing effect of the rotation remains as $r_a \rightarrow \infty$. The growth rate with $\Omega = 0$, $\gamma_o(r_a)$, as a function of $r_a$ is also plotted in Fig. \ref{crit_rot_vs_rmax}. The $\Omega_c(r_a)$ and $\gamma_o(r_a)$ curves follow the same asymptotic trend, implying that the dependence of $\Omega_c$ on $r_a$ is related to the free energy of the kink mode and not due to any changes in the stabilizing influence of rotation.
\begin{figure}[t]
\plotone{f13.pdf}
\caption{Critical rotation frequency for stabilization of the kink mode (Solid Line), and growth rate of the kink mode for $\Omega = 0$ (Dotted Line), as a function of the outer radial boundary, $r_a$, for the $m=1$, $k = 0.4 \; \lambda_o$ kink mode.}
\label{crit_rot_vs_rmax}
\end{figure}
The linear eigenvalue formulation allows for the examination of a range of axial wave numbers. The growth rate of the $m=1$ kink mode as a function of $k$ for various values of $\Omega$ is calculated, and the results are shown in Fig. \ref{gr_vs_k}. Without equilibrium rotation, there are lower and upper bounds on the unstable values of $k$. Both the upper and the lower bound increase with increasing rotation. For the equilibria considered here, modes with $k < 0.5 \: \lambda_o$ are non-resonant, and modes with $k \geq 0.5 \: \lambda_o$ are resonant. While the range in $k$-space of unstable non-resonant kink modes decreases with increasing rotation, the range of unstable resonant modes broadens with small growth rates on the order of $10^{-3} \; \tau_A^{-1}$.
\begin{figure}[t]
\plotone{f14.pdf}
\caption{Growth rates of the $m=1$ kink mode as a function of the axial wave number, $k$, for various equilibrium rotation frequencies.}
\label{gr_vs_k}
\end{figure}
We have explored a range of $\beta$ values to examine the effect of equilibrium thermal pressure on rotational stabilization. For moderate values of $\beta$, rotational stabilization is observed to be independent of $\beta$. However, for low $\beta$-values ($\beta \leq 0.06$) rigid rotation destabilizes the kink mode for $\Omega \gtrsim 0.4 \; \tau_A^{-1}$. The destabilized modes are compressible with a $\theta$-component of $\bm \xi$ that is much larger than the other components. Thus, these modes are stabilized by equilibrium pressure for the moderate values of $\beta$ relevant to extragalactic jet systems.
The choice of initial and disk boundary conditions in simulations of jet formation can have a profound effect on the magnetic pitch profile, $P(r)$ \citep{Moll:2008p4392}. Thus far, we have considered only equilibria with monotonically decreasing $P(r)$, as observed in the nonlinear jet simulations discussed in Sec. \ref{nonlinear_jet}. To check the effect of rotation on a monotonically increasing pitch profile, we consider equilibria with $P(r) = 1/2 + r^2/2$. The growth rate of the $m=1$, $k=1.0 \; r_a^{-1}$ mode as a function of equilibrium rotation frequency is plotted in Fig. \ref{q_increasing_gamma_vs_rotation} for both force balance models. Similar to the decreasing $P(r)$ cases, rigid rotation is shown to stabilize the kink mode, and we conclude that the rotational stabilization mechanism is not sensitive to the shape of the $P(r)$ profile.
\begin{figure}[t]
\plotone{f15.pdf}
\caption{Growth rate of the $m=1$, $k=1.0 \; r_a^{-1}$ non-resonant kink mode for monotonically increasing magnetic pitch equilibria as a function of equilibrium rotation.}
\label{q_increasing_gamma_vs_rotation}
\end{figure}
We also use the eigenmode calculations to investigate the physical mechanism for the rotational stabilization. The linearized momentum equation in the Eulerian frame is given by
\begin{equation}
\rho_0 \frac{\partial \mathbf v}{\partial t} + \rho_0 \mathbf v \bm \cdot \bm \nabla \mathbf v_0 + \rho_0 \mathbf v_0 \bm \cdot \bm \nabla \mathbf v + \rho \mathbf v_0 \bm \cdot \bm \nabla \mathbf v_0 = \mathbf F(\mathbf v),
\label{ler_1}
\end{equation}
\noindent
and the growth rate of the $m=1, k = 0.4 \; \lambda_o$ kink mode is calculated as a function of $\Omega$, removing one equilibrium flow term from the left side at a time. The results are plotted in Fig. \ref{mom_term_removal}, but results with the $\rho \mathbf v_0 \bm \cdot \bm \nabla \mathbf v_0$ term removed are not shown, as this term does not have a significant effect. In the computations without the $\rho_0 \mathbf v \bm \cdot \bm \nabla \mathbf v_0$ term, the growth rate increases with increasing $\Omega$, so this term must play a central role in the stabilization. With rigid rotation, this inertial term is
\begin{equation}
(\mathbf v \bm \cdot \bm \nabla) \mathbf v_0 = -i \; \Omega \; \omega_D (\bm{\hat z} \bm \times \bm \xi) + \Omega^2 (\bm \nabla \bm \cdot \bm \xi) \mathbf r.
\label{ler_2}
\end{equation}
\noindent
By removing each of the two terms on the right side of Eq. \ref{ler_2} in turn, we have determined that it is the first term which provides the stabilization. This term contributes to the Coriolis force in the frame of the plasma.
Plots of the $\xi_r$ and $\xi_\theta$ components of the eigenfunction for various equilibrium rotation rates are shown in Figs. \ref{nonresonant_xi_r} and \ref{nonresonant_xi_theta}. While there is a slight change in $\Re[\xi_r]$ and $\Im[\xi_\theta]$ as $\Omega$ is varied, the change in $\Im[\xi_r]$ and $\Re[\xi_\theta]$ is more apparent. We note that the Coriolis term locally couples the radial and azimuthal components of translation due to the kink. This distorts the mode, giving a radially dependent phase shift in $\xi_r$ and a corresponding change in the real part of $\xi_\theta$. This result is similar to that described in Ref. \citep{Wanex:2005p1603}, where a radially dependent phase shift in the eigenmode due to a sheared equilibrium flow is shown to stabilize the kink mode. Here, we find that a rotational flow without shear introduces a stabilizing distortion of the mode via the Coriolis force.
It should be noted that the Coriolis term also appears in the $\rho_0 \mathbf v_0 \bm \cdot \bm \nabla \mathbf v$ term in Eq. \ref{ler_1}:
\begin{eqnarray}
(\mathbf v_0 \bm \cdot \bm \nabla) \mathbf v = -i \; \Omega \; \omega_D (\bm{\hat z} \bm \times \bm \xi) + m \; \Omega \; \omega_D \; \bm \xi \nonumber \\
+ \Omega^2 (\bm \nabla \bm \cdot \bm \xi) \mathbf r - i \; m \; r \; \Omega^2 (\bm \nabla \bm \cdot \bm \xi) \; \bm{\hat \theta},
\label{ler_3}
\end{eqnarray}
\noindent
but when the $(\mathbf v_0 \bm \cdot \bm \nabla) \mathbf v$ term is removed, the stabilization effect is not lost. Equation \ref{ler_3} contains another term which is first order in $\Omega$, given by $m \; \Omega \; \omega_D \; \bm \xi$. This term provides the Doppler shift in the frequency $\omega$. This Doppler shift appears in the other MHD equations as well. Thus, removing the $\rho_0 \mathbf v_0 \bm \cdot \bm \nabla \mathbf v$ term temporally decouples the velocity field from the magnetic field, reducing the growth rate of the instability, as shown in Fig. \ref{mom_term_removal}.
\begin{figure}[t]
\plotone{f16.pdf}
\caption{Growth rates of the $m=1, k=0.4 \; \lambda_o$ kink mode as a function of rotation frequency with individual inertial terms removed from the linearized momentum equation. For the solid curve all of the terms are present, for the dashed curve the $\rho_0 \mathbf v \bm \cdot \bm \nabla \mathbf v_0$ term is removed, and for the dot-dashed curve the $\rho_0 \mathbf v_0 \bm \cdot \bm \nabla \mathbf v$ term is removed.}
\label{mom_term_removal}
\end{figure}
\begin{figure}[t]
\plotone{f17.pdf}
\caption{The $\xi_r$ component of eigenfunctions of non-resonant $m=1$, $k = 0.4 \: \lambda_o$ kink modes for various equilibrium rotation rates. The eigenmodes are normalized to the maximum value of $\Re[\xi_r]$.}
\label{nonresonant_xi_r}
\end{figure}
\begin{figure}[t]
\plotone{f18.pdf}
\caption{The $\xi_\theta$ component of eigenfunctions of non-resonant $m=1$, $k = 0.4 \: \lambda_o$ kink modes for various equilibrium rotation rates. The eigenmodes are normalized to the maximum value of $\Re[\xi_r]$.}
\label{nonresonant_xi_theta}
\end{figure}
We also examine the effect of rotation on the resonant eigenmodes. Plots of $\xi_r$ for the $m=1$, $k = 0.6 \: \lambda_o$ kink mode, for various equilibrium rotation rates, are shown in Fig. \ref{resonant_xi_r}. Similar to the non-resonant case, the rotation introduces a significant phase shift in the radial component of the eigenfunction. However, for the resonant case, there is also a significant change in the real part of $\xi_r$ near the rational surface.
\begin{figure}[t]
\plotone{f19.pdf}
\caption{The $\xi_r$ component of eigenfunctions of resonant $m=1$, $k = 0.6 \: \lambda_o$ kink modes for various equilibrium rotation rates. The eigenfunctions are normalized to the maximum value of $\Re[\xi_r]$. The vertical black line shows the position of the rational surface at $r = 0.232 \: r_a$.}
\label{resonant_xi_r}
\end{figure}
To assess the rotation in the simulated magnetic columns described in Sec. \ref{nonlinear_jet}, we calculate the rotation frequencies at different values of $z$. The rotation frequencies plotted in Fig. \ref{jet_rot_vs_z} are determined by making linear fits to the $\theta$-component of the fluid velocity over the radial coordinate. The simulation times chosen for these profiles are such that the kink mode is in the linear phase for the $\hat V_D = 0.5$ and $1.0$ calculations, as can be seen in Fig. \ref{m_1_energy}. It is clear that angular momentum injected by the accretion disk is transported axially by the jet as it expands. As $\hat V_D$ increases, the rotation rate of the jet increases, providing greater stability for the kink mode.
\begin{figure}[t]
\plotone{f20.pdf}
\caption{Rotation frequency as a function of $z$ in the nonlinear jet calculations discussed in Section \ref{nonlinear_jet}, for various values of $\hat V_D$, at times $t = 8.8, 8.4, 11.3 \; T_i$. Rotation frequency is calculated by making a linear fit to the $m = 0$ component of $v_\theta$ for small values of $r$.}
\label{jet_rot_vs_z}
\end{figure}
For comparison to the results of the linear MHD calculations, we examine the $m=1$ kink mode in the nonlinear jet simulations when it is in the linear phase. The $m=1$ Fourier component of $v_r$ is plotted in Fig. \ref{jet_m_1_vr} for the unstable $\hat V_D = 0.5$ and $1.0$ jet simulations at times $t = 8.76$ and $10.26 \: T_i$, respectively. For these times, the kink mode is in its linearly growing phase. Since the modes plotted in Fig. \ref{jet_m_1_vr} extend across the entire width of the jet, we conclude that the kink mode observed in the nonlinear simulations is a non-resonant mode. According to our linear results, these modes would be stable with increased rotation, as is the case in the $\hat V_D = 4.0$ simulation. Similar to the eigenmodes from the linear analysis shown in Fig. \ref{nonresonant_xi_r}, the distortion of the linear eigenmodes in the jet simulations (Fig. \ref{jet_m_1_vr}) is due to a radially dependent phase shift in $v_r$.
\begin{figure}[t]
\plotone{f21.pdf}
\caption{The $m=1$ Fourier component of $v_r$ in the nonlinear jet simulations with $\hat V_D = 0.5$ and $1.0$ at times $t = 8.76$ and $10.26 \: T_i$ respectively.}
\label{jet_m_1_vr}
\end{figure}
\section{Discussion and Conclusions}
\label{conclusions}
Nonlinear non-relativistic MHD simulations of jet evolution, starting from an equilibrium coronal plasma with zero net magnetic flux through the accretion disk, show the formation of a collimated outflow. This outflow is unstable to the current-driven $m=1$ kink mode for low rotation velocities of the accretion disk relative to the Alfv\'en speed of the coronal plasma. As it saturates, the kink mode broadens the outflow, but does not destroy the collimation. Similar to previous results \citep{Nakamura:2004p687}, for large rotation velocities of the accretion disk, the outflow is shown to be stable against the kink mode. Moreover, the growth rate of the $m=1$ kink mode is shown to be inversely related to the rotation rate of the accretion disk. This result is counter-intuitive in the sense that as the accretion disk rotates faster, the collimating magnetic field in the jet coils tighter. As the coiling of the magnetic field increases, the current increases. Since the current is the source of free energy for the kink mode, one would expect that the jet would be more unstable for high rotation rates of the accretion disk. However, we observe that it is stable in this regime.
Motivated by the result of the nonlinear jet simulations, we explore the effect of rigid rotation on the $m=1$ kink mode in a periodic cylindrical plasma via linear MHD calculations. The linear calculations are treated as an initial value problem in an Eulerian reference frame and as eigenvalue problems in Eulerian and Lagrangian reference frames. The results from all three methods are in agreement. While previous studies have shown that sheared flow is more efficient at stabilizing the kink mode \citep{Wanex:2005p1603}, we show that rigid equilibrium rotation stabilizes the non-resonant $m=1$ kink mode via the Coriolis effect. The Coriolis effect links radial and azimuthal motions of the plasma, which distorts the kink eigenmode and reduces its growth rate.
The MHD equations used to model the jet propagation discussed in Section \ref{nonlinear_jet} include dissipative terms, and we should consider what effect dissipation has on the rotational kink stabilization. In order to obtain smooth numerical solutions, the values chosen for the resistivity and the viscosity in the nonlinear jet simulations are much larger than those of any astrophysical jet system. However, we use the dissipationless ideal MHD equations for the eigenvalue analysis discussed in Section \ref{levc}. While dissipation certainly affects the energy densities in the outflow in the jet simulations, the rotational stabilization is an ideal effect and robust to the choice of the dissipation coefficients.
Our choice of initial conditions in the nonlinear jet simulations discussed in Sec. \ref{nonlinear_jet} has a significant effect on the shape of the magnetic pitch profile, $P(r)$, in the simulated jet. The combination of inertia in the initial coronal plasma and a rapidly decreasing magnetic field acts as a background which the magnetic flux can push against. This allows for the buildup of a large $B_\theta$, producing a monotonically decreasing $P(r)$. In contrast, the simulations of \citet{Moll:2008p4392} produce jets with a monotonically increasing $P(r)$. While these differences affect the shape of the linear eigenfunctions, the eigenvalue calculations discussed in Sec. \ref{linear_eigenvalue_results} show that the rotational stabilization is insensitive to the shape of the $P(r)$ profile.
With a decreasing $P(r)$ profile and no equilibrium rotation, there are lower and upper bounds on unstable values of $k$ for the kink mode, and the growth rate, $\gamma(k)$, varies across this unstable band. This can have a profound effect on the evolution of an expanding jet. The linear rigid rotation calculations discussed in this paper apply only to static equilibria. However, we contend that the results of these calculations can be used as a guide for considering the stability of the time-dependent equilibrium of an expanding jet. As the jet expands, the $k$-value of any given harmonic decreases in time, i.e. the harmonic is stretched by the jet expansion. If we consider an equilibrium that is expanding at a constant rate $s$ with an initial length $L$ and a mode with $k = k'$ at time $t = 0$, the total energy gained by the harmonic over time $t$ can be estimated as
\begin{equation}
\Delta E(t, k') = E' \int_0^t e^{2 \; \gamma \left( \frac{k'}{1 + s \tilde t / L}\right) \; \tilde t} \; d \tilde t ,
\label{ler_0}
\end{equation}
\noindent
where $E'$ is some initial energy in the mode. As we increase $s$, i.e. with faster expansion, the mode spends less time in the unstable range of $k$, and $\Delta E$ decreases. Moreover, equilibrium rotation acts to decrease the area under the $\gamma(k)$ curve, decreasing $\Delta E$ as well. Clearly, this is a nonlinear process, and the qualitative description given here motivates further study.
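As an illustration, Eq. \ref{ler_0} can be evaluated numerically for any growth-rate curve; the $\gamma(k)$ below is a hypothetical band-limited stand-in for the curves in Fig. \ref{gr_vs_k}.
\begin{verbatim}
import numpy as np

def delta_E(t, kprime, gamma, s, L, E0=1.0, n=2000):
    # Trapezoidal quadrature of the energy-gain integral.
    tt = np.linspace(0.0, t, n)
    k_of_t = kprime / (1.0 + s * tt / L)  # harmonic stretched by expansion
    integrand = np.exp(2.0 * gamma(k_of_t) * tt)
    return E0 * np.trapz(integrand, tt)

# Hypothetical gamma(k), unstable only within a band of k:
gamma = lambda k: np.clip(0.05 * (k - 0.5) * (2.5 - k), 0.0, None)
dE = delta_E(t=10.0, kprime=2.0, gamma=gamma, s=0.5, L=10.0)
\end{verbatim}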
While current-driven instabilities may play a role in the wiggled structures which are observed in some outflows \citep{Reipurth:2002p4455, Worrall:2007p3874}, other explanations for these structures have been presented, such as precession of the source object \citep{Masciadri:2002p4461}. In general, a combination of these effects could contribute to the formation of these structures. Since rotation is shown to stabilize the kink mode, knowledge of the jet rotation velocity relative to the Alfv\'en velocity is critical for understanding the degree to which the kink plays a role.
\section{Acknowledgements}
\label{acknowledgements}
The authors would like to recognize the following people for their valuable discussions and contributions to this work: John Everett, Ellen Zweibel, Sebastian Heinz, Chris Hegna, Hui Li, and Stirling Colgate. This work is supported by the U.S. Department of Energy Computational Science Graduate Fellowship (DE-FG02-97ER25308) and the National Science Foundation Center for Magnetic Self-Organization in Laboratory and Astrophysical Plasmas (PHY 0821899). Nonlinear simulations were performed at the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\section{INTRODUCTION}
\label{sec:introduction}
Simultaneous Localization and Mapping (SLAM) is a research hotspot in the fields of robotics, autonomous driving, augmented reality, etc. Cameras and inertial measurement units (IMUs) are low-cost and effective sensors. Visual Inertial Odometry (VIO) combines the complementary strengths of the two sensors to improve the accuracy and robustness of pose estimation. Existing VIO methods \cite{VINS, ORBSLAM3, mourikis2007multi} generally use the feature-based (indirect) method for visual measurements. However, in weak-texture environments, the number of effective point features extracted by the indirect method is small, leading to failures in pose estimation. On the other hand, the direct method \cite{engel2017direct, von2018direct} can utilize all available pixels of the images to generate a more complete model, which is more robust.
Since VIO can build a map of the surrounding environment in real time, the geometric constraint information of the map can be used to improve both the positioning accuracy and the quality of the map.
Edges, which exist widely in the environment, are among the most common features and add a different type of visual measurement to the estimator \cite{PL-VIO,pumarola2017pl,structslam, li2018direct}. Meanwhile, the planar features in the environment can also be used to increase the robustness of the estimator. However, the map reconstructed with the indirect method is too sparse, which makes it difficult to extract planar regularities, and the planar features increase the computational burden, affecting the real-time performance of the system.
For VIO based on the direct method, the visual module tracks pixels with sufficiently large intensity gradients, and the abundant visual observations make the reconstructed map denser. Meanwhile, with the aid of the IMU information, it is easier to extract the plane information in the map, as shown in Fig.\ref{fig:cover_grap}. Furthermore, introducing geometric information, which is less sensitive to luminance changes, into the direct method makes the system more robust.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{image/cover_graph.pdf}
\caption{The proposed direct sparse visual-inertial odometry builds a sparse map while running on the V11 sequence of the EuRoC dataset. (a) is the reconstructed map of the whole scene, (b) shows the coplanar points on different planes, where different colors are used to distinguish the planes. (c) shows the 2D Delaunay triangulation, the depth map, and the reconstructed 3D mesh of the corner in the scene. }
\label{fig:cover_grap}
\vspace{-6mm}
\end{figure}
This paper proposes a direct sparse visual-inertial odometry that leverages planar regularities, called PVI-DSO. The method is an extension of DSO, complemented with IMU measurement constraints and coplanar constraints. Unlike the method of \cite{von2018direct}, we use a novel way to fuse the IMU measurement information, which does not estimate the transformation between the DSO frame and the metric frame. Meanwhile, we extract the coplanar regularities from the generated 3D mesh, and inspired by \cite{li2020co}, we adopt in the photometric error function the plane parametric representation whose performance has been proved in \cite{li2020co}. With the above methods, adding plane features to the system does not introduce much computational burden. In summary, the main contributions of this work include:
\begin{itemize}
\item To our best knowledge, PVI-DSO is the first direct sparse visual-inertial odometry that fuses coplanar constraint regularities to improve the accuracy of localization.
\item We introduce a coplanar point parameterization in the direct method to construct photometric error constraints, which is an extension of \cite{li2020co}. The plane-distance cost used in this paper converts the coplanar constraints into a plane prior, which enforces the coplanar constraints in the optimization without extra computational cost.
\item We design extensive experiments on the challenging EuRoC and TUMVI datasets. The experimental results show that the proposed PVI-DSO fusing IMU constraints and coplanar regularities into the direct method outperforms the state-of-the-art.
\end{itemize}
\section{RELATED WORK}\label{sec:related work}
The most common features used in SLAM/VIO algorithms are points \cite{mourikis2007multi, VINS, ORBSLAM3}, \cite{engel2017direct}, \cite{forster2014svo}. As a complement to point features, geometric information (lines, planes) introduced into point-based systems can improve the accuracy of pose estimation and mapping, and has received extensive attention in recent years.
\textbf{Indirect method with geometric Regularities} \ Since line features exist widely in the environment, it seems natural to fuse them into point-based frameworks \cite{PL-VIO,pumarola2017pl, Tsai2018}. Furthermore, in structural scenarios, lines aligned with the three perpendicular directions of the Manhattan world model encode the global orientations of the local environment, which is leveraged to improve the robustness and accuracy of pose estimation \cite{Zou2019, structslam, xu2021leveraging}. For planar information, the main difficulty is how to accurately extract the planar regularities in the environment. \cite{salas2014dense,zhang2019point,ma2016cpa} extract plane features with the assistance of depth maps obtained by RGB-D cameras. Lu \textit{et al.} \cite{lu2015visual} use the RANSAC method to fit planes among the estimated 3D points from an RGB camera. Nevertheless, the RANSAC-based method can only be used when there is a single dominant plane in the environment, and it consumes a lot of time. More recently, \cite{rosinol2019incremental, li2020leveraging} obtain planar regularities from the 3D mesh generated by the VIO. However, the sparse point clouds generated by indirect visual algorithms make the location of the 3D mesh inaccurate.
\textbf{Direct method with geometric Regularities} \ Direct methods minimize photometric errors to estimate the pose and reconstruct the map. Geometric information can reduce the impact of drastic lighting changes on the system. \cite{krombach2016combining} combines feature-based matching with semi-dense direct image alignment to initialize the system robustly, which takes advantage of both methods. For line features utilized in the direct method, \cite{yang2017direct, li2018direct, gomez2016pl} only use straight lines to constrain the 3D points in the map to satisfy the collinearity constraint, rather than jointly optimizing the pose of the system. \cite{zhou2021dplvo} introduces the collinearity constraint into DSO and improves the pose estimation accuracy with a line representation of 2 degrees of freedom (DoF). \cite{cheng2020direct} expresses the line features in the image as Manhattan world regularities and merges the structural information into the photometric error optimization framework.
\section{SYSTEM OVERVIEW}\label{sec:system overview}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{image/Framework.pdf}
\vspace{-5mm}
\caption{Overview of our PVI-DSO system.} \label{fig:pipeline}
\vspace{-3mm}
\end{figure}
The system proposed in this paper is based on DSO\cite{engel2017direct}. To improve the accuracy and robustness of the system, we introduce the IMU measurement and geometric information in the environment. As shown in Fig. \ref{fig:pipeline}, the proposed system contains two main modules running in parallel: the front end and the back end.
In the front end, raw measurements from the IMU are processed. With the aid of the IMU information, the photometric parameters and pose of the most recent frame can be estimated robustly by coarse tracking. Then we judge whether the current frame is a keyframe. If it is, it is delivered to the back-end module; otherwise, it is only used to track the candidate points to update their depths.
In the back end, the candidate points are tracked and refined with the latest image to obtain more accurate depths, and they are selected as active points if they satisfy the criteria described in \cite{engel2017direct}. Then the depth map generated by the active points is used to form the 3D mesh and planes. The coplanar constraints extracted from the planes are used to constrain the pose of the system. Finally, the visual-inertial bundle adjustment is executed: we add the non-coplanar point residuals, coplanar point residuals, inertial residuals, and the corresponding prior residuals into the optimizer. The plane detection, residual construction, optimization, and marginalization of the system are described in Sec.\ref{sec: direct sparse visual-inertial odometry}.
\section{Notations and Preliminaries} \label{sec:algorithm}
Throughout the paper, we will use the following notation: the bold upper letters represent matrices, bold lower letters represent vectors, and light lower letters represent scalars. In this section, we introduce the coordinate transformation and the representation of point and plane features.
\subsection{Coordinate Transformation} \label{alg: coordinate system}
The world coordinate system is defined as a fixed inertial frame whose Z axis is aligned with the gravity direction. We define the transformation from the IMU frame to the world frame as $\textbf{T}_{wi} \in \textbf{SE}\left(3\right)$ and the transformation from the camera frame to the IMU frame as $\textbf{T}_{ic} \in \textbf{SE}\left(3\right)$ (which is calibrated in advance and fixed in the system). The transformation from the camera frame to the world frame is then calculated by $\textbf{T}_{wc} = \textbf{T}_{wi} \textbf{T}_{ic}$. We denote Lie algebra elements as $\bm{\hat{\xi}} \in \mathfrak{se}\left(3\right)$ where $\bm{\xi} \in \mathbb{R}^{6}$; the exponential and logarithmic maps associate $\textbf{SE}\left(3\right)$ and $\mathfrak{se}\left(3\right)$.
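As a reference for the notation above, a minimal Python sketch of the $\mathfrak{se}\left(3\right)$ exponential map and of the frame composition $\textbf{T}_{wc} = \textbf{T}_{wi} \textbf{T}_{ic}$ is given below; this is a generic textbook implementation, not the solver code of our system.
\begin{verbatim}
import numpy as np

def se3_exp(xi):
    # Exponential map: xi = [rho, phi] (6-vector) -> 4x4 SE(3) matrix.
    rho, phi = xi[:3], xi[3:]
    th = np.linalg.norm(phi)
    K = np.array([[0, -phi[2], phi[1]],
                  [phi[2], 0, -phi[0]],
                  [-phi[1], phi[0], 0]])
    if th < 1e-10:
        R, V = np.eye(3) + K, np.eye(3)
    else:
        # Rodrigues formula and the left Jacobian V.
        R = (np.eye(3) + np.sin(th)/th * K
             + (1 - np.cos(th))/th**2 * K @ K)
        V = (np.eye(3) + (1 - np.cos(th))/th**2 * K
             + (th - np.sin(th))/th**3 * K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

# Chaining the calibrated extrinsic T_ic with the estimated T_wi:
# T_wc = T_wi @ T_ic   (4x4 homogeneous matrices).
\end{verbatim}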
\begin{figure}[t]
\vspace{2mm}
\centering
\includegraphics[width=0.8\linewidth]{image/point_planar_nonplanar.pdf}
\vspace{-5mm}
\caption{The coplanar regularities of pixels in the image are detected, and the coplanar points are on the planar region $\bm{\pi}_i$. The planar parameters are expressed in the world frame with a normal vector $\bm{n}_i$ and a distance $d_i$. }
\label{fig:point_planar_nonplanar}
\vspace{-3mm}
\end{figure}
\subsection{Point Representation}
Extracted pixels are parameterized by their inverse depth $d_p\in \mathbb{R}$ in the host image frame. In order to construct the photometric error cost function, we need to transform a pixel in the host image frame $I_h$ into the target image frame $I_t$. Assuming $\bm{p}$ is a pixel in $I_h$ that is observed at $\bm{p}'$ in $I_t$, the relationship between the pixels is:
\begin{flalign}
\begin{array}{c}
\bm{p'} = \Pi_c\left(\bm{R}_{th}\Pi_c^{-1}\left(\bm{p}, d_p\right) + \bm{t}_{th}\right)
\end{array}\label{fla: point transform}
\end{flalign}
where $\bm{R}_{th}$ and $\bm{t}_{th}$ are the relative rotation and translation from $I_h$ to $I_t$, and $\Pi_c$ and $\Pi_c^{-1}$ are the projection and back-projection functions of the camera.
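For clarity, a minimal Python sketch of this warp, assuming a standard pinhole model with intrinsic matrix $K$ so that $\Pi_c^{-1}(\bm p, d_p) = K^{-1}\left[u, v, 1\right]^{\rm T} / d_p$, is given below.
\begin{verbatim}
import numpy as np

def reproject(p, d_p, K, R_th, t_th):
    # Warp pixel p = (u, v) from the host frame to the target frame
    # following Eq. (1); d_p is the inverse depth in the host frame.
    ray = np.linalg.solve(K, np.array([p[0], p[1], 1.0]))
    X_h = ray / d_p            # 3D point in the host camera frame
    X_t = R_th @ X_h + t_th    # transform into the target frame
    uvw = K @ X_t
    return uvw[:2] / uvw[2]    # perspective division
\end{verbatim}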
\subsection{Plane Representation}
As is shown in Fig.\ref{fig:point_planar_nonplanar}, a plane in the world frame can be represented by $\bm{\pi} =\begin{bmatrix}\bm{n}^{\rm T}, d \end{bmatrix}^{\rm T}$, where $\bm{n} \in \mathbb{R}^3$ is the normal vector of the plane and $d \in \mathbb{R}$ is the distance from the origin of the world frame to the plane. The normal vector $\bm{n}$ has three parameters but only two degrees of freedom (DoF) since $||\bm{n}||_2 = 1$.
To obtain a minimal parameterization of $\bm{\pi}$ for optimization, we specialize the general plane into either a vertical plane $^v\bm{\pi}$, whose normal vector is perpendicular to the gravity direction, or a horizontal plane $^h\bm{\pi}$, whose normal vector is parallel to the gravity direction. For a horizontal plane, we fix the normal vector $^h\bm{n}$ to $\begin{bmatrix} 0, 0, 1 \end{bmatrix}^{\rm T}$ and only need to optimize the distance $^h d$. For a vertical plane, since the normal vector is perpendicular to the gravity direction, it has only one DoF, and we represent the plane in the form
\begin{flalign}
^v\bm{\pi}= \left( \cos\psi \cos\phi,\; \cos\psi \sin\phi,\; \sin\psi,\; d \right)^{\rm T}
\end{flalign}
where $\phi$ and $\psi$ are the azimuth and elevation angles of the normal vector and $d$ is the distance from the origin of the world frame. As shown in Fig. \ref{fig:plane_parameter}, we fix the elevation angle $\psi$ to zero and only optimize the azimuth angle $\phi$ and the distance $d$.
The transformation of the plane parameters from world frame to camera frame has the form:
\begin{flalign}
\bm{\pi}_c = \textbf{T}_{cw}^{-\rm T}\bm{\pi}_w
\label{fla: transformation matrix of plane}
\end{flalign}
Using (\ref{fla: transformation matrix of plane}), we can convert a plane from the world frame to the camera frame and construct the photometric error function of the plane parameters in the camera frame.
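A toy Python sketch of the minimal plane parameterizations and of the transformation in (\ref{fla: transformation matrix of plane}) is given below for illustration.
\begin{verbatim}
import numpy as np

def vertical_plane(phi, d):
    # 2-DoF vertical plane (psi = 0): pi = [n, d] in the world frame.
    return np.array([np.cos(phi), np.sin(phi), 0.0, d])

def horizontal_plane(d):
    # 1-DoF horizontal plane: normal fixed to the gravity direction.
    return np.array([0.0, 0.0, 1.0, d])

def plane_world_to_camera(pi_w, T_cw):
    # Eq. (3): pi_c = T_cw^{-T} pi_w, with T_cw a 4x4 SE(3) matrix.
    return np.linalg.inv(T_cw).T @ pi_w
\end{verbatim}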
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth]{image/plane_parameter.pdf}
\vspace{-5mm}
\caption{ The 2 DoF parameter representation of the normal vector $\bm{n}$.}
\label{fig:plane_parameter}
\vspace{-3mm}
\end{figure}
\section{VIO WITH COPLANAR REGULARITIES } \label{sec: direct sparse visual-inertial odometry}
In this section, the method of coplanarity detection is described first. Then, we fuse the inertial constraints and coplanar constraints into the non-linear optimization framework of \cite{engel2017direct} to estimate the body state and 3D landmarks. In the optimization, we use the coplanar parameter expression to construct the photometric error, which not only preserves accuracy but also reduces the number of optimized state variables. When the frame anchoring coplanar points is marginalized, we convert these constraints into plane-distance costs that serve as the plane prior.
Finally, since the minimized energy functional is highly non-convex, the system proposed in this paper adopts dynamic initialization; a good initial value ensures the robustness of the system initialization in complex environments.
We optimize all the state variables in the sliding window by minimizing the sum of the energy function from visual residual, IMU residual, and prior residual:
\begin{flalign}
E_{total} = \lambda \cdot E_{point} + \lambda \cdot E_{plane} + E_{inertial} \label{fla: total cost function} + E_{prior}
\end{flalign}
where $E_{point}$ is the photometric error of non-coplanar points (section \ref{alg: photometric error of non-coplanar point}), $E_{plane}$ is the photometric error of coplanar points (section \ref{alg: photometric error of coplanar point}), $E_{inertial}$ is the inertial error term (section \ref{alg: photometric error of inertial error}), and $E_{prior}$ is the prior from the marginalization operator (section \ref{alg: Marginalization with coplanar constraint}), respectively. $\lambda$ is the weight between the visual photometric error and the inertial error.
\subsection {Coplanarity Detection}\label{alg: coplanarity detection}
In this paper, we detect plane information from the 3D mesh. Since it is challenging to construct a 3D mesh from 3D landmarks directly, we build a 2D Delaunay triangulation \cite{hosseinzadeh2017sparse} in the image and project the triangular facets onto the 3D landmarks. The direct method does not compute explicit point matches, which makes it difficult to associate observations of the same triangular patch across different image frames. Therefore, we generate the 2D Delaunay triangulation in the depth map of DSO \cite{engel2017direct}, which spans multiple image frames. In this way, the 3D mesh is anchored to the time frame of the sliding window, limiting memory usage.
For plane detection, we only detect planes that are either vertical (i.e., walls, whose normals are perpendicular to the gravity direction) or horizontal (i.e., floors, whose normals are parallel to the gravity direction), as commonly found in man-made environments. For horizontal plane detection, we build a 1D histogram of the average height of the triangular patches whose normal vectors are parallel to the gravity direction, and then the local maxima of the histogram are extracted after a Gaussian filter, as described in \cite{rosinol2019incremental}. We consider a local maximum to be a horizontal plane when it exceeds a certain threshold ($\sigma_t = 20$). For vertical plane detection, a 2D histogram is built: one axis is the azimuth of the plane's normal vector, and the other axis is the distance of the plane from the origin. We only count the triangular patches whose normal vectors are perpendicular to the gravity direction; after accumulating the statistics, we select the vertical planes in a similar way to the horizontal plane detection.
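A simplified Python sketch of the horizontal-plane histogram voting is given below; the bin width and smoothing scale are hypothetical choices, and the vertical-plane case extends this to a 2D histogram in the same way.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_horizontal_planes(heights, sigma_t=20, bin_w=0.05,
                             smooth=1.0):
    # `heights` holds the average height of each triangular patch
    # whose normal is (nearly) parallel to gravity.
    heights = np.asarray(heights)
    bins = np.arange(heights.min(), heights.max() + bin_w, bin_w)
    hist, edges = np.histogram(heights, bins=bins)
    hist = gaussian_filter1d(hist.astype(float), smooth)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Local maxima above the vote threshold become plane hypotheses.
    peaks = [centers[i] for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]
             and hist[i] > sigma_t]
    return peaks   # each peak is a candidate plane distance ^h d
\end{verbatim}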
We maintain all historical planes in the system. Therefore, for each newly detected plane, we check whether a matching historical plane already exists according to its parameters, and merge similar planes when their angle and distance differences are below certain thresholds. Meanwhile, the coplanar points associated with the latest plane are also transferred to the historical plane. For efficiency, we only add the coplanar constraints detected from the current local 3D mesh into the sliding window. During marginalization, the coplanar constraints on the plane parameters are removed and converted into the plane-distance cost, which contains the information of the historical plane and is used as a prior in subsequent optimization.
\subsection{Photometric Error of Non-coplanar Point } \label{alg: photometric error of non-coplanar point}
The direct method minimizes the photometric error based on the brightness constancy assumption. Similar to \cite{engel2017direct}, the photometric error of a point $\bm{p}$ with inverse depth $d_{\bm{p}}$ in the host image $I_h$, observed in the target image $I_t$, is defined as:
\begin{flalign}
E_{\bm{p}_j} = \sum\limits_{\bm{p} \in \mathcal{N}_{\bm{p}}} w_{\bm{p}} \|
\left( I_t\left[\bm{p'}\right] - b_t\right) - \frac{t_t e^{a_t}}{t_he^{a_h}}\left(I_h\left[\bm{p}\right]-b_h\right) \|_{\gamma} \label{fla: photometric error}
\end{flalign}
where $t_h$ and $t_t$ are the exposure times of the respective images $I_h$ and $I_t$; $a_h$, $a_t$, $b_h$, and $b_t$ are the affine illumination transform parameters; $\mathcal{N}_{\bm{p}}$ is a small set of pixels around the point $\bm{p}$;
$w_{\bm{p}}$ is a gradient-dependent weighting; and $\| \cdot\|_{\gamma}$ is the Huber norm. $\bm{p}'$ is the point projected into $I_t$, and the relationship between $\bm{p}$ and $\bm{p}'$ is given by (\ref{fla: point transform}).
The sum of the photometric errors of non-coplanar points observed by all the keyframes in the sliding window is then defined as:
\begin{flalign}
E_{point} = \sum\limits_{i \in \mathcal{F}} \sum\limits_{\bm{p} \in \mathcal{P}_i} \sum\limits_{j \in obs(\bm{p})} E_{\bm{p}_j}
\end{flalign}
where $\mathcal{F}$ is the set of keyframes in the sliding window, $\mathcal{P}_i$ is the set of non-coplanar points hosted in keyframe $i$, and $obs(\bm{p})$ is the set of observations of $\bm{p}$ in the other keyframes.
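To make the residual construction concrete, the evaluation of (\ref{fla: photometric error}) for a single point can be sketched in Python as follows; the bilinear interpolation helper, the Huber threshold, and the argument layout are illustrative assumptions, not the actual implementation:
\begin{verbatim}
import numpy as np

def interp(I, p):
    """Bilinear interpolation of image I at subpixel p = (x, y)."""
    x, y = p
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * I[y0, x0]
            + dx * (1 - dy) * I[y0, x0 + 1]
            + (1 - dx) * dy * I[y0 + 1, x0]
            + dx * dy * I[y0 + 1, x0 + 1])

def huber_weight(r, k=9.0):
    """IRLS weight of the Huber norm for a scalar residual r."""
    return 1.0 if abs(r) <= k else k / abs(r)

def photometric_residuals(I_h, I_t, pattern_h, pattern_t,
                          t_h, t_t, a_h, a_t, b_h, b_t, w):
    """Weighted residuals of one point's pattern N_p.

    pattern_h / pattern_t: corresponding subpixel locations of the
    residual pattern in host and target images; w: the
    gradient-dependent weights w_p.
    """
    # Affine brightness transfer between host and target images.
    scale = (t_t * np.exp(a_t)) / (t_h * np.exp(a_h))
    res = []
    for ph, pt, wp in zip(pattern_h, pattern_t, w):
        r = (interp(I_t, pt) - b_t) - scale * (interp(I_h, ph) - b_h)
        res.append(wp * huber_weight(r) * r)
    return res
\end{verbatim}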
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{image/hessian_matrix_of_coplanar_points.pdf}
\vspace{-5mm}
\caption{The Hessian matrix of the photometric error. (a) Two coplanar points and two non-coplanar points are observed by cameras $O_1$ and $O_2$; the coplanar points lie on the plane $\bm{\pi}$. (b) The traditional cost function of coplanar points needs to optimize the inverse depths. (c) The cost function of this paper only optimizes the parameters of the plane.}
\label{fig:hessian matrix of coplanar points}
\vspace{-3mm}
\end{figure}
\subsection{Photometric Error of Coplanar Point } \label{alg: photometric error of coplanar point}
The plane $\bm \pi_w$ detected from the 3D mesh is expressed in the world frame. In order to construct the photometric error between the host image and the target image, we convert $\bm \pi_w$ into the host camera frame to obtain $\bm \pi_c$ using (\ref{fla: transformation matrix of plane}).
When we detect a point $\bm p_c$ in the image whose corresponding 3D point $\bm{p}_{\bm{\pi}_c}$ lies on the plane $\bm{\pi}_c$, the point $\bm p_c$ satisfies the coplanarity equation
\begin{flalign}
z \cdot \bm{n}_c^{\rm T} \Pi_c^{-1}(\bm{p}_c, 1) + d_c = 0
\end{flalign}
where $z$ is the depth from the origin of the camera frame to the 3D point of $\bm p_c$, $\bm n_c$ is the normal vector of $\bm \pi_c$, and $d_c$ is the distance of $\bm \pi_c$; the depth can therefore be recovered in closed form as $z = -d_c / \left(\bm{n}_c^{\rm T} \Pi_c^{-1}(\bm{p}_c, 1)\right)$.
With this depth $z$, we can rewrite (\ref{fla: point transform}) as (\ref{fla: coplanar_point transform}) and construct the photometric error $E_{\bm p_c}$ of a coplanar point via (\ref{fla: photometric error}):
\begin{flalign}
\begin{array}{c}
\bm{p_c}' = \Pi_c\left(\bm{R}_{th}\Pi_c^{-1}\left(\bm{p}_c, 1/z\right) + \bm{t}_{th}\right)
\end{array}\label{fla: coplanar_point transform}
\end{flalign}
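A minimal sketch of this depth recovery and reprojection, assuming a pinhole camera with intrinsic matrix $K$ so that $\Pi_c^{-1}(\bm{p}_c, 1)$ is the back-projected ray $K^{-1}[u, v, 1]^{\rm T}$, is:
\begin{verbatim}
import numpy as np

def depth_from_plane(p_c, n_c, d_c, K):
    """Depth z of pixel p_c lying on plane (n_c, d_c), from
    z * n_c^T * Pi^{-1}(p_c, 1) + d_c = 0."""
    ray = np.linalg.inv(K) @ np.array([p_c[0], p_c[1], 1.0])
    return -d_c / float(n_c @ ray)

def reproject_coplanar(p_c, n_c, d_c, K, R_th, t_th):
    """Project a coplanar host pixel into the target frame."""
    z = depth_from_plane(p_c, n_c, d_c, K)
    # 3D point in the host camera frame.
    X_h = z * (np.linalg.inv(K) @ np.array([p_c[0], p_c[1], 1.0]))
    X_t = R_th @ X_h + t_th          # transform into the target frame
    uvw = K @ X_t
    return uvw[:2] / uvw[2]          # pixel location p_c'
\end{verbatim}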
The sum of the photometric errors of coplanar points observed by all the keyframes in the sliding window is:
\begin{flalign}
E_{plane} = \sum\limits_{i \in \mathcal{F}} \sum\limits_{\bm{p} \in \mathcal{C}_i} \sum\limits_{j \in obs(\bm{p})} E_{\bm{p}_c}
\end{flalign}
where $\mathcal{C}_i$ is the set of coplanar points hosted in keyframe $i$. For a single residual, the Jacobian with respect to the camera frame can be decomposed as \cite{engel2017direct}:
\begin{flalign}
\bm{J}_k = [\bm{J}_I \cdot \bm{J}_{geo}, \bm{J}_{photo}]
\end{flalign}
where $\bm{J}_I$ is the image gradient, $\bm{J}_{geo}$ is the Jacobian with respect to the geometric parameters ($\textbf{T}_h$, $\textbf{T}_t$), and $\bm{J}_{photo}$ is the Jacobian with respect to the photometric parameters ($a_h$, $a_t$, $b_h$, $b_t$). Specifically, we denote the terms of $\bm{J}_k$ related to the host image frame as $\bm{J}_h$, the terms related to the target image frame as $\bm{J}_t$, and the Jacobian with respect to $\bm{\pi_{w}}$ as $\bm{J}_{\pi}$, so the Jacobian of a coplanar point on the plane can be written as:
\begin{flalign}
\bm{J}_{coplanar} = [\bm{J}_h, \bm{J}_t, \bm{J}_{\pi}]
\end{flalign}
Thus we only need to construct the Hessian blocks of the host image frame $\bm{H}_{h}$, the target image frame $\bm{H}_{t}$, and the plane parameters $\bm{H}_{\pi}$, but not of the inverse depths of the coplanar points, as shown in Fig.\ref{fig:hessian matrix of coplanar points}. In this way, the number of state variables added to the optimizer is effectively reduced, which limits the computational cost of the optimization.
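The structural benefit can be illustrated by how a single scalar coplanar residual enters the Gauss--Newton normal equations: only the host-pose, target-pose, and plane blocks are touched, and no per-point inverse-depth row or column is created. The following Python sketch (with assumed 1D Jacobian blocks and block offsets) illustrates the accumulation:
\begin{verbatim}
import numpy as np

def accumulate_coplanar(H, b, J_h, J_t, J_pi, r, i_h, i_t, i_pi):
    """Add one scalar coplanar residual r with Jacobian
    [J_h, J_t, J_pi] to the Gauss-Newton system H dx = -b.

    J_h, J_t, J_pi: 1D Jacobian blocks w.r.t. host pose, target
    pose, and plane; i_*: column offsets of those blocks in the
    state vector. Note that no inverse-depth variable appears.
    """
    blocks = [(J_h, i_h), (J_t, i_t), (J_pi, i_pi)]
    for J_i, i in blocks:
        b[i:i + J_i.size] += J_i * r                  # gradient J^T r
        for J_j, j in blocks:
            H[i:i + J_i.size, j:j + J_j.size] += np.outer(J_i, J_j)
    return H, b
\end{verbatim}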
\subsection{Inertial Error} \label{alg: photometric error of inertial error}
To construct the inertial error from angular velocity and linear acceleration measurements, we use the pre-integration method proposed in \cite{forster2015imu} to handle the high rate of the IMU.
This yields pseudo-measurements of the IMU between consecutive keyframes.
Similar to \cite{forster2015imu}, the pre-integrated IMU factors we build include the relative rotation and translation, the velocity, and the gyroscope and accelerometer biases. The biases are held fixed within each single pre-integration block.
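As an illustration of the pre-integration between two keyframes, the following Python sketch integrates bias-corrected gyroscope and accelerometer samples into relative rotation, velocity, and position pseudo-measurements; the first-order integration scheme and the fixed sample interval are simplifications of \cite{forster2015imu}:
\begin{verbatim}
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector v."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def so3_exp(w):
    """Rodrigues' formula: rotation matrix of rotation vector w."""
    th = np.linalg.norm(w)
    if th < 1e-8:
        return np.eye(3) + skew(w)
    a = w / th
    K = skew(a)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def preintegrate(gyro, accel, dt, bg, ba):
    """Pre-integrate IMU samples between two consecutive keyframes.

    Biases bg, ba are held fixed within the block. Returns the
    relative rotation dR, velocity dv, and position dp
    pseudo-measurements in the frame of the first keyframe.
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_unb = a - ba
        dp += dv * dt + 0.5 * (dR @ a_unb) * dt ** 2
        dv += (dR @ a_unb) * dt
        dR = dR @ so3_exp((w - bg) * dt)
    return dR, dv, dp
\end{verbatim}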
\subsection{Marginalization with Coplanar Constraints}\label{alg: Marginalization with coplanar constraint}
{We utilize bundle adjustment to optimize the state variables in the sliding window. When a new image frame is added to the sliding window, all the variables corresponding to the oldest frame are marginalized using the Schur complement.
For the plane state variables, to avoid maintaining too many historical planes in the sliding window, we also need to ``marginalize'' them out.
Since a removed plane may be added to the optimizer again later, we convert the coplanar constraints related to the plane into the plane-distance cost, which is regarded as a plane prior that enforces the coplanar constraints. In this way, generating the plane prior adds little computational cost.}
\subsubsection{Marginalization of non-coplanar and inertial constraints}
When the number of keyframes in the sliding window exceeds a certain threshold, we select the keyframe to marginalize using criteria similar to \cite{engel2017direct}, which consider the luminance change of the images and the geometric distribution of the poses. To maintain the consistency of the system, once a variable is connected to the marginalization factor, we use First-Estimates Jacobians (FEJ) \cite{huang2009first} to fix the variable's linearization point at the same value.
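A minimal sketch of the Schur-complement marginalization on the normal equations $\bm{H}\delta\bm{x} = \bm{b}$ is given below; the dense matrix representation and the small regularizer are illustrative assumptions:
\begin{verbatim}
import numpy as np

def marginalize(H, b, keep, marg):
    """Marginalize states `marg` out of H dx = b via the Schur
    complement. keep / marg: index arrays into the state vector.
    The returned (H', b') act as the prior on the kept states.
    """
    Hkk = H[np.ix_(keep, keep)]
    Hkm = H[np.ix_(keep, marg)]
    Hmm = H[np.ix_(marg, marg)]
    # Regularized inverse guards against a (near-)singular block.
    Hmm_inv = np.linalg.inv(Hmm + 1e-9 * np.eye(len(marg)))
    H_prior = Hkk - Hkm @ Hmm_inv @ Hkm.T
    b_prior = b[keep] - Hkm @ Hmm_inv @ b[marg]
    return H_prior, b_prior
\end{verbatim}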
\subsubsection{Plane-distance cost for coplanar constraints}
For coplanar points, instead of using the photometric error term as the prior, we treat the coplanar points as a whole plane and directly use the plane-distance cost as the prior constraint. The plane-distance cost is expressed by the distance from the prior plane $\bm{\pi}'= (\phi', \psi', d')$ to the current plane $\bm{\pi} = ( \phi , \psi , d)$:
\begin{flalign}
E_{\bm{\pi}_p} = w_n \left\|\left[ \phi', \psi', d' \right]^{\rm T}- \left[ \phi , \psi , d \right]^{\rm T}\right\|_{\Sigma_{\bm{\pi}}}^2
\end{flalign}
where $\Sigma_{\bm{\pi}}$ is the covariance matrix of the constraint and $w_n$ is the number of points on the plane.
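For illustration, this prior cost amounts to a Mahalanobis distance weighted by the point count, e.g. in Python (the dense covariance handling is an assumption for illustration):
\begin{verbatim}
import numpy as np

def plane_prior_cost(plane_prior, plane, Sigma_pi, w_n):
    """Plane-distance cost between the prior plane (phi', psi', d')
    and the current plane (phi, psi, d), weighted by the number
    w_n of points on the plane."""
    r = np.asarray(plane_prior, float) - np.asarray(plane, float)
    # Mahalanobis distance r^T Sigma^{-1} r via a linear solve.
    return w_n * float(r @ np.linalg.solve(Sigma_pi, r))
\end{verbatim}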
\subsection{Initialization and Observability}
{Compared with pure visual odometry, visual-inertial odometry makes the metric scale, roll, and pitch observable by fusing IMU data.
This means that the state variables must be properly initialized; otherwise, the optimization may diverge, leading to system failure.
Initialization can be divided into dynamic initialization and static initialization.
Because static initialization suffers from insufficient visual parallax, we use dynamic initialization in the system.
Optical flow tracking is introduced in the initialization, which improves the robustness of VIO initialization based on the direct method.
Similar to \cite{VINS}, we run structure from motion (SfM) to compute rough frame poses, which are then aligned with the state variables obtained through IMU integration for joint initialization.}
\begin{table*}[htbp]
\begin{center}
\vspace{5mm}
\caption{The trajectory error (m) of different methods on the EuRoC dataset. The best RMSE gt-scaled results are in \textbf{bold}; the best RMSE results are in \textcolor{blue}{blue}. }
\label{tab:ATE Trans of EuRoC}
\centering
\setlength{\tabcolsep}{2.0mm}{
\begin{tabular}{cc|ccccccccccc}
\toprule
Sequence& & MH1& MH2&MH3&MH4&MH5&V11&V12&V13&V21&V22&V23 \\\midrule
\multirow{3}{*}{\makecell[c]{VI-DSO \\ (raw data from \cite{von2018direct})} } &RMSE& 0.062 & \textcolor{blue}{0.044}& 0.117 & 0.132 & 0.121 & 0.059 & \textcolor{blue}{0.067} & 0.096 &\textcolor{blue}{0.040} & 0.062 & 0.174 \\
&RMSE gt-scaled& 0.041 &\bf 0.041 & 0.116 & 0.129 & \bf 0.106 & 0.057 &\bf 0.066 & 0.095 & \bf 0.031 & 0.060 & 0.173 \\
&Scale Error(\%) &1.1 & 0.5 & 0.4 & \bf 0.2 & 0.8 & 1.1 & \bf 1.1 & \bf 0.8 & 1.2 & \bf 0.3 & 0.4 \\\midrule
\multirow{3}{*}{our method no plane} & RMSE & 0.066 & 0.057 & 0.065& 0.110 & 0.117& 0.057 & 0.090 & 0.092 & 0.045 & 0.060 & 0.123\\
& RMSE gt-scaled & 0.052 & 0.056 & 0.062 & 0.099 & \bf 0.106 & 0.054 & 0.087 & 0.089 & 0.045 & 0.056 & 0.123 \\
& Scale Error(\%) & 0.9 & \bf 0.2 & 0.5 & 0.7 & 0.7 & 0.9 & 1.3 & 1.5 & \bf 0.3 & 1.0 & \bf 0.3 \\ \midrule
\multirow{3}{*}{our method with plane} &RMSE &\textcolor{blue}{0.051} & 0.051 & \textcolor{blue}{0.057} & \textcolor{blue}{0.103} & \textcolor{blue}{0.115} & \textcolor{blue}{0.051} & 0.082 & \textcolor{blue}{0.082} & 0.043 &\textcolor{blue}{0.048} & \textcolor{blue}{0.111} \\
&RMSE gt-scaled & \bf 0.039 & 0.048 & \bf 0.055 & \bf 0.087 & 0.109 & \bf 0.049 & 0.079 & \bf 0.078 & 0.042 & \bf 0.044 & \bf 0.110 \\
& Scale Error(\%) & \bf 0.8 & 0.3 & \bf 0.3 & 0.8 & \bf 0.5 & \bf 0.7 & \bf 1.1 & 1.6 & 0.5 & 0.9 & \bf 0.3 \\ \midrule
\makecell[c]{ mesh-VIO}& RMSE & 0.145 & 0.130 & 0.212 & 0.217 & 0.226 & 0.057 &0.074 & 0.167 & 0.081 & 0.103 & 0.272 \\ \midrule
\makecell[c]{ PVIO}& RMSE & 0.163 & 0.111 & 0.119 & 0.353 & 0.225 & 0.082 &0.113 & 0.201 & 0.063 & 0.157 & 0.280 \\ \bottomrule
\end{tabular}
}
\end{center}
\vspace{-5mm}
\end{table*}
\section{Experiments}
\label{sec:experiment}
We evaluate our system on the publicly available EuRoC MAV dataset \cite{Burri2016} and the TUM VI dataset \cite{Schubert2018}. In addition, we provide a video
in the supplementary material to show the experimental results more intuitively. All experiments are run on a machine with an Intel Core i7-9750H @ 2.6 GHz and 16 GB of memory.
\subsection{Quantitative Evaluation}
To verify the performance of the system, we compare the accuracy of our system with VI-DSO \cite{von2018direct}, DSO \cite{engel2017direct}, ORBSLAM3 without loop closure \cite{ORBSLAM3}, OKVIS \cite{Leutenegger2014}, PVIO \cite{li2019robust}, and mesh-VIO \cite{rosinol2019incremental}. VI-DSO and DSO are representative direct-method SLAM systems, ORBSLAM3 and OKVIS are state-of-the-art indirect methods, and mesh-VIO and PVIO exploit plane information in the VIO system, so this comparison reflects the system performance comprehensively. {The accuracy of a trajectory is measured by aligning the estimated trajectory with the ground truth: we calculate the RMSE of the trajectory error with SE(3) alignment and with Sim(3) alignment (reported as ``RMSE gt-scaled''). The scale error is computed from the scale $s$ of the Sim(3) alignment as $|1-s|$.}
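For reference, these metrics can be reproduced with a standard Umeyama alignment; the following Python sketch (assuming $N \times 3$ position arrays that are already associated by timestamp) computes the RMSE, the RMSE gt-scaled, and the scale error:
\begin{verbatim}
import numpy as np

def sim3_align(est, gt):
    """Umeyama alignment of estimated to ground-truth positions.

    Returns scale s and rotation R minimizing ||gt - (s R est + t)||.
    """
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, D, Vt = np.linalg.svd(G.T @ E / len(est))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                    # keep R a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / E.var(axis=0).sum()
    return s, R

def ate_rmse(est, gt, gt_scaled=True):
    """RMSE gt-scaled (Sim(3)) if gt_scaled, else RMSE (SE(3))."""
    s, R = sim3_align(est, gt)
    if not gt_scaled:
        s = 1.0                           # SE(3): scale fixed to 1
    t = gt.mean(0) - s * (R @ est.mean(0))
    err = gt - (s * (est @ R.T) + t)
    rmse = np.sqrt((err ** 2).sum(axis=1).mean())
    return rmse, abs(1.0 - s)             # RMSE and scale error
\end{verbatim}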
\subsubsection{The EuRoC Dataset}
The EuRoC micro aerial vehicle (MAV) dataset consists of two scenes, the machine hall and the ordinary room,
{which contain scenes of different scales and complexity. We compare our method with VI-DSO, mesh-VIO, and PVIO, and also conduct ablation experiments, as shown in Tab.\ref{tab:ATE Trans of EuRoC}.
Whether measured by RMSE or by RMSE gt-scaled, PVI-DSO achieves the smallest translation error on most of the sequences, which means that using planar information in a direct-method VIO can improve the accuracy of pose estimation.
In particular, comparing the results of VI-DSO and our method without plane information, the positioning accuracy is essentially the same.
However, after introducing the coplanar constraints into the VIO, the RMSE is reduced by 11\% on average and the RMSE gt-scaled by 12\% on average.
This shows that the structural information in the environment helps improve the accuracy and maintain the global scale estimate, which is also confirmed by the scale-error results.
Meanwhile, the results of PVIO and mesh-VIO show that, since the direct method obtains more visual measurements than the indirect method, our method improves the pose estimation accordingly.
}
\begin{figure}[b]
\centering
\vspace{-2mm}
\includegraphics[scale=0.16]{image/color_map}
\caption{Translation error of RMSE gt-scaled for different methods run 11 times (rows) on each corridor sequence (columns) of the TUM VI dataset.}
\label{fig:color map of tum vi}
\end{figure}
\begin{table}[b!]
\begin{center}
\caption{The RMSE gt-scaled of state-of-the-art methods compared with our method on the TUM VI dataset. The translation (m) and rotation (rad) errors are listed below. The best results are in \textbf{bold}.}
\label{tab:ATE Trans and Rots on TUM VI}
\centering
\setlength{\tabcolsep}{1.3mm}{
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{ccccccccc}
\hline
\multirow{2}{*}{Seq.} & \multicolumn{2}{c}{ORBSLAM3} & \multicolumn{2}{c}{DSO} & \multicolumn{2}{c}{OKVIS} & \multicolumn{2}{c}{ours} \\ \cline{2-9}
& trans. & rot. & trans. & rot. & trans. & rot. & trans. & rot. \\ \hline
corridor1 &0.230&0.216&0.747&0.968&0.451&0.800 &\bf0.114&\bf0.129 \\
corridor2 &\bf0.052&\bf0.058&0.783&1.318&0.478&0.528 &0.379&0.421 \\
corridor3 &0.169&0.127&0.924&2.161&0.485&1.033 &\bf0.159&\bf0.123 \\
corridor4 &0.414&0.510&0.305&0.150& 0.300&0.359&\bf0.118&\bf0.145 \\
corridor5 &0.186&0.034& 0.823&2.000&0.415&0.613&\bf 0.098&\bf0.013 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\subsubsection{The TUM VI Dataset}
We also evaluate our proposed system on the corridor sequences of the TUM VI benchmark dataset, which are collected in a long corridor and several offices and contain many images with blur and lighting changes due to high-speed motion. We independently run the algorithms 11 times on all sequences and calculate the RMSE gt-scaled. Fig. \ref{fig:color map of tum vi} shows the per-run performance of the different algorithms over the 11 runs. Tab. \ref{tab:ATE Trans and Rots on TUM VI} reports the median translation and rotation errors of the 11 runs. {Compared with the other algorithms, our method achieves the highest accuracy on all sequences except corridor2, on which ORBSLAM3 performs better; this demonstrates the effectiveness of IMU measurements and geometric coplanarity information for improving localization accuracy. A VIO based on the direct method can build a denser map; as shown in Fig. \ref{fig:map of tum vi}, the detailed structure of the indoor environment reconstructed by our method makes it easy to detect plane information.}
\begin{figure}[b!]
\centering
\vspace{-4mm}
\includegraphics[scale=0.21]{image/tumvi_map}
\caption{
Map of the corridor3 sequence generated by our method. (a) is the 3D coplanar points of the vertical and horizontal planes in the map. (b) is the reconstructed map; two sub-images show the 2D Delaunay triangulation and the corresponding 3D mesh of the scene, and the pink points in the image are the detected 2D coplanar points. }
\label{fig:map of tum vi}
\vspace{-3mm}
\end{figure}
\subsection{ Weight Determination of Photometric Error}
Since the camera's photometric calibration and distortion correction have a significant influence on direct VIO methods, the weight ratio between the photometric and inertial errors is often difficult to determine. In this paper, we use a parametric study to obtain the standard deviation of the photometric error that determines the weight: {for the EuRoC MAV dataset and the TUM VI dataset, we plot the cumulative error curves of the pose estimation in Fig.\ref{fig:v2_02_standard_derivation} for different standard deviations.} The results in Fig.\ref{fig:v2_02_standard_derivation} (a) show that setting the standard deviation of the photometric error to 11 performs best on the EuRoC dataset; as shown in Fig.\ref{fig:v2_02_standard_derivation} (b), a standard deviation of 16 is optimal for the TUM VI dataset.
\begin{figure}[h!]
\centering
\vspace{2mm}
\includegraphics[scale=0.17]{image/parameter_study}
\caption{Cumulative error plots on the V22 sequence of the EuRoC dataset (a) and the corridor4 sequence of the TUM VI dataset (b). Different standard deviations of the photometric error affect the accuracy and robustness of pose estimation.}
\label{fig:v2_02_standard_derivation}
\vspace{-3mm}
\end{figure}
\subsection{Effect of Plane Prior}
To verify the effectiveness of our proposed plane-distance prior, we test the consistency of the plane parameters on the V11 sequence of the EuRoC dataset, as shown in Fig. \ref{fig:curve of plane prior}.
We can observe that the distance estimate of the plane varies greatly without the prior, while the estimate remains stable after adding the plane prior.
The plane prior encodes historical information and can thus constrain the poses in the sliding window. Meanwhile, the range of variation of the curve without prior information indicates that the elevation of the ground points in the reconstructed map changes from $-0.98$\,m to $-0.86$\,m.
\begin{figure}[t]
\centering
\includegraphics[scale=0.18]{image/V101_plane_prior}
\caption{
The distance estimate of the horizontal plane on the V11 sequence of the EuRoC dataset; only keyframes in which planar features are observed are counted.}
\label{fig:curve of plane prior}
\vspace{-3mm}
\end{figure}
\subsection{Runtime Evaluation}
Tab.\ref{tab:running time} shows the run-time ablation experiments for systems containing different modules.
We measure the mean execution time of the different modules using pure VO, VIO, and our proposed PVI-DSO on EuRoC's V11 sequence, respectively. Considering the real-time performance of the system, we set the number of extracted pixels per keyframe to 800 and the size of the sliding window to 7, following \cite{engel2017direct}. In the pure VO method, two-thirds of the time is spent on optimization, and the rest on tracking pixels, selecting candidate points, activating points, extracting pixels, etc. After adding inertial constraints to obtain a VIO system, there is only a slight increase in time for IMU processing and optimization.
For our proposed PVI-DSO, the system adds a plane detection part and a mesh generation part, but both are very efficient and only need 0.72 ms and 1.08 ms, respectively.
Furthermore, compared with the VIO, the marginalization time is reduced from 0.79 ms to 0.72 ms, and the time for problem solving (computing $\rm x$ from $\rm Ax = b$) and state-variable updating ($\rm x = x + \delta x$) is reduced from 2.41 ms to 2.16 ms, thanks to the novel parameterization adopted by the system, which reduces the number of parameters.
The optimization time increases slightly because computing the photometric error and Jacobians of the coplanar parameterization is more involved. In the sliding window, the average number of state variables for PVI-DSO (plane number $\approx$ 2, point number $\approx$ 1182) is smaller than that of the VIO (point number $\approx$ 1422), confirming that the coplanar parameterization dramatically reduces the number of optimized state variables.
\begin{table}[htbp]
\label{tab:3}
\begin{center}
\caption{Mean execution time (Unit: millisecond) of pure VO, VIO, and our proposed PVI-DSO running on the sequence V11.}
\label{tab:running time}
\centering
\setlength{\tabcolsep}{1.3mm}{
\renewcommand{\arraystretch}{1.2}
\begin{threeparttable}
\begin{tabular}{cccc}
\hline
\multicolumn{1}{c}{Module} & \multicolumn{1}{c}{VO} & \multicolumn{1}{c}{VIO} & \multicolumn{1}{c}{PVI-DSO} \\ \hline
\texttt{Plane Detection} & 0 & 0 & 0.72 \\
\texttt{Mesh Creation} & 0 & 0 & 1.08 \\
\texttt{ Cost Function Construction\tnote{1}} & 16.44 & 16.99 & 17.74 \\
\texttt{\makecell[c]{Problem Solving\&Var Updating\tnote{1}}} & 2.39 & 2.41 & 2.16 \\
\texttt{Marginalization}& 0.77 & 0.79 & 0.72 \\ \hline
\texttt{Total} & 31.97 & 32.54 & 34.58 \\ \hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] The optimization contains cost function construction, problem solving, and state variables updating.
\end{tablenotes}
\end{threeparttable}
}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present PVI-DSO, a novel direct-method VIO system that leverages planar regularities. The direct-method VIO constructs a denser map that contains rich geometric features. The plane regularities in the map are extracted by generating a 3D mesh, and a novel parameterization of coplanar points introduces the plane constraints into the VIO system; meanwhile, the plane-distance cost used in the optimization converts the coplanar constraints into a plane prior. With these methods, introducing planar information into the system adds little computational burden. The experiments show that the trajectory accuracy of our approach is better than that of state-of-the-art visual-inertial odometry. In the future, we will fuse line and plane geometric information into direct-method VIO to further improve the accuracy and robustness of positioning.
\section*{ACKNOWLEDGMENT}
We would like to thank Dr. Yijia He for the helpful discussion. This work was supported by the Foundation for Innovative Research Groups at the National Natural Science Foundation of China (Grant No. 41721003) and the fellowship of China National Postdoctoral Program for Innovative Talents (Grant No. BX20200251).
\balance
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
\label{sec:introduction}
Simultaneous Localization and Mapping (SLAM) are research hotspots in the field of robots, autonomous driving, augmented reality, etc. Camera and inertial measurement units (IMU) are low-cost and effective sensors. The Visual Inertial Odometry (VIO) combines the complementarity of the two sensors to improve the accuracy and robustness of the pose estimation. Existing VIO methods \cite{VINS, ORBSLAM3, mourikis2007multi} generally use the feature-based (indirect) method as visual measurements. However, in the weak texture environment, the number of effective point features extracted by the indirect method is small, leading to the failure of pose estimation. On the other hand, the direct method \cite{engel2017direct, von2018direct} can utilize all available pixels of images to generate a more complete model, which is more robust.
Since the VIO can build the map of the surrounding environment in real-time, by using the geometric constraint information of the map, we can improve the positioning accuracy and the quality of the map.
The widely existing edges in the environment are the most common features which add different types of visual measurement to the estimator \cite{PL-VIO,pumarola2017pl,structslam, li2018direct}. Meanwhile, the planar features in the environment can also be used to increase the robustness of the estimator. However, the reconstructed map with the indirect method is too sparse, which makes it difficult to extract planar regularities, and the planar features will increase the computational burden that affects the real-time performance of the system.
For the VIO based on the direct method, the visual module tracks the pixels with large enough intensity gradients, sufficient visual observation makes the reconstructed map denser. Meanwhile, with the aid of the IMU information, it is easier to extract the plane information in the map, as is shown in Fig.\ref{fig:cover_grap}. Furthermore, introducing the geometric information that is less sensitive to the luminance changes into the direct method makes the system more robust.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{image/cover_graph.pdf}
\caption{The proposed direct sparse visual-inertial odometry builds a sparse map running on the V11 sequence of the EuRoC dataset. (a) is the reconstruction map of the whole scenes, (b) is the coplanar points on different planes, the different colors are used to distinguish the planes. (c) is the 2D Delaunay triangulation, depth map, and reconstructed 3D mesh of the corner in the scene. }
\label{fig:cover_grap}
\vspace{-6mm}
\end{figure}
This paper proposes a direct sparse visual inertial odometry that leverages planar regularities called PVI-DSO. The method is an extension of DSO, we complement the IMU measurement constraints and coplanar constraints. Unlike the method of \cite{von2018direct}, we use a novel way to fuse the IMU measurement information, which doesn't estimate the transformation between the DSO frame and the metric frame. Meanwhile, we extract the coplanar regularities from the generated 3D mesh, and inspired by \cite{li2020co}, in the photometric error function, we adopt the plane parametric representation whose performance has been proved in \cite{li2020co}. Through the above methods, adding plane features in the system doesn't increase much computational burden. In summary, the main contribution of this work include:
\begin{itemize}
\item To our best knowledge, PVI-DSO is the first direct sparse visual-inertial odometry that fuses coplanar constraint regularities to improve the accuracy of localization.
\item { We introduce a coplanar point parameterization in the direct method that constructs photometric error constraints, which is an extension of \cite{li2020co}. The plane-distance cost used in this paper converts the coplanar constraints into the plane prior, which can enforce the coplanar constraints in the optimization without increasing extra computation cost.}
\item We design extensive experiments on the challenging EuRoC and TUMVI datasets. The experimental results show that the proposed PVI-DSO fusing IMU constraints and coplanar regularities into the direct method outperforms the state-of-the-art.
\end{itemize}
\section{RELATED WORK}\label{sec:related work}
{The most common features used in the SLAM / VIO algorithms are the points \cite{mourikis2007multi, VINS, ORBSLAM3}, \cite{engel2017direct}, \cite{forster2014svo}, as a complement to the point features, the geometric information (line, plane) introduced in the system based on points can improve the accuracy of the pose estimation and mapping, which have received extensive attention in recent years.}
\textbf{Indirect method with geometric Regularities} \ Since the line features exist widely in the environment, it seems natural to fuse the line features into the framework based on points \cite{PL-VIO,pumarola2017pl, Tsai2018}. Furthermore, in the structural scenario, the lines with three perpendicular directions of the Manhattan world model can encode the global orientations of the local environments, which are leveraged to improve the robustness and accuracy of pose estimation \cite{Zou2019, structslam, xu2021leveraging}. For the planar information, the main difficulty is how to accurately extract the planar regularities in the environment. \cite{salas2014dense,zhang2019point,ma2016cpa} extract plane features with the assistance of depth map obtained by the RGBD camera. Lu \textit{et al.} proposed \cite{lu2015visual} uses the RANSAC method to fit the plane among the estimated 3D points with RGB camera. Nevertheless, the RANSAC-based method can only be used when there is only one potential large plane in the environment and consumes a lot of time. More Recently, \cite{rosinol2019incremental, li2020leveraging} get the planar regularities from the 3D mesh generated by the VIO. However, Sparse point clouds generated by indirect-based vision algorithms make the location of the 3D mesh inaccurate.
\textbf{Direct method with geometric Regularities} \ Direct methods minimize photometric errors to estimate pose and reconstruct the map. Geometric information can reduce the impact of drastic lighting changes on the system. \cite{krombach2016combining} combines the point feature-based matching method with semi-dense direct image alignment to initialize the system robustly, which takes advantage of the two methods. For the line features utilized in the direct method, \cite{yang2017direct, li2018direct, gomez2016pl} only use the straight lines to force the 3D points in the map, which satisfies the collinear constraint, not optimize jointly optimize the pose of the system. \cite{zhou2021dplvo} introduces the collinear constraint into the DSO and improves the pose estimation accuracy with the line of 2 degrees of freedom (DoF) representation. \cite{cheng2020direct} expresses the line features in the image as Manhattan world regularity and merges the structural information into the optimization framework of photometric error.
\section{SYSTEM OVERVIEW}\label{sec:system overview}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{image/Framework.pdf}
\vspace{-5mm}
\caption{Overview of our PVI-DSO system.} \label{fig:pipeline}
\vspace{-3mm}
\end{figure}
The system proposed in this paper is based on DSO\cite{engel2017direct}. To improve the accuracy and robustness of the system, we introduce the IMU measurement and geometric information in the environment. As shown in Fig. \ref{fig:pipeline}, the proposed system contains two main modules running in parallel: the front end and the back end.
In the front end, raw measurements from the IMU are processed. With the aid of IMU information, the most recent frame's photometric parameters and pose can be estimated robustly by coarse tracking. And then, we judge whether the current frame is the keyframe. If the current frame is a keyframe, it will be delivered to the back end module. Otherwise, it is only used to track the candidate points to update the depth.
In the back end, the candidate points are tracked and refined with the latest image to obtain more accurate depth, they are selected as the activate points if satisfy some criterion as described in \cite{engel2017direct}. And then, the depth map generated by the activate points is used to form the 3D mesh and planes. The coplanar constraints extracted from the planes are used to constraint the pose of the system. Finally, the operations for the visual-inertial bundle adjustment are executed. We add the non-coplanar point residual, coplanar point residual, inertial residual, and corresponding prior residual into the optimizer. The plane detection, residual construction, optimization, and marginalization of the system will be described in Sec.\ref{sec: direct sparse visual-inertial odometry}.
\section{Notations and Preliminaries} \label{sec:algorithm}
Throughout the paper, we will use the following notation: the bold upper letters represent matrices, bold lower letters represent vectors, and light lower letters represent scalars. In this section, we introduce the coordinate transformation and the representation of point and plane features.
\subsection{Coordinate Transformation} \label{alg: coordinate system}
The world coordinate system is defined as a fixed inertial framework that the Z axis is aligned with the gravity direction. We define the transformation from the IMU frame to world frame is $\textbf{T}_{wi} \in \textbf{SE}\left(3\right)$, the transformation from camera frame to IMU frame is $\textbf{T}_{ic} \in \textbf{SE}\left(3\right) $(which is calibrated in advance and fixed in the system). So the transformation from camera frame to world frame is calculated by $\textbf{T}_{wc} = \textbf{T}_{wi} * \textbf{T}_{ic}$. We denote Lie algebra elements as $\bm{\hat{\xi}} \in \mathfrak{se}\left(3\right)$ where $\bm{\xi} \in \mathbb{R}^{6}$, exponential and logarithmic map associate $\textbf{SE}\left(3\right)$ and $\mathfrak{se}\left(3\right)$.
\begin{figure}[t]
\vspace{2mm}
\centering
\includegraphics[width=0.8\linewidth]{image/point_planar_nonplanar.pdf}
\vspace{-5mm}
\caption{The coplanar regularities of pixels in the image are detected, and the coplanar points are on the planar region $\bm{\pi}_i$. The planar parameters are expressed in the world frame with a normal vector $\bm{n}_i$ and a distance $d_i$. }
\label{fig:point_planar_nonplanar}
\vspace{-3mm}
\end{figure}
\subsection{Point Representation}
The pixels extracted are expressed by the inverse depth $d_p\in \mathbb{R}$ in the host image frame. In order to construct the photometric error cost function, we need to transform the pixel in the host image frame $I_h$ into the target image frame $I_t$. Assuming $\bm{p}$ is the pixel in the $I_h$ and it is observed at $\bm{p}'$ in the $I_t$, the relationship of the pixel is:
\begin{flalign}
\begin{array}{c}
\bm{p'} = \Pi_c\left(\bm{R}_{th}\Pi_c^{-1}\left(\bm{p}, d_p\right) + \bm{t}_{th}\right)
\end{array}\label{fla: point transform}
\end{flalign}
where $\bm{R}_{th}$ and $\bm{t}_{th}$ are the relative rotation and translation from $I_h$ to $I_t$, $\Pi_c$ and $\Pi_c^{-1}$ are the projecton and back projection of the camera.
\subsection{Plane Representation}
As is shown in Fig.\ref{fig:point_planar_nonplanar}, a plane in the world frame can be represented by $\bm{\pi} =\begin{bmatrix}\bm{n}^{\rm T}, d^{\rm T} \end{bmatrix}^{\rm T} $ where $\bm{n} \in \mathbb{R}^3$ is the normal vector of plane and $d \in \mathbb{R}$ is the distance from origin of the world frame to the plane. The normal vector $\bm{n}$ has three parameters but only two Degrees of Freedom (DoF) with $||\bm{n}||_2 = 1$.
To get a minimal parameterization of $\bm{\pi}$ for optimization, we specialize the parameters of a general plane into the vertical plane $^v\bm{\pi}$ that the normal vector is perpendicular to the gravity direction or the horizontal plane $^h\bm{\pi}$ that the normal vector is parallel for the gravity direction. For horizontal plane, we fix the normal vector $^h\bm{n}$ to $\begin{bmatrix} 0, 0, 1 \end{bmatrix}^{\rm T} $, and only need to optimize the distance $^h d$. For vertical plane, since the normal vector is perpendicular to the gravity direction, it has only one DoF, we represent it by the form
\begin{flalign}
^v\bm{\pi}= \left( cos(\psi)cos(\phi), cos(\psi) sin(\phi), sin(\psi) , d \right)^{\rm T}
\end{flalign}
where $\phi$ and $\psi$ are the azimuth and elevation angles of the normal vector and $d$ is the distance from the origin of the world frame. As shown in Fig. \ref{fig:plane_parameter}, we fix the elevation angle $\psi$ to zero and only optimize the azimuth angle $\phi$ and the distance $d$.
The transformation of the plane parameters from world frame to camera frame has the form:
\begin{flalign}
\bm{\pi}{_c}= \textbf{T}_{cw}^{-\rm T}\bm{\pi}{_w}
\label{fla: transformation matrix of plane}
\end{flalign}
Using (\ref{fla: transformation matrix of plane}), we can convert the plane of world frame and construct the photometric error function about plane parameters in the camera frame.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth]{image/plane_parameter.pdf}
\vspace{-5mm}
\caption{ The 2 DoF parameter representation of the normal vector $\bm{n}$.}
\label{fig:plane_parameter}
\vspace{-3mm}
\end{figure}
\section{VIO WITH COPLANAR REGULARITIES } \label{sec: direct sparse visual-inertial odometry}
{In this section, the method of coplanarity detection is firstly described. And then, we will fuse the inertial constraint and coplanar constraint into the non-linear optimization framework of \cite{engel2017direct} to estimate the body state and 3D landmarks. Meanwhile, in the optimization, we use the coplanar parameter expression to construct the photometric error, which not only guarantees accuracy but also reduces the number of optimization state variables. When the coplanar point-anchored frame is marginalized, we convert these constraints into plane distance-costs as the plane prior.
Finally, since the minimized energy functional is highly non-convex, the system proposed in this paper adopts dynamic initialization, a good initial value ensures the robustness of the system initialization in complex environments.}
We optimize all the state variables in the sliding window by minimizing the sum of the energy function from visual residual, IMU residual, and prior residual:
\begin{flalign}
E_{total} = \lambda \cdot E_{point} + \lambda \cdot E_{plane} + E_{inertial} \label{fla: total cost function} + E_{prior}
\end{flalign}
where $E_{point}$ is the photometric error of non-coplanar points (section \ref{alg: photometric error of non-coplanar point}), $E_{plane}$ is the photometric error of coplanar points (section \ref{alg: photometric error of coplanar point}), $E_{inertial}$ is the the inertial error term (section \ref{alg: photometric error of inertial error}), and $E_{prior}$ is the prior from marginalization operator (section \ref{alg: Marginalization with coplanar constraint}), respectively. $\lambda$ is the weight between visual photometric error and inertial error.
\subsection {Coplanarity Detection}\label{alg: coplanarity detection}
In this paper, we detect plane information from the 3D mesh. Since it is challenging to construct 3D mesh from 3D landmarks directly, we build 2D Delaunay triangulation \cite{hosseinzadeh2017sparse} in the image and project the triangular regularities to 3D landmarks. The direct method does not compute the matching relationship of points, making organizing the same triangular patch observations in different image frames difficult. Therefore, we generate 2D Delaunay triangulation in {the depth map of DSO \cite{engel2017direct}, which spans multiple image frames. In this way, the 3D mesh is anchored to the time frame of the sliding window, limiting memory usage.}
{For plane detection, we only detect the planes that are either vertical (i.e., walls, the normal is perpendicular to the gravity direction) or horizontal (i.e., floor, the normal is parallel for the gravity direction), which are commonly found in man-made environments.} For horizontal plane detection, we build a 1D histogram of the average height of triangular patches that the normal vector parallel for the gravity direction, and then the local maximums of the histogram are extracted after a Gaussian filter, as described in \cite{rosinol2019incremental}. We consider the local maximum to be the horizontal plane when it exceeds a certain threshold ($\sigma_t = 20$). For vertical plane detection, a 2D histogram is built, one axis is the azimuth of the plane's normal vector, the other axis is the distance from the origin of the plane, we only count the triangular patches that the normal vector is perpendicular to the gravity direction, after statistics, we select the vertical planes in a similar way of horizontal plane detection.
We maintain all historical planes in the system. Therefore, for the newly detected planes, we need to detect whether the historical plane already exists according to its parameters and merge the similar planes when the angle and distance parameters are less than a certain threshold. Meanwhile, the coplanar points associated with the latest plane are also transferred to the historical plane, for efficiency, we only add the coplanar constraints detected from the current local 3D mesh into the sliding window. In the marginalization, the coplanar constraints on the plane parameters are removed and converted to the plane-distance cost, which contains the information of historical plane and is used as the prior in the subsequent optimization.
\subsection{Photometric Error of Non-coplanar Point } \label{alg: photometric error of non-coplanar point}
The direct method is based on the photometric invariance hypothesis to minimize the photometric error. In a similar way as \cite{engel2017direct}, the photometric error for a point $\bm{p}$ with inverse depth $d_{\bm{p}}$ in the host image $I_h$ observed by the target image $I_t$ is defined as:
\begin{flalign}
E_{p_j} = \sum\limits_{\bm{p} \in \mathcal{N}_{\bm{p}}} w_{\bm{p}} \|
\left( I_t\left[\bm{p'}\right] - b_t\right) - \frac{t_t e^{a_t}}{t_he^{a_h}}\left(I_h\left[\bm{p}\right]-b_h\right) \|_{\gamma} \label{fla: photometric error}
\end{flalign}
where $t_h$, $t_t$ are the exposure times of the respective image $I_h$ and $I_t$, $a_h$, $a_t$, $b_h$ and $b_t$ are the affine illumination transform parameters, $\mathcal{N}_p$ is a small set of pixels around the point $\bm{p}$,
$w_{\bm{p}}$ is a gradient-dependent weighting and $\| \cdot\|_{\gamma}$ is the Huber norm, $\bm{p}'$ is the point projected into $I_t$ and the relationship between $\bm{p}$ and $\bm{p}'$ is given by (\ref{fla: point transform}).
So the sum of the photometric error of non-coplanar points observed by all the keyframes in the sliding window is defined as:
\begin{flalign}
E_{point} = \sum\limits_{i \in \mathcal{F}} \sum\limits_{\bm{p} \in \mathcal{P}_i} \sum\limits_{j \in obs(\bm{p})} E_{\bm{p}_j}
\end{flalign}
where $\mathcal{F}$ is the keyframes in the sliding window, $\mathcal{P}_i$ is the set of non-coplanar points in the host keyframe $i$, $obs(\bm{p})$ is a set of observations of the $\bm{p}$ in the other keyframes.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{image/hessian_matrix_of_coplanar_points.pdf}
\vspace{-5mm}
\caption{The Hessian matrix of the photometric error. (a) 2 coplanar points and 2 non-coplanar points are observed by camera $O_1$ and camera $O_2$, the coplanar points lie on the plane $\bm{\pi}$; (b) the traditional cost function of coplanar points needs to optimize the inverse depth; (c) the cost function of this paper only optimizes the parameters of plane.}
\label{fig:hessian matrix of coplanar points}
\vspace{-3mm}
\end{figure}
\subsection{Photometric Error of Coplanar Point } \label{alg: photometric error of coplanar point}
The plane $\bm \pi_w$ detected from the 3D mesh is expressed in the world frame, in order to construct the photometric error between the host image and the target image, we convert the $\bm \pi_w$ into the host image frame to get $\bm \pi_c$ using (\ref{fla: transformation matrix of plane}).
When we detect the point $\bm p_c $ in the image that the corresponding 3D point $\bm{p}_{\bm{\pi}_c}$ lies on the plane $\bm{\pi}_c$ , the point $\bm p_c$ satisfies the coplanar equation
\begin{flalign}
z \cdot \bm{n}_c^{\rm T} \Pi_c^{-1}(\bm{p}_c, 1) + d_c = 0
\end{flalign}
where $z$ is the depth from the origin of the camera frame to the 3D point of $\bm p_c$, $\bm n_c$ is the normal vector of $\bm \pi_c$, and $d_c$ is the distance of $\bm \pi_c$.
With the depth $z$, we can rewrite (\ref{fla: point transform}) as (\ref{fla: coplanar_point transform}) and construct the photometric error $E_{\bm p_c}$ of coplanar point by (\ref{fla: photometric error})
\begin{flalign}
\begin{array}{c}
\bm{p_c}' = \Pi_c\left(\bm{R}_{th}\Pi_c^{-1}\left(\bm{p}_c, 1/z\right) + \bm{t}_{th}\right)
\end{array}\label{fla: coplanar_point transform}
\end{flalign}
The sum of the photometric error of coplanar points observed by all the keyframes in the sliding window is:
\begin{flalign}
E_{plane} = \sum\limits_{i \in \mathcal{F}} \sum\limits_{\bm{p} \in \mathcal{C}_i} \sum\limits_{j \in obs(\bm{p})} E_{\bm{p}_c}
\end{flalign}
where $\mathcal{C}_i$ is the set of coplanar points in the host keyframe $i$. For a single residual, the jacobian corresponding to the camera frame can be decomposed as \cite{engel2017direct}:
\begin{flalign}
\bm{J}_k = [\bm{J}_I \cdot \bm{J}_{geo}, \bm{J}_{photo}]
\end{flalign}
where $\bm{J}_I$ is the image gradient, $\bm{J}_{geo}$ is the jacobian of geometric parameters ($\textbf{T}_h$, $\textbf{T}_t$), $\bm{J}_{photo}$ is the jacobian of photometric parameters ($a_h$, $a_t$, $b_h$, $b_t$). Specially, we note the jacobian terms in $\bm{J}_k$ that related to the host image frame as $\bm{J}_h$, the jacobian terms in $\bm{J}_k$ that related to the target image frame as $\bm{J}_t$, the jacobian of $\bm{\pi_{w}}$ as $\bm{J}_{\pi}$ so the jacobian of the coplanar point on the plane can be written as :
\begin{flalign}
\bm{J}_{coplanar} = [\bm{J}_h, \bm{J}_t, \bm{J}_{\pi}]
\end{flalign}
we only need to construct the Hessian matrix of the host image frame $\bm{H}_{h}$, target image frame $\bm{H}_{t}$ and plane parameters $\bm{H}_{\pi}$, not the inverse depth of coplanar points, as is shown in Fig.\ref{fig:hessian matrix of coplanar points}, in this way, the number of state variables added to the optimizer can be effectively reduced, which limits the computation cost of the optimization.
\subsection{Inertial Error} \label{alg: photometric error of inertial error}
To construct the inertial error with angular velocity and linear acceleration, we use the pre-integration method proposed in \cite{forster2015imu} to handle the high frequencies of IMU measurements.
This gives a pseudo-measured value of IMU between consecutive keyframes.
Similar to \cite{forster2015imu}, the pre-integration IMU factors we build include relative rotation and translation, velocity, and gyroscope and accelerometer biases. The biases are fixed in each single pre-integration block.
\subsection{Marginalization with Coplanar Constraints}\label{alg: Marginalization with coplanar constraint}
{We utilize the bundle adjustment to optimize the state variables in the sliding window. When the new image frame is added into the sliding window, all the variables corresponding to the older frame are marginalized using the Schur complement.
For the state variable of the plane, to avoid maintaining too many historical planes in the sliding window, we also need to "marginalize" them out.
Since the removed planes may be added into the optimizer again later, we convert the coplanar constraints related to the plane into the plane-distance cost, which is regarded as the plane prior to enforcing coplanar constraints. In this way, we do not need to increase much computation cost to generate the plane prior.}
\subsubsection{Marginalization of non-coplanar and inertial constraints}
When the number of keyframes in the sliding window is greater than a certain threshold, we select the marginal keyframe similar to the criteria in \cite{engel2017direct}, which considers the luminance change of the images and the geometry distribution of the poses. To maintain the consistency of the system, once the variable is connected to the marginalization factor, we use the First Jacobian Estimation (FEJ) \cite{huang2009first} to fix the variable's linearization point at the same value.
\subsubsection{Plane-distance cost for coplanar constraints}
For coplanar points, instead of using the photometric error term as the prior, we consider the coplanar points as a whole plane and directly use the plane-distance cost as the prior constraint. The plane-distance cost can be expressed by the distance from the prior plane $\bm{\pi}'= (\phi', \psi', d')$ to the current plane $\bm{\pi} = ( \phi , \psi , d)$:
\begin{flalign}
E_{\bm{\pi}_p} = w_n \|\left[ \phi', \psi', d' \right]^{\rm T}- \left[ \phi , \psi , d \right]^{\rm T}\|_{\sum_{\bm{\pi}}}^2
\end{flalign}
where $\sum_{\bm{\pi}}$ is the covariance matrix of constraint, $w_n$ is the number of the points on the plane.
\subsection{Initialization and Observability}
{Compared to pure visual odometry, visual-inertial odometry fusing IMU data makes metric scale, roll, and pitch observable.
This means that the state variables must be properly initialized. Otherwise, the optimization may diverge, leading to system failure.
Initialization can be divided into dynamic initialization and static initialization.
For the problem of insufficient visual parallax during static initialization, we use dynamic initialization in the system.
Optical flow tracking is introduced in the initialization, which improves the robustness of VIO initialization based on the direct method.
Similar to \cite{VINS}, we run SFM (Structure From Motion) to compute the rough pose of the frame, which is then aligned with the state variables obtained through the IMU integration for joint initialization.}
\begin{table*}[htbp]
\label{tab:1}
\begin{center}
\vspace{5mm}
\caption{The trajectory error (m) of different methods on EuRoC dataset. In \textbf{bold} the best of RMSE gt-scaled results, in \textcolor{blue}{blue} the best of RMSE results. }
\label{tab:ATE Trans of EuRoC}
\centering
\setlength{\tabcolsep}{2.0mm}{
\begin{tabular}{cc|ccccccccccc}
\toprule
Sequence& & MH1& MH2&MH3&MH4&MH5&V11&V12&V13&V21&V22&V23 \\\midrule
\multirow{3}{*}{\makecell[c]{VI-DSO \\ (raw data from \cite{von2018direct})} } &RMSE& 0.062 & \textcolor{blue}{0.044}& 0.117 & 0.132 & 0.121 & 0.059 & \textcolor{blue}{0.067} & 0.096 &\textcolor{blue}{0.040} & 0.062 & 0.174 \\
&RMSE gt-scaled& 0.041 &\bf 0.041 & 0.116 & 0.129 & \bf 0.106 & 0.057 &\bf 0.066 & 0.095 & \bf 0.031 & 0.060 & 0.173 \\
&Scale Error(\%) &1.1 & 0.5 & 0.4 & \bf 0.2 & 0.8 & 1.1 & \bf 1.1 & \bf 0.8 & 1.2 & \bf 0.3 & 0.4 \\\midrule
\multirow{3}{*}{our method no plane} & RMSE & 0.066 & 0.057 & 0.065& 0.110 & 0.117& 0.057 & 0.090 & 0.092 & 0.045 & 0.060 & 0.123\\
& RMSE gt-scaled & 0.052 & 0.056 & 0.062 & 0.099 & \bf 0.106 & 0.054 & 0.087 & 0.089 & 0.045 & 0.056 & 0.123 \\
& Scale Error(\%) & 0.9 & \bf 0.2 & 0.5 & 0.7 & 0.7 & 0.9 & 1.3 & 1.5 & \bf 0.3 & 1.0 & \bf 0.3 \\ \midrule
\multirow{3}{*}{our method with plane} &RMSE &\textcolor{blue}{0.051} & 0.051 & \textcolor{blue}{0.057} & \textcolor{blue}{0.103} & \textcolor{blue}{0.115} & \textcolor{blue}{0.051} & 0.082 & \textcolor{blue}{0.082} & 0.043 &\textcolor{blue}{0.048} & \textcolor{blue}{0.111} \\
&RMSE gt-scaled & \bf 0.039 & 0.048 & \bf 0.055 & \bf 0.087 & 0.109 & \bf 0.049 & 0.079 & \bf 0.078 & 0.042 & \bf 0.044 & \bf 0.110 \\
& Scale Error(\%) & \bf 0.8 & 0.3 & \bf 0.3 & 0.8 & \bf 0.5 & \bf 0.7 & \bf 1.1 & 1.6 & 0.5 & 0.9 & \bf 0.3 \\ \midrule
\makecell[c]{ mesh-VIO}& RMSE & 0.145 & 0.130 & 0.212 & 0.217 & 0.226 & 0.057 &0.074 & 0.167 & 0.081 & 0.103 & 0.272 \\ \midrule
\makecell[c]{ PVIO}& RMSE & 0.163 & 0.111 & 0.119 & 0.353 & 0.225 & 0.082 &0.113 & 0.201 & 0.063 & 0.157 & 0.280 \\ \midrule
\end{tabular}
}
\end{center}
\vspace{-5mm}
\end{table*}
\section{Experiments}
\label{sec:experiment}
We evaluate our system on the publicly available EuRoC MAV dataset \cite{Burri2016} and TUM VI dataset \cite{Schubert2018}. Besides, we provide a video
in the supplementary to reflect the results of the experiment more intuitively. We run the system in the environment with Intel Core i7-9750H@ 2.6GHz, 16GB memory.
\subsection{Quantitative Evaluation}
To verify the performance of the system, we compared the accuracy of our system with VI-DSO \cite{von2018direct}, DSO \cite{engel2017direct}, ORBSLAM3 without loop closure \cite{ORBSLAM3}, OKVIS \cite{Leutenegger2014}, PVIO \cite{li2019robust} and mesh-VIO \cite{rosinol2019incremental}. VI-DSO and DSO are the typical slam based on the direct method, ORBSLAM3 and OKVIS are state-of-art with indirect method, mesh-VIO and PVIO use the plane information in the VIO system, through comparison, the system performance can be fully reflected. {The accuracy of the trajectory is measured by aligning the estimated trajectory with groundtruth, we calculated the RMSE of the trajectory error with SE3 alignment and Sim(3) alignment (with the description "RMSE gt-scaled"). Scale error is computed using scale $s$ from Sim(3) alignment, as $| 1-s|$.}
\subsubsection{The EuRoC Dataset}
The EuRoC micro aerial vehicle (MAV) dataset consists of two scenes, the machine hall and the ordinary room,
{which contain scenes with different scales and complexity. We compared our method with VI-DSO, mesh-VIO, and PVIO. Meanwhile, the ablation experiments are also conducted, as shown in Tab.\ref{tab:ATE Trans of EuRoC}.
Whether from the results of RMSE or RMSE gt-scaled, PVI-DSO achieves the smallest translation error on most of the sequences, which means using the planar information in the direct method VIO can improve the accuracy of pose estimation.
Especially, by comparing the experiment results of VI-DSO and our method without plane information, the positioning accuracy is basically the same.
However after introducing the coplanar constraints into VIO, the error of RMSE was reduced by an average of 11\%, the error of RMSE gt-scaled reduced by an average of 12\%.
We can see that the structure information in the environment can help improve the accuracy and maintain the global scale estimation, which is also proved by the results of scale error.
Meanwhile, from the experimental results of PVIO and mesh-VIO, since the direct method can obtain more visual measurement compared with the indirect method, our method thereby improves the estimation of the pose.
}
\begin{figure}[b]
\centering
\vspace{-2mm}
\includegraphics[scale=0.16]{image/color_map}
\caption{Translation error of RMSE gt-scaled for diffferent methods run 11 times (rows) on each corridor sequence (colums) of TUM VI dataset.}
\label{fig:color map of tum vi}
\end{figure}
\begin{table}[b!]
\label{tab:1}
\begin{center}
\caption{The RMSE gt-scaled of the state-of-art methods compare to our method on TUM VI dataset. The translation (m) and rotation (rad) error are list as follows. In \textbf{bold} the best result}
\label{tab:ATE Trans and Rots on TUM VI}
\centering
\setlength{\tabcolsep}{1.3mm}{
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{ccccccccc}
\hline
\multirow{2}{*}{Seq.} & \multicolumn{2}{c}{ORBSLAM3} & \multicolumn{2}{c}{DSO} & \multicolumn{2}{c}{OKVIS} & \multicolumn{2}{c}{ours} \\ \cline{2-9}
& trans. & rot. & trans. & rot. & trans. & rot. & trans. & rot. \\ \hline
corridor1 &0.230&0.216&0.747&0.968&0.451&0.800 &\bf0.114&\bf0.129 \\
corridor2 &\bf0.052&\bf0.058&0.783&1.318&0.478&0.528 &0.379&0.421 \\
corridor3 &0.169&0.127&0.924&2.161&0.485&1.033 &\bf0.159&\bf0.123 \\
corridor4 &0.414&0.510&0.305&0.150& 0.300&0.359&\bf0.118&\bf0.145 \\
corridor5 &0.186&0.034& 0.823&2.000&0.415&0.613&\bf 0.098&\bf0.013 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\subsubsection{The TUM VI Dataset}
We also evaluate our proposed system on the corridor sequences of the TUM VI benchmark dataset, which is collected in a long corridor and several offices and contains many images with blurring and lighting changes due to high-speed motion. We independently ran the algorithm 11 times on all sequences and calculated RMSE gt-scaled. Fig. \ref{fig:color map of tum vi} shows the different performance results of different algorithms running 11 times. Tab. \ref{tab:ATE Trans and Rots on TUM VI} shows the median translation and rotation error of the 11 results. {Compared to other algorithms, our method achieves the highest accuracy on almost all the sequences except for the corridor2 sequence, on which ORBSLAM3 performs better, this demonstrates the effectiveness of IMU measurements and geometric coplanarity information for improving localization accuracy. The VIO based on the direct method can build a more dense map, as shown in Fig. \ref{fig:map of tum vi}, the detailed structure of the indoor environment constructs by our method makes it easy to detect the plane information.}
\begin{figure}[b!]
\centering
\vspace{-4mm}
\includegraphics[scale=0.21]{image/tumvi_map}
\caption{
Map of the corridor3 sequence generated by our method. (a) shows the 3D coplanar points of the vertical and horizontal planes in the map. (b) shows the reconstructed map; the two sub-images show the 2D Delaunay triangulation and the corresponding 3D mesh of the scene, and the pink points in the image are the detected 2D coplanar points. }
\label{fig:map of tum vi}
\vspace{-3mm}
\end{figure}
\subsection{Weight Determination of Photometric Error}
Since the camera's photometric calibration and distortion correction have a significant influence on direct VIO methods, the weight ratio of the photometric and inertial errors is often difficult to determine. In this paper, we conduct a parametric study to obtain the standard deviation of the photometric error, which determines the weight. {For the EuRoC MAV dataset and the TUM VI dataset, we plot the cumulative error curves of the pose estimation in Fig. \ref{fig:v2_02_standard_derivation} for different standard deviations.} The results in Fig. \ref{fig:v2_02_standard_derivation}(a) show that setting the standard deviation of the photometric error to 11 performs best on the EuRoC dataset; as shown in Fig. \ref{fig:v2_02_standard_derivation}(b), a standard deviation of 16 is optimal for the TUM VI dataset.
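To make the role of this parameter explicit, consider a generic weighted least-squares coupling of the two error terms (a sketch of the standard formulation, not necessarily the exact form of our implementation; the notation $r_{\rm ph}$, $r_{\rm imu}$, and $\Sigma_{\rm imu}$ is introduced here only for illustration):
\begin{displaymath}
E_{\rm total} = \sum_{i} \frac{\left\|r_{{\rm ph},i}\right\|^2}{\sigma_{\rm ph}^2} + \sum_{j} \left\|r_{{\rm imu},j}\right\|^2_{\Sigma_{{\rm imu},j}^{-1}},
\end{displaymath}
where $r_{{\rm ph},i}$ are photometric residuals with standard deviation $\sigma_{\rm ph}$ and $r_{{\rm imu},j}$ are inertial residuals weighted by their covariances $\Sigma_{{\rm imu},j}$. A larger $\sigma_{\rm ph}$ down-weights the photometric term relative to the inertial term, which is why the optimal value differs between datasets with different photometric calibration quality.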
\begin{figure}[h!]
\centering
\vspace{2mm}
\includegraphics[scale=0.17]{image/parameter_study}
\caption{Cumulative error plots on the V22 sequence of the EuRoC dataset (a) and the corridor4 sequence of the TUM VI dataset (b). Different standard deviations of the photometric error affect the accuracy and robustness of the pose estimation.}
\label{fig:v2_02_standard_derivation}
\vspace{-3mm}
\end{figure}
\subsection{Effect of Plane Prior}
To verify the effectiveness of our proposed plane-distance prior, we test the consistency of the plane parameters on the V11 sequence of the EuRoC dataset, as shown in Fig. \ref{fig:curve of plane prior}.
We can observe that the distance estimate of the plane varies greatly without the prior, while the estimation remains stable after adding the plane prior.
The planar prior encodes historical information that constrains the poses in the sliding window. Meanwhile, the range of variation of the curve without the prior also indicates that the elevation of the ground points in the reconstructed map changes from -0.98 m to -0.86 m.
\begin{figure}[t]
\centering
\includegraphics[scale=0.18]{image/V101_plane_prior}
\caption{
The distance estimation of the horizontal plane in the V11 sequence of the EuRoC dataset; we count the keyframes in which planar features are observed.}
\label{fig:curve of plane prior}
\vspace{-3mm}
\end{figure}
\subsection{Runtime Evaluation}
Tab. \ref{tab:running time} shows run-time ablation experiments for systems containing different modules.
We measure the mean execution time of the different modules using pure VO, VIO, and our proposed PVI-DSO on the V11 sequence of EuRoC. Considering the real-time performance of the system, we set the number of extracted pixels in a keyframe to 800 and the size of the sliding window to 7, following \cite{engel2017direct}. In the pure VO method, two-thirds of the time is spent on optimization, and the rest is spent on tracking pixels, selecting and activating candidate points, extracting pixels, etc. After adding inertial constraints to form a VIO system, there is only a slight increase in the time spent on IMU information processing and optimization.
For our proposed PVI-DSO, the system adds a plane detection module and a mesh generation module, but both are very efficient and need only 0.72 ms and 1.08 ms, respectively.
Furthermore, compared with the VIO, the time for marginalization is reduced from 0.79 ms to 0.72 ms, and the time for problem solving (computing $\rm x$ from $\rm Ax = b$) and state-variable updating ($\rm x = x + \delta x$) is also reduced from 2.41 ms to 2.16 ms, because the parameterization adopted by the system reduces the number of parameters.
The optimization time increases slightly because computing the photometric error and Jacobians for the coplanar parameterization is more involved. In the sliding window, the average number of state variables for PVI-DSO (plane number $\approx$ 2, point number $\approx$ 1182) is smaller than that of the VIO (point number $\approx$ 1422), showing that the coplanar parameterization dramatically reduces the number of optimized state variables.
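For context, the problem-solving and state-updating step timed above is the standard Gauss-Newton step of a sliding-window solver. As a minimal sketch (with $\mathbf{J}$, $\mathbf{W}$, and $\mathbf{r}$ denoting the stacked Jacobian, weight matrix, and residual vector; this notation is introduced here only for illustration), each iteration solves
\begin{displaymath}
\mathbf{A}\,\delta\mathbf{x} = \mathbf{b}, \qquad \mathbf{A} = \mathbf{J}^{\rm T}\mathbf{W}\mathbf{J}, \qquad \mathbf{b} = -\mathbf{J}^{\rm T}\mathbf{W}\mathbf{r},
\end{displaymath}
and then applies the update $\mathbf{x} \leftarrow \mathbf{x} + \delta\mathbf{x}$. The cost of this step grows with the dimension of $\mathbf{x}$, so representing many coplanar points by a few shared plane parameters directly reduces it, consistent with the timings reported above.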
\begin{table}[htbp]
\begin{center}
\caption{Mean execution time (Unit: millisecond) of pure VO, VIO, and our proposed PVI-DSO running on the sequence V11.}
\label{tab:running time}
\centering
\setlength{\tabcolsep}{1.3mm}{
\renewcommand{\arraystretch}{1.2}
\begin{threeparttable}
\begin{tabular}{cccc}
\hline
\multicolumn{1}{c}{Module} & \multicolumn{1}{c}{VO} & \multicolumn{1}{c}{VIO} & \multicolumn{1}{c}{PVI-DSO} \\ \hline
\texttt{Plane Detection} & 0 & 0 & 0.72 \\
\texttt{Mesh Creation} & 0 & 0 & 1.08 \\
\texttt{Cost Function Construction\tnote{1}} & 16.44 & 16.99 & 17.74 \\
\texttt{\makecell[c]{Problem Solving \& Var. Updating\tnote{1}}} & 2.39 & 2.41 & 2.16 \\
\texttt{Marginalization}& 0.77 & 0.79 & 0.72 \\ \hline
\texttt{Total} & 31.97 & 32.54 & 34.58 \\ \hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] The optimization contains cost function construction, problem solving, and state variables updating.
\end{tablenotes}
\end{threeparttable}
}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present PVI-DSO, a novel direct-method VIO system that leverages planar regularities. The VIO based on the direct method constructs a denser map that contains rich geometric features. The planar regularities in the map are extracted by generating a 3D mesh, and a novel parameterization for coplanar points is used to introduce the plane constraints into the VIO system. Meanwhile, the plane-distance cost used in the optimization converts the coplanar constraints into a plane prior. With these methods, introducing the planar information into the system does not add much computational burden. The experiments show that the trajectory accuracy of our approach is better than that of state-of-the-art visual-inertial odometry. In the future, we will fuse line and plane geometric information into the direct-method VIO to further improve the accuracy and robustness of positioning.
\section*{ACKNOWLEDGMENT}
We would like to thank Dr. Yijia He for the helpful discussion. This work was supported by the Foundation for Innovative Research Groups at the National Natural Science Foundation of China (Grant No. 41721003) and the fellowship of China National Postdoctoral Program for Innovative Talents (Grant No. BX20200251).
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Giant molecular clouds (GMCs) are the primary reservoirs of molecular
gas within spiral galaxies. Star formation is tightly correlated
with the molecular column density within spiral galaxies \citep{wong02},
and is therefore controlled by the formation and evolution of these
giant clouds. In this and a subsequent paper we develop the theory of
GMC dynamics and present semi-analytical models for GMC evolution. We
rely on simplifying assumptions about the structure of the cloud and the
properties of the surrounding interstellar medium in order to focus on
clouds' global energy budget and dynamical state.
Observations show that stars form much more slowly than the free-fall
rate in GMCs \citep{zuckerman74,rownd99,wong02,gao04a,wu05}, and
any successful GMC model must
explain why this should be. It is now widely held that collapse is
inhibited primarily by intensely supersonic motions
\citep{vazquezsemadeni03,maclow04}, rather than magnetic fields
\citep{mouschovias76,shu87,mckee89}. (Star formation is strongly
suppressed in low-extinction regions of molecular clouds, beyond what one would expect if the star formation rate were simply following the column density to some power, suggesting
that magnetic fields may play a secondary role --
\citealt{onishi98,onishi99,onishi02,johnstone04} -- though see
\citealt{hatchell05} who find that low column densities reduce but do not completely prevent star formation.) Simulations and analytic
theory indicate that the observed level of turbulence in GMCs is
sufficient to produce the observed rate of star formation
\citep{krumholz05c}.
However, undriven
supersonic turbulence decays via radiation from isothermal shocks
with an $e$-folding time of roughly one cloud crossing time
\citep{maclow98, stone98, maclow99, padoan99}, so undriven turbulence
alone is not sufficient to prevent global collapse. Instead, GMCs must
either be destroyed before their turbulent motions decay, or the
turbulence must be continually driven. The mode of destruction is
intimately related to the clouds' dynamical state. Unless a cloud is
destroyed all at once, any internal agent of destruction is also an
internal source for turbulent energy -- and one strong enough to
balance turbulent decay \citep{matzner02}. Destruction from within
therefore favors models that achieve energetic and dynamical equilibrium
\citep[e.g.][]{mckee89}, if only briefly. The alternative -- that
clouds are disrupted entirely by outside agents
\citep[e.g.][]{bonnell06b} -- requires most of the cloud
mass to remain gravitationally unbound, as bound regions rapidly
collapse to higher density and become impervious to external
forces. This is very difficult to reconcile with observational
estimates of GMC lifetimes and ratios of kinetic to
potential energy. We critique the hypothesis of unbound clouds in
greater detail in \S~\ref{discussion}.
In the following sections, we investigate the properties of molecular
clouds both stirred and destroyed by HII regions within the cloud
volume. The models we present here are improved in several ways relative
to prior work:
\begin{itemize}
\item[-] Rather than enforcing strict mechanical or energetic
equilibrium, we solve for the time evolution of a cloud's radius and
turbulent velocity dispersion according to the time-dependent virial
and energy equations. In an early discussion of this problem, \citet{mckee89}
allowed for time dependence in the energy equation, but assumed virial equilibrium. \citet{matzner99a} and \citet{matzner99b} followed the evolution of GMCs using time dependent virial and energy equations, but neglected several terms in these equations that we model here. While
our approach is still not a full numerical solution of the equations of
gravity, radiation, and magnetohydrodynamics, this approach enables us to study
cloud dynamics without ignoring the effects of feedback on GMC
evolution, as most numerical simulations to date have done
\citep[e.g.][]{clark05}.
\item[-] We account self-consistently for the recoil of cloud matter from
the sites of mass ejection. In addition to driving turbulence,
inward recoil confines the cloud. Recoil confinement is equivalent to
an additional (and variable) external pressure, which becomes
dynamically significant when cloud destruction is
rapid. Corresponding terms appear in the virial
(\S~\ref{equationofmotion}, Appendix \ref{virialderivation}) and energy
(\S \ref{energyeqn}, Appendix \ref{energyderivation}) equations.
\item[-] Our model for HII regions (\S \ref{hiiregions}) accounts for the
scale-dependence of density and velocity structures within GMCs.
Although this does not fully account for the three-dimensional
structure of the turbulent cloud medium, it is a significant
improvement over the uniform cloud model employed by
\cite{whitworth79}, \cite{williams97}, and \cite{matzner02}.
\item[-] We apply the \cite{krumholz05c} prescription for the star
formation rate within turbulent clouds. This formula accurately
predicts the star formation rate on a variety of scales, from
starburst galaxies to the dense precursors of individual star
clusters \citep{tan06a, krumholz06c}. We use
it to govern the birth rate of ionizing associations
(\S~\ref{starformation}).
\item[-] Our dynamical simulations (\S~\ref{method}) track the formation
and evolution of many individual HII regions. This approach
accounts for the finite lifetime of ionizing stars and the time
delay associated with the deceleration of shells driven by HII
regions, neither of which is negligible compared to GMC dynamical
times.
\end{itemize}
We analyze the results of our models in \S~\ref{results},
discuss the implications of our findings in
\S~\ref{discussion}, and summarize our conclusions in
\S~\ref{conclusions}. In a future work (Matzner, Krumholz, \& McKee,
2006, in preparation, hereafter Paper II) we apply this model to the
problem of GMC formation and evolution in the galactic environment,
including the effects of spiral density waves.
We do make several approximations in our work
(\S~\ref{limitations}). We assume that the clouds are spherical and
that they expand or contract homologously. We assume that the clouds
are sufficiently massive that the energy injection is dominated by HII
regions, not protostellar outflows, which based on the models of
\citet{matzner02} should be true for clouds of mass $\sim 10^5$
$M_{\odot}$ or more. Such clouds contain most of the molecular mass in the
Galaxy. We neglect the
possibility that the column density of the cloud must exceed a
threshold in order for stars to form \citep[e.g][]{mckee89}. We
neglect energy injection by HII regions after they reach pressure
equilibrium with the surrounding medium. Finally, we neglect possible
time dependent effects due to the ambient medium of the GMC: no mass
can be added to the cloud, and the ambient pressure remains constant.
Despite these approximations, the models we present in this Paper illustrate the
degree to which GMC properties can be understood in terms of internal
dynamics.
Obviously, real GMCs are not homologously expanding or contracting spheres with smooth density distributions, so the approximations we make to render the problem analytically tractable are quite limiting. For this reason one might be tempted to give up on analytic treatment altogether and simply attempt to solve the problem numerically. Unfortunately, full numerical simulation of a giant molecular cloud, including the formation of multiple star clusters and the effects of their feedback, is not feasible with current codes and computers. Instead, one is forced to make numerous approximations regardless of whether one takes a numerical or analytic approach. Many numerical simulations of GMC evolution simply ignore feedback altogether \citep[e.g.][]{clark05}, include it only from a single source and/or focus on size scales much smaller than an entire GMC \citep[e.g.][]{dale05,li06b}, or focus on the galactic scale and lack the resolution to say anything about individual GMCs \citep[e.g.][]{slyz05, tasker06}. For this reason, an analytic approach that allows us to include feedback provides a valuable complement to numerical results, and points out areas for future simulation in which new effects might be discovered by a more thorough treatment of the physics.
\section{Evolution Equations}
\label{basiceqn}
We are interested in the global evolution and energy balance in GMCs,
so we construct a simple model in which we neglect details of cloud
structure. We consider a cloud with density profile $\rho = \rho_e
(r/R_{\rm cl})^{-k_{\rho}}$, where $R_{\rm cl}$ is the radius of the cloud edge, and
$\rho_e$ is the edge density. As we discuss below, we take $k_{\rho}=1$
as typical of GMCs. We approximate that the cloud evolves
homologously, so that $k_{\rho}$ is constant. However, the cloud can
expand or contract and can lose mass (via evaporation of gas by HII
regions), so $\rho_e$ and $R_{\rm cl}$ both vary in time. The cloud is
embedded in an ambient medium of negligible density and constant
pressure $P_{\rm amb}$. We model evaporation of gas from the cloud as a
wind, into which cloud material is injected at a rate $\dot{\rho}$ (which
we take to be negative). Gas that is injected into the wind travels
radially outward with ``kick'' velocity ${\mathbf{v}_{\rm ej}'}$ relative to the
radial velocity of the cloud at that radius. Homology requires that
the mass loss rate follow the existing density profile, so
\begin{equation}
\dot{\rho} = \frac{\dot{M}_{\rm cl}}{M_{\rm cl}} \rho,
\end{equation}
where $M_{\rm cl}$ is the total mass of the cloud. We assume that the wind is
low density and escapes from the vicinity of the cloud quickly, so
that we can neglect its gravitational interaction with the cloud. As
we discuss in more detail in \S~\ref{hiiregions}, this is a reasonable
model for mass evaporating from an HII region.
We neglect the possibility that turbulent motions within the cloud lead to a significant loss of mass. This seems justified, since in a GMC with a virial ratio of unity, roughly the value for observed GMCs, the 3D turbulent velocity dispersion is smaller than the escape speed from the GMC surface by a factor of $\sim 2$. Since the distribution of velocities in a supersonically turbulent medium cuts off exponentially above the turbulent velocity dispersion \citep{krumholz06a}, there is a negligible amount of mass moving rapidly enough to escape. We also neglect the possibility that a GMC might gain mass during its evolution, due to continuing infall. This assertion is more problematic, and we discuss it in more detail in \S~\ref{limitations}.
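To make the escape-speed estimate explicit: for $\alpha_{\rm vir} = 5\sigma_{\rm cl}^2 R_{\rm cl}/(G M_{\rm cl}) = 1$ we have $GM_{\rm cl}/R_{\rm cl} = 5\sigma_{\rm cl}^2$, so
\begin{displaymath}
v_{\rm esc} = \left(\frac{2 G M_{\rm cl}}{R_{\rm cl}}\right)^{1/2} = \sqrt{10}\,\sigma_{\rm cl} \approx 3.2\,\sigma_{\rm cl},
\end{displaymath}
while the 3D turbulent velocity dispersion is $\sqrt{3}\,\sigma_{\rm cl} \approx 1.7\,\sigma_{\rm cl}$, smaller by the factor of $\sim 2$ quoted above.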
Within the limitations of these assumptions, we derive the
Eulerian Virial Theorem (EVT) and equation of energy conservation in
Appendices \ref{virialderivation} and \ref{energyderivation}. We then
use these to construct evolution equations for the cloud.
\subsection{Equation of Motion}
\label{equationofmotion}
We derive the equation of motion from the EVT for an evaporating
homologously-moving cloud,
\begin{eqnarray}
\frac12 \ddot{I}_{\rm cl} & = &
2(\mathcal{T}-\mathcal{T}_0) +
\mathcal{B}+\mathcal{W}-\frac12
\frac{d}{dt}\int_{S_{\rm vir}} (\rho\mathbf{v} r^2)\cdot d\mathbf{S}
\nonumber\\
& & {} +
2 a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl} \dot{R}_{\rm cl} + \frac12 a_{\rm I} \ddot{M}_{\rm cl} R_{\rm cl}^2
\nonumber\\
& & {} +
\frac{3-k_{\rho}}{4-k_{\rho}} \dot{M}_{\rm cl} R_{\rm cl} v_{\rm ej}'.
\label{EVTtext}
\end{eqnarray}
The proof of this theorem is in Appendix \ref{virialderivation}.
In this equation $I_{\rm cl}$ is the cloud moment of inertia, $\mathcal{T}$ is
the total kinetic and thermal energy, $\mathcal{T}_0$ is the energy
associated with the confining external pressure, $\mathcal{B}$ and $\mathcal{W}$
are the magnetic and gravitational potential energies, and the surface
integral represents the rate of change of the flux of inertia across
the surface $S_{\rm vir}$ that bounds the volume to which we apply the virial
theorem. These are all terms that appear in the EVT for a
cloud without a wind, and their formal definitions are given in the
Appendix. The three additional terms represent the second derivative
of cloud inertia caused by mass loss through the wind (the first two
extra terms) and the rate at which recoil from the process of
launching the wind injects momentum into the cloud (the final
additional term).
We now evaluate each of these terms in the context of our model.
The moment of inertia is
$I_{\rm cl}=a_{\rm I} M_{\rm cl} R_{\rm cl}^2$, where $a_{\rm I}\equiv (3-k_{\rho})/(5-k_{\rho})$,
so its second derivative is
\begin{eqnarray}
\frac{1}{2}\ddot{I}_{\rm cl} & = &
a_{\rm I} M_{\rm cl} \dot{R}_{\rm cl}^2 + a_{\rm I} M_{\rm cl} R_{\rm cl} \ddot{R}_{\rm cl}
\nonumber\\
& & \: {} +
2 a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl} \dot{R}_{\rm cl} + \frac{1}{2}
a_{\rm I} \ddot{M}_{\rm cl} R_{\rm cl}^2.
\end{eqnarray}
Next consider the kinetic term $\mathcal{T}$, which we evaluate by
decomposing the velocity into large-scale homologous and
fluctuating turbulent components (equation \ref{vdecomp}). This gives
\begin{equation}
\mathcal{T} =\frac{3}{2} M_{\rm cl} c_{\rm cl}^2+\frac{1}{2} a_{\rm I} M_{\rm cl}
\dot{R}_{\rm cl}^2 + \mathcal{T}_{\rm turb} + 2\pi P_{\rm amb} (R_{\rm vir}^3-R_{\rm cl}^3),
\end{equation}
where $c_{\rm cl}$ is the sound speed in the cloud (assumed constant),
$\mathcal{T}_{\rm turb}$ is the term for turbulent motions, and the last
term comes from the constant ambient pressure $P_{\rm amb}$ outside the
cloud. We return to $\mathcal{T}_{\rm turb}$ below. Note that our assumption
of homologous motion
implicitly neglects the possibility of significant rotational
motions. Observed GMCs have negligible kinetic energies in overall
rotation compared to turbulent motions or gravitational potential
energy.
We use the same strategy for the gravitational and magnetic terms as
for the kinetic term, dividing them into steady and fluctuating parts.
The non-turbulent gravitational part is
\begin{equation}
\mathcal{W}_{\rm non-turb} = -\frac{3}{5} a \frac{G M_{\rm cl}^2}{R_{\rm cl}},
\end{equation}
where
\begin{equation}
a = \frac{15-5 k_{\rho}}{15-6 k_{\rho}}.
\end{equation}
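For our fiducial value $k_{\rho}=1$, these structural constants evaluate to
\begin{displaymath}
a_{\rm I} = \frac{3-k_{\rho}}{5-k_{\rho}} = \frac{1}{2},
\qquad
a = \frac{15-5k_{\rho}}{15-6k_{\rho}} = \frac{10}{9} \approx 1.11,
\end{displaymath}
so the gravitational term is only mildly enhanced relative to the uniform-density case $k_{\rho}=0$, for which $a=1$.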
In principle we should include a component of potential energy due to
stars, but all observed molecular clouds have gas
masses that greatly exceed their stellar masses. For this reason, we
may neglect the stellar mass.
For the non-turbulent magnetic component, we follow \citet{mckee93}. Let
$\Phi$ be the total magnetic flux threading the cloud, so that the
mean field within the cloud is $\overline{B}=\Phi/(\pi
R_{\rm cl}^2)$. From the form of the magnetic energy term (equation
\ref{calbdef}), it is clear that the non-turbulent component of this
term scales as $\mathcal{B}_{\rm non-turb}
\propto \overline{B}^2 R_{\rm cl}^3$. We therefore define the constant $b$
such that
\begin{equation}
\mathcal{B}_{\rm non-turb} = \frac{b}{3} \overline{B}^2 R_{\rm cl}^3 =
\frac{b}{3 \pi^2} \left(\frac{\Phi^2}{R_{\rm cl}}\right).
\end{equation}
The exact value of $b$ depends on the topology of the magnetic field
and on the background field $B_0$, but it is generally of order
unity, with $b=0.3$ as a typical value \citep{mckee93}. We now define
the magnetic critical mass $M_{\Phi}$ by
\begin{equation}
M_{\Phi}^2 = \left(\frac{5b}{9\pi^2 a}\right) \frac{\Phi^2}{G},
\end{equation}
so that we have
\begin{equation}
\mathcal{B}_{\rm non-turb} = \frac{3}{5} a \frac{G M_{\Phi}^2}{R_{\rm cl}}.
\end{equation}
We define the magnetic support parameter by
\begin{equation}
\eta_{\rm B} = \frac{M_{\Phi}}{M_{\rm cl}}.
\end{equation}
With this definition, we can combine the non-turbulent magnetic and
gravitational terms to find
\begin{equation}
\mathcal{W}_{\rm non-turb}+\mathcal{B}_{\rm non-turb} =
-\frac{3}{5} a (1-\eta_{\rm B}^2) \frac{G M_{\rm cl}^2}{R_{\rm cl}}.
\end{equation}
Now consider the turbulent components. First, we can neglect the
turbulent component of the gravitational term because most
sub-regions of a molecular cloud are not self-gravitating
\citep{krumholz05c}, and therefore have negligible potential energy in
comparison to their kinetic or magnetic energies. We can therefore set
$\mathcal{W}=\mathcal{W}_{\rm non-turb}$. For the magnetic and
kinetic turbulent components, \citet{mckee92} argue for equipartition
of kinetic and magnetic energy. \citet{stone98} find in simulations of
low plasma-$\beta$ turbulence that magnetic energy is slightly
sub-equipartition, $\mathcal{B}_{\rm turb} \approx 0.6\, \mathcal{T}_{\rm turb}$,
which \citet{mckee03} argue can be understood as the kinetic and
magnetic energies reaching equipartition for motions transverse to the
field, but not for motions along the field. We adopt the ratio of
magnetic to kinetic energy found by \citet{stone98}, so the combined
turbulent kinetic and magnetic energies in the virial theorem are
\begin{equation}
2\mathcal{T}_{\rm turb} + \mathcal{B}_{\rm turb} \approx
2.6\, \mathcal{T}_{\rm turb} = 3.9\, M_{\rm cl} \sigma_{\rm cl}^2,
\end{equation}
where
$\sigma_{\rm cl}=\left\langle (v_z-v_{{\rm cl},z})^2\right\rangle^{1/2}_{\rho}$
is the
one-dimensional mass-weighted turbulent velocity dispersion of the
gas in the cloud.
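The numerical coefficient follows directly from the adopted energy ratio: with $\mathcal{T}_{\rm turb} = \frac{3}{2}M_{\rm cl}\sigma_{\rm cl}^2$ and $\mathcal{B}_{\rm turb} \approx 0.6\,\mathcal{T}_{\rm turb}$,
\begin{displaymath}
2\mathcal{T}_{\rm turb} + \mathcal{B}_{\rm turb} \approx 2.6 \times \frac{3}{2}\, M_{\rm cl}\sigma_{\rm cl}^2 = 3.9\, M_{\rm cl}\sigma_{\rm cl}^2.
\end{displaymath}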
Finally, we come to the surface terms, $\mathcal{T}_0$ and $(d/dt)\int_{S_{\rm vir}}
(\rho\mathbf{v}r^2)\cdot d\mathbf{S}$. We can make the latter term
zero by choosing our virial surface to be well outside the cloud so
that the density of cloud material on the surface is negligible. However,
the pressure outside the cloud $P_{\rm amb}$ is non-zero, so
\begin{equation}
\mathcal{T}_0 = 2\pi P_{\rm amb} R_{\rm vir}^3.
\end{equation}
Substituting into the EVT (\ref{EVTtext}), we arrive at an equation of
motion for the cloud:
\begin{eqnarray}
a_{\rm I} \ddot{R}_{\rm cl} & = &
3.9\frac{\sigma_{\rm cl}^2}{R_{\rm cl}} + 3\frac{c_{\rm cl}^2}{R_{\rm cl}}
- \frac{3}{5} (1-\eta_{\rm B}^2) a \frac{G M_{\rm cl}}{R_{\rm cl}^2}
\nonumber \\
& & \: {} -
4\pi P_{\rm amb}
\frac{R_{\rm cl}^2}{M_{\rm cl}}
+ \left(\frac{3-k_{\rho}}{4-k_{\rho}}\right) \frac{\dot{M}_{\rm cl}}{M_{\rm cl}} v'_{\rm ej}.
\label{virialdim}
\end{eqnarray}
This equation is intuitively easy to understand. The left-hand side
represents the acceleration of the cloud edge, which is equated with
the force per unit mass due to internal turbulence and pressure (the
first two terms), gravity and magnetic fields (the third term),
external pressure (the fourth term), and recoil from the evaporating
gas (the final term).
For convenience we wish to non-dimensionalize this equation. Let
$M_{\rm cl-0}$, $R_{\rm cl-0}$, and $\sigma_{\rm cl-0}$ be the initial mass, radius, and
velocity dispersion of the cloud. We define the dimensionless variables
$M=M_{\rm cl}/M_{\rm cl-0}$, $R=R_{\rm cl}/R_{\rm cl-0}$, $\sigma=\sigma_{\rm cl}/\sigma_{\rm cl-0}$, and
$\tau=t/t_{\rm cr-0}$, where $t_{\rm cr-0}=R_{\rm cl-0}/\sigma_{\rm cl-0}$ is the
crossing time of the initial cloud. In these variables
(\ref{virialdim}) becomes
\begin{equation}
\label{eqofmotionfinal}
R'' =
\frac{3.9\, \sigma^2 + 3 \mathcal{M}_0^{-2}}{a_{\rm I} R}
-\eta_G \frac{M}{R^2} - \eta_P \frac{R^2}{M} + \eta_R \frac{M'}{M}
\end{equation}
where the primes indicate differentiation with respect to $\tau$,
\begin{eqnarray}
\label{etageqn}
\eta_G & \equiv &
\frac{3 a (1 - \eta_{\rm B}^2)}{a_{\rm I} \alpha_{\rm vir-0}}, \\
\label{etapdefn}
\eta_P & \equiv &
\frac{4 \pi R_{\rm cl-0}^3 P_{\rm amb}}{a_{\rm I} M_{\rm cl-0} \sigma_{\rm cl-0}^2}, \\
\label{etardefn}
\eta_R & \equiv &
\left(\frac{3-k_{\rho}}{4-k_{\rho}}\right)
\frac{v'_{\rm ej}}{a_{\rm I} \sigma_{\rm cl-0}}
\end{eqnarray}
and we have defined the initial Mach number
\begin{equation}
\mathcal{M}_0 \equiv \frac{\sigma_{\rm cl-0}}{c_{\rm cl}}
\end{equation}
and non-thermal virial parameter \citep{bertoldi92}
\begin{equation}
\alpha_{\rm vir-0} \equiv \frac{5\sigma_{\rm cl-0}^2 R_{\rm cl-0}}{G M_{\rm cl-0}}.
\end{equation}
If the cloud's initial state is in equilibrium and it is not losing mass,
so $R'(0)=M'(0)=0$,
then the ambient pressure $P_{\rm amb}$ must be such that
\begin{equation}
\label{etapeqn}
\eta_P = \frac{3.9+3\mathcal{M}_0^{-2}}{a_{\rm I}}-\eta_G.
\end{equation}
Note that this generally
implies an ambient pressure higher than the mean in the ISM. We
consider it realistic, however, because GMCs form in relatively
overpressured regions, and because we have not included the weight of
an atomic layer overlying the cloud. Once mass loss commences, there
will be an additional recoil pressure.
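As a purely illustrative evaluation (the parameter values here are chosen for arithmetic convenience, not as fiducial choices): for $k_{\rho}=1$ ($a_{\rm I}=1/2$, $a=10/9$), $\alpha_{\rm vir-0}=1$, $\eta_{\rm B}^2=1/2$, and $\mathcal{M}_0\gg 1$, equations (\ref{etageqn}) and (\ref{etapeqn}) give
\begin{displaymath}
\eta_G = \frac{3a(1-\eta_{\rm B}^2)}{a_{\rm I}\,\alpha_{\rm vir-0}} = \frac{10}{3} \approx 3.3,
\qquad
\eta_P \approx \frac{3.9}{a_{\rm I}} - \eta_G \approx 7.8 - 3.3 = 4.5,
\end{displaymath}
i.e. for these values the surface pressure provides somewhat more than half of the confinement of the initial equilibrium cloud.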
\subsection{Equation of Energy Evolution}
\label{energyeqn}
To derive the evolution equation for the cloud energy, we begin with
the general energy conservation equation for an evaporating homologous
cloud, which we derive in Appendix \ref{energyderivation} and for
convenience repeat here:
\begin{eqnarray}
\frac{d\mathcal{E}}{dt} & = & \frac{\dot{M}_{\rm cl}}{M_{\rm cl}} \left[\mathcal{E} +
(1-\eta_{\rm B}^2) \mathcal{W}\right] - 4\pi P_{\rm amb} R_{\rm cl}^2\dot{R}_{\rm cl}
\nonumber \\
& & \: {} +
\left(\frac{3-k_{\rho}}{4-k_{\rho}}\right) \dot{M}_{\rm cl} \dot{R}_{\rm cl} v'_{\rm ej} +
\mathcal{G}_{\rm cl} - \mathcal{L}_{\rm cl}.
\label{energyeqn1text}
\end{eqnarray}
Here $\mathcal{E}$ is the total cloud energy, and $\mathcal{G}_{\rm cl}$ and
$\mathcal{L}_{\rm cl}$ are the rates of radiative energy gain and loss
integrated over the entire cloud. This equation is easy to understand
intuitively. The term $(\dot{M}_{\rm cl}/M_{\rm cl}) \mathcal{E}$ is simply the mass loss rate
times the energy per unit mass in the cloud. The term
$(\dot{M}_{\rm cl}/M_{\rm cl}) (1-\eta_{\rm B}^2)\mathcal{W}$ is the rate at which mass loss reduces
the cloud energy via reduction of the gravitational and magnetic
fields, both of which are proportional to the mass. The next two terms
represent the rate at which the external pressure and the recoil force
from launching the wind do work on the cloud. Finally, the last two
terms are simply the rate of radiative gains and losses.
Using the same arguments as in \S~\ref{equationofmotion}, we may write
the total energy in the cloud as
\begin{equation}
\label{evircl}
\mathcal{E} = \frac{1}{2} a_{\rm I} M_{\rm cl} \dot{R}_{\rm cl}^2 +
2.4\, M_{\rm cl} \sigma_{\rm cl}^2
+\frac{3}{2} M_{\rm cl} c_{\rm cl}^2 - \frac{3}{5} a (1-\eta_{\rm B}^2) \frac{G
M_{\rm cl}^2}{R_{\rm cl}}.
\end{equation}
Note that the factor of $2.4$ in the $M_{\rm cl} \sigma_{\rm cl}^2$ term comes from
taking $\mathcal{T}_{\rm turb}+\mathcal{B}_{\rm turb} \approx 1.6\, \mathcal{T}_{\rm
turb}$, and the $3/2$ in front of the $M_{\rm cl} c_{\rm cl}^2$ term comes from the
assumption that the ratio of specific heats for the cloud is
$\gamma_{\rm cl}=5/3$. One might expect $7/5$ instead, since the cloud is
molecular and therefore diatomic. However, the lowest rotational or
vibrational excitations of H$_2$ have excitation temperatures of
several hundred K. Since the cloud is far colder than this, molecules
never have enough energy to access their rotational and vibrational
degrees of freedom, and the gas acts as if it were monatomic. The time
derivative of this is
\begin{eqnarray}
\frac{d\mathcal{E}}{dt} & = &
\frac{1}{2}a_{\rm I} \dot{M}_{\rm cl} \dot{R}_{\rm cl}^2 + a_{\rm I} M_{\rm cl} \dot{R}_{\rm cl} \ddot{R}_{\rm cl} +
2.4\, \dot{M}_{\rm cl} \sigma_{\rm cl}^2
\nonumber \\
& &
{} + 4.8\, M_{\rm cl} \sigma_{\rm cl} \dot{\sigma}_{\rm cl}
+ \frac{3}{2} \dot{M}_{\rm cl} c_{\rm cl}^2
- \frac{6}{5} a (1-\eta_{\rm B}^2) \frac{G M_{\rm cl}
\dot{M}_{\rm cl}}{R_{\rm cl}}
\nonumber \\
& &
{} + \frac{3}{5} a (1-\eta_{\rm B}^2) \frac{G M_{\rm cl}^2 \dot{R}_{\rm cl}}{R_{\rm cl}^2}.
\label{dedtvircl}
\end{eqnarray}
Substituting $\mathcal{E}$ and $d\mathcal{E}/dt$ into the energy equation
(\ref{energyeqn1text}), we find
\begin{eqnarray}
\lefteqn{a_{\rm I} M_{\rm cl} \dot{R}_{\rm cl} \ddot{R}_{\rm cl} + 4.8M_{\rm cl} \sigma_{\rm cl} \dot{\sigma}_{\rm cl}}
\qquad
\nonumber \\
\lefteqn{
{} +
\frac{3}{5} a (1-\eta_{\rm B}^2) \frac{G M_{\rm cl}^2}{R_{\rm cl}^2} \dot{R}_{\rm cl}
=} \qquad
\nonumber \\
& &
-4 \pi P_{\rm amb} R_{\rm cl}^2 \dot{R}_{\rm cl} +
\left(\frac{3-k_{\rho}}{4-k_{\rho}}\right) \dot{M}_{\rm cl} \dot{R}_{\rm cl} v'_{\rm ej}
\nonumber \\
& & \; {}
+
\mathcal{G}_{\rm cl} - \mathcal{L}_{\rm cl}.
\end{eqnarray}
We may regard this as an evolution equation for $\sigma_{\rm cl}$, which makes
intuitive sense: the overall expansion and contraction of the cloud is
dictated by the equation of motion, and the thermal energy per unit
mass is fixed,
so the turbulence acts as the energy reservoir, increasing or
decreasing as the cloud gains or loses energy. Re-arranging to solve
for $\dot{\sigma}_{\rm cl}$ and non-dimensionalizing as we have done with the equation
of motion gives
\begin{equation}
\label{energyeqfinal}
\frac{4.8}{a_{\rm I}} \sigma' = -\frac{R' R''}{\sigma}
- \eta_G \frac{M R'}{R^2 \sigma}
- \eta_P \frac{R^2 R'}{M\sigma}
+ \eta_R \frac{M' R'}{M \sigma}
+ \frac{\mathcal{G} - \mathcal{L}}{a_{\rm I} M \sigma},
\end{equation}
where $\mathcal{G}$ and $\mathcal{L}$ are the dimensionless rates of radiative
energy gain and loss, defined by
\begin{equation}
\mathcal{G} - \mathcal{L} = \left(\frac{R_{\rm cl-0}}{M_{\rm cl-0}\sigma_{\rm cl-0}^3}\right)
(\mathcal{G}_{\rm cl} - \mathcal{L}_{\rm cl}).
\end{equation}
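Although the full models described below include star formation and feedback from individual HII regions, the structure of the calculation can be illustrated with a minimal numerical sketch. The following Python fragment (illustrative only; it takes $\mathcal{M}_0\gg 1$, no mass loss, and no energy sources, retains only the turbulent dissipation term derived in \S~\ref{turbdecay} below, and uses placeholder parameter values rather than fiducial choices) integrates equations (\ref{eqofmotionfinal}) and (\ref{energyeqfinal}):
\begin{verbatim}
# Minimal illustrative sketch (not our production code): integrate
# the dimensionless equations of motion and energy with M' = 0,
# G = 0, and L given by the turbulent decay rate. All parameter
# values are placeholders, not fiducial choices.
from scipy.integrate import solve_ivp

aI = 0.5                  # a_I for k_rho = 1
etaG = 10.0/3.0           # e.g. alpha_vir-0 = 1, eta_B^2 = 1/2
etaP = 3.9/aI - etaG      # equilibrium start, M_0 >> 1
eta_v, phi_in = 1.2, 1.0  # dissipation coefficients

def rhs(tau, y):
    R, Rp, sig, M = y
    # dimensionless equation of motion (cold limit, M' = 0)
    Rpp = 3.9*sig**2/(aI*R) - etaG*M/R**2 - etaP*R**2/M
    # turbulent dissipation, L = (eta_v/phi_in) M sig^3 / R
    L = (eta_v/phi_in)*M*sig**3/R
    # dimensionless energy equation solved for sigma'
    sigp = (aI/4.8)*(-Rp*Rpp/sig - etaG*M*Rp/(R**2*sig)
                     - etaP*R**2*Rp/(M*sig) - L/(aI*M*sig))
    return [Rp, Rpp, sigp, 0.0]   # M' = 0 in this sketch

sol = solve_ivp(rhs, [0.0, 2.0], [1.0, 0.0, 1.0, 1.0])
print(sol.y[:, -1])  # R, R', sigma, M after two crossing times
\end{verbatim}
Starting from the equilibrium defined by equation (\ref{etapeqn}), the only evolution in this stripped-down version is the decay of $\sigma$ and the consequent contraction of the cloud; the models of \S~\ref{method} add the source terms that can offset this decay.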
\section{Energy Sources and Sinks}
\label{sources}
In this section we evaluate the rates of radiative energy gain $\mathcal{G}$
and loss $\mathcal{L}$, and the characteristic launch speed of evaporating gas
$v'_{\rm ej}$. Together with the equation of motion
(\ref{eqofmotionfinal}) and the energy equation (\ref{energyeqfinal}),
and the star formation rate (\S~\ref{starformation}),
this will completely specify the evolution of our model clouds.
\subsection{Decay of Turbulence via Isothermal Shocks}
\label{turbdecay}
GMCs are approximately isothermal because their radiative time scales
are much shorter than their mechanical time scales. As a result,
supersonic motions within the cloud generate radiative shocks that
remove energy from the cloud. The
problem of the decay of turbulent motions by supersonic isothermal
shocks in both hydrodynamic and magnetohydrodynamic media has been
studied extensively by numerical simulation \citep[e.g.][]{maclow98,
stone98, maclow99, padoan99}. \citet{stone98} find that the
dissipation time scale of the turbulent energy is $t_{\rm dis} \equiv
E/\dot{E} = 0.83\, \lambda_{\rm in}/v_{\rm rms} = 0.48 \lambda_{\rm
in}/\sigma$, where $\lambda_{\rm in}$ is the length scale on which the
energy is injected.
In reality, HII
regions coming from associations of various sizes, winds, and
gravitational contraction of the cloud will all contribute to
turbulent motions and inject energy on different length
scales. However, we can estimate an effective length scale through a
combination of observational and theoretical
considerations. Observationally, turbulence in nearby GMCs appears to
be driven on scales comparable to the cloud scale or larger
\citep{basu01, heyer04}, and theoretical estimates of the effective
energy injection scale of HII regions suggest that this is also near
the cloud scale \citep{matzner02}. The longest wavelength mode that
a cloud of radius $R_{\rm cl}$ can support is $\sim 4 R_{\rm cl}$, where the factor
of four arises because the largest turbulent mode corresponds to
overall expansion or contraction of the cloud, in which diametrically
opposed points are moving in opposite directions. Motion in
opposite directions corresponds to the points being half a wavelength
apart, giving a total wavelength of twice the cloud diameter
\citep{matzner02}. Thus, we take the effective injection scale to be
$\lambda_{\rm in} = 4\phi_{\rm in} R_{\rm cl}$, where $\phi_{\rm in}\leq 1$,
and we take $\phi_{\rm in}=1$ as a fiducial value based on
observations and theory. The (dimensionless) rate at which energy is
radiated away due to decaying turbulence is
\begin{equation}\label{Lambda_diss}
\mathcal{L} = \frac{\eta_{\rm v}}{\phi_{\rm in}} \frac{M \sigma^3}{R},
\end{equation}
where Stone et al.'s simulations give $\eta_{\rm v}=1.2$. Note that, in
deriving this factor, we have used our result that the energy in the
turbulence, including magnetic and kinetic contributions, is $2.4\,
M_{\rm cl} \sigma_{\rm cl}^2$.
It is also worth noting the possibility that the measured energy loss
rates are too high. \citet{cho03} argue that Alfv\'en waves cascade
and decay anisotropically, and that this anisotropy can reduce the
decay rate. However, Cho \& Lazarian argue that simulations to date
have not captured this effect because they use an isotropic
driving field that is unrealistic. \citet{sugimoto04}
simulate filamentary molecular clouds, and find that Alfv\'en waves of
certain polarizations decay more slowly than simulations have
found. Even if neither of these effects apply, there are differences
in the rate of decay in different simulations depending on how the
turbulence is driven. The simulations of \citet{maclow99} give
slightly lower dissipation rates, probably because in those
simulations the turbulence is forced with a driving field that is
constant in time, while in those of \citet{stone98} the driving field
is determined randomly at each time step. While we feel that the
Stone et al. approach is somewhat more realistic, this is by no
means certain.
\subsection{HII Regions}
\label{hiiregions}
HII regions are the dominant source of energy injection into GMCs
from within \citep{matzner02}. We consider the effects of the
HII region from a single association here, and
extend our results to a population of associations in
\S~\ref{starformation}.
\subsubsection{Evolution of Individual HII Regions}
\label{HIIevol}
We first describe the evolution of an HII region embedded in a
molecular cloud, modifying the analysis by \cite{matzner02} by
allowing the mean ambient density and velocity dispersion to vary with
radius $r$ away from the formation site of an association, as
$\rho(r)\propto r^{-k_{\rho}}$ and
$\sigma(r)\propto r^{-k_\sigma}$, respectively. \cite{matzner02}
considered only the homogeneous case $k_{\rho} = k_\sigma=0$. Note that
the local turbulent virial parameter $\alpha(r)\equiv 5
r\sigma^2(r)/[G M(r)]$ scales as $r^{k_{\rho} -2k_\sigma -2}$. Since
$\alpha(r)$ is roughly unity on the scale of the cloud and on that of
the newborn association (else it would not have formed), it
is reasonable to assume $k_{\rho} = 2k_\sigma+2$. The observed
line width-size relation and density-size relations \citep{solomon87} imply
$k_\sigma=-1/2$ and $k_{\rho}=1$, and we adopt these values below. These
parameters correspond to a cloud with a negligible internal pressure
gradient. Note that it has been suggested that the observational result $k_{\rho}=1$ (which is equivalent to GMCs having constant column densities within a galaxy) is simply an observational artifact \citep[e.g.][]{vazquezsemadeni97}. However, many of the proposed mechanisms to explain how this artifact could be created do not apply to extragalactic observations, and more recent observations show that GMCs in other galaxies also show constant column densities \citep[and references therein]{blitz06a}. We discuss this point in more detail in \S~\ref{varcolsection} and \S~\ref{gmcevol}, and also refer readers to the discussion of this point in \citet{krumholz05c}.
The density variation affects the expansion phase of the HII region,
and the variation in velocity dispersion affects how HII regions merge
with the background turbulence.
The evolution
of an HII region in a turbulent GMC is a substantial problem that must
ultimately be solved via simulation. However, to date only preliminary
attempts to solve the problem have been made
\citep[e.g.][]{dale05,mellema05,maclow06,krumholz06e}, so we are forced to rely on analytic
approximations.
First, consider the expansion phase of an HII region. Assume
that it has expanded well beyond its initial
Str\"omgen radius. The mean ambient density is $\bar{\rho}(r)\propto
r^{-k_\rho}$ within a distance $r$ from the association, the ionized
gas temperature is $T_{\rm II}\simeq 7000$ K, the (constant) ionizing
luminosity is $S=10^{49}S_{49}$ s$^{-1}$, and the recombination
coefficient is $\alpha^{(2)}$ in the on-the-spot approximation. If the
HII region is blister-type, ionized gas will rocket away from the
ionization front at a velocity $v'_{\rm ej} = 2 c_{\rm II}$, where
$c_{\rm II} = 9.74$ km s$^{-1}$ is the sound speed in the ionized
gas. This is the characteristic launch velocity of our cloud's
escaping wind. For the evolution of the HII region,
\citeauthor{matzner02}'s dynamical equation [his eq. (15)] admits the
self-similar solution
\begin{equation}
r_{\rm sh}^2 \bar{\rho}(r_{\rm sh}) = (2,1)\times
\frac{3}{4}\frac{(7-2k_{\rho})^2}{9-2k_{\rho}} \rho_{\rm II} c_{\rm II}^2 t^2
\end{equation}
for (blister, embedded) regions, respectively. Here $r_{\rm sh}$ is
the radius of the shell at time $t$, and $\rho_{\rm II}$ is the
(uniform) density inside the HII region, which must vary as $r_{\rm
sh}^{-3/2}$ to ensure that ionizations balance recombinations. This
equation applies when the expansion velocity is much larger than the
turbulent velocity dispersion; we discuss the effects of turbulence
below. This yields
\begin{equation}\label{HIIselfsim}
r_{\rm sh}^{7/2}\bar{\rho}(r_{\rm sh}) = (2,1)
\times 2.2 \frac{3(7-2k_{\rho})^2}{2(9-2k_{\rho})}k_B T_{\rm II}
\left(\frac{3S}{4\pi\alpha^{(2)}}\right)^{1/2} t^2
\end{equation}
where $k_B$ is Boltzmann's constant. The leading factor of 2.2
accounts for the particle abundances assuming helium is singly
ionized. We assume blister-style regions hereafter, because GMCs are
porous and therefore even small HII regions are likely to be able to
punch through to low density channels and vent \citep{matzner02}. For
our fiducial case $k_{\rho}=1$, the mean column $\Sigma\equiv
4\bar{\rho}(r) r/3 $ is a constant. Adopting $\alpha^{(2)} =
3.46\times 10^{-13}$ cm$^3$ s$^{-1}$, and adjusting the effective
ionizing luminosity downward by a factor 1.37 to account for dust
absorption \citep{williams97}, we find
\begin{equation} \label{rdrv}
r_{\rm sh} = 15.4 S_{49}^{1/5}
\left(\frac{t}{3.8\;\rm Myr}\right)^{4/5}
\left(\frac{0.03\;\rm g\;cm^{-2}}{\Sigma}\right)^{2/5}{~\rm pc}.
\end{equation}
Here we have normalized $\Sigma$ to a typical value for Milky Way
molecular clouds. We take the typical ionizing lifetime to be 3.8 Myr
because we adopt the \cite{kroupa01a} initial mass function (IMF) and
the stellar ionizing luminosities and lifetimes tabulated by
\cite{parravano03}.
However, the lifetime can deviate from this value for small clusters,
we define $t_{\rm ms}$ as the main-sequence ionizing lifetime of
stars in a given cluster.
(With these choices, a young association large enough to fully sample
the IMF emits $3.4\times 10^{46}$ ionizing photons per second per
solar mass, and has one star above $8M_{\odot}$, hence one supernova, per
131 solar masses.) The momentum of the HII region during this driving
phase is
\begin{eqnarray}
p_{\rm sh} & = & 5.1\times 10^{43} S_{49}^{3/5}
\left(\frac{t}{3.8\;\rm Myr}\right)^{7/5}
\nonumber \\
& & {} \times
\left(\frac{0.03\;\rm g\;cm^{-2}}{\Sigma}\right)^{1/5}{\rm
g\;cm\;s^{-1}}.
\label{pdrv}
\end{eqnarray}
Assuming the gas can escape freely, the mass evaporation rate from the
cloud is simply
\begin{equation} \label{Mev}
\dot{M}_{\rm cl} = -\dot{p}_{\rm sh}/(2c_{\rm II}).
\end{equation}
Note that equations (\ref{rdrv}) and (\ref{pdrv}) are quite
insensitive to the actual escape of ionized gas \citep{matzner02}, but
$\dot{M}_{\rm cl}$ depends on the existence of an escape route.
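For a sense of scale (an illustrative evaluation, not a new result): differentiating equation (\ref{pdrv}) gives $\dot{p}_{\rm sh} = (7/5)\,p_{\rm sh}/t$, so equation (\ref{Mev}) evaluated at $t=3.8$ Myr with $S_{49}=1$ and $\Sigma = 0.03$ g cm$^{-2}$ yields
\begin{displaymath}
-\dot{M}_{\rm cl} = \frac{7\,p_{\rm sh}}{10\,c_{\rm II}\,t} \approx 5\times 10^3\;M_{\odot}\;{\rm Myr}^{-1},
\end{displaymath}
i.e. of order $2\times 10^4\,M_{\odot}$ evaporated over the 3.8 Myr ionizing lifetime, a substantial fraction of a $10^5\,M_{\odot}$ cloud.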
After the driving stars burn out at $t=t_{\rm ms}$
(and if it has not yet merged with the cloud turbulence,
\S~\ref{HII-merging}), the HII region will continue to expand in a
momentum-conserving snowplow, in which $\dot{r}_{\rm sh}\propto
r_{\rm sh}^{-(3-k_{\rho})} \rightarrow r_{\rm sh}^{-2}$ so that
\begin{equation}
r_{\rm sh} = \left[r_{\rm ms}^{3} + 3 r_{\rm ms}^2 \dot{r}_{\rm ms}
(t - t_{\rm ms}) \right]^{1/3},
\end{equation}
where $r_{\rm ms}$ and $\dot{r}_{\rm ms}$ are the radius and velocity
of the expanding shell at the time when the driving stars burn out.
No mass is evaporated in this phase; we ignore the possibility of
dynamical mass ejection by the snowplow unless the snowplow contains
enough kinetic energy to unbind the cloud as a whole.
We now comment on two potential criticisms of our treatment of HII
regions.
First, real molecular clouds are very inhomogeneous, being filled with
clumpy and filamentary structure. To what degree should this affect
the results of this section? Very little, we argue.
\citet{mckee84} consider HII regions in a medium composed
entirely of dense clumps. Adopting a clump mass distribution very
similar to that observed within molecular clouds, they find (their
eq.\ 5) that the photoevaporative rocket effect clears all but a
couple clumps from within $r_{\rm sh}(t)$. Additional homogenization, not
considered by \citeauthor{mckee84}, should come from the
ram pressure stripping of overdense clumps and filaments as they are
struck by gas already set in motion. For these reasons we expect the
gross properties of HII regions to be rather insensitive to local
inhomogeneities. Recent simulations by \citet{mellema05} and
\citet{maclow06} support this expectation, and \citeauthor{mellema05}
find that the radius of an HII region in a turbulent medium as a
function of time is very similar to what one would find in a uniform
medium. (The simulation by \citet{dale05} gives a somewhat different
result, but its different initial conditions and brief duration do not
allow a fair comparison. In particular, \citeauthor{dale05} compare
their HII region expansion rate to the analytic solution for a uniform
medium, despite the fact that they are simulating a strongly centrally
condensed gas cloud.) For these reasons we
expect our results to be robust against clumping of the gas on scales
smaller than $r_{\rm sh}(t)$. Given that our adoption of blister-type flow
reflects the existence of inhomogeneities on scales larger than
$r_{\rm sh}(t)$, we expect the results in this section to be robust against
cloud inhomogeneities in general.
A second potential criticism concerns our assumption that each HII
region expands as if the gas density drops off like $r^{-k_\rho}$ away
from it. Formally this can only be consistent with our approximations
of a spherical, $\rho\propto r^{-k_\rho}$ cloud if every HII region is
born at the cloud center. We consider this to be merely a
technicality. While our global cloud model refers only to a mean
density profile and a turbulent energy, our model for the local
density enhancement reflects the fact that clusters are born at the
peaks of the turbulent density profile -- which will not always occur
at the cloud center. Indeed, we model mass loss and recoil pressure
as if HII regions peppered the entire cloud and commonly vent out its
side, and we ignore the interaction between regions.
Although our local $k_\rho=1$ density profile is an idealization, we consider
it an improvement over previous works which assumed homogeneous cloud
gas. Moreover, the most luminous HII regions, those which dominate
the energy injection, are in reality likely to be born near cloud
centers, and once they
have expanded to a radius that is significantly larger than the
distance from their center to the peak of the density distribution in
the GMC, the approximation that they are expanding down a $k_\rho = 1$
density gradient will be quite good.
\subsubsection{Merging of HII regions} \label{HII-merging}
The shell will continue to expand until it decelerates to the point
where the expansion velocity is comparable to the current velocity
dispersion in the GMC, at which point the shell will break up, merge
with the general turbulent motions, and cease to be a coherent entity.
The HII region experiences a velocity dispersion $\sigma(r) = \sigma
(r/R_{\rm cl})^{-k_\sigma}$ after expanding to radius $r$, and we approximate
that the merger occurs when $\sigma(r_{\rm sh})=\dot{r}_{\rm sh}$. If
the cloud properties were to remain fixed during the expansion, the
merger radius of an HII region would be
\begin{equation} \label{Rmerge-HII}
r_m= \left(\frac{2
p_{\rm sh}}{M_{\rm cl}\sigma}\right)^{1/(3-k_\sigma-k_{\rho})\rightarrow 2/5} R_{\rm cl},
\end{equation}
applying $k_{\rho}\rightarrow 1$ and $k_\sigma\rightarrow-1/2$. The
factor of 2 arises from our idealization of the cloud as a sphere and
the HII region as a hemisphere. Equation (\ref{Rmerge-HII}) holds
whether merging occurs in the driving phase ($p_{\rm sh}\rightarrow
p_{\rm sh}(t)$) or the snowplow phase ($p_{\rm sh}\rightarrow p_{\rm
sh}(t_{\rm ms})$). However, since the cloud velocity dispersion and
radius can change during the expansion of an HII region, equation
(\ref{Rmerge-HII}) holds only approximately.
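For reference, equation (\ref{Rmerge-HII}) follows from evaluating the shell momentum at merging: the hemispherical shell has swept up a mass $\frac{1}{2}M_{\rm cl}(r_m/R_{\rm cl})^{3-k_{\rho}}$ and moves at $\dot{r}_{\rm sh} = \sigma(r_m) = \sigma\,(r_m/R_{\rm cl})^{-k_\sigma}$, so
\begin{displaymath}
p_{\rm sh} = \frac{1}{2}\, M_{\rm cl}\, \sigma \left(\frac{r_m}{R_{\rm cl}}\right)^{3-k_{\rho}-k_\sigma},
\end{displaymath}
which inverts to the expression above.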
Note that gravity has little effect on HII regions except
during merging, because their shells travel faster than $\sigma(r)$ up
to that point. Indeed, the crossing time $r/\sigma(r)$ is $2.0
\alpha(r)^{-1/2}$ times the free-fall time on scale $r$. Since
$\alpha(r)\geq 1$, gravity only marginally affects shell motions. For
this reason, we are justified in continuing to use our similarity
solution (which neglects gravity) up to the point at which a shell merges.
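The quoted factor follows from the definitions of the two time scales: with $t_{\rm ff} = [3\pi/(32 G\bar{\rho})]^{1/2}$ and $\bar{\rho} = 3M(r)/(4\pi r^3)$,
\begin{displaymath}
\frac{r/\sigma(r)}{t_{\rm ff}(r)} = \left[\frac{40}{\pi^2\,\alpha(r)}\right]^{1/2} \approx 2.0\,\alpha(r)^{-1/2}.
\end{displaymath}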
\subsubsection{Energy injection}
\label{eninjection}
At any given time, the cloud energy terms $\mathcal{T}$ and $\mathcal{B}$ include
both the turbulent energy from merged HII regions, and the kinetic
energy of those that are still expanding. During its expansion,
the kinetic energy of a single HII region is
\begin{equation} \label{calT_II}
\mathcal{T}_1=\frac12 p_{\rm sh} \dot r_{\rm sh} + 2\pi P_{\rm II} r_{\rm sh}^3,
\end{equation}
the two terms reflecting bulk and thermal energy, respectively, if
$P_{\rm II}$ is the internal pressure. The bulk kinetic energy at the time
of merging $t_m$ is therefore $\mathcal{T}_1(t_m) =p_{\rm sh}\sigma(r_m)/2$.
Consider first the energy injected into turbulence at merging. We
assume a merging shell stores the same fraction of energy in
non-kinetic form as does the background turbulence, so we consider the
total energy contribution to be $1.6\mathcal{T}_1(t_m)$. Rather than
track individual regions' contributions in detail, we lump them
together in the energy budget in a way that reproduces the correct
average energy in turbulence. Since the cloud's energy dissipation
time is $t_{\rm dis} = 2.0 (1.2\phi_{\rm in}/\eta_{\rm v}) R_{\rm cl}/\sigma_{\rm cl}$, whereas
the dissipation of a single contribution occurs over $t_{\rm
dis,1}=2.0(1.2\phi_{\rm in}/\eta_{\rm v})r_m/\sigma(r_m)$, we add
\begin{equation} \label{deltaE_implemented}
\Delta E_{\rm turb,1} = \eta_{\rm E} \frac{r_m \sigma_{\rm cl}}{R_{\rm cl} \sigma(r_m)} \times 1.6
\mathcal{T}_1\rightarrow
\eta_{\rm E} \left(\frac{r_m}{R_{\rm cl}}\right)^{1/2} \times1.6 \mathcal{T}_1,
\end{equation}
rather than $1.6 \mathcal{T}_1$, to the turbulent energy. The factor $\eta_{\rm E}$ parameterizes our ignorance of the exact efficiency with which HII regions drive motions in molecular clouds. For a uniform medium $\eta_{\rm E} = 1$, but there has been some suggestion that $\eta_{\rm E} < 1$ in turbulent media \citep[e.g.][]{elmegreen00, dale05}, although other simulations do not appear to support this result \citep[e.g.][]{mellema05}. In our models we vary $\eta_{\rm E}$ to explore how sensitively our results depend on the efficiency with which HII regions inject energy.
We exclude from consideration the energy that expanding shells possess
prior to merging (which we estimate to be several times smaller than
the mean turbulent energy contributed by these regions). We note that
the cloud outside of a given region is unaffected by it, except
gravitationally, until it has merged.
Thus, the rate of energy injection due to a single HII region is roughly
\begin{equation}
\label{HII-injected}
\mathcal{G}_{\rm cl} = 1.6 \eta_{\rm E} \mathcal{T}_1 \left(\frac{r_m}{R_{\rm cl}}\right)^{1/2}
\delta(t-t_m),
\end{equation}
and when a shell merges with the overall cloud
turbulence
we set
\begin{equation}
\label{sigmaneweqn}
\frac{3}{2}\sigma_{\rm new}^2 = \frac{3}{2}\sigma_{\rm old}^2+
\eta_{\rm E} \frac{\mathcal{T}_1}{M_{\rm cl}}
\left(\frac{r_m}{ R_{\rm cl}}\right)^{1/2}.
\end{equation}
\subsubsection{Cometary HII regions and cloud disruption}\label{cometHII}
Finally, note that it is possible that a shell will contain enough
momentum to expand to the point where its radius is larger than the
cloud radius. If the expansion velocity of the shell at this point is
larger than the escape velocity of the cloud (i.e. $\dot{r}_{\rm
sh}>\sqrt{2 G M_{\rm cl}/R_{\rm cl}}$), then the shell will simply unbind the
cloud. This criterion for cloud disruption is somewhat different from
those adopted by \citet{williams97} and \citet{matzner02}; we discuss
the distinction and its implications in \S~\ref{fiducialresult}.
If the expansion velocity is smaller than the escape velocity,
the shell will deform the cloud into a cometary
configuration \citep{bertoldi90}, but what happens at that point
depends on whether or not the shell is still being driven by an active
association. An undriven shell will neither gain nor lose energy once
$r_{\rm sh} > R_{\rm cl}$, and its energy will eventually be added to the
cloud's as turbulence once the cloud falls back. We therefore
approximate that any shell in the snowplow phase that reaches $r_{\rm
sh} > R_{\rm cl}$ but does not have enough momentum to unbind the GMC merges
with the turbulence. On the other hand, a shell with
an active source can continue to gain energy and evaporate mass even
after reaching the cometary configuration, although eventually its
energy will saturate. Following \citet{williams97}, we
estimate that a shell can continue gaining energy after reaching the
cometary phase up to a time $t \sim 2 t_R$, where $t_R$ is the time it
took the HII region to reach $R_{\rm cl}$. Its radius at this time will be
$r_{\rm sh} = 1.74 R_{\rm cl}$. If a driven HII region reaches this radius
and still does not have enough energy to unbind the cloud, then it can
affect the cloud no further. We approximate that the HII region merges
with the turbulence and evaporates no further mass after that point.
It is important to point out that our treatment of large HII regions,
and our criteria for cloud disruption and deformation into a cometary
configuration, are quite approximate. In comparison to
\citet{matzner02}, the criteria we use here reduce the shell momentum
required to disrupt a cloud by roughly a factor of 2. This may affect
our conclusions about the frequency of disruption by HII regions.
\subsection{Supernovae and Protostellar Outflows}
Here we discuss in more detail two sources of feedback that we have chosen not to include: supernovae and protostellar outflows. We omit these because they deliver much less energy per unit mass of stars than HII regions. Here we summarize the arguments, many of which are given in \citet{matzner02}, as to why this is the case.
The dynamics of supernovae exploding inside HII regions, and the
amount of energy and momentum they add, may be studied directly either
by numerical simulation \citep[e.g.][]{tenoriotagle85,yorke89} or
analytic calculation \citep{matzner02}. Both methods yield similar
results: supernovae generally sweep up enough mass inside their
ionized bubbles to become radiative by the time they reach the outer
boundary of their host HII regions. As a result, supernova blast waves
radiate away much of their energy before encountering any molecular
cloud material, and consequently increase the total mass removed by
their HII region by at most $\sim 20-40\%$, and more typically $\sim
10\%$. Moreover, this applies only to supernova progenitors whose HII
regions are enclosed by the cloud and do not blister. Supernovae inside
blister-type regions will deposit most of their energy outside the
cloud, into the low density ambient medium, and should therefore have
an even smaller effect. The assumption sometimes made in the literature
\citep[e.g.][]{clark05} that supernovae can halt star formation and
disrupt molecular clouds in $\lesssim 10$ Myr does not appear to be
supported by any numerical or analytic calculations.
Protostellar outflows are negligible compared to HII regions on the size scale of entire GMCs for two reasons. First, the momentum injected into a cloud by an outflow is of order $\dot{M}_* v_w$, where $\dot{M}_*$ is the star formation rate and $v_w \sim 40$ km s$^{-1}$ is the rough momentum input per unit mass of stars formed for hydromagnetic winds \citep{matzner00, richer00}. In contrast, as the analysis of \S~\ref{hiiregions} shows, the momentum carried by HII regions is of order $\dot{M}_{\rm phot} c_{\rm II}$, where $\dot{M}_{\rm phot}$ is the rate at which ionizing photons photoevaporate cloud mass. While $v_w$ is larger than $c_{\rm II}$ by roughly a factor of 4, the low star formation efficiencies of GMCs imply that the rate at which ionizing photons evaporate gas must exceed the rate at which the gas transforms into stars by a factor of $\gtrsim 10$, so $\dot{M}_{\rm phot} \gg \dot{M}_*$. Our results in \S~\ref{results} confirm this conclusion. Thus, HII regions provide much more momentum than outflows.
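Putting in the numbers quoted above makes this comparison explicit:
\begin{displaymath}
\frac{\dot{M}_{\rm phot}\, c_{\rm II}}{\dot{M}_*\, v_w} \gtrsim
10 \times \frac{1}{4} \approx 2.5,
\end{displaymath}
and this is before any account is taken of the difference in
injection scales discussed below.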
Second, outflows generally inject energy on significantly smaller scales than HII regions. While there are some spectacular examples of HH objects parsecs in size, these appear to be exceptions. \citet{quillen05} find in NGC1333 that the typical size of protostellar outflow cavities is only $\sim 0.1-0.2$ pc. This is a low- to intermediate-mass star forming region, and cavities are likely to be even smaller in the denser environments where most Galactic star formation takes place. This is much smaller than the sizes comparable to the cloud radius reached by large HII regions \citep{matzner02}. Following the discussion in \S~\ref{turbdecay}, we expect the energy injected by these outflows to decay much more quickly than the large-scale motions created by HII regions, and therefore to make an even smaller contribution than comparison of the momenta would suggest.
It is worth noting that both of these arguments that HII regions dominate over outflows apply only if we are concerned with objects comparable in size to entire GMCs. For smaller objects like the gas clumps forming individual clusters, the time for a single HII region to expand may be comparable to the evolution time of the entire region, and much of the energy of the HII region may be deposited outside the clump. As a result, outflows may well dominate for these objects \citep[e.g.][]{quillen05, li06b}.
\section{Star Formation and HII Region Creation}
\label{starformation}
To know how HII region feedback affects GMC evolution, we must know
the star formation rate and the clustering properties of the stars
formed. To estimate the former, we use the turbulence-regulated star
formation model of
\citet{krumholz05c}. This model postulates that stars form in any
sub-region of the cloud where the local gravitational potential energy
exceeds the turbulent kinetic energy, and it gives good agreement with
simulations, with the observed star formation rate in the Milky Way
\citep{luna06}, with the age spreads of rich star clusters \citep{tan06a},
with the star formation rate in dense molecular clumps \citep{krumholz06c},
and with the Kennicutt-Schmidt Law for star formation in galactic
disks \citep{kennicutt98b}. In the Krumholz \& McKee model, the star
formation rate is
\begin{equation}
\label{sfreqn}
\dot{M}_* = \mbox{SFR}_{\rm ff} \frac{M_{\rm cl}}{t_{\rm ff}},
\end{equation}
where
\begin{equation}
\mbox{SFR}_{\rm ff} \approx 0.073 \alpha_{\rm vir}^{-0.68}
\mathcal{M}^{-0.32},
\end{equation}
$t_{\rm ff}\equiv[3\pi/(32 G \bar{\rho})]^{1/2}$ is the free-fall time at
the mean density $\bar{\rho}$ of the cloud, and $\alpha_{\rm vir}$ and $\mathcal{M}$ are
the virial parameter and 1D Mach number. Thus, given the mass,
radius, velocity dispersion, and gas temperature in a GMC, the
Krumholz \& McKee model enables us to predict the instantaneous rate
of star formation in that cloud.
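To illustrate, the following sketch (Python, cgs units) evaluates
equation (\ref{sfreqn}) from a cloud's bulk properties; it assumes the
standard virial parameter $\alpha_{\rm vir}=5\sigma^2 R_{\rm cl}/(G
M_{\rm cl})$ and a 1D Mach number $\mathcal{M}=\sigma_{\rm cl}/c_{\rm
s}$, with function and variable names of our own choosing:
\begin{verbatim}
import math

G = 6.674e-8      # cgs
MSUN = 1.989e33   # g
PC = 3.086e18     # cm

def star_formation_rate(M_cl, R_cl, sigma_cl, c_s=0.19e5):
    """Instantaneous SFR (g/s) from equation (sfreqn); inputs in cgs."""
    rho_bar = 3.0 * M_cl / (4.0 * math.pi * R_cl**3)
    t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho_bar))
    alpha_vir = 5.0 * sigma_cl**2 * R_cl / (G * M_cl)
    mach = sigma_cl / c_s                      # 1D Mach number
    sfr_ff = 0.073 * alpha_vir**-0.68 * mach**-0.32
    return sfr_ff * M_cl / t_ff

# Example: the fiducial 10^6 Msun cloud of Table (cmlist):
# star_formation_rate(1e6 * MSUN, 43.5 * PC, 4.66e5)
\end{verbatim}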
HII regions will be driven by the combined ionizing luminosity of all
the stars in an association, and the energy injection therefore
depends on how the stars are
clustered. Thus we must estimate the ionizing luminosity
function of associations as well as the star formation rate. This
distribution is unfortunately quite uncertain, because a small cloud
cannot form arbitrarily large associations and thus the
Galactic-average and individual-cloud luminosity functions are
different. Let $dF_a(\mathcal{N}_{*})/d\ln \mathcal{N}_{*}$ be the fraction of OB
associations with $\ln$ number of stars between $\ln \mathcal{N}_{*}$ and $\ln
\mathcal{N}_{*} + d\ln\mathcal{N}_{*}$ in the entire Galaxy, and let
$dF_{a,M_{\rm cl}}(\mathcal{N}_{*})/d\ln \mathcal{N}_{*}$ be the corresponding fraction within
a cloud of mass $M_{\rm cl}$. \citet{mckee97} show that observations of the
Galactic population of HII regions are consistent with a Galactic
distribution
\begin{equation}
\label{galacticfstar}
\frac{dF_a}{d\ln \mathcal{N}_{*}} \propto \frac{1}{\mathcal{N}_{*}}.
\end{equation}
It will be more convenient for us to work with the mass of an
association rather than the number of stars. Since we are concerned
with OB associations which give rise to HII regions, and these have
enough stars to sample the IMF well
\citep[$\mathcal{N}_{*}>100$,][]{zinnecker93}, we convert from number to mass
simply by multiplying by the mean stellar mass. We use the
\citet{kroupa01a} IMF, which gives $\left\langle m\right\rangle_{\rm
IMF}=0.21$ $M_{\odot}$, where $\left\langle\cdot\right\rangle_{\rm
IMF}$ indicates an average over the IMF. Thus,
\begin{equation}
\label{galacticfma}
\frac{dF_a}{d\ln M_a} \propto \frac{1}{M_a}
\end{equation}
is the Galactic distribution of association mass $M_a$.
To go from the Galactic-average distribution to the
individual cloud distribution, we follow \citet{williams97} and
\citet{matzner02}. First, we require that (1) no association has a
mass larger than $\epsilon M_{\rm cl}$, where \citet{williams97} estimate
$\epsilon=0.1$, (2) the distribution of association masses within
individual GMCs, when integrated over the GMC population of the
Galaxy, gives the Galactic distribution (\ref{galacticfma}), and (3)
the star formation rate per cloud scales with the cloud mass as
$\dot{M}_* \propto M_{\rm cl}^{\beta}$. Using an argument analogous to the
derivation of equation (16) in \citet{williams97} and equation (35) in
\citet{matzner02}, we derive a distribution
\begin{equation}
\label{massocdist}
\frac{d F_{a,M_{\rm cl}}}{d\ln M_a} \propto
\frac{H(\epsilon M_{\rm cl}-M_a)}{1 - (\epsilon M_{\rm cl}/M_a)^{\alpha-\beta}}
\left(\frac{1}{M_a}\right).
\end{equation}
Here $H(x)=(1,0)$ for $(x>0,x<0)$ is the Heaviside step function, and
the Galactic population of star-forming GMCs satisfies
$d\mathcal{N}_{\rm cl}/d\ln M_{\rm cl} \propto M_{\rm cl}^{-\alpha}$ with
$\alpha\approx 0.6$ to an upper limit of
$M_u=6\times 10^6$ $M_{\odot}$ \citep{williams97}.
Based on their star formation model,
\citet{krumholz05c} suggest $\beta\approx 0.67$, and we adopt this
value in our work. Note that $\beta<1$ implies a higher rate of star
formation per unit mass in smaller clouds, which occurs because
in the Galaxy smaller clouds have higher densities and thus shorter
free-fall times. Equation (\ref{massocdist}) is fairly simple to
understand intuitively: small clouds cannot make arbitrarily
large associations, hence the factor $H(\epsilon M_{\rm cl}-M_a)$. If all
clouds produced OB associations with the same mass distribution as the
Galactic powerlaw distribution, up to the maximum mass they could
produce, there would be a deficit of large
associations because all clouds can make small associations, but only
a small fraction of clouds can make large ones. To offset this effect
and produce the Galactic distribution of OB association masses, GMCs
must be slightly more likely to produce an association near their
upper mass limit than a straightforward extrapolation of the Galactic
association mass distribution would suggest, hence the factor
$[1-(\epsilon M_{\rm cl}/M_a)^{\alpha-\beta}]^{-1}$.
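As an illustration of how equation (\ref{massocdist}) can be sampled
in practice, the following sketch (Python) draws an association mass
by numerical inversion of the cumulative distribution; the lower
cutoff \texttt{M\_min} is a hypothetical choice introduced only for
the illustration, since the distribution is meant for associations
large enough to drive HII regions:
\begin{verbatim}
import numpy as np

ALPHA, BETA = 0.6, 0.67   # cloud spectrum and SFR-mass scaling indices

def draw_association_mass(M_cl, eps=0.1, M_min=1.0e2, n_grid=1000,
                          rng=None):
    """Draw one association mass (Msun) from equation (massocdist)."""
    rng = rng or np.random.default_rng()
    M_max = eps * M_cl
    ln_m = np.linspace(np.log(M_min), np.log(M_max), n_grid,
                       endpoint=False)
    m = np.exp(ln_m)
    # dF/dln M_a ~ [1 - (eps M_cl / M_a)^(ALPHA-BETA)]^-1 / M_a
    pdf = 1.0 / (m * (1.0 - (M_max / m)**(ALPHA - BETA)))
    cdf = np.cumsum(pdf)
    return float(m[np.searchsorted(cdf / cdf[-1], rng.uniform())])
\end{verbatim}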
For large associations, the ionizing luminosity is simply
\begin{equation}
S_{49} =
\frac{\left\langle s_{49}(m)\right\rangle_{\rm IMF}}
{\left\langle m\right\rangle_{\rm IMF}} M_a,
\end{equation}
and the main sequence ionizing lifetime is
\begin{equation}
\left\langle t_{\rm ms} \right\rangle_{\rm a} =
\frac{
\left\langle s_{49}(m) t_{\rm ms}(m) \right\rangle_{\rm IMF}}
{\left\langle s_{49}(m)\right\rangle_{\rm IMF}},
\end{equation}
where $s_{49}(m)$ and $t_{\rm ms}(m)$ are the ionizing luminosity (in
units of $10^{49}$ photons s$^{-1}$) and main-sequence lifetime of a
star of mass $m$. We adopt the fits of \citet{parravano03} for $s_{49}(m)$
and $t_{\rm ms}(m)$, which, together with the \citet{kroupa01a} IMF,
give $\left\langle s_{49}(m)\right\rangle_{\rm IMF} = 7.2\times
10^{-4}$, $S_{49} = 3.4\times 10^{-3} (M_a/M_{\odot})$,
and $\left\langle t_{\rm ms}\right\rangle_{\rm a} = 3.8$ Myr. However, for smaller
associations the ionizing luminosity is likely to be dominated by the
single most massive star, and this causes the ionizing luminosity
function to flatten and the ionizing lifetime to vary with
$s_{49}$. \citet[Appendix A]{mckee97} give an analytic formula that
approximates the flattening, but for semi-analytic models we can
dispense with the approximation and determine the luminosity function
simply by drawing stars randomly from the IMF until we accumulate
mass up to a given association mass. We can then determine the
ionizing luminosity of the association by summing those of the individual
stars, and define the main sequence lifetime as the time at which the
stars providing half the ionizing photons disappear.
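In schematic form, the assembly of a single association works as
follows (Python; \texttt{draw\_imf\_mass}, \texttt{s49}, and
\texttt{t\_ms} stand in for a \citet{kroupa01a} sampler and the
\citet{parravano03} fits, which we do not reproduce here):
\begin{verbatim}
def build_association(M_a, draw_imf_mass, s49, t_ms):
    """Assemble an association of total stellar mass ~ M_a (Msun).

    Returns (S49, lifetime): the summed ionizing luminosity and the
    time at which the stars supplying half the photons have died."""
    masses, m_tot = [], 0.0
    while m_tot < M_a:              # draw stars until mass accumulates
        m = draw_imf_mass()
        masses.append(m)
        m_tot += m
    stars = sorted((t_ms(m), s49(m)) for m in masses)  # by death time
    S49_tot = sum(s for _, s in stars)
    lost = 0.0
    for t_death, s in stars:
        lost += s
        if lost >= 0.5 * S49_tot:   # half the ionizing photons gone
            return S49_tot, t_death
    return S49_tot, stars[-1][0]
\end{verbatim}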
Note that, in contrast to \citet{williams97} and \citet{matzner02}, we
do not impose an absolute upper limit of $S_{49} \leq 490$ on the
ionizing luminosity of associations. This has no effect at all on any
but the most massive clouds, since the requirement that the
association mass be at most 10\% of the cloud mass imposes a limit on
$S_{49}$ that is lower than this. For the most massive clouds we
model in \S~\ref{fiducialresult}, the largest associations formed reach
$S_{49} \sim 1700$, but the extremely weak dependence of HII region
properties on $S_{49}$ ($r_{\rm sh}\propto S_{49}^{1/5}$) means that
increasing the maximum $S_{49}$ by a factor of a few has very little
effect. Furthermore, due to the relative improbability of forming an
association so close to the upper mass limit, even for our highest
mass models the majority of clouds do not form associations with
$S_{49} > 490$. Thus, we do not expect the presence or absence of an
upper limit on association ionizing luminosities to affect our results
in any significant way.
\section{Semi-Analytic Models}
\label{method}
\subsection{Methodology}
We now have in place all the necessary theoretical apparatus to set up
our semi-analytic models. In essence, our model describes the
evolution of a GMC using a pair of coupled non-linear ODEs, with added
damping and driving terms. We integrate these equations forward in
time in a three-step process. First, we use the current configuration of
the cloud to compute the rate of turbulent decay (equation
\ref{Lambda_diss}), the rate of star formation (equation
\ref{sfreqn}), and the rate of evaporation by HII regions (equation
\ref{Mev}). We update $R_{\rm cl}$, $\dot{R}_{\rm cl}$, $\sigma_{\rm cl}$, and $M_{\rm cl}$ using these
values (equations \ref{eqofmotionfinal} and \ref{energyeqfinal}). We
chose our update time step so that no quantity changes by more than
0.1\% per advance. Second, we update the state of HII
regions as described in \S~\ref{hiiregions}. We compute new values of the
radius and expansion rate for each HII region, removing those whose
expansion rates are low enough or radii are large enough for them to
merge. At merging, we add their energy to turbulent motions in the
cloud.
Third, we create new HII regions. To do this we generate a random
mass for the next association, chosen from the distribution
(\ref{massocdist}). We track the amount of mass transformed into
stars since the last association was fully formed, and when half the
next association mass has been accumulated in new stars, we add an HII
region for that association. To determine the properties of the HII
region, we generate stellar masses from the \citet{kroupa01a} IMF,
assign an ionizing luminosity and main sequence lifetime to each star,
and use these to compute the total ionizing luminosity and lifetime
(defined as the time when the stars responsible for half the ionizing
photons burn out) for the association. We continue to put newly formed stars
into the new association until the full mass of the association has
been accumulated, at which point we re-set the tally of accumulated
stellar mass and randomly generate a new mass for the next
association.
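The full update cycle can be summarized schematically as follows
(Python; the \texttt{cloud} and HII-region objects and their methods
are placeholders for the machinery described above, not the actual
implementation):
\begin{verbatim}
def advance_one_step(cloud, hii_regions, draw_association_mass):
    """One time step of the semi-analytic model (schematic)."""
    # Step 1: rates from the current state, then advance the ODEs.
    dt = cloud.timestep(max_change=1e-3)   # <0.1% change per step
    cloud.update_state(cloud.turbulent_decay_rate(),
                       cloud.star_formation_rate(),
                       sum(h.evaporation_rate() for h in hii_regions),
                       dt)
    # Step 2: expand HII regions; merged shells feed the turbulence.
    for h in list(hii_regions):
        h.expand(dt)
        if h.has_merged(cloud):
            cloud.add_turbulent_energy(h.energy)
            hii_regions.remove(h)
    # Step 3: launch the next association's HII region once half of
    # its mass has formed; re-draw a target mass once it is complete.
    cloud.stars_since_last += cloud.star_formation_rate() * dt
    if (not cloud.hii_launched
            and cloud.stars_since_last >= 0.5 * cloud.next_assoc_mass):
        hii_regions.append(cloud.spawn_hii_region())
        cloud.hii_launched = True
    if cloud.stars_since_last >= cloud.next_assoc_mass:
        cloud.stars_since_last = 0.0
        cloud.hii_launched = False
        cloud.next_assoc_mass = draw_association_mass(cloud.mass)
    return dt
\end{verbatim}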
We terminate the evolution when one of three conditions has been
satisfied: (1) an HII region unbinds the cloud
(i.e. $r_{\rm sh} \geq R_{\rm cl}$ and
$\dot{r}_{\rm sh} > \sqrt{2 G M_{\rm cl}/R_{\rm cl}}$); (2) the cloud surface density has dropped to
the point where its visual extinction $A_V < 1.4$, the minimum
required for CO to remain molecular \citep{vandishoeck88}, assuming
the standard Milky Way interstellar UV field and dust to gas ratio,
whereby 1 g cm$^{-2}$ corresponds to $A_V = 214$; or (3) the time step
is less than $10^{-8}$ times the current evolution time, which
occurs if the radius is approaching zero. We term these
possibilities \textit{disruption}, \textit{dissociation}, and
\textit{collapse}. An important caveat that applies to all these
outcomes is their dependence on our assumption of spherical
symmetry. Even if an HII region delivers an impulse capable of
unbinding a cloud, the cloud may actually be displaced as a whole, or it may
break into multiple pieces, each of
which is internally bound and capable of continuing to form
stars. In the case of dissociation, a GMC's mean column density may be
so low that much of its mass is turned atomic by the interstellar UV
field, but overdense clumps within it may survive and continue star
formation. Finally, the collapse case would probably result in a cloud
fragmenting into smaller pieces rather than undergoing monolithic
collapse as occurs in our one-dimensional models.
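A minimal sketch of these termination checks (Python; the
\texttt{cloud} and shell attributes are again placeholders of our own)
is:
\begin{verbatim}
import math

G = 6.674e-8  # cgs

def check_termination(cloud, shells, dt, t):
    """Evaluate the three stopping conditions described above."""
    v_esc = math.sqrt(2.0 * G * cloud.mass / cloud.radius)
    if any(s.r >= cloud.radius and s.rdot > v_esc for s in shells):
        return "disruption"
    Sigma = cloud.mass / (math.pi * cloud.radius**2)  # g cm^-2
    if 214.0 * Sigma < 1.4:     # A_V = 214 Sigma; CO needs A_V >= 1.4
        return "dissociation"
    if dt < 1.0e-8 * t:         # time step collapsing to zero
        return "collapse"
    return None
\end{verbatim}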
\subsection{Fiducial Initial Conditions}
Here we describe a basic set of initial conditions for our runs. In
\S~\ref{results}, we discuss how varying some of these affects the
outcome of our runs. Our models start with a common set of initial
parameters summarized in Table \ref{parameters}. Most of these values
are taken directly from observations, but a few of the parameters
deserve some additional discussion. Observations of the strength of
magnetic fields in GMCs are quite uncertain. We set $\eta_{\rm B}=0.5$,
corresponding to equipartition between magnetic and kinetic energy.
For a more detailed analysis of the observational data and theoretical
arguments for this choice, see \citet[ \S~7.3]{krumholz05c}. We
set $\eta_{\rm v}=1.2$ and $\phi_{\rm in}=1.0$, corresponding to the decay of
turbulence at the rate found by the simulations of \citet{stone98},
and to turbulence on the size scale of the entire GMC. Finally, we set $\eta_{\rm E}=1$, corresponding to HII regions injecting energy into turbulent media as efficiently as they would for smooth media.
\begin{deluxetable}{cc}
\tablecaption{Fiducial parameters.\label{parameters}}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter} &
\colhead{Value}
}
\startdata
$\alpha_{\rm vir-0}$ & 1.1 \\
$c_{\rm s}$ & 0.19 km s$^{-1}$ \\
$\eta_{\rm B}$ & 0.5 \\
$\eta_{\rm E}$ & 1.0 \\
$\eta_{\rm v}$ & 1.2 \\
$k_{\rho}$ & 1.0 \\
$\phi_{\rm in}$ & 1.0
\enddata
\end{deluxetable}
With these parameters fixed, we can fully specify the initial
conditions for a model by giving the initial mass and column density
of a cloud. We evolve models with masses of $M_{\rm cl,6}=0.2$, $1.0$ and
$5.0$, where $M_{\rm cl}=M_{\rm cl,6} \times 10^6$
$M_{\odot}$. These masses span the range where most of the molecular gas in the
Milky Way resides \citep{williams97}. We set the initial cloud column
density to $N_{H,22} = 1.5$, where $N_H = N_{H,22} \times 10^{22}$
cm$^{-2}$, which is typical for Milky Way GMCs regardless of mass
\citep{larson81, solomon87}. We summarize the initial radius, velocity
dispersion, and crossing time as a function of mass in Table
\ref{cmlist}. For reference, we also show how these quantities vary
with column density at fixed mass.
\begin{deluxetable}{ccccc}
\tablecaption{Initial cloud properties.\label{cmlist}}
\tablewidth{0pt}
\tablehead{
\colhead{$M_{\rm cl,6}$} &
\colhead{$N_{\rm H,22}$} &
\colhead{$R_{\rm cl-0}$ (pc)} &
\colhead{$\sigma_{\rm cl-0}$ (km s$^{-1}$)} &
\colhead{$t_{\rm cr-0}$ (Myr)}
}
\startdata
0.2 & 0.5 & 33.7 & 2.37 & 13.9 \\
1.0 & 0.5 & 75.3 & 3.54 & 20.8 \\
5.0 & 0.5 & 168 & 5.30 & 31.1 \\
0.2 & 1.5 & 19.4 & 3.12 & 6.1 \\
1.0 & 1.5 & 43.5 & 4.66 & 9.1 \\
5.0 & 1.5 & 97.2 & 6.97 & 13.6 \\
0.2 & 4.5 & 11.2 & 4.10 & 2.7 \\
1.0 & 4.5 & 25.1 & 6.14 & 4.0 \\
5.0 & 4.5 & 56.1 & 9.18 & 6.0
\enddata
\end{deluxetable}
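The entries in Table \ref{cmlist} follow directly from the definitions
above; the short sketch below (Python, cgs units) reproduces them,
assuming $M_{\rm cl} = \pi R_{\rm cl}^2 \Sigma$ with $\Sigma = \mu_{\rm
H} m_{\rm H} N_H$ and $\mu_{\rm H}=1.4$ to account for helium,
$\alpha_{\rm vir}=5\sigma^2 R_{\rm cl}/(G M_{\rm cl})=1.1$, and
$t_{\rm cr}=R_{\rm cl}/\sigma_{\rm cl}$:
\begin{verbatim}
import math

G, MSUN, PC, MH = 6.674e-8, 1.989e33, 3.086e18, 1.673e-24  # cgs

def initial_conditions(M6, NH22, alpha_vir=1.1, mu_H=1.4):
    """Return (R in pc, sigma in km/s, t_cr in Myr)."""
    M = M6 * 1.0e6 * MSUN
    Sigma = mu_H * MH * NH22 * 1.0e22        # surface density, g cm^-2
    R = math.sqrt(M / (math.pi * Sigma))     # M = pi R^2 Sigma
    sigma = math.sqrt(alpha_vir * G * M / (5.0 * R))
    return R / PC, sigma / 1.0e5, (R / sigma) / 3.156e13

# initial_conditions(1.0, 1.5) ~ (43.5, 4.66, 9.1), cf. Table (cmlist)
\end{verbatim}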
\section{Results}
\label{results}
\subsection{Fiducial Runs}
\label{fiducialresult}
We simulate the fiducial initial conditions 100 times each, using
different random seeds for each run, for each of our three initial
masses. Table \ref{fiducialoutcome} summarizes the
statistical outcome of our fiducial runs.
\begin{deluxetable*}{ccccccccc}
\tablecaption{Fiducial run outcomes.\label{fiducialoutcome}}
\tablewidth{0pt}
\tablehead{
\colhead{$M_{\rm cl,6}$} &
\colhead{$t_{\rm life}$ ($t_{\rm cr-0}$, Myr)} &
\colhead{$\overline{\alpha}_{\rm vir}$} &
\colhead{$\overline{N}_{H,22}$} &
\colhead{SFE} &
\colhead{$M_{\rm phot}$} &
\colhead{$N_{\rm disrupt}$} &
\colhead{$N_{\rm dissoc}$} &
\colhead{$N_{\rm col}$}
}
\startdata
0.2 & 1.6 (9.9) & 2.2 & 1.4 & 0.053 & 0.59 & 63 & 37 & 0 \\
1.0 & 2.2 (20) & 2.1 & 1.3 & 0.054 & 0.70 & 92 & 8 & 0 \\
5.0 & 3.2 (43) & 1.5 & 1.5 & 0.082 & 0.80 & 99 & 1 & 0
\enddata
\tablecomments{
Col. (2): Mean lifetime in initial crossing times, with the equivalent in Myr in parentheses. Cols. (3-4):
$\alpha_{\rm vir}$ and $N_{H,22}$ averaged over all times and runs. Col. (5): Star
formation efficiency. Col. (6): Fraction of mass photoevaporated prior to cloud destruction.
Cols. (7-9): Number of runs out of 100 that
ended in disruption, dissociation, and collapse.
}
\end{deluxetable*}
The most basic result is that massive clouds attain a
quasi-equilibrium state, in which the decay of turbulence is roughly
balanced by the injection of energy by HII regions. In this state,
cloud virial parameters fluctuate around unity, but most clouds spend
most of their lives with virial parameters from $1$ to $3$, with a
time-averaged value from $1.5-2.2$ depending on the cloud mass, as
shown by Figure \ref{alphaevol}. This may slightly overestimate the
true virial parameter, because in our model we assume that all of the
energy from HII regions goes into random turbulent motions that can
then fuel cloud expansion, rather than into coherent motions of the cloud as a whole or, if HII regions fragment the cloud, into motions of these fragments. Such coherent motions are often excluded in
observational estimates of virial parameters. Nonetheless, at a
qualitative level the result that
clouds equilibrate to virial parameters $\sim 1$ is in good agreement
with observations showing that massive clouds are approximately
virialized \citep{heyer01,blitz06a}.
\begin{figure}
\plotone{f1.eps}
\caption{
\label{alphaevol}
Virial parameter $\alpha_{\rm vir}$ versus physical time $t$ and dimensionless
time $\tau$ for a sample of runs with fiducial parameters. We show
$M_{\rm cl,6}=0.2$ (\textit{top panel}), $M_{\rm cl,6}=1.0$
(\textit{center panel}), and $M_{\rm cl,6}=5.0$ (\textit{bottom
panel}).
}
\end{figure}
The evolution follows a sawtooth
pattern, in which an HII region drives up the virial parameter, which then
exponentially decays until it is increased by the next injection of
energy. This is similar to the pattern seen in simulations by
\citet{li06b}, in which turbulence in star-forming clumps $\sim 1$ pc
in size is maintained by
energy injection from protostellar outflows. This equilibrium is
maintained partly because the star formation rate responds to the
current state of the cloud, increasing as the cloud contracts and its
turbulence decays, and going back down when the cloud
re-expands. Figure \ref{tdepevol} shows the depletion time, defined as
the ratio of the current cloud mass to the current star formation
rate, which exhibits this pattern.
\begin{figure}
\plotone{f2.eps}
\caption{
\label{tdepevol}
Depletion time versus physical time $t$ and dimensionless
time $\tau$ for a sample of runs with fiducial parameters. We show
$M_{\rm cl,6}=0.2$ (\textit{top panel}), $M_{\rm cl,6}=1.0$
(\textit{center panel}), and $M_{\rm cl,6}=5.0$ (\textit{bottom
panel}).
}
\end{figure}
As shown in Figure \ref{nhevol},
cloud column densities also show a sawtooth pattern, oscillating
up and down but remaining relatively constant over multiple expansion
and contraction cycles so that the time average is roughly the
observed value $N_{H,22}\approx 1.5$. (The figure is somewhat deceptive, because the longest-lived clouds tend to go to somewhat lower column densities, and these stand out the most when examining the figure because the shorter-lived clouds all overlie one another on the left side of the plot. The numerically-computed time-averaged column density over all models is given in Table \ref{fiducialoutcome}.)
Over their lifetimes, the
average star formation efficiency of clouds, defined as the fraction
of initial cloud mass transformed into stars, is $5-10\%$. HII regions
ionize away anywhere from $50-90\%$ of the mass before finally
destroying the clouds entirely, with lower mass clouds losing less
mass to photoionization than more massive clouds. Figure
\ref{massevol} shows this evolution.
\begin{figure}
\plotone{f3.eps}
\caption{
\label{nhevol}
Column density $N_H$ versus physical time $t$ and dimensionless
time $\tau$ for a sample of runs with fiducial parameters. We show
$M_{\rm cl,6}=0.2$ (\textit{top panel}), $M_{\rm cl,6}=1.0$
(\textit{center panel}), and $M_{\rm cl,6}=5.0$ (\textit{bottom
panel}).
}
\end{figure}
\begin{figure}
\plotone{f4.eps}
\caption{
\label{massevol}
Cloud mass (\textit{solid lines}) and 10 times mass of stars formed
(\textit{dashed lines}) versus physical time $t$ and dimensionless
time $\tau$ for a sample of runs with fiducial parameters. We show
$M_{\rm cl,6}=0.2$ (\textit{top panel}), $M_{\rm cl,6}=1.0$
(\textit{center panel}), and $M_{\rm cl,6}=5.0$ (\textit{bottom
panel}).
}
\end{figure}
The duration and stability of cloud equilibrium are affected by
cloud mass. Low mass clouds, $M_{\rm cl,6}=0.2$, are stable only
for an average of $1.6$ crossing times ($3.2$ free-fall times), and
are most often destroyed when they form an HII region that delivers
enough momentum to completely unbind them. They survive only one or a
few cycles of expansion and contraction. In contrast, very massive
clouds, $M_{\rm cl,6}=5$, are stable for $3.2$ crossing times
and for multiple cycles before finally being unbound by HII
regions. At all masses most clouds are destroyed by direct disruption
rather than by photodissociation, with the number photodissociated
dropping sharply as a function of mass. This result is largely a
function of cloud lifetimes. Direct disruption by a single HII region
occurs when a truly large association forms, one that is on the tail
of the mass distribution. Since larger clouds live longer and form
more stars, they sample this tail more thoroughly, and thus are more
likely to form one very large association capable of disrupting them
than smaller clouds. An additional effect comes from the cloud
velocity dispersion. Since larger clouds have higher velocity
dispersions, the HII region shells driven by small associations merge
more quickly, so the relative amount of energy injected by large
versus small associations increases.
Cloud disruption is much more frequent in our models than in those of
\citet{williams97} or of \citet{matzner02}. The primary difference is a
revised criterion for dynamical disruption. Williams \& McKee took the
onset of a cometary phase to be the point at which an HII region
effectively disrupts or displaces its parent cloud. Matzner
considered the delivery of a momentum $p_{\rm sh}$ in excess of $M_{\rm cl}
v_{\rm esc}$ to define disruption. Our criterion ($\dot{r}_{\rm
sh}>v_{\rm esc}$
when $r_{\rm sh}>R_{\rm cl}$) resembles the Matzner threshold, but requires roughly
half as much momentum because we model HII regions as hemispheres and
clouds as spheres. In addition, our adoption of the \citet{kroupa01a}
IMF and \citet{parravano03} stellar properties leads to about twice as many
ionizing photons per stellar mass relative to Williams \& McKee and
Matzner. The revised model of HII region expansion also plays a minor
role. All told, the largest cloud that can be disrupted by HII
regions is about ten times more massive in this work than in the
Matzner models. Our disruption criterion, being more generous than
the Williams \& McKee or Matzner criteria, accounts in an approximate
manner for cloud displacement and fragmentation by large HII regions.
Our conclusion that more massive GMCs survive longer than less massive
ones also contrasts with the results obtained by \citet{williams97} and
\citet{matzner02}. One reason for our different
result is that in our model the star formation rate per unit mass is
proportional to the inverse of the free-fall time, which, at constant
$N_H$, produces more star formation per unit mass in small clouds than
in large ones. Thus, low mass clouds are subject to more vigorous
feedback. In contrast, previous studies generally adopted star formation
laws under which the star formation rate per unit mass was independent
of or increased with GMC size. This higher star formation rate
compensates for the lack of large associations in small clouds and
the ease with which small clouds can be driven into a cometary
configuration, both of which tend to reduce mass loss. As a result
clouds photoevaporate about the same fraction of their mass per
dynamical time regardless of their starting mass.
A second reason for our finding that small clouds live less time is
our focus on the dynamical rather than the photoevaporation lifetime.
For massive clouds, our models show that the
photoevaporation $e$-folding time is $\sim 20-30$ Myr, comparable to
or a bit shorter than the dynamical lifetime, and consistent with
earlier estimates. In contrast, for low mass clouds the dynamical
destruction time is shorter than the photoevaporation time. Clouds are
dynamically broken up by HII regions, either by direct disruption from
a single region or expansion to the point of photodissociation through
the collective action of several HII regions, before most of their
mass is photoevaporated away. This occurs because smaller clouds have
less inertia and lower escape velocities, so they are more
easily pushed apart by HII regions. Larger clouds require truly
large HII regions to unbind them, and so are not disrupted until they
have lost significant mass and enough associations have formed to
reach to this tail of the distribution. Smaller clouds can be
destroyed by more typical HII regions.
This distinction between photoevaporation and dynamical lifetime
points to an important caveat of our analysis. The dynamical lifetime
we find is the time for which a cloud exists as a single coherent
entity under the assumption of spherical symmetry. As we discuss
above, it is possible that, rather than
unbinding all the molecular gas, or expanding it to the point of
photodissociation, HII regions displace GMCs or blow GMCs into multiple
pieces that are not bound to each other, but each of which remains
molecular and internally bound, and can continue forming stars. Thus,
the time for
which the Lagrangian elements of a cloud remain molecular and
star-forming may be longer than the dynamical lifetime of the cloud
which we have found. The photoevaporation time may provide
a better estimate of this Lagrangian lifetime. For clouds of mass
$M_{\rm cl,6} \gtrsim 1$ this distinction is not that significant since
the lifetime is comparable to the photoevaporation time, but
it can be for smaller clouds. Determining whether
parts of these clouds do continue forming stars even after they no
longer form a single gravitationally bound entity will require
radiation hydrodynamic simulations that can include both cloud
dynamics and photoevaporation.
\subsection{Varying Column Density}
\label{varcolsection}
Prompted in part by the result that giant molecular clouds both in the
Milky Way \citep{larson81,solomon87} and in nearby galaxies with
similar metallicities \citep{blitz06a} have a
surprisingly small range of column densities, we investigate how
varying the initial column density affects the evolution. For each
mass we run with the fiducial parameters, we re-run using the same 100
random seeds but initial column densities of $N_{H,22} = 0.5$ and $4.5$.
The low column density case corresponds to a GMC with a mean column
density that is just barely sufficient to be self-shielding against
the interstellar UV field. Giant clouds of such low column density (as
opposed to isolated low-mass clouds) have not been seen in CO surveys
either in the Milky Way or in other Local Group galaxies, and there is
observational evidence and theoretical expectation that molecular gas
at column densities lower than $N_{H,22}\sim 0.9$ is not star-forming
(\citealt{mckee89, li97, onishi98, onishi99, onishi02, johnstone04} --
though see \citealt{hatchell05}, who find less star formation at low column densities, but not a complete absence of star formation). However, in the model of GMCs as turbulent
density fluctuations, low column density clouds of all masses are
expected to exist and be star-forming
\citep[e.g.][]{vazquezsemadeni97}, so we consider the low column
density case without imposing a column density threshold for star
formation. The high column density case corresponds to GMCs found in
galaxies that are entering the starburst regime. For example, in M64,
a weak nuclear starburst galaxy, the typical GMC column density is 2.5
times that in the Milky Way \citep{rosolowsky05a}.
Table \ref{nhvarytab} gives statistical results for the
runs with varying $N_H$, and Figure \ref{nhvary} shows
the evolution of column density versus time for a sample of runs at
each mass. The results show that $N_{H,22}\approx 1$ is roughly a
critical point in column density. Clouds that begin their
evolution at substantially lower column density tend to disrupt or
dissociate in $\sim 1$ dynamical time. Clouds that start at higher
column densities show a pattern that depends on mass. At masses
$M_{\rm cl,6} \ll 1$, they remain stable for long times. At higher
masses, they tend to undergo uncontrolled collapse.
\begin{deluxetable*}{cccccccccc}
\tablecaption{Outcomes with varying column density.\label{nhvarytab}}
\tablewidth{0pt}
\tablehead{
\colhead{$M_{\rm cl,6}$} &
\colhead{$N_{\rm H,22}$} &
\colhead{$t_{\rm life}$ ($t_{\rm cr-0}$, Myr)} &
\colhead{$\overline{\alpha}_{\rm vir}$} &
\colhead{$\overline{N}_{H,22}$} &
\colhead{SFE} &
\colhead{$M_{\rm phot}$} &
\colhead{$N_{\rm disrupt}$} &
\colhead{$N_{\rm dissoc}$} &
\colhead{$N_{\rm col}$}
}
\startdata
0.2 & 0.5 & 0.44 (6.1) & 1.7 & 0.60 & 0.022 & 0.34 & 74 & 26 & 0 \\
0.2 & 1.5 & 1.6 (9.9) & 2.2 & 1.4 & 0.053 & 0.53 & 63 & 37 & 0 \\
0.2 & 4.5 & 5.6 (15) & 1.5 & 5.2 & 0.19 & 0.80 & 93 & 0 & 7 \\
1.0 & 0.5 & 0.72 (15) & 1.9 & 0.61 & 0.026 & 0.51 & 8 & 92 & 0 \\
1.0 & 1.5 & 2.2 (20) & 2.1 & 1.3 & 0.054 & 0.70 & 92 & 8 & 0 \\
1.0 & 4.5 & - & - & - & - & - & 17 & 0 & 83 \\
5.0 & 0.5 & 1.3 (41) & 1.6 & 0.5 & 0.039 & 0.58 & 5 & 95 & 0 \\
5.0 & 1.5 & 3.2 (43) & 1.5 & 1.5 & 0.082 & 0.80 & 99 & 1 & 0 \\
5.0 & 4.5 & - & - & - & - & - & 0 & 0 & 100
\enddata
\tablecomments{
Column definitions are identical to those in Table
\ref{fiducialoutcome}, and cases with $N_{H,22}=1.5$ are identical to
the values in that Table. We compute average quantities excluding
runs that result in collapse, and we do not attempt to compute
averages in cases where a majority of runs produce collapse.
}
\end{deluxetable*}
\begin{figure}
\plotone{f5.eps}
\caption{
\label{nhvary}
Column density $N_H$ versus dimensionless time $\tau$ for a sample of
runs with varying initial $N_H$. Each panel shows a different initial
mass, shown in the upper left corner. The dotted horizontal line
indicates the column density at which clouds dissociate.
}
\end{figure}
This change in behavior is not due to a change in the star formation
rate per dynamical time, which is almost independent
of the column density, since $\mbox{SFR}_{\rm ff} \propto \mathcal{M}^{-0.32} \propto
N_H^{-0.08}$ for fixed $\alpha_{\rm vir}$. Instead, the evolution depends on the
column density because the column density significantly affects the
efficiency of feedback. With our approximation that merged HII regions do not inject energy into clouds, the fraction of an HII
region's energy that is lost to radiation increases with column
density, so for equal ionizing fluxes and times, the kinetic energy in
an HII region shell varies as $N_H^{-3/5}$. Furthermore, higher column
densities
increase the cloud velocity dispersion (as $N_H^{1/4}$ for fixed
$\alpha_{\rm vir}$) and decrease the expansion velocity, causing HII regions to
break up and merge earlier. Since in our model HII regions only gain energy
as long as they are expanding, the sooner they merge the less energy
they inject. From equation (\ref{Rmerge-HII}), for fixed cloud and HII
region properties the ratio of the merger radius to the cloud radius
scales as $N_H^{-9/50}$. Combining these two effects, equation
(\ref{HII-injected}) shows that the energy injected by a single HII
region of fixed properties into a cloud of fixed mass varies with
column density as $N_H^{-69/100}$, i.e. as the product of the
radiative-loss factor $N_H^{-3/5}$ and a half power of the
merger-radius ratio, $N_H^{-9/100}$. Thus, the efficiency with which
ionizing photons are converted into turbulent motions is a reasonably
strong function of the column density, and this picks out a particular
characteristic column density at which energy injection balances
loss. This turns out to be roughly the observed column density
$N_{H,22} \approx 1.5$. At this point we should mention one
significant cautionary note: we have not explored the effects of the
external environment of GMCs, and in particular the ambient pressure,
on the characteristic column density. We will consider this effect in Paper II.
It is easy to understand intuitively why the critical column density
above which clouds tend to collapse for masses $M_{\rm cl,6} \gtrsim 1$
should be between $N_{H,22} = 1.5$ and $4.5$. In the high
column density case, the velocity dispersion required to hold up a
cloud is close to the maximum that HII regions can provide. HII regions cannot
effectively drive turbulence to velocity dispersions larger than the
ionized gas sound speed $c_{\rm II}=9.7$ km s$^{-1}$, and as indicated in
Table \ref{cmlist}, in the high mass, high column density case the
level of turbulence required to maintain $\alpha_{\rm vir}=1.1$ is
$\sigma_{\rm cl-0}=9.2$ km s$^{-1}$. For the medium mass case it is
$\sigma_{\rm cl-0}=7.0$ km s$^{-1}$. Furthermore, if the cloud contracts somewhat
so that its velocity dispersion increases a bit beyond these values,
then HII regions will break up and merge with the turbulence almost
immediately and will therefore inject very little
energy. \citet{matzner02} previously discussed the inability of HII
regions to maintain virial balance in systems where the required
velocity dispersion exceeds $c_{\rm II}$, and our models find the same
result. We discuss the implications of this in more detail in
\S~\ref{gmcevol}. Note, however, that this conclusion is partially
dependent on our approximation that HII regions cease driving
turbulence once they cease dynamically expanding and merge, which may
not be entirely correct -- see \S~\ref{limitations}.
The long lifetime and high star formation efficiency
produced at low masses and high column density also suggests an
interesting interpretation: this combination of parameters may
correspond to the regime of formation of individual OB associations
and open clusters, which is characterized by relatively high column
densities \citep{mckee03} and star formation efficiencies $\gtrsim 10\%$
\citep{lada03}, and probably requires many crossing times to complete
\citep{tan06a}. Although the highest column density we have considered
is still relatively modest by the standards of some cluster-forming
clumps, which can reach $N_{H,22}$ of several tens, the general result that
low masses and high column densities can produce long-lived bound
objects is suggestive. This is particularly true in light of the
recent evidence that rich clusters require $\gtrsim 5$ crossing times
to assemble \citep{tan06a, huff06, krumholz06c}, and must therefore
be held up against collapse yet not be unbound by feedback for at
least this long.
\subsection{Varying Dissipation Rate}
\label{dissvarysec}
As we discuss in \S~\ref{turbdecay}, it is possible that the turbulent
dissipation rate may vary from our fiducial estimate, either being
substantially lower if simulations of turbulent decay have missed some
important physics, or substantially higher if the characteristic scale
of molecular cloud turbulence is smaller than the size of an entire
GMC. The latter possibility is probably ruled out by observations
showing that turbulence in molecular clouds is self-similar out to the
size of the entire GMC \citep[e.g.][]{ossenkopf02, heyer04}, but we
explore it nonetheless to better understand how the dissipation rate
affects GMC evolution. We re-run our fiducial case $\eta_{\rm v}=1.2$,
$\phi_{\rm in}=1.0$, corresponding to turbulence on the GMC scale and decay at
the rate measured by \citet{stone98}, with $\eta_{\rm v}=0.4$,
$\phi_{\rm in}=1.0$, corresponding to turbulence on the GMC scale decaying
somewhat more slowly than Stone et al.\ find, and with
$\eta_{\rm v}=1.2$, $\phi_{\rm in}=0.33$, corresponding to turbulence driven on
$1/3$ of the GMC size scale decaying at the Stone et al.\ rate. Since
the dissipation rate depends on the ratio $\eta_{\rm v}/\phi_{\rm in}$, we are
therefore considering energy dissipation rates that are a factor of 3
smaller and larger than the fiducial case. (From equations
\ref{eqofmotionfinal}, \ref{energyeqfinal}, and \ref{Lambda_diss},
this factor of 3 larger or smaller dissipation rate should correspond
roughly to a factor of 3 change in the acceleration of the cloud
radius, since $\mathcal{L}\propto \eta_{\rm v} \sigma^3$, $\sigma'\propto
\mathcal{L}/\sigma^2$, and $R''\propto \sigma^2$.)
Table \ref{dissvarytab} summarizes the statistical results of varying
the dissipation rate, and Figure \ref{dissvaryfig} shows the evolution
of column density versus time as a function of the dissipation rate
and cloud mass. The general result is that, with the exception of the
high mass, fast dissipation case, the results do not greatly depend on
the dissipation rate. As the turbulent dissipation rate increases, the
lifetime and mean virial parameter decrease and the mean column
density and the star formation efficiency increase, but none of these
changes by more than $\sim 10\%$. That changing the
dissipation rate by a factor of three in either direction induces a
much smaller change in the cloud column density and virial parameter
suggests that these values represent roughly an equilibrium
configuration, and that this equilibrium is quite robust. As the
dissipation rate changes by a factor of nine from the lowest to the
highest values we try, clouds contract a bit more, form stars
a bit more vigorously, and are destroyed by stellar feedback a bit
sooner, but only modest changes are sufficient to offset the change in
dissipation rate. Star formation self-regulates to produce GMCs with
$N_{\rm H,22}\approx 1$ and $\alpha_{\rm vir}\approx 1-2$, and the regulation is
stiff in the sense that even an order of magnitude change in the
dissipation rate does not alter it much. We discuss reasons for this
in \S~\ref{paramdependence}.
\begin{deluxetable*}{ccccccccccc}
\tablecaption{Outcomes with varying dissipation rate.\label{dissvarytab}}
\tablewidth{0pt}
\tablehead{
\colhead{$M_{\rm cl,6}$} &
\colhead{$\eta_{\rm v}$} &
\colhead{$\phi_{\rm in}$} &
\colhead{$t_{\rm life}$ ($t_{\rm cr-0}$, Myr)} &
\colhead{$\overline{\alpha}_{\rm vir}$} &
\colhead{$\overline{N}_{H,22}$} &
\colhead{SFE} &
\colhead{$M_{\rm phot}$} &
\colhead{$N_{\rm disrupt}$} &
\colhead{$N_{\rm dissoc}$} &
\colhead{$N_{\rm col}$}
}
\startdata
0.2 & 0.4 & 1.0 & 1.7 (10) & 2.8 & 1.2 & 0.046 & 0.53 & 49 & 51 & 0 \\
0.2 & 1.2 & 1.0 & 1.6 (9.9) & 2.2 & 1.4 & 0.053 & 0.59 & 63 & 37 & 0 \\
0.2 & 1.2 & 0.33 & 1.4 (8.4) & 1.9 & 1.8 & 0.067 & 0.57 & 44 & 56 & 0 \\
1.0 & 0.4 & 1.0 & 2.3 (21) & 2.6 & 1.1 & 0.046 & 0.64 & 90 & 10 & 0 \\
1.0 & 1.2 & 1.0 & 2.2 (20) & 2.1 & 1.3 & 0.054 & 0.70 & 92 & 8 & 0 \\
1.0 & 1.2 & 0.33 & 1.7 (16) & 1.7 & 2.1 & 0.074 & 0.74 & 61 & 35 & 4 \\
5.0 & 0.4 & 1.0 & 4.4 (60) & 2.3 & 1.0 & 0.070 & 0.75 & 98 & 2 & 0 \\
5.0 & 1.2 & 1.0 & 3.2 (43) & 1.5 & 1.5 & 0.082 & 0.80 & 99 & 1 & 0 \\
5.0 & 1.2 & 0.33 & - & - & - & - & - & 0 & 0 & 100
\enddata
\tablecomments{
Column definitions are identical to those in Table
\ref{fiducialoutcome}, and cases with $\eta_{\rm v}=1.2$, $\phi_{\rm in}=1.0$ are
identical to the values in that Table. As in Table \ref{nhvarytab}, we
compute average quantities excluding runs that result in collapse, and
we do not attempt to compute averages in cases where a majority of
runs produce collapse.
}
\end{deluxetable*}
\begin{figure*}
\epsscale{0.85}
\plotone{f6.eps}
\epsscale{1.0}
\caption{
\label{dissvaryfig}
Column density $N_H$ versus dimensionless time $\tau$ for a sample of
runs with varying dissipation rates and masses. The mass and
dissipation rate are indicated in each panel -- slow is $\eta_{\rm v}=0.4$,
$\phi_{\rm in}=1.0$, medium is $\eta_{\rm v}=1.2$, $\phi_{\rm in}=1.0$, and fast is
$\eta_{\rm v}=1.2$, $\phi_{\rm in}=0.33$. The dotted horizontal line indicates the
column density at which clouds dissociate. All runs begin with
$N_{H,22} = 1.5$.
}
\end{figure*}
There are two exceptions to this, where varying the dissipation
rate does produce a qualitative change in the outcome. For clouds of high
mass and high dissipation rate, we see a phenomenon similar to the
high mass, high column density case. For $M_{\rm cl,6}=5$, the
velocity dispersion starts at $\sigma_{\rm cl-0} = 7.0$ km s$^{-1}$, but
increases very rapidly as the cloud contracts due to decay of
turbulence. By the time the first large HII regions have expanded to
the point where they contain substantial kinetic energy, the cloud has
contracted to the point where the velocity dispersion required to hold
it up is comparable to or larger than $c_{\rm II}=9.7$ km s$^{-1}$. As a
result, HII regions cannot hold up the cloud, and it collapses.
The other exception occurs if we simultaneously increase the column
density to $N_{H,22}=4.5$ and lower the dissipation rate by setting
$\eta_{\rm v}=0.4$, $\phi_{\rm in}=1.0$. In this case, clouds have long lifetimes
and high star formation efficiencies regardless of their
mass, and their column densities gradually decrease with
time. Qualitatively, this is the same behavior we see for the case
$M_{\rm cl,6}=0.2$, $N_{H,22}=4.5$, and the fiducial dissipation
rate. Figure \ref{nhevol.dissvary.nhhigh} shows the evolution of
clouds with initial column densities of $N_{H,22}=4.5$ as a function
of mass and dissipation rate. As the plot shows, reducing the
dissipation rate changes the characteristic mass at which clouds enter
the long-lived, high star formation efficiency regime. The
boundary between this regime of very long stability and the regime of
stability for a few dynamical times appears to depend weakly on both
the mass and the dissipation rate.
\begin{figure*}
\epsscale{0.85}
\plotone{f7.eps}
\epsscale{1.0}
\caption{
\label{nhevol.dissvary.nhhigh}
Column density $N_H$ versus dimensionless time $\tau$ for a sample of
runs with varying dissipation rates and masses, all starting from high initial column densities. The mass and
dissipation rate are indicated in each panel -- slow is $\eta_{\rm v}=0.4$,
$\phi_{\rm in}=1.0$, medium is $\eta_{\rm v}=1.2$, $\phi_{\rm in}=1.0$, and fast is
$\eta_{\rm v}=1.2$, $\phi_{\rm in}=0.33$. The dotted horizontal line indicates the
column density at which clouds dissociate. All runs begin with
$N_{H,22} = 4.5$.
}
\end{figure*}
\subsection{Varying Energy Injection Efficiency}
\label{effvary}
As we discuss in \S~\ref{eninjection}, we are not certain of the
efficiency with which HII regions are able to drive motions in
turbulent media. We therefore re-run our fiducial models with
$\eta_{\rm E}=0.25$, thereby reducing the amount of energy injected by HII
regions by a factor of 4. This allows us to examine how strongly our
results depend on our assumed efficiency.
Table \ref{effvarytab} summarizes our results, which show how small a
difference the change in driving efficiency makes. With a factor of 4
less energy injection for an HII region of the same luminosity, the
mean lifetime of the lower mass clouds we model increases by about
10\% and that of the highest mass clouds decreases by the same
fraction. Mean virial parameters decline by $\sim 10\%$, and mean
column densities rise a similar amount. Both star formation
efficiencies and fractions of the mass photoevaporated rise, but again
by only $\sim 10\%$. The only quantity that changes significantly is
the fraction of clouds destroyed by dissociation, which not
surprisingly declines sharply. Thus, our results appear extremely
insensitive to changes in the assumed energy injection efficiency of
HII regions. Figure \ref{nhevol.effvary}, which shows the evolution of
column density versus time for a sample of our runs, confirms this
impression. The runs with lower energy injection efficiency go to
slightly higher column densities, but overall show no major difference
in their evolution from the fiducial case. Again, the star formation
process seems to be self-regulating, so that large changes in
efficiencies, either of energy injection or of dissipation, produce
only very small changes in the global statistics of cloud
evolution. We discuss reasons for this in \S~\ref{paramdependence}.
\begin{deluxetable*}{cccccccccc}
\tablecaption{Outcomes with varying HII driving efficiency.\label{effvarytab}}
\tablewidth{0pt}
\tablehead{
\colhead{$M_{\rm cl,6}$} &
\colhead{$\eta_{\rm E}$} &
\colhead{$t_{\rm life}$ ($t_{\rm cr-0}$, Myr)} &
\colhead{$\overline{\alpha}_{\rm vir}$} &
\colhead{$\overline{N}_{H,22}$} &
\colhead{SFE} &
\colhead{$M_{\rm phot}$} &
\colhead{$N_{\rm disrupt}$} &
\colhead{$N_{\rm dissoc}$} &
\colhead{$N_{\rm col}$}
}
\startdata
0.2 & 0.25 & 1.7 (11) & 1.6 & 1.7 & 0.065 & 0.67 & 93 & 7 & 0 \\
0.2 & 1.0 & 1.6 (9.9) & 2.2 & 1.4 & 0.053 & 0.59 & 63 & 37 & 0 \\
1.0 & 0.25 & 2.1 (20) & 1.6 & 1.6 & 0.062 & 0.75 & 100 & 0 & 0 \\
1.0 & 1.0 & 2.2 (20) & 2.1 & 1.3 & 0.054 & 0.70 & 92 & 8 & 0 \\
5.0 & 0.25 & 2.8 (39) & 1.3 & 1.8 & 0.087 & 0.82 & 100 & 0 & 0 \\
5.0 & 1.0 & 3.2 (43) & 1.5 & 1.5 & 0.082 & 0.80 & 99 & 1 & 0
\enddata
\tablecomments{
Column definitions are identical to those in Table
\ref{fiducialoutcome}, and cases with $\eta_{\rm E}=1.0$ are identical to
the values in that Table.
}
\end{deluxetable*}
\begin{figure*}
\epsscale{0.85}
\plotone{f8.eps}
\epsscale{1.0}
\caption{
\label{nhevol.effvary}
Column density $N_H$ versus dimensionless time $\tau$ for a sample of
runs with varying HII region driving efficiencies and masses. The mass and
driving efficiency are indicated in each panel. The dotted horizontal line indicates the
column density at which clouds dissociate. All runs begin with
$N_{H,22} = 1.5$.
}
\end{figure*}
\section{Discussion}
\label{discussion}
\subsection{The Relationship of Cloud Properties to Rates of Radiative
Loss and Energy Injection}
\label{paramdependence}
One of the most interesting results of our analysis is how little the
global statistics of cloud evolution, such as lifetimes and star
formation efficiencies, change in response to large changes in
parameters such as the rate at which turbulence decays via isothermal
shocks or the efficiency with which HII regions transfer their
kinetic energy into turbulent motions.
We can understand intuitively why large variations in either driving
efficiency or dissipation rate cause such minimal changes in the
statistics of cloud evolution by examining Figure \ref{tdepevol}. As
the Figure illustrates, clouds form most of their stars during their
contraction phases, when they reach high densities, so the rate of
star formation is effectively set by the cadence of expansion and
contraction cycles. This cadence is affected only slightly by the
dissipation rate and the energy injection efficiency, because both of
these properties are only loosely coupled to cloud evolution, with long
time delays.
For a GMC to contract from an expanded state, the
rate-limiting step is not the time required to dissipate the energy
injected by HII regions but the cloud free-fall time, the time the
cloud would take to collapse even if it contained no turbulence at
maximum expansion. Thus, varying the turbulent dissipation rate makes
little difference to the cloud contraction time.
Similarly, the time it takes for HII regions to drive apart a cloud is
controlled less by the amount of energy added per unit mass of stars
formed than by the time delay between when stars begin forming and
when the HII regions they create expand, break up, and drive turbulent
motions. Lowering the driving efficiency means that clouds expand somewhat
less far, but since the dissipation rate of the turbulence is a strong
function of the velocity dispersion, $\mathcal{L}\propto \sigma^3$, the extra
energy at early stages produced by higher efficiency is radiated away
very quickly, and produces only slightly increased cloud expansion. We
can put this argument more formally by noting that our energy equation
gives $\sigma'\propto \sigma^2$, neglecting changes in velocity dispersion
due to external pressure and cloud expansion, so the time required for
a velocity dispersion of $\sigma_0$ immediately after a large HII
region breaks up to decay to a value $\sigma_1$ obeys $t\propto
(\sigma_0-\sigma_1) / (\sigma_0 \sigma_1)$. If the final velocity
dispersion is much smaller than the initial one, i.e. $\sigma_1 \ll
\sigma_0$, this ratio is independent of $\sigma_0$. Thus, the decay
time for a large amount of initial energy injected is only slightly
larger than the decay time for a significantly smaller energy
injection, so the cloud expansion time and maximum radius depend only
weakly on the driving efficiency.
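Explicitly, writing the decay as $\sigma' = -k\sigma^2$, with $k$ a
constant set by the cloud properties, integration from $\sigma_0$ to
$\sigma_1$ gives
\begin{displaymath}
t = \frac{1}{k}\left(\frac{1}{\sigma_1}-\frac{1}{\sigma_0}\right)
= \frac{\sigma_0-\sigma_1}{k\,\sigma_0\,\sigma_1}
\rightarrow \frac{1}{k\,\sigma_1}
\qquad (\sigma_0\gg\sigma_1),
\end{displaymath}
so the decay time saturates at a value independent of the initial
velocity dispersion.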
This insensitivity to the dissipation rate and the energy injection
rate breaks down if the dissipation rate becomes so high or the
energy injection so inefficient that HII regions are not capable of
expanding clouds at all. In this case, clouds undergo runaway
collapse. It also breaks down if the dissipation rate becomes so low
or the efficiency so high that the first generation of HII regions
simply unbinds most clouds, in which case clouds form only one
generation of stars and are destroyed on a time scale set by the
expansion time for HII regions from that first generation. However, in
between these two extremes there is a broad range of parameter space
within which the cadence of expansion and contraction cycles is set
mostly by the cloud free-fall time and the time required for HII
regions to break up, with only a very weak dependence on the details
of the energy loss rate or the energy gain efficiency. The degree of
insensitivity is illustrated by Figures \ref{dissvaryfig} and
\ref{nhevol.effvary}, in which factor of several variations in
dissipation rate or driving efficiency change the expansion and
contraction period by only tens of percent.
\subsection{GMC Stability and Lifetime}
\label{gmcstability}
Observations place strong constraints on the stability of giant
molecular clouds, and are now beginning to constrain their lifetimes
as well. We must test theories that seek to explain the behavior of GMCs
against these observations. The first constraint is that giant
molecular clouds cannot be in a state of global collapse. If they
were, then they would convert order unity of their mass into stars in
a crossing time, and this would produce a star formation rate
two orders of magnitude larger than the observed one
\citep{zuckerman74}. Thus, a model of GMC evolution must explain why
GMCs are stable against global collapse and convert only a few percent of
their mass into stars per crossing time.
The second observational constraint is that GMCs, at least the most
massive ones, live for considerably more than a single crossing
time. While GMC lifetimes are quite difficult to estimate inside the
Milky Way, age spreads of the largest OB associations are $\sim 20$
Myr \citep{blaauw64, blitz80}, which is a probable lower limit on the lifetimes
of GMCs. More robust constraints are available in extragalactic
observations, where there is no distance ambiguity. Associations
between GMCs and star clusters imply a typical GMC lifetime of $\sim
20$ Myr in M33 \citep{engargiola03} and 27 Myr in the LMC
\citep{fukui99,blitz06a}. This is significantly greater than the GMC crossing
time of $\lesssim 10$ Myr. While the difference between one and a few
crossing times might seem insignificant, recall that the $e$-folding
time for the decay of turbulence is roughly a crossing time. A cloud
two crossing times old will have lost almost all of its turbulence if
feedback or some other source cannot replenish it, and will therefore
undergo global collapse. Thus, the observations not only require that
clouds be stable against collapse, they require that this stability be
maintained for several turbulent decay times.
The GMC model we present in this Paper, using the fiducial parameters
suggested by observation and previous theoretical work, provides very
good qualitative agreement with the observations. We show that
feedback from star formation can keep clouds supersonically turbulent
and virialized for $\sim 30$ Myr, until they are destroyed by
feedback. Combined with the results of \citet{krumholz05c} showing
that supersonic turbulent motions naturally produce a star formation
rate of a few percent per crossing time if star formation occurs in
virialized clouds, this model satisfies both observational
constraints. We discuss the question of how GMCs evolve during this
lifetime in more detail in \S~\ref{gmcevol}.
Alternative models run into difficulty with one of these two
observations, or with other data. One possible explanation why GMCs do
not undergo global collapse is that they are gravitationally unbound
transient fluctuations in the atomic ISM; only local subregions
constituting a small fraction of the cloud mass are bound and can
collpase \citep{maclow04, clark04, clark05, vazquezsemadeni06,
dobbs06a}. However, in this case it is hard to see how GMCs could
survive $20-30$ Myr. For example, \citet{clark05} find GMC lifetimes
of only about 10 Myr in their simulations of unbound clouds. There are
also other severe observational difficulties. There are no molecular
clouds with masses $\gtrsim 10^4$ $M_{\odot}$ that are clearly
gravitationally unbound, with virial parameters $\alpha_{\rm vir}\gg 1$, either
in the Milky Way \citep{heyer04} or in other galaxies
\citep{engargiola03, rosolowsky03, rosolowsky05a, blitz06a}. There are
many examples in the Milky Way of clouds of mass $<10^4$ $M_{\odot}$ with
virial parameters $\alpha_{\rm vir}\gg 1$ \citep{heyer04}, and there is no obvious
reason why massive clouds should not also display a range of $\alpha_{\rm vir}$
if they are generally unbound. Transient fluctuation models also have
problems explaining the existence of a GMC mass scale and the low
rotation rates of GMCs. These arguments are presented in detail in
\citet{krumholz05c}.
A second possibility is that GMCs are bound, at least marginally, but
that they are destroyed before the turbulence with which they are born
decays away. As a result, they never have a chance to begin global
collapse. However, the observed lifetimes of $\protect\raisebox{-0.5ex}{$\:\stackrel{\textstyle > 20$ Myr appear to
rule out the possibility that such rapid destruction is the
norm. Furthermore, to work, this model requires a mechanism of cloud
destruction that reliably operates in $\lesssim 1$ crossing time. As we
show here, HII regions are not capable of completely
photoevaporating or disrupting massive clouds over such short
periods. Supernovae and protostellar winds inject considerably less
energy into GMCs per unit mass of stars formed than do HII regions
\citep{tenoriotagle85, yorke89, matzner02}, so they are unlikely
candidates for rapid
cloud disruption. In the absence of a plausible mechanism for cloud
disruption in $\sim 10$ Myr, this model faces a major theoretical
problem as well as an observational one.
It is worth noting at this point that
arguments that GMC lifetimes in the Milky Way are strictly limited to 1
crossing time, e.g.\ \citet{hartmann01}, generally rely on observations
of clouds in the solar neighborhood, all of which are much smaller than
the larger, more typical GMCs we have considered in this paper. Since
we find that molecular clouds $\lesssim 10^5$ $M_{\odot}$ in mass survive
for only $\sim 1$ crossing time, our results are consistent with
the idea that solar neighborhood GMCs are short-lived. We simply
suggest that GMC lifetime is mass-dependent, and that the nearest
clouds, due to their atypical masses, also have atypical lifetimes.
\subsection{An Evolutionary Scenario for GMCs}
\label{gmcevol}
Our findings allow us to present a rough evolutionary scenario for
GMCs that is consistent both with the constraints of stability and GMC
age described in \S~\ref{gmcstability} and with the observation that
GMCs in the Milky Way and in other galaxies all lie in a narrow
range of column densities and virial parameters, centering around
$N_{H,22} \approx 1.5$ and $\alpha_{\rm vir} \approx
1-2$ \citep{larson81, solomon87, blitz06a}. This scenario is quite
similar to that suggested in \citet{mckee89} and is developed in
considerably greater detail in Paper II.
Our scenario is that GMCs are born at low column densities by
condensation out of the atomic ISM, primarily triggered by
self-gravitational instabilities in spiral shocks
\citep[e.g.][]{kim01, kim02, kim03}. This can only occur in regions of
a galactic disk that are at significantly higher densities and
pressures than is typical of the ISM. If star formation were able to
start in such clouds immediately, they would undergo rapid disruption,
converting $\lesssim 3\%$ of their mass into stars. However, in reality
such clouds are probably not star-forming over most of their volume,
so feedback is suppressed.
This may explain the observation that GMCs
pass through a phase in which they appear to lack embedded HII
regions. The duration of this phase is difficult to obtain from
observations -- \citet{fukui99}, \citet{engargiola03} and
\citet{blitz06a} report that $\sim 1/4$ of GMCs do not show
H$\alpha$ signatures associated with HII regions, but this is probably
an overestimate because it does not include highly obscured HII
regions. Although the H$\alpha$ sensitivities of the catalogs used by
these authors are sufficient to detect HII regions with H$\alpha$
luminosities comparable to the Orion Nebula, Orion is visible largely
because it is on the near side of a dark cloud; at the distance of the
LMC or M33, Orion would be invisible if it were oriented with the dark
cloud on the near side rather than the other way around. Spitzer/MIPS
observations have failed to find any GMCs that are dark at 24 $\mu$m,
so there must be some embedded star formation even in GMCs without
detected HII regions
(E. Rosolowsky, 2006, private communication). This result suggests
either that the HII region surveys are incomplete, or that clouds do
not begin making massive stars until well after they have begun
forming low mass stars, an effect we have not considered.
Regardless of how long this initial starless phase lasts (a question
we will address using our models in Paper II), in the
absence of feedback GMCs will contract under gravity and external
pressure as their turbulent support decays, until they approach
$N_{H,22}\approx 1$. At
that point, they become star-forming, and feedback stabilizes them
against further contraction and keeps them in virial equilibrium for several
crossing times. We observe GMCs at $N_{H,22}\approx 1$,
$\alpha_{\rm vir}\approx 1-2$ because this is where they spend the vast majority
of their lifetimes. This column density is selected because it is the
one for which energy injection by turbulence roughly balances energy
loss by isothermal shocks. In addition to turbulent energy injection,
the recoil momentum produced by mass being evaporated by HII regions
may also play a significant role in confining clouds.
This quasi-equilibrium endures for an amount of time that depends on
the cloud mass. For massive clouds, which contain most of the
molecular mass in the Milky Way and in all but one other galaxy in
which it is possible to estimate cloud mass distributions
\citep{fukui99,engargiola03,blitz06a}, it endures $2-3$ crossing
times, or $20-30$ Myr. During a cloud's lifetime, it converts $5-10\%$
of its mass into stars. These results are robust against changes in
the assumed dissipation rate for turbulence or efficiency of turbulent
driving by HII regions.
Less massive clouds are probably dynamically disrupted in $\sim 1$
crossing time, consistent with estimates of the lifetimes of small
clouds in the solar neighborhood \citep[e.g.][]{hartmann01},
although parts of them may endure in molecular phase and continue
forming stars for longer periods of time even after the original cloud
has been disrupted.
One interesting question is how our results might change if we
considered a galaxy quite different from the Milky
Way. Observations indicate that GMCs in other normal spiral galaxies
are generally quite similar to those in the Milky Way
\citep{blitz06a}, so to find truly different conditions we must
consider galaxies that are approaching the regime of starbursts. The sole
observational example of such a galaxy in which the clouds have been
observed is M64, a weak starburst in which the largest GMCs are
$\gtrsim 10^7$ $M_{\odot}$ in mass and have surface densities up to $N_H\approx
4\times 10^{22}$ cm$^{-2}$ \citep{rosolowsky05a}. Despite these
differences from Milky Way GMCs, observations indicate that these
clouds are still in approximate virial balance, and that like Milky
Way GMCs their star formation rate is only a few percent per free-fall
time. Thus, they are not in a state of global collapse.
Since we have found that HII
regions cannot sustain the turbulence and prevent collapse in such
clouds (though see \S~\ref{limitations}), we are left with two
possibilities. Either the dissipation rate in such clouds is lower
than our fiducial estimate, or there is an additional source of energy
input that supplements HII regions. One strong candidate for a source
of energy injection is driving of turbulence by external shocks
\citep{kornreich00}, either from supernovae, gravitational
instabilities driven by the potential of the stars, or cloud-cloud collisions \citep{tan00}. This mechanism
encounters considerable difficulty in the Milky Way because GMCs are
much denser than the gas around them in which shocks propagate. This
creates an ``impedance mismatch'' that makes it difficult to drive
turbulence into the GMCs \citep{nakamura06}. However, in a starburst
where the ISM is
entirely molecular, clouds are not much denser than their
surroundings. \citet{rosolowsky05a} find that in M64 the GMCs are
overdense only by factors $\sim 2$. This removes the impedance
mismatch problem, and makes it far easier for external shocks to drive
turbulence than it is in galaxies like the Milky Way. Determining whether
this mechanism can work in detail will require more analytic models and
simulations.
\subsection{Limitations of This Model}
\label{limitations}
The most obvious limitation of our model is the constraint that GMCs
evolve homologously. In mathematical terms, this constraint is
equivalent to dropping time derivatives of $a_{\rm I}$ in the virial
theorem. This should only change our results substantially if changes
in the moment of inertia of a GMC occurred primarily through changes in
its shape rather than overall expansion or contraction. While a cloud
in approximate equilibrium probably does experience most changes in
its inertia via changes in shape, a cloud that is undergoing global
collapse or global disruption would almost certainly experience most
changes in its inertia through those processes rather than through
changes in shape. As a result, our broad conclusion that neither overall
collapse nor disruption occurs for several crossing times seems likely
to be robust.
A more subtle limitation to our model is our simple boundary
conditions for GMCs. For simplicity we have assumed a fixed mass
budget and a fixed external pressure that puts GMCs into pressure
balance initially. In reality, GMCs may start forming stars while
still accumulating gas. The somewhat higher mean mass for GMCs with
associated HII regions than for GMCs without such associations seen in
the LMC \citep{fukui99,blitz06a} seems to point in this direction. In
addition, GMCs form in spiral shocks. As a result, the Lagrangian
mass elements from which a cloud forms are probably subjected to a
rising pressure as they enter the arm region, and then experience a
falling pressure as the cloud moves out of the arm. This may have
important effects on GMC evolution, and we explore the effects of more
complex boundary conditions in Paper II. A related issue in GMC
evolution, which we will also explore, is the effect of the apparent
existence of column density thresholds for star formation. If clouds
cannot begin forming stars until they accumulate a certain column of
molecular mass, then feedback in GMCs will not start up until some
time after the cloud first becomes molecular. This lag may affect GMC
evolution.
There are also several limitations associated with our treatment of HII regions. We assume that the mass of the cloud is large enough that its evolution is dominated by HII regions, not protostellar outflows.
We assume that the clouds are spherical, which means that all the energy injected by HII regions goes into internal motions, whereas in fact some of it goes into moving the entire cloud. Finally, we neglect possible energy injection by HII regions after merging. In our model, shells driven by small HII
regions or those in very turbulent clouds may drop to expansion
velocities below the cloud velocity dispersion well before the driving
stars burn out. At this point the HII region will cease to drive a
coherent expanding shell, and we approximate that the energy
injection ceases. However, if the driving stars continue to ionize a
part of the cloud, there is another potential driving
mechanism. Clumps of gas that enter the ionized region will be
photoevaporated on one side, and therefore will be rocketed away from
the ionizing stars. Even though the resulting thrust is not enough to
drive the clumps out of the HII region entirely, since the expansion
velocity is smaller than the cloud velocity dispersion, it can
still alter the trajectories of gas clumps and potentially inject
energy \citep{tan01}. This
effect could be important in the cases where we find that HII regions
cannot support GMCs because the velocity dispersion required to hold
up the cloud is comparable to or larger than the ionized gas sound
speed. Determining whether this effect is significant will require
either more detailed theoretical treatment or numerical
simulations. Despite these caveats, though, our results in
\S~\ref{dissvarysec}-\ref{effvary} and our analysis in
\S~\ref{paramdependence} show that our conclusions regarding the
lifetime and evolutionary history of GMCs are quite robust against
changes in the assumed efficiency with which HII regions drive
turbulent motions or the rate at which those turbulent motions decay.
\section{Conclusions}
\label{conclusions}
In this Paper we derive the basic equations governing the evolution of
the mass, radius, and velocity dispersion of giant molecular clouds
under the approximation of homologous motion. We construct simple
models for the rate at which turbulent motions decay due to radiation
from isothermal shocks and for the rate at which HII regions driven by
massive stars within the cloud drive turbulent motions, and we use these
to study the global evolution and energetic balance of
GMCs. This enables us to build GMC models in which we neither assume
energetic and virial balance, nor neglect the effects of
feedback. Thus, we are able for the first time to address critical
questions such as whether GMCs are in virial or energetic equilibrium,
how long they live, what determines their lifetimes and star formation
histories, and what determines properties such as column density and
virial parameter that appear to be roughly the same for all GMCs.
Our primary conclusion is that giant molecular clouds with observed
properties are indeed quasi-equilibrium objects. The rate at which
they lose energy via radiation from isothermal shocks is roughly
balanced by the rate at which HII regions from star formation inject
energy back into the cloud. This feedback keeps them virialized,
$\alpha_{\rm vir} = 1-2$, and keeps them at the observed column density
$N_H\approx 1.5\times 10^{22}$ cm$^{-2}$. Our results suggest that
this column density, which appears to be generic for GMCs both in the
Milky Way and in other galaxies where GMCs are observable, is in fact
the result of feedback, since the efficiency of star formation
feedback varies with column density, and the observed column density
corresponds to the one required for equilibrium. Whether the GMC
formation process, particularly the passage through an overdense and
overpressured spiral shock, also plays a role in selecting the GMC
column density is at this point an open question, one we will address
in Paper II.
The duration of the equilibrium state of a GMC depends on cloud
mass, but for clouds $\gtrsim 10^6$ $M_{\odot}$ it lasts for $2-3$
crossing times, or roughly 30 Myr. The dominant destruction mechanism
for GMCs is dynamical unbinding by the momentum delivered by an expanding HII
region, but for massive clouds this unbinding does not happen until
HII regions have photoionized away $\sim 90\%$ of the GMC
mass. Smaller clouds are dynamically disrupted by HII regions in $\sim
10$ Myr while $\sim 50\%$ of their mass is still in molecular form,
but it is unclear if this disruption actually destroys their molecular
material or simply breaks it into smaller, mutually unbound
clouds.
Overall, our models produce a picture of GMCs that is quite different
from that suggested by models in which GMCs are gravitationally
unbound or in which feedback from star formation is neglected. Models
that neglect feedback appear incapable of reproducing the observed
lifetimes and properties of the GMC population. We conclude that any
reasonable model of GMC evolution cannot neglect the effects of star
formation feedback, and that even a simple model of feedback such as
the one we use here produces good agreement with the observed
properties of GMCs.
\acknowledgements We thank E. C. Ostriker, J.~P. Ostriker,
E. Rosolowsky, J.~M. Stone, and J.~C. Tan for helpful discussions, and
the anonymous referees for useful comments. Support for this work was
provided by NASA through Hubble Fellowship grant \#HSF-HF-01186
awarded by the Space Telescope Science Institute, which is operated by
the Association of Universities for Research in Astronomy, Inc., for
NASA, under contract NAS 5-26555 (MRK), by NSERC and the Canada
Research Chairs Program (CDM), and by the NSF through grants
AST-0098365 and AST-0606831 (CFM).
\begin{appendix}
\section{Derivation of the Virial Theorem For an Evaporating
Cloud}
\label{virialderivation}
Here we derive the equation of motion for an evaporating homologous
cloud, which is a generalization of the Eulerian Virial Theorem
\citep[EVT,][]{mckee92}. Gas in the cloud is injected into the wind at
a rate $\dot{\rho}$, so the continuity equation for the cloud is
\begin{equation}
\label{conteqn}
\frac{\partial \rho}{\partial t} = -\nabla\cdot\rho\mathbf{v}+\dot{\rho},
\end{equation}
where $\rho$ and $\mathbf{v}$ are understood to be the density and
velocity of cloud and not wind material. The wind satisfies its own
continuity equation $\partial \rho_w/\partial t =
-\nabla\cdot\rho_w\mathbf{v}_w-\dot{\rho}$,
so that the summed equation $\partial (\rho+\rho_w)/\partial t =
-\nabla\cdot(\rho\mathbf{v}+\rho_w\mathbf{v}_w)$ is simply the
ordinary continuity equation. The equation of momentum conservation
for the cloud gas is
\begin{equation}
\label{peqn}
\frac{\partial}{\partial t}(\rho \mathbf{v}) =
-\nabla\cdot(\mathbf{\Pi}-\mathbf{T}_{M}) + \rho\mathbf{g}
+\dot{\rho}(\mathbf{v}+{\mathbf{v}_{\rm ej}'}),
\end{equation}
where $\mathbf{\Pi}$ is the gas pressure tensor, $\mathbf{T}_{M}
\equiv [\mathbf{B}\mathbf{B}-(1/2)B^2\mathbf{I}]/(4\pi)$ is the
Maxwell stress tensor, $\mathbf{B}$ is the magnetic field, and
$\mathbf{g}$ is the gravitational force per unit mass. Intuitively,
the $\dot{\rho}\mathbf{v}$ term represents the change in cloud momentum due
to mass being transferred into the wind, and $\dot{\rho}{\mathbf{v}_{\rm ej}'}$ is the
recoil momentum from the ejection. As with the continuity equation,
there is a corresponding momentum equation for the wind.
We now follow \citet{mckee92} in deriving the EVT for a cloud
with a wind. The moment of inertia of the cloud is
\begin{equation}
I_{\rm cl} \equiv \int_{V_{\rm vir}} \rho r^2 \,dV,
\end{equation}
where the fixed volume of integration $V_{\rm vir}$ is chosen to be sufficiently
large that it includes the entire cloud at all times. Taking the time
derivative
\begin{eqnarray}
\dot{I}_{\rm cl} &=&
- \int_{V_{\rm vir}} (\nabla\cdot \rho \mathbf{v}) r^2 \,dV + \int_{V_{\rm vir}} \dot{\rho} r^2
\,dV \nonumber\\
&=& - \int_{S_{\rm vir}} (\rho \mathbf{v} r^2)\cdot d\mathbf{S} + 2 \int_{V_{\rm vir}}
\rho \mathbf{v}\cdot \mathbf{r} \,dV + a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl}^2
\end{eqnarray}
where $S_{\rm vir}$ is the surface of the volume $V_{\rm vir}$, and
\begin{equation}
a_{\rm I} \equiv \frac{3-k_{\rho}}{5-k_{\rho}}.
\end{equation}
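(For a power-law density profile $\rho\propto r^{-k_{\rho}}$, $a_{\rm I}$ is
just the moment-of-inertia coefficient, since
\begin{displaymath}
\frac{I_{\rm cl}}{M_{\rm cl}R_{\rm cl}^2} =
\frac{\int_0^{R_{\rm cl}} r^{4-k_{\rho}}\,dr}
{R_{\rm cl}^2\int_0^{R_{\rm cl}} r^{2-k_{\rho}}\,dr} =
\frac{3-k_{\rho}}{5-k_{\rho}};
\end{displaymath}
for example, a uniform cloud, $k_{\rho}=0$, has $a_{\rm I}=3/5$.)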
Differentiating again,
\begin{eqnarray} \label{IddotE}
\frac{1}{2} \ddot{I}_{\rm cl} &=& - \frac{1}{2}
\int_{S_{\rm vir}} r^2 \frac{\partial}{\partial t}(\rho \mathbf{v}) \cdot
d\mathbf{S} +
\int_{V_{\rm vir}} \frac{\partial}{\partial t}(\rho \mathbf{v}) \cdot
\mathbf{r} \,dV
\nonumber\\
& &
{} + \frac{1}{2} a_{\rm I} \ddot{M}_{\rm cl} R_{\rm cl}^2 + a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl} \dot{R}_{\rm cl}.
\end{eqnarray}
The time derivative can be taken out of the integral in the first term
because $S_{\rm vir}$ is fixed, and for the second term we can substitute for
the integrand using the momentum equation (\ref{peqn}). This gives
\begin{eqnarray}
\label{IddotE-2}
\frac{1}{2} \ddot{I}_{\rm cl}
& = &
-\frac{1}{2} \frac{d}{dt} \int_{S_{\rm vir}} r^2 \rho \mathbf{v} \cdot
d\mathbf{S}
- \int_{V_{\rm vir}}
\left\{
\mathbf{r}\cdot
\left[\nabla\cdot(\mathbf{\Pi}-\mathbf{T}_M)\right]
-\rho\mathbf{v}\cdot\mathbf{g}\right\}\,
dV
\nonumber \\
&& {} +
\int_{V_{\rm vir}} \dot{\rho} \mathbf{r} \cdot\mathbf{v} \,dV
+ \int_{V_{\rm vir}} \dot{\rho} r v_{\rm ej}' \,dV
+ \frac{1}{2} a_{\rm I} \ddot{M}_{\rm cl} R_{\rm cl}^2 + a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl} \dot{R}_{\rm cl} .
\end{eqnarray}
To evaluate the term $\int_{V_{\rm vir}} \dot{\rho} \mathbf{r} \cdot\mathbf{v}
\,dV$, we make use of our homology assumption, which allows us
to write the velocity at any point in the cloud as
\begin{equation}
\label{vdecomp}
\mathbf{v} = \mathbf{r} \frac{\dot{R}_{\rm cl}}{R_{\rm cl}} + \mathbf{v}_{\rm turb},
\end{equation}
where the first term represents homologous expansion or contraction of
the cloud, and the second is a turbulent velocity that carries no net
radial flux of matter. Thus, $\int_{V_{\rm vir}} \rho \mathbf{r}
\cdot\mathbf{v}_{\rm turb} \,dV = 0$, and $\int_{V_{\rm vir}} \dot{\rho} \mathbf{r}
\cdot\mathbf{v} \,dV = \int_{V_{\rm vir}} \dot{\rho} r^2 \dot{R}_{\rm cl}/R_{\rm cl} \,dV
= a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl} \dot{R}_{\rm cl}$, where the last equality
uses $\int_{V_{\rm vir}} \dot{\rho} r^2 \,dV = a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl}^2$.
This allows us to write the final EVT,
\begin{eqnarray}
\frac12 \ddot{I}_{\rm cl} & = &
2(\mathcal{T}-\mathcal{T}_0) +
\mathcal{B}+\mathcal{W}-\frac12
\frac{d}{dt}\int_{S_{\rm vir}} (\rho\mathbf{v} r^2)\cdot d\mathbf{S}
\nonumber\\
& & {} +
2 a_{\rm I} \dot{M}_{\rm cl} R_{\rm cl} \dot{R}_{\rm cl} + \frac{3-k_{\rho}}{4-k_{\rho}} \dot{M}_{\rm cl} R_{\rm cl} v_{\rm ej}' +
\frac12 a_{\rm I} \ddot{M}_{\rm cl} R_{\rm cl}^2.
\label{EVT}
\end{eqnarray}
In this equation, the kinetic term is
\begin{equation}
\mathcal{T}=\frac{1}{2} \int_{V_{\rm vir}} (3 P + \rho v^2) \,dV
\end{equation}
for gas thermal pressure $P$, the surface term is
\begin{equation}
\mathcal{T}_0 = \frac{1}{2} \int_{S_{\rm vir}} \mathbf{r} \cdot
\mathbf{\Pi} \cdot d\mathbf{S},
\end{equation}
the gravitational term is
\begin{equation}
\mathcal{W} = \int_{V_{\rm vir}} \rho \mathbf{r}\cdot\mathbf{g} \,dV,
\end{equation}
and the magnetic term is
\begin{equation}
\label{calbdef}
\mathcal{B}=\frac{1}{8\pi} \int_{V_{\rm vir}} (B^2-B_0^2) \, dV,
\end{equation}
where $B_0$ is the background magnetic field far from the cloud, and
we require that $V_{\rm vir}$ be large enough to include the full volume over
which the background field is distorted by the presence of the cloud.
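As a consistency check, for a cloud with no wind
($\dot{M}_{\rm cl}=\ddot{M}_{\rm cl}=0$) all of the mass-loss terms in
equation (\ref{EVT}) vanish, and we recover the standard EVT of
\citet{mckee92},
\begin{displaymath}
\frac12 \ddot{I}_{\rm cl} = 2(\mathcal{T}-\mathcal{T}_0) +
\mathcal{B}+\mathcal{W}-\frac12
\frac{d}{dt}\int_{S_{\rm vir}} (\rho\mathbf{v} r^2)\cdot d\mathbf{S}.
\end{displaymath}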
\section{Derivation of Energy Conservation For an Evaporating Cloud}
\label{energyderivation}
Here we derive the equation of energy conservation for an evaporating
homologous cloud. First, we rewrite the momentum equation (\ref{peqn})
in a slightly different form,
\begin{equation}
\label{peqn2}
\rho\frac{d\mathbf{v}}{dt}=-\nabla P - \rho\nabla \phi +
\frac{\mathbf{J}\times\mathbf{B}}{c} +\dot{\rho}{\mathbf{v}_{\rm ej}'},
\end{equation}
where the terms on the right hand side are the pressure,
gravitational, and Lorentz forces, $\phi$ is the gravitational
potential, $\mathbf{J}$ is the current density, and we have replaced
the pressure tensor $\mathbf{\Pi}$ with the isotropic pressure
$P\mathbf{I}$ under the assumption that viscosity is
negligible. Taking the dot product of (\ref{peqn2}) with $\mathbf{v}$
yields
\begin{equation}
\label{eneqn1}
\frac{\partial}{\partial t}
\left(\frac{1}{2} \rho v^2\right)
+\nabla\cdot \left(\frac{1}{2} \rho \mathbf{v} v^2\right)
= -\mathbf{v}\cdot\nabla P - \rho\mathbf{v}\cdot\nabla\phi
+\frac{\mathbf{v}}{c} \cdot (\mathbf{J}\times\mathbf{B})
+ \dot{\rho} \left(\frac{1}{2} v^2 + \mathbf{v}\cdot{\mathbf{v}_{\rm ej}'}\right).
\end{equation}
Using the continuity equation, we can rewrite the gravitational work
term as
\begin{eqnarray}
-\rho \mathbf{v} \cdot \nabla\phi & = &
-\nabla\cdot\rho\mathbf{v}\phi + \phi\nabla\cdot\rho\mathbf{v} \\
& = &
-\nabla\cdot\rho\mathbf{v}\phi - \frac{\partial}{\partial t}(\rho
\phi) + \rho\frac{\partial\phi}{\partial t} + \dot{\rho} \phi.
\end{eqnarray}
Similarly, we can rewrite the magnetic work term using Poynting's
Theorem, which in the MHD approximation is
\begin{equation}
\frac{1}{8 \pi} \frac{\partial}{\partial t} (B^2-B_0^2)
+ \nabla\cdot\mathbf{S}_p = -\mathbf{J}\cdot\mathbf{E}
= -\frac{\mathbf{v}}{c}\cdot (\mathbf{J}\times\mathbf{B}),
\end{equation}
where $\mathbf{S}_p$ is the Poynting flux, and we have used the fact that $B_0$
is constant to include it in the time derivative for future
convenience. Substituting into (\ref{eneqn1}) gives the equation for
the time-evolution of the non-thermal energy,
\begin{equation}
\label{eneqn2}
\frac{\partial}{\partial t}
\left(\frac{1}{2}\rho v^2 + \rho\phi + \frac{B^2-B_0^2}{8\pi}\right)
+\nabla\cdot \rho\mathbf{v} \left(\frac{1}{2}v^2+\phi\right)
+\nabla\cdot\mathbf{S}_p =
-\mathbf{v}\cdot\nabla P + \rho\frac{\partial\phi}{\partial t}
+\dot{\rho} \left(\frac{1}{2}v^2 + \phi+\mathbf{v}\cdot{\mathbf{v}_{\rm ej}'}\right).
\end{equation}
To include the internal energy, we write down the first law of
thermodynamics,
\begin{equation}
\rho \frac{de}{dt}+P\nabla\cdot\mathbf{v} = \Gamma - \Lambda,
\end{equation}
where $e$ is the internal energy per unit mass and $\Gamma$ and
$\Lambda$ are the rates of radiative energy gain and loss per unit
volume. Combining this with the continuity equation gives the
evolution equation for the internal energy,
\begin{equation}
\label{eneqn3}
\frac{\partial}{\partial t}\left(\rho e\right) +
\nabla\cdot\rho\mathbf{v} \left(e+\frac{P}{\rho}\right) =
\dot{\rho} e + \mathbf{v}\cdot\nabla P + \Gamma - \Lambda.
\end{equation}
Adding together the evolution equations (\ref{eneqn2}) and
(\ref{eneqn3}) for the non-thermal and thermal energies gives the
total energy equation
\begin{eqnarray}
\lefteqn{
\frac{\partial}{\partial t}
\left[
\rho \left(\frac{1}{2}v^2+e+\phi\right)
+\frac{B^2-B_0^2}{8\pi}
\right]
+\nabla \cdot \rho\mathbf{v}
\left(\frac{1}{2}v^2 + e + \frac{P}{\rho} + \phi\right)
+ \nabla \cdot \mathbf{S}_p }
\nonumber \\
& \qquad \qquad \qquad \qquad = &
\rho \frac{\partial\phi}{\partial t}
+ \dot{\rho} \left(\frac{1}{2}v^2+e+\phi+\mathbf{v}\cdot{\mathbf{v}_{\rm ej}'}\right)
+ \Gamma - \Lambda.
\label{eneqn4}
\end{eqnarray}
To derive the global form of the energy equation, we integrate over
the virial volume $V_{\rm vir}$, which gives
\begin{eqnarray}
\lefteqn{
\frac{d\mathcal{E}}{dt} + \int_{S_{\rm vir}}
\left[\rho\left(\frac{1}{2} v^2 + e + \phi\right) + P\right]
\mathbf{v}\cdot d\mathbf{S} + \int_{S_{\rm vir}} \mathbf{S}_p\cdot d\mathbf{S}
}
\nonumber \\
& = &
\frac{1}{2} \int_{V_{\rm vir}} \rho \frac{\partial\phi}{\partial t} dV +
\frac{\dot{M}_{\rm cl}}{M_{\rm cl}} (\mathcal{E} - \mathcal{B})
+ \left(\frac{3-k_{\rho}}{4-k_{\rho}}\right)\dot{M}_{\rm cl}
\dot{R}_{\rm cl} v'_{\rm ej} + \mathcal{G}_{\rm cl} - \mathcal{L}_{\rm cl},
\label{eneqn5}
\end{eqnarray}
where
\begin{equation}
\mathcal{E} = \int_{V_{\rm cl}}
\left[\rho\left(\frac{1}{2} v^2 + e + \frac{1}{2}\phi\right) +
\frac{B^2-B_0^2}{8\pi}
\right] dV
\end{equation}
is the total energy in the cloud, $\mathcal{G}_{\rm cl}$ and $\mathcal{L}_{\rm
cl}$ are the total rates of radiative energy gain and loss integrated
over the cloud, and we have
used our assumption of homologous motion to evaluate the terms
involving $\dot{\rho}$ and ${\mathbf{v}_{\rm ej}'}$. To evaluate the first integral on the
left-hand side, we note that all the terms proportional to density vanish
at the cloud surface because the ambient material has zero density,
but that constant pressure and zero density correspond to an
incompressible fluid, so the velocity of the ambient medium across the
virial surface is related to the velocity of the cloud surface by
$v=\dot{R}_{\rm cl} (R_{\rm cl}^2/R_{\rm vir}^2)$. This implies that
\begin{equation}
\label{presintegral}
\int_{S_{\rm vir}}
\left[\rho\left(\frac{1}{2} v^2 + e + \phi\right) + P\right]
\mathbf{v}\cdot d\mathbf{S} = 4\pi P_{\rm amb} R_{\rm cl}^2 \dot{R}_{\rm cl}.
\end{equation}
To evaluate the integral on the right-hand side, recall that gas in
the wind contributes negligibly to the gravitational potential. Since
the potential arises solely from cloud material, we can write
\citep{shu92}
\begin{equation}
\label{gravintegral}
\frac{1}{2} \int_{V_{\rm vir}} \rho \frac{\partial\phi}{\partial t} dV
= \frac{1}{2} \int_{V_{\rm vir}} \frac{\partial\rho}{\partial t} \phi \, dV
= \left(\frac{\dot{M}_{\rm cl}}{M_{\rm cl}}\right) \frac{1}{2} \int_{V_{\rm vir}} \rho \phi \, dV
= \frac{\dot{M}_{\rm cl}}{M_{\rm cl}} \mathcal{W}.
\end{equation}
Finally, we must evaluate $\int_{S_{\rm vir}} \mathbf{S}_p\cdot d\mathbf{S}$, which
represents the flux of magnetic energy across the virial surface as it
is carried off by the wind. This is uncertain, because it depends on
the process by which the mass is removed. As in
\S~\ref{equationofmotion}, we
write the magnetic energy as a sum of turbulent and
non-turbulent contributions, $1/(8\pi) \int_{V_{\rm vir}} (B^2-B_0^2)\,dV =
\mathcal{B} =
\mathcal{B}_{\rm non-turb} + \mathcal{B}_{\rm turb}$. For the turbulent part, we assume
that the wind does not change the nature of the MHD turbulence within
the cloud, so the turbulent magnetic energy remains proportional to
the turbulent kinetic energy. Since the loss of turbulent energy
accompanying mass loss is $\int_{V_{\rm vir}} 3 \dot{\rho} \sigma^2/2\,
dV = (\dot{M}_{\rm cl}/M_{\rm cl}) \mathcal{T}_{\rm turb}$, this requires that the flux of
turbulent magnetic energy be $\partial{\mathcal{B}}_{\rm turb}/\partial t =
(\dot{M}_{\rm cl}/M_{\rm cl}) \mathcal{B}_{\rm turb}$. For the non-turbulent part, for simplicity and
based on the observation that GMCs have roughly constant mass-to-flux
ratios \citep{crutcher99}, we assume that the mass-to-flux ratio
remains constant as the cloud loses mass. Since $\mathcal{B}_{\rm non-turb}
\propto \Phi^2$, this implies $\partial{\mathcal{B}}_{\rm non-turb}/\partial t = 2
(\dot{\Phi}/\Phi) \mathcal{B}_{\rm non-turb} = 2 (\dot{M}_{\rm cl}/M_{\rm cl}) \mathcal{B}_{\rm non-turb}$. Thus,
we can write
\begin{equation}
\label{magintegral}
\int_{S_{\rm vir}} \mathbf{S}_p\cdot d\mathbf{S} = -\frac{\dot{M}_{\rm cl}}{M_{\rm cl}} (\mathcal{B}_{\rm turb} +
2 \mathcal{B}_{\rm non-turb}) = -\frac{\dot{M}_{\rm cl}}{M_{\rm cl}} (\mathcal{B} - \eta_{\rm B}^2 \mathcal{W}).
\end{equation}
Substituting (\ref{presintegral}), (\ref{gravintegral}), and
(\ref{magintegral}) into (\ref{eneqn5}), we arrive at the final energy
equation for the cloud:
\begin{equation}
\label{energyeqn1}
\frac{d\mathcal{E}}{dt} = \frac{\dot{M}_{\rm cl}}{M_{\rm cl}} \left[\mathcal{E} +
(1-\eta_{\rm B}^2) \mathcal{W}\right] - 4\pi P_{\rm amb} R_{\rm cl}^2\dot{R}_{\rm cl} +
\left(\frac{3-k_{\rho}}{4-k_{\rho}}\right) \dot{M}_{\rm cl} \dot{R}_{\rm cl} v'_{\rm ej} +
\mathcal{G}_{\rm cl} - \mathcal{L}_{\rm cl}.
\end{equation}
\end{appendix}
\input{ms.bbl}
\end{document}
\section{\label{intro}Introduction}
The signature of a normal diffusive process is that the mean-squared displacement grows linearly in time. That is, if $X(t)$ denotes the one-dimensional position of the diffusive particle at time $t\ge0$, then
\begin{align}\label{normal}
\mathbb{E}\big[\big(X(t)-X(0)\big)^{2}\big]
\propto t,
\end{align}
where $\mathbb{E}$ denotes expected value. However, the mean-squared displacement in complex systems often deviates from the linear behavior in \eqref{normal} and instead grows as a power law,
\begin{align}\label{alpha}
\mathbb{E}\big[\big(X(t)-X(0)\big)^{2}\big]
\propto t^{\alpha},\quad \alpha>0,
\end{align}
in a phenomenon called anomalous diffusion if $\alpha\neq1$. Subdiffusion is defined by \eqref{alpha} with $\alpha<1$ and has been observed in various systems, including charge transport in amorphous semiconductors \cite{scher1975}, subsurface hydrology \cite{berkowitz2002}, and the transport of a bead through a polymer network \cite{amblard1996}. In addition, subdiffusive motion is ubiquitous in cell biology, where it is believed to result from macromolecular crowding \cite{golding2006, hofling2013}. Superdiffusion is defined by \eqref{alpha} with $\alpha>1$ and has been observed in animal movement \cite{klafter2005} and in active transport inside cells \cite{caspi2000}.
Three common mathematical models for subdiffusion are the continuous-time random walk model, fractional Brownian motion, and random walks on fractal and disordered systems \cite{hofling2013}. In a continuum limit, the { standard} continuous-time random walk model {with independent jump length and waiting time distributions} yields the fractional diffusion equation \cite{metzler2000},
\begin{align}\label{fde}
\frac{\partial}{\partial t}c(x,t)
=\mathcal{D} K_{\alpha}\frac{\partial^{2}}{\partial x^{2}} c(x,t),\quad x\in\mathbb{R},\,t>0,
\end{align}
for the subdiffusive chemical concentration $c(x,t)$ at position $x$ at time $t$. In \eqref{fde}, the parameter $K_{\alpha}>0$ is the generalized diffusivity (with dimensions $(\text{length})^{2}(\text{time})^{-\alpha}$) and $\mathcal{D}$ is the Riemann-Liouville fractional derivative \cite{samko1993}, defined by
\begin{align}\label{rl}
\mathcal{D} f(t)
=\frac{1}{\Gamma(\alpha)}\frac{\textup{d}}{\textup{d} t}\int_{0}^{t}\frac{f(s)}{(t-s)^{1-\alpha}}\,\textup{d} s,
\end{align}
where $\Gamma(\alpha)$ is the Gamma function. Note that $\mathcal{D}$ is sometimes denoted by $\frac{\partial^{1-\alpha}}{\partial t^{1-\alpha}}$. {As a technical aside, the operator appearing in the derivation of \eqref{fde} is actually the Gr{\"u}nwald-Letnikov derivative, but this operator is equivalent to \eqref{rl} for sufficiently smooth functions \cite{podlubny1998}.}
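As an illustrative numerical aside (not needed for anything that follows), the Gr{\"u}nwald-Letnikov form lends itself directly to computation. The Python sketch below, in which the function name, test function, and grid size are our own illustrative choices, approximates a fractional derivative of order $\beta\in(0,1)$ on a uniform grid and checks it against the exact result $t^{1-\beta}/\Gamma(2-\beta)$ for $f(t)=t$; for the operator $\mathcal{D}$ in \eqref{rl} one takes $\beta=1-\alpha$.
\begin{verbatim}
import numpy as np
from math import gamma

def gl_deriv(f, t_max, beta, n):
    # Grunwald-Letnikov approximation to the order-beta
    # Riemann-Liouville derivative of f on a uniform grid
    h = t_max / n
    t = np.linspace(0.0, t_max, n + 1)
    w = np.empty(n + 1)            # w_k = (-1)^k binom(beta, k)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (beta + 1.0) / k)
    fv = f(t)
    d = np.array([w[:j + 1] @ fv[j::-1] for j in range(n + 1)])
    return t, d / h**beta

alpha = 0.5
beta = 1.0 - alpha
t, num = gl_deriv(lambda s: s, 1.0, beta, 4000)
print(np.max(np.abs(num - t**(1 - beta) / gamma(2 - beta))))
\end{verbatim}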
Generalizing \eqref{fde}, fractional Fokker-Planck equations model the spatiotemporal evolution of subdiffusive molecules under the influence of an external force \cite{metzler1999}. A fractional Fokker-Planck equation takes the form
\begin{align}\label{ffpec}
\frac{\partial}{\partial t}c(x,t)
=\mathcal{D} \mathcal{L}_{x} c(x,t),\quad x\inV\subseteq\mathbb{R}^{d},\,t>0,
\end{align}
where $V\subseteq\mathbb{R}^{d}$ is a $d$-dimensional spatial domain and $\mathcal{L}_{x}$ is the forward Fokker-Planck operator,
\begin{align}\label{fpo}
\begin{split}
\mathcal{L}_{x} f(x)
&:=-\sum_{i=1}^{d}\frac{\partial}{\partial x_{i}}\big[\mu_{i}(x)f(x)\big]\\
&+\frac{1}{2}\sum_{i=1}^{d}\sum_{k=1}^{d}\frac{\partial^{2}}{\partial x_{i}\partial x_{k}}
\Big[\big({\sigma}(x){\sigma}(x)^{\top}\big)_{i,k}f(x)\Big],
\end{split}
\end{align}
where $\mu(x):\overline{V}\mapsto\mathbb{R}^{d}$ is the external force (drift) vector and $
\sigma(x):\overline{V}\mapsto\mathbb{R}^{d\times m}$ describes the space-dependence and anisotropy in the diffusivity. Of course, if $\alpha=1$, then $\mathcal{D}$ is the identity operator and \eqref{ffpec} reduces to the familiar equation of integer order,
\begin{align}\label{fpec}
\frac{\partial}{\partial t}c(x,t)
=\mathcal{L}_{x} c(x,t).
\end{align}
A fundamental and now longstanding question is how to model reaction kinetics for subdiffusive molecules (see the review \cite{nepomnyashchy2016} and \cite{hornung2005, gafiychuk2008, boon2012, angstmann2013, kosztolowicz2013, hansen2015, straka2015, dossantos2019, zhang2019, li2019}). In the classical case of normal diffusion, reaction terms can simply be added to the evolution equations describing spatial movement. More precisely, consider the vector of $n$ chemical concentrations,
\begin{align}\label{c}
\mathbf{c}(x,t)=(\mathbf{c}_{i}(x,t))_{i=1}^{n}\in\mathbb{R}^{n},
\end{align}
where $\mathbf{c}_{i}(x,t)$ denotes the concentration of species $i$ at position $x\in\mathbb{R}^{d}$ at time $t\ge0$. In the absence of spatial movement, suppose the concentrations obey the mean-field reaction equations
\begin{align}\label{reactions}
\frac{\partial}{\partial t}\mathbf{c}
=\mathbf{f}(\mathbf{c}),
\end{align}
where $\mathbf{f}:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}$. In the case of normal diffusion where each chemical species moves by \eqref{fpec}, one incorporates the reaction kinetics \eqref{reactions} into spatiotemporal evolution equations by the simple addition of $\mathbf{f}(\mathbf{c})$ to the righthand side,
\begin{align}\label{simpleform}
\frac{\partial}{\partial t}\mathbf{c}
=\mathcal{L}_{x}\mathbf{c}+\mathbf{f}(\mathbf{c}).
\end{align}
However, this simple procedure fails for subdiffusion. Indeed, it was shown that the following attempt to combine subdiffusion with degradation at rate $\lambda>0$,
\begin{align*}
\frac{\partial}{\partial t}c
=\mathcal{D} K_{\alpha}\frac{\partial^{2}}{\partial x^{2}} c-\lambda c,\quad x\in\mathbb{R},\,t>0,
\end{align*}
leads to an unphysical negative concentration, $c(x,t)<0$ \cite{henry2006}.
In a series of important works \cite{sokolov2006, henry2006, schmidt2007, langlands2008}, evolution equations were derived for certain subdiffusive processes with linear reactions. In \cite{sokolov2006}, the equations were derived for pure subdiffusion in $\mathbb{R}$ with an irreversible reaction between $n=2$ chemical species. Equivalent equations were then derived in \cite{henry2006} and \cite{schmidt2007} using different formalisms. These results were generalized in \cite{langlands2008} to allow reversible reactions between any finite number $n$ of chemical species. In particular, in the case that (i) the reactions in \eqref{reactions} are linear,
\begin{align*}
\mathbf{f}(\mathbf{c})
=R\mathbf{c},
\end{align*}
where $R\in\mathbb{R}^{n\times n}$ is a constant reaction rate matrix, and (ii) each chemical species moves by the one-dimensional fractional diffusion equation in \eqref{fde}, it was found that \cite{langlands2008}
\begin{align}\label{langlands}
\frac{\partial}{\partial t}\mathbf{c}
=K_{\alpha}e^{{{R}} t}\mathcal{D}\Big[e^{-{{R}} t}\frac{\partial^{2}}{\partial x^{2}}\mathbf{c}\Big]
+{{R}} \mathbf{c},\quad x\in\mathbb{R},
\end{align}
where $e^{Rt}$ is the matrix exponential. In contrast to the simple form in \eqref{simpleform} with decoupled reaction and movement terms, notice that the reactions modify the movement term in \eqref{langlands}. Interestingly, for the case of L\'{e}vy flights with an irreversible reaction between $n=2$ species, it was shown in \cite{schmidt2007} that the reaction-superdiffusion equations have the usual decoupling of reaction and movement terms. The results in \cite{sokolov2006, schmidt2007, henry2006, langlands2008} were derived using continuous-time random walks and Fourier-Laplace transform theory.
In this paper, we first give a short and elementary proof of \eqref{langlands}. We then show how this argument gives the evolution equations for more general cases, including subdiffusion following any fractional Fokker-Planck equation in an arbitrary $d$-dimensional spatial domain with time-dependent reactions between infinitely many discrete states. This analysis reveals that the evolution equations follow from (i) the probabilistic independence of the stochastic spatial and discrete processes describing a single particle and (ii) the linearity of the integro-differential operators describing spatial movement. In addition, under mild assumptions on initial and boundary conditions, the evolution equations imply that the spatial and discrete processes are independent. That is, under some mild conditions, the evolution equations hold if and only if the spatial and discrete processes are independent.
The rest of the paper is organized as follows. In section~\ref{simplifiedsetting}, we give a simple argument that yields \eqref{langlands}. In section~\ref{moregeneralsetting}, we generalize this argument to yield the evolution equations describing more complicated spatial and discrete processes. In section~\ref{examples}, we apply this more general result to some examples. We conclude by discussing our results and highlighting future directions.
\section{Simplified setting\label{simplifiedsetting}}
We first consider a setup that is equivalent to the main problem considered in \cite{langlands2008}. Assume $\{J(t)\}_{t\ge0}$ is a continuous-time Markov jump process on the finite state space $\{1,\dots,n\}$. Suppose the matrix $R\in\mathbb{R}^{n\times n}$ contains the transition rates, meaning the distribution of $J(t)$ satisfies the linear ordinary differential equation,
\begin{align}\label{ode}
\frac{\textup{d}}{\textup{d} t}\mathbf{r}
={{R}} \mathbf{r},
\end{align}
where $\mathbf{r}(t)$ is the vector of probabilities,
\begin{align}\label{vp}
\mathbf{r}(t)
=(\mathbf{r}_{i}(t))_{i=1}^{n}
:=\big(\mathbb{P}(J(t)=i)\big)_{i=1}^{n}
\in\mathbb{R}^{n}.
\end{align}
Of course, the solution to \eqref{ode} is the matrix exponential,
\begin{align}\label{me}
\mathbf{r}(t)
=e^{{{R}} t}\mathbf{r}(0),\quad t\ge0.
\end{align}
In the language of Markov chain theory, $R$ is the forward operator and the transpose $R^{\top}$ is the backward operator (i.e.\ $R^{\top}$ is the infinitesimal generator \cite{norris1998}).
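For a concrete instance of \eqref{ode}--\eqref{me}, the distribution can be propagated with an off-the-shelf matrix exponential. In the minimal sketch below, the two-state scheme and the rates are arbitrary illustrative choices of ours:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 0.7, 0.3           # illustrative rates 1 -> 2, 2 -> 1
R = np.array([[-lam1,  lam2],
              [ lam1, -lam2]])  # forward operator: columns sum to 0
r0 = np.array([1.0, 0.0])       # J(0) = 1 with probability one

for t in (0.0, 1.0, 5.0):
    r = expm(R * t) @ r0        # r(t) = exp(R t) r(0)
    print(t, r, r.sum())        # total probability remains 1
\end{verbatim}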
Assume that $\{X(t)\}_{t\ge0}$ is a one-dimensional subdiffusive process taking values in $\mathbb{R}$. Let $q(x,t)$ denote the probability density that $X(t)=x$ and assume that it satisfies the fractional diffusion equation,
\begin{align}\label{ffp}
\frac{\partial}{\partial t}q
=\mathcal{D}_{t} \mathcal{L}_{x} q,\quad x\in\mathbb{R},\,t>0,
\end{align}
where
\begin{align}\label{sub0}
\mathcal{D}_{t}=\mathcal{D},\quad \alpha\in(0,1),
\end{align}
is the fractional derivative of Riemann-Liouville type given in \eqref{rl}, and
\begin{align}\label{sub1}
\mathcal{L}_{x}
={K_{\alpha}}\frac{\partial^{2}}{\partial x^{2}},
\end{align}
is the one-dimensional Laplacian with generalized diffusivity ${K_{\alpha}>0}$. We use the subscripts in \eqref{sub0} and \eqref{sub1} to emphasize that $\mathcal{D}_{t}$ acts only on the time variable $t$ and $\mathcal{L}_{x}$ acts only on the space variable $x$.
Langlands et al.\ \cite{langlands2008} developed a mesoscopic continuous-time random walk argument to derive the following system of fractional reaction-diffusion equations,
\begin{align}\label{langlandsp}
\frac{\partial}{\partial t}\mathbf{p}
=e^{{{R}} t}\mathcal{D}_{t}\Big[e^{-{{R}} t}\mathcal{L}_{x} \mathbf{p}\Big]
+{{R}} \mathbf{p},\quad x\in\mathbb{R},\,t>0,
\end{align}
for the joint density
$\mathbf{p}(x,t)=(\mathbf{p}_{i}(x,t))_{i=1}^{n}$, where
\begin{align*}
\mathbf{p}_{i}(x,t)\,\textup{d} x
=\mathbb{P}(X(t)=x,\,J(t)=i).
\end{align*}
The derivation in \cite{langlands2008} implicitly assumed that $X$ and $J$ are independent processes.
We now prove that \eqref{langlandsp} follows immediately from \eqref{ode}, \eqref{ffp}, the independence of $X$ and $J$, and the linearity of $\mathcal{D}_{t}$ and $\mathcal{L}_{x}$. Note first that independence ensures that the joint probability distribution is the product of the individual distributions,
\begin{align}\label{pqr}
\begin{split}
\mathbf{p}_{i}(x,t)\,\textup{d} x
&=\mathbb{P}(X(t)=x,\, J(t)=i)\\
&=\mathbb{P}(X(t)=x)\mathbb{P}(J(t)=i)\\
&=q(x,t)\mathbf{r}_{i}(t)\,\textup{d} x.
\end{split}
\end{align}
Therefore, differentiating $\mathbf{p}(x,t)=q(x,t)\mathbf{r}(t)$ with respect to time and using \eqref{ode} and \eqref{ffp} yields
\begin{align}\label{calc}
\begin{split}
\frac{\partial}{\partial t}\mathbf{p}(x,t)
&=\mathbf{r}(t)\mathcal{D}_{t}\Big[\mathcal{L}_{x} q(x,t)\Big]+q(x,t){{R}} \mathbf{r}(t)\\
&=\mathbf{r}(t)\mathcal{D}_{t}\Big[\mathcal{L}_{x} q(x,t)\Big]+{{R}} \mathbf{p}(x,t).
\end{split}
\end{align}
Using \eqref{me}, the first term in the righthand side of \eqref{calc} becomes
\begin{align}\label{calc2}
\begin{split}
\mathbf{r}(t)\mathcal{D}_{t}\Big[\mathcal{L}_{x} q(x,t)\Big]
&=e^{{{R}} t}\mathbf{r}(0)\mathcal{D}_{t}\Big[\mathcal{L}_{x} q(x,t)\Big]\\
&=e^{{{R}} t}\mathcal{D}_{t}\Big[\mathbf{r}(0)\mathcal{L}_{x} q(x,t)\Big]\\
&=e^{{{R}} t}\mathcal{D}_{t}\Big[e^{-{{R}} t}\mathbf{r}(t)\mathcal{L}_{x} q(x,t)\Big]\\
&=e^{{{R}} t}\mathcal{D}_{t}\Big[e^{-{{R}} t}\mathcal{L}_{x} \mathbf{p}(x,t)\Big].
\end{split}
\end{align}
Combining \eqref{calc} and \eqref{calc2} yields \eqref{langlandsp}.
\section{More general setting\label{moregeneralsetting}}
It is easy to see that the calculation in \eqref{pqr}-\eqref{calc2} and the resulting evolution equation in \eqref{langlandsp} holds in much greater generality. First, the spatial domain need not be $\mathbb{R}$, and we will instead take it to be any $d$-dimensional open set $V\subseteq\mathbb{R}^{d}$ with $d\ge1$. Second, the operator $\mathcal{L}_{x}$ need not be the Laplacian and the operator $\mathcal{D}_{t}$ need not be the Riemann-Liouville fractional derivative. Instead, we will take $\mathcal{L}_{x}$ to be any linear operator acting on functions of space $x\in V\subseteq\mathbb{R}^{d}$ and $\mathcal{D}_{t}$ to be any linear operator acting on functions of time $t\in[0,\infty)$. That is, if $\varphi(t),\psi(t)$ are real-valued functions of time $t\in[0,\infty)$ {in the domain of $\mathcal{D}_{t}$} and $f(x),g(x)$ are real-valued functions of space $x\inV\subseteq\mathbb{R}^{d}$ {in the domain of $\mathcal{L}_{x}$}, then we assume
\begin{align}\label{linear}
\begin{split}
\mathcal{L}_{x}(\varphi f+\psi g)
&=\varphi \mathcal{L}_{x} f+\psi \mathcal{L}_{x} g,\\
\mathcal{D}_{t}(\varphi f+\psi g)
&=f \mathcal{D}_{t} \varphi+g \mathcal{D}_{t} \psi.
\end{split}
\end{align}
Third, the jump process $J(t)$ need not have constant jump rates or a finite state space. We summarize this in the following theorem. {Equation~\eqref{evo} in Theorem~\ref{thmind} and its proof is the main result of this paper.}
\begin{theorem}\label{thmind}
Assume $\{J(t)\}_{t\ge0}$ is a stochastic process on the possibly infinite, countable state space, $\{1,2,\dots,n\}$, where
\begin{align*}
n\in\mathbb{N}\cup\{\infty\}.
\end{align*}
Suppose the distribution,
\begin{align*}
\mathbf{r}(t):=(\mathbf{r}_{i}(t))_{i=1}^{n}:=(\mathbb{P}(J(t)=i))_{i=1}^{n}\in\mathbb{R}^{n},
\end{align*}
satisfies
\begin{align}\label{rode}
\frac{\textup{d}}{\textup{d} t}\mathbf{r}(t)
=R(t)\mathbf{r}(t),\quad t>0,
\end{align}
for some function $R(t):[0,\infty)\mapsto\mathbb{R}^{n\times n}$, and
\begin{align}\label{sop}
\mathbf{r}(t)
=\Psi(t)\mathbf{r}(0),\quad t\ge0,
\end{align}
where $\Psi(t):(-\infty,\infty)\mapsto\mathbb{R}^{n\times n}$ satisfies
\begin{align}\label{invert}
\Psi(t)\Psi(-t)
=\textup{id},\quad t\in(-\infty,\infty),
\end{align}
where $\textup{id}$ is the identity operator.
Assume $\{X(t)\}_{t\ge0}$ is a stochastic process taking values in the closure of the open set $V\subseteq\mathbb{R}^{d}$ whose probability density,
\begin{align*}
q(x,t)\,\textup{d} x
=\mathbb{P}(X(t)=x),
\end{align*}
satisfies
\begin{align}\label{qeqn}
\frac{\partial}{\partial t}q=\mathcal{D}_{t}\mathcal{L}_{x} q,\quad x\inV\subseteq\mathbb{R}^{d},\,t>0,
\end{align}
where $\mathcal{L}_{x}$ and $\mathcal{D}_{t}$ are linear operators satisfying \eqref{linear}.
If $X$ and $J$ are independent, then the joint probability density $\mathbf{p}(x,t)=(\mathbf{p}_{i}(x,t))_{i=1}^{n}$,
\begin{align*}
\mathbf{p}_{i}(x,t)\,\textup{d} x
&=\mathbb{P}(X(t)=x,\, J(t)=i),
\end{align*}
satisfies
\begin{align}\label{evo}
\frac{\partial}{\partial t}\mathbf{p}
=\Psi(t)\mathcal{D}_{t}\Big[\Psi(-t)\mathcal{L}_{x} \mathbf{p}\Big]
+{{R}}(t) \mathbf{p},\quad x\in V,\,t>0.
\end{align}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thmind}]
Since $X$ and $J$ are independent, the joint probability density is simply the product,
\begin{align*}
\mathbf{p}(x,t)=q(x,t)\mathbf{r}(t),
\end{align*}
and the proof then follows exactly as in \eqref{calc}-\eqref{calc2} with $e^{\pm Rt}$ replaced by $\Psi(\pm t)$.
\end{proof}
Theorem~\ref{thmind} states that if $X$ and $J$ are independent, then their joint density $\mathbf{p}(x,t)$ satisfies the evolution equations in \eqref{evo}. To investigate the converse of Theorem~\ref{thmind}, assume that the joint density $\mathbf{p}(x,t)$ of $X$ and $J$ satisfies the evolution equations in \eqref{evo}. Now, notice that the product $q(x,t)\mathbf{r}(t)$ also satisfies \eqref{evo} if $q(x,t)$ satisfies \eqref{qeqn} and $\mathbf{r}(t)$ satisfies \eqref{rode}. Therefore, if (i) $\mathbf{p}(x,t)$ and $q(x,t)\mathbf{r}(t)$ satisfy the same initial conditions and boundary conditions (or growth conditions if the domain $V$ is unbounded) and if (ii) the solution to equation \eqref{evo} with these initial/boundary conditions is unique, then $\mathbf{p}(x,t)=q(x,t)\mathbf{r}(t)$. Therefore, $X$ and $J$ must be independent. In conclusion, the joint density of $X$ and $J$ satisfies \eqref{evo} if and only if $X$ and $J$ are independent, as long as dependencies between $X$ and $J$ are not imposed at $t=0$ or on the spatial boundary.
\section{Examples\label{examples}}
In this section, we illustrate Theorem~\ref{thmind} by applying it to some examples of interest.
\subsection{Some previous results}
To recover the result \eqref{langlands} of Langlands et al.\ \cite{langlands2008}, we apply Theorem~\ref{thmind} with
\begin{align*}
V=\mathbb{R},\quad
\mathcal{L}_{x}=K_{\alpha}\frac{\partial^{2}}{\partial x^{2}},\quad
\mathcal{D}_{t}=\mathcal{D},\quad
\Psi(t)=e^{Rt},
\end{align*}
where $R=R(t)$ is constant in time and $n<\infty$.
\subsection{Fractional Fokker-Planck equations}
To find the evolution equations for fractional Fokker-Planck equations with linear reactions, we apply Theorem~\ref{thmind} with $\mathcal{L}_{x}$ given by the Fokker-Planck operator in \eqref{fpo} and $\mathcal{D}_{t}=\mathcal{D}$.
\subsection{Other memory kernels}
Theorem~\ref{thmind} shows that the form of the evolution equations in \eqref{evo} holds for more general operators than the Riemann-Liouville fractional derivative $\mathcal{D}$. For example, we can take the time operator $\mathcal{D}_{t}$ to be the integro-differential operator \cite{sokolov2006b, magdziarz2009, carnaffan2017},
\begin{align}\label{bec}
\mathcal{D}_{t} \varphi(t)
=\frac{\textup{d}}{\textup{d} t}\int_{0}^{t}M(t-t')\varphi(t')\,\textup{d} t',
\end{align}
where $M(t):[0,\infty)\mapsto\mathbb{R}$ is the so-called memory kernel.
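In particular, the power-law kernel $M(t)=t^{\alpha-1}/\Gamma(\alpha)$ recovers the Riemann-Liouville operator, since then \eqref{bec} reads
\begin{align*}
\mathcal{D}_{t}\varphi(t)
=\frac{\textup{d}}{\textup{d} t}\int_{0}^{t}\frac{(t-t')^{\alpha-1}}{\Gamma(\alpha)}\varphi(t')\,\textup{d} t'
=\mathcal{D}\varphi(t),
\end{align*}
while the constant kernel $M(t)\equiv1$ makes $\mathcal{D}_{t}$ the identity operator and thus recovers the integer-order equation \eqref{fpec}.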
\subsection{Superdiffusion}
Using a continuous-time random walk argument and properties of Fourier-Laplace transforms, Schmidt et al.\ \cite{schmidt2007} found that the reaction and movement terms are decoupled in the reaction-superdiffusion equations for L\'{e}vy flights with a single irreversible reaction. In this case, the equation describing the movement (without reaction) of a single molecule is
\begin{align*}
\frac{\partial}{\partial t}q
=K_{\mu}\Delta_{x}^{\mu/2}q,
\end{align*}
where $\Delta_{x}^{\mu/2}$ is the Riesz symmetric fractional derivative acting on $x$. Therefore, the decoupling of reaction and movement terms in the reaction-superdiffusion equations follows from Theorem~\ref{thmind} upon taking the spatial operator to be $\mathcal{L}_{x}=K_{\mu}\Delta_{x}^{\mu/2}$ and the time operator $\mathcal{D}_{t}$ to be the identity.
\subsection{Time-dependent rates}
Theorem~\ref{thmind} allows the reaction rate matrix to vary in time. {In particular, suppose that the reaction rate matrix is some given function of time $\{R(t)\}_{t\ge0}$ (which does not depend on spatial position).} Starting from the result of Langlands et al.\ in \eqref{langlands} for constant reaction rates, one might conjecture that the evolution equations for {such} time-dependent reaction rates are
\begin{align}\label{nguess}
\frac{\partial}{\partial t}\mathbf{p}
=e^{\int_{0}^{t}R(s)\,\textup{d} s}\mathcal{D}_{t}\Big[e^{-\int_{0}^{t}R(s)\,\textup{d} s}\mathcal{L}_{x} \mathbf{p}\Big]
+R(t) \mathbf{p},
\end{align}
where the integration $\int_{0}^{t}R(s)\,\textup{d} s$ is performed componentwise. {Indeed, \eqref{nguess} has been used to model some physical systems involving a single irreversible reaction with a time-dependent rate \cite{abad2012, abad2013}.} However, Theorem~\ref{thmind} {shows that the conjecture in \eqref{nguess} can fail}, since the solution operator {$\Psi(\pm t)$} in \eqref{sop} for the equation \eqref{rode} is {not always} given {by the matrix exponential $e^{\pm\int_{0}^{t}R(s)\,\textup{d} s}$}.
{In fact, the two-state irreversible reaction
\begin{align}\label{1to2}
1\overset{\lambda(t)}{\to}2,
\end{align}
is a rare case of time-dependent reaction rates for which \eqref{nguess} holds, since this is one of the few instances of time-dependent reaction rates in which $\Psi(\pm t)=e^{\pm\int_{0}^{t}R(s)\,\textup{d} s}$ (see Appendix A.2.4 in \cite{aalen2008}). To illustrate, suppose $J(t)\in\{1,2\}$ models \eqref{1to2} for some reaction rate $\lambda(t)$, and thus assume that the distribution $\mathbf{r}(t)\in\mathbb{R}^{2}$ satisfies the nonautonomous linear system of ordinary differential equations in \eqref{rode} with time-dependent reaction rate matrix,
\begin{align*}
R(t)
=\begin{pmatrix}
-\lambda(t) & 0 \\
\lambda(t) & 0
\end{pmatrix}\in\mathbb{R}^{2\times2}.
\end{align*}
In this case, one can check that the solution operator in \eqref{sop} is indeed the matrix exponential,
\begin{align*}
\Psi(t)
=e^{\int_{0}^{t}R(s)\,\textup{d} s}
=\begin{pmatrix}
e^{-\int_{0}^{t}\lambda(s)\,\textup{d} s} & 0\\
1-e^{-\int_{0}^{t}\lambda(s)\,\textup{d} s} & 1
\end{pmatrix},\quad\text{for }t\ge0,
\end{align*}
and $\Psi(-t)=e^{-\int_{0}^{t}R(s)\,\textup{d} s}$ for $t>0$.
However, if the reaction scheme is more complicated than \eqref{1to2} and the reaction rates depend on time, then typically $\Psi(\pm t)\neq e^{\pm\int_{0}^{ t}R(s)\,\textup{d} s}$, and thus Theorem~\ref{thmind} shows that \eqref{evo} holds rather than \eqref{nguess}. For example, suppose \eqref{1to2} is now reversible,
\begin{align*}
1\Markov{\lambda_{2}(t)}{\lambda_{1}(t)}2,
\end{align*}
and thus $\mathbf{r}(t)\in\mathbb{R}^{2}$ satisfies \eqref{rode} with
\begin{align*}
R(t)
=\begin{pmatrix}
-\lambda_{1}(t) & \lambda_{2}(t) \\
\lambda_{1}(t) & -\lambda_{2}(t)
\end{pmatrix}\in\mathbb{R}^{2\times2}.
\end{align*}
In this case, it is straightforward to check that the solution operator is
\begin{align}\label{op445}
\Psi(t)
=\begin{pmatrix}
\Psi_{11}(t) & 1-\Psi_{22}(t)\\
1-\Psi_{11}(t) & \Psi_{22}(t)
\end{pmatrix},\quad t\ge0,
\end{align}
where for $i\in\{1,2\}$ and $t\ge0$,
\begin{align*}
\Psi_{ii}(t)
&=e^{-\int_{0}^{t}(\lambda_{1}(s)+\lambda_{2}(s))\,\textup{d} s}\\
&\quad\times\Big(1+\int_{0}^{t}\lambda_{3-i}(s)e^{\int_{0}^{s}(\lambda_{1}(\sigma)+\lambda_{2}(\sigma))\,\textup{d} \sigma}\,\textup{d} s\Big).
\end{align*}
Also, the condition \eqref{invert} implies that the operator evaluated at a negative time argument is the matrix inverse
\begin{align*}
\Psi(-t)
=(\Psi(t))^{-1},\quad\text{for }t>0.
\end{align*}
Note that the matrix $\Psi(t)$ in \eqref{op445} is indeed invertible for each $t\ge0$, since Liouville's formula gives $\det\Psi(t)=e^{\int_{0}^{t}\textup{tr}\,R(s)\,\textup{d} s}=e^{-\int_{0}^{t}(\lambda_{1}(s)+\lambda_{2}(s))\,\textup{d} s}>0$. The matrix exponential in this case is
\begin{align}\label{op446}
e^{\int_{0}^{t}R(s)\,\textup{d} s}
=\begin{pmatrix}
1-\chi_{21}(t) & \chi_{12}(t)\\
\chi_{21}(t) & 1-\chi_{12}(t)\\
\end{pmatrix},\quad t\ge0,
\end{align}
where
\begin{align*}
\chi_{ij}(t)
=\frac{\int_0^t \lambda_{j}(s) \, \textup{d} s \left(1-e^{-\int_0^t (\lambda_{1}(s)+\lambda_{2}(s)) \, \textup{d} s}\right)}{\int_0^t \lambda_{1}(s) \, \textup{d} s+\int_0^t \lambda_{2}(s) \, \textup{d} s}.
\end{align*}
It is straightforward to check that \eqref{op445} and \eqref{op446} are generally not equal, except in special cases (such as constant rates, $\lambda_{j}(t)\equiv\lambda_{j}>0$, or equal rates, $\lambda_{1}(t)=\lambda_{2}(t)$ for all $t\ge0$).
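This failure is also easy to confirm numerically. In the Python sketch below (the rate functions, final time, and tolerances are illustrative choices of ours), $\Psi(T)$ is obtained by integrating \eqref{rode} column by column and compared against the naive guess $e^{\int_{0}^{T}R(s)\,\textup{d} s}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.linalg import expm

lam1 = lambda t: 1.0 + t   # illustrative rate 1 -> 2
lam2 = lambda t: 0.5       # illustrative rate 2 -> 1

def R(t):
    return np.array([[-lam1(t),  lam2(t)],
                     [ lam1(t), -lam2(t)]])

T = 2.0

# Psi(T): propagate each basis vector through dr/dt = R(t) r
cols = [solve_ivp(lambda t, r: R(t) @ r, (0.0, T), e,
                  rtol=1e-10, atol=1e-12).y[:, -1]
        for e in np.eye(2)]
Psi = np.column_stack(cols)

# naive guess: entrywise time integral of R, then expm
Rint = np.array([[quad(lambda s: R(s)[i, j], 0.0, T)[0]
                  for j in range(2)] for i in range(2)])
print(np.max(np.abs(Psi - expm(Rint))))  # clearly nonzero
\end{verbatim}
By contrast, rerunning the same comparison with the rate matrix of the single irreversible reaction \eqref{1to2} gives agreement to solver precision, consistent with the discussion above.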
Furthermore, it is not merely the presence of a reversible reaction that can cause \eqref{nguess} to fail. For example, suppose $J(t)\in\{1,2,3\}$ has two irreversible reactions,
\begin{align*}
1\overset{\lambda_{1}(t)}{\to}2\overset{\lambda_{2}(t)}{\to}3,
\end{align*}
and its distribution $\mathbf{r}(t)\in\mathbb{R}^{3}$ satisfies \eqref{rode} with
\begin{align*}
R(t)
=\begin{pmatrix}
-\lambda_{1}(t) & 0 & 0\\
\lambda_{1}(t) & -\lambda_{2}(t) & 0\\
0 & \lambda_{2}(t) & 0
\end{pmatrix}\in\mathbb{R}^{3\times3}.
\end{align*}
The corresponding solution operator for $t\ge0$ is then
\begin{align*}
\Psi(t)
&=\begin{pmatrix}
e^{-\int_{0}^{t}\lambda_{1}(s)\,\textup{d} s} & 0 & 0\\
\Psi_{21}(t) & e^{-\int_{0}^{t}\lambda_{2}(s)\,\textup{d} s} & 0\\
1-e^{-\int_{0}^{t}\lambda_{1}(s)\,\textup{d} s}-\Psi_{21}(t) & 1-e^{-\int_{0}^{t}\lambda_{2}(s)\,\textup{d} s} & 1
\end{pmatrix},
\end{align*}
where
\begin{align*}
\Psi_{21}(t)
&=e^{-\int_{0}^{t}\lambda_{2}(s)\,\textup{d} s}\int_{0}^{t}\lambda_{1}(s)e^{\int_{0}^{s}(\lambda_{2}(\sigma)-\lambda_{1}(\sigma))\,\textup{d} \sigma}\,\textup{d} s.
\end{align*}
For this example, one can check that
\begin{align*}
\Psi(t)\neq e^{\int_{0}^{t}R(s)\,\textup{d} s},\quad \text{if }t>0,
\end{align*}
except for special cases, and thus \eqref{nguess} is invalid.
Summarizing, except for a single irreversible reaction, the evolution equation \eqref{nguess} is typically false for time-dependent rates and is corrected by \eqref{evo} in Theorem~\ref{thmind}.
}
\section{Discussion}
We have given a short and elementary proof of the evolution equations for a general class of systems which can combine anomalous motion with linear reaction kinetics. Our results generalize some previous results in \cite{sokolov2006, henry2006, schmidt2007, langlands2008}. The derivations of these previous results employed a variety of mathematical techniques, including continuous-time random walk theory, Fourier and Laplace transforms, Tauberian theorems, and asymptotic expansions. In light of these previous derivations, one might conclude that the form of the evolution equations depends on these finer details. However, we have shown that the evolution equations follow directly from (i) the independence of the stochastic spatial and discrete processes describing a single particle and (ii) the linearity of the integro-differential operators describing particle motion.
Of course, in the present work and in the previous work \cite{sokolov2006, henry2006, schmidt2007, langlands2008}, the evolution equations are not strictly necessary in the sense that the solution to the equations is merely the product of the distributions of the spatial and discrete processes. Nevertheless, these results are expected to be useful for developing models where the independence assumption breaks down. Indeed, evolution equations of a {very} similar form to \eqref{evo} have been derived in \cite{abad2010, yuste2014} for pure subdiffusion with certain space-dependent reaction rates. Furthermore, we agree with Refs.\ \cite{henry2006, langlands2008} that these results could provide a platform for investigating nonlinear reactions, such as those stemming from mass-action kinetics.
{For example, a natural starting place is an irreversible bimolecular reaction of the form \cite{yuste2001prl}
\begin{align*}
A+A\to\varnothing,
\end{align*}
which describes particles that can annihilate each other. However, while the general form of the evolution equations in \eqref{evo} may be instructive for this nonlinear example, it is clear that the approach of the present work cannot be applied directly. Indeed, the present work relied on the independence of the spatial position and discrete state of a single particle. However, it is clear for this example that a single particle is more likely to be in the discrete ``annihilated'' state if it is in a region of space containing a high concentration of particles. Similarly, if we consider a unimolecular reaction of the form
\begin{align*}
A\overset{\lambda(x)}{\to}\varnothing,
\end{align*}
where the first order rate $\lambda(x)>0$ depends on the spatial position $x$ of the particle, it is clear that the particle is more likely to be in the ``annihilated'' state if it is in a region of space where $\lambda(x)$ is large.
}
\begin{acknowledgments}
The author gratefully acknowledges support from the National Science Foundation (DMS-1944574, DMS-1814832, and DMS-1148230).
\end{acknowledgments}
\section{Observations}
Since 1993, the Owens Valley Radio Observatory (OVRO) 5.5-m telescope (see \cite{Herbig}) has been
used for extensive observations at 32~GHz of the cosmic microwave background
(CMB) on angular scales of $7^\prime\!-22^\prime$. The
receiver input is switched at 500~Hz between two beams of $7^\prime\!.35$ (FWHM)
separated by $22^\prime\!.16$. The OVRO 40-m telescope, under-illuminated at
14.5~GHz to match the 5.5-m beam, provides a second frequency channel for
spectral discrimination of foregrounds (see Table 1). Both receivers detect
right-circular polarization.
\begin{table}[h]
\centerline {\footnotesize\sc TABLE 1}
\centerline {\footnotesize\sc Parameters for the OVRO 5.5-m and 40-m telescopes}
\begin{center}
\footnotesize
\begin{tabular*}{3.0in}[h]{p{0.9in} p{0.9in} p{0.9in}} \hline\hline
& 5.5-m & 40-m \\ \hline
Frequency & 32~GHz & 14.5~GHz \\
Bandwidth & 5.6~GHz & 3~GHz \\
System & 30 K & 48 K \\
temperature & & \\
Beam efficiency& 0.61 & 0.70 \\
Beamwidth &$7^{\prime}.35$ &$7^{\prime}.35$ \\
(FWHM) & & \\
Beamthrow &$22^{\prime}\!.16$ &$21^{\prime}\!.50$ \\
Main beam &$5.12\times10^{-6}~$sr &$5.38\times10^{-6}~$sr \\
solid angle & & \\\hline
\end{tabular*}
\label{tab:params}
\end{center}
\end{table}
In the ``RING5M'' experiment, we observe 36 fields at $\delta \simeq 88^\circ$
spaced by the $22^\prime\!.16$ beamthrow in a ring around the NCP (see Figure 1). To minimize variations in differential
contributions from the ground, each field is observed only within $\pm20$~minutes
of upper culmination. In each flux measurement, the telescope moves in
azimuth to alternate the beams on the main field; each measurement is thus
the difference between the signal from the main field and the average of the
signals in the two adjacent fields (\cite{Readhead}~1989). As a result, a
strong signal in one field produces a negative signal half as strong in each
of the two flanking fields (see Figure 2).
To estimate the contribution of point sources, the RING5M fields were mapped
on the VLA at 8~GHz to a sensitivity of 0.25~mJy; a total of 34 sources
were detected with $S_{\rm 8\,GHz} \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 2~$mJy. Subsequent monthly VLA monitoring
of these sources at 8 and 15~GHz provided accurate measurements of flux
densities and spectral indices, enabling us to estimate the flux
densities at both 14.5~GHz and 32~GHz and to subtract the point-source
contribution from these data sets.
We report here only on results relevant to an anomalous foreground we have
detected. The implications for CMB observations and a full discussion and
analysis of our results will be presented in subsequent papers.
\section{Analysis}
Our cumulative observations of the RING5M fields have achieved a $1\sigma$ rms sensitivity of $17\,\mu$K per field at 32~GHz, and $15\,\mu$K per field at 14.5~GHz.
Signals $\geq200\,\mu$K are seen at both frequencies, and excellent
reproducibility of these data between the 1994, 1995 and 1996 observing
seasons indicates that they represent real structure on the sky.
In addition to good agreement between independent data sets at the same
frequency, a Spearman rank test (\cite{Kendall}), modified to account for
correlations introduced by the switching (see \S3), finds a correlation
$r_s = 0.84$ between the two frequencies, with a significance
$P(r_s \geq 0.84) = 7\times10^{-7}$. The strength of the observed correlation
between independent observations on separate telescopes is further evidence
that the signals are astronomical in origin, and not artifacts of the
observing procedure.
Since the only common element between the two channels is our
observing strategy, we explored the possibility of systematic contamination
by observing the RING5M fields at 14.5~GHz for two weeks
at {\it lower} culmination. The data obtained showed the same signals, to
within the observed $\sim~\!\!\!80~\mu$K scatter between two-week subsets of our
upper culmination data.
In all of the following discussion, we use data sets from which the
point-source contributions have been subtracted. Apart from one source, which
affected 3 of our 36 fields, the contributions of point sources were much
smaller than the detected signals. The maximum $1\sigma$ error on the estimated point-source contribution to a field was $22~\mu$K. There is therefore no doubt that we have detected a significant astronomical signal which is not due to point sources.
\subsection{Spectral Index of the Foreground}
The strongest signals seen in both the 14.5 and 32~GHz channels have
steep spectral indices and amplitudes $T\sim300~\mu$K at 14.5~GHz. A
likelihood analysis yields $\beta = -1.1^{+0.2}_{-0.3}$ for the spectral index of the data set as a
whole, indicating the presence of significant emission with $\beta < 0$.
Recent Westerbork observations (\cite{Wieringa}) reveal polarized structures
at high Galactic latitude as bright as 8~K at 325~MHz. These features, on
scales of $4^\prime-30^\prime$, are seen in linear polarization only; the
corresponding total intensity maps are extremely smooth, and upper limits
$< 0.5-1~$K are set on any counterparts in total intensity. This is interesting,
as $\sim\!8$~K features with spectral index $\beta=-2.7$ could just reproduce the
observed structures at 14.5~GHz if we were 100\% sensitive to linear polarization.
Tests of our 14.5~GHz polarizers, however, indicate $< 6\%$
contamination from linear polarization across our bandpass, so it is clear
that such polarized features cannot account for the signals we have detected.
Moreover, given the smoothness of the total intensity maps, it is highly improbable that the structure in the polarized emission is due to variations in
intrinsic polarization angle, and the polarized structure is interpreted as
Faraday rotation of an intrinsically smooth, polarized synchrotron
background by an intervening screen. As a result, the polarization angle
will have the $\nu^{-2}$ dependence of Faraday rotation and the structure
should be negligible at 14.5~GHz.
Total intensity maps from the WENSS survey (\cite{de Bruyn}), covering 20 of
the 36 RING5M fields, show no detectable signals after removal of discrete
sources. Comparison of the maximum signal at 14.5~GHz with the rms from the
WENSS maps in the overlap fields places a lower limit $\beta \geq -2.2$ on the spectral index of the foreground we have detected.
\begin{figure}[ht]
\plotone{fig1.ps}
\caption{\footnotesize IRAS $100\,\mu$m map in J2000 coordinates, with RING5M fields over-plotted. The spacing of the
fields is $\sim 22^\prime$, and the FWHM of the beam is $7^\prime\!.35$.}
\label{fig:irasfields}
\end{figure}
Thus, based on the WENSS maps, we know that the contribution of any
steep-spectrum ($\beta < -2.2$) component is negligible and we now
investigate what foreground spectral
index is favored by our data. We model our data as a Gaussian CMB component in the presence of a single foreground of variable strength but constant spectral index $\beta$. Defining $\Delta T_i = \delta T_i - {{1\over2}(\delta T_{i+1} + \delta T_{i-1})}$, as described in \S1, for each field $i$, we measure
\begin{equation}
\Delta T_i(\nu) = \Delta T_{i_{c\!m\!b}} + \Delta T_{i_{f\!o\!r\!e}}\nu^\beta.
\end{equation}
Given $\Delta T_i(\nu)$ measured at two frequencies $\nu_1$ and
$\nu_2$, we can solve for the CMB component in terms of the unknown spectral
index of the foreground:
\begin{equation}
\Delta T_{i_{c\!m\!b}}(\beta) = {{\Delta T_i(\nu_1)\nu_1^{-\beta} - \Delta T_i(\nu_2)\nu_2^{-\beta}}\over{\nu_1^{-\beta} - \nu_2^{-\beta}}}.
\label{eq:sep}
\end{equation}
The likelihood function for the CMB component (see, for example, \cite{Readhead}~1989) is then given by
\begin{equation}
\mathcal{L}(\sigma_{c\!m\!b}, \beta) = \prod_{i=1}^{36} {1\over{\sqrt{2\pi(\epsilon^2_i + \sigma^2_{c\!m\!b})}}}\exp\left[-{\Delta T^2_{i_{c\!m\!b}}(\beta)\over{2(\epsilon^2_i + \sigma^2_{c\!m\!b})}}\right],
\end{equation}
where $\epsilon_i$ is the error in the residual CMB component, and $\sigma^2_{c\!m\!b}$ is the intrinsic CMB variance.
The likelihood constructed from the point-source subtracted data sets at 14.5 and 32~GHz peaks at $\beta = -2.25$, with $\beta > -1.8$ ruled out at the
$1\sigma$ level. Our data are thus consistent with a foreground of spectral
index $\beta\sim-2$, and we conclude that the foreground is either unusually
flat-spectrum synchrotron radiation, or free-free emission.
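As an illustration of how the separation of Equation (\ref{eq:sep}) and the likelihood are evaluated in practice, the following sketch profiles the likelihood over $\sigma_{c\!m\!b}$ on a grid of spectral indices; the field values, errors, and grids are synthetic placeholders rather than the RING5M measurements:
\begin{verbatim}
import numpy as np

nu1, nu2 = 14.5, 32.0  # GHz

def cmb_component(dT1, dT2, beta):
    # Two-frequency separation of the CMB part for an assumed foreground beta.
    return (dT1 * nu1**-beta - dT2 * nu2**-beta) / (nu1**-beta - nu2**-beta)

def log_likelihood(dT1, dT2, eps, sigma_cmb, beta):
    dTcmb = cmb_component(dT1, dT2, beta)
    var = eps**2 + sigma_cmb**2
    return np.sum(-0.5 * np.log(2 * np.pi * var) - dTcmb**2 / (2 * var))

rng = np.random.default_rng(0)
dT1 = rng.normal(0.0, 200.0, 36)         # synthetic 14.5 GHz fields [micro-K]
dT2 = 100.0 + rng.normal(0.0, 50.0, 36)  # synthetic 32 GHz fields [micro-K]
eps = np.full(36, 17.0)                  # per-field errors [micro-K]

betas = np.linspace(-3.0, -1.0, 81)
ll = [max(log_likelihood(dT1, dT2, eps, s, b)
          for s in np.linspace(1.0, 300.0, 60)) for b in betas]
print("beta at likelihood peak:", betas[int(np.argmax(ll))])
\end{verbatim}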
\section{IRAS 100 $\mu$m maps}
In an attempt to correlate this component with known Galactic
foregrounds, we convolved the IRAS $100\,\mu$m map (\cite{IRAS}) of the NCP
with our beam and beam-switch. The source-subtracted 14.5~GHz data,
along with results of the IRAS convolution, are shown in Figure 2.
\begin{figure}[ht]
\plotone{fig2.ps}
\caption{\footnotesize Comparison of the 14.5~GHz data (solid line) in $\mu$K, with the IRAS $100\,\mu$m convolution (dot-dashed line). Errors for the IRAS data points are the estimated standard deviation of the convolution. The dotted line essentially coincident with the x-axis is the anisotropy inferred from H$\alpha$ images of the NCP fields, in $\mu$K. At bottom left is the ``triple beam'' pattern due to the double switching.}
\label{fig:kuiras}
\end{figure}
We find a clear correlation between the IRAS $100\,\mu$m maps and our 14.5~GHz
data set. To assess the significance of this result without {\it a priori}
knowledge of the distribution of the IRAS $100\,\mu$m brightness or 14.5~GHz
temperature on $7^\prime$ scales, we use Spearman's rank correlation
coefficient $r_s$. Since this depends only on the data {\it ranks}, whose distribution is known, and not on the values themselves,
we can determine the significance of an observed value of $r_s$
unambiguously. The observed correlation between the 14.5~GHz and IRAS $100~\mu$m data is $r_s = 0.73$, and, for 36 {\it independent} fields, the probability of observing $r_s \geq 0.73$ by chance is $P(r_s\ge0.73) = 4.5\times10^{-7}$.
Due to our switching strategy, however, only every third field is actually
independent, and numerical simulations show that this reduces the significance to $6.7\times10^{-5}$.
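The effect of the switching on chance correlations can be estimated with a short Monte Carlo calculation of this kind; in the sketch below, the noise model (two independent Gaussian fields differenced on a closed ring) is an assumption made purely for illustration:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_fields, n_trials = 36, 100_000

def switched(x):
    # Delta T_i = dT_i - (dT_{i-1} + dT_{i+1}) / 2 on the closed ring
    return x - 0.5 * (np.roll(x, 1) + np.roll(x, -1))

hits = 0
for _ in range(n_trials):
    a = switched(rng.normal(size=n_fields))
    b = switched(rng.normal(size=n_fields))
    rs, _ = spearmanr(a, b)
    if rs >= 0.73:
        hits += 1
print("P(r_s >= 0.73) ~", hits / n_trials)
\end{verbatim}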
We note that the region spanning 3h-8h, where the correlation is weakest, is
also the region where we see the strongest signals at 32~GHz; the spectral
indices of these fields are consistent with $\beta = 0$, indicating the
presence of a significant CMB signal.
Taking $\beta = -2.2$ as the spectral index of the foreground most consistent
with both our data and the WENSS maps, we can use the 14.5~GHz and 32~GHz data to solve for the free-free component in the manner of Equation (\ref{eq:sep}). A linear fit to the foreground component yields the conversion
\begin{equation}
{T_{\it f}/I_{100\mu{\rm m}}} = 7.5\times10^{-2}~\nu_{\rm GHz}^{-2.2}~{\rm K/(MJy/sr)}.
\end{equation}
\section{H$\alpha$ Observations of the NCP}
\cite{Gaustad}~(1995) have recently estimated the free-free contamination
of small-scale anisotropy experiments through H$\alpha$ observations of the
NCP. For the brightness temperature of optically thin hydrogen, we expect
\begin{equation}
T_b(\mu{\rm K}) = {5.43\over{\nu^2_{10}T^{1/2}_4}}g_{\it ff}E\!M_{\rm cm^{-6}pc},
\label{eq:tb}
\end{equation}
where
\begin{equation}
g_{\it ff} = 4.69(1 + 0.176\ln{T_4} - 0.118\ln{\nu_{10}})
\end{equation}
is the free-free Gaunt factor (\cite{Spitzer}), the frequency is $10^{10}\nu_{10}~{\rm Hz}$, the electron temperature $T_e = 10^4T_4~{\rm K}$, and $E\!M \equiv \int{n^2_e\,dl}$ is the emission measure. For $T_e\leq 2.6\times10^4~$K, the H$\alpha$ surface brightness in rayleighs ($1 {\rm R} = 2.42\times10^{-7}~{\rm ergs~cm^{-2}~s^{-1}~sr^{-1}~at~H}\alpha$) is given by
\begin{equation}
I_{H\!\alpha}({\rm R}) = 0.36\,E\!M_{\rm cm^{-6}pc}\,T^{-0.9}_4\label{eq:Ialpha1}
\end{equation}
(\cite{Shri}).
Simulating our observing procedure on the maps of \cite{Gaustad}~(1995), we measure $\langle\Delta I\rangle_{_{r\!m\!s}}\leq0.1$~R on $7^\prime$ scales in H$\alpha$. If we assume $T_e = 10^4$~K for the temperature of the emitting gas, the inferred upper limit on the rms at 14.5~GHz due to free-free emission is $\langle\Delta T_{\it ff}\rangle_{_{r\!m\!s}}\leq3.2~\mu$K, a factor of $\sim60$ lower than the observed $\langle\Delta T_{\it ff}\rangle_{_{r\!m\!s}}=203~\mu$K.
Furthermore, the H$\alpha$ maps are featureless; in the 36 RING fields, no signals are seen with $|\Delta I| > 0.2~$R. For the $\sim300~\mu$K signals we detect, Equations (\ref{eq:tb})-(\ref{eq:Ialpha1}) predict an H$\alpha$ brightness $|\Delta I|\sim9~$R.
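The arithmetic behind this prediction is easily reproduced by inverting Equation (\ref{eq:tb}) for the emission measure and substituting into Equation (\ref{eq:Ialpha1}); the sketch below assumes $T_e=10^4~$K and a $300\,\mu$K signal at 14.5~GHz:
\begin{verbatim}
# Worked check: the H-alpha brightness implied by a 300 micro-K
# free-free signal at 14.5 GHz from T_e = 1e4 K gas.
import numpy as np

nu10, T4 = 1.45, 1.0   # nu = 14.5 GHz, T_e = 1e4 K
g_ff = 4.69 * (1 + 0.176 * np.log(T4) - 0.118 * np.log(nu10))
# Invert the brightness-temperature relation for the emission measure
# that gives T_b = 300 micro-K, then apply the H-alpha relation.
EM = 300.0 * nu10**2 * np.sqrt(T4) / (5.43 * g_ff)   # cm^-6 pc
I_Halpha = 0.36 * EM * T4**-0.9                      # rayleighs
print(f"EM = {EM:.1f} cm^-6 pc, I_Halpha = {I_Halpha:.1f} R")  # ~9 R
\end{verbatim}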
If considerable dust lies along the line of sight to the NCP, extinction might
account for the low levels of observed H$\alpha$ emission; estimates from
the IRAS $100\,\mu$m intensities, however, imply $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.6$ magnitudes of
visual extinction (\cite{Simonetti}~1996), so that the upper limits on free-free emission can be increased by 74\% at most.
As $T_e$ is increased beyond $2.6\times10^4~$K, the allowed orbital space for recombination shrinks, and Equation (\ref{eq:Ialpha1}) is no longer valid; for $T_e > 2.6\times10^4~$K, a fit to the H$\alpha$ recombination coefficient gives $\alpha_{H\!\alpha} \propto T^{-1.2}$ (\cite{Ferland}). The presence of $300\,\mu$K free-free emission can therefore be reconciled
with the observed $3\sigma$ H$\alpha$ limit if the emission is due to gas at $T_e\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^6~$K. For these temperatures, free-free brightness temperatures of $300\,\mu$K at 14.5~GHz require an $E\!M\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 131$. The corresponding
allowed $n_e-l$ parameter space is shown in Figure 3.
\begin{figure}[ht]
\plotone{fig3.ps}
\caption{\footnotesize Allowed $n_e-l$ parameter space (shaded region) for $T_e\sim10^6-10^7~$K. We can exclude $l > 100$~pc as this would require that
inhomogeneities be aligned along the line of sight to within $\sim6\times10^{-3}$~rad, since we see fluctuations on the scale of the $22^\prime$ beamthrow. The solid lines correspond to $T_e = 10^6,~2\times10^6,~4\times10^6,~6\times10^6,~8\times10^6 {\rm~and~} 10^7~\rm K.$}
\label{fig:nel}
\end{figure}
\section{Discussion}
The observed structure in the IRAS $100~\mu$m map of the NCP
region is part of a large HI feature known as the NCP Loop. This feature,
which encompasses all of the 36 RING5M fields, has been modeled by Meyerdierks
et al.~(1991) as the wall of an expanding cylindrical shock. While the
production of a {\it dense} ionized component such as that implied by Figure 3
may pose significant difficulties --- such structures will be extremely overpressured, and must necessarily be transient phenomena --- it is intriguing that the combination of large emission measure and high temperature arrived at by
interpreting the observed structure at 14.5~GHz as free-free emission is
suggestive of just such a shocked component of the ISM.
If the emission is due to $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^6$~K gas, this component should have a
counterpart in soft X-rays; the absence of any ROSAT PSPC pointings near the
NCP and the low resolution of available all-sky surveys, however, prevent any
useful comparison with existing data sets.
Perhaps more plausible, though no less anomalous, is the possibility that the
observed structure at 14.5~GHz is due to flat-spectrum synchrotron emission.
Synchrotron spectral indices as flat as $\beta = -2.0$ are typically
observed only in plerions associated with the very youngest SNR (\cite{Green}),
and we would not expect such emission from the NCP Loop --- an old remnant,
with expansion velocity $v\simeq 20$~km/s. A notable exception to the general
steepness of Galactic synchrotron radiation, however, is the filamentary structure
observed toward the Galactic center. These features, consisting of long,
nearly one-dimensional threads, have spectral
indices $-2.2 \leq\beta\leq -1.9$ yet show considerable linear polarization,
suggesting that the dominant emission mechanism is synchrotron (\cite{Yusef}).
Although such structures would suffer from a lifetime problem similar to that of
free-free filaments and require recent injection of high-energy electrons to
maintain a flat spectrum, they would obviate the high temperature and pressure
required in the case of free-free emission.
\section{Conclusions}
We have detected a significant correlation between emission with temperature
spectral index $\beta\sim-2$ observed at 14.5~GHz in the RING5M experiment, and the IRAS $100\,\mu$m maps. If this is free-free emission, the lack of
accompanying H$\alpha$ emission implies that it is from a component of the
ISM with $T_e\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^6~$K. The large EM required to
produce the observed signals at these temperatures is typical of SNR, an
interpretation supported by the morphology of the NCP Loop, with which the IRAS emission is associated.
\cite{kogut}~(1996) have recently reported a large angular scale correlation of the
residual free-free component in the COBE DMR sky maps with far-infrared DIRBE emission. The level of this signal at 53~GHz, however, is consistent with
predictions from H$\alpha$ observations, implying that on $7^\circ$ scales,
the observed free-free emission is from a $T_e\sim10^4~$K phase of the
ISM. Moreover, if the correlation with free-free emission persists to
small scales, the power spectrum of the high-latitude DIRBE $240\,\mu$m maps,
$P({\ell})\propto{\ell}^{-3}$ where ${\ell}\sim 60/\theta^{\circ}$, implies
a level of free-free at $0.1^{\circ}$ scales marginally consistent with
the limit inferred from H$\alpha$ observations.
If the observed foreground is not unique to the NCP region, our
results imply that such emission could be a serious contaminant to
small-scale CMB measurements in other areas of sky. This component does,
however, have a significantly steeper spectral index than the CMB and may be subtracted out by
multi-frequency observations. Moreover, further observations now in progress
to determine the extent of the correlation between 14.5~GHz and $100~\mu$m
emission indicate that these results for the NCP region are atypical.
\vskip 12pt
We would like to thank A. G. de Bruyn for making data from the WENSS survey
available to us prior to publication, Gaustad et al. for placing their H$\alpha$ images in the public domain, C. Heiles for pointing us to the work of
Meyerdierks et al, and a referee for a number of useful comments. This
research has made use of the SkyView database, operated under the auspices
of the High Energy Astrophysics Science Archive Research Center (HEASARC)
at the GSFC Laboratory for High Energy Astrophysics.
This work at the OVRO is supported by NSF grant number AST 94-19279.
\newpage
\footnotesize
\section{Element Jacobi Smoothing}
\label{app:ej}
Fidkowski~et~al.~\cite{Fidkowski2005} investigated the effect of \emph{p}-multigrid with Element-Jacobi (EJ) smoothing on the convergence of implicit DG. As a canonical approach for solving implicit systems of equations, this method is included to provide a benchmark for the dual-time approach. The equivalent of the pseudo-time update for EJ takes the form
\begin{subequations}
\begin{align}
\mathbf{u}_{n+1,m+1} &= \mathbf{u}_{n+1,m} - \kappa\mathbf{J}^{-1}(\mathbf{T}\mathbf{u}_{n+1,m} - C_B\mathbf{u}_{n+1,0}),\\
\mathbf{J} &= \px{}{\mathbf{u}_{n+1,m}}(\mathbf{T}\mathbf{u}_{n+1,m} - C_B\mathbf{u}_{n+1,0}),
\end{align}
\end{subequations}
where $\kappa$ is the relaxation factor. From \cref{eq:Q_def,eq:pseudo_res} the Jacobian matrix inverse may then be defined as
\begin{equation}
\mathbf{J}^{-1} = B_0\Delta t\left[\mathbf{I} - \frac{2B_0\Delta t}{h}\left(\mathbf{C}_0 -\frac{2\mu}{h}\mathbf{B}_0\right)\right]^{-1},
\end{equation}
and this may then be inserted in the previously defined \emph{p}-multigrid algorithms in place of the RK pseudo-time integration.
\section{FR Operator Definition}
\label{app:fr_op}
The FR operators of first-order derivatives are defined as
\begin{subequations}
\begin{align}
\px{\mathbf{u}_i}{x} &= \frac{2}{h}\left(\mathbf{C}_-\mathbf{u}_{i-1} + \mathbf{C}_0\mathbf{u}_i + \mathbf{C}_+\mathbf{u}_{i+1}\right),\\
\mathbf{C}_- &= \alpha\mathbf{g}_L\mathbf{l}_R^T,\\
\mathbf{C}_0 &= \mathbf{D} - \alpha\mathbf{g}_L\mathbf{l}_L^T - (1-\alpha)\mathbf{g}_R\mathbf{l}_R^T,\\
\mathbf{C}_+ &= (1-\alpha)\mathbf{g}_R\mathbf{l}_L^T.
\end{align}
\end{subequations}
The matrix $\mathbf{D}$ is the nodal differentiation matrix ($\mathbf{D}_{ij}=\pxvar{l_i(x_j)}{x}$), $\mathbf{g}_L$ is the \emph{gradient} of the left correction function at the solution points, and $\mathbf{l}_L$ is the interpolation operator from the solution points to the left face.
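As a concrete illustration of these definitions, the sketch below assembles the operators for the DG (Radau) correction functions at Gauss solution points and verifies that a constant field is differentiated exactly to zero, i.e., $(\mathbf{C}_- + \mathbf{C}_0 + \mathbf{C}_+)\mathbf{1} = \mathbf{0}$; the choices $p=4$ and $\alpha=0.5$, and the Vandermonde-based construction, are illustrative rather than a description of any particular implementation:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as leg

p = 4
xi, _ = leg.leggauss(p + 1)            # Gauss solution points on [-1, 1]
V = np.vander(xi, increasing=True)     # monomial Vandermonde at xi

# Interpolation of the Lagrange basis to the element faces.
lL = np.linalg.solve(V.T, (-1.0) ** np.arange(p + 1))
lR = np.linalg.solve(V.T, np.ones(p + 1))

# Nodal differentiation matrix D_ij = l_j'(xi_i).
Vd = V[:, :-1] * np.arange(1, p + 1)
D = Vd @ np.linalg.inv(V)[1:, :]

# DG recovery: left Radau correction h_L = (-1)^p (P_p - P_{p+1}) / 2.
c = np.zeros(p + 2)
c[p], c[p + 1] = 1.0, -1.0
c *= (-1) ** p / 2.0
gL = leg.legval(xi, leg.legder(c))     # h_L'(xi_j)
gR = -leg.legval(-xi, leg.legder(c))   # by symmetry, h_R(x) = h_L(-x)

alpha = 0.5                            # centrally differenced interface
Cm = alpha * np.outer(gL, lR)
C0 = D - alpha * np.outer(gL, lL) - (1 - alpha) * np.outer(gR, lR)
Cp = (1 - alpha) * np.outer(gR, lL)
print(abs((Cm + C0 + Cp) @ np.ones(p + 1)).max())  # ~1e-14
\end{verbatim}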
The FR methodology for second-order derivatives is to nest the derivatives, treating each first-order derivative in the standard manner~\cite{Castonguay2013,Huynh2009,Ou2011}. In particular, the diffusion equation takes the form
\begin{equation}
\px{u}{t} = \mu\px{q}{x}, \quad \mathrm{where} \quad q = \px{u}{x}.
\end{equation}
Each stage is then solved with the FR methodology, which in the vector form is
\begin{subequations}
\begin{align}
\mathbf{q}_i &= \frac{2}{h}\Big(\mathbf{C}_{-}\mathbf{u}_{i-1} + \mathbf{C}_0\mathbf{u}_i + \mathbf{C}_{+}\mathbf{u}_{i+1}\Big),\\
\px{\mathbf{q}_i}{x} &= \frac{2}{h}\Big(\mathbf{C}_{-}\mathbf{q}_{i-1} + \mathbf{C}_0\mathbf{q}_i + \mathbf{C}_{+}\mathbf{q}_{i+1}\Big).
\end{align}
\end{subequations}
These may be combined to achieve
\begin{multline}
\pxi{2}{\mathbf{u}_i}{x} = \mathbf{Q}_d\mathbf{u}_i = \frac{4}{h^2}\Big(\mathbf{C}_{-}^2\mathbf{u}_{i-2} +
(\mathbf{C}_{-}\mathbf{C}_0 + \mathbf{C}_0\mathbf{C}_{-})\mathbf{u}_{i-1} + \\
(\mathbf{C}_{-}\mathbf{C}_{+} + \mathbf{C}_0^2 + \mathbf{C}_{+}\mathbf{C}_{-})\mathbf{u}_{i} + \\
(\mathbf{C}_0\mathbf{C}_{+} + \mathbf{C}_{+}\mathbf{C}_0)\mathbf{u}_{i+1} +
\mathbf{C}_{+}^2\mathbf{u}_{i+2}\Big).
\end{multline}
In the analysis performed in the main body of this work the following assignments are used for brevity.
\begin{subequations}
\begin{align}
\mathbf{B}_{-2} &= \mathbf{C}_{-}^2\\
\mathbf{B}_{-} &= \mathbf{C}_{-}\mathbf{C}_0 + \mathbf{C}_0\mathbf{C}_{-}\\
\mathbf{B}_{0} &= \mathbf{C}_{-}\mathbf{C}_{+} + \mathbf{C}_0^2 + \mathbf{C}_{+}\mathbf{C}_{-}\\
\mathbf{B}_{+} &= \mathbf{C}_0\mathbf{C}_{+} + \mathbf{C}_{+}\mathbf{C}_0\\
\mathbf{B}_{+2} &= \mathbf{C}_{+}^2
\end{align}
\end{subequations}
\section{Additional Operator Definition for $p$-Multigrid}
\label{app:pmg}
For the convergence of the lower levels of $p$-multigrid there is a source term $\mathbf{r}_i$ that can be substituted to give the pseudo-step update equation:
\begin{subequations}
\begin{align}
\mathbf{u}_{i,n+1,m+1} =&\: \mathbf{P}_i\mathbf{u}_{i,n+1,m} - \mathbf{C}_i\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{i,n-l} - \mathbf{K}_i\Big(\mathbf{T}_{i,0}\mathbf{u}_{i,n+1,0} + \mathbf{d}_{i} \Big) \\
\mathbf{u}_{i,n+1,M} =&\: \mathbf{P}^M_i\mathbf{u}_{i,n+1,0} - \Bigg[\sum^{M-1}_{m=0}\mathbf{P}^m_i\Bigg]\Bigg(\mathbf{C}_i\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{i,n-l} + \mathbf{K}_i\Big(\mathbf{T}_{i,0}\mathbf{u}_{i,n+1,0} + \mathbf{d}_{i} \Big) \Bigg)
\end{align}
\end{subequations}
If we set $i=p-1$, this leads to:
\begin{multline}
\mathbf{u}_{p-1,n+1,M} = \mathbf{P}^M_{p-1}\mathbf{u}_{p-1,n+1,0} - \\
\Bigg[\sum^{M-1}_{m=0}\mathbf{P}^m_{p-1}\Bigg]\Bigg(\mathbf{C}_{p-1}\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{p-1,n-l} + \mathbf{K}_{p-1}\Big(\mathbf{T}_{p-1,0}\mathbf{u}_{p-1,n+1,0} + \mathbf{d}_{p-1} \Big) \Bigg)
\end{multline}
and for compactness we will set:
\begin{equation}
\mathbf{S}_{p-1,M} = \sum^{M-1}_{m=0}\mathbf{P}^m_{p-1}
\end{equation}
Now substitutions may be made such that this update equation can be expressed in terms of the initial solution at level $p$. Hence,
\begin{multline}
\mathbf{u}_{p-1,n+1,M} = \Big(\mathbf{P}^M_{p-1}\pmb{\rho}_{p-1}\mathbf{R}_{p,M} - \mathbf{S}_{p-1,M}\Big[\mathbf{K}_{p-1}\mathbf{T}_{p-1,0}\pmb{\rho}_{p-1}\mathbf{R}_{p,M} + \mathbf{K}_{p-1}\pmb{\rho}_{p-1}\mathbf{T}_{p,M}\Big]\Big)\mathbf{u}_{p,n+1,0} \\
- \mathbf{S}_{p-1,M}\mathbf{C}_{p-1}\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{p-1,n-l}
\end{multline}
This serves to demonstrate that the update equation at level $p-1$ can be expressed as a linear operator acting on the initial condition. However, for the purpose of analytically evaluating the behaviour at each level, this nested approach will be avoided, and instead values such as $\mathbf{u}_{i,n+1,0}$ and $\mathbf{d}_i$ will be made explicit.
For completeness, however, we now finish the simple one-level \emph{V}-cycle.
\begin{multline}
\mathbf{\Delta}_{p-1} = - \bigg(\Big(\mathbf{P}^M_{p-1}-\mathbf{I}\Big)\pmb{\rho}_{p-1}\mathbf{R}_{p,M} - \mathbf{S}_{p-1,M}\Big[\mathbf{K}_{p-1}\mathbf{T}_{p-1,0}\pmb{\rho}_{p-1}\mathbf{R}_{p,M} + \mathbf{K}_{p-1}\pmb{\rho}_{p-1}\mathbf{T}_{p,M}\Big]\bigg)\mathbf{u}_{p,n+1,0} \\
+ \mathbf{S}_{p-1,M}\mathbf{C}_{p-1}\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{p-1,n-l}
\end{multline}
\section{Fourier Analysis}\label{sec:fourier}
We have so far presented the techniques to construct implicit temporal integration applied to ODEs, \cref{eq:linear_pseduo}, and demonstrated the effect of pseudo-stepping on time integration stability. We now wish to use the flux reconstruction scheme for spatial differentiation to provide the eigenvalues. With this complete system, not only can the coupled stability be studied, but it provides a means to calculate the analytic error, which may inform cycle construction. In order to generalise the analysis, we will consider the Fourier analysis of the linear advection-diffusion equation with a modified Bloch trial solution
\begin{subequations}\label{eq:lad}
\begin{align}
\px{u}{t} + \px{u}{x} &= \mu\pxi{2}{u}{x}, \\
u &= \exp{\big(\imath\left(kx-\omega t\right)\big)},\\
\mathbf{u}_n &= \exp{\big(\imath (\mathbf{x}-\omega n\Delta t)\big)},
\end{align}
\end{subequations}
where $k$ is the wavenumber, $\omega=k(1-\imath \mu k)$ is the angular frequency, and $\imath = \sqrt{-1}$. We will now construct the spatial derivatives via the FR methodology~\cite{Huynh2007,Vincent2010} in one-dimension, which in the linear case---with the Bloch wave solution---may be defined as
\begin{subequations}\label{eq:Q_def}
\begin{align}
\px{\mathbf{u}_i}{x} = \mathbf{Q}_a\mathbf{u}_i =&\; \frac{2}{h}\Big(\exp{(-\imath kh)}\mathbf{C}_{-} + \mathbf{C}_0 + \exp{(\imath kh)}\mathbf{C}_+\Big)\mathbf{u}_i,\\
\pxi{2}{\mathbf{u}_i}{x} = \mathbf{Q}_d\mathbf{u}_i =&\;
\frac{4}{h^2}\Big(\exp{(-\imath 2kh)}\mathbf{B}_{-2} +
\exp{(-\imath kh)}\mathbf{B}_{-} + \mathbf{B}_0 + \\
&\quad\quad\:\exp{(\imath kh)}\mathbf{B}_{+} +
\exp{(2\imath kh)}\mathbf{B}_{+2}\Big)\mathbf{u}_i, \notag
\end{align}
\end{subequations}
for linearly transformed elements on a uniform grid with spacing $h$. Further details on the operator definitions can be found in \cref{app:fr_op}. During the FR method, a common interface flux and a common interface value are calculated. We will use $\alpha=(\alpha_a,\alpha_d)$ to denote the degree of upwinding in the advection and diffusion calculations, with $\alpha=1$ being fully upwinded and $\alpha=0.5$ being centrally differenced.
If the FR scheme represented by $\mathbf{Q}$ is full rank, i.e., none of the solution points are collocated and $k\ne 0$, then $\mathbf{Q}$ can be diagonalised as
\begin{equation}\label{eq:q_diag}
\mathbf{Q} = -\mathbf{Q}_a + \mu\mathbf{Q}_d = \imath k\mathbf{W\Lambda}_Q\mathbf{W}^{-1},
\end{equation}
which demonstrates that FR has the capacity for a solution composed of $p+1$ unique eigenvalues. To now apply FR as the source of the eigenvalues for the integration scheme, we first use a result of Ketcheson~et~al.~\cite{Ketcheson2012}, whereby it is possible to write the stability polynomial of a temporal integration method with $r$ stages as
\begin{equation}
P\bigg(\lambda\Delta\tau,\frac{\Delta\tau}{\Delta t}\bigg) = \sum^{r}_{j=0}\gamma_j\bigg(\frac{\Delta\tau}{\Delta t}\bigg)(\lambda\Delta\tau)^j,
\end{equation}
for an $r$-stage RK scheme. Hence, we can define the partial pseudo-time update equation as
\begin{equation}
\mathbf{P} = \sum^{r}_{j=0}\gamma_j(\Delta\tau\mathbf{Q})^j = \mathbf{W}\Bigg[\sum^{r}_{j=0}\gamma_j(\imath k\Delta\tau\mathbf{\Lambda}_Q\big)^j\Bigg]\mathbf{W}^{-1}.
\end{equation}
The BDF source term is also a function of $\lambda\Delta\tau$ and can similarly be found in terms of $\mathbf{Q}$ using a polynomial fit of $C$ as in \cref{eq:ps1}. Hence,
\begin{equation}
C = \sum^{r-1}_{j=0}\kappa_j(\lambda\Delta\tau)^j, \quad \mathrm{and} \quad \mathbf{C} = \mathbf{W}\Bigg[\sum^{r-1}_{j=0}\kappa_j\big(\imath k\Delta\tau\mathbf{\Lambda}_Q\big)^j\Bigg]\mathbf{W}^{-1}.
\end{equation}
The full pseudo-time update equation is then
\begin{equation}
\mathbf{u}_{n+1,m+1} = \mathbf{P}\mathbf{u}_{n+1,m} - \mathbf{C}\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{n-l}.
\end{equation}
To confirm $\mathbf{P}$ and $\mathbf{C}$ are correctly defined the following relation should hold
\begin{equation}\label{eq:sum_update}
\mathbf{P} + \mathbf{C} = \mathbf{R},
\end{equation}
for the ERK update matrix, $\mathbf{R}$. With $\mathbf{P}$ and $\mathbf{C}$ defined, the $M^\mathrm{th}$ value can be expressed in terms of the initial value of the pseudo-stepping as
\begin{equation}\label{eq:update}
\mathbf{u}_{n+1,M} = \mathbf{P}^M\mathbf{u}_{n+1,0} - (\mathbf{I} -\mathbf{P})^{-1}(\mathbf{I} - \mathbf{P}^M)
\mathbf{C}\Bigg(\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{n-l}\Bigg).
\end{equation}
Again, simplification was made through a geometric series and its matrix analogue, which requires the assumption that the spectral radius of $\mathbf{P}$ is less than unity, i.e., $\rho(\mathbf{P}) < 1$. This can be verified for suitable pseudo-time steps, coupled with the previous assumption that $\Delta\tau \ll \Delta t$.
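The structure of this iteration is perhaps most easily seen in the scalar case. The sketch below applies SSPRK3 pseudo-stepping to the BDF2 residual of $\textup{d} u/\textup{d} t = qu$ and confirms convergence to the direct BDF2 update; the values of $q$, $\Delta t$, and $\Delta\tau$ are illustrative choices:
\begin{verbatim}
import numpy as np

q, dt, dtau = -1.0 + 2.0j, 0.2, 0.02
u_n = 1.0
u_nm1 = np.exp(-q * dt) * u_n   # consistent history: u(t) = exp(q t)

def residual(u):
    # R* = q u - (3u - 4 u_n + u_{n-1}) / (2 dt)   (BDF2 in physical time)
    return q * u - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)

def ssprk3(u):
    k1 = u + dtau * residual(u)
    k2 = 0.75 * u + 0.25 * (k1 + dtau * residual(k1))
    return u / 3.0 + (2.0 / 3.0) * (k2 + dtau * residual(k2))

u = u_n                          # u_{n+1,0} = u_n
for m in range(200):
    u = ssprk3(u)

exact_bdf2 = (4.0 * u_n - u_nm1) / (3.0 - 2.0 * q * dt)
print(abs(u - exact_bdf2))       # ~1e-16 once pseudo-time has converged
\end{verbatim}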
The dual-time update may then be written as
\begin{subequations}
\begin{align}
\mathbf{u}_{n+1,M} =&\: \left[\mathbf{P}^M - (\mathbf{I} -\mathbf{P})^{-1}(\mathbf{I} - \mathbf{P}^M)\mathbf{C}\left(\sum^{s-1}_{l=0}B_{l+1}\exp{\left(\imath \omega l\Delta t\right)} \right)\right]\mathbf{u}_{n+1,0}, \\
\mathbf{u}_{n+1,M} =&\: \mathbf{R}_M\mathbf{u}_{n+1,0}.
\end{align}
\end{subequations}
The previous solution needed for the BDF source term is taken as the analytic solution from \cref{eq:lad}, which is consistent with a time history of fully converged solutions. Due to the discretisation imposed on the solution, the system has a Nyquist limit on the maximum wavenumber, which, due to the coupled space and time discretisations, is
\begin{equation}\label{eq:nyquist}
k_\mathrm{Nq} = \min{\left(\frac{\pi}{\Delta t},\frac{(p+1)\pi}{h}\right)} \quad \mathrm{and}\quad \hat{k}=\frac{\pi k}{k_\mathrm{Nq}},
\end{equation}
where $\hat{k}$ is the normalised wavenumber.
The exact solution from the applied Bloch wave can be projected into the solution space of FR using the eigenvectors of $\mathbf{Q}$ to obtain the vector of mode weights, $\pmb{\beta}$, via
\begin{equation}
\mathbf{u}_0 = \exp{\left(\imath kx_j\right)}\mathbf{W}\pmb{\beta}.
\end{equation}
This may then be substituted into \cref{eq:update} to give the fully discrete error, written as
\begin{subequations}
\begin{align}
\mathbf{e} =&\: \mathbf{u}_{n+1,M} - \mathbf{u}_{n+1}, \\
=&\: \exp{\big(\imath (kx_j-\omega n\Delta t)\big)}\left(\mathbf{R}_M -\exp{(-\imath \omega\Delta t)}\mathbf{I}\right)\mathbf{W}\pmb{\beta}.
\end{align}
\end{subequations}
\begin{figure}[tbhp]
\centering
\subfloat[Explicit, $\Delta t=0.05$]{\label{fig:frdg4_error} \adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDG4_explicit_error.tex}}}
~
\subfloat[BDF2, $\Delta t = 0.2$, $\Delta\tau=0.05$]{\label{fig:frdg4_error_bdf} \adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDG4_BDF2_error_tt4.tex}}}
\caption{\label{fig:frdg4_error_comp}Error comparison for FRDG, $p=4$, and SSPRK3 explicit scheme, with and without dual-time stepping for pure advection.}
\end{figure}
The evolution of the Euclidean norm of the error calculated using this method is shown in \cref{fig:frdg4_error_comp}, where comparison is made between the use of an explicit scheme and dual-time stepping for FR with upwinded interfaces. The physical time step size in the dual-time error was chosen such that the temporal and spatial Nyquist wavenumbers were equivalent. It is evident that at low wavenumbers the error is equivalent, but at high wavenumbers, the dispersion and dissipation associated with the scheme causes a modification to the pseudo-time steady state, and so large errors are observed.
If we now look to characterise the maximum time step sizes for the explicit and coupled system, due to the presence of source terms in the update equation, the traditional von Neumann stability criteria has to be modified. Therefore, the set of stable values of $\Delta\tau$ may be defined as
\begin{equation}
\Delta\tau_\mathrm{stable}(\Delta t) = \Big\{\Delta\tau\in\mathbb{R}_+ : \: \rho\big(\mathbf{R}_M(\Delta\tau,\Delta t)\big) \leqslant \Big|\sum^{s-1}_{l=0}B_{l+1}\exp{\big(\imath \omega l\Delta t\big)}\Big|\Big\}.
\end{equation}
Hence, the maximum stable step size is $\Delta\tau_\mathrm{max} = \sup\Delta\tau_\mathrm{stable}$. We will also define $\Delta\tau_\mathrm{max,A}$ to signify the maximum step size for pure advection.
\begin{figure}[tbhp]
\centering
\subfloat[Maximum time step size for advection-diffusion FRDG with explicit SSPRK3 temporal integration, and $\alpha=(1,0.5)$.]
{\label{fig:FRDG_ad_cfl}\adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDG_mu_cfl.tex}}}
~
\subfloat[{Maximum pseudo time step size for upwinded advection with FRDG $p=4$, BDF2, and SSPRK3 with pseudo step number $m$ and $k\in(0,k_\mathrm{Nq}]$.}]
{\label{fig:FRDG_BDF_CFL}\adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRp4_BDF_CFL.tex}}}
\caption{\label{fig:CFL}Maximum time step size for some configurations for FRDG.}
\end{figure}
\cref{fig:FRDG_ad_cfl} presents the CFL limits of the explicit system, i.e. without dual-time stepping, and makes it clear that for all orders the absolute value of the maximal explicit time-step becomes severely limited for low Reynolds numbers. Turning to the coupled system, the effect of the physical time step size on the maximum pseudo-step size is presented in \cref{fig:FRDG_BDF_CFL}. Interestingly, it can be seen that the first pseudo-step has a more restrictive maximum step size, and from the error in \cref{fig:frdg4_error_bdf}, this can be attributed to the contraction being highest initially. Therefore, to prevent instabilities initially entering the solution, smaller pseudo-time steps are required at first. From \cref{fig:ERK5dgp4_BDF2_stab}, as the ratio $\Delta t/\Delta\tau$ is reduced the stability region shrinks, and this is seen here in the CFL limit. For $\Delta t>0.2$, the physical time step dominates the Nyquist limit, and it is around this point that a sharp change in the $m=1$ case is seen. After this point, as $\Delta t$ is continually increased, the range of wavenumbers decreases, and the stability is observed to increase. This is concurrent with the initial error in the BDF approximation being largest at the highest wavenumbers; with further iterations this behaviour is not seen, as the poor initial approximation of the temporal derivative from BDF---due to the use of $u_{n+1,0}=u_n$---is quickly rectified.
\subsection{\emph{p}-Multigrid}
A key component of the multigrid methodology is the residual which was defined in \cref{eq:ps_residual} for the $M^\mathrm{th}$ pseudo step in terms of the zeroth step. Through applying the FR operator for the spatial discretisation, we may write the residual of $u_{n+1,M}$ as:
\begin{subequations}\label{eq:pseudo_res}
\begin{align}
\px{\mathbf{u}_{i,n+1,M}}{\tau} =&\: -\bigg(\frac{\mathbf{I}}{\Delta tB_0} + \mathbf{Q}_i\bigg)\mathbf{u}_{i,n+1,M} - \underbrace{\frac{1}{\Delta t B_0}\sum_{l=0}^{s-1}B_{l+1}\exp{\big(\imath \omega l\Delta t\big)}}_{C_B}\mathbf{u}_{i,n+1,0} \\
=&\: \mathbf{T}_{i}\mathbf{u}_{i,n+1,M} - C_B\mathbf{u}_{i,n+1,0}.
\end{align}
\end{subequations}
For FR \emph{p}-multigrid, the restriction and prolongation matrices can be straightforwardly defined modally as
\begin{equation}
\hat{\pmb{\rho}}_i = \hat{\mathbf{I}} \quad \mathrm{and} \quad \hat{\pmb{\pi}}_i = \hat{\mathbf{I}}^T,
\end{equation}
which can be projected to a nodal representation by using the Vandermonde matrix and the appropriate solution points for the degree.
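For illustration, these transfer operators may be assembled as follows; the Gauss solution points, Legendre modal basis, and degrees used are illustrative choices:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as leg

def vandermonde(q):
    # Columns are the Legendre modes P_0..P_q at the q+1 Gauss points.
    xi, _ = leg.leggauss(q + 1)
    return np.stack([leg.legval(xi, np.eye(q + 1)[k])
                     for k in range(q + 1)], axis=1)

p = 4
Vh, Vl = vandermonde(p), vandermonde(p - 1)
Ihat = np.eye(p, p + 1)                # modal restriction: drop the top mode
rho = Vl @ Ihat @ np.linalg.inv(Vh)    # nodal restriction, degree p -> p-1
pi_ = Vh @ Ihat.T @ np.linalg.inv(Vl)  # nodal prolongation, degree p-1 -> p

# A degree p-1 field survives the round trip exactly.
u = leg.legval(leg.leggauss(p + 1)[0], [0.3, -1.2, 0.5, 0.1])
print(abs(pi_ @ (rho @ u) - u).max())  # ~1e-15
\end{verbatim}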
It should be noted again that $\mathbf{r}_p = 0$. To proceed, \cref{eq:ps_rsource} has to be converted to a matrix representation, and so the procedure applied to $C$ is applied to $K$ using $\mathbf{Q}$. This leads to the update equation
\begin{equation}
\mathbf{u}_{i,n+1,m+1} = \mathbf{P}_i\mathbf{u}_{i,n+1,m} - \mathbf{C}_i\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{i,n-l} - \mathbf{K}_i\mathbf{r}_i,
\end{equation}
which similarly may be defined at the $M^\mathrm{th}$ step as
\begin{subequations}
\begin{align}
\mathbf{u}_{i,n+1,M} =&\: \mathbf{P}^M_i\mathbf{u}_{i,n+1,0} - \bigg[\sum^{M-1}_{m=0}\mathbf{P}^m_i\bigg]\bigg(\mathbf{C}_i\sum^{s-1}_{l=0}B_{l+1}\mathbf{u}_{i,n-l} + \mathbf{K}_i\mathbf{r}_i\bigg),\\
\mathbf{u}_{i,n+1,M} =&\: \mathcal{S}(M,\mathbf{P},\mathbf{C},\mathbf{K},\mathbf{r}_i,\mathbf{u}_{i,n+1,0},\mathbf{u}_i).
\end{align}
\end{subequations}
\begin{figure}[tbhp]
\centering
\subfloat[][One level \emph{V}-cycle.]{\label{fig:simple_v} \adjustbox{width=0.3\linewidth,valign=b}{\input{./Figs/simple_v_cycle.tex}}}
~
\subfloat[][Multi-level $\mathrm{\emph{W}}_{p-2}\,$-cycle. ]{\label{fig:simple_w}\adjustbox{width=0.3\linewidth,valign=b}{\input{./Figs/simple_w_cycle.tex}}}
~
\subfloat[][Multi-level $\mathrm{\emph{V}}_\mathrm{AP}$-cycle. ]{\label{fig:ap_v}\adjustbox{width=0.3\linewidth,valign=b}{\input{./Figs/ap_v_cycle.tex}}}
\caption{\label{fig:cycle_config}\emph{p}-Multigrid cycle configuration.}
\end{figure}
We will now begin by defining the steps in a simple \emph{p}-multigrid \emph{V}-cycle. Diagrammatically, this is shown in \cref{fig:simple_v} with the steps presented in \cref{tab:simplev_steps}. This procedure may be generalised to arbitrary cycles such as \cref{tab:genv_steps} for \emph{V}-cycles with multiple stages.
\begingroup
\setlength{\tabcolsep}{2pt}
\begin{table}[tbhp]
\overfullrule=0pt
\centering
\caption{\label{tab:simplev_steps}Simple $V$-cycle steps.}
\begin{tabular}{rclr}
\toprule
$\mathbf{u}_{p,n+1,M}$ &$=$& $\mathbf{R}_{p,M}\mathbf{u}_{p,n+1,0}$ & \rdelim\}{5}{0mm}[{\rotatebox[origin=c]{-90}{Restriction}}]\\
$\mathbf{d}_{p}$ &$=$& $\mathbf{r}_p - (\mathbf{T}_{p}\mathbf{u}_{p,n+1,M} - C_B\mathbf{u}_{p,n+1,0})$ \\
$\mathbf{u}_{p-1,n+1,0}$ &$=$& $\pmb{\rho}_{p-1}\mathbf{u}_{p,n+1,M}$\\
$\mathbf{d}_{p-1}$ &$=$& $\pmb{\rho}_{p-1}\mathbf{d}_p$ \\
$\mathbf{r}_{p-1}$ &$=$& $\mathbf{T}_{p-1}\mathbf{u}_{p-1,n+1,0} - C_B\pmb{\rho}_{p-1}\mathbf{u}_{p,n+1,0} + \mathbf{d}_{p-1}$ \\
\midrule
$\mathbf{u}_{p-1,n+1,M}$ &$=$& $\mathcal{S}(M,\mathbf{P}_{p-1},\mathbf{C}_{p-1},\mathbf{K}_{p-1},\mathbf{r}_{p-1},\mathbf{u}_{p-1,n+1,0},\mathbf{u}_{p-1})$ & \rdelim\}{5}{0mm}[{\rotatebox[origin=c]{-90}{Prolong.}}]\\
$\mathbf{\Delta}_{p-1}$ &$=$& $\mathbf{u}_{p-1,n+1,0} - \mathbf{u}_{p-1,n+1,M}$ \\
$\mathbf{\Delta}_{p}$ &$=$& $\pmb{\pi}_p\mathbf{\Delta}_{p-1}$ \\
$\mathbf{v}_{p,n+1,0}$ &$=$& $\mathbf{u}_{p,n+1,M} - \mathbf{\Delta}_p$ \\
$\mathbf{u}_{p,n+1}$ &$=$& $\mathbf{R}_{p,M}\mathbf{v}_{p,n+1,0}$ \\
\bottomrule \\
\end{tabular}
\end{table}
\endgroup
\begingroup
\setlength{\tabcolsep}{2pt}
\begin{table}[tbhp]
\overfullrule=0pt
\centering
\caption{\label{tab:genv_steps}General $V$-cycle steps.}
\begin{tabular}{rcll}
for $l\in\{p, \dots, (l_\mathrm{min}+1)\}$: & & & \\ \toprule
$\mathbf{u}_{l,n+1,M}$ &$=$& $\mathcal{S}(M,\mathbf{P}_{l},\mathbf{C}_{l},\mathbf{K}_{l},\mathbf{r}_{l},\mathbf{u}_{l,n+1,0},\mathbf{u}_{l})$ & \rdelim\}{5}{0mm}[{\rotatebox[origin=c]{-90}{Restriction}}]\\
$\mathbf{d}_{l}$ &$=$& $\mathbf{r}_l - (\mathbf{T}_{l}\mathbf{u}_{l,n+1,M} - C_B\mathbf{u}_{l,n+1,0})$ \\
$\mathbf{u}_{l-1,n+1,0}$ &$=$& $\pmb{\rho}_{l-1}\mathbf{u}_{l,n+1,M}$\\
$\mathbf{d}_{l-1}$ &$=$& $\pmb{\rho}_{l-1}\mathbf{d}_l$ \\
$\mathbf{r}_{l-1}$ &$=$& $\mathbf{T}_{l-1}\mathbf{u}_{l-1,n+1,0} - C_B\pmb{\rho}_{l-1}\mathbf{u}_{l,n+1,0} + \mathbf{d}_{l-1}$ \\
for $l \in\{l_\mathrm{min}, \dots, (p-1)\}$: & & & \\\midrule
$\mathbf{u}_{l,n+1,M}$ &$=$& $\mathcal{S}(M,\mathbf{P}_{l},\mathbf{C}_{l},\mathbf{K}_{l},\mathbf{r}_{l},\mathbf{u}_{l,n+1,0},\mathbf{u}_{l})$ & \rdelim\}{4}{0mm}[{\rotatebox[origin=c]{-90}{Prolong.}}] \\
$\mathbf{\Delta}_{l}$ &$=$& $\mathbf{u}_{l,n+1,0} - \mathbf{u}_{l,n+1,M}$ \\
$\mathbf{\Delta}_{l+1}$ &$=$& $\pmb{\pi}_{l+1}\mathbf{\Delta}_{l}$ \\
$\mathbf{u}_{l+1,n+1,0}$ &$=$& $\mathbf{u}_{l+1,n+1,M} - \mathbf{\Delta}_{l+1}$ \\ \bottomrule
$\mathbf{u}_{p,n+1}$ &$=$& $\mathcal{S}(M,\mathbf{P}_{p},\mathbf{C}_{p},\mathbf{K}_{p},\mathbf{r}_{p},\mathbf{u}_{p,n+1,0},\mathbf{u}_{p})$ & \\
\end{tabular}
\end{table}
\endgroup
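The bookkeeping of \cref{tab:genv_steps} maps directly onto code. The structural sketch below implements the restriction and prolongation legs with the level operators, transfer matrices, and smoother supplied as illustrative stand-ins; it is intended only to make the data flow of the cycle explicit, and does not reproduce the smoother source terms of \cref{eq:ps_rsource}:
\begin{verbatim}
import numpy as np
from types import SimpleNamespace

def smooth(lv, l, u0, r, n=5, dtau=0.1):
    # Stand-in smoother: n explicit pseudo-steps on the level residual.
    u = np.array(u0, dtype=float)
    for _ in range(n):
        u = u + dtau * (lv[l].T @ u - lv[l].C_B * u0 - r)
    return u

def v_cycle(u0_fine, lv, l_min=0):
    p = len(lv) - 1
    u0, uM = {p: u0_fine}, {}
    r, d = {p: np.zeros_like(u0_fine)}, {}
    for l in range(p, l_min, -1):           # restriction leg
        uM[l] = smooth(lv, l, u0[l], r[l])
        d[l] = r[l] - (lv[l].T @ uM[l] - lv[l].C_B * u0[l])
        u0[l-1] = lv[l].rho @ uM[l]
        d[l-1] = lv[l].rho @ d[l]
        r[l-1] = (lv[l-1].T @ u0[l-1]
                  - lv[l].C_B * (lv[l].rho @ u0[l]) + d[l-1])
    for l in range(l_min, p):               # prolongation leg
        uM[l] = smooth(lv, l, u0[l], r[l])
        u0[l+1] = uM[l+1] - lv[l+1].pi @ (u0[l] - uM[l])
    return smooth(lv, p, u0[p], r[p])

# Two-level demo with hypothetical operators (T = -I so smoothing contracts).
lv = [SimpleNamespace(T=-np.eye(1), C_B=1.0, rho=None, pi=None),
      SimpleNamespace(T=-np.eye(2), C_B=1.0,
                      rho=np.full((1, 2), 0.5), pi=np.full((2, 1), 1.0))]
print(v_cycle(np.array([1.0, 2.0]), lv))
\end{verbatim}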
From the procedure defined in \cref{tab:simplev_steps,tab:genv_steps}, it can be understood that all the steps may be framed as an operation on the initial solution $\mathbf{u}_{p,n+1,0}$. It is significantly simpler to treat some steps independently and pass the result rather than formulating a single operator to act on $\mathbf{u}_{p,n+1,0}$. However, to this end, all the matrix operators at each step may be written as polynomials in terms of $\mathbf{Q}$, and in consequence, the eigenvalues of the whole system may be found if the Bloch wave is again applied
\begin{subequations}
\begin{align}
\mathbf{u}_{p,n+1} =&\: \mathbf{S}\mathbf{u}_{p,n}, \\
=&\: \exp{(-\imath k\Delta t)}\imath k\mathbf{W}\mathbf{\Lambda}_S\mathbf{W}^{-1}\mathbf{u}_{p,n}, \\
\exp{(\imath k\Delta t)}\mathbf{W}^{-1}\mathbf{u}_{p,n+1} =&\: \imath k\mathbf{\Lambda}_S\mathbf{W}^{-1}\mathbf{u}_{p,n},
\end{align}
\end{subequations}
where $\mathbf{S}$ is the transformation of the full system. This will enable us to examine how the energy is distributed among the spatial modes, as a result of $\mathbf{W}$ being constant. Furthermore, we will define the contraction factor as
\begin{equation}
\gamma = \left[\frac{\|\mathbf{e}_{m+1}\|_2 - \|\mathbf{e}_{m}\|_2}{p+1}\right]^{(n_{sp}+n_{sp}^\prime)^{-1}},
\end{equation}
where $n_{sp}$ is the number of smoothing iterations at the finest level applied at the beginning of the cycle and $n_{sp}^\prime$ is equivalently the number of smoothing iterations at the end of the cycle.
\begin{figure}[tbhp]
\centering
\subfloat[Error.]{\label{fig:frdg4_pmg_ad_err} \adjustbox{width=0.48\linewidth,valign=t}{\input{./Figs/FRDGp4_pmg_ad_error_comp.tex}}}
~
\subfloat[Contraction.]{\label{fig:frdg4_pmg_ad_con} \adjustbox{width=0.48\linewidth,valign=t}{\input{./Figs/FRDGp4_pmg_ad_cont_comp.tex}}}
\caption{\label{fig:frdg4_pmgV_ad_comp}Error comparison of dual time FRDG, $p=4$, with SSPRK3 and BDF2 for constant $\Delta t/\Delta\tau=10$, $\Delta\tau=\num{7e-3}$, $\mu=0.5$, and $\alpha=(1,0.5)$. For $\hat{k}=\pi/8$ (\emph{solid}), and $\pi/16$ (\emph{dashed}).}
\end{figure}
\cref{fig:frdg4_pmgV_ad_comp} exemplifies the effect of \emph{p}-multigrid on the convergence of the dual-time scheme. Here, advection-diffusion was considered for several cycles, where $n_s$ is the number of SSPRK3 smoothing steps per level. The \emph{V}-cycle with subscript AP has additional prolongation smoothing steps, with the prolongation smoothing set to three. An example cycle is given in \cref{fig:ap_v}. The pseudo time shown for the \emph{p}-multigrid cases is the cumulative time at the finest \emph{p} level, i.e., $\tau=(n_{sp}+n_{sp}^\prime)n_\mathrm{cycle}\Delta\tau$. In all cases, \emph{p}-multigrid increased the rate of convergence; however, it is clear that fewer smoothing steps during the restriction portion of the cycle were beneficial to convergence. A corollary observation is that making the \emph{V}-cycle asymmetric with additional prolongation smoothing could further increase convergence. This is due to the larger differences in pseudo time between levels causing larger deficit source terms which, upon prolongation, lead to the need for more smoothing steps so that they are adequately relaxed into the solution. From the \emph{V}-cycle with $n_s=3$ in \cref{fig:frdg4_pmgV_ad_comp}, it is clear that the prolongation smoothing requirement does not grow linearly with the restriction smoothing; otherwise, we would expect to see results closer to the $n_s=1$ case.
It was also observed in all cases that the overall number of iterations required to converge is limited by the lower wavenumbers. This result may be expected and can be understood from the longer half-lives of these waves due to the lower dissipation when considered in the fully discrete form. As the viscosity was decreased, the effectiveness of \emph{p}-multigrid was found to decrease. However, additional prolongation was still found to be effective.
\begin{figure}[tbhp]
\centering
\subfloat[Primary mode.]{\label{fig:frdg4_pmg_ad_beta} \adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDGp4_BDF2_pmg_beta.tex}}}
~
\subfloat[Secondary mode.]{\label{fig:frdg4_pmg_ad_beta2} \adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDGp4_BDF2_pmg_beta_sec.tex}}}
\caption{\label{fig:frdg4_beta_ad_comp}Primary and secondary mode energy at $\hat{k}=\pi/16$ for FRDG $p=4$, BDF2, and SSPRK3, with $\Delta\tau=0.007$, $\Delta t/\Delta\tau=10$, $\mu=0.5$, and $\alpha=(1,0.5)$. Note the change in \emph{y}-axis scaling between figures.}
\end{figure}
Further insight as to why the additional prolongation \emph{V}-cycles have improved contraction rates can be gained from inspection of the way in which the solution energy is distributed among the modes of the spatial system. \cref{fig:frdg4_beta_ad_comp} shows the energy in the primary and secondary modes for several cycle configurations. The additional prolongation steps, in both cases, cause a greater redistribution of energy from the primary mode to the secondary mode. When considered with the knowledge that the secondary modes have shorter half-lives~\cite{Trojak2018}, the mechanism of convergence acceleration is understood to come from this redistribution. The additional restriction smoothing steps in the $n_s=3$ case diminish the redistribution, and hence this is why the contraction factor in \cref{fig:frdg4_pmg_ad_con} shows poorer acceleration. If additional restriction smoothing alone is considered, then the effect on redistribution compared to the $n_s=1$ case is negligible, which is consistent with the redistribution being due to the prolongation correction, as may have been anticipated.
\begin{figure}[tbhp]
\centering
\subfloat[BDF2.]
{\label{fig:FRDG_bdf2_cont}\adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDG_BDF2_ad_contraction.tex}}}
~
\subfloat[BDF3.]
{\label{fig:FRDG_bdf3_cont}\adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDG_BDF3_ad_contraction.tex}}}
\caption{\label{fig:FRDG_cont}Initial contraction factor for FRDG with BDF and SSPRK3 at $k=(p+1)\pi/16$, $\Delta\tau=0.078\Delta t_\mathrm{max,A}$, $\mu=0.1$, and $\alpha=(1,0.5)$. With spatial orders $p=4$ (\emph{solid}) and $p=3$ (\emph{dashed}).}
\end{figure}
Subsequently, for a constant wavenumber, values of $\Delta t/\Delta\tau$ were swept through and the contraction was found, the results of which are presented in \cref{fig:FRDG_cont}. Here, we have only used a \emph{V}-cycle with additional prolongation as this offered the best performance. From these data, the diminishing returns of using multigrid to accelerate dual-time for large ratios of $\Delta t/\Delta\tau$ are seen. This is due to the large time scales in the hyperbolic component of the system becoming dominant, and therefore the dual-time convergence is primarily dependent on the number of iterations. We have marked the points on each diagram where the ratio of contraction between the base scheme and \emph{p}-multigrid is largest. A move from BDF2 to BDF3, for both spatial orders tested, resulted in the maximal point increasing by ${\sim}20\%$.
As a point of comparison, the element Jacobi method coupled to BDF was also considered, both with and without \emph{p}-multigrid acceleration. A brief description, and associated definitions of this technique are included in \cref{app:ej}, and after the definition of the EJ matrix, the earlier derivations may be followed to apply the \emph{p}-multigrid methodology.
\begin{figure}[tbhp]
\centering
\subfloat[$p=3$.]
{\label{fig:FRDG3_ej_cont}\adjustbox{width=0.48\linewidth,valign=t}{\input{./Figs/FRDGp3_BDF2_EJ_ad_contraction.tex}}}
~
\subfloat[$p=4$.]
{\label{fig:FRDG4_ej_cont}\adjustbox{width=0.48\linewidth,valign=t}{\input{./Figs/FRDGp4_BDF2_EJ_ad_contraction.tex}}}
\caption{\label{fig:FRDG_ej_cont}Initial contraction factor for FRDG with BDF2 and EJ at $k=\pi/100$, $\kappa=0.5$, $\mu=0.1$, and $\alpha=(1,0.5)$.}
\end{figure}
The contraction factor for the element Jacobi method is presented in \cref{fig:FRDG_ej_cont} for BDF2 at two spatial orders. Similar trends to those observed for the dual-time scheme are seen here, with additional prolongation smoothing being favourable. However, at higher spatial orders and lower time steps, additional prolongation and the $n_s=3$ cycle saw a reduction in their benefit. As the $n_s=1$ cycle maintained the improved contraction, this degradation is due to one smoothing step being sufficient in this less stiff range of $\Delta t$.
\subsection{\emph{p}-Multigrid Acceleration}
As has been confirmed here, \emph{p}-multigrid does not provide as great a benefit when accelerating the convergence of the coupled ERK-BDF dual-time system for hyperbolic equations. This is evident when considering the contraction factor in \cref{fig:FRDG_cont} in the limit as $\Delta t/\Delta\tau\rightarrow\infty$, where the hyperbolic time scales become dominant, whereas \emph{p}-multigrid provides a greater degree of acceleration for elliptic-hyperbolic equations. As has been discussed in the literature, this is due to the local dependency of hyperbolic equations compared to the global dependency of elliptic problems~\cite{Katzer1991}, and it follows that the convergence of the hyperbolic components here is dependent on the convection time of waves in the system. This is not to say that \emph{p}-multigrid cannot be effective for hyperbolic problems. For example, it can be effective when employing Newton--Krylov approaches with large time steps, as in that limit the system becomes elliptic.
One method to further accelerate the dual-time \emph{p}-multigrid investigated here is a procedure where the pseudo-time step was increased at coarser \emph{p}-multigrid levels, see Loppi~et~al.~\cite{Loppi2019}. In this method, a factor was introduced such that the pseudo-time step is defined as
\begin{equation}
\Delta\tau_i = \Delta\tau(f_\tau^{p-i}),
\end{equation}
where $\Delta\tau_i$ is the pseudo-time step at the degree $i$ \emph{p}-multigrid level. When setting $f_\tau$, care must be taken such that the CFL limits imposed through \cref{fig:FRDG_ad_cfl,fig:FRDG_BDF_CFL} are not exceeded.
This method allows more rapid advection---as well as diffusion---of waves at the coarser levels. Implicitly, these waves are of lower frequency and consequently are more challenging to converge due to their large length and time scales. This may, however, pose a problem if the corrections are not sufficiently relaxed into the finer multigrid levels: the corrections are likely to be large due to the differing pseudo-time steps, allowing error to accumulate in the solution.
\begin{figure}[tbhp]
\centering
\adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/FRDGp4_pmg_ad_fc_error_comp.tex}}
\caption{\label{fig:fc_pmg}Error comparison of dual time FRDG with $f_\tau$, $p=4$, with SSPRK3 and BDF2 for constant $\Delta t/\Delta\tau=10$, $\Delta\tau=\num{7e-3}$, $\mu=0.5$, and $\alpha=(1,0.5)$. For $\hat{k}=\pi/8$ (\emph{solid}), and $\pi/16$ (\emph{dashed}).}
\end{figure}
\cref{fig:fc_pmg} presents the results of applying $f_\tau=1.1$ to the \emph{p}-multigrid cycle with one smoothing step per stage. From this data, it may be concluded that the rate of convergence is further increased by $f_\tau$. However, as was hypothesised, insufficient relaxation during prolongation causes a build-up of error. This may be mollified by additional prolongation, with the result here using three smoothing steps per prolongation stage, but significant steady-state error is still present.
\section{The FR Approach}\label{sec:fr}
The analysis of the methods to be presented will at times require the explicit coupling of temporal integration methods to a spatial scheme to produce the eigenvalues of the system. The spatial scheme used is the FR~\cite{Huynh2007,Vincent2010} method which lies within the set of discontinuous spectral element methods. For the purpose of this analysis, the FR method is used for approximating the first derivative of a function, with second derivatives handled through the introduction of auxiliary variables. Let us set the function $f$ such that $f(u): \mathbb{R} \mapsto \mathbb{R}$, and the domain of the spatial variable $x\in\Omega$. The spatial domain is subdivided into sub-domains $\Omega_i$, such that $\bigcup^N \Omega_i = \Omega$ and $\Omega_i \cap \Omega_j = \emptyset$ if $i \neq j$. In one dimension, we define a reference element and variable, $\xi \in \hat{\Omega} = [-1,1]$, for which we introduce the Jacobian $J_j: \Omega_j \mapsto \hat{\Omega}$.
If we have a solution $u(x)$ and function $f(u)$, FR forms a degree $p$ polynomial approximation of $f$ in $\Omega_i$ transformed to the reference space via the values at a set of $p+1$ nodal points $\xi_{j}$. We denote the discontinuous approximation as $\hat{f}^{\delta}_i$
\[
\hat{f}^{\delta}_i = \sum^p_{j=0} f\big(\hat{u}^{\delta}_i(\xi_j)\big)l_j(\xi) \quad \mathrm{where} \quad l_j = \prod^{p}_{\substack{k=0 \\k\neq j}}\frac{\xi-\xi_k}{\xi_j-\xi_k},
\]
which is similarly defined for $\hat{u}^{\delta}_i$. The FR methodology is then concerned with updating this polynomial such that the approximation is $C^0$ continuous between elements. This is achieved via
\[
\hat{f}^{\delta C} = \hat{f}^{\delta} + (\hat{f}^{\delta I}_{i,L} - \hat{f}^{\delta}_{i,L})h_L + (\hat{f}^{\delta I}_{i,R} - \hat{f}^{\delta}_{i,R})h_R,
\]
where $\hat{f}^{\delta}_{i,L} = \hat{f}^{\delta}_i(-1)$ is the interpolated value of $\hat{f}^{\delta}_i$ at the left interface, and $\hat{f}^{\delta I}_{i,L}$ is the common interface function value at the left interface. Similar definitions follow for the right interface. For hyperbolic problems, the interface flux may be found by using information from the adjacent cell to pose a Riemann problem. There are many appropriate methods for the approximation or solution of these problems~\cite{Toro2009}, and it has also been demonstrated~\cite{Jameson2011} that the \emph{E-flux} condition is important in the proof of stability. The functions $h_L$ and $h_R$ are correction functions with the boundary conditions $h_L(-1) = h_R(1) = 1$ and $h_L(1) = h_R(-1) = 0$, and if they are set to left and right Radau polynomials then a nodal DG scheme is recovered~\cite{Huynh2007}. With this, the spatial derivative can straightforwardly be obtained, and if $h_L \in \mathbb{P}^{p+1}$, then it is possible for $\partial\hat{f}^{\delta C}_i/\partial\xi \in \mathbb{P}^p$.
\section{Introduction}\label{sec:intro}
The artificial compressibility method~(ACM)~\cite{Chorin1967} is a means of solving the incompressible Navier--Stokes equations in a manner that is compatible with compressible solvers. The most widely applied method for the incompressible Navier--Stokes equations is the pressure-correction method, where pressure corrections from a Poisson equation are propagated into the weakly coupled velocity field. This method has the disadvantage of indirect communication, which can reduce parallel efficiency. ACM instead couples pressure to the continuity equation and consequently has seen increasing popularity for computational fluid dynamics; however, for each time step it does require that the artificial pressure waves are allowed to propagate in pseudo-time such that the converged, incompressible solution is reached. A technique commonly used to achieve this converged state is dual-time stepping~\cite{Chorin1967,Peyret1976}, and due to the requirement to converge the system for each time step, it follows that an implicit temporal integration scheme is applied. Other approaches have been explored, such as solving the linearised pseudo-time system with GMRES~\cite{Rogers1995}; however, this method requires preconditioning and has parallelisation issues common to such implicit methods.
Relative to pseudo-time, the system is driven to a steady state, and hence many convergence acceleration techniques are applicable. Several approaches have been developed, notably simple spatially-varying time steps, alternating direction implicit schemes~\cite{Peaceman1955}, implicit-explicit hybrid schemes~\cite{Hsu2002}, and the use of complex relaxation schemes such as LU-SSOR~\cite{Yoon1987}. The technique which is the concern of the present work is the multigrid method~\cite{Arnone1993} which is particularly effective for elliptic problems and hence may be well suited to accelerating ACM due to the nature of the artificial pressure waves.
The application of multigrid acceleration depends on the spatial scheme employed. We are interested in the use of spectral element discretisations and, in particular, the flux reconstruction method~(FR)~\cite{Huynh2007}, which can be understood as a generalisation of the nodal discontinuous Galerkin approach~\cite{Hesthaven2008}. This method is of interest due to its high-order and globally unstructured nature, combined with locally structured compute that lends itself to modern computer architectures~\cite{Witherden2014}. High-order methods are particularly beneficial in the context of ACM due to the lack of solution discontinuities, making these techniques highly efficient in the approximation of spatial derivatives.
The application of multigrid methods---such as geometric multigrid---is complicated by the unstructured formulation of FR. However, the high spatial order lends itself readily to \emph{p}-multigrid acceleration methods where for the same element coarser levels are introduced via restricting the solution to lower polynomial orders. There is a rich body of literature considering the Fourier analysis of geometric multigrid methods, with analysis advancing to more general deep cycles such as the work of Wienands~et~al.~\cite{Wienands2001}, where it was theoretically shown that contraction factors could deteriorate for schemes with more stages due to aliasing on the coarsest levels. We wish to develop a theoretical framework to explore the effect of cycle design on acceleration of \emph{p}-multigrid methods.
\section{Numerical Experiments}\label{sec:numeric}
In order to test the analytic hypothesis about the utility of asymmetric \emph{V}-cycles, we will consider the incompressible Navier--Stokes equations solved via ACM. The governing equations in two dimensions take the form
\begin{subequations}
\begin{align}
\px{P}{\tau} + \px{(\zeta u)}{x} + \px{(\zeta v)}{y} &= 0, \\
\px{u}{\tau} + \px{u}{t} + \px{(u^2 + P)}{x} + \px{uv}{y} &= \nu\left(\pxi{2}{u}{x} + \pxi{2}{u}{y}\right), \\
\px{v}{\tau} + \px{v}{t} + \px{(v^2 +P)}{y} + \px{uv}{x} &= \nu\left(\pxi{2}{v}{x} + \pxi{2}{v}{y}\right),
\end{align}
\end{subequations}
where $P$ is the pressure, $u$ and $v$ are the \emph{x} and \emph{y} components of velocity, respectively, $\nu$ is the kinematic viscosity, and $\zeta$ is the artificial compressibility coefficient. The numerical experiments were performed using the high-order FR solver PyFR~\cite{Witherden2014,Loppi2018}. DG-recovering correction functions were used, together with BR1~\cite{Bassi1997} viscous and HLLC~\cite{Elsworth1992} inviscid approximate common interface flux calculations for ACM. The solution and flux points were positioned using Gauss--Legendre and Williams--Shunn~\cite{Williams2014} points for quadrilaterals and triangles, respectively.
\begin{figure}[tbhp]
\centering
\subfloat[View of unstructured mesh.]{\label{fig:naca_mesh}\includegraphics[width=0.49\linewidth]{./Figs/naca4412_mesh_medium.png}}
~
\subfloat[Pressure at $t=\SI{70}{\second}$.]{\label{fig:naca_pressure}\includegraphics[width=0.49\linewidth]{./Figs/naca4412_p_t70.png}}
\caption{NACA-4412 at $\mathrm{AoA}=2.5^\circ$ and $Re=\num{5e3}$.}
\end{figure}
The operating condition examined throughout the experiments was at an angle-of-attack~(AoA) of $2.5^\circ$, $Re=\num{5e3}$, a spatial order of $p=3$, and $\zeta=2.5$. The far-field pressure and velocity magnitude were $P_\infty=1$ and $V_\infty=1$, respectively. A view of the mesh used can be seen in \cref{fig:naca_mesh}; it comprises ${\sim}900$ triangles and ${\sim}3800$ quadrilaterals. Although this is a simple geometry at low Reynolds number, we chose to use a fully unstructured mesh as this better represents the typical use case for this method.
For the temporal integration, BDF2 with SSPRK3 for the pseudo-time stepping was used, with $\Delta t=\num{5e-4}$, $\Delta t/\Delta\tau=5$, and $f_\tau=1.75$. A fixed number of ten pseudo-steps per iteration was used. A higher number would typically be needed for an engineering calculation; however, this was deemed sufficient to demonstrate the convergence acceleration in this case. The simulations were run for $75$ flow passes over the chord, and the pressure distribution at $t=\SI{75}{\second}$ is shown in \cref{fig:naca_pressure}, where the vortex shedding is clearly visible.
\begin{figure}[tbhp]
\centering
\subfloat[Pressure.]{\label{fig:naca_conv_p} \resizebox{0.47\linewidth}{!}{\input{./Figs/NACA4412_resid_conv_p.tex}}}
~
\subfloat[\emph{x}-velocity.]{\label{fig:naca_conv_u} \resizebox{0.47\linewidth}{!}{\input{./Figs/NACA4412_resid_conv_u.tex}}}
\caption{\label{fig:naca_conv}Mean relative residual convergence for NACA-4412 at $p=3$ over \num{1e4} implicit steps.}
\end{figure}
To demonstrate the effect of various \emph{V}-cycles, we investigated the averaged relative residual for each cycle in dual-time. The mean residual for each cycle is normalised by the mean of the initial residual in each real time step. The results, averaged over the last \num{1e4} physical-time steps, equivalent to approximately 10 shedding cycles, are presented in \cref{fig:naca_conv}. An interesting difference in behaviour is exhibited between the pressure and velocity convergence: pressure shows the predicted improvement from additional prolongation, whereas for the velocity, cycles with more smoothing steps caused the fastest decay in the residual. This is due to the different character of the equations; the first equation, which drives pressure, is elliptic, whereas the velocity equations are hyperbolic. Hence, the convergence of the velocity equations is chiefly a matter of advection and benefits primarily from a greater number of pseudo-time iterations.
The low number of pseudo-steps used here is visible for the base case from the high average pressure residual shown in \cref{tab:naca_mean_quant} and the fact that the residual factor in \cref{fig:naca_conv_p} for the base case does not show reduction. However, reduction is still seen in the velocity residual, for which the governing equation is dominantly hyperbolic and hence benefits purely from additional iterations to further convergence.
\begin{table}[H]
\centering
\caption{\label{tab:naca_mean_quant}Metric comparison for various cycles.}
\begin{tabular}{lcccc} \toprule
Cycle & $n_s$ & $C_L/C_D$ & $\overline{R}_P$ & $\overline{R}_u$ \\ \midrule
None & & 7.246 & \num{5.161e-2} & \num{1.450e-1} \\
\emph{V} & $1$ & 7.072 & \num{3.547e-3} & \num{2.158e-3} \\
\emph{V} & $3$ & 7.071 & \num{1.173e-3} & \num{2.698e-4} \\
$\mathrm{\emph{V}}_\mathrm{AP}$ & $1$ & 7.067 & \num{1.612e-3} & \num{6.249e-04} \\ \bottomrule
\end{tabular}
\end{table}
\section{Conclusions}\label{sec:conclusions}
In this manuscript, we have presented a Fourier analysis of dual-time stepping with the high-order FR approach using \emph{p}-multigrid convergence acceleration. This enables---for the first time---arbitrary multigrid cycles to be explored and analysed directly. Employing this analysis, we have shown for the advection-diffusion equation that \emph{p}-multigrid can reduce the contraction factor by $9\%$. Furthermore, it was also shown how performance can be improved through the use of \emph{asymmetric} cycles which contain additional prolongation steps, an observation which is supported through numerical experiments with the incompressible Navier--Stokes equations on a 2D NACA-4412.
\section*{Acknowledgements}\label{sec:ack}
We would like to thank T. Dzanic and L. Wang for their aid in the preparation of this manuscript.
\bibliographystyle{siamplain}
\section{Pseudo-Time Stepping}\label{sec:pseudo}
To introduce the dual-time method, consider the ordinary differential equation~(ODE)
\begin{equation}\label{eq:linear_ode}
\px{u}{t} -\lambda u = 0 \quad \mathrm{for} \quad (x,t)\in \Omega\times\mathbb{R}_+,
\end{equation}
which may be modified to incorporate pseudo-time terms as
\begin{equation}\label{eq:linear_pseduo}
\px{u}{\tau} + \px{u}{t} - \lambda u = 0 \quad \mathrm{for} \quad (x,t,\tau)\in \Omega\times\mathbb{R}_+^2,
\end{equation}
such that when a steady state in pseudo-time is reached, then a solution to \cref{eq:linear_ode} is reached. To simplify later analysis, we will restrict the spatial domain to be periodic, thus restricting the equation to an initial value problem. To solve this system, we will employ explicit Runge--Kutta (ERK) integration in pseudo-time. Such schemes may be defined through a Butcher tableau~\cite{Butcher1964} as
\begin{equation}\label{eq:butcher}
\arraycolsep=2.5pt\def1.2{1.2}
\begin{array}{c|c}
\mathbf{c} & \mathbf{A} \\ \hline
& {\mathbf{b}^T}
\end{array}
\end{equation}
For ERK schemes, the coefficient matrix $\mathbf{A}$ is strictly lower triangular. The ERK scheme applied to integration of the ODE in \cref{eq:linear_ode} can be written as
\begin{equation}
u_{n+1} = u_{n} + \Delta t\sum^r_{i=1} b_i q_i, \quad \mathrm{with} \quad q_i = \lambda\bigg(u_{n} + \Delta t\sum^{i-1}_{j=1}a_{ij}q_j\bigg),
\end{equation}
where $\Delta t$ is the time step size. For the system presented in \cref{eq:linear_pseduo}, this ERK scheme will be used for the pseudo-time integration, whereas physical time stepping will be performed with the implicit backward-difference formulae (BDF). The general form for a degree $s$ BDF scheme can be expressed as
\begin{equation}
u_{n+1} = -\sum^{s-1}_{i=0}B_{i+1}u_{n-i} + \Delta tB_0\lambda u_{n+1}.
\end{equation}
Example coefficients and stability regions for several BDF schemes are available in \cref{fig:bdf}.
\begin{figure}[tbhp]
\centering
\subfloat[Selection of BDF coefficients.]{\label{tab:bdf}
\adjustbox{width=0.35\linewidth,valign=b}{\begin{tabular}{c c c c c}
\toprule
$s$ & $B_0$ & $B_1$ & $B_2$ & $B_3$ \\ \midrule
1 & $1$ & $-1$ & & \\[0.75ex]
2 & $\frac{2}{3}$ & $-\frac{4}{3}$ & $\frac{1}{3}$ & \\[0.75ex]
3 & $\frac{6}{11}$ & $-\frac{18}{11}$ & $\frac{9}{11}$ & $-\frac{2}{11}$ \\
\bottomrule\vspace{0.09cm}
\end{tabular}}
}
~
\subfloat[BDF stability, shaded regions are unstable.]{\label{fig:bdf_base_stab}\adjustbox{width=0.55\linewidth,valign=b}{\input{./Figs/bdf_stability.tex}}}
\caption{\label{fig:bdf}BDF Schemes.}
\end{figure}
The implicit and explicit integrators for physical- and pseudo-time can now be combined to calculate the solution advanced by the pseudo step, $\Delta\tau$, thus giving the following system of equations
\begin{subequations}
\begin{align}
u_{n+1, m+1} =&\: u_{n+1, m} + \sum^{r}_{i=1}\frac{\Delta\tau b_i}{\alpha_{PI}}q_i, \\
q_i =&\: \lambda \Big(u_{n+1,m} + \Delta\tau\sum^{i-1}_{j=1}a_{i,j}q_j\Big) - \frac{1}{\Delta tB_0}\Big(u_{n+1,m} + \sum^{s-1}_{l=0}B_{l+1}u_{n-l}\Big).
\end{align}
\end{subequations}
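As a concrete illustration, the following Python sketch (helper names are our own) advances the scalar model problem through $M$ pseudo-steps, combining BDF2 in physical time with an ERK scheme in pseudo-time; SSPRK3 coefficients are used for definiteness and $\alpha_{PI}$ is taken as unity, anticipating the approximation discussed below:
\begin{verbatim}
# One pseudo-time sweep for du/dt = lam*u with BDF2 + ERK (SSPRK3).
import numpy as np

A = np.array([[0.0,  0.0,  0.0],
              [1.0,  0.0,  0.0],
              [0.25, 0.25, 0.0]])        # SSPRK3 Butcher matrix
b = np.array([1/6, 1/6, 2/3])

def pseudo_sweep(u_n, u_nm1, lam, dt, dtau, M, B=(2/3, -4/3, 1/3)):
    """M pseudo-steps of u_{n+1}, starting from the guess u_n."""
    src = B[1] * u_n + B[2] * u_nm1      # sum_l B_{l+1} u_{n-l}
    u = u_n                              # u_{n+1,0}
    q = np.zeros(len(b), dtype=complex)
    for _ in range(M):
        for i in range(len(b)):
            stage = u + dtau * (A[i, :i] @ q[:i])
            q[i] = lam * stage - (u + src) / (dt * B[0])
        u = u + dtau * (b @ q)
    return u
\end{verbatim}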
From the use of explicit pseudo-time stepping, it logically follows that we assume $\Delta\tau \ll \Delta t$, and hence the factor $\alpha_{PI}=1+b_i\Delta\tau/(B_0\Delta t)\rightarrow1$ may be taken as unity. We now wish to manipulate this into a matrix form to facilitate our later work; applying the terms of \cref{eq:butcher}, the following is obtained
\begin{equation*}
\mathbf{q} = \lambda u_{n+1,m}\mathbf{e} + \lambda\Delta\tau\mathbf{A}\mathbf{q} - \frac{1}{\Delta tB_0}\Big(u_{n+1,m} + \sum^{s-1}_{l=0}B_{l+1}u_{n-l}\Big)\mathbf{e},
\end{equation*}
where $\mathbf{q} = [q_1,\dots,q_r]^T$, and $\mathbf{e} = [1,\dots,1]^T$. This, in turn, implies
\begin{subequations}
\begin{align*}
u_{n+1,m+1} =&\: u_{n+1,m} + \Delta\tau\mathbf{b}^T\mathbf{q}, \\
\mathbf{q} =&\: (\mathbf{I} - \lambda\Delta\tau\mathbf{A})^{-1}\bigg[\lambda u_{n+1,m} - \frac{1}{\Delta tB_0}\Big(u_{n+1,m} + \sum^{s-1}_{l=0}B_{l+1}u_{n-l}\Big)\bigg]\mathbf{e}.
\end{align*}
\end{subequations}
To obtain the system amplification factor, we factorise $\mathbf{q}$ in terms of $u_{n+1,m}$ by initially separating the pseudo-time amplification and source terms as
\begin{equation*}
\mathbf{q} = (\mathbf{I} - \lambda\Delta\tau\mathbf{A})^{-1}\mathbf{e}\bigg(\lambda - \frac{1}{\Delta tB_0}\bigg)u_{n+1,m} -\frac{(\mathbf{I} - \lambda\Delta\tau\mathbf{A})^{-1}\mathbf{e}}{\Delta tB_0}\sum^{s-1}_{l=0}B_{l+1}u_{n-l}.
\end{equation*}
Therefore,
\begin{multline}\label{eq:ps1}
u_{n+1,m+1} = \underbrace{\Bigg[1 + \bigg(\lambda\Delta\tau - \frac{\Delta\tau}{\Delta tB_0}\bigg)\mathbf{b}^T(\mathbf{I} - \lambda\Delta\tau\mathbf{A})^{-1}\mathbf{e}\Bigg]}_P u_{n+1,m} \\
- \underbrace{\frac{\Delta\tau\mathbf{b}^T(\mathbf{I} - \lambda\Delta\tau\mathbf{A})^{-1}\mathbf{e}}{\Delta tB_0}}_C\sum^{s-1}_{l=0}B_{l+1}u_{n-l},
\end{multline}
and it can be seen that this is purely a function of the ERK and BDF schemes, together with the factors $\lambda\Delta\tau$ and $\Delta\tau/\Delta t$.
To demonstrate the effect of the coupled system, we present the stability regions of \cref{eq:ps1} as the number of pseudo-steps, $M$, is varied. This was calculated using the amplification factor, defined as
\begin{subequations}
\begin{align}
\frac{u_{n+1,M}}{u_{n}} &= P^M - \bigg[\sum^{M-1}_{j=0}P^j\bigg]C\sum^{s-1}_{l=0}B_{l+1}\exp{(\lambda l\Delta t)},\\
&= P^M - \frac{1-P^M}{1-P}C\sum^{s-1}_{l=0}B_{l+1}\exp{(\lambda l\Delta t)}.
\end{align}
\end{subequations}
In the second step we have assumed $|P|< 1$, which is true for sufficiently small $\Delta\tau$ and under the previous assumption that $\Delta\tau\ll\Delta t$, and hence may treat the summation as a geometric series.
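For a given, possibly complex, $\lambda$ this amplification factor is readily evaluated; the following Python sketch (our own helper, reusing the tableau $(\mathbf{A},\mathbf{b})$ of the earlier sketch) does so, and scanning $\lambda$ over the complex plane while contouring the modulus at unity reproduces stability regions of the kind shown below:
\begin{verbatim}
# Amplification factor after M pseudo-steps, from P and C above.
import numpy as np

def amplification(lam, dt, dtau, M, A, b, B=(2/3, -4/3, 1/3)):
    r = len(b)
    K = dtau * (b @ np.linalg.solve(np.eye(r) - lam * dtau * A,
                                    np.ones(r)))
    P = 1.0 + (lam - 1.0 / (dt * B[0])) * K
    C = K / (dt * B[0])
    hist = sum(B[l + 1] * np.exp(lam * l * dt)
               for l in range(len(B) - 1))
    return P**M - (1.0 - P**M) / (1.0 - P) * C * hist
\end{verbatim}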
The contours of unity amplification factor representing the stability limit are shown in \cref{fig:ERK5dgp4_BDF2_stab} for BDF2 coupled to an ERK scheme. The ERK scheme applied was an optimised five-stage scheme from the work of Vermeire~et~al.~\cite{Vermeire2019}, where the ERK stability region was tuned to match the set of eigenvalues produced by a $p=4$ nodal DG spatial scheme for advection. This scheme will be denoted as \emph{OERK5-DGp4}. It was posited that the schemes would provide the optimal stability region when using this spatial scheme with dual-time stepping for implicit calculations. To produce a stability region, it was necessary to set $\Delta t$ and $\Delta\tau$, which for these contours take the values of $0.2$ and $0.05$, respectively. Then, eigenvalues can be applied to the system to find the unity contour using $\lambda=\lambda_x+\imath\lambda_y$.
\begin{figure}[tbhp]
\centering
\subfloat[$\Delta t/\Delta\tau=4$, $\Delta\tau = 0.05$.]{\label{fig:stability_region} \adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/OERK5dgp4_BDF2_stability.tex}}}
\subfloat[$m=10$, $\Delta\tau=0.05$.]{\label{fig:dt_stability} \adjustbox{width=0.48\linewidth,valign=b}{\input{./Figs/OERK5dgp4_BDF2_stability_dt.tex}}}
\caption{\label{fig:ERK5dgp4_BDF2_stab}Stability regions for OERK5-DGp4 and BDF2.}
\end{figure}
As is demonstrated here, the coupling of the implicit method to the pseudo-time integrator causes the stability region to change with the number of iterations, with both local contractions and expansions observed. The stability region of the ERK scheme without coupling to an implicit method is also shown in \cref{fig:ERK5dgp4_BDF2_stab} for reference. Additionally, the stability is further deformed by variations to the ratio $\Delta t/\Delta \tau$, \cref{fig:dt_stability}, therefore complicating the design of optimal ERK schemes. Further investigation of the stability of the dual-time system was performed by Chiew~et~al.~\cite{Chiew2016} where a more exhaustive study of implicit schemes is given.
\subsection{\emph{p}-multigrid}
To accelerate the convergence of the solution towards a pseudo-time steady state, the \emph{p}-multigrid methodology has proven to be effective for spectral element methods such as FR~\cite{Loppi2018}. The aim of the method is to restrict the solution to coarse grid levels, apply smoothing there, and subsequently propagate corrections from the coarser levels to the finer levels. We will now outline the techniques of \emph{p}-multigrid applied to the system already described. From the work of the previous section, the residual after $M$ pseudo-time steps is
\begin{equation}\label{eq:ps_residual}
T_{p,M}u_{n+1,0} = -\lambda\bigg[P^M + \frac{1-P^M}{1-P}C\bigg]u_{n+1,0} - \frac{1}{\Delta t B_0}\bigg[u_{n+1,M} - \sum^{s-1}_{l=0}B_{1+l}u_{n-l}\bigg].
\end{equation}
For the finest stage, of degree $p$, the deficit is defined as
\begin{equation}
d_p = -T_{p,M}.
\end{equation}
The deficit and residual source terms for the lower order stages are subsequently defined as
\begin{subequations}
\begin{align}
u_{i-1,n+1,0} =&\: \rho_{i-1}(u_{i,n+1,M}), \\
d_{i-1} =&\: \rho_{i-1}(d_{i}), \\
r_{i-1} =&\: T_{i-1,M}u_{i-1,n+1,0} + d_{i-1},
\end{align}
\end{subequations}
where $r_{i-1}$ is the deficit residual source term that is applied in the calculation of $u_{i-1,n+1,M}$, to be shown momentarily. The restriction operator, $\rho_i(.)$, is taken to be the same for the solution and deficit and is defined as
\begin{equation}
\langle\rho_k(u)-u,\phi_i\rangle_{L_2} = 0,
\end{equation}
for some polynomial basis $\phi_i$, which we will take to be the orthogonal Legendre basis. For the linear case considered here, this choice does not restrict the generality of the results; otherwise, it is justified as an orthogonal polynomial basis for $L_2$ projection with unit measure. When defined within a nodal or collocation spatial method, the inner product will require approximation, for which we use quadrature rules such as Gauss--Legendre.
The prolongation and correction of the $i^\mathrm{th}$ level based on the $(i-1)^\mathrm{th}$ is then
\begin{subequations}
\begin{align}
\Delta_{i} =&\: v_{i,n+1,0} - v_{i,n+1,M},\\
\Delta_{i+1} =&\: \pi_{i+1}(\Delta_i), \\
v_{i+1,n+1,0} =&\: u_{i+1,n+1,M} + \Delta_{i+1},
\end{align}
\end{subequations}
where $v$ is used to indicate the new solution on the prolongation steps. If at a local minimum in a \emph{p}-multigrid cycle, $v_{i,n+1,0}$ is taken to be $u_{i,n+1,0}$. Furthermore, the prolongation operator $\pi_i$ is defined such that given $u_k\in\mathbb{P}^k$ and $x_{k,i}\in\{x_0,\dots,x_k\}$, then
\begin{equation}
\pi_{k+1}(u_k) \in \mathbb{P}^{k+1} \quad \mathrm{given} \quad \pi_{k+1}(u_k)(x_{k+1,i}) = u_k(x_{k+1,i}).
\end{equation}
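A minimal Python sketch of the two operators between nodal representations on Gauss--Legendre points is given below (our own construction; restriction is the $L_2$ Legendre projection just defined, and prolongation is interpolation at the finer nodes):
\begin{verbatim}
# Restriction rho_{k-1} and prolongation pi_k between degrees k, k-1.
import numpy as np
from numpy.polynomial import legendre as leg

def nodes(k):
    return leg.leggauss(k + 1)[0]

def restrict(u_k, k):
    """Nodal values at nodes(k) -> L2 projection at nodes(k-1)."""
    x, w = leg.leggauss(k + 1)           # exact for degree 2k+1
    V = leg.legvander(x, k - 1)          # V[i, j] = P_j(x_i)
    norms = 2.0 / (2.0 * np.arange(k) + 1.0)
    coeffs = (V.T * w) @ u_k / norms     # <u, P_j> / <P_j, P_j>
    return leg.legvander(nodes(k - 1), k - 1) @ coeffs

def prolong(u_km1, k):
    """Evaluate the degree k-1 interpolant at the degree-k nodes."""
    c = np.polyfit(nodes(k - 1), u_km1, k - 1)
    return np.polyval(c, nodes(k))
\end{verbatim}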
We now wish to incorporate the multi-grid residual source term $r_q$ such that the modified pseudo-time update equation may be defined, which manifests straightforwardly in the ERK steps as
\begin{equation*}
q_i = \lambda \Big(u_{n+1,m} + \Delta\tau\sum^{i-1}_{j=1}a_{i,j}q_j\Big) - \frac{1}{\Delta tB_0}\Big(u_{n+1,m} + \sum^{s-1}_{l=0}B_{l+1}u_{n-l}\Big) - r_q,
\end{equation*}
and hence
\begin{equation}\label{eq:ps_rsource}
u_{n+1,m+1} = Pu_{n+1,m}-C\sum^{s-1}_{l=0}B_{l+1}u_{n-l} - \Big(\underbrace{\Delta\tau\mathbf{b}^T(\mathbf{I} - \lambda\Delta\tau\mathbf{A})^{-1}\mathbf{e}}_{K}\Big)r_q.
\end{equation}
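In closed form, the corresponding scalar pseudo-step may be sketched as follows (reusing the factor $K$ and the ERK tableau of the earlier sketches; names are illustrative):
\begin{verbatim}
# Scalar pseudo-step with the multigrid residual source r_q included.
import numpy as np

def pseudo_step_mg(u, src, r_q, lam, dt, dtau, A, b, B0=2/3):
    """One step of u_{n+1,m}; src = sum_l B_{l+1} u_{n-l}."""
    r = len(b)
    K = dtau * (b @ np.linalg.solve(np.eye(r) - lam * dtau * A,
                                    np.ones(r)))
    P = 1.0 + (lam - 1.0 / (dt * B0)) * K
    C = K / (dt * B0)
    return P * u - C * src - K * r_q
\end{verbatim}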
\subsection{Introduction}
With the growing interest in quantum information theory, quantum cryptography
has become a key research area. One way of ensuring secure communication is
to establish a secret key between the two communication partners, \emph{Alice}
and \emph{Bob}, with which they can later encode and decode secret
messages. The distribution of the key can be done using quantum mechanical
systems where the laws of physics guarantee the security, in marked contrast
to classical schemes which rely on the complexity of mathematical
problems. Quantum key distribution could thus replace conventional public key
cryptosystems, which can be broken in polynomial time by quantum algorithms as
soon as suitable quantum computers are available.
Most quantum key distribution schemes discussed at the current stage of
research are based on the BB84 \cite{BB84} or the E91 \cite{E91}
protocol. These schemes operate in only a subspace of the whole qubit state
space, and so they allow the eavesdropper, \emph{Eve}, unnecessary freedom to
make use of the undetected portion of the Hilbert space. This power is denied
to her in fully tomographic key distribution protocols, such as those of
Ref.\,\cite{TomoCrypt}, and in particular by the ``minimal qubit tomography
protocol'' of Ref.\,\cite{TetraCrypt} that has become known colloquially as the
\emph{Singapore protocol}. It relies heavily on the minimal qubit tomography
(MQT) discussed in Ref.\,\cite{MQT}.
In Ref.\,\cite{TetraCrypt} it was shown that the maximum theoretical
efficiency of a quantum key distribution protocol using the MQT measurement is
$\,\textrm{log$_2$}\, \frac{4}{3} = 0.415$; this is less than the efficiency of the
non-tomographic BB84 protocol, $\frac{1}{2}$, though larger than that of the
E91 protocol, $\frac{2}{9}$.
information between Alice and Bob (A\&B) and the mutual information between
Eve and either Alice or Bob (CK-yield) promises to be higher than in the
non-tomographic protocols \cite{BB84-disc}. Moreover, the MQT measurement has
the potential for a significantly higher key yield than the comparable
tomographic 6-state protocol introduced in Ref.\,\cite{B} and discussed in
Refs.\,\cite{B-PG,GW}, which has an efficiency of only $\frac{1}{3}$.
Using the MQT measurement, one possible way to generate a secret key from the
correlated measurement outcomes was proposed in Ref.\,\cite{TetraCrypt}. The
resulting quantum key distribution protocol recovers 0.4 key-bits per qubit
pair, or 96.4\% of the potential efficiency in the noise-free
case. It is the objective of the present paper to give a detailed account of
how the analysis is carried out that yields the thresholds stated in
Ref.\,\cite{TetraCrypt}. A quantum key-distribution scheme that uses POVMs
with tetrahedron structure is also investigated by Renes in
Ref.\,\cite{Renes}. This protocol differs from the Singapore protocol by a
less efficient key generation procedure that does not fully exploit the
potential of the MQT measurement.
In Sec.\,\ref{sec:MQT} we give a brief overview of the MQT measurement and
review the key generation in Sec.\,\ref{sec:Singapore}. In
Sec.\,\ref{sec:constraints} we will discuss the constraints on Eve's
eavesdropping imposed by the tomographic nature of the MQT measurement. We
then investigate, in Sec.\,\ref{sec:eavesdropping}, the incoherent attacks
available to Eve when she exploits the classical information transmitted
between the communication partners during the key generation. Finally, in
Sec.\,\ref{sec:security} we examine the security of the protocol against such
attacks and obtain the noise threshold stated in Ref.\,\cite{TetraCrypt}.
\subsection{Minimal qubit tomography} \label{sec:MQT}
We suppose Alice and Bob want to establish a secret key and they use a
provider that distributes entangled qubit pairs for private communication. As
advertised, each will receive one qubit of the pair. Since real communication
channels do not usually preserve the signal perfectly, Alice and Bob have to
deal with the fact that they will receive a distorted state. Let Alice and
Bob agree to accept only a mixed state $\rho_{A\&B}$ consisting of the ideal,
perfectly anti-correlated singlet $ |s \rangle\langle s| = {1 \over 4}
\left(\mathbbm{1} - \vec \sigma_{A} \cdot \vec \sigma_{B} \right),$ and
white, unbiased noise, weighted with a noise parameter $\epsilon$. The
two-qubit state that A\&B will share is thus
\begin{eqnarray}\label{rhoAB}
\rho_{A\&B}(\epsilon) =(1-\epsilon)|s\rangle\langle s|
+\frac{\epsilon}{4}\mathbbm{1},
\end{eqnarray}
where $\epsilon$ ranges from $\epsilon =0$ (no noise) to $\epsilon= 1$
(nothing but noise). In practical situations, the class of acceptable sources
should in fact be chosen in accordance with the experimental setup which
produces the singlet and the properties of the transmission line used (fibre,
air, ...). But for the sake of simplicity, we impose the above standard
criterion which was also used in the tomographic protocols of
Ref.\,\cite{TomoCrypt}.
The tomographically complete 6-state protocol was analyzed for the above
scenario in Ref.\,\cite{AccInf}. In this protocol, Alice and Bob each
perform a measurement of a randomly chosen Pauli operator, $\sigma_x,
\sigma_y$ or $ \sigma_z$, resulting in six outcome probabilities. However,
this is not a \emph{minimal} tomography since only four outcome
probabilities are needed to specify the state of a qubit completely. The
optimal qubit tomography POVM with the minimum number of four elements is of
tetrahedron geometry as shown in Ref.\,\cite{MQT}; that is the POVM operators
can be written in the form
\begin{equation}\label{eq:POVM}
P_k = \frac{1}{4} \left( \mathbbm{1} + \vec{t}_k \cdot \vec{\sigma}\right)
\quad\mbox{for}\ k =1, 2, 3, 4,
\end{equation}
where the vectors $\vec t_k$ point to the corners of a tetrahedron inscribed
in the Bloch sphere; see Fig.\,1 in Ref.\,\cite{MQT} for an illustration. The
four vectors are linearly dependent,
\begin{equation}
\sum_{k=1}^4\vec{t}_k=0,
\end{equation}
with the scalar product
\begin{equation}
\vec{t}_k\cdot\vec{t}_l=\frac{4}{3}\delta_{kl}-\frac{1}{3}\quad
\mbox{for}\ k,l =1, 2, 3, 4,
\end{equation}
and fulfill the dyadic completeness relation
\begin{equation}
\frac{3}{4}\sum_{k=1}^4 ~ \vec{t}_k\,\vec{t}_k=\tensor{1}.
\end{equation}
Let Alice and Bob each measure the tetrahedron POVM of Eq.\,\eqref{eq:POVM} on
many copies of a two-qubit state $\rho$. The resulting joint probabilities of
the measurement are then given by $p_{kl} = \,\textrm{tr}\,[\rho ~P_k \, Q_l],$ with $P_k$
denoting Alice's POVM elements and $Q_l$ Bob's, chosen so that their
tetrahedrons are perfectly aligned (if they chose a non-zero angle
between their tetrahedrons they would lose the perfect anti-correlations
introduced by the singlet). To verify that they indeed received the state
$\rho_{A\&B}$ of Eq.\,\eqref{rhoAB}, Alice and Bob sacrifice a fraction of
their data and announce them publicly to determine the joint probabilities
$p_{kl}$ of Alice measuring $k$ and Bob $l$. They check their results for
statistical independence and are able to reconstruct the original state by
\begin{eqnarray}\label{2qbstate}
\rho=\sum_{k,l=1}^4 ~(6 P_k -\mathbbm{1})~ p_{kl}~ (6 Q_l-\mathbbm{1}).
\end{eqnarray}
Naturally, after a finite number of measurements, Alice and Bob cannot infer
the values of the $p_{kl}$ exactly, but they can estimate them rather
reliably. A discussion of the quality of such estimates was given in
Ref.\,\cite{MQT} for the single qubit case where it was also shown that the
measurement of a randomly chosen qubit state with the tetrahedron POVM will
on average lead to the best (\emph{optimal}) estimate of the state's Pauli
vector. Finally, A\&B check whether the predicted state of
Eq.\,\eqref{2qbstate} is consistent with Eq.\,\eqref{rhoAB}. They will only
use the provider if this is the case.
Given that the shared state is $\rho_{A\&B}(\epsilon)$ for some $\epsilon$, their
joint probabilities $p_{kl}$ will be
\begin{equation}
\label{pkl}
p_{kl}=\frac{4-\epsilon}{48}(1-\delta_{kl})
+\frac{\epsilon}{16}\delta_{kl} \quad \mbox{for } k,l =1, 2, 3, 4,
\end{equation}
and the accessible information that A\&B can establish between each other is
given by
\begin{equation}
\label{eq:potentialmutinfo}
I^{\textrm{access}}_{A\&B}(\epsilon) =
\left(1-\frac{\epsilon}{4}\right)\,\textrm{log$_2$}\, \frac{4-\epsilon}{3}
+ \frac{\epsilon}{4}\,\textrm{log$_2$}\, \epsilon,
\end{equation}
where we used the definition of the classical mutual information for a
probability distribution $\{ p_{kl}\}_{k,l}$
\begin{equation}
I = \sum_{kl}~ p_{kl} ~ \,\textrm{log$_2$}\, \frac{p_{kl}}{\sum_{k'} ~ p_{k'l}
~ \sum_{l'} ~p_{kl'}}.
\end{equation}
Note that in the noise-free case the accessible information,
$I^{\textrm{access}}_{A\&B} (0) = 0.415$, is substantially higher (by 24.5\%)
than the corresponding value of $\frac{1}{3}$ for the tomographically complete
6-state protocol.
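As a numerical check, the following Python sketch (our own construction) builds the tetrahedron POVM of Eq.\,\eqref{eq:POVM}, evaluates the joint probabilities of Eq.\,\eqref{pkl} for perfectly aligned tetrahedrons, and recovers the accessible information of Eq.\,\eqref{eq:potentialmutinfo}:
\begin{verbatim}
# Tetrahedron POVM, joint probabilities p_kl, and mutual information.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
t = np.array([[1, 1, 1], [1, -1, -1],
              [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
P = [(np.eye(2) + v[0]*sx + v[1]*sy + v[2]*sz) / 4 for v in t]
assert np.allclose(sum(P), np.eye(2))       # completeness

def p_kl(eps):
    s = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet
    rho = (1 - eps) * np.outer(s, s.conj()) + eps * np.eye(4) / 4
    return np.real([[np.trace(rho @ np.kron(Pk, Ql))
                     for Ql in P] for Pk in P])

def mutual_info(p):
    prod = np.outer(p.sum(1), p.sum(0))     # marginal products
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / prod[mask]))

print(mutual_info(p_kl(0.0)))               # log2(4/3) = 0.415...
\end{verbatim}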
\subsection{The Singapore protocol}\label{sec:Singapore}
From now on, we will refer to the possible measurement outcomes as A, B, C, D
for $k = 1,2,3,4$, respectively, symbolizing a click in the $k$-th POVM
detector. To generate a key from their correlated sequences, Alice and Bob
have to communicate classically. We use the two-way key generation scheme
proposed in Ref.\,\cite{TetraCrypt} which leads to a mutual information of
$I_{A\&B}(0) = 0.4$, sufficiently close to the maximally accessible
information of $I^{\textrm{access}}_{A\&B}(0) = 0.415$. The scheme has
a simple structure and can be easily implemented on a computer. We give a
brief description of the key generation scheme followed by a more detailed
analysis considering the presence of noise which will be relevant for the
eavesdropping discussion. We refer the reader to the original paper
\cite{TetraCrypt} for more details on the key generation scheme.
Let us first consider the noise-free case ($\epsilon=0$). Alice publicly
announces two randomly chosen positions of her sequence for which she has the
same letter. With probability $\frac{2}{3}$, Bob has different letters at
these positions. He then groups the possible outcomes A, B, C, D in two
groups, one containing the two letters he received and one with the remaining
two letters. He randomly assigns the values 0 and 1 to the two groups and
announces these groupings. Bob does not reveal which group his letters belong
to, but since A\&B's measurement outcomes are perfectly anti-correlated, they
can both write down the value of the group which contains Alice's letter and
thus generate a key-bit. With probability $\frac{1}{3}$ Bob has the same
letter in the two positions Alice announced. In this case, he states this fact
and A\&B each write their corresponding letter in a new sequence. \emph{The
above procedure is repeated iteratively with the new sequences thus
generated.}
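The noise-free scheme is compact enough to simulate directly. The following Python sketch (letters encoded as $0,\dots,3$; helper names are our own) reproduces the expected yield of $\frac{1}{3}$ key-bit per qubit pair in the first iteration and approaches $0.4$ with further iterations:
\begin{verbatim}
# Monte Carlo sketch of the noise-free key generation (eps = 0).
import numpy as np
rng = np.random.default_rng(1)

def one_round(alice, bob):
    bits, keep_a, keep_b = 0, [], []
    for letter in range(4):
        pos = np.flatnonzero(alice == letter)
        for i, j in zip(pos[::2], pos[1::2]):   # equal-letter pairs
            if bob[i] != bob[j]:
                bits += 1                       # key-bit generated
            else:                               # letters recycled
                keep_a.append(letter)
                keep_b.append(bob[i])
    return bits, np.array(keep_a), np.array(keep_b)

n = 240_000
a = rng.integers(0, 4, n)
b = (a + rng.integers(1, 4, n)) % 4             # anti-correlated
total = 0
for _ in range(3):
    bits, a, b = one_round(a, b)
    total += bits
print(total / n)          # ~0.398 after three iterations (limit 0.4)
\end{verbatim}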
In the presence of noise, the key sequence generated with the above scheme
will contain errors with a rate dependent on $\epsilon$. For the original
letter sequence (first iteration), the probability of Alice and Bob receiving
the same letter is then
non-zero, see Eq.\,\eqref{pkl},
\begin{equation} \label{eq:ps}
p_s\,(\epsilon) = \frac{\epsilon}{4},
\end{equation}
and Bob receives one of the other three letters with probability
\begin{equation} \label{eq:pd}
p_d\,(\epsilon) = \frac{4-\epsilon}{12}.
\end{equation}
With \emph{a priori} probability $\frac{1}{4}$ Alice announces the positions
of two occurrences of the letter A. Then, Bob's corresponding two letters
occur with the probabilities given in Table \ref{tab:Bob2letters},
where M.P. denotes the marginal probabilities.
\begin{table}[t]
\begin{center}
\begin{tabular}[c]{cccccc|c}
\hline
\hline
\multicolumn{2}{c}{} & \multicolumn{4}{c}{Bob's 2nd letter}&\\
\multicolumn{2}{c}{} & A & B & C & D & M.P.\\
\hline
& A
& $\frac{1}{4} p_s^2$ & $\frac{1}{4}p_s p_d$
& $\frac{1}{4}p_s p_d$ & $\frac{1}{4}p_s p_d$
& $\frac{\epsilon}{16}$\\
Bob's
& B
& $\frac{1}{4}p_s p_d$ & $\frac{1}{4}p_d^2$
& $\frac{1}{4}p_d^2$ & $\frac{1}{4}p_d^2$
& $\frac{4-\epsilon}{48}$\\
1st letter
& C
& $\frac{1}{4}p_s p_d$ & $\frac{1}{4}p_d^2$
& $\frac{1}{4}p_d^2$ & $\frac{1}{4}p_d^2$
& $\frac{4-\epsilon}{48}$\\
& D
& $\frac{1}{4}p_s p_d$ & $\frac{1}{4}p_d^2$
& $\frac{1}{4}p_d^2$ & $\frac{1}{4}p_d^2$
& $\frac{4-\epsilon}{48}$\\[1ex]
\cline{2-7}
& M.P.
& $\frac{\epsilon}{16}$ & $\frac{4-\epsilon}{48}$
& $\frac{4-\epsilon}{48}$ & $\frac{4-\epsilon}{48}$
& $\frac{1}{4}$\\[1ex]
\hline
\hline
\end{tabular}
\caption{\label{tab:Bob2letters}Bob's two letters given that Alice
announced two positions where she got the outcome A.}
\end{center}
\end{table}
The conversion into one key-bit will occur when Bob's letters are unequal,
i.e. in all off-diagonal cases. Similarly, we can construct the probability
tables for Alice's other choices of letters. Let us denote the probability of
successfully generating one key-bit from one letter pair by $p_\textrm{succ}$. For the
first iteration it is then
\begin{equation} \label{eq:psucc}
p_\textrm{succ}^{(1)}\,(\epsilon) = 6 p_d \left( p_s +p_d \right) =
\frac{\left(4-\epsilon\right)\left(2+\epsilon\right)}{12}.
\end{equation}
However, these successfully generated key-bits will contain errors. The
probability that a generated key-bit is wrong is given by
\begin{equation}
p_\textrm{err}^{(1)}~(\epsilon)= \frac{6 p_s p_d}{p_\textrm{succ}^{(1)} }
=\frac{p_s}{p_s + p_d}=\frac{3\epsilon}{4+2\epsilon},
\end{equation}
accounting for the equally likely cases that Bob writes $0$ and Alice $1$ and
vice versa. The mutual information of the key itself is thus less than unity
and depends on $p_\textrm{err}^{(1)}$, which is nonzero for nonzero $\epsilon,$
\begin{eqnarray}
I _{\textrm{key}}\left(p_\textrm{err}^{(1)}\right)
&=& 1 + p_\textrm{err}^{(1)} ~\,\textrm{log$_2$}\, p_\textrm{err}^{(1)} \\
&& + (1-p_\textrm{err}^{(1)})~\,\textrm{log$_2$}\, \left[1-p_\textrm{err}^{(1)}\right]. \nonumber
\end{eqnarray}
Let us regard the mutual information as a cryptographic resource that Alice
and Bob can use later to extract a perfectly correlated key. We are therefore
interested in the \emph{expectation value of the mutual information} which
A\&B share \emph{per qubit pair}. This expectation value is the product of the
mutual information of the generated key-bit and the probability that this
key-bit was actually generated ($p_\textrm{succ}^{(1)}$), divided by the number of
qubit pairs needed (2 in the first iteration) to obtain the key-bit.
\begin{eqnarray} \label{eq:IAB}
I_{A\&B}^{(1)}(\epsilon)
&=& \frac{p_\textrm{succ}^{(1)}}{2} \, I_{\textrm{key}}\left(p_\textrm{err}^{(1)}\right)\\
&=& \frac{\left(4-\epsilon\right)}{48} \left((4-\epsilon)\,\textrm{log$_2$}\,
\frac{4-\epsilon}{2+\epsilon} + 3\epsilon\,\textrm{log$_2$}\, \frac{3\epsilon}{2+\epsilon}
\nonumber \right).
\end{eqnarray}
To deduce similar results for further iterations we first study the properties
of the recycled sequences. The second iteration can again be characterized by
two probabilities $p_s'$ and $p_d'$ defined with analogous meanings as $p_s$
and $p_d$ for the original sequence. The probability $p_s'$ is given by the
probability of Bob receiving the same letter as Alice in the original sequence
twice $\left( {p_{s}}^2\right)$, divided by the total probability of keeping
letters in the first iteration, i.e. failure in generating a key-bit,
$\left(1-p_\textrm{succ}^{(1)} \right)$,
\begin{eqnarray}
p_s'=\frac{\left(\frac{\epsilon}{4}\right)^2}{
\left(\frac{\epsilon}{4}\right)^2+3\left(\frac{4-\epsilon}{12}\right)^2}.
\end{eqnarray}
Similarly, $p_d'$ is given by
\begin{eqnarray}
p_d'=\frac{\left(\frac{4-\epsilon}{12}\right)^2}{
\left(\frac{\epsilon}{4}\right)^2+3\left(\frac{4-\epsilon}{12}\right)^2}
=\frac{1-p_s'}{3}.
\end{eqnarray}
Upon defining $\epsilon'$ in accordance with $p_s'= \frac{\epsilon'}{4}$ and
$p_d'=\frac{4-\epsilon'}{12}$, we can carry the analysis for the first
iteration over to the next iteration by replacing $\epsilon$ by
$\epsilon'$. The relation between $\epsilon'$ and $\epsilon$ is most compactly
stated in the form
\begin{eqnarray}\label{eq:eprime}
\frac{3 \epsilon'}{4-\epsilon'} = \left(\frac{3 \epsilon}{4-\epsilon}
\right)^2,
\end{eqnarray}
showing that the noise reduces quadratically with each iteration step. All
further iterations can thus be analyzed in the same way, each time replacing
the noise $\epsilon$ of the previous iteration by the new noise parameter $
\epsilon'$.
In particular, the probability of successfully generating a key-bit in the
$n$-th iteration is, similarly to Eq.\,\eqref{eq:psucc}, given by
$q^{(n)}=\left(4-\epsilon^{(n)}\right)\left(2+\epsilon^{(n)}\right)/12$, where
$\epsilon^{(n)}$ denotes the noise parameter in the $n$-th iteration. The
conditional probability $p_\textrm{succ}^{(n)}$ of a key-bit being generated in the
$n$-th iteration, after failure in the previous $n-1$ iterations is then given
by
\begin{eqnarray}\label{eq:psuccn}
p_\textrm{succ}^{(n)} = q^{(n)} \prod_{m=1}^{n-1} \left(1-q^{(m)} \right).
\end{eqnarray}
The error probability per key-bit $p_\textrm{err}^{(n)}$ for the $n$-th iteration can
easily be written as
\begin{eqnarray}
p_\textrm{err}^{(n)} = \frac{3\epsilon^{(n)}}{4+2\epsilon^{(n)}}
= \left[1+\left(\frac{4-\epsilon}{3\epsilon}
\right)^{2^{n-1}}\right]^{-1},
\end{eqnarray}
which uses Eq.\,\eqref{eq:eprime}. The contribution to $I_{A\&B}^{\textrm{total}} $ in the $n$-th
iteration is given by $I_{A\&B}^{(n)}(\epsilon)= 2^{-n} \, p_\textrm{succ}^{(n)} \,
I_{\textrm{key}}\left(p_\textrm{err}^{(n)}\right)$, and the overall expectation value
of the mutual information per qubit pair in the limit of infinitely many
iterations is thus
\begin{eqnarray} \label{eq:IABtotal}
I_{A\&B}^{\textrm{total}} (\epsilon)
&=&\sum_{n=1}^\infty \,\frac{p_\textrm{succ}^{(n)}}{2^n} \,
I_{\textrm{key}}\left(p_\textrm{err}^{(n)}\right).
\end{eqnarray}
This quantity serves as our figure of merit for the comparison with the
6-state protocol. One should however keep in mind that it is an average over
the various key-bit sequences of the successive iterations. Each of them has
different noise properties which must be taken into account when the data are
processed further by a privacy amplification procedure.
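For reference, $I_{A\&B}^{\textrm{total}} $ is easily evaluated numerically; the following Python sketch (our own) implements the iteration formulas above together with the noise recursion of Eq.\,\eqref{eq:eprime}:
\begin{verbatim}
# Truncated evaluation of I_AB^total(eps); 0.4 at eps = 0.
import numpy as np

def I_key(perr):
    if perr == 0.0:
        return 1.0
    return 1.0 + perr*np.log2(perr) + (1 - perr)*np.log2(1 - perr)

def I_AB_total(eps, n_max=8):
    total, prod, e = 0.0, 1.0, eps
    for n in range(1, n_max + 1):
        q = (4 - e) * (2 + e) / 12          # success probability
        total += prod * q * I_key(3*e / (4 + 2*e)) / 2**n
        prod *= 1 - q
        x = 3*e / (4 - e)                   # noise recursion:
        e = 4*x*x / (3 + x*x)               # 3e'/(4-e') = (3e/(4-e))^2
    return total

print(I_AB_total(0.0))                      # 0.3999...
\end{verbatim}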
In the noiseless case ($\epsilon =0$), the Singapore protocol yields a mutual
information $I_{A\&B}^{(1)}(0)= {1\over 3}$ for the first iteration. This is
as much as one can get in the 6-state protocol; but here we can improve the
efficiency by continuing the key generation with the left-over sequences,
e.g. $I_{A\&B}^{(1)}(0) + I_{A\&B}^{(2)}(0)= {1 \over 3} + {1 \over 18 } =
0.389$ up to the second iteration and so on, with the limiting value of
0.4. When Alice and Bob share the complete mixture ($\epsilon =1$) we find the
expected result of $I_{A\&B}^{\textrm{total}} (1) \propto \,\textrm{log$_2$}\, [1] = 0.$ The numerical plots of the
total mutual information for Alice and Bob when terminating the key generation
after one, two, ..., five iterations are shown in Fig.\,\ref{fig:IAB}. The
comparison with the 6-state protocol shows that the mutual information
obtained in the Singapore protocol is larger from the third iteration
onwards. Alice and Bob do not need to perform more than three key-generation
iterations as the third iteration already comes so close to the limiting value
that the benefit of further iterations will be less than 0.01\%. The total
gain compared to the 6-state protocol is $\Delta I_{A\&B} = 0.066$ or 20\% in
the noiseless case and vanishes for $\epsilon \ge \frac{2}{3}$, when
$\rho_{A\&B}$ of Eq.\,\eqref{rhoAB} is separable. This larger efficiency is
expected since the Singapore protocol is minimally tomographic and does not
waste as much information as the 6-state protocol.
Our first conclusion is, therefore, that the Singapore protocol is
considerably more efficient than its obvious competitor, the tomographic
6-state protocol, and offers an alternative way for establishing a secret key
between the communication partners. We will now discuss possible incoherent
eavesdropping attacks on the Singapore protocol and find the noise thresholds
below which the security of the protocol is guaranteed.
\begin{figure}[t]
\begin{center}
{\includegraphics[width=0.45\textwidth]{IAB.eps}}
\caption{\label{fig:IAB}
Total mutual information between Alice and Bob for the Singapore protocol for
the 1st to 5th iteration in comparison with the tomographic 6-state
protocol. The plotted average mutual information of Alice and Bob in the
6-state protocol represents the function
$I^{\textrm{6-state}}_{A\&B}(\epsilon) = \frac{1}{6} \left( \epsilon \,\textrm{log$_2$}\,
\epsilon + (2-\epsilon) \,\textrm{log$_2$}\, [2-\epsilon] \right).$ Note, that the total
mutual information up to the 3rd, 4th and 5th iteration are so close that
they overlap and the 3rd iteration is already a very good approximation of $
I_{A\&B}^{\textrm{total}} $. The plot covers the range $0< \epsilon < \frac{2}{3}$, i.e., the
$\epsilon$-values for which $\rho_{A\&B}$ of Eq.\,\eqref{rhoAB} is not
separable. }
\end{center}
\end{figure}
\subsection{Constraints on Eve's eavesdropping}\label{sec:constraints}
In Sec.\,\ref{sec:MQT} Alice and Bob received a two-qubit state sent by a
provider. We must however assume that this provider is not trustworthy and
eager to know A\&B's secret. We hence identify the provider as the most
dangerous eavesdropper (Eve) possible, and give her full control over the
source. In the \emph{worst case scenario}, Eve is smarter with her technology
and can even replace the usually noisy channel between Alice and Bob by a
perfect one. She is then in the position to entangle an additional ancilla to
each qubit pair she sends, and the disturbances she causes by doing so would
imitate noise. But Alice and Bob perform a complete tomography of the shared
state and since they agree to use the channel only when the noise has the
properties reflected in Eq.\,\eqref{rhoAB}, they can greatly restrict how
Eve can entangle ancillas, and later on deduce the shared key.
Eve will prepare a 3-party \emph{pure} state because she has no advantage from
creating a mixed state and thus introducing classical noise herself. We
decompose the state as
\begin{equation}
\label{Sepsilon}
| S_{\epsilon} \rangle = \sum_{l=1}^4 |l \bar l \rangle | E_l \rangle,
\end{equation}
where the (unnormalized) states $| E_l \rangle $ represent Eve's ancilla. The
pure states $|l\bar l \rangle $ belong to Alice and Bob and form a
non-orthogonal basis $\{|l \bar l \rangle\}_{l=1}^4$, where the tetrahedron
states $| l \rangle $ and $| \bar l \rangle $ are defined as
\begin{eqnarray}\label{eq:tetra}
\begin{aligned}
|l \rangle \langle l |
&= \frac{1}{2} \left(\mathbbm{1} + \vec t_l \cdot \vec \sigma \right), \\
|\bar l \rangle \langle \bar l |
&= \frac{1}{2} \left(\mathbbm{1} - \vec t_l \cdot \vec \sigma \right), \\
\end{aligned} \quad l = 1, 2, 3, 4,
\end{eqnarray}
with the phase-conventions
\begin{eqnarray}
\langle l|k\rangle = \langle \bar k|\bar l\rangle \quad \mbox{and} \quad
\langle l|\bar k\rangle = - \langle k|\bar l\rangle,
\end{eqnarray}
the second of which is implied by the first. Some useful relations of these
states can be found in the Appendix. Note that the decomposition of
Eq.\,\eqref{Sepsilon} allows for exactly four components on Eve's side,
together forming Eve's ancilla state. Each ancilla can thus be represented by
an at most four-dimensional system, so that the ancilla can be regarded as
another qubit pair.
Eve's ancilla state is however restricted by the condition that A\&B must
receive the two-qubit state $\rho_{A\&B}$ of Eq.\,\eqref{rhoAB} which they
check by comparing their outcomes of the tomographic measurement. Assuming
that all the noise originates in Eve's eavesdropping attempt, the 3-party
state $|S_{\epsilon} \rangle$ must be such that
\begin{equation}
\,\textrm{tr}\,_\textrm{Eve}\left[|S_\epsilon\rangle\langle S_\epsilon|\right]
=\rho_{A\&B}(\epsilon).
\end{equation}
Let us rewrite $\rho_{A\&B}$ using the expansion of the identity matrix in
Eq.\,\eqref{eq:identity} and the decomposition of the singlet in
Eq.\,\eqref{eq:e}, to get
\begin{eqnarray}
\rho_{A\&B}(\epsilon)
&=& \frac{1}{8}\sum_{l,k=1}^4 |l \bar l \rangle
\langle k \bar k | \left(1 - \frac{3}{2} \epsilon
+ 3 \epsilon \delta_{lk}\right).
\end{eqnarray}
This expression immediately gives the following constraints on Eve's ancilla
state
\begin{eqnarray}\label{anccon}
\langle E_k | E_l \rangle = \frac{2-3\epsilon}{16}
+\frac{3\epsilon}{8}\delta_{kl} ~ \mbox{ for } ~ k, l = 1, ...,4.
\end{eqnarray}
For $\epsilon = 0$, the $|E_l \rangle$ are identical and Eve cannot extract
any information. For $\epsilon =\frac{2}{3}$, the scalar products in
Eq.\,\eqref{anccon} show the orthogonality of the $|E_l\rangle$ which implies
that $\rho_{A\&B}$ is separable. Indeed, Alice and Bob share a separable
Werner state
\begin{eqnarray}
\rho_{A\&B}\left(\frac{2}{3}\right) =\frac{1}{4}\sum_{l=1}^4~|l\bar
l\rangle\langle l\bar l|
=\frac{1}{4}\left(\mathbbm{1}-\frac{1}{3} \vec \sigma_A \cdot \vec \sigma_B\right),
\end{eqnarray}
and thus all correlations are classical.
In Ref.\,\cite{pyramids} it was shown that, given the constraints of
Eq.\,\eqref{anccon}, the most general state Eve can construct, up to unitary
equivalence, can be written as
\begin{eqnarray}\label{Sepsilon1}
|S_{\epsilon} \rangle &=&
\alpha | s_{12} \rangle | s_{34} \rangle + \beta |s_{13} \rangle |s_{24}
\rangle,
\end{eqnarray}
where the first qubit is held by Alice, the second by Bob and the third and
fourth by Eve. The amplitudes $\alpha$ and $\beta$ must now be chosen so that
Eve's ancilla satisfies Eq.\,\eqref{anccon}. By using Eq.\,\eqref{Sepsilon}
and Eq.\,\eqref{Sepsilon1} we find Eve's states to be
\begin{eqnarray}
| E_k \rangle &=& \alpha \frac{1}{2\sqrt{2}} | s \rangle
-\frac{\beta}{2}\left(|\bar k k\rangle +\frac{1}{2}|k \bar k \rangle\right),
\end{eqnarray}
and evaluating the scalar products $\langle E_k|E_l \rangle$ and comparing with
Eq.\,\eqref{anccon} we deduce the constraints on the parameters $\alpha$ and
$\beta$,
\begin{eqnarray}\label{eq:ab1}
|\beta|^2 = \epsilon \qquad\mbox{and}\qquad \left|\alpha +
\frac{\beta}{2}\right|^2 = 1 -\frac{3}{4}\epsilon.
\end{eqnarray}
Since we have a freedom of global phase for $|S_\epsilon\rangle$ we can choose
$\beta$ to be real, i.e. $\beta=\sqrt\epsilon$. The only free parameter is
then the phase $\phi$ in
\begin{equation}\label{eq:phi}
\alpha + \frac{\beta}{2} = e^{\textrm{i} \phi}\sqrt{1-\frac{3\epsilon}{4}}.
\end{equation}
Eve wants to guess Alice's key-bit and constructs a state $\rho^{(k)}_E$ for
each outcome $k$ Alice could measure regardless of Bob's result. These
conditional ancilla states are
\begin{eqnarray}\label{eq:rhok}
\rho^{(k)}_E &=& \,\textrm{tr}\,_{A\&B} \left[P_k ~ |S_\epsilon\rangle\langle
S_\epsilon| \right],\\
&=&\frac{|\beta |^2}{8} |\bar k\bar k \rangle\langle \bar k\bar k|
\nonumber\\
&& + \, \frac{1}{4} \left(\alpha |s\rangle - \frac{\beta}{\sqrt 2} |\bar k
k\rangle \right)
\left(\alpha^* \langle s| - \frac{\beta^*}{\sqrt 2} \langle \bar k k|
\right),\nonumber
\end{eqnarray}
where $P_k$ is the POVM element for Alice measuring $k$. Note that all
$\rho^{(k)}_E$ are subnormalized to ${1 \over 4}$, the \emph{a priori} probability
that Alice will measure a particular $k$.
Owing to the symmetry between Alice and Bob, the ancilla states conditional to
Bob's measurement results are unitarily equivalent to these
$\rho^{(k)}_E$. Therefore, it does not matter whether Eve tries to learn
Alice's measurement results or Bob's.
\subsection{Incoherent Eavesdropping attacks}\label{sec:eavesdropping}
Let us summarize what we have found so far. For each qubit pair that Eve sends
to A\&B she will keep a qubit pair (the ancilla) for herself. In the first
iteration of the key generation scheme Alice and Bob will use the measurement
outcomes of two qubit pairs. Eve has thus two corresponding ancillas which she
can measure to guess A\&B's generated key-bit. For the second iteration Eve
will have four, for the third she will have eight ancillas and so on.
We suppose Eve has no means of storing her ancillas until classical
communication between Alice and Bob is done. She therefore has to measure them
individually as she creates them, without being able to include the classical
information in her measurement. Her measurement can be optimized so as to
maximize her mutual information with Alice. We will call her optimal strategy
for doing this an \emph{incoherent attack} as opposed to a coherent (joint)
measurement performed on the bunch of ancillas correlated through the key
generation process. The optimal POVM for the $\rho_E^{(k)}$ of
Eq.\,\eqref{eq:rhok} was found using the iterative procedure in
Ref.\,\cite{AccInf} and is analogous to the optimal POVM for the 6-state
protocol given there. The 4-member POVM consists of the projectors
\begin{eqnarray} \label{eq:el}
M_l&=&|e_l\rangle\langle e_l|, \nonumber\\
|e_l\rangle&=&{1 + \sqrt 3 e^{- \textrm{i} \phi} \over 2} |s \rangle
+ \sqrt{\frac{ 3}{2}} e^{- \textrm{i} \phi} | \bar l l \rangle,
\end{eqnarray}
for $l=1, 2, 3, 4$, where $\phi$ is the phase of Eq.\,\eqref{eq:phi}, and the
$M_l$ obey $\sum_{l=1}^4 \, M_l = \mathbbm{1}$. Note that the POVM is
independent of the noise parameter, $\epsilon$.
Interestingly, in the interval $0 < \epsilon < \bar{\epsilon}$ where
$\bar{\epsilon} = 0.1725$ (obtained numerically by solving a transcendental
equation), it was found that a 5-member POVM gives a slightly larger mutual
information than the 4-member POVM (less than 1\% larger). The fifth element
has the following expression
\begin{eqnarray} \label{eq:e5}
M_5&=&|e_5\rangle\langle e_5| \;\; \mbox{where} \;\; |e_5\rangle = \sqrt{2
\mu - 4 \mu^2} \sum_{l=1}^4 |e_l\rangle
\end{eqnarray}
where $\mu$ is a function of $\epsilon$ and $0 \leq \mu \leq 1/2$. All the
other $\{|e_l\rangle\}_{l=1}^4$ have to be modified in the following manner
\begin{eqnarray} \label{eq:sum_povm5}
|e_j\rangle \to \left( |e_j\rangle - \mu \sum_{l=1}^4 |e_l\rangle \right)
\end{eqnarray}
to ensure the elements sum up to identity. For the purpose of finding the
noise threshold, we can just use the simpler 4-member POVM since, as we shall
see later, the noise threshold is always much larger than $\bar{\epsilon}$ for
any iteration.
With the 4-member POVM, the joint probabilities $q_{kl}$ of Alice measuring
$k$ and Eve measuring $l$ are given by the same expression as Alice and Bob's
joint probabilities in Eq.\,\eqref{pkl} with $\epsilon$ replaced by a new
noise parameter $\eta$, quantifying the noise between Alice and Eve, with
\begin{equation}\label{eq:eta}
\eta(\epsilon)=\left(\sqrt{1-\frac{3\epsilon}{4}}
-\sqrt\frac{3\epsilon}{4}\right)^2.
\end{equation}
Note that for $\epsilon=0$ the noise between Alice and Eve reaches a maximum
($\eta=1$) and when $\epsilon=\frac{2}{3}$ there is no noise between Alice and
Eve ($\eta=0$).
We define the probabilities $q_s$ ($q_d$) for Alice and Eve having the same (a
particular different) measurement result in analogy to the $p_s$ ($p_d$) in
Eq.\,\eqref{eq:ps} (Eq.\,\eqref{eq:pd}) by
\begin{eqnarray} \label{eq:qsqd}
q_s(\eta) = q_{kk}=\frac{\eta}{4}
\quad \textrm{and} \quad q_d(\eta)= q_{k\neq l}=\frac{4-\eta}{12}.
\end{eqnarray}
When a key-bit is generated between Alice and Bob the joint probabilities
between Alice's and Eve's results are as given in Table \ref{tab:AE} where we
assumed that Bob grouped $\framebox{AB} = 0$ and $\framebox{CD} =1$
(similarly for other groupings). Note that the probabilities in Table
\ref{tab:AE} are again more anti-correlated than they are correlated since
$q_s \le q_d$ for all $\eta$. We compute the mutual information between Alice
and Eve from this table of probabilities. The middle column does not
contribute and the mutual information becomes
\begin{eqnarray}\label{eq:IAE} \nonumber
I_{A\&E}^{(1)}(\epsilon)&=&\frac{p_\textrm{succ}^{(1)}}{2} \left\{
(q_s^2+q_d^2) \,\textrm{log$_2$}\,\left[\frac{q_s^2+q_d^2}{2}\right]\right. \\
&& + \left.
4 q_d^2 \,\textrm{log$_2$}\,\left[q_d^2\right] + 2 q_s q_d \,\textrm{log$_2$}\,\left[q_s q_d\right]
\right. \nonumber \\ && - \left.
(q_s^2 + 3 q_d^2) \,\textrm{log$_2$}\,\left[\frac{q_s^2 + 3 q_d^2}{4}\right]
\right. \\ && - \left.
2 q_d (q_s + q_d) \,\textrm{log$_2$}\,\left[\frac{q_d(q_s + q_d)}{2}\right]
\right\}, \nonumber
\end{eqnarray}
where the prefactor is the same as in Eq.\,\eqref{eq:IAB}. This value gives
the upper bound to the amount of information Eve can obtain about the key
generated in the first iteration. It is valid for incoherent attacks only, by
whatever suitable method Eve might employ to extract Alice's key-bit.
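For completeness, the closed-form expression above may be evaluated as in the following Python sketch (our own; valid for $0\le\epsilon<\frac{2}{3}$, where $q_d>0$):
\begin{verbatim}
# Eve's first-iteration information I_AE^(1)(eps).
import numpy as np

def I_AE_1(eps):
    eta = (np.sqrt(1 - 0.75*eps) - np.sqrt(0.75*eps))**2
    qs, qd = eta / 4, (4 - eta) / 12
    psucc = (4 - eps) * (2 + eps) / 12
    terms = ((qs**2 + qd**2) * np.log2((qs**2 + qd**2) / 2)
             + 4*qd**2 * np.log2(qd**2)
             + 2*qs*qd * np.log2(qs*qd)
             - (qs**2 + 3*qd**2) * np.log2((qs**2 + 3*qd**2) / 4)
             - 2*qd*(qs + qd) * np.log2(qd*(qs + qd) / 2))
    return psucc / 2 * terms
\end{verbatim}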
\begin{table}[t]
\begin{tabular}{ccccccc|c}
\hline
\hline
\multicolumn{2}{c}{}
& \multicolumn{5}{c}{Eve's combinations} & \\[1ex]
\multicolumn{2}{c}{}
& AA & AB & AC, AD & CD & CC & \\
\multicolumn{2}{c}{}
& BB & BA & CA, DA &DC & DD & \\
\multicolumn{2}{c}{}
& & & BC, BD & & & \\
\multicolumn{2}{c}{}
& & & CB, DB & & & M.P.\\
\hline
Alice's
& 0
&$\frac{q_s^2+q_d^2}{4}$
&$\frac{q_sq_d}{2}$ &$\frac{q_d(q_s+q_d)}{4}$
&$\frac{q_d^2}{2}$
&$\frac{q_d^2}{2}$
& 1/2\\[1ex]
key-bit
&1
&$\frac{q_d^2}{2}$
&$\frac{q_d^2}{2}$ &$\frac{q_d(q_d+q_s)}{4}$
&$\frac{q_sq_d}{2}$
&$\frac{q_d^2+q_s^2}{4}$
& 1/2 \\[1ex]
\cline{2-8}
&M.P.
&$X$
&$Y$
&$Y$
&$Y$
&$X$
& 1\\
\hline
\hline
\end{tabular}
\caption{\label{tab:AE}Joint probabilities between Alice and Eve with the
marginals $X = \left(q_s^2+ 3 q_d^2\right)/4$ and $Y= q_d(q_d+q_s)/2$, for
the grouping AB = 0 and CD =1.}
\end{table}
In the $n$-th iteration Eve will have $2^n$ qubit pairs available for
measurement. She measures all qubit pairs individually and gets a sequence of
$2^n$ letters with A, B, C and D occurring $n_A, n_B, n_C$ and $n_D = 2^n
-n_A-n_B-n_C $ times, respectively. In all announced positions which
contribute to the generation of a key-bit, Alice always has the same
letter, say A. The probability of Eve measuring a sequence which contains
$n_A$ times A is given by, with $q_s$ and $q_d$ from Eq.\,\eqref{eq:qsqd},
\begin{equation}
q_n^{\textrm{A}}(n_A) = \frac{1}{4} \,q_s^{n_A} \,q_d^{2^n -n_A}.
\end{equation}
If Bob grouped $\framebox{AB} = 0$ and $\framebox{CD} =1$, say, the
probability of Eve getting a particular distribution $\{n_A, n_B, n_C, n_D\}$
and Alice having the key-bit 0 is then
\begin{eqnarray} \nonumber
q_n^0 (n_A, n_B)
&=& q_n^{\textrm{A}} (n_A) + q_n^{\textrm{B}} (n_B),\\
&=&\frac{q_d^{2^n }}{4} \left[ \left( \frac{q_s}{q_d}\right)^{n_A}
+ \left(\frac{q_s}{q_d}\right)^{n_B}\right],
\end{eqnarray}
and similarly for group 1. The marginal probability of Eve getting a
distribution $\{n_A, n_B, n_C, n_D\}$ is then
\begin{eqnarray} \nonumber
q_n(n_A, n_B,n_C,n_D)
&=& q_n^{0} (n_A, n_B) + q_n^1 (n_C,n_D),\\
&=&\frac{q_d^{2^n }}{4} \sum_{J=A}^D \, \left(\frac{q_s}{q_d}\right)^{n_J}.
\end{eqnarray}
The number of sequences with a particular distribution $\{n_A, n_B, n_C,
n_D\}$ is
\begin{equation}
\binom{2^n}{n_A,n_B,n_C,n_D}
= \frac{2^n! ~ \delta_{2^n, n_A+n_B+n_C+n_D}}{n_A!~n_B!~ n_C!~ n_D!},
\end{equation}
and Alice's marginals $q_n^k$ for $k=0,1$ correctly come out as
\begin{eqnarray}\nonumber
q_n^k &=& \sum_{n_A, n_B, n_C, n_D} \binom{2^n}{n_A,n_B,n_C,n_D} ~ q_n^k
(n_A, n_B,n_C,n_D) \\
&=& \frac{1}{2}.
\end{eqnarray}
The contribution to the mutual information that Eve shares with Alice from
the $n$-th iteration can now be calculated,
\begin{eqnarray}\label{eq:IAEn} \nonumber
I_{A\&E}^{(n)}(\epsilon)&=&\frac{p_\textrm{succ}^{(n)}}{2^n} ~
\sum_{k=0}^1 ~~ \sum_{n_A,n_B,n_C,n_D=0}^{2^n} \\ \nonumber
&& \times \binom{2^n}{n_A,n_B,n_C,n_D} \, q_n^k (n_A, n_B,n_C,n_D) \, \\
&& \times \,\textrm{log$_2$}\, \left[ \frac{q_n^k (n_A, n_B,n_C,n_D)}{q_n (n_A, n_B,n_C,n_D)
\, q_n^k }\right],
\end{eqnarray}
where $p_\textrm{succ}^{(n)}$ is again the probability of success in the $n$-th
iteration (Eq.\,\eqref{eq:psuccn}) and the factor of $2^{-n}$ gives the
mutual information per qubit pair used. The total mutual information that Eve
can reach if Alice and Bob perform infinitely many iterations is then
\begin{eqnarray}\label{eq:IAEtotal}
I_{A\&E}^{\textrm{total}} (\epsilon) &=& \sum_{n=1}^{\infty} \,I_{A\&E}^{(n)} (\epsilon).
\end{eqnarray}
In the noise-free case the mutual information of Eve vanishes $I_{A\&E}^{\textrm{total}}
(0)\propto \,\textrm{log$_2$}\, \left[ 1\right] =0$. This is clear since the channel between
Alice and Bob is perfect and the channel between Alice and Eve is completely
noisy ($\eta=1$).
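The sum in Eq.\,\eqref{eq:IAEn} is straightforward to evaluate numerically for small $n$. The sketch below (Python; an illustration, not part of the original analysis) takes $q_s$, $q_d$ and $p_\textrm{succ}^{(n)}$ as inputs, since Eqs.\,\eqref{eq:qsqd} and \eqref{eq:psuccn} appear earlier in the paper; it assumes only the normalization $q_s + 3q_d = 1$. For $q_s = q_d = 1/4$ it returns zero, consistent with the noise-free case above.
\begin{verbatim}
from itertools import product
from math import factorial, prod, log2

def I_AE_n(n, qs, qd, p_succ):
    # Per-qubit-pair mutual information of Eq. (IAEn) for the n-th
    # iteration; feasible for small n only (O(2^(3n)) terms).
    N = 2 ** n
    info = 0.0
    for nA, nB, nC in product(range(N + 1), repeat=3):
        nD = N - nA - nB - nC
        if nD < 0:
            continue
        mult = factorial(N) // prod(factorial(k) for k in (nA, nB, nC, nD))
        # q_n^0 from the text and its analogue for the group CD = 1.
        q0 = 0.25 * (qs**nA * qd**(N - nA) + qs**nB * qd**(N - nB))
        q1 = 0.25 * (qs**nC * qd**(N - nC) + qs**nD * qd**(N - nD))
        q = q0 + q1              # marginal q_n(nA, nB, nC, nD)
        for qk in (q0, q1):      # Alice's marginals are q_n^k = 1/2
            if qk > 0:
                info += mult * qk * log2(qk / (q * 0.5))
    return p_succ * info / N
\end{verbatim}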
\subsection{Security} \label{sec:security}
According to the Csisz\'ar-K\"orner Theorem in Ref.\,\cite{CK}, Alice and Bob
are able to share a secret key provided their mutual information $I_{A\&B}^{\textrm{total}} $
exceeds the mutual informations shared between Eve and one of the
communication partners $I_{A\&E}^{\textrm{total}}$ and $I_{B\&E}^{\textrm{total}}$. The CK-yield
is then
\begin{equation}\label{eq:CKyield}
Y_{CK} = I_{A\&B}^{\textrm{total}} -I_{A\&E}^{\textrm{total}}.
\end{equation}
The CK-yield determines the length of the secure key Alice and Bob can obtain
from the generated raw key of length $L$; namely the length of the secure key
will maximally be $Y_{CK} \, L$ (for one-way communication). The intersection
between $I_{A\&B}^{\textrm{total}}$ and $I_{A\&E}^{\textrm{total}}$ thus gives the final noise threshold below which
a secret key between Alice and Bob can be generated by one-way communication
that relies on error correction codes.
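Numerically, the threshold is the zero of the CK-yield. A minimal root-finding sketch, where $I_{A\&B}^{\textrm{total}}$ and $I_{A\&E}^{\textrm{total}}$ are assumed to be available as numerical routines (e.g.\ built on the sum above) and the yield is assumed to change sign once on the bracket:
\begin{verbatim}
from scipy.optimize import brentq

def noise_threshold(I_AB_total, I_AE_total, lo=0.0, hi=0.5):
    # Largest epsilon with positive CK-yield.
    return brentq(lambda e: I_AB_total(e) - I_AE_total(e), lo, hi)
\end{verbatim}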
\begin{figure}[t]
\begin{center}
{\includegraphics[width=0.4\textwidth]{Yield.eps}}
\caption{\label{fig:yield}Yield for the Singapore protocol for the 1st to
5th iteration in comparison with the tomographic 6-state protocol. For
the 6-state protocol we used the mutual information between Alice and
Eve obtained in Ref.\,\cite{AccInf}. The yields for the 3rd, 4th and 5th
iterations overlap too, as they did in Fig.\,\ref{fig:IAB}, and
can be regarded as a very close approximation to the yield in the limit
of infinitely many iterations. }
\end{center}
\end{figure}
In Fig.\,\ref{fig:yield} we plot the CK-yield for the 6-state protocol and
different iterations of the Singapore protocol. Observe that the yield of the
Singapore protocol is distinctly larger than the yield of the tomographic
6-state protocol from the second iteration onwards. For $\epsilon = 0$ the
gain is already 20\% and it increases significantly for larger noise
parameters. Further, the noise threshold for the first iteration of the
Singapore protocol is at $\epsilon = 0.409$ and increases to $\epsilon =
0.417$ for the 3rd iteration. Additional iterations will raise this value in
the 4th decimal place, so that $\epsilon = 0.417$ can count as the maximum
noise Alice and Bob can accept when establishing a secure key with the
Singapore protocol. In contrast, the 6-state protocol has its noise threshold
at the much smaller value of $\epsilon = 0.236$. Compared to the 6-state
protocol, the noise threshold of the Singapore protocol is a remarkable
76.7\% higher. This result is our key observation in this paper. Given a number of
qubit pairs, measuring the tetrahedron POVM and using the above key-generation
scheme thus leads to a raw key substantially longer than the one that could be
produced by the 6-state protocol. Additionally, Alice and Bob are still able
to share a secret key even when the noise level considerably exceeds the
6-state threshold.
\subsection{Discussion and conclusions}
The analysis of the eavesdropping attacks was carried out for the original
protocols, without any error-correction or privacy amplification schemes. It
is thus a comparison between the Singapore protocol and the 6-state protocol
in their pure form only up to the extraction of a \emph{raw key}. We have seen
the inherent potential of both protocols and analysed in detail how much
mutual information the communication partners Alice and Bob can establish
between each other when using the Singapore protocol. It turns out that A\&B
can already stop the key generation after the third iteration without losing
much, since the mutual information up to the third iteration is already very
close to the limiting value of infinitely many iterations. The comparison with
the 6-state protocol showed that the efficiency of the Singapore protocol is
up to 20\% larger than that of the 6-state protocol.
We continued our discussion by constructing incoherent eavesdropping attacks
under the following assumptions:
1) A source controlled by the eavesdropper Eve distributes the singlet state,
mixed with unbiased, white noise scaled by the noise parameter $\epsilon$.
2) Eve is the cause for all noise; and
3) Eve cannot store her ancillas and is thus not able to incorporate knowledge
of the classical communication between Alice and Bob when measuring her
ancillas. Additionally she is constrained to perform only individual
attacks and cannot measure correlated ancillas in a joint
measurement. Condition 1) is equivalent to the scenario where Alice sends a
qubit in a state orthogonal to one of the tetrahedron states from
Eq.\,\eqref{eq:tetra}, each with probability $\frac{1}{4}$. Eve could then
intercept the traveling qubit and produce an optimal clone and an
anti-clone. Here she also keeps two qubits which she can measure and Bob
receives a disturbed state. Together with condition 2) this leads to the
\emph{worst case scenario} for Alice and Bob, where Eve can realize optimal
cloning. This is reasonable since we are interested in absolute security
statements which rely only on the laws of physics and not on technical
abilities of Eve. On the other hand, we assumed condition 3) which is clearly
a relaxation to this strictness. But this is still a valid constraint given
that modern technology has not developed reliable quantum storage systems and
it is not yet feasible to perform joint measurements on demand. However, we
have also analysed coherent eavesdropping attacks on the Singapore protocol as
discussed in Ref.\,\cite{TetraCrypt} and will present a detailed and extended
report in due time.
Our discussion assumed throughout that Alice and Bob share a state of the
form given in Eq.\,\eqref{rhoAB}. A natural and open question is then how well
the Singapore protocol performs if the state differs from the above,
e.g.\ if the noise is somehow biased, and how it then compares to the
6-state protocol. Another issue worth addressing is the possible use that Eve
can make of the information gained when Alice and Bob perform a privacy
amplification or key purification by other means. However, under the
conditions 1) - 3), the Singapore protocol provides an efficient alternative
for generating a secret key between two communication partners. The measurement
of the tetrahedron POVM is from a practical point of view as feasible as the
comparable tomographic 6-state measurement (see Ref.\,\cite{OneLoop}) and the
efficiency and security under incoherent attacks is significantly higher.
\acknowledgements
We gratefully acknowledge inspiring discussions with S. M. Assad and
W. K. Chua. H. K. Ng would like to thank the Defence Science and Technology
Agency (DSTA) of Singapore for their financial support. J. Anders gratefully
acknowledges the financial and personal support of the Gottlieb Daimler und
Karl Benz-Stiftung. This work was supported by A$^*$Star Grant
No. 012-104-0040 and by NUS Grant WBS: R-144-000-109-112.
\section{Models and assumptions}
{\bf Model assumptions.} A standard concordance cosmology with $H_0= 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega{_m} = 0.30$, and $\Omega{_\Lambda} = 0.70$ is adopted, \blue{which gives a scale of 5.63 kpc/$\mbox{\ensuremath{^{\prime\prime}}}$ at $z=6.4$.} All magnitudes are presented in the AB system. Milky Way dust extinction is negligible at these wavelengths and is not corrected for.
{\bf Target selection.}
The two sources presented here are the first two targets observed as part of our Cycle 1 {\it JWST}\ program (Observation ID 1967; PI: M.Onoue).
This program targets 12 of the lowest-luminosity sources from the sample of quasars at redshift 6.0 to 6.4 (Ref.\cite{Matsuoka2018c}) discovered by the Hyper Suprime-Cam Subaru Strategic Program\cite{HSC-SSP2018}, an optical wide-field survey using the 8.2-meter Subaru telescope.
This deep survey, which is sensitive to quasars a factor of 10 less luminous than those discovered by shallower surveys such as the Sloan Digital Sky Survey, mitigates the bias toward high luminosity in previous high-redshift quasar studies.
{\bf Observations and data reduction.}
The data presented in this paper were taken with Module B of the NIRCam instrument, which has a field of view of $2.2\times2.2$ arcminutes$^2$.
The total exposures of 3100 seconds in the two filters (F356W and F150W) were obtained simultaneously.
A $4\times4$ primary and sub-pixel dithering pattern was employed to mitigate cosmic ray hits and bad pixels in the detector and to ensure sub-pixel resampling during the stacking step.
We used the \textsf{INTRAMODULEBOX} and \textsf{STANDARD} dithering patterns for the primary and sub-pixel dithers, respectively.
We used the \textsf{BRIGHT1} readout mode.
The data were processed using the standard procedures in {\it JWST}\ pipeline version 1.7.2.
The pre-calibrated ``Stage 2" image frames were downloaded from the MAST archive.
These images have the pipeline parameter reference files \textsf{jwst\_1009.pmap} for \targetone\ and \textsf{jwst\_1011.pmap} for {J2236+0032}, as registered in the {\it JWST}\ Calibration Reference Data System\footnote{\url{https://jwst-crds.stsci.edu}}.
For those individual frames, global background light was first subtracted using the \textsf{Background2D} function of \textsf{Photutils}\cite{Bradley2022}.
Those archived images clearly have horizontal and vertical stripe noise patterns, known as ``$1/f$ noise''.
This $1/f$ noise was subtracted by first masking bright objects, then collapsing the 2D images along each axis of the detectors and estimating the noise amplitudes by measuring sigma-clipped median values. These amplitudes were then subtracted from each row and column.
The horizontal stripes were measured for each of the four detector amplifiers separately.
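The following sketch (Python/\textsf{astropy}; illustrative only, not the pipeline code) shows the destriping logic just described: bright objects are masked, sigma-clipped medians are taken along rows (per amplifier strip) and columns, and the resulting amplitudes are subtracted. The assumption of four equal-width vertical amplifier strips reflects the NIRCam detector layout.
\begin{verbatim}
import numpy as np
from astropy.stats import sigma_clipped_stats

def destripe(frame, mask, namps=4):
    # mask: True on bright objects, which are excluded from the medians.
    out = frame.astype(float).copy()
    ny, nx = frame.shape
    # Horizontal stripes: one amplitude per row, per amplifier strip.
    for a in range(namps):
        sl = slice(a * nx // namps, (a + 1) * nx // namps)
        _, med, _ = sigma_clipped_stats(frame[:, sl], mask=mask[:, sl],
                                        axis=1)
        out[:, sl] -= med[:, None]
    # Vertical stripes: one amplitude per column.
    _, med, _ = sigma_clipped_stats(out, mask=mask, axis=0)
    out -= med[None, :]
    return out
\end{verbatim}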
Those post-processed Stage 2 image frames were then aligned and stacked with inverse-variance weighting using the Stage 3 pipeline, keeping the original position angle of the detector for the purpose of building the PSF library.
The F356W images were resampled with a pixel scale a factor of two smaller than that of the detector, using the drizzling algorithm implemented in the \textsf{Resample} step of the pipeline.
The F150W images were resampled with the original pixel scale. The final pixel scales for F356W and F150W are $0\farcs{}0315$ and $0\farcs{}0312$, respectively.
{\bf 2D image decomposition of quasar and host galaxy emission using \texttt{galight}.}
Decomposing the image of a quasar into central point source and extended host galaxy requires a high-quality model for the PSF. In this paper, we use the stars in our image for our PSF model. Space-based telescopes have a much sharper and more stable PSF than ground-based telescopes, and the {\it Hubble Space Telescope} ({\it HST}) has been used to measure quasar host galaxies up to redshift $z\sim2$\cite{Ding2020, 2004ApJ...614..568J, 2016ApJ...830..156M, 2019ApJ...882..141M}.
However, {\it HST}'s $\sim$90-minute orbit means that it is continually passing between Earth's shadow and direct sunlight, causing the telescope to expand and contract (``orbital breathing'') and giving rise to a time-dependent PSF. The difficulty of modeling the PSF has prevented quasar host galaxies from being detected significantly beyond $z\sim2$\cite{2020ApJ...900...21M}.
The {\it JWST}\ provides the opportunity to extend quasar host studies to much higher redshift. Unlike {\it HST}, the {\it JWST}\ orbits the second Sun-Earth Lagrange (L2) point, which is a much more stable thermal environment creating more stable optics and a more stable PSF. Moreover, it is sensitive at longer infrared wavelengths, where the bulk of the starlight from high-redshift quasar hosts should be detected.
Ding et al.\cite{2022ApJ...939L..28D} used {\it JWST}-CEERS collaboration data\cite{CEERSI} to make the first successful host galaxy detection for quasars up to $z\sim3.5$.
In this paper, we follow our previous strategy\cite{Ding2020, 2022ApJ...939L..28D} and build a PSF library by identifying all isolated, unsaturated stars in our images of sufficient signal-to-noise ratio.
We identified 9$|$5 PSFs in filter F150W$|$F356W in the images for \targetone, and 16$|$16 PSFs in filter F150W$|$F356W for {J2236+0032}.
We use our two-dimensional modeling software \texttt{galight}\cite{Ding2020} to fit the quasar images to a model of a scaled PSF (the spatially unresolved point-like quasar) and a PSF-convolved 2D S\'ersic\ profile (the host galaxy).
We adopt uniform priors for the effective radius R$_{\rm eff}$~$\in[0\farcs{}03 ,2\farcs{0}]$ and the S\'ersic\ index ($n)\in[0.3,9]$ of the host, to avoid un-physical parameter inference.
We follow our previous strategy\cite{2022ApJ...939L..28D} to obtain a weighted inference for the decomposition result. A brief description is as follows: After subtracting the remaining local background, we use each PSF in our library in turn to fit the image.
Each PSF's best-fit $\chi^2$ value determines its performance. We select groups of the two, three, and five PSFs from the library with the best $\chi^2$ performance and then average them using \texttt{psfr} (Birrer et al. in prep), utilizing features of \texttt{lenstronomy}\cite{2021JOSS....6.3283B}.
To optimize our modeling of the unresolved quasar emission, we consider both best-fit models using individual stars and average models based on the combined PSF stars described above. Thus, we add the three averaged PSFs as new members to the PSF library. Then, we take the results from the five top-performing PSF models in the updated library. We determine our final result parameters with uncertainties by weighting their $\chi ^2$ values. The inferred host filter magnitude, size, S\'ersic\ index, and other fit parameters are presented in Tab.~1. The fitting result in Figure~\ref{fig:decomp} adopts the PSF with the best performance.
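As an illustration of the weighting step, the sketch below combines the parameters of the five top-performing PSF models. The exact weighting function is not spelled out above; the choice $w_i \propto \exp[-(\chi_i^2-\chi_{\min}^2)/2]$ used here is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def weighted_inference(params, chisqs):
    # params: (5, n_par) best-fit parameters, one row per PSF model;
    # chisqs: the corresponding best-fit chi^2 values.
    chisqs = np.asarray(chisqs, float)
    w = np.exp(-0.5 * (chisqs - chisqs.min()))
    w /= w.sum()
    mean = np.average(params, weights=w, axis=0)
    # Weighted scatter between PSF models as the quoted uncertainty.
    err = np.sqrt(np.average((params - mean) ** 2, weights=w, axis=0))
    return mean, err
\end{verbatim}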
\blue{The sizes of our host galaxies are defined using the S\'ersic\ effective radius R$_{\rm eff}$\ along the semi-major axis measured by \texttt{galight}. In the literature, e.g.,\cite{Shibuya2015}, the circularized R$_{\rm eff}$\ is also used through R$_{\rm eff}$ $_{\rm, circ.}$ = R$_{\rm eff}$$_{\rm, maj.} \sqrt{q}$, where $q$ is the axis ratio ($b/a$). Further, the obtained host residual allows us to measure an empirical R$_{\rm 1/2}$ through the flux growth curve. We take the inferred S\'ersic\ position to measure the flux growth curve and infer the R$_{\rm 1/2}$ reported in Table~1. Note that this R$_{\rm 1/2}$ measures the galaxy light after convolution with the PSF, and is therefore larger than the S\'ersic\ R$_{\rm eff}$\ when a galaxy is compact relative to the PSF width.}
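The two size measures are simple to compute; a minimal sketch follows (the pixel-mask apertures are a simplification of exact aperture photometry):
\begin{verbatim}
import numpy as np

def circularized_reff(reff_maj, q):
    # R_eff,circ = R_eff,maj * sqrt(q), with q = b/a.
    return reff_maj * np.sqrt(q)

def half_light_radius(image, center, r_max=50.0, dr=0.5):
    # Empirical R_1/2 from the flux growth curve of the host residual,
    # centered on the inferred Sersic position.
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    radii = np.arange(dr, r_max, dr)
    growth = np.array([image[r <= ri].sum() for ri in radii])
    return radii[np.searchsorted(growth, 0.5 * growth[-1])]
\end{verbatim}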
\blue{Since \targetone\ has no clear host detection in the F150W band, we redo the fit and fix the host galaxy parameters (host S\'ersic\ index, R$_{\rm eff}$, ellipticity, and position angle) to the values inferred from the F356W band, allowing the central position and amplitude to vary. We still see no evidence for a host residual, but the model gives a formal host magnitude of $27.12\pm0.71$. We thus quote a lower limit to the magnitude of $26.4$ mag.}
The quasar host of {J2236+0032}\ appears to be a compact galaxy. As a result, we find that the S\'ersic\ $n$ is poorly constrained, and the resulting host residual is point-like with an inferred R$_{\rm eff}$\ of $0\farcs{}03$ (the lower limit of the prior), indicating that the central PSF is not fully removed.
This fit implies a
stellar mass above $10^{11.8}$~M$_\odot$. Thus, we refit this object, fixing $n=1$ in both F150W and F356W. This gives a much-improved fit. The host is quite elongated in F356W, suggesting that it is a disky, edge-on galaxy. We also find a consistent position angle between F356W and F150W, as shown in Fig.~\ref{fig:decomp}. Thus, for {J2236+0032}, we adopt the results with S\'ersic\ $n$ fixed to 1. \blue{We further fit the ``data $-$ quasar'' residual image (i.e., the host) directly with a S\'ersic\ model, relaxing the S\'ersic\ index; the inferred $n$ becomes 1.55, with R$_{\rm eff}$\ and host magnitude very consistent with those obtained with $n$ fixed to 1.}
{\bf Stellar mass and luminosity of the host galaxy.} We use the \texttt{gsf} package\cite{gsf} to fit the broad-band spectral energy distribution (SEDs) for our quasar host galaxies. A Chabrier initial mass function, with a simple stellar population (SSP)-like star formation history, is also adopted. Given that we have photometry in two bands, we need to make additional assumptions to obtain informative results. For \targetone, we have fixed the age of the stellar population to $0.5$ Gyr (i.e., the galaxy formed $0.4$ Gyr after the Big Bang), and metallicity $\log Z/Z_\odot = -0.7$~\cite{2022arXiv220803281T}.
While the presence of nebular lines can affect the broad-band photometry, our NIRSpec data suggest that they are weak (Onoue et al. in preparation); thus, we do not include them in our modeling.
Given the non-detection of the host galaxy in the F150W filter, we adopt the limiting magnitude of $26.4$ mag (see Sanity checks for details). The dust attenuation A$_{\rm v}$\ is fixed at $0.5$ by assuming $\beta=-2$\cite{Bouwens2014}, a subsolar parameterization\cite{Castellano2014} of the Meurer relation, and the Calzetti extinction law\cite{Calzetti2000}. For {J2236+0032}, for which we have detections in two bands, we also fix $\log Z/Z_\odot$ and A$_{\rm v}$\ as above but allow the age to be a free parameter with a uniform prior of $0.01-1$ Gyr. We use the inferred SED templates to estimate the values of $M_{*}$ and $M_{\rm uv}$. \blue{To estimate the uncertainty of our fits, we use a range of parameter assumptions to estimate how much the resulting stellar masses and UV fluxes vary; for both targets, we vary $\log Z/Z_\odot$ over the range [-1.0, -0.3] and A$_{\rm v}$\ over the range [0.3, 0.7]. For \targetone, we also vary the age over [0.3, 0.7] Gyr.} The final fits and parameters of the two quasars' host galaxies are shown in Fig.~\ref{fig:sed} and Tab.~1.
There are two companion objects at projected distances of $\sim9.2$ and $15.5$~kpc from \targetone.
Their optical-to-NIR photometry is presented in Table~E1, where the optical CModel photometry ($g$,$r$,$i$,$z$,$y$) is derived from Subaru/HSC\cite{SSP_DR3} and the NIR photometry is derived by running \texttt{galight}\ on the NIRCam images.
Object \#1 is marginally detected in the HSC $i$-band with signal-to-noise ratio of 5.
Object \#2 is also detected in the HSC $r$ and $i$ bands with signal-to-noise ratios of 5 and 7, respectively. This effectively rules out the possibility that they are at the same redshift as the quasar at $z = 6.34$, as intergalactic Lyman alpha absorption should make them invisible in $r$ and $i$.
{\bf Sanity checks.} The inferred host to quasar flux ratio is low, $<10\%$, for \targetone\ in F356W and {J2236+0032}\ in F150W. We perform several sanity checks for these two cases to confirm that the detections of the host in our quasars are real. First, our final reduced data are co-added using 16 dithers. To check whether
the apparent host is caused by a random ghost from any particular dither frame, we reanalyze the data using the first 8 and the second 8 dithers separately. We are able to detect the host from both halves of the data. To ensure that the apparent host galaxy flux is not dominated by a mismatched PSF core,
we mask the quasar center (using a $0\farcs{}12$ and $0\farcs{}06$ radius aperture for F356W and F150W, respectively) and redo the fit. Again, the host is well detected. Finally, to rule out the possibility that the residual is an artifact of the PSF stars we selected, we randomly choose two PSF stars and fit one with the other. The PSF residual does not show any extended feature, while when we use either of these PSF stars for our PSF model, we clearly detect the quasar host. We show these test results for \targetone\ F356W in Fig~\ref{fig:sanity}.
We also test the fidelity of the host magnitude inference using a joint fit across both bands.
Fixing the host galaxy parameters for {J2236+0032}\ (where the host {\em is} detected in F150W) to the S\'ersic\ parameters inferred from F356W, we find a very similar host magnitude to the value obtained when all parameters are allowed to vary: the inferred host magnitude changes only from $25.63$ to $25.54$ mag.
\begin{figure*
\centering
\renewcommand\thefigure{E1}
\includegraphics[width=0.9\textwidth]{NA_temp_z6_decomp/z67QSO_z_M1450.pdf}\\
\caption{\textbf{The distribution of known quasars at redshift $\mathbf{z>5.8}$.}
The y-axis indicates the absolute magnitudes at rest-frame 1450 \AA.
The two target quasars in this work are shown in red (\targetone\ at $z=6.34$, and {J2236+0032}\ at $z=6.40$), while other low-luminosity quasars from the Subaru/HSC sample are shown in blue.
The {\it JWST}\ 12 Cycle 1 targets in GO \#1967 are highlighted with open circles.
Other known quasars are shown in black. \label{fig:shellqs_sample}}
\end{figure*}
\begin{table}
\centering
\begin{tabular}{lccccccc}
\hline
& HSC-g & HSC-r & HSC-i & HSC-z & HSC-y & F150W & F356W \\ \hline
Object \#1 & $>26.9$ & $>26.5$ & $25.9 \pm 0.2$ & $24.9 \pm 0.2$ & $24.2 \pm 0.3$ & $22.85 \pm 0.02$ & $21.49 \pm 0.02$ \\
Object \#2 & $>26.8$ & $26.1 \pm 0.2$ & $25.5 \pm 0.2$ & $>25.7$ & $>24.6$ & $24.64 \pm 0.02$ & $23.55 \pm 0.02$ \\ \hline
\end{tabular}\label{tab:sed_companion}
\flushleft{
\textbf{Table E1 $|$ The optical-to-NIR photometry of two bright companion galaxies to \targetone.} Three-sigma lower-limit magnitudes are given in the filters in which the objects are not detected.
}
\end{table}
\begin{figure*
\centering
\renewcommand\thefigure{E2}
\includegraphics[trim = 0mm 0mm 20mm 0mm, clip, width=0.8\textwidth]{J2255_SED_map.pdf}\\
\includegraphics[trim = 0mm 0mm 20mm 0mm, clip, width=0.8\textwidth]{J2236_SED_map.pdf}\\
\caption{\textbf{The SED inference using the host galaxy two-band photometry based on \texttt{gsf}.} The red data points with errors indicate the inferred host fluxes, and blue diamonds show the predictions using the best-fit model. The assumptions and SED model inference are indicated in the plots.}
\label{fig:sed}
\end{figure*}
\begin{figure*
\centering
\renewcommand\thefigure{E3}
Use the first 8 dither images.\\
\includegraphics[trim = 93mm 10mm 153mm 10mm, clip, width=0.95\textwidth]{J2255+0251_F356W_first_half_qso_final_plot.pdf}\\
Fit with a central mask
\includegraphics[trim = 93mm 10mm 153mm 10mm, clip, width=0.95\textwidth]{run_with_mask_qso_final_plot.pdf}\\
Fit one star with the other star
\includegraphics[trim = 93mm 10mm 153mm 10mm, clip, width=0.95\textwidth]{PSF_fit_PSF_F356W_qso_final_plot.pdf}\\
\caption{\textbf{Three sanity checks to confirm the host detection is real.} The top panel shows the decomposition results using the first 8 of the 16 dither images. The middle panel shows the fit excluding a $0\farcs{}12$-radius mask at the quasar center. The bottom panel presents the fit using one PSF to subtract another PSF, showing no evidence for an extended feature. \label{fig:sanity}}
\end{figure*}
\section{Introduction}
Template matching by normalized cross correlation (NCC) is widely used in computer vision applications such as image registration, stereo matching, motion estimation, object detection and localization, and visual tracking \cite{avants2008symmetric, heo2011robust, lewis1995fast, luo2010fast, smeulders2014visual}. Here we show that the robustness of the algorithm can be improved by applying deep learning. Namely, if the template and source images are preprocessed by a convolutional network, the rate of false matches can be significantly reduced, and NCC output becomes more useful for rejecting suspect matches. The training of the convolutional network follows the ``siamese network'' method of learning a measure of similarity from pairs of input images \cite{chopra2005learning}. The learning is only weakly supervised, in the sense that a true match to the template should exist somewhere in the source image, but the location of that match is not an input to the learning procedure. If NCC already works fairly well for template matching with raw images, then the incorporation of deep learning is expected to improve its accuracy further.
We test the power of our technique using images acquired by serial section electron microscopy (EM). NCC template matching is commonly applied to image patches in the course of assembling a 3D image stack from 2D images of individual sections \cite{preibisch2009bead, saalfeld2012elastic}. Achieving highly precise alignment between successive sections is critical for the accuracy of the subsequent step of tracing fine neurites through the image volume. Erroneous matches may arise because sections are deformed, distorted, or damaged during collection, and defects may also arise during imaging \cite{saalfeld2012elastic}. An image of a (0.1 mm)$^3$ brain volume is roughly a teravoxel \cite{lichtman2014bigdata}, and a high quality assembly could require up to 100 million template matches \cite{saalfeld2012elastic}. Every false match leads to tracing errors in multiple neurites, so even a small error rate across so many matches can have devastating consequences.
In empirical tests, we find that the error rate of template matching on raw serial EM images is on the order of 1 to 3\%. Preprocessing with a bandpass filter lowers the error rate \cite{berg2001geometric}, and substituting convolutional networks improves upon that error rate by a factor of 2-7x. The overall result is an error rate of 0.05 to 0.30\%. A common strategy for reducing false matches is to reject those that are suspect according to some criteria \cite{saalfeld2012elastic}. This can be problematic if too many true matches are rejected and there are not enough matches to describe the deformation in a given region, which can also lead to tracing errors in multiple neurites. We show that NCC output provides superior rejection efficiency once deep learning is incorporated. To achieve zero false matches under our most accurate conditions, we need only reject 0.12\% of the true matches based on NCC output, an improvement of 3.5x over the efficiency based on a bandpass filter.
The idea of using deep learning to improve NCC template matching for image correspondences is simple and obvious, but has been little explored as far as we know. The closest antecedent of our work introduced an NCC layer inside a network used for the person identification problem \cite{subramaniam2016deep}. Recent work applying deep learning to image correspondences avoids template matching and instead trains a convolutional network to directly output a vector field \cite{dosovitskiy2015flownet, long2014convnets, pathak2016learning}. This approach is well-suited for computing dense correspondences, while template matching makes sense for computing sparse sets of corresponding points.
An advantage of the template matching approach is its interpretability. The height of an NCC peak provides information about the goodness of a match, and the width of an NCC peak provides information about the accuracy of spatial localization. Furthermore, one can examine the convolutional network output to see what image features are being used to compute matches. In our application to serial EM images, it appears that the network detects mitochondria. It learns to suppress image defects that arise from brightness-contrast fluctuations and damaged sections. It also learns to suppress high contrast edges of blood vessels. When such edges are present, they tend to produce a strong NCC peak, but the peak is very wide due to the near straightness of the edges, leading to imprecise spatial localization.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{examples_original.jpg}
\caption{Three examples of template matching by 3D NCC correlograms. The large image in each example is the source image, and the small inset is the template. The correlogram plots the Pearson correlation coefficient $r$ values for all possible translations of the template across the source. The first image shows a correlogram with a well-localized maximum ("peak"). The second and third images show correlograms that fail to produce a clearly distinct maximum.}
\label{ncc_examples}
\end{figure}
\section{Methods}
\subsection{Weakly Supervised Similarity Metric Learning by Siamese Convolutional Nets}
The inputs to the NCC are a template image and a larger source image. If the template image is placed somewhere inside the borders of the source image, the template pixels are in one-to-one correspondence with a subset of the source pixels, and the Pearson correlation coefficient can be computed for the pixel pairs. This computation can be done for all placements of the template image inside the source image, yielding an output image called the normalized cross-correlogram or correlogram for short (see Fig. \ref{ncc_examples}). The location in the correlogram with the largest Pearson coefficient is considered the location at which the template matches the source.
Ideally, the correlogram should have a high and narrow peak only at the location of a true match, and should be low at all other locations. There should be no peak at all if there is no good match between the template and source. In practice, there can be peaks at spurious locations, leading to false matches. Another failure mode is a wide peak near a true match, leading to imprecise spatial localization.
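For concreteness, the correlogram and match location can be computed with off-the-shelf tools; the short sketch below uses \textsf{scikit-image} (our own implementation differs, see Section 2.2), with a synthetic source whose template is cut from a known position.
\begin{verbatim}
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
source = rng.random((512, 512))
template = source[200:360, 100:260].copy()   # a 160px patch of the source

corr = match_template(source, template)      # Pearson r, valid placements
peak = np.unravel_index(np.argmax(corr), corr.shape)
r_max = corr[peak]                           # height of the primary peak
assert peak == (200, 100)                    # the known cut-out position
\end{verbatim}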
To reduce the failure rate, one could apply preprocessing to the template and source images prior to computing the NCC. The preprocessing step can be trained from data using standard methods for supervised learning of a similarity metric \cite{kulis2013metric, yang2006distance}. Given pairs of points in a space $X$ that are known to be similar or dissimilar, and a similarity measure $S:\mathbb{R}^n\times\mathbb{R}^n\to \mathbb{R}$, the method is to learn an embedding $\psi:X\rightarrow \mathbb{R}^n$ such that $S(\psi(x),\psi(y))$ is large for similar $(x,y)$ and small for dissimilar $(x,y)$. If the embedding function $\psi$ is a neural network, then the technique is known as "siamese networks"\cite{bromley1993signature} because identical networks are applied to both $x$ and $y$ \cite{chopra2005learning}.
We train siamese convolutional networks by repeating the following for template-source pairs that are known to contain a true match:
\begin{enumerate}
\item Compute the correlogram for source and template image.
\item Find the peak of the correlogram.
\item Make a gradient update to the convolutional net that increases the height of the peak.
\item Draw a small box around the peak of the correlogram.
\item Find the maximum of the correlogram outside the box, and call this the "secondary peak."
\item Make a gradient update to the convolutional net that decreases the secondary peak.
\end{enumerate}
The cost function for the above algorithm is the difference in the heights of the primary and secondary peaks, which we will call the "correlation gap." The cost function has two purposes, depending on the shape of the correlogram (Fig. \ref{loss_function_2D}). If the primary peak is wider than the box, then the secondary peak will not actually be a local maximum (Fig. \ref{loss_function_2D}). In this case, the cost function encourages narrowing of the primary peak, which is good for precise spatial localization. The size of the box in the algorithm represents the desired localization accuracy. In other cases, the secondary peak will be a true local maximum, in which case the purpose of the cost function is to suppress peaks corresponding to false matches.
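A framework-agnostic sketch of the correlation gap (the actual implementation is a differentiable TensorFlow layer, Section 2.2):
\begin{verbatim}
import numpy as np

def correlation_gap(corr, box=20):
    # Primary peak minus the largest value outside a box x box window
    # around it. Maximized for similar pairs; for dissimilar pairs the
    # objective is instead to decrease the primary peak itself.
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    primary = corr[i, j]
    masked = corr.copy()
    h = box // 2
    masked[max(i - h, 0): i + h + 1, max(j - h, 0): j + h + 1] = -np.inf
    secondary = masked.max()
    return primary - secondary
\end{verbatim}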
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{plots.jpg}
\caption{Loss function intuition. (a) 2D toy example. Left, make a correlogram with a wide peak more narrow. Right, promote the first peak and diminish the second peak. (b) Real 3D example. Generate NCC correlogram from template and source, then promote the first peak and diminish the second peak.}
\label{loss_function_2D}
\end{figure}
The above algorithm corresponds to similarity metric learning if the primary peak indeed represents a true match and the secondary peak represents a false match. In fact, the NCC does have a nonzero error rate, which means that some of the training examples are incorrectly labeled. However, if the error rate starts out small, one can hope that the similarity metric learning will make it even smaller. Our algorithm requires supervision, in the sense that a good match should exist between each source-template pair. However, the location of the match is not required as an input to the learning, so the supervision is fairly weak.
By itself, the above algorithm may lead to pathological solutions in which the network is able to minimize the cost function by ignoring the input image. To avoid these solutions, one can additionally train on source-template pairs that are known to contain no good match. Since these are dissimilar pairs, the goal of learning is to reduce the peak of the NCC.
\begin{enumerate}
\item Compute the correlogram for source and template image.
\item Find the peak of the correlogram.
\item Make a gradient update to the convolutional net that decreases the height of the peak.
\end{enumerate}
Dissimilar pairs can be artificially generated by permuting the source and template images within a batch.
\label{gen_inst}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{architecture.jpg}
\caption{Architecture diagram of two channel neural network with NCC block as a bridge. a) Perspective view of the network. Gray boxes represent residual blocks that are either connected to each other through skip connections, max-pooling and convolution layers. b) Left view where U-Net architecture can be easily seen. c) Top view shows two Siamese networks with weight-sharing.}
\label{architecture}
\end{figure}
\subsection{Implementation}
Fig. \ref{architecture} depicts siamese convolution networks, i.e., two networks with the same architecture and weight-sharing between the networks. The architecture is FusionNet \cite{hegde2016fusionnet}, which is a variant of U-Net. Instead of convolution blocks it uses residual blocks consisting of three convolution layers and a skip connection from the first layer to the last. Instead of concatenation at each level, the output of the left side is summed with the right side. This network also enforces symmetric input-output resolution.
The input size of both networks plays a crucial role because it defines the sparseness of features that the network will preserve for optimizing the NCC. The template and the source images are squares with sides of 160px and 512px, respectively. We consistently use $3\times 3$ convolution layers with $\tanh$ non-linearity. At each level the number of features is doubled, starting with 8 and going up to 64 channels. The output of FusionNet is passed through another convolution layer that feeds into the NCC layer.
The NCC layer is implemented in TensorFlow using FFT \cite{lewis1995fast} and can handle batches and multiple channels effectively. The loss layer takes as input the NCC correlogram, computes the maximum peak, removes a 20px square window centered at the peak, then computes the next maximum value that represents the second peak. The loss is defined to maximize the difference between the first and second peaks during training.
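A numpy sketch of the FFT-based NCC (following Lewis \cite{lewis1995fast}; the TensorFlow layer additionally handles batches and channels) is given below. Window sums are accumulated with integral images, and the output covers all valid placements of the template in the source.
\begin{verbatim}
import numpy as np
from numpy.fft import rfft2, irfft2

def ncc_fft(source, template):
    H, W = source.shape
    h, w = template.shape
    n = h * w
    t = template - template.mean()
    # Cross-correlation with the zero-mean template via FFT.
    num = irfft2(rfft2(source, (H, W)) * np.conj(rfft2(t, (H, W))),
                 (H, W))[: H - h + 1, : W - w + 1]
    # Window sums of source and source**2 via integral images.
    def window_sums(img):
        ii = np.pad(np.cumsum(np.cumsum(img, 0), 1), ((1, 0), (1, 0)))
        return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]
    s1, s2 = window_sums(source), window_sums(source ** 2)
    denom = np.sqrt(np.maximum(s2 - s1 ** 2 / n, 1e-12) * (t ** 2).sum())
    return num / denom
\end{verbatim}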
Training alternated between a batch of eight source-template pairs and then the same batch with randomly permuted source-template pairings. Gradient descent used the Adam optimizer with learning rate of 0.0005. Training converged within 10,000 iterations. The training data consisted of pairs of inputs sampled from an affine-aligned stack of images that contained non-affine deformations. It is recommended either to choose a dataset with enhanced pathological cases that the network is expected to handle or to use data augmentation for covering the problem space of possible damages and deformations. During the training we randomly cropped the source and template images such that the position of the peak is randomly distributed. Also, to increase the size of the training dataset we used random rotations of both inputs by 90, 180, 270 degrees.
\section{Experiments}
We validated our model on 95 serial images from the same unpublished EM dataset used for training, with a resolution of 7x7x40nm$^3$. Each image was 15,000x15,000px, and had been roughly aligned with an affine model but still contained considerable non-affine distortions up to 250px (full resolution).
From the serial images, we produced three datasets: raw images (\textit{raw}), images preprocessed with a circular Gaussian bandpass filter that was optimally tuned to produce a low number of false matches (\textit{bandpass}), and images preprocessed with our convolutional net by applying the larger convolutional channel across the entire image, upsampling and blending accordingly (\textit{convnet}). We then varied the parameters of our template matching procedure, varying the template image size between small and large (\textit{160px} and \textit{224px}), and matching between neighboring images (\textit{adjacent}) as well as the next-nearest neighbors (\textit{across}). For matching between next-nearest neighbors, a slightly different bandpass parameter was used, as the optimal filter for next-nearest neighbors differed from the filter for neighboring images. The network used was identical in all experiments, having been trained on 160px template and 512px source patches from adjacent sections. Table \ref{table:parameters} summarizes the training and experiment parameters.
\begin{table}[h]
\caption{Image parameters for training and testing. Unless otherwise noted, resolutions are given after 3x downsampling where 1px represents 21nm.}
\centering
\small
\begin{tabular}{l*{6}{c}r}
\toprule
& \multicolumn{1}{c}{Training} & \multicolumn{2}{c}{Adjacent} & \multicolumn{2}{c}{Across}\\
\cmidrule{2-6}
Template size & 160px & 160px & 224px & 160px & 224px \\
Source size & 512px &\multicolumn{2}{c}{512px} &\multicolumn{2}{c}{512px} \\
Section depth & 40nm &\multicolumn{2}{c}{40nm} &\multicolumn{2}{c}{80nm} \\
\cmidrule{2-6}
Section size (full res.) & ~33,000px & \multicolumn{2}{c}{15,000px} & \multicolumn{2}{c}{15,000px} \\
No. of sections & 1040 & \multicolumn{2}{c}{95} & \multicolumn{2}{c}{48}\\
No. of matches & 10,000 & \multicolumn{2}{c}{~144,000} & \multicolumn{2}{c}{~72,000}\\
\cmidrule{2-6}
Bandpass $\sigma$ (full res.) & N/A & \multicolumn{2}{c}{2.0-12.0px} & \multicolumn{2}{c}{2.5-25.0px}\\
\bottomrule
\end{tabular}
\label{table:parameters}
\end{table}
In each experiment, both the template and the source images were downsampled by a factor of 3 before NCC, so that 160px and 224px templates were 480px and 672px at full resolution, while the source image was fixed at 512px downsampled (1,536px full resolution). The template matches were taken in a triangular grid covering the image, with an edge length of 400px at full resolution (Fig. \ref{vectorfield} shows the locations of template matches across an image).
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{vectorfield.jpg}
\caption{Example displacement vector fields for each image condition. Representation of output of template matching in a regular triangular grid (edge length 400px at full resolution) across a 15,000x15,000px image. Each node represents the centerpoint of a template image used in the template matching procedure. Each vector represents the displacement of that template image to its matching location in its source image. Matches shown are based on 224px template size on across (next-nearest neighbor) sections. Raw: matches on the raw images. Bandpass: matches on images filtered with a Gaussian bandpass filter; Convnet: matches on the output of the convolutional network processed image.}
\label{vectorfield}
\end{figure}
Our first method to evaluate performance was to compare error rates. Errors were detected manually, using a tool that allowed human annotators to inspect the template matching inputs and outputs. The tool is based on the visualization of the displacement vectors that result from each template match across a section, as shown in Fig. \ref{vectorfield}. Any match that significantly differed (over 50px) from its neighbors was rejected, and matches that differed from neighbors but not significantly were individually inspected for correctness by visualizing a false color overlay of the template over the source at the match location. The latter step was needed as there were many true matches that deviated prominently from their neighbors: the template patch could contain neurites or other features parallel to the sectioning plane, resulting in large motions of specific features in a random direction that may not be consistent with the movement of the larger area around the template (see Fig. \ref{difficult_match} for an example of this behavior). Table \ref{table:error_rates} summarizes the error counts in each experiment.
\begin{figure}[h]
\centering
\includegraphics[width=250pt]{difficult_match.jpg}
\caption{Manual inspection difficulties. a) The vector field around a match (circled in red) that prominently differs from its neighbors. b) The template for the match, showing many neurites parallel to the sectioning plane. c) The false color overlay of the template (green) over the source image (red) at the matched location, establishing the match as true.}
\label{difficult_match}
\end{figure}
\begin{table}[h]
\caption{False matches for each image condition across experiments. Total possible adjacent matches: 144,500. Total possible across matches: 72,306.}
\centering
\small
\begin{tabular}{l*{9}{c}r}
\toprule
& \multicolumn{4}{c}{Adjacent} & \multicolumn{4}{c}{Across}\\
\cmidrule{2-9}
Template size & \multicolumn{2}{c}{160px} & \multicolumn{2}{c}{224px} & \multicolumn{2}{c}{160px} & \multicolumn{2}{c}{224px} \\
\hline
Raw & 1,778 & 1.23\% & 827 & 0.57\% & 2,105 & 2.91\% & 1,068 & 1.48\% \\
Bandpass & 480 & 0.33\% & 160 & 0.11\% & 1,504 & 2.08\% & 340 & 0.47\% \\
Convnet & \textbf{261} & \textbf{0.18\%} & \textbf{69} & \textbf{0.05\%} & \textbf{227} & \textbf{0.31\%} & \textbf{45} & \textbf{0.06\%} & \\
\bottomrule
\end{tabular}
\label{table:error_rates}
\end{table}
To ensure that the reduction in false matches did not come at the expense of true matches, we evaluated the overlap between the true match sets produced by the bandpass images and our convnet images. Table \ref{table:true_overlap} summarizes how many true matches were unique to the bandpass, the convnet, or neither.
\begin{table}[h]
\caption{Dissociation of true matches set between the bandpass and convnet. Counts of true matches per category. Total possible adjacent matches: 144,500. Total possible across matches: 72,306.}
\centering
\small
\begin{tabular}{l*{9}{c}r}
\toprule
& \multicolumn{4}{c}{Adjacent} & \multicolumn{4}{c}{Across}\\
\cmidrule{2-9}
Template size & \multicolumn{2}{c}{160px} & \multicolumn{2}{c}{224px} & \multicolumn{2}{c}{160px} & \multicolumn{2}{c}{224px} \\
\hline
Neither & 144 & 0.10\% & 54 & 0.04\% & 162 & 0.22\% & 33 & 0.05\% & \\
Bandpass only & 117 & 0.08\% & 15 & 0.01\% & 65 & 0.09\% & 12 & 0.02\% \\
Convnet only & 336 & 0.23\% & 106 & 0.07\% & 1342 & 1.86\% & 307 & 0.42\% \\
\bottomrule
\end{tabular}
\label{table:true_overlap}
\end{table}
To assess how easily false matches could be removed, we evaluated matches with the following criteria:
\begin{itemize}
\item \textit{norm}: The Euclidean norm of the displacement required to move the template image to its match location in the source image, at full resolution.
\item \textit{r max}: The first peak of the correlogram serves as a proxy for confidence in the match.
\item \textit{r delta}: The difference between the first peak and the second peak of the correlogram (after removing a 5px square window surrounding the first peak) provides some estimate of the certainty that there is no other likely match in the source image; this is the criterion the convnet was trained to optimize.
\end{itemize}
These criteria serve as heuristics for accepting or rejecting matches, approximating the unknown partition into true and erroneous matches. The less overlap between the actual distributions when projected onto the criterion dimension, the more useful that criterion. Fig. \ref{criteria_distributions} plots these three criteria across the three image conditions.
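Computing the three criteria for one match is straightforward; a sketch follows (the 3x factor converts the downsampled correlogram offset back to full resolution, and \textit{norm} is measured relative to the template's nominal position in the source):
\begin{verbatim}
import numpy as np

def match_criteria(corr, nominal_ij, window=5):
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    r_max = corr[i, j]
    masked = corr.copy()
    h = window // 2
    masked[max(i - h, 0): i + h + 1, max(j - h, 0): j + h + 1] = -np.inf
    r_delta = r_max - masked.max()            # gap to the second peak
    norm = 3.0 * np.hypot(i - nominal_ij[0], j - nominal_ij[1])
    return norm, r_max, r_delta
\end{verbatim}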
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{error_metrics.jpg}
\caption{Match criteria for adjacent 224px experiment. (a) Distributions of the three template match criteria for each image condition. Red bars represent counts of false matches that were manually identified. Green bars represent counts of true matches. (b) Percentage of true matches that must be rejected to reduce the error rate when varying the \textit{r delta} criterion. See the Appendix for distributions \& rejection curves from other experiments.}
\label{criteria_distributions}
\end{figure}
\section{Discussion}
In all experiments, the images preprocessed by our convnet consistently produced fewer false matches than the other two sets of images, with a reduction factor of 2-7x (see Table \ref{table:error_rates}). The convnet produced matches in the vast majority of cases in which the bandpass produced matches. It did introduce some false matches that the bandpass did not, but it correctly identified 3-20 times as many additional true matches (see Table \ref{table:true_overlap}). The majority of the false matches in the convnet output were also present in the bandpass case, which establishes the convnet as superior to, and not merely different from, the bandpass.
Notably, the reduction in error from applying the convnet generalized well to both harder (across sections at 160px and 224px template sizes) and easier (adjacent sections at 224px template size) tasks. In fact, the convnet provided larger gains in experiments other than on the task on which it was trained. This ability to generalize is crucial to applications as different template matching parameters are often needed at different stages of the alignment process. The results suggest that a single convnet may be used throughout the range of speed-accuracy tradeoffs (smaller-larger template size) as well as in dealing with missing sections (across).
Inspecting the filtered images, we find that the convnet seems to identify keypoints (small dark objects that localize well, such as mitochondria) and suppress objects that do not localize well (e.g. lines, such as cell membranes, or consistently patterned regions, such as regions inside cell bodies and blood vessels). See Fig. \ref{ncc_examples2} for examples from the convnet image set. The convnet fails when the template does not contain the keypoints it has learned to identify. The last column in Fig. \ref{ncc_examples2} contains a template that is almost completely occupied by a cell body, and the convnet failed to find the true match. The raw image can be more useful in those cases, because it can match on corner-like edges (see Sup. Fig. 8 in the Appendix). This can be improved by biasing the training set with more of these pathological examples.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{examples.jpg}
\caption{Difficult examples from the dataset with damaged areas \& local brightness changes. Correlograms are projected to be 2D with white pixels having higher \textit{r} values. The last column is an example of failure by the convnet.}
\label{ncc_examples2}
\end{figure}
Fortunately, when these false matches do occur with the convnet, we can reject them efficiently using our match criteria. The convnet transformed the true match distributions for \textit{r max} and \textit{r delta} to be more left-skewed, while the erroneous match distribution for \textit{r delta} remain with lower values (see Fig. \ref{criteria_distributions}a), resulting in a distribution more amenable to accurate error rejection. For the case of adjacent sections with 224px templates, we can remove every error in our convnet output by rejecting matches with an \textit{r delta} below 0.05, which removes only 0.12\% of the true matches. The same threshold also removes all false matches in the bandpass outputs, but removes 0.40\% of the true matches (see Fig. \ref{criteria_distributions}b). This 3.5x improvement in rejection efficiency is critical to balancing the trade-off between complete elimination of false matches and retaining as many true matches as possible.
The improvement in rejection efficiency also generalized well across experiments, as evident in the Appendix, Sup. Fig. 15. Achieving a 0.1\% error rate on the most difficult task we tested (across, 160px template size) required rejecting 20\% of the true matches on bandpass, while less than 1\% rejection of true matches was sufficient with the convnet.
\section{Conclusions}
Combining NCC with deep learning reduces false matches from template matching. It also improves the efficiency by which those false matches can be removed so that a minimal number of true matches are rejected. This is a very promising technique that offers us the ability to significantly increase the throughput of our alignment process while maintaining the precision we require. We expect this technique to serve well in other areas that demand such high-quality template matching.
We would like to explore how well this technique generalizes from one EM dataset to another, as well as to investigate whether transfer learning could benefit the segmentation convolutional network that follows the alignment process.
\section{Evaluating safety contracts for UWB Geofencing}\label{sec:evaluating}
From SR2 we know that our system described above must provide a stop command to the AGV in sufficient time for it to stop before entering the danger zone. In this section we describe experiments conducted to determine the nature of the safety contract for our system by identifying the safety guarantees that it is possible to make for the system, as well as the conditions under which those guarantees hold.
\subsection{Experiment Design}
The purpose of our experiment is to determine if properties can be guaranteed by the fog/edge UWB system in the defined scenario and to identify what factors impact the ability of the UWB system to provide these guarantees. This will allow us to develop the safety contract for the UWB geofencing system. We will also use the results of the experiment to understand the dependencies on the UWB system components and their safety contracts.
\subsubsection{UWB Indoor Geofencing System}
As we are making our initial investigation of safety and developing safety assurance, the entire real-time location system was implemented and evaluated in the University of York Robotics Lab~\cite{yrl}, a purpose-built research facility.
Sewio provided detailed instructions on how to install the system correctly. We ensured we followed these, placing the anchors in a square configuration, all at around 2.5m off the ground, to ensure consistent and reliable coverage, using Sewio's software and charts to optimise the signal strengths and synchronisation stability between anchors. We ensured that the tags were at least 15cm above the floor at all times (by placing them on a cardboard box on top of our AGV) to ensure height compliance and to keep them away from metal objects on the AGV which can cause interference. As stated earlier, Sewio tags have a claimed positioning accuracy of $\pm$30cm, so we set the virtual stop line 30cm back from our physical stop line, giving a buffer zone (shown in Fig. \ref{fig:robotDanger}) where the AGV should stop with 98\% confidence.
In section \ref{sec:eval}, we analyse this setup for accuracy and latency. Sedlacek et al. \cite{Sedlacek2016} found that Sewio tags had a median positioning error of 39.6cm at various locations, including those with diminished anchor coverage. The errors varied with location. A tag placed in the centre of the coverage area had a median positioning error of 23.0cm, but one near the corner with line-of-sight to fewer anchors had a median error of 73.8cm \cite{Sedlacek2016}. In our future work, we will analyse varying the size of the buffer zone allocated to each AGV according to that AGV's specifications (velocity, size, tag transmission rate etc.). Here, we focus on developing safety assurance, so we mandate that all AGVs have an identical setup; this lets us investigate which specifications ensure the AGV stops in time (and which cannot), and thus what guarantees we can make.
\subsubsection{Safety Controller}\label{sec:postFilter} We need to detect encroachments using the AGV tag's location and issue a safety alert to the AGV so it stops. For this, we used a MS Windows laptop connected to the Sewio wi-fi network. It runs the Sewio VirtualBox VM image \cite{sewio} and provides a bridge between:\\ (tags+anchors) $\leftrightarrow$ (Cisco router) $\leftrightarrow$ (Java API in the VM).
We set up one geofenced zone (as shown in Fig. \ref{fig:robotDanger}) via Sewio's software. Our Java Safety Controller (section \ref{sec:geofenceImpl}) analyses tag and geofence information extracted using Sewio's API \cite{sewioAPI}. It provides command-line alerts to the user and issues ``Stop'' commands to the AGV if it encroaches. The communication from laptop to AGV uses plink~\cite{plink}, a command-line tool that executes remote commands on the AGV (Linux) from the laptop (MS Windows) via a non-interactive \emph{ssh} session. This ensures minimal processing overhead so we can pinpoint delays more easily.
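For illustration, the controller's core loop can be sketched as follows (in Python; the actual controller is Java against the Sewio API). The position getter, the encroachment test, and the AGV hostname, password and stop command are placeholders, not Sewio or YRL API calls.
\begin{verbatim}
import subprocess

STOP_CMD = ['plink', '-ssh', 'pi@agv-r1', '-pw', 'secret',
            'python3 /home/pi/yrl/stop_motors.py']   # placeholders

def monitor(zone, tag_id, buffer_m=0.30):
    while True:
        x, y = get_tag_position(tag_id)     # hypothetical API wrapper
        if encroaches(zone, (x, y), buffer_m):
            print('ALERT: tag %s crossed the virtual stop line' % tag_id)
            subprocess.run(STOP_CMD, check=True)  # non-interactive ssh
            break
\end{verbatim}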
\subsubsection{ Control and Communication with AGV and Machine}
Our evaluation used two purpose-built AGVs based upon the York Robotics Kit hardware~\cite{yrlOcto}: \textbf{R1}, controlled by a Raspberry Pi 3B+, and \textbf{R2}, controlled by a Raspberry Pi 3A. They have different speeds of travel: R1 defaults to 0.093 m/s and R2 to 0.277 m/s, i.e., R2 is 3x faster than R1. This provides the opportunity for evaluating different speeds and their effects. Using COTS AGVs ensures we have total control over the AGVs (full software access). COTS AGVs are also more accessible: many commercial AGVs are out of the price range of all but high-end factories. We aim to demonstrate that we can assure safety across the range from COTS to high-end commercial AGVs. We start here by analysing and ensuring the safety of COTS AGVs.
The AGVs run York Robotics Lab Python control software to control the motor speeds \cite{yrlPy}. The Java Safety Controller on the laptop uses plink to call the Python running under Linux on the AGV. Each robot travels on a variety of (randomly chosen) trajectories towards the protected zone to thoroughly analyse the stopping conditions as shown by the blue dotted arrows in Fig. \ref{fig:robotDanger}.
\subsection{Experimental Results}\label{sec:eval}
To analyse the safety guarantees of the geofencing, we set the tags to default settings: transmission every 100ms with medium signal strength to ensure battery life without compromising signal quality. We ran 2 evaluations.
\subsubsection{Non-Line-of-Sight (NLOS) test}\label{sec:NLOS}
We investigated NLOS detection by running R1, with a tag mounted on top, into a tunnel constructed from three (thick-walled) cardboard boxes. The tunnel was 39cm wide, 31cm high, 43cm deep and placed straddling the virtual stop line. The tag functioned normally while the AGV travelled through the tunnel, and the AGV stopped in the same range (within the box plots in Fig. \ref{fig:stopDists}) as when no tunnel was present.
\subsubsection{AGV Configurations test}\label{sec:configTest}
\begin{table}
\begin{center}
\caption{Table listing the AGV test configurations and whether the AGV stopped safely outside the danger zone. Tag transmits every $n$ milliseconds (Tag Period). $v$ is max speed of R2, $v/3$ is max speed of R1 (and R2 running at 1/3 speed). Server indicates if the Python data server was running. Rand indicates whether a random offset is added to the gaps between tag transmissions.}
\label{table:1}\begin{tabular}{ c|c|c|c|c|c|c}
\hline
Test ID & AGV & Tag Period & Speed & Server & Rand & Stopped?\\
\hline
T1 & R1 & 100 & v/3 & N & Y & yes \\
T2 & R2 & 100 & v & Y & Y & no\\
T3 & R2 & 100 & v/3 & Y & Y & no\\
T4 & R2 & 100 & v & N & Y & no\\
T5 & R2 & 100 & v/3 & N & Y & yes\\
T6 & R2 & 200 & v & N & Y & no \\
T7 & R2 & 200 & v/3 & N & Y & no\\
T8 & R2 & 200 & v & N & N & no\\
T9 & R2 & 200 & v/3 & N & N & no (encroached once)\\
\hline
\end{tabular}
\end{center}
\end{table}
We drove the AGVs towards the danger zone from a range of angles (see Fig. \ref{fig:robotDanger}). The AGV stops when the system detects an encroachment. We measured the distance from the front of the tag to the front of the physical stop line (black dashed line shown in Fig. \ref{fig:robotDanger}), repeating this process 50 times (over a number of days) for each of the nine AGV configurations given in Table \ref{table:1}. By varying the configurations, we evaluate the following criteria:
\begin{itemize}
\item AGV speed, where $v$ is the max speed of R2 and $v/3$ is the max speed of R1 (and R2 running at 1/3 speed).
\item processor / RAM. R1's Raspberry Pi has a Cortex-A53 (ARMv8) 64-bit @1.4GHz with
1GB LPDDR2 SDRAM while R2 has the same CPU with
512MB LPDDR2 SDRAM. Both run Linux (and Python).
\item effect of running processes: a Python data server running on the Raspberry Pi that periodically hogs the CPU.
\item tag transmission period (in ms).
\item regularity of tag transmission.
\end{itemize}
We list whether the AGV stops before the danger zone for each configuration in Table \ref{table:1} and create box and whisker plots of the AGVs' stopping distances in Fig. \ref{fig:stopDists} (the distance between the tag and the boundary of the danger zone, which should be $>$0cm). The box and whisker plots show the median, inter-quartile range (box), upper and lower extremes (whiskers) and outlier points (dots). To meet the safety guarantee, each box, whisker and dot must lie entirely above the y=0 line.
\begin{figure}[h!]
\centering
\includegraphics[width=9cm]{figures/figure2.png}
\caption{Box and Whisker Plots of the stopping distances (measured perpendicularly from the stop line with a tape measure) for test configurations (T1-T9) over 50 runs per configuration. 0cm is the danger zone. +30cm is the virtual stop line to give a stopping buffer. 60cm is the outer edge of the buffer. AGVs should stop before 0cm line on all runs.\label{fig:stopDists}}
\vspace*{-1.5mm}
\end{figure}
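For reproducibility, a minimal sketch of how the plots and the per-configuration safety check can be produced from the raw measurements is given below; the file names and data layout (one distance in cm per line) are assumptions, not the exact artefacts of our experiments.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# stopping_distances[i] holds the 50 measured distances (cm) for test Ti+1;
# positive values mean the tag stopped before the danger-zone boundary.
stopping_distances = [np.loadtxt(f"T{i}.txt") for i in range(1, 10)]

fig, ax = plt.subplots()
ax.boxplot(stopping_distances, labels=[f"T{i}" for i in range(1, 10)])
ax.axhline(0.0, color="k", linestyle="--")     # danger-zone boundary
ax.axhline(30.0, color="gray", linestyle=":")  # virtual stop line (buffer)
ax.set_ylabel("stopping distance (cm)")

# The guarantee holds for a configuration only if every run stopped
# outside the danger zone, i.e. the minimum over all runs is positive.
for i, d in enumerate(stopping_distances, start=1):
    print(f"T{i}: guarantee {'met' if d.min() > 0 else 'violated'}")
plt.show()
\end{verbatim}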
\subsection{Using Results to Define Safety Contracts for Geofencing System}
From the experiments we can identify the properties that can be guaranteed by the geofencing system along with the conditions under which those properties can be guaranteed. In our case study, with relation to safety requirement SR2 (see section \ref{sec:safetyReq}), the property of concern for the safety contract is the \textbf{stopping distance}. This requires the following guarantee: \textit{The AGV will stop no more than 30 cm beyond the defined virtual stop line (giving a buffer zone).} We need to analyse the results of our case study to determine the factors that guarantee this.
From the results in Table \ref{table:1} and Fig. \ref{fig:stopDists} we find that only T1 and T5 guaranteed this result over 50 runs. Analysing further, with 99\% confidence the population mean stopping distance (assuming a population of 1M runs) is between 31.3 and 42.2cm based on 50 samples for T1 (36.74 $\pm$ 5.46cm before the danger zone), and with 99\% confidence the population mean stopping distance is between 31.1 and 38.4cm based on 50 samples for T5 (34.76 $\pm$ 3.68cm before the danger zone). T1 uses AGV R1; and T5 uses R2 with the same configuration as R1 had for T1. This provides a preliminary analysis of safety guarantees for geolocation of AGVs.
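The confidence intervals above follow from the standard Student $t$-interval for a sample mean; a minimal sketch (the raw samples themselves are not reproduced here) is:
\begin{verbatim}
import numpy as np
from scipy import stats

def mean_ci(samples, confidence=0.99):
    """Two-sided confidence interval for the population mean."""
    n = len(samples)                         # 50 runs per configuration
    m = samples.mean()
    s = samples.std(ddof=1)                  # sample standard deviation
    half = stats.t.ppf((1 + confidence) / 2, df=n - 1) * s / np.sqrt(n)
    return m - half, m + half

# For T1 this evaluates to roughly (31.3, 42.2) cm on the measured data.
\end{verbatim}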
If we want a confidence level of $99\% \pm 1\%$ (i.e., $>$98\%) then we would need to analyse each configuration over 16369 runs to provide true safety guarantees for a safety-critical task or environment.
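A run count of this order can be derived, for example, from Cochran's sample-size formula with a $\pm$1\% margin at 99\% confidence and a finite-population correction. We sketch this below as an assumption about how such a figure may be obtained, not as the exact calculation used:
\begin{verbatim}
from scipy import stats

def required_runs(confidence=0.99, margin=0.01,
                  population=1_000_000, p=0.5):
    """Cochran's formula, worst case p = 0.5, with finite-population
    correction; yields a figure of the order of 16000 runs."""
    z = stats.norm.ppf((1 + confidence) / 2)
    n0 = z**2 * p * (1 - p) / margin**2
    return n0 / (1 + (n0 - 1) / population)

print(round(required_runs()))
\end{verbatim}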
Even over 50 runs, we can identify factors of the AGVs that would fail to guarantee safety. T9 encroached once into the danger zone (the lower extreme of the box and whisker is below the stop line in Fig. \ref{fig:stopDists}) but this fails the safety guarantee nonetheless. With 99\% confidence the population mean of T9 is between 20.6 and 30.1cm, based on 50 samples - this fails SR2. All other test configurations encroach multiple times. The variables that were seen to affect these results were: AGV velocities, running processes on AGVs, tag transmission periods (gaps) and tag transmission regularity. In particular, having a server running on the AGV caused extreme outlier values in stopping distances and large encroachments as did a combination of faster AGV speed and slower tag transmission rates. Switching off random offset on the tag transmission gap also caused large encroachments on the faster AGV. NLOS operation did not affect the AGV stopping distance in our analyses though other authors found that NLOS did adversely affect geolocation \cite{contigiani2018,zandian2016performance}. These observations allow us to identify the conditions for safety assurance and plan our future work such as dynamic buffer zones (see section \ref{sec:future}) related to AGV velocity.
Based on our case study we can therefore identify the following conditions upon which the guarantee relies (a sketch of how these conditions might be checked in software follows the list):
\begin{itemize}
\item AGV speed does not exceed 0.093 m/s
\item Control command software process has schedule priority on AGV Raspberry Pi
\item Tag transmission gap does not exceed 100 milliseconds (i.e., tags transmit at least every 100ms)
\item There is a random offset between tag transmissions
\end{itemize}
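These conditions can be checked mechanically before an AGV is released into operation. The following sketch illustrates one way of doing so; the configuration record and its field names are our own invention:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class AGVConfig:                 # hypothetical configuration record
    speed_ms: float              # commanded AGV speed (m/s)
    control_has_priority: bool   # control process has schedule priority
    tag_period_ms: int           # gap between tag transmissions (ms)
    random_offset: bool          # random offset added to the gaps

def guarantee_holds(cfg):
    """True iff all conditions of the stopping-distance guarantee hold
    (the values below are those established by tests T1 and T5)."""
    return (cfg.speed_ms <= 0.093
            and cfg.control_has_priority
            and cfg.tag_period_ms <= 100
            and cfg.random_offset)
\end{verbatim}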
In fact, by considering geofencing as an end-to-end service, we can see that it consists of a number of stages. Fig. \ref{fig:stages} illustrates the stages required for AGV control using geofencing. A safety contract could be specified for any of these stages, defining the individual properties that can be guaranteed, and the conditions under which they hold. This illustrates why a modular safety approach is required; properties cannot be guaranteed for geofencing unless properties of the individual components (wi-fi, comms, tags, anchors, processors) can also be guaranteed. For example, the tags must guarantee with sufficient confidence that 100 millisecond transmission gaps can be provided. The conditions that are required for this must be specified as part of the contract for the tag. These conditions in turn must be demonstrated to hold.

Adopting a modular safety approach also supports changes to the system and the configuration of the factory. The explicit safety contract for each system element defines the bounds within which that element must perform. When changes are made to any individual element, as long as the element can still be shown to satisfy the safety contract, we know that the overall safety requirement can still be met without the need to re-evaluate the whole system. This would allow, for example, different tags with different performance to be used, so long as the guarantees specified by the contract are still met.
\begin{figure}[h]
\centering
\includegraphics[height=8.5cm]{figures/figure3.png}
\caption{Stages for geofencing control. \label{fig:stages}}
\end{figure}
\section{Factory AGV Geofencing Scenario}\label{sec:scenario}
Our scenario considers a highly automated and flexible factory that includes AGVs that move inventory and equipment around a factory. Due to the dynamic reconfiguration that may occur in the factory, the AGVs cannot simply follow fixed routes around the factory, but must instead be capable of planning their own paths through the factory in order to fulfill their task. The AGVs must determine the best path to safely perform a task without colliding with objects in the factory environment (such as equipment, machinery, humans or other AGVs), or entering restricted areas (such as areas of high human occupancy). Achieving this requires sensing, perception and path planning technology on the AGVs. There are three techniques that combine to achieve this:
1. Localisation: (see Hodge \cite{hodge2022} and Zafari et al. \cite{zafari2019survey} for surveys of techniques). This harnesses the ubiquitous connectivity of Internet of Things (IoT) to localize and position IoT devices (tags). This is suitable for dynamic factories to set up safe areas with restricted access (geofencing), e.g., areas under maintenance, where humans operate or where the AGV would be unsafe, e.g. a spillage. It gives a global overview of the AGV and its navigation environment but cannot locate obstacles or other collision points (local view).
2. Local motion planning: Using object detection/recognition \cite{Hodge2020}. This requires AGV-mounted sensors such as IR proximity sensors, LIDAR, or cameras. This navigation is suitable for detecting dense objects such as relocatable machinery, boxes, other AGVs, vehicles. It generates a detailed and high fidelity map of the AGV's immediate surroundings. It provides a fine-grained local view but not a global overview of the navigation so the AGV cannot avoid danger zones or areas where humans are working.
3. Global path planning: Generally uses a map which generates a fixed overview and may require prior knowledge of the layout. It is ideal for large and immovable objects. The map can be a) pre-defined and fixed such as a floorplan or CAD drawing; or b) constructed by the AGV itself using SLAM ``to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location'' \cite{dissanayake2001solution}. This is more dynamic but is slow and compute-intensive. In \cite{Hodge2020} we developed a mapless navigation algorithm that uses only data from on-board sensors (local data) to navigate; and analysed its safety.
A combined navigation approach would overcome the weaknesses in each technique and achieve both a detailed local view and an overall view of the navigation. It helps ensure no single point of failure. In \cite{Hodge2020} we developed and assured mapless (local) path planning. As a further step towards achieving a combined approach, we focus here on ensuring the safety of technique 1) above: localisation using fog/edge geofencing to create an overall view, providing protection for areas of the factory where humans work and stopping AGV encroachment into danger zones. Geofenced areas can be created and removed when needed (e.g. during emergency repairs or where humans are working) and are more flexible than maps, as comparing an AGV's location against a pre-defined map cannot cope with dynamic layout changes. The geofencing can be used to provide an emergency stop capability for the AGV.
\begin{figure}
\centering
\includegraphics[width=6.5cm]{figures/figure1.png}
\caption{Our scenario is 8m x 8m x 2.5m high with four geofencing anchors in a square configuration (master anchor bottom right). The geofenced zone is the danger zone (bottom right) plus 30cm buffer zone where the AGV stops.\label{fig:robotDanger}}
\vspace*{-1.5mm}
\end{figure}
The core requirements of the geofencing are:
\begin{enumerate}
\item The AGV should STOP before entering the danger zone.
\item If the AGV enters the danger zone, then it should be immediately stopped.
\item The AGV should navigate away from the buffer zone.
\item Accurate geolocation of tags placed on the AGV.
\item The system should detect possible failures of hardware, software and comms and ensure ``Safe Failure''. If this affects an AGV then that AGV should stop. If it affects all AGVs (network outage) then they should all stop.
\end{enumerate}
Criteria 3 and 5 are not considered in our initial work here as they relate to the AGV's path-planning abilities rather than the geofencing. Fig. \ref{fig:robotDanger} shows a schematic of our UWB geofenced laboratory where AGVs must not encroach into the danger zone and must obey safety requirements.
\subsection{Safety Requirements}\label{sec:safetyReq}
By considering the hazards associated with the scenario described above the following high-level safety requirements can be identified for the AGV system:
\begin{itemize}
\item \textbf{SR1} – AGV shall not plan a path that results in collision with an object in the factory
\item \textbf{SR2} – AGV shall stop before entering an unsafe zone (danger zone in Fig. \ref{fig:robotDanger}) or colliding with an object
\end{itemize}
SR1 says that when planning a path in order to achieve a task within the factory (such as moving parts from one location to another) the AGV should ensure that its path does not result in a collision. This can be achieved using information from sensors and object detection to locate objects \cite{Hodge2020}. SR2 says that even if an unsafe path is planned, the AGV should not enter a danger zone or collide with an object. This requirement can be considered as an ``emergency stop'' or ``collision avoidance'' facility, which provides extra safety assurance.
This second safety requirement is the focus in this paper. We focus on an ``emergency stop'' with geofencing. This allows the AGV to avoid dangerous areas that a sensor-based collision avoidance system would not be able to `see'. Sensor-based collision avoidance is covered extensively in the literature, e.g., \cite{guiochet2017safety}. We consider how geofencing implemented using a real-time indoor location system (RTLS) can provide assurance against SR2.
We analyse how modular safety criteria can be developed. This entails determining safety contracts for each of the components of the RTLS, and generating evidence to support the satisfaction of the safety contracts. This involves the following steps:
\begin{itemize}
\item identifying the properties that can be guaranteed for the components
\item generating evidence to support these properties
\item understanding the conditions under which the properties hold
\item identifying failure conditions for the components
\end{itemize}
\subsection{Geofencing Implementation}\label{sec:geofenceImpl}
One emerging RF technology for accurate localisation is ultra-wideband (UWB) ranging technology, which uses tags and anchors for geolocation. We provide a literature review of UWB geofencing in section \ref{sec:related}. UWB devices transmit ultra-short pulses over a large bandwidth ($>$500MHz), with a frequency range between 3.1 and 10.6GHz, and a very low duty cycle. This reduces power consumption \cite{zafari2019survey}. UWB is robust to interference from other signals \cite{geng2005multipath}.
Our experimental setting is based on Sewio's Indoor UWB RTLS Kit \cite{sewio}, which we analyse for AGV geofencing against our safety requirements, determining what factors affect its ability to meet those requirements.
The kit is widely used in industry, can support up to 5000 tags and has a reported positioning accuracy of $\pm$30cm (98\% of positions have an error $<$30 cm). Hulka et al. \cite{hulka2020accuracy} found the Sewio RTLS UWB tracking system accurate and reliable for tracking basketball players. Contigiani et al. \cite{contigiani2018} compared the Sewio and Openrtls UWB tracking systems and found similar positioning accuracy, but Sewio had superior battery life due to its energy saving. However, they found positioning accuracy degraded when used in an occluded environment due to NLOS errors. Hence, we evaluate NLOS tracking in section \ref{sec:eval}.
Our RTLS geofencing system has five modules: \textbf{1) Tags} mounted on the AGVs and powered by rechargeable Li-Ion batteries. The tags transmit data packets periodically over Decawave UWB radio to data receivers (anchors). The tags do not communicate peer-to-peer. \textbf{2) Anchors} mounted on brackets in fixed positions (see Fig. \ref{fig:robotDanger}), which are reference devices with a known position. One anchor is designated the ``master''. The system uses triangulation and the precise measurement of the time difference between a signal (blink) arriving from a tag at a set of anchors to calculate the tag's exact location. To ensure high accuracy, the anchors need to be synchronized very precisely, which is done periodically from the master anchor. \textbf{3) Location Engine} software that collects the tag and anchor data and calculates the absolute tag positions. \textbf{4) Safety Controller}. We developed this in Java using Sewio's Java API \cite{sewioAPI} to calculate the relative distance between tags and geofenced zones. If a tag encroaches into a zone, the controller generates an ``alert'' and sends a ``STOP'' signal (see section \ref{sec:postFilter}). Although we only analyse two AGVs here, having the controller on a laptop (or laptops) connected to the master anchor via a wi-fi router allows us to generate an overall picture of a factory. This controls the AGVs and factory safety and can also incorporate global maps and map-based control. If we had a safety controller on each AGV then we would need to update all AGV controllers every time there was a factory layout change and would have difficulty controlling the AGVs as a group. \textbf{5) User Interface}, currently a simple command-line interface that runs in conjunction with the safety controller and is designed to display tag locations and geofenced zone alerts.
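To make the controller's core check concrete, the following is a minimal sketch of the encroachment test for a rectangular zone. The real controller is written in Java against Sewio's API, so this Python version (with invented names) only illustrates the geometry:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Zone:          # axis-aligned rectangular geofenced zone (metres)
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def encroaches(tag_x, tag_y, zone, buffer_m=0.30):
    """True if the tag has crossed the virtual stop line, i.e. it lies
    inside the zone grown by the 30cm buffer on every side."""
    return (zone.x_min - buffer_m <= tag_x <= zone.x_max + buffer_m and
            zone.y_min - buffer_m <= tag_y <= zone.y_max + buffer_m)

# On each position update from the Location Engine:
#     if encroaches(x, y, danger_zone):
#         send_stop(agv_host)   # issue a "STOP" command via plink
\end{verbatim}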
\section{Conclusion and Future Work}\label{sec:future}
This paper introduces a case study for evaluating the safety assurance of AGVs in dynamic factories. We analyse the accuracy of geolocating AGVs using fog/edge computing and whether an AGV can stop outside a virtual danger zone set up using a commercial UWB tracking system to protect the area from encroachment. In particular we investigated safety requirement SR2 from section \ref{sec:safetyReq}: \textit{AGV shall stop before entering an unsafe zone (danger zone) or colliding with an object}. We found that the safety contract conditions rely on guarantees made by different components of the UWB system (tags, anchors, AGV speed and on-board processors which affect AGV response time) and fog/edge (wi-fi) capabilities - these components will all require safety contracts. Additionally, AGV geofencing is an end-to-end service that requires a number of stages. Each stage could have a separate contract defining the properties that can be guaranteed and the conditions under which they hold, supporting a modular approach to safety assurance.
This study is a proof-of-concept. Further work will analyse the geofencing and guarantees over more AGV runs and extend the analyses to navigating the AGV away from the danger zone and detecting failures of software and comms. We will perform load testing to investigate how many AGVs can be safely controlled in conjunction with how many safety controller laptops are necessary for a given number of AGVs. We can further investigate the fog/edge network and its effects on stopping distances. How do we certify the network? What limits do we need to impose? How can we overcome network errors and still ensure safety?
Future work will develop a set of contract templates and establish a set of heuristics for guaranteeing that the contracts can be met (e.g., min/max AGV speed, tag transmission rate). We will evaluate a range of AGVs (COTS to commercial AGVs) to establish these parameters and also investigate using a variable-depth buffer zone for different AGVs. The radius of the buffer will vary to reflect how the guaranteed stopping distance changes with different AGVs and according to the current state of an AGV (its speed, tag transmission gap, wi-fi network throughput, network delay, etc.).
Further ahead, we can combine the UWB geolocation with global path planning using maps and algorithms such as A* and mapless path planning using sensor data only~\cite{Hodge2020}.
\section{Introduction}
As manufacturing becomes increasingly automated and requires increased flexibility, there is a move to more dynamic factories. This necessitates dynamic supply chain solutions where factories manufacture a broad range of products using frequent reconfiguration of the factory layout to optimise production, efficiency and costs. This dynamism brings particular challenges when attempting to provide assurance for the safety of operation prior to deployment.
Dynamic factory operations require increased digitisation to provide increased flexibility and automation. The implementation of this often requires extensive use of commercial off the shelf (COTS) systems and components, which provide accessible and competitively priced capabilities, but often lack the rigorous certification of components developed specifically for safety-related tasks \cite{jaradat2017challenges}. Dynamic factories are also inherently interconnected, adding to the complexity of the safety analysis and assurance task. It is crucial that these safety challenges are addressed, particularly for autonomous applications \cite{burton2020mind}. It is only if the safety of dynamic factories can be assured that they can be fully embraced. It is also vital that safety assurance is achieved in a manner that does not unnecessarily constrain the systems and thus negate the benefits of flexibility and automation.
This paper analyses our approach to safety assurance for dynamic factories using a case study. We aim to achieve a balance between safety assurance and viability (cost and efficiency). For this, we consider factory operations that use autonomous guided vehicles (AGVs) to move products and components around in an efficient manner. Unlike AGVs in traditional factories, the AGV operation is not constrained to fixed routes, and cannot rely on a fixed infrastructure or environment. In addition, AGVs will often have to operate in collaborating teams and in cooperation with humans. To assure factory safety, it must be proven that AGVs can navigate safely by not colliding with other objects or humans present in the factory and by keeping away from potentially changing hazardous areas (danger zones). Geofencing is a commonly used location-based approach which we analyse to control the risk in hazardous areas of a factory. Factories are indoor environments, so geofencing requires an indoor localisation capability, such as through the use of Ultra-Wide Band (UWB) IoT technology. Shule et al. \cite{shule2020uwb} state that ``\emph{UWB has the potential to become a standard technology for relative positioning and ranging in multi-robot systems, having been applied to a
wide variety of scenarios in recent years}''.
We undertake an experimental evaluation in a laboratory setting using AGVs that must not encroach into a danger zone. Our platform uses commercial hardware for UWB geofencing and an AGV built from COTS hardware, plus PC software we developed to analyse the data and communicate with the AGV. Commercial AGVs are expensive, so using COTS hardware for the AGV is more economically viable if AGVs are to be used more widely. It also allows us extensive control and enables a more thorough analysis of a number of aspects of AGVs and how they affect safety constraints.
Our main contributions are: we run a lab-based case study of AGV operations making use of commercial IoT geofencing technology and relate our experimental results to the specification of safety contracts. From these evaluations, we consider a modular safety assurance approach for dynamic factories.
The rest of the paper is organised as follows: section \ref{sec:safetyAssurance} provides background for our overall approach that we adopt to safety assurance of dynamic factories. We introduce a scenario that is the subject of our experimental study and describe the safety requirements that we derived in section \ref{sec:scenario}. Section \ref{sec:evaluating} describes our experimental design and evaluates the results. Section \ref{sec:related} considers related indoor localisation work and section \ref{sec:future} details future work.
\section*{Acknowledgement}
The work is funded by the Swedish Foundation for Strategic Research
under the project ``Future factories in the cloud (FiC)'' and the Assuring Autonomy International Programme (www.york.ac.uk/assuring-autonomy).
\bibliographystyle{ieeetr}
\section{Related Geolocation Work}\label{sec:related}
Other authors have analysed various aspects of UWB geolocation. Zandian and Witkowski use similar performance analyses to ours but they built their own UWB system rather than using a commercially available system \cite{zandian2016performance}. Segura et al. \cite{segura2010experimental} also built a real-time AGV navigation system for indoor
environments using COTS
components. We want to verify industry-standard UWB kit and can incorporate industry standard AGVs as we develop our safety assurance framework and safety-assured system.
For the initial analyses here, it is much easier to control, assess and verify our COTS AGVs. Ruiz and Granja \cite{ruiz2017comparing} compared three commercial UWB systems for accuracy in an experimental evaluation and found a range of performances, with Decawave outperforming BeSpoon, which in turn outperformed UbiSense. The Sewio kit we analysed uses Decawave hardware.
Martinkovi{\v{c}} et al. \cite{martinkovivc2019use} investigate a real-time location system for hybrid assembly environments. They analyse how UWB tracking can be incorporated in manufacturing where humans and AGVs cooperate closely. Similarly, Park et al. \cite{park2016bim} combine UWB tracking with dead reckoning and a path planner for safe AGV navigation in indoor construction sites.
Ridolfi et al. \cite{ridolfi2018analysis} look at the scalability of indoor UWB tracking systems and how to maximise the number of tags that can be tracked. Krishnan et al. \cite{krishnan2007uwb} focus on optimising the network setup for best localisation accuracy. Ieni \cite{ieni2018realization} analyses the performance of indoor UWB tracking by evaluating different localization algorithms. These systems provide practical localisation results for analysis but do not provide safety-assessments or safety guarantees.
{\v{C}}ernohorsk{\`y}
et al. \cite{cernohorsky2018real} investigate UWB tracking in confined spaces (e.g., corridors) and provide suggestions for overcoming issues with interference and accuracy. Zandian et al. \cite{zandian2016performance} compare the performance of LOS vs NLOS UWB tracking. They find a slight degradation in positioning accuracy with NLOS compared to LOS geolocation. These authors' findings will need to be considered in our future work on developing safety contracts for dynamic factories.
Other uses of UWB tracking include tracking sports players such as basketball players \cite{hulka2020accuracy} for player movement analysis, tracking shoppers inside retail stores to analyse their behaviour (movements) \cite{contigiani2018} \cite{gabellini2019large}, tracking players in large-scale augmented reality (AR) systems \cite{cirulis2019}, incorporating tracking tags in AR headsets to geolocate users \cite{cyrus2019hololens} or tracking indoor drones in warehouses \cite{shule2020uwb,macoir2019uwb,tiemann2015design}.
\section{Safety Assurance of Dynamic Factories}\label{sec:safetyAssurance}
It is crucial that the safety risk associated with factory operations is sufficiently analysed, controlled and monitored \cite{jaradat2017challenges}. Many of the same characteristics that make dynamic factories desirable from a technical perspective also pose significant safety assurance challenges. Dynamic factories are designed to be flexible and easily reconfigured. This makes safety analysis, which is traditionally conducted prior to operation, a challenge since the state of the system is hard to predict in advance \cite{javed2020towards}. Control in dynamic factories is also often widely distributed, with limited centralised control. This can make the task of determining and enforcing safety requirements more challenging. Compounding this, factory operators will often have little control over the design and evolution of the commercial systems and components that are used. This can significantly weaken safety assurance due to a high degree of uncertainty about the actual performance or behaviour of these commercial components.
Our approach to safety assurance is modular and based on the use of safety contracts \cite{fenn2007safety} between elements of the factory. The German automation technology supplier ``PILZ'' \cite{pilz} places emphasis on the necessary modular certification of the individual factory devices (PILZ uses the term Safety 4.0 to indicate modular safety solutions). Most of the literature, however, focuses on dependability in general, with little focus on safety, which is challenging in dynamic environments \cite{javed2020towards}.
\subsection{Safety Cases for Dynamic Factories}
In many safety-related domains, such as railways or nuclear, it is common practice to develop an explicit safety case for systems. The safety case sets out the argument as to why a system is considered safe to operate in a particular environment, and provides evidence to support the argument (such as results from analysis or testing) \cite{kelly2004goal}. Safety cases are particularly used for novel systems where there is limited established best practice for assuring safety, where the explanation for why the system is safe must be clearly communicated \cite{sujan2016should}. Given the novelty of dynamic factories, we propose that an explicit safety case should be provided.
Making factories more dynamic often requires a shift from standalone systems to edge or fog networks of devices and services performing, cooperatively, a number of functionalities \cite{kagermann13}. Creation of a safety case will therefore generally need to be cooperative in the sense that a safety case for a dynamic factory cannot be built by a single stakeholder or organisation. The supplier organisations (e.g. the supplier of a smart sensor used in the factory) have the best knowledge of the properties and characteristics of their components and should define a set of safety guarantees for different usages. However, there is a limit to what the supplier organisation is able to provide out-of-context from the particular factory or operation; remember that such components are often general-purpose commercial products that may be used in many different applications. The suppliers can provide assurance for the component (e.g. through the use of UL/CE certification \cite{kostolani2019effective}), but can say little about the safety assurance of the factory operation as a whole as they do not have the necessary knowledge. It is the responsibility therefore of the system integrator to assess whether the overall safety of the system can be demonstrated by the integration of the components. In particular, the integrator must identify, through safety analysis, the hazards posed by factory operation and determine how the components may contribute to hazards (this could for example be done through considering deviations on the functionality of the components and their interactions).
As a result of this delegation of safety responsibility to different stakeholders, we determined that a modular safety case approach should be adopted \cite{kelly2001concepts}. In a modular safety case, the overall safety case for the dynamic factory is split into separate modules of argument and evidence relating to different system components.
\subsection{Safety Contracts}
The safety case modules can be linked together using safety contracts in order to create a coherent case for the overall system. Such a modular approach requires that the safety contract for each component in the factory be determined and explicitly defined. This contract defines the set of properties that the component is able to assure and a definition of potential failure behaviour of the component. In order to be usable as part of the integrated safety case for the factory, we propose that each of the identified properties should be defined with the following assume-guarantee reasoning form:
\begin{quote}
\emph{if \{condition\} then \{component\} shall provide \{property\} with confidence of \{confidence\}}
\end{quote}
The \emph{condition} and \emph{property} represent, respectively, the assumptions and guarantees of an assume-guarantee contract \cite{sangiovanni12}. In particular, the \emph{condition} and \emph{confidence} of this assume-guarantee contract specification are crucial to our approach; for any component there exist limitations on the circumstances under which its behaviour is guaranteed, and these must be clearly understood and specified. Understanding and expressing these limitations is particularly important in systems that make use of COTS components, where the confidence may be quite low. The conditions that affect a guarantee could be diverse, depending on the nature of the guarantee and the component. Conditions may relate to the state of the operating environment, such as the effects of lighting conditions on sensor performance. They may be conditions on other system components, such as the availability of electrical power. Conditions may also include operational constraints on the way a system or component is installed or operated.
As an example, an assurance contract for a pressure sensor may include:
\begin{quote}
\emph{If temperature is greater than -20$^{\circ}$C then pressure sensor shall provide air pressure value with accuracy of 0.1\% with confidence of 99\%.}
\end{quote}
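Such a contract lends itself to a simple machine-readable representation; the following encoding is our own illustration, not a standard format:
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyContract:    # assume-guarantee contract for one component
    component: str       # e.g. "pressure sensor"
    condition: str       # assumption under which the guarantee holds
    guarantee: str       # the guaranteed property
    confidence: float    # e.g. 0.99

example = SafetyContract(
    component="pressure sensor",
    condition="temperature > -20 C",
    guarantee="air pressure value with accuracy of 0.1%",
    confidence=0.99)
\end{verbatim}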
The construction of such a safety contract requires evidence for the accuracy of the pressure sensor, for example evidence from testing of the sensors and in-service data for the sensor performance in operation. The more evidence that is available, the higher the confidence in that performance will be. This evidence would be included as part of the safety case module for the pressure sensor. The safety contract also reflects the knowledge that the sensor providers have that the sensors do not perform as well at very low temperatures. This places the limitation condition on temperature defined in the contract. Using the safety contract, the integrator will be able to assess whether a pressure sensor meeting this safety contract will be sufficient to meet the safety requirements identified for the overall system.
In this paper we analyse the safety requirements for an experimental scenario involving the use of geofencing technology in the operation of an AGV in a factory. The results highlight the need for the development of a modular safety case and help us determine the required safety contracts for key elements of the system.
\label{Sec:Intro}
The simplest model for describing a spherical star in equilibrium is the well-known Lane-Emden equation (see Ref.~\cite{chandrasekhar1957introduction} and references therein)
\begin{equation}
\frac{1}{x^2}\frac{d}{dx}\left( x^2 \frac{d\Theta}{dx}\right) + \Theta^N = 0,
\label{Eq:LaneEmden}
\end{equation}
where $x$ represents a dimensionless radius, $\Theta^N$ is proportional to the mass density $\rho$ and $N$ is the polytropic index characterizing the equation of state of the matter. This model is based on the assumption of a static and spherically symmetric Newtonian perfect fluid with a polytropic equation of state in which the pressure $p$ is related to the density through the relation $p(\rho) = K\rho^{\gamma}$, $K$ being a constant and $\gamma = 1 + \frac{1}{N}$ the adiabatic index. Under these assumptions, Eq.~(\ref{Eq:LaneEmden}) easily follows from the condition of hydrostatic equilibrium and the Poisson equation for the gravitational potential, and it yields a simple and successful model that is able to describe (in first approximation) most of the stars in the Universe and even other astrophysical objects like planets. For example, our Sun can be described in first approximation by the Lane-Emden equation~(\ref{Eq:LaneEmden}) with polytropic index $N = 3$ ($\gamma = 4/3$), while low-mass white-dwarfs stars can be described by Eq.~(\ref{Eq:LaneEmden}) with index $N = 3/2$ ($\gamma = 5/3$). Giant planets, like Jupiter and Saturn, can be approximated by $N = 1$ ($\gamma = 2$) while the solution with $N = 0$ ($\gamma = \infty$) corresponds to a constant density, incompressible sphere and therefore serves as a simple model for rocky planets~\cite{chandrasekhar1957introduction,kTrB-Book}.
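Although Eq.~(\ref{Eq:LaneEmden}) is singular at $x=0$, with the regular initial data $\Theta(0)=1$, $\Theta'(0)=0$ it is easily integrated numerically. The following minimal sketch starts slightly off the origin using the series expansion $\Theta \approx 1 - x^2/6$ and locates the first zero of $\Theta$ (the stellar surface), which for $N=3$ lies at $x_1 \approx 6.897$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N = 3.0                           # polytropic index (e.g. the Sun)

def rhs(x, y):
    theta, dtheta = y
    # Theta'' = -Theta^N - (2/x) Theta'; clip Theta at 0 so that the
    # right-hand side stays real past the surface.
    return [dtheta, -max(theta, 0.0)**N - 2.0*dtheta/x]

surface = lambda x, y: y[0]       # first zero of Theta
surface.terminal = True

x0 = 1e-6                         # series start: Theta ~ 1 - x^2/6
sol = solve_ivp(rhs, [x0, 20.0], [1.0 - x0**2/6.0, -x0/3.0],
                events=surface, rtol=1e-10, atol=1e-12)
print("first zero x1 =", sol.t_events[0][0])   # ~6.897 for N = 3
\end{verbatim}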
Although the Lane-Emden equation~(\ref{Eq:LaneEmden}) provides a simple model for most stars in our Universe, a more realistic description clearly requires additional physical ingredients, such as incorporating the effects of the rotation of the star, the presence of magnetic fields, radiation processes etc. Furthermore, if the star is very compact, then general relativistic effects become important. For a star of radius $R$ and mass $M$, the compactness is measured by the ratio $r_s/R$ where $r_s := 2G_N M/c^2$ is the Schwarzschild radius of the star (with $G_N$ and $c$ denoting, respectively, Newton's constant and the speed of light). More generally, the compactness ratio at radius $r$ is defined as $2m(r)/r$ with $m(r) := G_N M(r)/c^2$, where $M(r)$ denotes the mass contained in the sphere of radius $r$ centered at the origin. Relativistic corrections must be taken into account whenever this ratio ceases to be much smaller than one. This is the case for neutron stars or more exotic stars, like quark stars (see Refs.~\cite{Shapiro,Glendenning:1997wn} for textbooks treating these subjects).
In this article, we discuss the general relativistic generalization of the Lane-Emden equation, which is known as the Tolman-Oppenheimer-Volkoff (TOV) equation~\cite{rT39,jOgV39} and serves as a model for describing such compact stars, assuming they can still be modeled by a static and spherically symmetric perfect fluid. The TOV equation is obtained by replacing the Newtonian Euler-Poisson system by its relativistic generalization, the Euler-Einstein system of equations in which the self-gravity of the matter is described according to Einstein's theory of general relativity. This leads to generalizations of the hydrostatic equilibrium condition and Poisson's equations which correctly take into account the effects from general relativity and enhance the magnitude of the pressure gradient.
While a detailed mathematical analysis of the Lane-Emden equation~(\ref{Eq:LaneEmden}) has been known for a long time (see again~\cite{chandrasekhar1957introduction} and references therein and also~\cite{uS00} for the case of more general equations of state), a rigorous analysis of its relativistic counterpart has been completed only in more recent years. Pioneering work in this direction has started with the work by Rendall and Schmidt~\cite{Rendall_1991}, where it is shown that under certain assumptions on the equation of state, there exists for each value of the central density a unique global solution of the TOV equation in which the corresponding star either has a finite radius (the solution being the Schwarzschild solution in the exterior region) or has infinite radius with the energy density converging to zero as $r\to \infty$.\footnote{Stars with infinite radius are relevant as well, as long as their density decays sufficiently fast to zero when $r\to \infty$ such that their total mass is finite. In particular, this is the case for boson stars, where the perfect fluid source of matter is replaced by a massive scalar field, see Ref.~\cite{sLcP17} for a recent review. For a recent study regarding the asymptotic behavior of some perfect fluid star models with infinite extent, see Ref.~\cite{lAaB19}.} Some necessary and sufficient conditions on the equation of state yielding a star with finite radius are also given in~\cite{Rendall_1991}. A different proof for the existence of solutions describing a star with finite radius was given by Makino~\cite{makino1998}, under assumptions on the equation of state which are similar to the ones formulated in the next section of the present article, with the effective adiabatic index $\gamma$ being restricted to the range $4/3 < \gamma < 2$ for sufficiently small values of the density. The work in~\cite{makino1998} also discusses the radial linearized perturbations of the static solutions, showing that they lead to a self-adjoint operator with a purely discrete spectrum. For further work providing conditions on the equation of state which yield a spherical star of finite (or infinite) extent see Refs.~\cite{wS02,jH02}. In particular, the work by Simon~\cite{wS02} discusses the relation of these conditions with the uniqueness property of the static spherical stars among all possible static, asymptotically flat solutions of the Euler-Einstein equations. Other conditions that guarantee the finiteness of the star's radius have been presented by Ramming and Rein~\cite{Ramming_2013}. These conditions cover perfect fluid stars as well as self-gravitating collisionless gas configurations in both the Newtonian and relativistic regimes. For a general study of the relativistic spherically symmetric static perfect fluid models based on the theory of dynamical systems, see Ref.~\cite{Heinzle_2003}.
Coming back to the compactness ratio of the star (which determines when the relativistic effects are important), Buchdahl showed~\cite{Buchdahl:1959zz} that if the pressure is isotropic and the energy density does not increase outwards, then any static, spherically symmetric relativistic star must satisfy the inequality $r_s/R < 8/9$. This inequality was later generalized by Andr\'easson~\cite{Andreasson:2007ck}, who provides an $r$-independent bound on the compactness ratio $2m(r)/r$ under purely algebraic inequalities on the energy density and pressure, and hence removes the monotonicity assumption on the density profile.
The goal of this article is to provide a self-contained pedagogical review of the most important aspects of the TOV equation and its solutions. We start in section~\ref{Sec:TOV} with a systematic deduction of the TOV equation from the Euler-Einstein system of equations with a static, spherical ansatz, and we specify our assumptions on the equation of state. Moreover, in order to facilitate the mathematical analysis that follows, we rewrite the TOV equation in terms of dimensionless quantities. Next, in section~\ref{Sec:LocalExistence} we use the contraction mapping principle in order to prove the existence of a unique local solution for the dimensionless TOV equation near the center of symmetry $r = 0$. It should be noted that this step does not follow in a straightforward way from the standard results of the theory of ordinary differential equations, since the TOV equation is nonlinear and singular at $r = 0$. Next, in section~\ref{Sec:GlobalExistence} we prove that under the assumptions on the equation of state given in section~\ref{Sec:TOV} the local solution can be extended to either infinite radius or to a finite radius, and partly following~\cite{Ramming_2013} we prove that as long as the effective adiabatic index $\gamma$ is strictly larger than $4/3$ for small densities, the radius must be finite. Our proof also shows that the Buchdahl inequality $2m(r)/r < 8/9$ must hold for all values of the radius $r > 0$. A numerical example is analyzed in section~\ref{Sec:Numerical} and a summary and conclusions are presented in section~\ref{Sec:Conclusions}. This article also contains several appendices which provide technical details and some important examples. In appendix~\ref{App:Curvature} we give details on the computation of the Riemann, Einstein and Ricci tensors which are used to derive the TOV equation. In appendix~\ref{App:StatFis} we provide a derivation of the equation of state describing a relativistic, ideal classical monoatomic gas from purely statistical physics considerations and mention the corresponding results for a completely degenerate ideal Fermi gas. In appendix~\ref{App:ModifiedBessel} we discuss some important properties of the modified Bessel functions of the second kind which are needed in appendix~\ref{App:StatFis}. In the final appendix~\ref{App:Completeness} we prove the completeness of the function space $X_R$ which plays a fundamental role for the local existence proof in section~\ref{Sec:LocalExistence}.
In most of the article, we work in geometrized units, for which $G_N = c = 1$.
\section{Derivation of the TOV equation and assumptions on the equation of state}
\label{Sec:TOV}
In this section, we start with a review of the derivation of the TOV equation. Then, we state the precise assumptions on the equation of state on which the results in the subsequent sections are based on.
\subsection{Field equations and static, spherically symmetric ansatz}
The field equations describing a relativistic, self-gravitating perfect fluid configuration are given by the coupled system consisting of the $10$ independent components of Einstein's field equations,
\begin{equation}
\label{Eq:Einstein}
G_{\mu\nu} = \frac{8\pi G_N}{c^4}T_{\mu\nu},
\end{equation}
together with the $4$ relativistic Euler equations
\begin{equation}
\label{Eq:Euler}
\nabla^\mu T_{\mu\nu} = 0.
\end{equation}
Here and in the following, Greek indices $\mu,\nu,\ldots$ denote spacetime indices which run over $0,1,2,3$, $G_{\mu\nu}$ are the components of the Einstein tensor associated with the spacetime metric $g_{\mu\nu}$ (which is symmetric, i.e. $G_{\mu\nu} = G_{\nu\mu}$ and hence has $10$ independent components like the metric components $g_{\mu\nu}$), and $T_{\mu\nu} = T_{\nu\mu}$ are the components of the energy-momentum-stress tensor which describes the sources of energy and matter. For the perfect fluid case considered here,
\begin{equation}
\label{Eq:EMST}
T_{\mu\nu} = \frac{\varepsilon + p}{c^2} u_\mu u_\nu + p g_{\mu\nu},
\end{equation}
where $\varepsilon$, $p$ and $u^\mu = g^{\mu\nu} u_\nu$ refer, respectively, to the energy density, pressure and the components of the four-velocity of the fluid, normalized such that $u_\mu u^\mu = -c^2$. In terms of an orthonormal frame ${\bf e}_{\hat{0}},{\bf e}_{\hat{1}},{\bf e}_{\hat{2}},{\bf e}_{\hat{3}}$ of vector fields such that ${\bf e}_{\hat{0}} = c^{-1} u^\mu\partial_\mu$, the components of the energy-momentum-stress tensor are
\begin{equation}
\label{Eq:ComT}
(T_{\hat{\alpha}\hat{\beta}}) = \mbox{diag}(\varepsilon,p,p,p),
\end{equation}
and thus $\varepsilon$ and $p$ represent the energy density and pressure measured by an observer which is co-moving with the fluid (i.e. an observer whose world line is tangent to the four-velocity).
The Einstein tensor $G_{\mu\nu}$ is obtained from the Riemann curvature tensor $R^\alpha{}_{\beta\mu\nu}$ as follows:
\begin{equation}
G_{\mu\nu} = R_{\mu\nu} - \frac{R}{2}g_{\mu\nu},
\end{equation}
where $R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}$ are the components of the Ricci tensor and its trace $R = g^{\mu\nu}R_{\mu\nu}$ is the Ricci scalar. The components of the Riemann curvature tensor, in turn, are given by
\begin{equation}
\label{Eq:Riemann}
R^\mu{}_{\nu\alpha\beta} = \partial_\alpha\Gamma^\mu{}_{\beta\nu} + \Gamma^\sigma{}_{\beta\nu}\Gamma^\mu{}_{\alpha\sigma} - (\alpha \leftrightarrow \beta)
= -R^\mu{}_{\nu\beta\alpha},
\end{equation}
where $\Gamma^{\nu}{}_{\alpha\beta}$ denote the Christoffel symbols, which are determined by the components of the metric tensor and their first derivatives,
\begin{equation}
\label{Eq:Christoffel}
\Gamma^\nu{}_{\alpha\beta} = \frac{1}{2}g^{\nu\sigma}\left(\frac{\partial g_{\beta\sigma}}{\partial x^{\alpha}} + \frac{\partial g_{\alpha\sigma}}{\partial x^{\beta}} - \frac{\partial g_{\alpha\beta}}{\partial x^{\sigma}}\right).
\end{equation}
Due to the contracted Bianchi identities, $\nabla^\mu G_{\mu\nu} = 0$, Eq.~(\ref{Eq:Euler}) is a consequence of Einstein's field equations~(\ref{Eq:Einstein}), so in principle it is sufficient to solve Eq.~(\ref{Eq:Einstein}). However, as we will see, it is simpler to solve instead the relativistic Euler equations~(\ref{Eq:Euler}) together with part of the components of the Einstein equations.
For the remainder of this article, we focus on spherically symmetric and static configurations, in which the metric has the form
\begin{equation}
\label{Eq:MetricAnsatz}
ds^2 = g_{\mu\nu} dx^\mu dx^\nu
= -e^{\frac{2\Phi(r)}{c^2}}c^2dt^2 + e^{2\Psi(r)}dr^2
+ r^2(d\vartheta^2 + \sin^2\vartheta d\varphi^2),
\end{equation}
where $(x^\mu) = (t, r, \vartheta, \varphi)$ are spherical coordinates and $\Phi$ and $\Psi$ are functions of the radius coordinate $r$ only which will be determined by the field equations~(\ref{Eq:Einstein},\ref{Eq:Euler}). Note that when $\Phi = \Psi = 0$, the metric~(\ref{Eq:MetricAnsatz}) reduces to the Minkowski metric in spherical coordinates. In the solutions discussed below, the coordinate $r$ runs from $0$ to $\infty$. For the solution to be regular at $r=0$ we require $\Phi(r)$ and $\Psi(r)$ to be smooth, even functions of $r$ (i.e. all their derivatives of odd order vanish at $r=0$). As $r\to \infty$ we require asymptotic flatness, that is $\Phi,\Psi \to 0$. The perfect fluid configuration is also assumed to be static and spherically symmetric. This means that $\varepsilon = \varepsilon(r)$ and $p = p(r)$ are functions of $r$ only, and that the four-velocity is of the form
\begin{equation}
u^\mu\frac{\partial}{\partial x^\mu} = e^{-\frac{\Phi}{c^2}}\frac{\partial}{\partial t},
\end{equation}
such that the fluid elements are at rest in the reference frame defined by the coordinate system $(t,r,\vartheta,\varphi)$.
\subsection{Explicit expressions for the Einstein tensor and exterior solution}
In order to compute the $10$ independent components of the Einstein tensor $G_{\mu\nu}$ appearing in Eq.~(\ref{Eq:Einstein}), one needs to calculate first the $40$ independent Christoffel symbols $\Gamma^\nu{}_{\alpha\beta}$, as explained in the previous subsection. To carry out this calculation, it is convenient to exploit the block-diagonal form of the metric and write it as follows:
\begin{equation}
(g_{\mu\nu}) = \begin{pmatrix} \tilde{g}_{ab} & 0 \\ 0 & r^2\hat{g}_{AB} \end{pmatrix}, \qquad (g^{\mu\nu}) = \begin{pmatrix} \tilde{g}^{ab} & 0 \\ 0 & r^{-2}\hat{g}^{AB} \end{pmatrix},
\label{Eq:SphMetric}
\end{equation}
where $a,b$ refer to the coordinates $t,r$ and $A,B$ to the coordinates $\vartheta,\varphi$. For the specific parametrization~(\ref{Eq:MetricAnsatz}) relevant to this section, the two blocks are given by
\begin{align}
\tilde{g}_{ab}dx^adx^b &= -e^{2\Phi(r)}dt^2 + e^{2\Psi(r)}dr^2,
& (\hbox{$a, b = t, r$}), \label{Eq:TwoMetric}\\
\hat{g}_{AB}dx^Adx^B &= d\vartheta^2 + \sin^2\vartheta d\varphi^2,
& (\hbox{$A, B = \vartheta, \varphi$}). \label{Eq:SphTwoMetric}
\end{align}
From now on, we work in geometrized units in which $G_N = c = 1$, implying in particular that time and mass have units of length. The details of the calculations are presented in Appendix~\ref{App:Curvature}; here we directly present the resulting expressions for the Christoffel symbols and the components of the Einstein tensor. The non-vanishing Christoffel symbols are:
\begin{eqnarray}
&& \Gamma^{t}{}_{tr} = \Gamma^{t}{}_{rt} = \Phi', \qquad
\Gamma^{r}{}_{rr} = \Psi', \qquad
\Gamma^{r}{}_{tt} = \Phi' e^{2(\Phi - \Psi)},
\label{Eq:Christoffel1}\\
&& \Gamma^{\vartheta}{}_{r\vartheta} = \Gamma^{\vartheta}{}_{\vartheta r}
= \Gamma^{\varphi}{}_{r\varphi} = \Gamma^{\varphi}{}_{\varphi r} = \frac{1}{r},
\label{Eq:Christoffel2}\\
&& \Gamma^{r}{}_{\vartheta\vartheta} = -re^{-2\Psi}, \qquad
\Gamma^{r}{}_{\varphi\varphi} = -r\sin^2\vartheta e^{-2\Psi},
\label{Eq:Christoffel3}\\
&& \Gamma^{\vartheta}{}_{\varphi\varphi} = -\sin\vartheta\cos\vartheta, \qquad
\Gamma^{\varphi}{}_{\varphi\vartheta} = \Gamma^{\varphi}{}_{\vartheta\varphi} = \cot\vartheta,
\label{Eq:Christoffel4}
\end{eqnarray}
which give rise to the following expressions for the Einstein tensor:
\begin{align}
\label{Eq:TE1}
G^{t}{}_{t} & = \frac{1}{r^2}\left(e^{-2\Psi} - 1\right) - \frac{2\Psi'}{r}e^{-2\Psi}, \\
\label{Eq:TE2}
G^{r}{}_{r} & = \frac{1}{r^2}\left(e^{-2\Psi} - 1\right) + \frac{2\Phi'}{r}e^{-2\Psi}, \\
\label{Eq:TE3}
G^{\vartheta}{}_{\vartheta} = G^{\varphi}{}_{\varphi} & = \left[\Phi'' + \Phi'(\Phi' - \Psi') + \frac{\Phi' - \Psi'}{r}\right]e^{-2\Psi},
\end{align}
the off-diagonal components being zero.
Based on these expressions, it is a simple task to derive the Schwarzschild metric, which describes the unique static, spherically symmetric family of solutions in the exterior vacuum region. In vacuum, there are no energy sources and thus $T_{\mu\nu} = 0$ and Einstein's field equations imply
\begin{align}
\label{Eq:S1}
\frac{1}{r^2}\left(e^{-2\Psi} - 1\right) - \frac{2\Psi'}{r}e^{-2\Psi} & = 0, \\
\label{Eq:S2}
\frac{1}{r^2}\left(e^{-2\Psi} - 1\right) + \frac{2\Phi'}{r}e^{-2\Psi} & = 0, \\
\label{Eq:TS3}
\left[\Phi'' + \Phi'(\Phi' - \Psi') + \frac{\Phi' - \Psi'}{r}\right]e^{-2\Psi} & = 0.
\end{align}
The first equation only involves $\Psi(r)$ and can be rewritten as
\begin{equation}
G^t{}_t = -\frac{1}{r^2}\frac{d}{dr}[r(1 - e^{-2\Psi})] = 0,
\end{equation}
and hence $r(1 - e^{-2\Psi}) = 2M$ for some integration constant $M$. For reasons which will become clear shortly, we assume $M > 0$ to be positive. Therefore,
\begin{equation}
\label{Eq:aSch}
e^{-2\Psi} = 1 - \frac{2M}{r}.
\end{equation}
Moreover, subtracting Eq.~(\ref{Eq:S1}) from (\ref{Eq:S2}) one obtains the relation
\begin{equation}
\Phi' = -\Psi',
\end{equation}
which can be integrated to yield
\begin{equation}
\Phi = -\Psi,
\end{equation}
where without loss of generality we have set the integration constant to zero, since otherwise it could be absorbed into a redefinition of the time coordinate $t$ (which does not alter the physics of the problem because of the general covariance principle of General Relativity). Using this relation in Eq.~(\ref{Eq:aSch}) one obtains
\begin{equation}
\label{Eq:PhiSch}
e^{2\Phi} = 1 - \frac{2M}{r},
\end{equation}
which yields the Schwarzschild solution, given by the line element
\begin{equation}
ds^2 = -\left(1 - \frac{2M}{r}\right)dt^{2} + \left(1 - \frac{2M}{r}\right)^{-1}dr^{2} + r^2(d\vartheta^{2} + \sin^{2}\vartheta d\varphi^{2}).
\end{equation}
We see that for $r \gg M$, $2M/r \ll 1$, and in this limit the metric can be considered to describe a small perturbation of the flat Minkowski metric. Thus, in this case the Newtonian limit is valid which allows one to identify the quantity $-M/r$ with the Newtonian potential $\Phi$, that is, $\Phi = -M/r$. In this sense, the integration constant $M$ can be identified with the total mass of the central object. The Schwarzschild metric is an exact non-trivial (i.e. non-flat) solution of the Einstein field equations. In the absence of matter, it describes a non-rotating black hole (see, for instance, Ref.~\cite{Wald} for details).
\subsection{Interior region and TOV equations}
In the interior region, the relevant field equations are obtained by replacing the right-hand sides of Eqs. (\ref{Eq:S1})-(\ref{Eq:TS3}) with the corresponding components of $8\pi$ times the energy-momentum-stress tensor.\footnote{Recall that we work in geometrized units in which $G_N = c = 1$.} Using the fact that $T^t{}_t = -\varepsilon$, $T^r{}_r = T^\vartheta{}_\vartheta = T^\varphi{}_\varphi = p$, we obtain the following three equations
\begin{align}
\label{Eq:Einstein1}
\frac{1}{r^2}\left(e^{-2\Psi} - 1\right) - \frac{2\Psi'}{r}e^{-2\Psi} & = -8\pi\varepsilon, \\
\label{Eq:Einstein2}
\frac{1}{r^2}\left(e^{-2\Psi} - 1\right) + \frac{2\Phi'}{r}e^{-2\Psi} & = 8\pi p, \\
\label{Eq:Einstein3}
\left[\Phi'' + \Phi'(\Phi' - \Psi') + \frac{\Phi' - \Psi'}{r}\right]e^{-2\Psi} & = 8\pi p.
\end{align}
As in the vacuum case, the left-hand side of Eq.~(\ref{Eq:Einstein1}) only involves the metric field $\Psi(r)$, and it can be rewritten in the form
\begin{equation}
\frac{1}{r^2}\frac{d}{dr}[r(1 - e^{-2\Psi})] = 8\pi\varepsilon.
\end{equation}
Integrating both sides of this equation yields
\begin{equation}
\label{Eq:aTOV}
e^{-2\Psi(r)} = 1 - \frac{8\pi}{r} \int_{0}^{r} \varepsilon(s) s^2 ds,
\end{equation}
where we have used the fact that $\Psi(r)$ is regular at $r = 0$ to fix the integration constant. Introducing the mass function
\begin{equation}
\label{Eq:MasaT}
m(r) := 4\pi\int_{0}^{r} \varepsilon(s) s^2 ds,
\end{equation}
which measures the mass-energy contained in a sphere of radius $r$, Eq.~(\ref{Eq:aTOV}) can be rewritten as
\begin{equation}
\label{Eq:masa}
e^{-2\Psi(r)} = 1 - \frac{2m(r)}{r}.
\end{equation}
Eliminating the factor $e^{-2\Psi(r)}$ from Eq. (\ref{Eq:Einstein2}) one obtains
\begin{equation}
\label{Eq:Phi}
\Phi'(r) = \frac{m(r) + 4\pi r^3 p(r)}{r[r - 2m(r)]}.
\end{equation}
This is the relativistic generalization of the Newtonian equation $\Phi'(r) = m(r)/r^2$, to which Eq.~(\ref{Eq:Phi}) reduces in the limit $p \ll \varepsilon$ and $m(r) \ll r$.
Next, one needs an equation for the pressure $p(r)$. Such an equation could be obtained by substituting Eqs.~(\ref{Eq:aTOV}) and (\ref{Eq:Phi}) into the last Einstein equation~(\ref{Eq:Einstein3}). However, a lot of algebraic work can be saved by considering instead Eq.~(\ref{Eq:Euler}), from which one directly obtains the same result, which is
\begin{equation}
p' = -(p + \varepsilon)\Phi'.
\end{equation}
Finally, we may eliminate $\Phi'$ from this equation by using Eq.~(\ref{Eq:Phi}), obtaining the well-known Tolman-Oppenheimer-Volkoff (TOV) equation
\begin{equation}
\label{Eq:TOV}
p'(r) = -[p(r) + \varepsilon(r)]\frac{m(r) + 4\pi r^3 p(r)}{r[r - 2m(r)]}.
\end{equation}
This generalizes the Newtonian condition for hydrostatic equilibrium $p'(r) = -\rho(r)\frac{m(r)}{r^2}$ (with $\rho$ the mass density) to the general relativistic case. Note that the relativistic correction terms tend to increase the pressure gradient $|p'|$, yielding more compact objects. Note also that Eq.~(\ref{Eq:TOV}) is singular at $r = 0$ and $2m(r) = r$. The first one requires appropriate regularity conditions at the center and will be dealt with by replacing the mass function $m(r)$ with the mean density (see sections~\ref{SubSec:Dimensionless} and~\ref{Sec:LocalExistence} below). Regarding the potential singularity at $2m(r) = r$, we will prove in section~\ref{Sec:GlobalExistence} that (under the hypotheses made in this article), $2m(r) < r$ everywhere, such that it does not occur. For now we note that Eq.~(\ref{Eq:MasaT}) implies that $m(r)\simeq r^3$ near the center such that $2m(r)/r\simeq r^2$.
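The statement that the relativistic corrections strengthen the pressure gradient can be made explicit by factorizing the right-hand side of Eq.~(\ref{Eq:TOV}) in the standard way,
\begin{equation*}
p'(r) = -\frac{\varepsilon(r)\, m(r)}{r^2}
\left[1 + \frac{p(r)}{\varepsilon(r)}\right]
\left[1 + \frac{4\pi r^3 p(r)}{m(r)}\right]
\left[1 - \frac{2m(r)}{r}\right]^{-1},
\end{equation*}
where each of the three factors in brackets is greater than or equal to one and reduces to unity in the Newtonian limit.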
In summary, the metric for a spherical, static, self-gravitating perfect fluid configuration is given by
\begin{equation}
ds^2 = -e^{2\Phi(r)}dt^2 + \left(1 - \frac{2m(r)}{r}\right)^{-1}dr^2 + r^2(d\vartheta^2 + \sin^2\vartheta d\varphi^2),
\end{equation}
where $m(r)$ is given by Eq. (\ref{Eq:MasaT}), $\Phi(r)$ is determined from Eq. (\ref{Eq:Phi}), and $p(r)$ must satisfy the TOV equation~(\ref{Eq:TOV}). The latter can be integrated as soon as one specifies an equation of state which provides a relation between the pressure $p$ and the energy density $\varepsilon$. In the next subsection we specify our precise assumptions on the equations of state considered in this article, while in the subsequent sections we provide a rigorous analysis for the existence of solutions of the TOV equation.
\subsection{The equation of state}
\label{sec:EquationState}
In the following, we state our assumptions on the equation of state, which provides a relation between the pressure $p$ and the energy density $\varepsilon$. Such a relation should be obtained from a statistical mechanics model of the matter, which usually provides the pressure and energy density as a function of the particle density $n$ and the temperature $T$ of the system:
\begin{equation}
p = p(n,T),\qquad
\varepsilon = \varepsilon(n,T),
\end{equation}
see Appendix~\ref{App:StatFis} for the specific example of an ideal monoatomic relativistic gas. For the following, we assume that the perfect fluid configuration is in \emph{local thermodynamic equilibrium}, that is, each fluid (or gas) cell is in thermodynamic equilibrium and thus the macroscopic quantities describing the state of this cell satisfy the laws of thermodynamics. Assuming that the cell contains a fixed number $N$ of particles, the relevant macroscopic quantities characterizing the state of the cell are its volume $V = N/n$, its entropy $S = s N/n$ (with $s$ the entropy density), its energy $U = \varepsilon N/n$, and other quantities such as its temperature $T$. Since $N$ is fixed, the first law of thermodynamics implies that
\begin{equation}
d\left(\frac{\varepsilon}{n}\right) = T d\left( \frac{s}{n} \right) -p d\left(\frac{1}{n}\right).
\label{Eq:FirstLaw}
\end{equation}
In general, the energy density $\varepsilon$ is a function of the entropy per particle $s/n$ and $n$; however, in this article we assume the perfect fluid is \emph{isentropic}, that is, $s/n$ is constant throughout the fluid, such that the first term on the right-hand side of Eq.~(\ref{Eq:FirstLaw}) can be ignored. In this case, $\varepsilon$ depends only on $n$ and given an equation of state in the form $p = p(n)$, integration of Eq.~(\ref{Eq:FirstLaw}) yields
\begin{equation}
\varepsilon(p) = ne_0 + n\int_{0}^{n} p(\overline{n})\frac{d\overline{n}}{\overline{n}^2},\qquad
p = p(n),
\label{Eq:epsilonp}
\end{equation}
where $e_0$ denotes the rest mass energy of the particle and where from now on, we regard $\varepsilon$ as a function of $p$ instead of $n$. More precisely, we assume $p: [0,\infty)\to \mathbb{R}$ is a continuously differentiable function of the particle density $n$, satisfying the following conditions:
\begin{itemize}
\item[$(i)$] $p(n) > 0$ for $n > 0$ (positive pressure)
\item[$(ii)$] $p$ is monotonically increasing
\item[$(iii)$] Introducing the effective adiabatic index
\begin{equation}
\label{Eq:gamma(n)}
\gamma(n) := \frac{\partial\log p}{\partial\log n}(n) = \frac{n}{p(n)}\frac{\partial p}{\partial n}(n),
\qquad n > 0,
\end{equation}
we assume there is a constant $\gamma_1 > 1$ such that, for all small enough $n$,
\begin{equation}
\gamma(n)\geq \gamma_1
\end{equation}
\item[$(iv)$] $e_0 > 0$ (positive rest mass energy)
\end{itemize}
The condition $(iii)$ implies, upon integrating $d\log p/d\log n = \gamma(n) \geq \gamma_1$ from $n_1$ to $n_2$, that for small enough $n_2\geq n_1 > 0$,
\begin{equation}
\frac{p(n_1)}{p(n_2)} \leq \left( \frac{n_1}{n_2} \right)^{\gamma_1},
\label{Eq:pInequality}
\end{equation}
which implies that $p(n)$ converges to zero at least as fast as $n^{\gamma_1}$ for $n\to 0$. In particular, this assures that the integral in Eq.~(\ref{Eq:epsilonp}) is well-defined, and it follows from the conditions $(i)$--$(iv)$ that $\varepsilon : [0, \infty)\to \mathbb{R}$ is a continuously differentiable, monotonically increasing function which satisfies $\varepsilon(p)/n \to e_0$ as $p \to 0$.
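As a consistency check, differentiating Eq.~(\ref{Eq:epsilonp}) with respect to $n$ gives
\begin{equation*}
\frac{d}{dn}\left(\frac{\varepsilon}{n}\right) = \frac{p(n)}{n^2},
\qquad\hbox{that is,}\qquad
d\left(\frac{\varepsilon}{n}\right) = -p\, d\left(\frac{1}{n}\right),
\end{equation*}
which is precisely the first law~(\ref{Eq:FirstLaw}) for an isentropic fluid.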
For a discussion of realistic equations of state, including those describing phase transitions, we refer the reader to Ref.~\cite{Glendenning:1997wn}. In this case, the function $\varepsilon(p)$ might be discontinuous; however, it seems that models for neutron star matter based on two conserved quantities (baryonic number and electric charge) do yield a continuous relation between $n$, $p$ and $\varepsilon$, see chapter~9 in~\cite{Glendenning:1997wn}. See also Refs.~\cite{NeutronStarStructure,NuclearEquation,DenseMatter,MassesRadii} for recent work and reviews on realistic equations of state describing dense matter in neutron stars.
\subsection{Dimensionless field equations and summary}
\label{SubSec:Dimensionless}
For the analysis in the following sections it is useful to introduce the averaged energy density $\overline{\rho}(r)$ contained in a sphere of radius $r$:
\begin{equation}
\label{Eq:W}
\overline{\rho}(r) := \frac{m(r)}{\frac{4\pi}{3} r^3}
= \frac{3}{r^3}\int_0^r \varepsilon(p(s)) s^2 ds,\qquad r > 0,
\end{equation}
which is regular at the center. In terms of $\overline{\rho}(r)$, Eqs.~(\ref{Eq:Phi},\ref{Eq:TOV}) can be rewritten as
\begin{equation}
\label{Eq:TOV2}
\Phi'(r) = -\frac{p'(r)}{p(r) + \varepsilon(p(r))}
= \frac{4\pi r}{3} \frac{\overline{\rho}(r) + 3p(r)}{1 - \frac{8\pi}{3} r^2 \overline{\rho}(r)}.
\end{equation}
Furthermore, it is also very convenient for the following to work in terms of dimensionless quantities. For this reason, we write the radius, pressure, energy density and averaged energy density as follows:
\begin{equation}
r = \ell x,\qquad
p(r) = p_c P(x), \qquad
\varepsilon(p) = \varepsilon_c e(P), \qquad
\overline{\rho}(r) = \varepsilon_c w(x),
\label{Eq:Dimensionless}
\end{equation}
where $p_c = p(0)$ is the central pressure, $\varepsilon_c$ the central energy density, and $\ell$ is a free parameter which will be chosen later. Here, the function $e(P)$ represents the dimensionless equation of state which satisfies the same properties as the function $\varepsilon(p)$ in Eq.~(\ref{Eq:epsilonp}). By definition, the functions $P(x)$, $w(x)$ and $e(P)$ satisfy the following conditions at the center,
\begin{equation}
P(0) = w(0) = 1, \qquad
e(1) = 1.
\label{Eq:CenterConditions}
\end{equation}
In terms of these quantities, the field equations~(\ref{Eq:TOV2}) are
\begin{equation}
\frac{d}{dx}\left(\frac{\Phi}{\lambda}\right) = -\frac{1}{e + \lambda P}\frac{dP}{dx}
= \frac{4\pi\ell^2 \varepsilon_c x}{3\lambda} \frac{w(x)
+ 3\lambda P(x)}{1 - \frac{8\pi\ell^2\varepsilon_c}{3}x^2 w(x)},
\label{Eq:TOV3}
\end{equation}
where we have introduced the dimensionless parameter
\begin{equation}
\lambda := \frac{p_c}{\varepsilon_c},
\end{equation}
representing the ratio between the central pressure and energy density. Note that in the Newtonian limit $\lambda \to 0$ since in this case the energy density and pressure are dominated by the contribution from the rest mass. In this sense, the parameter $\lambda$ measures how relativistic the resulting configuration will be. We see from Eq.~(\ref{Eq:TOV3}) that it is convenient to choose the length scale parameter $\ell$ such that
\begin{equation}
\frac{4\pi\ell^2 \varepsilon_c}{3} = \lambda.
\label{Eq:lDef}
\end{equation}
Also introducing the function $\phi(x) := \Phi(r)/\lambda$, our final form of the dimensionless field equations is
\begin{equation}
\label{Eq:TolmanA}
\frac{d}{dx}\phi(x) =
-\frac{1}{e(P(x)) + \lambda P(x)} \frac{d}{dx} P(x)
= x \frac{w(x) + 3\lambda P(x)}{1 - 2\lambda x^2 w(x)},
\end{equation}
with
\begin{equation}
\label{Eq:WA}
w(x) = \frac{3}{x^{3}}\int_0^x e(P(y)) y^2 dy.
\end{equation}
Note that in the Newtonian limit $\lambda\to 0$, Eq.~(\ref{Eq:TolmanA}) reduces to
\begin{equation}
\label{Eq:TolmanNewton}
\frac{d}{dx}\phi(x) =
-\frac{1}{e(P(x))} \frac{d}{dx} P(x) = x w(x),
\end{equation}
which are the correct Newtonian equations.
\section{Local existence near the center}
\label{Sec:LocalExistence}
In this section we prove, for each value $p_c > 0$ of the central pressure, the existence of a unique local solution $p(r)$ of the TOV equation~(\ref{Eq:TOV}) in the vicinity of the center of symmetry $r = 0$ such that $p(0) = p_c$. In the next section, this solution will be shown to possess a unique extension to a solution $p: [0,R_*]\to \mathbb{R}$ of Eq.~(\ref{Eq:TOV}) which is monotonically decreasing and satisfies $p(R_*) = 0$, and hence describes a spherical static star of finite radius $R_*$.
In order to demonstrate the existence of the local solution of the TOV equation, we rewrite Eq.~(\ref{Eq:TolmanA}) as a fixed point problem and use the contraction mapping principle. For this, we integrate both sides of
\begin{equation}
\frac{d}{dx}P(x) = -[e(P(x)) + \lambda P(x)]x \frac{w(x) + 3\lambda P(x)}{1 - 2\lambda x^2 w(x)},
\label{Eq:TolmanABis}
\end{equation}
over $x$, obtaining (taking into account the central condition $P(0) = 1$ from Eq.~(\ref{Eq:CenterConditions})) the integral equation
\begin{equation}
\label{Eq:IntTOV}
P(x) = 1 - \int_0^x \left[e(P(y)) + \lambda P(y) \right]
\frac{w(y) + 3\lambda P(y)}{1 - 2\lambda w(y) y^2} y dy =: TP(x),
\end{equation}
where $w(x)$ is given by (\ref{Eq:WA}). The problem now consists in finding a function $P(x)$ (in a suitable function space which will be specified below) which satisfies the fixed point equation $P = TP$. This can be achieved by means of the contraction mapping principle, which provides sufficient conditions for $T$ to possess a unique fixed point. We recall this important result which can be found in many textbooks (see, for instance~\cite{ReedSimon80}).
\begin{Teo}[contraction mapping principle]
\label{Thm:Banach}
Let $\left(X, \|\cdot\|\right)$ be a Banach space, and let $A = \overline{A} \subset X$ be a closed, non-empty subset of $X$. Let $T : A \rightarrow A$ be a mapping from $A$ to itself which constitutes a contraction, that is, there exists a constant $L$ satisfying $0 \leq L < 1$ such that
\begin{equation}
\| T(u) - T(v)\| \leq L\|u - v\|\qquad \hbox{for all $u,v\in A$}.
\end{equation}
Then, $T$ has a unique fixed point $u^*\in A$, that is, there exists a unique $u^* \in A$ such that $T(u^*) = u^*$.\footnote{The theorem says even more: the unique fixed point $u^*\in A$ can be obtained as the limit of the sequence $(u_k)$ defined by
\begin{equation*}
u_1 := T(u), \quad u_2 := T^2(u) = T(T(u)), \quad \ldots, \qquad u_k := T^k(u),
\end{equation*}
starting from any point $u\in A$. This sequence converges exponentially fast to $u^*$ as the following error bound shows:
\begin{equation*}
\|u_k - u^*\| \leq \frac{L^k}{1 - L}\|u_1 - u\|, \qquad k = 1, 2, 3, \ldots
\end{equation*}
}
\end{Teo}
In order to apply this theorem to the fixed point problem~(\ref{Eq:IntTOV}) we introduce, for each $R > 0$, the space $X_R := C_b( (0,R],\mathbb{R})$ of bounded, continuous real-valued functions on the interval $(0,R]$, equipped with the infinity norm:
\begin{equation}
\|P\|_{\infty} := \sup_{0 < x \leq R} |P(x)|, \quad P \in X_R.
\label{Eq:Norm}
\end{equation}
In Appendix~\ref{App:Completeness} we show that $\|\cdot\|_\infty$ defines a norm on $X_R$ and that $(X_R,\|\cdot\|_\infty)$ defines a Banach space, that is, a complete normed vector space. Next, we introduce the subset $A_R \subset X_R$ defined as
\begin{equation}
A_R := \left\{P \in X_R \; \biggr\rvert \; \lim\limits_{x\to 0} P(x) = 1 \; \hbox{and} \; \frac{1}{2} \leq P(x) \leq 1\; \hbox{for all} \; 0 < x \leq R\right\}.
\end{equation}
Clearly, $A_R$ is not empty since it contains the constant function $P = 1$. Furthermore, it is not difficult to verify that $A_R$ is closed: if $P_k$ is a sequence in $A_R$ which converges to $P\in X_R$ in the infinity norm, that is,
\begin{equation}
\| P_k - P \|_\infty = \sup_{0 < x \leq R} |P_k(x) - P(x)| \to 0,\qquad k\to \infty,
\end{equation}
then $P_k$ converges uniformly to $P$ and it follows that $P(x)\to 1$ as $x\to 0$ and $\frac{1}{2}\leq P(x)\leq 1$ since $P_k\in A_R$. Therefore, the limiting point $P$ of the sequence $P_k$ also lies in $A_R$, and it follows that $A_R$ is closed.
For the following, we show that the map $T$ defined in Eq.~(\ref{Eq:IntTOV}) is well-defined on $A_R$, maps $A_R$ into itself and defines a contraction provided that $R > 0$ is small enough. For this, first note that due to the fact that $e(P)$ is an increasing function and that $P\leq 1$ it follows from Eq.~(\ref{Eq:WA}) and the normalization $e(1) = 1$ that
\begin{equation}
w(x) = \frac{3}{x^3}\int_0^x e(P(y)) y^2 dy \leq \frac{3}{x^3}\int_0^x e(1) y^2 dy = 1,
\end{equation}
for all $P\in A_R$, such that $w(x)$ is bounded from above by $1$. Also, since $P \geq 1/2$ for all $P\in A_R$, it follows that
\begin{equation}
w(x) = \frac{3}{x^3}\int_0^x e(P(y)) y^2 dy \geq \frac{3}{x^3}\int_0^x e(1/2) y^2 dy
= e(1/2) =: w_0 > 0,
\end{equation}
which allows us to conclude that $w_0 \leq w \leq 1$ for all $P \in A_R$. Moreover, since $e$ and $P$ are continuous, it follows that $w$ is continuous and (using L'H\^opital's rule) that $w(x)\to e(1) = 1$ as $x\to 0$. Thus, if the function $P$ lies in the set $A_R$, then the function $w$ defined by Eq.~(\ref{Eq:WA}) belongs to the set
\begin{equation}
B_R := \left\{w \in X_R \; \biggr\rvert \; \lim\limits_{x\to 0} w(x) = 1 \; \hbox{and} \; w_0 \leq w(x) \leq 1\; \hbox{for all} \; 0 < x \leq R\right\}.
\end{equation}
After these preliminary remarks, we are ready to show that the map $T$ in Eq.~(\ref{Eq:IntTOV}) defines a contraction on $A_R$, provided $R > 0$ is small enough: first, we observe that $1 - 2\lambda w(y) y^2 \geq 1 - 2\lambda R^2$ for all $0 < y \leq R$ if $w\in B_R$; hence, if $R$ is chosen small enough that $2\lambda R^2 < 1$, the denominator in the integrand of Eq.~(\ref{Eq:IntTOV}) cannot vanish for $0 < x \leq R$. Next, using again the continuity and boundedness of the functions $e$, $P$ and $w$, it follows that $TP: (0,R]\to \mathbb{R}$ is continuous and satisfies $TP(x)\to 1$ for $x\to 0$. Moreover, because the integrand in Eq.~(\ref{Eq:IntTOV}) is positive, $TP$ is monotonically decreasing. To show that $TP\in A_R$ it thus remains to prove that $TP(R) \geq \frac{1}{2}$. For this, we use the estimates $P\leq 1$, $w\leq 1$, $1 - 2\lambda w(y) y^2 \geq 1 - 2\lambda y^2$ and the fact that $e$ is an increasing function in order to estimate
\begin{equation*}
[e(P(y)) + \lambda P(y)] \frac{w(y) + 3\lambda P(y)}{1 - 2\lambda w(y) y^2}
\leq (1 + \lambda)\frac{1 + 3\lambda}{1 - 2\lambda y^2},
\end{equation*}
which implies
\begin{eqnarray*}
TP(x) &=&
1 - \int_0^x [e(P(y)) + \lambda P(y)] \frac{w(y) + 3\lambda P(y)}{1 - 2\lambda w(y) y^2} y dy \\
&\geq& 1 - \int_0^x (1 + \lambda)\frac{1 + 3\lambda}{1 - 2\lambda y^2} y dy \\
&=& 1 + \left(1 + \lambda \right)\frac{1 + 3\lambda}{4\lambda}\log(1 - 2\lambda x^2),
\end{eqnarray*}
for all $0 < x \leq R$, and the required condition $T P(R) \geq \frac{1}{2}$ is satisfied if $R > 0$ is small enough, such that
\begin{equation}
\label{Eq:De}
2\lambda R^2 \leq 1 - e^{-\frac{2\lambda}{(1+\lambda)(1 + 3\lambda)}},
\end{equation}
which is slightly stronger than the previous requirement $2\lambda R^2 < 1$. Therefore, if $R$ satisfies the inequality~(\ref{Eq:De}), the map $T$ defined by Eq.~(\ref{Eq:IntTOV}) is a well-defined map from $A_R$ into itself. To apply the contraction mapping principle, it remains to prove that $T$ defines a contraction on $A_R$ (for sufficiently small $R > 0$), that is, there must exist a constant $0 \leq L < 1$ such that
\begin{equation}
\|TP_2 - TP_1\|_{\infty} \leq L\|P_2 - P_1\|_{\infty}, \quad \hbox{for all $P_1, P_2 \in A_R$}.
\end{equation}
In order to verify this condition, we write the difference $TP_2 - TP_1$ in the following form:
\begin{equation}
TP_2(x) - TP_1(x) = -\int_0^x
\left[ F_{\lambda}(P_2(y), w_2(y), y) - F_{\lambda}(P_1(y), w_1(y), y) \right] y dy,
\label{Eq:TPDiff}
\end{equation}
with $F_\lambda : \left[\frac{1}{2}, 1\right] \times [w_0, 1] \times [0, R]\to \mathbb{R}$ the continuously differentiable function defined by
\begin{equation}
F_\lambda(p, w, y) := [e(p) + \lambda p]\frac{w + 3\lambda p}{1 - 2\lambda w y^2},\qquad
\frac{1}{2}\leq p\leq 1,\quad w_0\leq w\leq 1,\quad 0\leq y\leq R.
\end{equation}
According to the mean value theorem \cite{Apostol}, one has for all $\frac{1}{2}\leq P_1,P_2\leq 1$, $w_0\leq w_1,w_2\leq 1$ and $0\leq y\leq R$,
\begin{equation}
F_{\lambda}(P_2, w_2, y) - F_{\lambda}(P_1, w_1, y)
= \frac{\partial F_{\lambda}}{\partial P}(P_*, w_*, y)(P_2 - P_1)
+ \frac{\partial F_{\lambda}}{\partial w}(P_*, w_*, y)(w_2 - w_1),
\end{equation}
with $P_* = P_1 + \theta_P(P_2 - P_1)$, $0 < \theta_P < 1$, lying between $P_1$ and $P_2$ and likewise, $w_* = w_1 + \theta_w(w_2 - w_1)$, $0 < \theta_w < 1$. Inserting this into Eq.~(\ref{Eq:TPDiff}) one obtains the estimate
\begin{eqnarray}
|TP_2(x) - TP_1(x)| &\leq& \int_0^x
\left[ \left|\frac{\partial F_{\lambda}}{\partial P}(P_*(y), w_*(y), y)(P_2(y) - P_1(y))\right|
+ \left|\frac{\partial F_{\lambda}}{\partial w}(P_*(y), w_*(y), y)(w_2(y) - w_1(y))\right|\right] y dy
\nonumber\\
&\leq& \int_0^x
\left[ C_1(R) |P_2(y) - P_1(y)| + C_2(R) |w_2(y) - w_1(y)| \right] y dy,
\label{Eq:TPDiffEst}
\end{eqnarray}
with the constants
\begin{equation*}
C_1(R) := \max\limits_{\substack{\frac{1}{2} \leq P \leq 1 \\ w_0 \leq w \leq 1 \\ 0 \leq y \leq R }}
\left|\frac{\partial F_{\lambda}}{\partial P}(P, w, y)\right|,
\qquad
C_2(R) := \max\limits_{\substack{\frac{1}{2} \leq P \leq 1 \\ w_0 \leq w \leq 1 \\ 0 \leq y \leq R }}
\left|\frac{\partial F_{\lambda}}{\partial w}(P, w, y)\right|.
\end{equation*}
Taking the supremum over $x$ on both sides of the inequality~(\ref{Eq:TPDiffEst}) one obtains the estimate
\begin{equation}
\|TP_2 - TP_1\|_{\infty} \leq \frac{R^2}{2}
\left[ C_1(R)\|P_2 - P_1\|_\infty + C_2(R) \|w_2 - w_1\|_\infty\right],
\label{Eq:TPDiffEstBis}
\end{equation}
for all $P_1,P_2\in A_R$ and $w_1,w_2\in B_R$. Furthermore, using the definition~(\ref{Eq:WA}), one obtains in a similar manner the estimate
\begin{equation}
|w_2(x) - w_1(x)| \leq \frac{3}{x^3}\int_0^x |e(P_2(y)) - e(P_1(y))| y^2\, dy
\leq C_3 \| P_2 - P_1 \|_\infty,
\label{Eq:wDiffEst}
\end{equation}
with the constant
\begin{equation*}
C_3 := \max\limits_{\frac{1}{2} \leq P \leq 1}\left| \frac{de}{dP}(P)\right|,
\end{equation*}
where we have used the fact that $e: [1/2, 1] \to \mathbb{R}$ is a continuously differentiable function due to the properties of the function $\varepsilon(p)$ defined in~(\ref{Eq:epsilonp}). Combining the two estimates~(\ref{Eq:TPDiffEstBis},\ref{Eq:wDiffEst}) one obtains, finally
\begin{equation}
\|TP_2 - TP_1\|_\infty \leq L(R)\|P_2 - P_1\|_\infty,\qquad
L(R) := \frac{R^2}{2}\left[ C_1(R) + C_2(R) C_3 \right],
\end{equation}
for all $P_1,P_2\in A_R$. Since $C_1(R)$ and $C_2(R)$ do not increase as $R$ decreases, it is clear that one can choose $R > 0$ small enough such that $L(R) < 1$ and $T: A_R\to A_R$ describes a contraction on $A_R$. Now we can use the contraction mapping principle (Theorem~\ref{Thm:Banach}) to show:
\begin{Teo}
\label{Thm:LocalExistence}
For small enough $R > 0$, there exists a unique, continuously differentiable solution $P: (0,R)\to \mathbb{R}$ of the dimensionless TOV equation~(\ref{Eq:TolmanABis}) satisfying $\lim\limits_{x\to 0} P(x) = 1$.
\end{Teo}
\begin{proof}
Theorem~\ref{Thm:Banach} and the previous observations guarantee that for small enough $R > 0$ the map $T$ has a unique fixed point $P$ in $A_R$. Since $TP: (0,R)\to \mathbb{R}$ is differentiable, $P = TP$ is differentiable as well and differentiating both sides of the equation $P(x) = TP(x)$ with respect to $x$ one finds that Eq.~(\ref{Eq:TolmanABis}) is satisfied for all $0 < x < R$, and hence $dP/dx$ is also continuous.
Regarding the uniqueness property, if $\tilde{P}: (0,R)\to \mathbb{R}$ were another continuously differentiable solution of Eq.~(\ref{Eq:TolmanABis}) such that $\lim\limits_{x\to 0}\tilde{P}(x) = 1$, then $\tilde{P}$ would also be a fixed point of $T$ and hence would agree with $P$.
\end{proof}
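The iteration described in the footnote of Theorem~\ref{Thm:Banach} also suggests a simple numerical construction of the local solution. The following Python sketch (variable names, grid size and tolerance are merely illustrative choices) applies the map $T$ of Eq.~(\ref{Eq:IntTOV}) repeatedly on a radial grid, computing $w$ from Eq.~(\ref{Eq:WA}) by cumulative trapezoidal integration:
\begin{verbatim}
import numpy as np

def w_of(P, x, e_of_P):
    # Mean density w(x) of Eq. (WA) by cumulative trapezoidal integration.
    f = e_of_P(P) * x**2
    I = np.zeros_like(x)
    I[0] = f[0] * x[0] / 3.0          # sliver [0, x_0]: f ~ const * x^2 there
    I[1:] = I[0] + np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return 3.0 * I / x**3

def T_map(P, x, lam, e_of_P):
    # One application of the integral operator T of Eq. (IntTOV).
    w = w_of(P, x, e_of_P)
    e = e_of_P(P)
    g = (e + lam * P) * (w + 3.0 * lam * P) / (1.0 - 2.0 * lam * w * x**2) * x
    J = np.zeros_like(x)
    J[0] = g[0] * x[0] / 2.0          # sliver [0, x_0]: g ~ const * x there
    J[1:] = J[0] + np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))
    return 1.0 - J

def picard_tov(lam, R, e_of_P, n_grid=4000, n_iter=100, tol=1e-12):
    # Fixed-point iteration P -> TP, started from the constant function P = 1.
    x = np.linspace(1e-6, R, n_grid)
    P = np.ones(n_grid)
    for _ in range(n_iter):
        P_new = T_map(P, x, lam, e_of_P)
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = P_new
    return x, P
\end{verbatim}
For an equation of state such as the polytropic one of section~\ref{Sec:Numerical} and a radius $R$ satisfying the smallness condition~(\ref{Eq:De}), the iterates converge exponentially fast, in agreement with the error bound quoted in Theorem~\ref{Thm:Banach}.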
Finally, $\phi$ is obtained by integrating both sides of Eq.~(\ref{Eq:TolmanA}):
\begin{equation}
\phi(x) = \phi_c + \int_0^x \frac{w(y) + 3\lambda P(y)}{1 - 2\lambda y^2 w(y)} y dy,\qquad
0\leq x < R,
\end{equation}
with a constant of integration $\phi_c$ denoting the central value of $\phi$. If the solution exists globally, one can adjust this constant such that $\phi(x)\to 0$ for $x\to \infty$. Equivalently, if a global solution with finite radius $x_* > 0$ exists (sufficient conditions for this to occur will be discussed in the next section), one can choose the value of $\phi_c$ such that $\phi(x_*)$ matches its Schwarzschild value $\phi(x_*) = \frac{1}{2\lambda} \log\left(1 - \frac{2M}{\ell x_*}\right)$ (recall that $\phi = \Phi/\lambda$), with $M := \ell\lambda x_*^3 w(x_*)$ the total mass of the configuration.
In this way, one obtains a unique, continuously differentiable solution $(\phi(x),P(x))$ of Eqs.~(\ref{Eq:TolmanA}) on a small interval $(0,R)$ near the center with the required boundary conditions $\phi(0) = \phi_c$ and $P(0) = 1$. Moreover, with some algebraic work one can show that the original Euler-Einstein equations~(\ref{Eq:Einstein1},\ref{Eq:Einstein2},\ref{Eq:Einstein3}) are satisfied.
\section{Global existence of finite radius solutions and Buchdahl bound}
\label{Sec:GlobalExistence}
In the previous section we proved the existence of a unique solution $P: (0,R)\to \mathbb{R}$ of the dimensionless TOV equation~(\ref{Eq:TolmanABis}) on a small interval $(0,R)$, which satisfies the required boundary condition $\lim\limits_{x\to 0} P(x) = 1$ at the center, see Theorem~\ref{Thm:LocalExistence}. In this section, we show that under suitable hypotheses on the equation of state, this solution can be extended to an interval $(0,x_*)$ with $x_* > R$ describing the surface of the star, which is characterized by the condition $\lim\limits_{x\to x_*} P(x) = 0$ of vanishing pressure.
To prove this result, we define
\begin{align*}
x_* & := \sup\bigg\{ x_1 > 0 \; \biggr\rvert \; P: (0, x_1) \to \mathbb{R}
\hbox{ is a continuously differentiable solution of Eq.~(\ref{Eq:TolmanABis}) satisfying }
\lim\limits_{x\to 0} P(x) = 1 \\
& \qquad\qquad\qquad\qquad
\hbox{and such that $0 < P(x) \leq 1$ and $1 - 2\lambda x^2 w(x) > 0$ for all $x\in (0,x_1)$}
\bigg\}.
\end{align*}
According to Theorem~\ref{Thm:LocalExistence}, $x_* > 0$ is well-defined. There are two alternatives. Either
\begin{itemize}
\item[(a)] $x_* < \infty$ is finite, or
\item[(b)] $x_* = \infty$ is infinite.
\end{itemize}
Moreover, since $dP/dx < 0$, $P(x)$ is a monotonically decreasing function, and case (a) occurs either if
\begin{itemize}
\item[(a.1)] $\displaystyle\lim_{x\to x_*}{[1 - 2\lambda x^2 w(x)]} > 0$ and $\displaystyle\lim_{x \to x_*}{P(x)} = 0$, or if
\item[(a.2)] $\displaystyle\lim_{x\to x_*}{[1 - 2\lambda x^2 w(x)]} = 0$.
\end{itemize}
The central result of this section is to show that under the conditions $(i)$--$(iv)$ in section~\ref{sec:EquationState}, only the case (a.1) can occur if $\gamma_1 > 4/3$, which means that the local solution has a unique extension describing a star of finite radius $R_* = \ell x_* > 0$. The strategy of the proof is the following: first, we eliminate case (b), i.e. we exclude the possibility of a star with infinite extension. Subsequently, we eliminate case (a.2) by proving that the averaged density function $w(x)$ cannot grow too fast to make the denominator in Eq.~(\ref{Eq:TolmanABis}) zero. As a by-product of this result, we will also obtain a bound on the compactness ratio
\begin{equation}
\frac{2m(r)}{r} = 2\lambda x^2 w(x),
\end{equation}
which shows that it is, in fact, not only smaller than one (as required to eliminate case (a.2)) but even smaller than $8/9$ for all $0 < x < x_*$. In particular, this implies that the compactness ratio at the surface of the star $r \to R_*$ is bounded from above by the well-known Buchdahl value $8/9$.
We start with the following theorem which eliminates case (b):
\begin{Teo}
\label{Thm:FiniteRadius}
Suppose the conditions $(i)$--$(iv)$ in section~\ref{sec:EquationState} are satisfied with the lower adiabatic bound $\gamma_1 > 4/3$. Then $x_* < \infty$ is finite.
\end{Teo}
\begin{proof}
We suppose that $x_* = \infty$ and show that this leads to a contradiction. Since $P$ is bounded and monotonically decreasing, the limit
\begin{equation}
\label{Eq:Pinf}
P_\infty := \lim\limits_{x\to \infty} P(x) \geq 0
\end{equation}
exists. The remainder of the proof is based on the following two simple lemmas whose proofs will be given further below. The first lemma shows that $P_\infty$ must be zero:
\begin{Lem}
\label{Lem:1}
Suppose $x_* = \infty$. Then $P_\infty = 0$.
\end{Lem}
The second lemma provides a lower bound on the energy density which will be key in the proof of the theorem:
\begin{Lem}
\label{Lem:2}
Any equation of state fulfilling the conditions $(i)$--$(iv)$ in section~\ref{sec:EquationState} satisfies the following estimate: there are constants $C > 0$ and $P_1 > 0$ such that
\begin{equation}
\label{Eq:Estimatione}
e(P) \geq CP^{1/\gamma_1},
\end{equation}
for all $0 \leq P\leq P_1$.
\end{Lem}
We now return to the proof of Theorem~\ref{Thm:FiniteRadius} and show that $x_* = \infty$ and $P_\infty = 0$ leads to a contradiction if $\gamma_1 > 4/3$. To this purpose, we use Eq.~(\ref{Eq:TolmanABis}) to estimate
\begin{equation}
-\frac{1}{e(P(x)) + \lambda P(x)} \frac{d}{dx} P(x) \geq x w(x)
\end{equation}
for all $x > 0$. Integrating both sides of this inequality yields
\begin{equation}
\label{Eq:Est}
- \int_x^\infty \frac{1}{e(P(y)) + \lambda P(y)} \frac{dP}{dy}(y) dy \geq \int_x^\infty w(y) y dy.
\end{equation}
Using the variable substitution $P = P(y)$ and the estimate~(\ref{Eq:Estimatione}), the integral on the left-hand side can be rewritten and estimated according to
\begin{equation}
- \int_x^\infty \frac{1}{e(P(y)) + \lambda P(y)} \frac{dP}{dy}(y) dy
= \int_0^{P(x)} \frac{dP}{e(P) + \lambda P}
\leq \int_0^{P(x)} \frac{dP}{C P^{1/\gamma_1}}
= \frac{P(x)^{1 - 1/\gamma_1}}{C(1 - 1/\gamma_1)},
\end{equation}
for all large enough $x\geq x_1$, such that $P(x_1)\leq P_1$. This yields the following lower bound on $P$:
\begin{equation}
\label{Eq:EstimationLHS}
P(x)^{1 - 1/\gamma_1}
\geq - C_1\int_x^\infty \frac{1}{e(P(y)) + \lambda P(y)} \frac{dP}{dy}(y) dy,
\end{equation}
with $C_1 := C(1 - 1/\gamma_1) > 0$ a constant. Next, we estimate the integral on the right-hand side of Eq.~(\ref{Eq:Est}). Recalling that $\overline{m}(x) := x^3 w(x)$ is proportional to the mass function, which is an increasing function of $x$, we obtain
\begin{equation}
\label{Eq:EstimationRHS}
\int_x^\infty w(y) y dy = \int_x^\infty \frac{\overline{m}(y)}{y^2} dy
\geq \overline{m}(x) \int_x^\infty\frac{dy}{y^2} = \frac{\overline{m}(x)}{x} = x^2 w(x),
\end{equation}
for all $x > 0$. The three estimates~(\ref{Eq:Est},\ref{Eq:EstimationLHS},\ref{Eq:EstimationRHS}) imply the following inequality between $P$ and $w$:
\begin{equation}
P(x)^{1 - 1/\gamma_1} \geq C_1 x^2 w(x)
\label{Eq:Est2}
\end{equation}
for all $x\geq x_1$. Combining this with the estimate $w(x) \geq e(P(x))$ (which follows directly from the definition~(\ref{Eq:WA}) of $w(x)$ and the monotonicity properties of $e$ and $P$) and the key estimate~(\ref{Eq:Estimatione}) yields
\begin{equation}
P(x)^{1 - 2/\gamma_1} \geq C_2 x^2
\label{Eq:Est3}
\end{equation}
for all $x\geq x_1$, with the new constant $C_2 := C C_1 = C^2(1 - 1/\gamma_1) > 0$. This already yields a contradiction for $\gamma_1 \geq 2$, since in this case the left-hand side converges to zero (or stays constant if $\gamma_1 = 2$) while the right-hand side goes to infinity as $x\to \infty$. This proves the theorem for $\gamma_1\geq 2$.
It remains to analyze the case $4/3 < \gamma_1 < 2$. For this, we use again the key estimate~(\ref{Eq:Estimatione}) and the fact that $e(P(x)) \leq w(x)$, obtaining $P(x)^{1/\gamma_1} \leq C^{-1} w(x)$, or
\begin{equation}
\left[\frac{w(x)}{C}\right]^{\gamma_1} \geq P(x)
\end{equation}
for all $x\geq x_1$. Combining this with the inequality~(\ref{Eq:Est2}) yields
\begin{equation}
w(x)^{\gamma_1 -2} \geq C_3 x^2
\end{equation}
for all $x\geq x_1$ with the positive constant $C_3 = C_1 C^{\gamma_1-1}$. Since $w(x) = \overline{m}(x)/x^3$ and $\gamma_1 - 2 < 0$ this can be rewritten as
\begin{equation}
\overline{m}(x)^{2 - \gamma_1} \leq \frac{1}{C_3 x^{3\gamma_1 - 4}},
\end{equation}
for $x\geq x_1$. However, because $4/3 < \gamma_1 < 2$, this leads to a contradiction: in the limit $x\to \infty$ the right-hand side converges to $0$, while the mass function $\overline{m}(x)$ is positive and increasing. This concludes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{Lem:1}]
Again, the proof is by contradiction. If $P_\infty \neq 0$, then the function $P$ would satisfy
$P(x) \geq P_{\infty} > 0$ for all $x > 0$, and since $e(P)$ is monotonically increasing, this would imply that $e(P(x)) \geq e(P_\infty) =: e_\infty > 0$ for all $x > 0$. According to Eq.~(\ref{Eq:WA}) this would yield $w(x) \geq e_{\infty} > 0$ for all $x > 0$, which in turn would imply that
\begin{equation}
1 - 2\lambda x^2 w(x) \leq 1 - 2\lambda x^2 e_\infty
\end{equation}
for all $x > 0$. Since the right-hand side becomes negative for $x^2 > 1/(2\lambda e_\infty)$, this would contradict the assumption $x_* = \infty$, which requires $1 - 2\lambda x^2 w(x) > 0$ for all $x > 0$. Therefore, we must have $P_\infty = 0$ as claimed.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{Lem:2}]
For the proof of this lemma, we use the inequality~(\ref{Eq:pInequality}) from section~\ref{sec:EquationState}, which implies
\begin{equation}
\label{Eq:n}
n \geq n_2\left[ \frac{p(n)}{p(n_2)}\right]^{1/\gamma_1}
\end{equation}
for all small enough $n_2\geq n > 0$. Using the assumptions $(i)$ and $(iv)$ from section~\ref{sec:EquationState} and the estimate~(\ref{Eq:n}) in the expression~(\ref{Eq:epsilonp}) for $\varepsilon(p)$ one obtains,
\begin{equation}
\varepsilon(p) \geq ne_0 \geq n_2 e_0 \left[ \frac{p(n)}{p(n_2)}\right]^{1/\gamma_1},
\end{equation}
for all small enough $0 < n\leq n_2$. Setting $\tilde{C} := n_2 e_0/p_2^{1/\gamma_1}$ with $p_2 := p(n_2)$ it follows from this that
\begin{equation}
\varepsilon(p) \geq \tilde{C} p^{1/\gamma_1}
\end{equation}
for all $0 < p\leq p_2$. Since $\varepsilon(p) = \varepsilon_c e(P)$ and $p = p_c P$, the lemma follows with $C = \tilde{C} p_c^{1/\gamma_1}/\varepsilon_c$ and $P_1 = p_2/p_c$.
\end{proof}
To conclude the global existence proof, it remains to eliminate case (a.2). In fact, we obtain a stronger result which shows that for all $0 < x < x_*$, one must have $1 - 2\lambda x^2 w(x) = 1 - 2m(r)/r > 1/9$:
\begin{Teo}
\label{Thm:Buchdahl}
Let $P: (0,x_*)\to \mathbb{R}$ be the maximally extended continuously differentiable solution of the dimensionless TOV Eq.~(\ref{Eq:TolmanABis}) such that $\lim\limits_{x\to 0} P(x) = 1$, $0 < P(x) < 1$ and $1 - 2\lambda x^2 w(x) > 0$ for all $0 < x < x_*$. Then, $2m(r)/r = 2\lambda x^2 w(x) < 8/9$ for all $0 < x < x_*$.
\end{Teo}
\begin{proof}
The proof is a straightforward generalization to arbitrary radius $r\in (0,R_*)$ of standard arguments used to establish the Buchdahl bound, see for instance section 6.2 in Ref.~\cite{Wald}. For this, we set $r = \ell x$, $m(r) := \ell\lambda x^3 w(x)$, $\Psi(r) := -\frac{1}{2}\log[1 - 2m(r)/r]$ and use the fact that the Einstein equations~(\ref{Eq:Einstein1},\ref{Eq:Einstein2},\ref{Eq:Einstein3}) are satisfied. Subtracting Eq.~(\ref{Eq:Einstein2}) from Eq.~(\ref{Eq:Einstein3}) yields
\begin{equation}
\left[\Phi'' + \Phi'(\Phi' - \Psi') - \frac{\Phi' + \Psi'}{r}\right]e^{-2\Psi} - \frac{1}{r^2}\left(e^{-2\Psi} - 1\right)
= 0.
\end{equation}
Dividing both sides by $r$ one can rewrite this as the following identity:
\begin{equation}
e^{-\Phi(r) - \Psi(r)}\left[\frac{\Phi'(r)}{r}e^{\Phi(r) - \Psi(r)}\right]' = \left[\frac{m(r)}{r^3}\right]'.
\label{Eq:Identity}
\end{equation}
Since $m(r)/r^3$ is proportional to the mean density, which is by itself proportional to $w(x)$, and since $x\frac{dw}{dx} = 3[ e(P(x)) - w(x) ]\leq 0$, the mean density is a non-increasing function. Therefore, it follows from Eq.~(\ref{Eq:Identity}) that
\begin{equation}
\label{Eq:Des1}
\left[\frac{\Phi '(r)}{r}e^{\Phi(r) - \Psi(r)}\right]' \leq 0.
\end{equation}
Next, let $0 < r < r_2 < R_* = \ell x_*$. Then, it follows that
\begin{equation}
\frac{\Phi'(r)}{r} e^{\Phi(r) - \Psi(r)} \geq \frac{\Phi'(r_2)}{r_2} e^{\Phi(r_2) - \Psi(r_2)}
= \frac{m(r_2) + 4\pi r_2^3 p(r_2)}{r_2^3\left[ 1 - \frac{2m(r_2)}{r_2} \right]}
e^{\Phi(r_2) - \Psi(r_2)},
\end{equation}
where we have used Eq.~(\ref{Eq:Phi}) to eliminate $\Phi'(r_2)$. Since $p(r_2)\geq 0$ and
\begin{equation}
1 - \frac{2m(r_2)}{r_2} = e^{-2\Psi(r_2)},
\label{Eq:a2}
\end{equation}
this inequality leads to
\begin{equation}
\Phi'(r) e^{\Phi(r)} \geq r e^{\Psi(r)}\frac{m(r_2)}{r_2^3} e^{\Phi(r_2) + \Psi(r_2)}.
\end{equation}
Integrating both sides from $r = 0$ to $r_2$ yields
\begin{equation}
e^{\Phi(r_2)} - e^{\Phi(0)} \geq e^{\Phi(r_2) + \Psi(r_2)}
\frac{m(r_2)}{r_2^3} \int_0^{r_2} \frac{r dr}{\sqrt{1 - \frac{2m(r)}{r}}}.
\end{equation}
To estimate the integral on the right-hand side, we use again the fact that $m(r)/r^3$ is a non-increasing function, such that $2m(r) \geq 2m(r_2) r^3/r_2^3$ for all $0\leq r\leq r_2$, and obtain
\begin{equation}
e^{\Phi(r_2)} - e^{\Phi(0)} \geq e^{\Phi(r_2) + \Psi(r_2)}
\frac{m(r_2)}{r_2^3}\int_0^{r_2} \frac{r dr}{\sqrt{1 - \frac{2m(r_2)}{r_2^3} r^2}}
= \frac{1}{2} e^{\Phi(r_2)} \left[ e^{\Psi(r_2)} - 1 \right],
\label{Eq:Des}
\end{equation}
where we have evaluated the elementary integral $\int_0^{r_2} r\, dr/\sqrt{1 - k r^2} = [1 - \sqrt{1 - k r_2^2}\,]/k$ with $k = 2m(r_2)/r_2^3$ and used Eq.~(\ref{Eq:a2}) again. Eq.~(\ref{Eq:Des}) implies that
\begin{equation}
0 < 2e^{\Phi(0)} \leq e^{\Phi(r_2)} \left[ 3 - e^{\Psi(r_2)} \right],
\end{equation}
which immediately yields the desired result:
\begin{equation}
1 - \frac{2m(r_2)}{r_2} = e^{-2\Psi(r_2)} > \frac{1}{9}.
\end{equation}
\end{proof}
\section{A numerical example}
\label{Sec:Numerical}
In the previous sections we have shown that for a given equation of state fulfilling the conditions $(i)$--$(iv)$ in section~\ref{sec:EquationState} with the lower adiabatic bound $\gamma_1 > 4/3$, there exists for each value of $p_c/\varepsilon_c > 0$ a unique solution of the TOV equation which describes a relativistic, spherical and static star of finite radius $R$ and mass $M$. In this section, we show by means of a numerical calculation how to obtain the quantitative properties of the star, including the values of $R$ and $M$, the compactness ratio $2M/R$ and the pressure profile. For the sake of illustration we focus on the specific case of a polytropic equation of state of the form
\begin{equation}
p(n) = K n^{\gamma}
\label{Eq:polytrope}
\end{equation}
with $K$ a positive constant and $\gamma$ the adiabatic index which, in the results shown below, is fixed to the value $5/3$. As explained in appendix~\ref{App:StatFis}, this value corresponds to the low temperature and density limit of a monoatomic ideal gas. Integrating the first law for an isentropic fluid yields the corresponding expression for the energy density
\begin{equation}
\varepsilon(p) = n e_0 + \frac{K}{\gamma-1} n^\gamma
= e_0\left( \frac{p}{K} \right)^{1/\gamma} + \frac{p}{\gamma-1}.
\label{Eq:polytrope_eps}
\end{equation}
Rewritten in terms of the dimensionless quantities defined in Eq.~(\ref{Eq:Dimensionless}) and using the fact that $e(1) = 1$, this yields
\begin{equation}
e(P) = \frac{(\gamma-1-\lambda) P^{1/\gamma} + \lambda P}{\gamma-1},\qquad
0 < \lambda = \frac{p_c}{\varepsilon_c} < \gamma-1.
\end{equation}
(Note that for the case of a monoatomic gas one should also have $p_c/\varepsilon_c \ll 1$ in the low temperature limit, so that the example studied in this section is most probably not physically realistic for values of $\lambda$ lying close to $\gamma-1$.)
To perform the numerical integration of the TOV equation, we convert the integral equation~(\ref{Eq:WA}) for the dimensionless mean density field $w$ into the differential equation
\begin{equation}
\frac{d}{dx} w(x) = -\frac{3}{x}\left[ w(x) - e(P(x)) \right],
\label{Eq:dw}
\end{equation}
which is numerically integrated along with the dimensionless TOV equation~(\ref{Eq:TolmanABis}) using a standard fourth-order accurate Runge-Kutta scheme (see, for instance, section 7.5 in Ref.~\cite{oSmT12} and references therein). The integration is started at the center $x = 0$, where the right-hand side of Eq.~(\ref{Eq:dw}) is replaced with $0$, owing to the fact that both functions $w(x)$ and $P(x)$ behave as $1 + {\cal O}(x^2)$ near $x = 0$. (This can be inferred from the local existence theorem in section~\ref{Sec:LocalExistence}, the fixed point formula~(\ref{Eq:IntTOV}) and the definition of $w$ in Eq.~(\ref{Eq:WA}).) The integration is stopped as soon as $P$ becomes negative, which yields the dimensionless radius $R/\ell = x_*$ and the dimensionless total mass $M/\ell =
\lambda x_*^3 w(x_*)$ of the star, up to a numerical error. (This error is monitored by varying the stepsize $\Delta x$ of the integrator.) Using Eqs.~(\ref{Eq:lDef}) and (\ref{Eq:polytrope_eps}) one finds that the length scale $\ell$ is given by
\begin{equation}
\ell = \ell_0 \lambda^{-\frac{2-\gamma}{2(\gamma-1)}}
\left( 1 - \frac{\lambda}{\gamma-1} \right)^{\frac{\gamma}{2(\gamma-1)}},\qquad
\ell_0 := \sqrt{\frac{3}{4\pi}}\left( \frac{K}{e_0^\gamma} \right)^{\frac{1}{2(\gamma-1)}},
\label{Eq:ell0}
\end{equation}
and hence we shall specify the results in terms of the alternative length scale $\ell_0$ which is independent of $\lambda$.
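For concreteness, the following Python sketch implements the scheme just described (step size, stopping criterion and output format are illustrative choices, not the exact code used to produce the results below):
\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index used in this section

def e_of_P(P, lam):
    # Dimensionless polytropic equation of state, normalized so that e(1) = 1.
    return ((GAMMA - 1.0 - lam) * P**(1.0 / GAMMA) + lam * P) / (GAMMA - 1.0)

def rhs(x, P, w, lam):
    # Right-hand sides of Eqs. (TolmanABis) and (dw); both vanish at the center.
    if x == 0.0:
        return 0.0, 0.0
    e = e_of_P(P, lam)
    dP = -(e + lam * P) * x * (w + 3.0 * lam * P) / (1.0 - 2.0 * lam * x**2 * w)
    dw = -3.0 / x * (w - e)
    return dP, dw

def integrate_tov(lam, dx=0.005):
    # Fourth-order Runge-Kutta from the center; stop when P crosses zero.
    x, P, w = 0.0, 1.0, 1.0
    while P > 0.0:
        k1 = rhs(x, P, w, lam)
        k2 = rhs(x + dx/2, P + dx/2 * k1[0], w + dx/2 * k1[1], lam)
        k3 = rhs(x + dx/2, P + dx/2 * k2[0], w + dx/2 * k2[1], lam)
        k4 = rhs(x + dx, P + dx * k3[0], w + dx * k3[1], lam)
        P += dx / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += dx / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += dx
    return x, lam * x**3 * w   # (R/l, M/l), up to the surface-location error

for lam in (0.001, 0.01, 0.05, 0.1):
    x_star, M_over_l = integrate_tov(lam)
    print(lam, x_star, M_over_l)
\end{verbatim}
Note that the surface is located only up to the last step, so $x_*$ carries an ${\cal O}(\Delta x)$ error in addition to the ${\cal O}(\Delta x^4)$ truncation error of the integrator; decreasing $\Delta x$ monitors both, as described above.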
The results of the numerical integration for different values of $\lambda$ in the admissible range $0 < \lambda < \gamma - 1$ are shown in Table~\ref{Tab:Polytrope} and in Figs.~\ref{Fig:MassCompactness}, \ref{Fig:MvsR} and~\ref{Fig:Profiles}. Note that for small values of $\lambda$ the mass increases while the radius of the star decreases as $\lambda$ grows, giving rise to more compact stars. However, as $\lambda$ continues to grow this trend is halted: $M/\ell_0$ reaches a maximum at about $\lambda\approx 0.12$, then decays until it reaches a local minimum around $\lambda\approx 0.5$, and grows again until reaching another local maximum. Similarly, the radius $R/\ell_0$ decreases until it reaches a local minimum at about $\lambda\approx 0.4$, after which it increases until reaching a local maximum. This behavior gives rise to the spiral structure shown in Fig.~\ref{Fig:MvsR}.
In the Newtonian limit $\lambda\to 0$, one may compare our results with the corresponding results from the Lane-Emden equation (see for instance section 3.3 in~\cite{Shapiro})
\begin{equation}
\frac{R}{\ell} = \frac{a}{\ell}\xi_1,\qquad
\frac{M}{\ell} = 3\frac{a^3}{\ell^3}\lambda\xi_1^2|\Theta'(\xi_1)|,\qquad
\frac{a^2}{\ell^2} = \frac{1}{3}\frac{\gamma}{\gamma-1}.
\end{equation}
For the present example $\gamma = 5/3$ one finds $\xi_1 \approx 3.65$, $\xi_1^2|\Theta'(\xi_1)| \approx 2.71$ and $a/\ell = \sqrt{5/6}$, which yields
\begin{equation}
\frac{R}{\ell} \approx 3.33,\qquad
\frac{M}{\ell} \approx 6.18\lambda,
\end{equation}
and compares well with the corresponding values in Table~\ref{Tab:Polytrope} for small $\lambda$.
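The Newtonian reference values $\xi_1$ and $\xi_1^2|\Theta'(\xi_1)|$ can themselves be reproduced with a few lines of code; a rough sketch (simple Euler stepping, which at this step size suffices for the three figures quoted above) is:
\begin{verbatim}
def lane_emden(n=1.5, dxi=1e-4):
    # Theta'' + (2/xi) Theta' + Theta^n = 0, Theta(0) = 1, Theta'(0) = 0.
    # Start slightly off-center using the series Theta = 1 - xi^2/6 + ...
    xi, th, dth = dxi, 1.0 - dxi**2 / 6.0, -dxi / 3.0
    while th > 0.0:
        d2th = -2.0 / xi * dth - th**n
        th += dxi * dth
        dth += dxi * d2th
        xi += dxi
    return xi, xi**2 * abs(dth)

print(lane_emden())   # approximately (3.654, 2.714) for n = 3/2
\end{verbatim}
in agreement with the values quoted above for the $\gamma = 5/3$ (i.e. $n = 1/(\gamma-1) = 3/2$) polytrope.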
Finally, we note again from the plots in Fig.~\ref{Fig:Profiles} that the relativistic stars with high $\lambda$ are much more compact than their Newtonian counterparts. We also note that although the compactness ratio $2M/R$ at the surface reaches a maximum at about $\lambda\approx 0.3$, the maximum of the local compactness ratio $2m(r)/r$ occurs inside (and not at the surface of) the star, and this maximum seems to be growing monotonically with $\lambda$. In all cases this maximum is less than $8/9$, as predicted by the local Buchdahl bound proven in Theorem~\ref{Thm:Buchdahl}. (Note that the Newtonian equations predict a compactness ratio of $2M/R\approx 3.71\lambda$ which can be larger than one.)
\begin{table*}[h]
\caption{Results for the dimensionless radius $R/\ell = x_*$, dimensionless total mass $M/\ell = \lambda x_*^3 w(x_*)$ and compactness ratio $2M/R = 2\lambda x_*^2 w(x_*)$ at the surface of the star for the polytropic equation of state~(\ref{Eq:polytrope}) and different values of $\lambda$. Also shown are the radii $R/\ell_0$ and masses $M/\ell_0$ in terms of the physical scale $\ell_0$ defined in Eq.~(\ref{Eq:ell0}) which is independent of $\lambda$. The stepsize used to produce these results is $\Delta x = 0.005$, and three significant figures are shown.
}
\begin{tabular}{c||c|c|c|c||c}
$\lambda$ & $R/\ell$ & $M/\ell$ & $R/\ell_0$ & $M/\ell_0$ & $2M/R$ \\
\hline
$0.001$ & $3.33$ & $0.00615$ & $18.7$ & $0.0345$ & $0.00370$ \\
$0.01$ & $3.29$ & $0.0582$ & $10.2$ & $0.181$ & $0.0353$ \\
$0.05$ & $3.16$ & $0.232$ & $6.06$ & $0.446$ & $0.147$ \\
$0.1$ & $3.08$ & $0.368$ & $4.47$ & $0.535$ & $0.239$ \\
$0.2$ & $3.16$ & $0.524$ & $3.03$ & $0.502$ & $0.331$ \\
$0.3$ & $3.68$ & $0.640$ & $2.36$ & $0.410$ & $0.348$ \\
$0.4$ & $5.24$ & $0.812$ & $2.10$ & $0.325$ & $0.310$ \\
$0.5$ & $11.3$ & $1.34$ & $2.37$ & $0.282$ & $0.238$ \\
$0.6$ & $42.8$ & $5.12$ & $2.74$ & $0.327$ & $0.239$ \\
$0.65$ & $234$ & $28.9$ & $2.59$ & $0.320$ & $0.246$ \\
\end{tabular}
\label{Tab:Polytrope}
\end{table*}
\begin{figure}[H]
\centerline{\includegraphics[width=9cm]{MvsLambda.pdf}
\includegraphics[width=9cm]{CompacticidadvsLambda.pdf}
}
\caption{Plots of the total mass $M/\ell_0$ (left panel) and the compactness ratio $2M/R$ at the surface of the star (right panel) as a function of $\lambda$.}
\label{Fig:MassCompactness}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[width=11cm]{MvsR.pdf}}
\caption{The total mass $M/\ell_0$ vs. radius $R/\ell_0$ for different values of $\lambda$.}
\label{Fig:MvsR}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[width=9cm]{Presion.pdf}
\includegraphics[width=9cm]{Compacticidad.pdf}
}
\caption{Plots of the dimensionless pressure $p/p_c = P$ (left panel) and the local compactness ratio $2m(r)/r = 2\lambda x^2 w(x)$ (right panel) as a function of the dimensionless radius $r/\ell_0 = x\ell/\ell_0$ for different values of $\lambda$. As is visible from these plots the stars become more compact as $\lambda$ increases, with the maximum of the local compactness ratio lying inside the star.}
\label{Fig:Profiles}
\end{figure}
\section{Summary and conclusions}
\label{Sec:Conclusions}
In this article, we have given a systematic derivation of the TOV equation, starting from the most general static and spherically symmetric ansatz for the metric and fluid fields which allows one to reduce the Euler-Einstein system to a set of ordinary differential equations. Under the assumptions on the equation of state discussed in section~\ref{sec:EquationState} and the additional assumption that the effective adiabatic index $\gamma(n)$ (defined in Eq.~(\ref{Eq:gamma(n)})) satisfies the bound $\gamma(n) \geq 4/3 + \epsilon$ (with $\epsilon > 0$) for small enough values of the particle density $n$, we have provided a rigorous proof for the existence and uniqueness of global solutions of the TOV equations describing a static, spherical star of finite radius and mass. Furthermore, we have shown that the familiar Buchdahl bound $2m(r)/r < 8/9$ holds for any radius $r > 0$ (smaller than or equal to the radius of the surface of the star).
In particular, the results presented in this article apply to any perfect fluid with positive baryonic rest mass and a polytropic equation of state $p(n) = K n^\gamma$ with adiabatic index $\gamma > 4/3$. This includes the equation of state describing an ideal nonrelativistic monoatomic gas, for which $\gamma = 5/3$. Interestingly, the ultrarelativistic counterpart, for which $\gamma = 4/3$, is not included in our analysis. However, as discussed in detail in appendix~\ref{App:StatFis}, an ideal, relativistic monoatomic gas has an equation of state whose effective adiabatic index $\gamma(n)$ interpolates between the two values $4/3$ and $5/3$ in the limits $n\to \infty$ and $n\to 0$, respectively. Since our assumption on $\gamma(n)$ is only needed for small values of $n$ (and not in the ultrarelativistic limit $n\to \infty$), our results fully cover the case of the ideal relativistic monoatomic gas. It is only near the surface of the star (where $n$ is small and thus the gas is practically Newtonian) that the assumption $\gamma(n) \geq 4/3 + \epsilon$ is required.
For a given equation of state fulfilling our assumptions, the quantitative properties of the star, such as its radius, mass and density profile, can be obtained from numerical calculations. We have provided an example in section~\ref{Sec:Numerical} for a polytropic equation of state with adiabatic index $\gamma = 5/3$, although the method described in that section can be adapted to more general equations of state in a straightforward way. The most important feature found from the numerical calculations is the spiral-type behavior (see Fig.~\ref{Fig:MvsR}) in the mass-versus-radius relation for the resulting family of static, spherical stars and the existence of a maximum mass configuration in this family, which is important because it indicates a change in the stability behavior of the star (see chapter 6 in Ref.~\cite{Shapiro}). Further numerical examples based on a dynamical system approach can be found in Ref.~\cite{Heinzle_2003}. For numerical time evolutions of (numerically perturbed) TOV stars, see for instance~\cite{fGfLmM12}.
Our proof for the global existence of stars with finite radius was mostly inspired by the work by Ramming and Rein~\cite{Ramming_2013} and the proof for the Buchdahl bound is a straightforward generalization of the arguments presented in section 6.2 in Ref.~\cite{Wald}. Although the results presented in this article are not new and have been widely studied in the literature, they are scattered in different articles and books. Therefore, we hope that our self-contained review regarding the most important results of the TOV equation and its solutions may serve as a useful pedagogical introduction to the topic and motivate research on more realistic star models including rotation and magnetic fields, for which rigorous mathematical results are still scarce.
\acknowledgments
It is a pleasure to thank Emilio Tejeda and Thomas Zannias for useful comments on a previous version of this article and an anonymous referee for pointing out to us the relevant references concerning realistic equations of state for neutron stars. ECN was supported by the CONACYT project ``Ayudante de investigador" No.~17840 and by a postgraduate CONACYT fellowship. OS was partially supported by a CIC grant to Universidad Michoacana de San Nicol\'as de Hidalgo.
\section{Introduction and Motivation}
For a given quantum state of a many-body system with density matrix $\rho$, measurements of observables $O_{A}$ supported inside a spatial subregion $A$ are determined by the reduced density matrix $\rho_{A}$, defined by
\begin{equation}\label{red}
{\rm Tr}(\rho O_{A})={\rm Tr}_{A}(\rho_{A}O_{A}).
\end{equation}
The relation above is defined to hold for all operators $O_{A}$ in $A$. It follows that $\rho_{A}={\rm Tr}_{B}(\rho)$, where the trace is taken over the complement $B=A^{c}$.
Since an observer in $A$ has no direct access to degrees of freedom in $B$, he/she suffers a loss of information that can be quantified by the entanglement entropy:
\begin{equation}\label{ent}
S_{A}=-{\rm Tr}_{A}(\rho_{A} \ln \rho_{A}).
\end{equation}
$S_{A}$ provides a measure of the entanglement between $A$ and $B$, since increasing the entanglement between $A$ and $B$ will increase the loss of information upon restriction to $A$.
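These definitions are easy to realize explicitly in a small system. As a minimal numerical illustration (the two-qubit Bell state is chosen purely for concreteness), the following Python snippet computes $\rho_{A}$ by a partial trace and then $S_{A}$ from its eigenvalues:
\begin{verbatim}
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) on a two-qubit system A x B
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())

# Partial trace over B: rho_A[i, j] = sum_k rho[(i, k), (j, k)]
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# S_A = -Tr(rho_A ln rho_A), computed from the eigenvalues of rho_A
p = np.linalg.eigvalsh(rho_A)
S_A = -sum(q * np.log(q) for q in p if q > 1e-12)
print(S_A, np.log(2.0))  # both equal ln 2: A and B are maximally entangled
\end{verbatim}
For a product state the same computation returns $S_{A} = 0$, illustrating that $S_{A}$ indeed measures the entanglement between $A$ and $B$.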
The study of entanglement entropy was originally motivated by attempts to interpret black hole entropy as information loss by an observer restricted to the outside of the event horizon \cite{bombelli1986quantum}. More recently, entanglement entropy has become an important tool in condensed matter physics, where it plays a role as a diagnostic of many body states. Indeed, the scaling of entanglement entropy characterizes the amenability of systems to numerical simulations such as the density matrix renormalization group (DMRG) algorithm in 1d, and the nature of the challenge in higher dimensions.
An important class of applications of entanglement entropy studies is topological states. Such states have no local observables which reveal their nature, and thus the entanglement entropy may be used in situations where no obvious way exists to identify the topological order. For example, in \cite{jiang2012identifying} the interplay of DMRG and entanglement entropy on a torus has been used to identify the nature of topological degeneracy.
While entanglement entropy provides an important measure of entanglement, the reduced density matrix \(\rho_{A}\) is a more fundamental object. In particular, the study of the entanglement spectrum, i.e. the eigenvalues of $\rho_{A}$, has picked up pace as it has been recognized as a tool for probing topological order in a more detailed way. For example, the relation between the entanglement spectrum of Quantum Hall wave functions and the edge theory associated with such states has been elaborated in \cite{li2008entanglement,lauchli2010disentangling,chandran2011bulk,qi2011general}. The entanglement spectrum is also directly related to the gluing function as well as to the gapless edge modes in topological insulators \cite{turner2010entanglement,fidkowski2011model}. These remarkable relations between a bulk property and edge physics highlight the wealth of information encoded in the entanglement spectrum.
As stated above, the entanglement spectrum of quantum systems may reveal a lot about their nature.
Even more detailed information is available if one knows the actual eigenstates of $\rho_A$. Since any $\rho_A$ is Hermitean and positive semidefinite, it may be expressed as:
\begin{eqnarray}
\rho_A=e^{-H_{A}}
\end{eqnarray}
for some Hermitean operator $H_{A}$. If $H_{A}$ is known, the detailed study of $\rho_A$ reduces immediately to the exploration of $H_{A}$. Unfortunately, in most cases $H_{A}$ does not offer a particular simplification or advantage as it is in general a highly nonlocal operator.
However, in particular special cases $H_{A}$ may become local and simple enough to be used for calculations. The prime example for such a situation has arisen as a result of studies of Hawking and Unruh radiation. According to the Bisognano-Wichmann theorem \cite{bisognano1975duality,bisognano1976duality}, the causal evolution of a quantum field theory where $A$ is taken to be a half space may be described by a modular operator which is generated by a Lorentz boost. The Minkowski ground state in a causal wedge is then shown to satisfy a Kubo-Martin-Schwinger condition with respect to the boost, establishing $H_{A}$ as the generator of Lorentz boosts.
Recently this result was extended by Casini, Huerta and Myers \cite{myers}. For a spherical region $A$ in a CFT, they find that the entanglement Hamiltonian may be written explicitly in a local form using the physical energy density \(T_{00}\):
\begin{equation}
\label{Ha}
H_{A}=\int_{A}\beta(x)T_{00}(x).
\end{equation}
In this paper, we use the locality property of entanglement Hamiltonians such as \eqref{Ha} to compute the entanglement entropy of excited states.
The starting point of our story is an elementary derivation of the above formula using the representation of the ground state reduced density matrix \( \langle\phi | \rho_{A}| \phi '\rangle \) as a Euclidean path integral with boundary conditions for the fields \(\phi\) and \(\phi '\) along the cut at $A$ \cite{hlw}.
Deferring the explicit derivation to section II, let us first discuss the basic idea.
Treating \(\rho_{A}\) as a propagator, we derive the expression (\ref{Ha}) by performing the path integral along a Euclidean ``time'' $s$ that evolves the upper edge of $A$ to the lower one.
The resulting path integral may be expressed as:
\begin{equation}
\label{Texp}
\rho_{A} =Z^{-1}_{A} T \exp \{- \int_{s_{i}}^{s_{f}} K(s) ds\} ,
\end{equation}
where $T$ denotes ``time'' ordering in $s$ and $K$ is the quantum operator generating $s$ evolution.
If the path integral of our theory is invariant under translations in $s$, then $K$ is a conserved charge independent of ``entanglement time'' $s$. Hence:
\begin{equation}
\rho_{A} = \exp\left( -(s_{f}-s_{i}) K \right).
\end{equation}
\begin{figure}[hb]
\begin{center}
\includegraphics[scale=0.5]{Fig11.pdf}
\caption{{\bf Evaluating \(\rho_{A}\) along Euclidean time \(s\)}}
\label{Fig1}
\end{center}
\end{figure}
A well-studied situation is the case where the theory is rotationally invariant, and $A = \{x^{1}>0\}$ is a half space. Taking $s$ to be the angular variable on the \((x^{1},x^{0}=t_{E})\) plane, we find the standard result that $K$ is the generator of angular rotations in this plane (or the boost generator in Minkowski signature) \cite{unruh1984}.
From a more general perspective, $K$ can be viewed as a Killing energy that can be written in terms of the energy momentum tensor. For any constant $s$ slice \(\Sigma\) we can write
\begin{align} \label{K}
K = \int_{\Sigma} T_{ab} k^{a} d \Sigma^{b}, \qquad
H_{A} = (s_{f} -s_{i} ) K ,
\end{align}
where $k^{a} =\frac{dx^{a}}{ds}$ is the Killing vector for the boost and $\{x^{a}\}$ is a set of flat space coordinates. Choosing to evaluate $K$ on $\Sigma =A$ we find $k^{a} \sim \delta^{a}_{0}$ and $d\Sigma^{a} \sim \delta^{a}_{0}$, which reproduces the relation (\ref{Ha}).
Given a spherically symmetric region $A'$ in a Euclidean CFT of any dimension, we will determine the entanglement Hamiltonian for $\rho_{A'}$ by making use of a conformal map $u$ taking $A$ to $A'$, which induces a mapping $\rho_{A} \rightarrow \rho_{A'}=U \rho_{A} U^{-1}$ \footnote{This is essentially a Euclidean version of the arguments in \cite{myers}.}. The vector field $k'^{a}=\frac{dx^a}{ds'}$ for the new entanglement time $s'$ is just the image of $k$ under $u$. Thus, the entanglement Hamiltonian for $A'$ is given by (\ref{Ha}) with
\begin{equation}
\label{temp}
\beta(x) = 2\pi k'^{0}(x),
\end{equation}
where $x\in A$ and the factor of $2\pi$ is simply $s_f-s_i$. We will interpret $\beta(x)$ as a local ``entanglement'' temperature, which is determined by the shape of $A$ and the background geometry of the CFT. In this interpretation, equation (\ref{Ha}) resembles a density matrix for the original, physical system in local thermal equilibrium with temperature $\beta(x)$. The entanglement entropy is the thermal entropy of this system. It must be emphasized, however, that the appearance of $\beta(x)$ does not correspond to a ``real'' temperature in the sense that
all inertial observers will find that local observables are at their vacuum values in accordance with \eqref{red}\footnote{However, non-inertial observers whose proper time coincides with $s$ will observe thermal radiation due to the local temperature \cite{unruh1984}.}. However, the point of view of a local ``entanglement temperature'' is appealing: indeed $\beta(x)$ must vanish at the boundary of the region, signaling a high effective temperature close to the boundary. This behavior may be understood as the statement that the degrees of freedom close to the boundary are the ones most entangled with the external region, and thus have a larger entropy.
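Two standard examples make this behavior explicit. For the half space and for a ball of radius $R$ in flat space one finds (see \cite{bisognano1975duality,myers})
\begin{equation*}
\beta(x) = 2\pi x^{1} \quad \hbox{for } A=\{x^{1}>0\},
\qquad\quad
\beta(x) = \pi\, \frac{R^{2}-|\vec{x}|^{2}}{R} \quad \hbox{for } A=\{|\vec{x}|<R\},
\end{equation*}
and in both cases $\beta$ vanishes linearly near the entangling surface, $\beta \approx 2\pi d$ with $d$ the distance to the boundary.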
Consistent with this interpretation, we have checked that for two dimensional CFT's in various backgrounds with central charge $c$, the ground state entanglement entropy can be obtained by integrating the equilibrium thermal entropy per unit length
\begin{equation}
\frac {dS_{thermal}}{dx} = \frac{ c\pi}{3\beta(x)}
\end{equation}
over the region $A$ using (\ref{temp}). Moreover, for excited states \(\beta(x)\) relates the increase in entanglement entropy to an increase in energy inside $A$ via a \emph{local} first law-like equation:
\begin{equation}\label{density}
d \delta S_{A} (x) = \beta(x) Tr_A(\delta \rho_{A} T_{00} ) dx.
\end{equation}
Here $\frac {d \delta S_{A}}{dx} (x)$ is the local entanglement entropy density\footnote{This is not to be confused with the ``entanglement density'', introduced in \cite{Nozaki:2013wia} and discussed later in this paper.} relative to the ground state and $\delta \rho_{A}$ is the variation in the reduced density matrix due to the increase in energy. To first order in $\delta \rho_{A}$, the total increase in entanglement entropy is obtained by integrating (\ref{density}) over $A$.
Under a \emph{general} variation of the ground state $\rho_{A} \rightarrow \rho_{A} +\delta \rho_{A} $ we find that the first order change in entanglement entropy is
\begin{equation}
\label{deltaS}
\delta S_{A} = Tr_A(\delta \rho_{A} H_{A}).
\end{equation}
For ground states with other conserved charges \(Q_{a}\) that preserve conformal invariance (e.g.~momentum in 1+1 D), the corresponding charge densities \(q_{a}\) and the associated chemical potentials \(\mu_a\) will appear in the form
\begin{equation}
H_{A} =\int_{A} \beta(x) (T_{00} - \mu_{a} q_{a}),
\end{equation}
leading to a generalized first law:
\begin{equation}
d \delta S_{A} (x) = \beta (x) \delta \langle T_{00}\rangle dx - \beta(x) \mu_{a} \delta \langle q_{a} \rangle dx.
\end{equation}
While preparing this manuscript, a paper \cite{Nozaki:2013vta} was posted where a set of constraint equations for $\delta S_{A}$ and an expression for ``entanglement density'' were derived using AdS/CFT. In section (\ref{Sec:holo}) we provide a CFT derivation of those results in two spacetime dimensions and generalize the constraint equations to arbitrary dimensions\footnote{The constraint equation was recently generalized to holographic CFT's in 3 space-time dimensions in \cite{Bhattacharya:2013fk} }. We will also comment on the relation between our results and the calculations in \cite{bianchi} and \cite{tak}.
\section{Path Integral Derivation of the Entanglement Hamiltonian}\label{Sec:Path}
Consider a Euclidean QFT on a manifold $M$ and some spatial region $A$.
The path integral expression for the reduced density matrix on $A$ is similar to the propagator of the theory except that the initial and final states live on the upper and lower edge of a branch cut defined along $A$. Thus, to switch to a canonical description, it is natural to choose a foliation of $M$ by constant $s$-slices \(\Sigma(s)\) such that the initial/final slices at ``times'' $(s_i,s_f)$ lie on the branch cut (see Fig. \ref{Fig1}). The manifold $M$ is then parametrized by coordinates $(s,y^{a})$ where $ y^{a}$ are coordinates on $\Sigma$.
The reduced density matrix on $A$ in the Schr\"odinger picture is
\begin{eqnarray}\label{rho}
\langle\phi_{0}(s_{f}) | \rho_{A}| \phi_{0}'(s_{i})\rangle =
\int D[\phi]e^{-{\cal S}[\phi]}\delta[\phi (s_{f})-\phi_{0}(s_{f})] \delta[\phi (s_{i})-\phi_{0}(s_{i})],
\end{eqnarray}
where ${\cal S}[\phi]$ is the action functional.
To find the entanglement Hamiltonian, we divide the ``time" interval \([s_{i},s_{f}]\) into small steps \([s_{n+1},s_{n}]\) of size \(\Delta s\) and consider a discretization of the path integral in (\ref{rho}). For notational simplicity we will write $\rho_{A}[s_{n+1},s_{n}] = \langle\phi(s_{n+1}) | \rho_{A}| \phi(s_{n})\rangle$, so that
\begin{align}\label{dis}
\langle\phi_{0}(s_{f}) | \rho_{A}| \phi_{0}'(s_{i})\rangle = \int d[\phi(s_{N-1})]... d[\phi(s_{2})] \rho_{A}[s_{f},s_{N-1}] ...\rho_{A}[s_{n+1},s_{n}]... \rho_{A}[s_{2},s_{i}].
\end{align}
Next we will regard the matrix element $\rho_{A}[s_{n+1},s_{n}]$ as a function of the final time $s_{n+1}$ and final field configuration $\phi(s_{n+1}, y)$.
We wish to show that this function satisfies a heat equation
\begin{equation}
\frac{\partial}{\partial s_{n+1}} \rho_{A}(s_{n+1}) = - K(s_{n+1}) \rho_{A}(s_{n+1})
\end{equation}
and identify the operator $K(s_{n+1})$. For a given field configuration in the path integral we need to evaluate $\frac{\partial}{\partial s_{n+1}} {\cal S}[ \phi(s_{n+1},y), s_{n+1}]$ at \emph{fixed} $\phi(s_{n+1}, y)$. One way of doing this is to keep the final time at $s_{n+1}$, but transform the background metric by a diffeomorphism that enlarges the proper size of the integration region.
Explicitly we want a coordinate transformation $s \rightarrow s'(s)$ such that
\begin{align}\label{der}
{\cal S}+ d {\cal S} = \int_{s_{n}}^{s_{n+1}+ ds }ds \int_{\Sigma(s)} d^{d-1}y \mathcal{L}[g_{ab}, \phi]= \int_{s_{n}}^{s_{n+1}}ds' \int_{\Sigma(s')} d^{d-1}y \mathcal{L}[g_{ab}+dg_{ab}, \phi],
\end{align}
where $g_{ab}(s, y)$ is the metric on $M $. Therefore,
\begin{align}
d{\cal S}= \int_{s_{n}}^{s_{n+1}}ds' \int_{\Sigma(s')} d^{d-1}y \frac{\delta \mathcal{L}}{\delta g_{ab}} dg_{ab}.
\end{align}
In a general coordinate system this transformation and the response of
the path integral $\rho_{A}(s_{n+1})$ are
\begin{eqnarray} & x^{a} \rightarrow {x^{a}}' = x^{a}+\epsilon^{a},\\ &
d \rho(s_{n+1}) = - \frac{1}{2} \int_{[s_{n},s_{n+1}]\times \Sigma} \langle T_{ab} \rangle \nabla^{(a} \epsilon^{b)} \sqrt{g} d^{d}x = \int_{\Sigma(s_{n+1})} \langle T_{ab} \rangle \epsilon^{b} d\Sigma^{a}.
\end{eqnarray}
Here $\langle \rangle$ refers to the path integral average on $[s_{n},s_{n+1}]$. In the last equality we assumed the \emph{quantum} conservation law $ \nabla ^{a}\langle T_{ab}\rangle =0$ and applied the divergence theorem; this means that $T_{ab}$ includes a possible anomalous contribution due to the transformation of the Jacobian in the path integral measure.
The coordinate transformation that will satisfy equation (\ref{der}) is
\begin{equation}
\epsilon^{a} = \frac{dx^{a}}{ds} f(s) ds,
\end{equation}
where the function $f(s)$ smoothly goes from 0 to 1 as $s$ goes from $s_{n}$ to $s_{n+1}$. This is so that we do not change the lower endpoint of the $s$ integration.
Defining
\begin{equation}
K(s_{n+1}) = \int_{\Sigma(s_{n+1})} \langle T_{ab} \rangle \frac{dx^{b}}{ds} d\Sigma^{a} ,
\end{equation}
we find
\begin{align}
\frac{\partial}{\partial s_{n+1}} \rho_{A}[s_{n+1},s_{n}] = \int D[\phi]e^{-S[\phi]}(-K(s_{n+1}) )=\langle \phi_{0}(s_{n+1}) |-K(s_{n+1})\rho_{A}| \phi_{0}'(s_{n})\rangle =-(\hat{K} \rho_{A}) (s_{n+1}) .
\end{align}
The solution to this heat equation, to first order in \(\Delta s\) and with the initial condition that \(\rho_{A}[s_{n},s_{n}]\) is the identity, is \(\rho_{A}[s_{n+1},s_{n}]= \langle\phi(s_{n+1})|1-\Delta s K|\phi(s_{n})\rangle\). Inserting this into equation (\ref{dis}) gives
\begin{align}
\langle\phi_{0}(s_{f}) | \rho_{A}| \phi_{0}'(s_{i}) \rangle =\int \prod_{n=1}^{N-1} D[\phi(s_{n})] \langle\phi(s_{n+1})|1-\Delta s K|\phi(s_{n}) \rangle\\
=\langle\phi_{0}(s_{f})|T \exp \left(- \int_{s_{i}}^{s_{f}} K(s) ds \right)| \phi_{0}'(s_{i})\rangle.
\label{K'}
\end{align}
This is the most general form of the entanglement Hamiltonian in a QFT. Since equation \eqref{K'} only depends on the geometric data provided by the vector field \(\frac{dx^a}{ds}\) which in turn is determined by the region $A$, it represents a \emph{universal} relation between the entanglement Hamiltonian and the quantum stress tensor.
To recover the local Entanglement Hamiltonian (\ref{Ha}), we consider regions $A$ for which $s \rightarrow s+ d s $ is a spacetime symmetry of the path integral (\ref{rho}) so that $K[s] $ is the corresponding conserved charge. Since $K$ is independent of $s$, we can evaluate it on any time slice (say at $s_{i}$) and the time ordered product in (\ref{K'}) reduces to
\begin{equation}\label{local}
\rho_{A} = \exp(-(s_{f}-s_{i}) K(s_{i})).
\end{equation}
Below we will show that $s \rightarrow s+ ds $ is indeed a spacetime symmetry of the path integral if $A$ is a half space in a rotationally invariant QFT or a spherical region in a CFT, and we will derive the corresponding local entanglement Hamiltonians. Here we would like to note that given a small deformation of the region $A$ away from translational or spherical symmetry, one could perform a systematic expansion of equation (\ref{K'}) using the deformed entanglement Hamiltonian \(K_{0} + \epsilon K_{1} \). To first order in \(\epsilon\) this would just add a perturbation to the local entanglement Hamiltonian which is localized near the boundary of $A$. A similar strategy can be applied to deformations of the theory away from rotational or conformal invariance. We leave this for future work.
\section{Examples of local Entanglement Hamiltonians}
\subsection{Entanglement Hamiltonians in 2D}
To illustrate how to compute $K$ and its entanglement entropy, we first review the case of a rotationally invariant QFT on $\mathbb{R}^2$ with the region $A$ being the half line \(A=\{x^{1}>0\}\) \cite{unruh}. Since $A$ is mapped to itself by a \(2\pi\) rotation, we choose $s$ to be the angular coordinate on the Euclidean plane so that \(\Sigma(s)\) are rays emanating from the origin as in Figure \ref{Fig2}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{Fig21.pdf}
\caption{{\bf Foliation of the Euclidean plane corresponding to angular quantization}}
\label{Fig2}
\end{center}
\end{figure}
Then
\begin{equation}
\frac{dx^a}{ds}\partial_a=x^{1}\partial_{0}-x^{0}\partial_{1}
\end{equation}
is a Killing vector field generating rotations of the plane. Since the path integral measure is assumed to be rotationally invariant, $K$ is just the angular momentum \cite{susskind}
\begin{equation}\label{Rindler}
K= \int_{\Sigma(s=0)} x^{1}T_{00}-x^{0}T_{01}=\int_{A} x^{1}T_{00}.
\end{equation}
The entanglement Hamiltonian is given by equation (\ref{Ha}) with the entanglement temperature
\begin{equation}
\beta= 2\pi x^1.
\end{equation}
Upon Wick rotating \(s \rightarrow i s\), the circular flow generated by $K$ becomes a family of hyperbolas representing the worldlines of uniformly accelerated observers, and \(\beta(x)\) is the proper temperature they experience. Thus in Minkowski signature $K$ is the boost generator. The form of the entanglement Hamiltonian implies that \(\rho_{A}\) represents an ensemble with the physical energy density \(T_{00}\) in local thermal equilibrium with local temperature \(\beta(x)\); its entanglement entropy is therefore just the thermal entropy, obtained by integrating the thermal entropy density \(\frac {dS_{thermal}}{dx}\) over $A$ \cite{susskind}.
In particular, for a CFT with central charge $c$, it is well known that \cite{hlw}
\begin{equation}
\frac {dS_{thermal}}{dx}=\frac{c \pi}{3\beta(x)}
\end{equation}
so the entanglement entropy is
\begin{equation}
S_{A}= \int_{\delta}^{L}dx \frac{c \pi}{6 x}=\frac{c}{6} \ln \frac{L}{\delta},
\end{equation}
where we have introduced a UV and IR cutoff on $A$ restricting the integration to \([\delta,L]\). The local temperature is higher near the boundary of $A$ and diverges at $x=0$ due to the zero of the vector field, which is also the singularity of the foliation defined by $s$. As a result, most of the contribution to the entanglement entropy arises from near the edge.
For a CFT on $\mathbb{R}^2$ we can easily generalize the previous results to an arbitrary interval $A'=[u,v]$. Let \(z=x^{1}+i x^{0}\) so that \(\frac{dz}{ds}=i z\) is the rotational vector field appropriate to the region $A$ discussed previously. The conformal map
\begin{equation} \label{cm}
z=-\frac{w-u}{w-v}
\end{equation}
induces a transformation $U$ on the reduced density matrices:
\begin{equation}\label{U}
\rho_{A} \rightarrow \rho_{A'}= U \rho_{A} U^{-1},
\end{equation}
by transforming the boundary conditions of the path integral. The path integral measure is conformally invariant because there is no anomaly in flat space. Meanwhile, the vector field \(\frac{dz}{ds'}\) is mapped to
\begin{equation}
\frac{dw}{ds'} = \frac{dw}{dz} \frac{dz}{ds'} =\frac{i(w-u)(w-v)}{u-v}.
\end{equation}
It is clear that the periodic flow defined by this vector field will evolve $A' \to A'$. Moreover, the transformation \(w \rightarrow w + \frac{dw}{ds'} d s'\) is a symmetry of the CFT on the $w$ plane, because it can be decomposed into a combination of a conformal transformation between $z$ and $w$, and an ordinary rotation on the $z$ plane.
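As a quick check, note that this vector field vanishes precisely at the endpoints of the interval,
\begin{equation*}
\frac{dw}{ds'}\bigg|_{w=u}=\frac{dw}{ds'}\bigg|_{w=v}=0,
\end{equation*}
so the entangling points are fixed by the flow and the entanglement temperature (\ref{temp}) vanishes at \(\partial A'\), as anticipated above.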
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{Fig3.pdf}
\caption{{\bf A rotation on the $z$ plane (represented as a Riemann sphere) is mapped to a conformal rotation on the $w$ plane}}
\label{default}
\end{center}
\end{figure}
Thus, the entanglement Hamiltonian for \(\rho_{A'}\) is
\begin{equation} \label{HaR2}
H_{A'} = \int_{A'} 2\pi \frac{(y^{1}-u)(y^{1}-v)}{u-v}T_{00} dy,
\end{equation}
where we defined \(w= y^{1}+iy^{0}\) and evaluated the integral along $A'$ for convenience.
As before, the entanglement entropy is obtained from integrating \(\frac {dS_{thermal}}{dx}\) using the entanglement temperature
\begin{equation}
\beta(y) = 2\pi \frac{(y^{1}-u)(y^{1}-v)}{u-v}.
\end{equation}
This gives
\begin{equation}
S_{A'} = \frac{c}{3} \ln \frac{v-u}{\delta},
\end{equation}
as expected\footnote{ Note that even though \(Tr_A(\rho_{A} \ln \rho_{A})\) is invariant under the similarity transformation (\ref{U}) of the reduced density matrix, we get a different result for the entanglement entropy of \(\rho_{A'}\) because we have to transform the regularized boundary of $A$. }.
For a CFT at finite temperature (\(w \sim w+i \beta'\)) or on a spatial circle (\(w \sim w+ L\)), we can similarly derive the entanglement Hamiltonian by finding the conformal map from the $z$-plane to \(\mathbb{R} \times S^{1}\) or \(S^{1} \times \mathbb{R}\). Given $A' = [-l,l] \times \{0\}$, the conformal map and entanglement temperature for a CFT at the (ordinary) temperature $\beta'$ are
\begin{align}
\label{ft}
z= \frac{-\exp\bigg(\frac{2\pi w}{\beta'}\bigg) + \exp\bigg(-\frac{2\pi l}{\beta'}\bigg) }{\exp\bigg(\frac{2\pi w}{\beta'}\bigg)- \exp\bigg(\frac{2\pi l}{\beta'}\bigg)}, \qquad
\beta = 2 \beta' {\rm csch}\left(\frac{ 2l \pi}{\beta'}\right) \sinh\left(\frac{\pi (l - y)}{\beta'}\right) \sinh\left(\frac{\pi (l+y)}{\beta'}\right).
\end{align}
The results for a CFT at finite size can be obtained from equation (\ref{ft}) by the substitution \( \beta' \rightarrow i L\).
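As a quick consistency check, in the zero temperature limit \(\beta' \rightarrow \infty\) one may expand \({\rm csch}\,x\approx 1/x\) and \(\sinh x\approx x\) in (\ref{ft}), giving
\begin{equation*}
\beta \approx 2 \beta'\cdot \frac{\beta'}{2\pi l}\cdot\frac{\pi (l-y)}{\beta'}\cdot\frac{\pi (l+y)}{\beta'} = \frac{2\pi (l^{2}-y^{2})}{2l},
\end{equation*}
which reproduces the entanglement temperature of the interval $[-l,l]$ on the plane.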
Below we summarize the results for the entanglement temperature and entanglement entropy obtained by integrating the thermal entropy density in various CFT backgrounds.
\begin{table}[htdp]
\caption{Entanglement Temperature for $A=[-l,l]$ in different CFT backgrounds}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
CFT Background & Entanglement Temperature \(\beta(y)\) & Entanglement Entropy \(S_{A}=\int_{A} \frac{\pi c}{3 \beta (y)}\)\\ \hline
Zero Temp. and infinite size: $M=\mathbb{R}^{2}$&\(\frac{2\pi (l^{2}-y^{2})}{2l}\)&\(\frac{c}{3}\ln \frac{l}{\delta}\) \\ \hline
Finite temperature $\beta': M= \mathbb{R} \times S^{1}$& $ 2 \beta' {\rm csch}\left(\frac{ 2l \pi}{\beta'}\right)\sinh\left(\frac{\pi (l -
y)}{\beta'}\right) \sinh\left(\frac{\pi (l+ y)}{\beta'}\right) $ & $\frac{c}{3} \ln\left( \frac{\beta'}{\pi \delta} \sinh(\frac{2\pi l}{\beta'}) \right)$\\ \hline
Finite Size $L: M=S^{1} \times \mathbb{R}$&\(2 L {\rm csc}\left(\frac{2l \pi}{L}\right) \sin\left(\frac{\pi (l - y)}{L}\right) \sin\left(\frac{\pi (l+y)}{L}\right)\)&\(\frac{c}{3} \ln\left( \frac{L}{\pi \delta} \sin(\frac{2\pi l}{L}) \right)\)\\
\hline
\end{tabular}
\end{center}
\label{default}
\end{table}%
The results for the entanglement entropies were derived previously using the replica trick \cite{calabrese2004entanglement, calabrese2009entanglement, hlw,wl}, and serve as a check on our results for the entanglement temperature and Hamiltonian.
\subsection{Entanglement Hamiltonians in higher dimensions}
Here we generalize the results of the previous section to spherical entangling surfaces in dimensions \(d>2\). As before, we first consider a rotationally invariant CFT on $\mathbb{R}^d$ with \(A=\{x^{1} > 0 \} \). We choose polar coordinates on the \( x^{1},x^{0}\) plane \( x^{1}= z \cos (\frac{s}{l})\), \(x^{0} =z \sin (\frac{s}{l})\), so the flat metric is
\begin{align}
d\tau^{2} = (\frac{z}{l})^{2}ds^{2} +dz^{2} + d\vec{x}^2 .
\end{align}
At this point $l$ is an arbitrary length parameter introduced to make $s$ dimensionful.
Then the result (\ref{Rindler}) for the entanglement Hamiltonian of \(\rho_{A}\) is still valid. Now we map $\mathbb{R}^{d} \rightarrow H^{d-1} \times S^{1}$ by multiplying the metric above by a conformal factor \( (\frac{l}{z})^2\):
\begin{equation}
d\tau^{2}_{H^{d-1}\times S^{1}}=ds^{2} + (\frac{l}{z})^2 (dz^{2} + d\vec{x}^2 ).
\end{equation}
The \(H^{d-1}\) factor refers to hyperbolic space, which is the image of the half space $A$. Thus we see that \(\rho_{A}\) is transformed into a thermal density matrix \(\rho_{H^{d-1}}\) on hyperbolic space. Since this conformal map does not change the original coordinates on $\mathbb{R}^{d}$, the vector field generated by the new entanglement Hamiltonian is just \(\frac{\partial}{\partial s} \).
Now consider a new reduced density matrix \(\rho_{A'}\) for a ball of radius $l$. We will obtain the entanglement Hamiltonian \(H_{A'}\) by mapping \(\rho_{H^{d-1}} \rightarrow \rho_{A'} \) as follows. First we choose coordinates \( ( u,\Omega_{d-2},s) \) on \(H^{d-1} \times S^{1}\) and spherical coordinates \((r, \Omega_{d-2},t)\) on $\mathbb{R}^{d}$ so that the metrics are
\begin{align}
d\tau^{2}_{H^{d-1} \times S^{1}} = ds^{2} + R^{2}(du^{2}+\sinh(u)^{2} d \Omega^{2}_{d-2}),\\
d\tau^{2}_{R^{d}} = dt^{2}+ dr^{2} +r^{2} d\Omega^{2}_{d-2}.
\end{align}
Then, defining complex coordinates \(\sigma=u+i \frac{s}{l}\) and \(w=r+it\) on the respective two dimensional slices, we consider the mapping introduced in \cite{hung}
\begin{equation}\label{hung}
e^{-\sigma}= \frac{l-w}{l+w}.
\end{equation}
This is an analogue of equation (\ref{cm}) mapping \(\rho_{A'}\rightarrow \rho_{H^{d-1}}\). The entanglement vector field and entanglement Hamiltonian is
\begin{align}
\frac{dw}{ds}=\frac{dw}{d\sigma}\frac{d\sigma}{ds}= i \frac{l^{2}-r^{2}}{2l}, \qquad
H_{A'}= 2 \pi \int_{A'} \frac{l^{2}-r^{2}}{2l} T_{00}.
\end{align}
This agrees with the result of \cite{myers}, where a Minkowski signature version of the conformal mapping (\ref{hung}) was used to derive the entanglement Hamiltonian.
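As a check, for $d=2$ the ball of radius $l$ is just the interval $A'=[-l,l]$, and setting $u=-l$, $v=l$ in the vector field obtained from the conformal map (\ref{cm}) gives
\begin{equation*}
\frac{dw}{ds'}=\frac{i(w+l)(w-l)}{-2l}=i\,\frac{l^{2}-w^{2}}{2l},
\end{equation*}
which coincides with the entanglement vector field above on the $t=0$ slice, where $w=r$.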
\section{CFT derivation of Entanglement Entropy for excited states}
Consider a state $|\psi \rangle$ in a QFT in $\mathbb{R}^{1,d-1}$ with a density matrix \(\rho^{0}=|\psi \rangle\langle \psi|\). As in \cite{bianchi} we make a small perturbation $\rho= \rho^{0}+\delta \rho$ and consider the entanglement entropy of a region $A$. Expanding to first order in \(\delta \rho_{A} \) we find
\begin{equation}
S_{A}=-Tr_{A}(\rho_{A} \ln \rho_{A})=-Tr_{A} (\rho^{0}_{A}\ln \rho^{0}_{A})-Tr_{A}(\delta\rho_{A} \ln \rho^{0}_{A})-Tr_{A}(\delta \rho_{A}),
\end{equation}
where $\delta\rho_A=Tr_B(\delta\rho)$.
The normalization \(Tr(\rho_{A})=Tr_A(\rho^{0}_{A})=1\) implies \(Tr(\delta \rho_{A})=0\), so the first order change in entanglement entropy due to the perturbation $\delta \rho $ is simply
\begin{equation} \label{EE}
\delta S_{A}=-Tr_{A}(\delta\rho_{A} \ln \rho^{0}_{A})=Tr_{A}(\delta \rho_{A} H_{A}).
\end{equation}
Note that there is also a term proportional to ${\rm Tr}(\delta \rho)$ which vanishes due to the normalization \(Tr(\rho)=1\). When the state $\rho^{0}$ is the ground state, we will refer to
\(\delta S_{A}\) as the \emph{renormalized} entanglement entropy\footnote{This is only a first order approximation to the renormalized entropy, but we will just call it renormalized entropy for short.} \cite{hlw}. It is just the increase in ``entanglement energy'' of the new state, measured according to the ground state entanglement Hamiltonian. However we emphasize that equation (\ref{EE}) applies to an \emph{arbitrary} deformation \(\delta \rho\) for \emph{any} initial state $\rho^{0}$.
When the region $A$ is a half space in a QFT or a spherical ball in a CFT, we can use the entanglement temperatures previously derived to obtain \(H_{A}\) for the ground state as in equation (\ref{Ha}). From equation (\ref{EE}) we have:
\begin{align} \label{EE1}
\delta S_{A}=Tr_{A}(\delta \rho_{A} \int_{A}\beta(x)T_{00}(x))=\int_{A}\beta(x)Tr(\delta \rho T_{00}(x)):=\int_{A}\beta(x) \delta \langle T_{00}(x) \rangle
\end{align}
In the second to last equality, we noted that the operator \(T_{00}(x)\) is only being evaluated inside $A$ so that \(\delta \rho_{A} \) can be replaced with \(\delta \rho \). Note that in (\ref{EE}) the operators $\delta \rho_{A}$ and $H_{A}$ are defined on a subregion $A$ with boundaries, which implies boundary conditions have to be imposed at $\partial A$ on their quantization. On the other hand, in (\ref{EE1}) the operator $T_{00}$ is interpreted as the energy density quantized with the boundary conditions appropriate to the \emph{whole} space; we have merely chosen to \emph{evaluate} it inside $A$. These two interpretations must agree by the definition of the reduced density matrix. As a check, in appendix B we will show that for a particular excitation of a free scalar field with non-uniform energy density, (\ref{EE1}) and (\ref{EE}) do indeed give the same result for \(\delta S_{A}\).
When \(\delta \langle T_{00} \rangle \) is spatially uniform\footnote{Since our entanglement Hamiltonian was derived for a CFT on $\mathbb{R}^d$, we will assume the energy density starts to die off somewhere outside $A$, in order for the energy to be finite.} inside $A$, we can remove it from the integration, so that
\begin{equation} \label{EE2}
\delta S_{A}= \beta_{0} \delta \langle T_{00} \rangle
Vol(A) := \beta_{0} \delta E_{A},
\end{equation}
where $\delta E_{A} = \delta \langle T_{00} \rangle {\rm Vol}(A)$ is the excitation energy inside region $A$, and \(\beta_{0} \) is the average entanglement temperature inside $A$
\begin{equation} \label{beta0}
\beta_0=\frac{\int_A\beta(x)}{{\rm Vol}(A)}.
\end{equation}
When the region $A$ has radius $l$, we find\footnote{ As already noted in \cite{tak}, this is also consistent with the computation of \(\delta S_{A} \) for primary states of a two dimensional CFT which was performed in \cite{Berganza:2011mh} via the replica trick.} \(\beta_{0} = \frac{2\pi}{d+1} l\) in agreement with the result of \cite{tak}. However, we note that the holographic results of \cite{tak} only strictly apply to nonabelian gauge theories with holographic duals, at large $N$ and assuming a small region $A$ (i.e. for small radius $l$), whereas our result is valid to order $O(\delta \rho)$ for any CFT and any radius $l$. We also note that there is a discrepancy between our results and theirs when \(\delta \langle T_{00}\rangle \) is spatially varying. Given a state with \( \delta \langle T_{00} \rangle = \sum_{n=0}^{\infty} a_{n} r^{n}\) in a $d>2$ dimensional CFT\footnote {We will explain the restriction to $d>2$ in section \ref{Sec:holo}.}, we find
\begin{equation}\label{Entropy of excited sphere}
\delta S_{A}= 2 \pi {\rm Vol}(S^{d-2}) \sum_{n=0}^{\infty} \frac{a_{n}l^{d+n}}{(d+n)^{2} -1}
\end{equation}
which disagrees with the holographic calculation of the same quantity in equation (20) of \cite{tak}. In section \ref{Sec:holo}, we will discuss the holographic version of eq. (\ref{EE}) and speculate on a possible source of the discrepancy. As noted earlier, we have checked in appendix B that our results (\ref{EE}) and (\ref{EE1}) are consistent for a \emph{non-uniform} excitation of a free scalar field, where $\delta S_{A}$ can be computed explicitly.
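For completeness, we record the elementary integral behind \eqref{Entropy of excited sphere}: with \(\beta(r)=2\pi\frac{l^{2}-r^{2}}{2l}\),
\begin{equation*}
\delta S_{A}={\rm Vol}(S^{d-2})\sum_{n=0}^{\infty}a_{n}\int_{0}^{l}\frac{2\pi(l^{2}-r^{2})}{2l}\,r^{\,n+d-2}\,dr
=2\pi\,{\rm Vol}(S^{d-2})\sum_{n=0}^{\infty}\frac{a_{n}l^{d+n}}{(d+n)^{2}-1},
\end{equation*}
where we used \(\int_{0}^{l}(l^{2}-r^{2})r^{\,n+d-2}dr=\frac{2\,l^{\,n+d+1}}{(n+d-1)(n+d+1)}\). The $n=0$ term, divided by \(a_{0}\,{\rm Vol}(A)\) with \({\rm Vol}(A)={\rm Vol}(S^{d-2})\,l^{d-1}/(d-1)\), reproduces the average entanglement temperature \(\beta_{0}=\frac{2\pi}{d+1}l\) quoted above.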
\section{A generalized first law for entanglement entropy}
Equation (\ref{EE1}) resembles a local first law of thermodynamics inside the region $A$:
\begin{equation}
\label{flaw}
d \delta S_{A}(x)=\beta(x) \delta \langle T_{00}(x) \rangle dx.
\end{equation}
When other conserved charges are present, a generalization of equation (\ref{flaw}) can be derived as follows.
Consider a state at finite temperature $T$ and with conserved charges \(Q_{a}\) that preserve conformal invariance and chemical potentials \(\mu_{a}\) weighted with the following density matrix
\begin{equation}
\rho =\frac{\exp\bigg(-\frac{(H- \mu_{a}Q_{a})}{T}\bigg)}{Z}.
\end{equation}
After tracing over the complement of $A$ we arrive at a path integral representation of \(\rho_{A}\) similar to the one given in equation (\ref{rho}), except that
adding the charges has effectively shifted our Hamiltonian from $H$ to \(H' =H- \mu_{a}Q_{a}\).
The corresponding shift in the energy density is \(T_{00}' = T_{00}-\mu_{a}q_{a} \), where we introduced the charge densities $q_{a}$ by $Q_a:= \int_{space} q_{a}d^{d-1}x$. Going through the same path integral derivation as in section \ref{Sec:Path}, we would reproduce equation (\ref{Ha}) with \(T_{00}\) replaced by \(T_{00}'\). Under a deformation \( \delta \rho \) that changes the charge densities and energy inside $A$, equation (\ref{flaw}) now becomes
\begin{equation} \label{gflaw}
d \delta S_{A}(x)=\beta(x) \delta \langle T'_{00}(x) \rangle dx =\beta(x) \{\delta \langle T_{00}(x) \rangle dx - \mu_{a} \delta \langle q_{a}(x) \rangle dx\}
\end{equation}
A simple way to check the above argument for the entanglement Hamiltonian leading to equation (\ref {gflaw}) is to consider a state \(\rho \sim \exp[- \beta'(H-\mu P)]\) for a two dimensional CFT with total central charge $c$. In this case the conserved Virasoro charges are the Hamiltonian \(H=L_{0} +\bar{L}_{0} -\frac{c}{12} \) and momentum \(P=L_{0} -\bar{L}_{0} \). The entanglement Hamiltonian for an interval $A=[0,l]$ is
\begin{equation} \label{Ha'}
H_{A}= \int_{0}^{l} \beta(x)(T_{00}-\mu T_{01})\, dx = \int_{0}^{l} \left[\beta(x)(1-\mu)T_{++} + \beta(x)(1+\mu)T_{--}\right] dx,
\end{equation}
where \(T_{\pm \pm} =\frac{1}{2}(T_{00} \pm T_{01})\) are the right and left moving components of the stress tensor, and \(\beta(x)\) is the entanglement temperature (\ref{ft}) for a CFT at finite temperature\footnote{Technically, to get a discrete spectrum for $P$ we should put the CFT on a spatial \(S^{1}\) of length $L$. Here we will assume \(\beta' \gg L\), so that we can ignore the periodicity along $L$ in computing the entanglement temperature.} \(\beta'\). The operator in equation (\ref{Ha'}) is the sum of two \emph{commuting} entanglement Hamiltonians corresponding to non-interacting ensembles at finite (ordinary) temperature \(\beta_{\pm} =\beta'(1\pm \mu)\) and with energy density \(T_{\pm \pm }\). Assuming that the left and right central charges are equal, each ensemble has an effective central charge of \(\frac{c}{2}\). Thus the entanglement entropy is:
\begin{align}\label{EE'}
S_{A}= \frac{c}{6} \ln\left( \frac{\beta_{+}}{\pi \delta} \sinh(\frac{\pi l}{\beta_{+}}) \right) + \frac{c}{6} \ln\left( \frac{\beta_{-}}{\pi \delta} \sinh(\frac{\pi l}{\beta_{-}}) \right).
\end{align}
This agrees with the result of \cite{hubeny2007covariant} obtained via the replica trick and holographic calculations.
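As a simple check, setting $\mu=0$ in (\ref{EE'}) gives \(\beta_{\pm}=\beta'\) and
\begin{equation*}
S_{A}=\frac{c}{3}\ln\left(\frac{\beta'}{\pi\delta}\sinh\Big(\frac{\pi l}{\beta'}\Big)\right),
\end{equation*}
the standard finite temperature entanglement entropy of an interval of length $l$.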
\section{Holographic derivation and discussion of related papers}\label{Sec:holo}
According to the holographic prescription of \cite{rt}, the entanglement entropy for a state \(|\psi \rangle\) in a region $A$ of a $d$-dimensional CFT with a holographic dual gravity theory is
\begin{equation} \label{RT}
S_{A}=\frac{Area(\gamma_{A})}{4 G},
\end{equation}
where \(\gamma_{A}\) is a minimal surface, anchored on \(\partial A\), in the bulk spacetime representing the gravity dual of the corresponding CFT, and $G$ is the bulk Newton's constant. The geometry dual to the ground state in the CFT corresponds to pure AdS
\begin{equation}
d\tau^{2} = (\frac{R}{z})^{2}(-dt^{2} + dz^{2} + dr^{2} + r^{2}d\Omega^{2}_{d-2}),
\end{equation}
and the minimal surface for the ball \(A=\{r\leq l\}\) is a half sphere extending into the bulk: \(\gamma_{A} = \{ r^{2}=l^{2} -z^{2}\} \).
For general excited states, it is difficult to find the exact bulk metric and compute the minimal surface. However, just as in the CFT computation of the previous section, a drastic simplification occurs if we consider only the first order deformation of the entanglement entropy, which is proportional to the variation of the area functional:
\begin{equation} \label{dA}
\delta Area(\gamma_{A})= \delta \int_{\gamma_{A}} \sqrt{g} =\int_{\gamma_{A}}\delta \sqrt{g}.
\end{equation}
In the last equality, we observed that the area variation due to the deformation of the surface \(\gamma_{A}\) vanishes by the definition of a minimal surface. Thus, the area variation is entirely due to the change in the metric, and there is no need to solve for the minimal surface in the new geometry. Comparing this equation to (\ref{EE2}), we see that \(\delta \rho_{A} \) corresponds to the deformation of the metric while \(H_{A}\) corresponds to the ground state minimal surface. The second fact is less obvious from the usual AdS/CFT correspondence, but it is consistent with ideas proposed in \cite{myers}. In reference \cite{myers} it was shown that for spherical regions $A$, there exists a foliation of AdS by hyperbolic slices \(\mathcal{H}=H^{d-1}\times \mathbb{R} \) such that one of the slices is a causal horizon \(\gamma_{A}'\) that is anchored on \(\partial A\). Since a horizon is also a minimal surface, we can identify \(\gamma_{A}'=\gamma_{A}\). The new foliation of AdS is dual to a CFT on the boundary slice \(\mathcal{H} \), which is in a thermal state that is conformally related to \(\rho_{A}\). It is thus tempting to identify the foliation of AdS and the associated horizon \( \gamma_{A} \) with the reduced density matrix \( \rho_{A} \) and therefore \(H_{A}\).
As in \cite{tak}\footnote{see also \cite{allahbakhsi2013e} for an extension of results in \cite{tak}} we consider an excited state with energy density \footnote{To facilitate comparisons with \cite{tak}, in this section we write $ \delta \langle T_{00} \rangle = \langle T_{00} \rangle $, with the understanding that the energy density in the latter expression is normal ordered so as to subtract the vacuum energy. Note that there is a typo in eq. (2) of \cite{tak} where $d$ was replaced with $d-1$.} \( \langle T_{00} \rangle = \frac{d R^{d-1} m }{16 \pi G} \). As established in ref. \cite{sk}, the holographic stress tensor associated with this energy density and the boundary metric determines the \emph{asymptotic} form of the bulk metric near the boundary at \(z\sim 0\) to be:
\begin{align}\label{FG}
d\tau^{2} = (\frac{R}{z})^{2}(- g^{-1}(z) dt^{2} + g(z)dz^{2} + dr^{2} + r^{2}d\Omega^{2}_{d-2}), \quad {\rm with} \quad
g(z)=1+mz^{d}+ ...
\end{align}
where the ellipsis denotes higher order terms in $z$. In this approximation, the first order variation of the entanglement entropy for spherical regions $A$ is
\begin{eqnarray}
\frac{\delta S'_{A}}{\delta m}\bigg|_{m=0} \delta m &=&\frac{\delta Area(\gamma_{A})}{4G} \bigg|_{m=0}= \frac{R^{d-1}\Omega_{d-2}}{4G} \int_{0}^{l} dz\, \frac{r(z)^{d-2}}{z^{d-1}}\, \delta \sqrt{g(z) + r'(z)^{2}} \nonumber \\
&=& \beta_{0} \delta E_{A},
\end{eqnarray}
where we evaluated the integral along the half sphere \( r^{2}=l^{2} -z^{2}\) corresponding to the ground state at $m=0$, $\beta_{0}=\frac{2\pi}{d+1} l$, and $\delta E_{A}$ is defined as in section IV. The notation $\delta S'_{A}$ is a reminder of the additional approximation due to the expansion \eqref{FG}, where sub-leading terms in $z$ were dropped. However, in this case, this approximation (truncation) leads to a result which agrees with the field theoretic one in eq. \eqref{beta0}.
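Explicitly, on the $m=0$ half sphere one has \(r'(z)=-z/r\), so that \(\sqrt{1+r'(z)^{2}}=l/r\) and
\begin{equation*}
\delta\sqrt{g(z)+r'(z)^{2}}=\frac{\delta g(z)}{2\sqrt{1+r'(z)^{2}}}=\frac{mz^{d}r}{2l},
\qquad
\int_{0}^{l}dz\,\frac{r^{d-2}}{z^{d-1}}\,\frac{mz^{d}r}{2l}=\frac{m}{2l}\int_{0}^{l}z\,r^{d-1}dz=\frac{m\,l^{d}}{2(d+1)},
\end{equation*}
consistent with the $c_{0}$ term of the more general result derived below.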
Next, we consider a non-uniform state with energy density $\langle T_{00}\rangle = \frac{d R^{d-1} m }{16 \pi G} \sum_{n\geq 0} c_n r^n $ in a $d>2$ dimensional CFT.
Note that this state is not allowed in $d=2$ spacetime dimensions, because the energy density has to satisfy a wave equation, as explained later in this section. The dual metric has the same form as in \eqref{FG} with
\begin{align}
g(z)=1+m z^d \sum_{n\geq 0} c_n r^n+ \dots,
\end{align}
and using \eqref{dA} we find:
\begin{eqnarray}
\delta S'_{A}= {m l^d R^{d-1}{\rm Vol}(S^{d-2})\over 8G}\sum_{n\geq 0} {c_n l^n \over 1+d+n}.
\end{eqnarray}
The above expression reproduces and generalizes the results in \cite{tak}, without recourse to an explicit evaluation of the minimal surfaces. This time, we note that the above $\delta S'_{A}$ differs from our result \eqref{Entropy of excited sphere} for the entropy of a sphere, although both are supposed to represent the entropy of a system with the same non-uniform energy density.
In \cite{tak}, use of equation (\ref{FG}) was justified by taking the small region limit, that is, the \( l\rightarrow 0 \) limit in which \(\gamma_{A} \) approaches the \(z=0\) boundary. However, neglect of higher order terms in $z$, while not affecting the energy density \(\langle T_{00} \rangle \), may affect the computed entropy. For example, adding a correction of the form $m z^{d+k}r^\mu$ will yield, using \eqref{dA}, a contribution proportional to $l^{d+k+\mu}$ to the holographic entropy. Neglect of such terms
may be the reason that our results agree with those of \cite{tak} only for the case of uniform energy density. In this way, our result provides an easy consistency check for the $z\rightarrow 0$ limit metric used in holographic calculations.
\subsection{Dynamical equations for entanglement entropy and entanglement density}
While this project was being completed, we noticed a recent paper \cite{Nozaki:2013vta} where a set of dynamical equations were derived for \(\delta S_{A}\) in the case of time dependent excited states by using the holographic formula (\ref{RT}). In $d=2$ spacetime dimensions they are:
\begin{align} \label{D'}
(\partial_{t}^{2} - \partial_{\xi}^{2}) \delta S_{A} (\xi, l, t) =0 \\
(\frac{\partial_{l}^{2}}{4} - \frac{\partial_{t}^{2}}{4} -\frac{1}{2l^{2}} ) \delta S_{A} (\xi,l,t)=0
\label{D''}
\end{align}
where \(A= [ \xi- l, \xi +l ]\). In the holographic setting these equations arose from solving Einstein's equations perturbatively to determine the evolution of the metric for the excited state. Here we will provide a simple field theoretic derivation of these equations. First note that in terms of the variable \(x'=x-\xi\), the renormalized entanglement entropy for a CFT on a plane is
\begin{equation}\label{REE}
\delta S_{A} = 2\pi\int_{-l}^{l} dx' \frac{l^{2}-x'^{2}}{2l^{2}} \langle T_{00} \rangle(x'+\xi,t),
\end{equation}
so the entanglement temperature is independent of \(\xi\). Thus,
\begin{equation}
(\partial_{t}^{2} - \partial_{\xi}^{2}) \delta S_{A} (\xi, l, t) =2\pi\int_{-l}^{l} dx' \frac{l^{2}-x'^{2}}{2l^{2}} (\partial_{t}^{2} - \partial_{\xi}^{2})\langle T_{00} \rangle(x'+\xi,t)=0,
\end{equation}
where in the last equality we used the fact that in $d=2$ the conservation of the energy momentum tensor combined with its tracelessness implies that \(T_{00}= T_{++} + T_{--}\) is a sum of left and right movers, and therefore satisfies the wave equation. The second equation (\ref{D''}) can be obtained straightforwardly by applying the differential operator to (\ref{REE}) and integrating by parts using \(\partial^{2}_{t}T_{00}=\partial_{\xi}^{2}T_{00}=\partial_{x'}^{2}T_{00}\).
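Explicitly, in conventions where the flat metric is mostly plus, tracelessness gives \(T_{11}=T_{00}\) while conservation \(\partial^{a}T_{ab}=0\) gives \(\partial_{t}T_{00}=\partial_{x}T_{01}\) and \(\partial_{t}T_{01}=\partial_{x}T_{11}\); combining these,
\begin{equation*}
\partial_{t}^{2}T_{00}=\partial_{x}\partial_{t}T_{01}=\partial_{x}^{2}T_{11}=\partial_{x}^{2}T_{00},
\end{equation*}
so \(T_{00}\) indeed satisfies the two dimensional wave equation.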
As in \cite{Nozaki:2013vta}, we can also generalize (\ref{D'}) and (\ref{D''}) to the case when we couple an operator \(O(x,t) \) to a source \(J(x,t)\) so that our physical Hamiltonian is deformed to \(H' = H - \int J O d^{d-1}x\). Provided that $O(x,t) $ preserves conformal symmetry, this deformation changes the ground state Hamiltonian by deforming the energy density \(T_{00}\rightarrow T_{00}'= T_{00}- J O\) in (\ref{Ha}). The equations (\ref{D'}) and (\ref{D''}) are now modified by source terms that arise from the differential operators hitting \(J(x,t)O(x,t)\).
Thus
\begin{align}\label{source}
(\partial_{t}^{2} - \partial_{\xi}^{2}) \delta S_{A} (\xi, l, t) = \int_{-l}^{l} \beta(x',l)(\partial_{t}^{2} - \partial_{\xi}^{2} ) (J(x'+\xi,t) \langle O(x'+\xi,t)\rangle_{J}), \\
(\frac{\partial_{l}^{2}}{4} - \frac{\partial_{t}^{2}}{4} -\frac{1}{2l^{2}} ) \delta S_{A} (\xi,l,t)= - \int_{-l}^{l}\beta(x',l) \frac{\partial_{t}^{2}}{4}(J(x'+\xi,t)\langle O(x'+\xi,t) \rangle_{J} ),
\label{source'}
\end{align}
with
\begin{equation}
\beta(x', l)=2\pi\frac{l^2-{x'}^2}{2l^2}.
\end{equation}
To facilitate a comparison with the result of \cite{Nozaki:2013vta}, we take the Fourier transform of \( \langle O(x'+\xi,t) \rangle_{J}\) and make explicit the dependence of \(J(x'+\xi,t) \) on \( \langle O(k_{1},w_{1})\rangle_{J}\):
\begin{align}
\langle O(x,t)\rangle_{J} = \int d \omega_{1} \int dk_{1} \langle O(k_{1},\omega_{1})\rangle_{J} e^{i(k_{1}\xi+\omega_{1}t)} e^{ik_{1}x'}, \\
J(x'+\xi,t) = \int d \omega_{2} \int dk_{2} f(k_{2},\omega_{2}) \langle O(k_{2},\omega_{2})\rangle_{J} e^{i(k_{2}\xi+\omega_{2}t)} e^{ik_{2}x'}.
\end{align}
Above we chose the source $J$ corresponding to the perturbation of the bulk scalar given in equation (3.17) of \cite{Nozaki:2013vta}. Inserting these in
(\ref{source}) and integrating over $x'$ gives equations of the form
\begin{align}
(\partial_{t}^{2} - \partial_{\xi}^{2}) \delta S_{A} (\xi, l, t) =\int d\omega_{1} \int d\omega_{2} \int dk_{1} \int dk_{2} F(k_{1},k_{2},\omega_{1},\omega_{2},l ) \langle O(k_{1},\omega_{1})\rangle_{J} \langle O(k_{2},\omega_{2})\rangle_{J} e^{i((k_{1}+k_{2})\xi+(\omega_{1}+\omega_{2})t)},
\end{align}
and similarly for (\ref{source'}). These equations have the same form as (3.22) and (3.23) of \cite{Nozaki:2013vta}, which were interpreted as the holographic dual to the perturbative Einstein's equations with the right hand side serving as the matter source.
In general dimensions, we can derive a constraint equation similar to (\ref{D''}) for a ball $A$ of radius $l$ centered on $\vec{\xi}$:
\begin{align} \label {cst}
(\partial_{l}^{2} - (d-2) \frac{\partial_{l}}{l} - \nabla^{2}_{\xi} -\frac{d}{l^{2}} ) \delta S_{A} (\vec{\xi},l,t)=0.
\end{align}
As in the case of 2 dimensions, this can be verified straightforwardly by applying the differential operator above to the higher dimensional analogue of the expression for $\delta S_{A}$ in \eqref{REE} and integrating by parts after noting that:
\begin{equation}\label{green}
\int_{A} \beta (r) \nabla^{2}_{\vec{\xi}}T_{00} (\vec{\xi}+\vec{r}) dr d\Omega=\int_{A} \beta (r) \nabla^{2}_{\vec{r}}T_{00} (\vec{\xi}+\vec{r}) dr d\Omega= -\int_{A} \nabla \beta (r) \cdot \nabla_{\vec{r}}T_{00} (\vec{\xi}+\vec{r}) dr d\Omega
\end{equation}
For $d=3$, \cite{Bhattacharya:2013fk} recently derived the same equation holographically. In \cite{Lashkari:2013uq}, a general argument was proposed explaining why \eqref{REE} leads to the perturbative Einstein's equations via the holographic entanglement entropy formula \eqref{RT}.
In addition, a quantity called entanglement density was introduced in \cite{Nozaki:2013vta}. In $d=2$, for an interval \(A=[u,v] \) of length \(l=v-u\) and midpoint \(\xi\), this is defined as
\begin{align}
n(\xi, l, t) = \frac{1}{2} \frac{\delta^{2} S_{A}}{\delta u \delta v}, \qquad
\Delta n(\xi, l, t) = \frac{1}{2}\frac{\delta^{2} \Delta S_{A}}{\delta u \delta v},
\end{align}
where in the second equality we present the shifted entanglement density in terms of the renormalized entanglement entropy \(\Delta S_{A} \). Writing \(\Delta S_{A}\) in terms of $u$ and $v$ as in equation \eqref{HaR2} and computing the derivatives gives
\begin{align}
l^{2} \Delta n(\xi, l, t) + \Delta S_{A} =0,\\
\lim_{l\rightarrow 0} \Delta n(\xi, l, t) = T_{00}(\xi) \lim_{ l \rightarrow 0} 2\pi\int_{-l}^{l} dx' \frac{l^{2}-x'^{2}}{2l^{2}} = \frac{\pi}{3} T_{00}(\xi),
\label{n}
\end{align}
which agrees with the holographic results of \cite{Nozaki:2013vta}.
Finally, we note some overlap with \cite{bianchi}.
The author of \cite{bianchi} considered a gravitational theory on Rindler space and derived the change in entanglement entropy across the Rindler horizon as in equation (\ref{EE}) due to a metric perturbation \( g_{ab} \rightarrow g_{ab} +h_{ab} \). There the entanglement Hamiltonian (\ref{K}) was evaluated along the event horizon $H$ and was shown to be equal to an operator \(\hat{A}_{H}\) that measures the area of the event horizon. The crucial ingredient in deriving this relation was the universal coupling \(\int h_{ab}T^{ab} \) of the graviton with the energy momentum tensor, which results in a perturbative Einstein's equation that relates \(T_{ab}\) to \(\Box h_{ab} \). Thus, the renormalized horizon entanglement entropy was found to be
\begin{equation}
\delta S_{H} = \frac{Tr (A_{H} \delta \rho_{H})}{4G} =\frac{\delta Area (H)}{4G}.
\end{equation}
Even though this equation was not derived from AdS/CFT, there is an obvious parallel here with equation (\ref{dA}), where the minimal surface \(\gamma_{A}\) is identified with the horizon $H$.
\section{Conclusion}
In this paper, we employed path integral methods to find a universal relation between the ground state entanglement Hamiltonian for an arbitrary region $A$ and the physical stress tensor. For spherical entangling surfaces in a CFT we find, as in \cite{myers}, that the entanglement Hamiltonian is the integral of a local density against a local entanglement temperature. We further generalize this result to include states with conserved charges preserving conformal invariance and derive new expressions for the entanglement Hamiltonians in various cylindrical backgrounds in 2 dimensions. Along the way, we show that the standard results for entanglement entropy in $d=2$ dimensions that are traditionally derived from the replica trick can be obtained easily by evaluating the \emph{thermal} entropy density using the entanglement temperature, and integrating over $A$. While completing this paper, we became aware that the same method was used in \cite{Swingle:2013hga} to obtain the leading area law behavior of entanglement entropy for a half space $A$ in a $d+1$-dimensional CFT and to derive the exact result for a finite interval $A$ in a $d=2$ CFT on the plane. It was also argued there that at high temperatures the entanglement entropy for theories with a mass gap $m$ can be estimated by cutting off the size of the integration region $A$ at \(x^{1}=\frac{1}{m}\), and indeed this gives the exact result for $d=2$. In this paper, we made the additional observation that the entanglement temperature relates the change in entanglement entropy to changes in conserved charges of the ground state via equation (\ref{gflaw}). However, we should note that the spatially varying entanglement temperature is not physical in the sense that it does not determine the expectation value of local observables such as \(T_{00}\) (indeed, \(\langle T_{00} \rangle \) is a constant). This is because the entanglement Hamiltonian (\ref{Ha}) is an integral over operators that do not commute, so the reduced density matrix does not factorize. Indeed the entanglement temperature is not even conformally invariant; however, equation (\ref{gflaw}) shows that in a fixed conformal frame, it gives a universal relation between the expectation value of physical charges inside a region $A$ and the renormalized entanglement entropy.
The relation (\ref{Ha}) between the entanglement Hamiltonian and the stress tensor, when combined with our CFT expression (\ref{EE}) for the renormalized entanglement entropy, provides a direct connection between the expectation value of the stress tensor and the increase in entanglement entropy, as was first noted in the holographic calculations of \cite{tak} and further generalized in \cite{Nozaki:2013vta}. We also want to point out that it was recently observed in \cite{bha} that for spherical and cylindrical regions $A$, the holographic prescription \cite{rt} for the ground state entanglement entropy coincides with setting the \emph{finite} part of the $00$ component of the holographic stress tensor to zero on a 4D slice of the bulk spacetime. The idea is that given a parametrization \(r=f(z)\) of the 4D slice, setting
\begin{equation}\label{hst}
\langle T^{h}_{00} \rangle = \langle K_{00}-h_{00}K \rangle =0,
\end{equation}
where \(h_{ab}\) is the induced metric and \(K_{ab}\) is the extrinsic curvature
gives a differential equation for \(f(z)\) that is identical to the minimal area equation. A heuristic field theory justification might go as follows. Demanding that a state in a CFT has the same entanglement entropy as the ground state corresponds to setting
\begin{equation}
\delta S_{A}= \int_{A} \beta_{A} \langle T^{CFT}_{00} \rangle=0.
\end{equation}
If, following \cite{myers}, we identify the causal development of the region $A$ with a curved 4D slice of AdS, then identifying \(\langle T_{00}^{CFT} \rangle \) with \(\langle T^{h}_{00} \rangle\) gives equation (\ref{hst}).
\section*{Acknowledgments}
We thank Phil Szepietowski for participation in the initial stages of this project. G.W. would like to thank Peter Arnold, Thomas Mark, and Yifei Shi for helpful discussions and the Michigan Center for Theoretical Physics for hospitality. L.PZ. would like to acknowledge the hospitality of University of Virginia through the Physics Theory Visitor program and University of Texas, Austin.
This work was supported by the DoE Grant \#DE-SC0007859 to the University of Michigan. G.W. was supported by the Presidential Fellowship from the University of Virginia. IK would like to acknowledge financial support from NSF CAREER award No. DMR-0956053. DV and GW were supported in part by DOE grant \#DE-SC0007984.
\section{Introduction}\label{intro}
The completeness and separability of function spaces is useful and important in functional analysis.
The completeness is especially effective in guaranteeing the existence of the limit of a sequence of functions in methods of
successive approximation,
and the separability enables us to obtain constructive proofs for many theorems that can be turned into
algorithms for use in numerical and constructive analysis.
\par
In this paper, the completeness and separability of various function spaces that appear in
nonadditive measure theory are discussed in considerable generality.
The spaces we consider are the function spaces such as the space of all measurable functions $\calL^0(\mu)$,
the Choquet-Lorentz space $\calL^{p,q}(\mu)$, the Lorentz space of weak type $\calL^{p,\infty}(\mu)$,
the space of all $\mu$-essentially bounded measurable functions $\calL^\infty(\mu)$,
and their quotient spaces determined by a proper equivalence relation.
In those function spaces, the spaces $\calL^0(\mu)$ and $\calLinfty(\mu)$ are defined
using only a nonadditive measure $\mu$, while $\calLpq(\mu)$ and $\calL^{p,\infty}(\mu)$ are defined using so-called nonlinear integrals
such as the Choquet integral and the Shilkret integral.
The difficulty in discussing the completeness and separability of those function spaces is due to the fact that
the measures and integrals involved are nonadditive, and as a result, the natural distance
on the function spaces does not satisfy the triangle inequality in general.
\par
Our strategy to overcome this difficulty is as follows.
For instance, to show the completeness of the Lorentz space $\calL^{p,q}(\mu)$,
we first establish the Cauchy criterion for convergence in $\mu$-measure
in the space $\calF_0(X)$ of all measurable real-valued functions defined on a measurable space $(X,\calA)$
with a nonadditive measure $\mu$.
In other words, we try to find a characteristic of the measure $\mu$ such that for any $\{f_n\}_\seqn\subset\calF_0(X)$,
it follows that $\{f_n\}_\seqn$ is Cauchy for convergence in $\mu$-measure if and only if $\{f_n\}_\seqn$ converges
in $\mu$-measure.
This criterion enables us to find the limit function $f$ of a given Cauchy sequence $\{f_n\}_\seqn$
in $\calL^{p,q}(\mu)$ with respect to convergence in $\mu$-measure.
Next we show and apply suitable integral convergence theorems of the Choquet integral
to verify that the limit $f$ belongs to $\calL^{p,q}(\mu)$
and the sequence $\{f_n\}_\seqn$ converges to $f$
with respect to the distance on $\calL^{p,q}(\mu)$.
\par
The paper is organized as follows.
Section~\ref{pre} sets up notation and terminology. It also contains a discussion of an equivalence
relation in the space $\calF_0(X)$.
The aim of Section~\ref{Cauchy} is to find a sufficient condition to be imposed on a nonadditive measure $\mu$
for the Cauchy criterion for convergence in $\mu$-measure to hold in the space $\calF_0(X)$.
One such condition can be found in~\cite{J-S-W-K-L-Y}, where it is shown that the Cauchy criterion holds in $\calF_0(X)$
if $\mu$ is continuous from below and satisfies the pseudometric generating property; see also~\cite{L-M-P-K}.
In this section a new characteristic of nonadditive measures, which is weaker than the continuity from below,
is introduced to show that the Cauchy criterion for convergence in $\mu$-measure holds in $\calF_0(X)$
if $\mu$ satisfies this property in addition to the pseudometric generating property.
The property introduced above is called property~(C), and in Section~\ref{necessity},
property~(C) is shown to be necessary for the Cauchy criterion to hold in the case where $X$ is countable.
The completeness of various function spaces is discussed in Sections~\ref{MFS}--\ref{EBF}
by newly presenting some integral convergence theorems of the Choquet and Shilkret integrals.
Dense subsets and the separability of the function spaces are also discussed as related topics.
Section~\ref{conclusion} provides a summary of our results and future tasks.
\section{Preliminaries}\label{pre}
Throughout the paper, $(X,\calA)$ is a measurable space, that is,
$X$ is a nonempty set and $\calA$ is a $\sigma$-field of subsets of $X$.
Let $\bbR$ denote the set of the real numbers and $\bbN$
the set of the natural numbers.
Let $\exR:=[-\infty,\infty]$ be the set of the extended real numbers with usual total order
and algebraic structure.
Assume that $(\pm\infty)\cdot 0=0\cdot (\pm\infty)=0$
since this proves to be convenient in measure and integration theory.
\par
For any $a,b\in\exR$, let $a\vee b:=\max\{a,b\}$ and $a\wedge b:=\min\{a,b\}$
and for any $f,g\colon X\to\exR$, let $(f\vee g)(x):=f(x)\vee g(x)$ and $(f\wedge g)(x):=f(x)\wedge g(x)$
for every $x\in X$.
Let $\calF_0(X)$ denote the set of all $\calA$-measurable real-valued functions on $X$.
Then $\calF_0(X)$ is a real linear space with usual pointwise addition and scalar multiplication.
For any $f,g\in\calF_0(X)$, the notation $f\leq g$ means that $f(x)\leq g(x)$ for every $x\in X$.
Let $\calF_0^+(X):=\{f\in\calF_0(X)\colon f\geq 0\}$.
A function taking only a finite number of real values is called a \emph{simple function}.
Let $\calS(X)$ denote the set of all $\calA$-measurable simple functions on $X$.
\par
For a sequence $\{a_n\}_\seqn\subset\exR$ and $a\in\exR$, the notation
$a_n\uparrow a$ means
that $\{a_n\}_\seqn$ is nondecreasing and $a_n\to a$, and $a_n\downarrow a$ means
that $\{a_n\}_\seqn$ is nonincreasing and $a_n\to a$.
For a sequence $\{A_n\}_\seqn\subset\calA$ and $A\in\calA$, the notation $A_n\uparrow A$ means
that $\{A_n\}_\seqn$ is nondecreasing and $A=\bigcup_{n=1}^\infty A_n$, and $A_n\downarrow A$
means that $\{A_n\}_\seqn$ is nonincreasing and $A=\bigcap_{n=1}^\infty A_n$.
The characteristic function of a set $A$, denoted by $\chi_A$, is the function on $X$
such that $\chi_A(x)=1$ if $x\in A$ and $\chi_A(x)=0$ otherwise.
Given two sets $A$ and $B$, let $A\triangle B:=(A\setminus B)\cup (B\setminus A)$
and $A^c:=X\setminus A$.
Let $2^X$ denote the collection of all subsets of $X$.
\subsection{Nonadditive measures}\label{measure}
A \emph{nonadditive measure}\/ is a set function $\mu\colon\calA\to [0,\infty]$ such that
$\mu(\emptyset)=0$ and $\mu(A)\leq\mu(B)$ whenever $A,B\in\calA$ and $A\subset B$.
This type of set function is also called a monotone measure~\cite{W-K}, a capacity~\cite{Choquet},
or a fuzzy measure~\cite{R-A,Sugeno} in the literature.
\par
Let $\calM(X)$ denote the set of all nonadditive measures $\mu\colon\calA\to [0,\infty]$.
We say that $\mu$ is \emph{order continuous}~\cite{Drewnowski}
if $\mu(A_n)\to 0$ whenever $A_n\downarrow\eset$,
\emph{conditionally order continuous}\/ if $\mu(A_n)\to 0$ whenever $A_n\downarrow\eset$
and $\mu(A_1)<\infty$,
\emph{strongly order continuous}~\cite{J-K-W} if $\mu(A_n)\to 0$
whenever $A_n\downarrow A$ and $\mu(A)=0$,
\emph{continuous from above}\/ if $\mu(A_n)\to\mu(A)$ whenever $A_n\downarrow A$,
\emph{continuous from below}\/ if $\mu(A_n)\to\mu(A)$ whenever $A_n\uparrow A$,
and \emph{continuous}\/ if it is continuous from above and from below.
If $\mu$ is continuous from above, then it is strongly order continuous, hence order continuous.
If $\mu$ is order continuous, then it is conditionally order continuous, but the converse
does not hold even for the Lebesgue measure on the real line.
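(For instance, for the Lebesgue measure $\lambda$ on the real line, the sets $A_n:=[n,\infty)$ satisfy $A_n\downarrow\eset$ and $\lambda(A_n)=\infty$ for every $\seqn$, so the order continuity fails, while the conditional order continuity holds since it only constrains sequences with $\lambda(A_1)<\infty$.)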
\par
Following the terminology used in~\cite{W-K},
$\mu$ is called \emph{weakly null-additive}\/ if $\mu(A\cup B)=0$
whenever $A,B\in\calA$ and $\mu(A)=\mu(B)=0$,
\emph{null-additive}\/ if $\mu(A\cup B)=\mu(A)$ whenever $A,B\in\calA$ and $\mu(B)=0$,
\emph{autocontinuous from above}\/ if $\mu(A\cup B_n)\to\mu(A)$
whenever $A, B_n\in\calA$ and $\mu(B_n)\to 0$, and
\emph{autocontinuous from below}\/ if $\mu(A\setminus B_n)\to\mu(A)$
whenever $A, B_n\in\calA$ and $\mu(B_n)\to 0$.
Furthermore, we say that $\mu$ satisfies \emph{property (S)}~\cite{Sun}
if any $\{A_n\}_\seqn\subset\calA$ with $\mu(A_n)\to 0$
has a subsequence $\{A_{n_k}\}_\seqk$
such that $\mu\left(\bigcap_{i=1}^\infty\bigcup_{k=i}^\infty A_{n_k}\right)=0$,
\emph{property ($\mbox{S}_1$)}~\cite{L-L}\/ if any $\{A_n\}_\seqn\subset\calA$
with $\mu(A_n)\to 0$ has a subsequence $\{A_{n_k}\}_\seqk$
such that $\mu\left(\bigcup_{k=i}^\infty A_{n_k}\right)\to 0$,
and the \emph{pseudometric generating property}\/ ((p.g.p.)~for short)~\cite{D-F}
if $\mu(A_n\cup B_n)\to 0$ whenever $A_n,B_n\in\calA$
and $\mu(A_n)\lor\mu(B_n)\to 0$.
It is easy to see that $\mu$ satisfies the (p.g.p.)~if and only if for any $\ep>0$ there is $\delta>0$
such that $\mu(A\cup B)<\ep$ whenever $A,B\in\calA$ and $\mu(A)\lor\mu(B)<\delta$.
\par
The following characteristics of nonadditive measures are also used.
We say that $\mu$ is \emph{monotone autocontinuous from above}~\cite{Rebille}\/ if
$\mu(A\cup B_n)\to\mu(A)$ whenever $A,B_n\in\calA$, $\{B_n\}_\seqn$ is nonincreasing,
and $\mu(B_n)\to 0$, \emph{monotone autocontinuous from below}~\cite{Rebille}\/ if
$\mu(A\setminus B_n)\to\mu(A)$ whenever $A,B_n\in\calA$,
$\{B_n\}_\seqn$ is nonincreasing,
and $\mu(B_n)\to 0$, and \emph{null-continuous}~\cite{U-M}
if $\mu(\bigcup_{n=1}^\infty N_n)=0$ whenever $\{N_n\}_\seqn\subset\calA$ is nondecreasing
and $\mu(N_n)=0$ for every $\seqn$.
\par
The autocontinuity from above implies the monotone autocontinuity from above,
while the autocontinuity from below implies the monotone autocontinuity from below.
If $\mu$ is monotone autocontinuous from above or from below,
then it is null-additive, hence weakly null-additive.
If $\mu$ satisfies the (p.g.p.), then it is weakly null-additive.
Property ($\mbox{S}_1$) always implies property (S); they are equivalent if $\mu$ is strongly order continuous.
If $\mu$ is continuous from below and satisfies the (p.g.p.), then it satisfies property ($\mbox{S}_1$);
see~\cite[Corollary~1]{J-W-Z-W-K}.
\par
If $\mu$ is autocontinuous from above and continuous from below,
then it is autocontinuous from below~\cite[Theorem~6.12]{W-K} and
satisfies property (S)~\cite[Proposition~6]{J-S-W-K} and the (p.g.p.)~\cite[Theorem~2]{J-W-Z-W-K}.
If $\mu$ is continuous from below, then it is null-continuous.
The null-continuity also follows from property (S)~\cite[Proposition~3.1]{U-M}
and from the conjunction of the weak null-additivity
and the strong order continuity~\cite[Proposition~9]{A-U-M}.
If $X$ is countable, then the null-continuity is equivalent to property (S)~\cite[Proposition~3.2]{U-M}.
\par
A nonadditive measure $\mu$ is called \emph{subadditive}\/ if
$\mu(A\cup B)\leq\mu(A)+\mu(B)$ for every disjoint $A,B\in\calA$,
\emph{relaxed subadditive}\/ if there is a constant $K\geq 1$ such that
$\mu(A\cup B)\leq K\left\{\mu(A)+\mu(B)\right\}$ for every disjoint $A,B\in\calA$ (in this case
$\mu$ is called \emph{$K$-relaxed subadditive}).
Every subadditive nonadditive measure is relaxed subadditive.
If $\mu$ is relaxed subadditive, then it satisfies the (p.g.p.).
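Indeed, if $\mu$ is $K$-relaxed subadditive, then for any $A,B\in\calA$ we have
$\mu(A\cup B)\leq K\bigl\{\mu(A)+\mu(B\setminus A)\bigr\}\leq 2K\bigl\{\mu(A)\lor\mu(B)\bigr\}$
by the monotonicity of $\mu$, so $\mu(A_n)\lor\mu(B_n)\to 0$ implies $\mu(A_n\cup B_n)\to 0$.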
\begin{remark}
The relaxed subadditivity is also called the quasi-subadditivity according to the terminology
used in metric space theory.
\end{remark}
It is easy to see that a nonadditive measure $\mu$ satisfies one of the characteristics
introduced above other than the subadditivity if and only if the measure $\mu^r$,
which is defined by $\mu^r(A):=\mu(A)^r$ for every $A\in\calA$ and $r>0$, satisfies
the same characteristic. For instance, $\mu$ satisfies the (p.g.p.)~if and only if $\mu^r$ satisfies the (p.g.p.).
This observation will be useful and important when discussing the completeness and separability
of the Choquet-Lorentz space in Section~\ref{CLspace}.
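Indeed, since $t\mapsto t^r$ is an increasing bijection of $[0,\infty]$ onto itself, for any $\{A_n\}_\seqn\subset\calA$
it follows that $\mu(A_n)\to 0$ if and only if $\mu^r(A_n)=\mu(A_n)^r\to 0$, and this observation immediately transfers
characteristics such as the (p.g.p.)~from $\mu$ to $\mu^r$ and back.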
\par
See~\cite{Denneberg,Pap,W-K} for further information on nonadditive measures.
\subsection{The Choquet and Shilkret integrals}\label{integrals}
The following nonlinear integrals are widely used in nonadditive measure theory and its applications.
Let $\mu\in\calM(X)$. The \emph{Choquet integral}~\cite{Choquet, Schmeidler} is defined by
\[
\Ch(\mu,f):=\int_0^\infty\mu(\{f>t\})dt,
\]
for every $f\in\calF_0^+(X)$, where the right-hand side is the Lebesgue integral.
The Choquet integral is equal to the abstract Lebesgue integral if $\mu$ is
$\sigma$-additive~\cite[Propositions~8.1 and 8.2]{Kawabe2016}.
The \emph{Shilkret integral}~\cite{Shilkret,Zhao} is defined by
\[
\Sh(\mu,f):=\sup_{t\in [0,\infty)}t\mu(\{f>t\})
\]
for every $f\in\calF_0^+(X)$.
In the above definitions the nonincreasing distribution function $\mu(\{f>t\})$ may be replaced with
$\mu(\{f\geq t\})$ without any change.
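Indeed, $\mu(\{f>t\})\leq\mu(\{f\geq t\})\leq\mu(\{f>s\})$ for every $s<t$, so that the two nonincreasing functions $t\mapsto\mu(\{f>t\})$ and $t\mapsto\mu(\{f\geq t\})$ coincide at each common continuity point and hence differ on an at most countable set; this affects neither the Lebesgue integral defining the Choquet integral nor, by a similar approximation argument, the supremum defining the Shilkret integral.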
\par
The following elementary properties of the Choquet and Shilkret integrals are
easy to prove; see also~\cite{Kawabe2016}.
Recall that these integrals are neither additive nor even subadditive in general.
\begin{itemize}
\item Monotonicity:\ For any $f,g\in\calF_0^+(X)$, if $f\leq g$,
then $\Ch(\mu,f)\leq\Ch(\mu,g)$ and $\Sh(\mu,f)\leq\Sh(\mu,g)$.
\item Generativity:\ For any $c\geq 0$ and $A\in\calA$,
it follows that $\Ch(\mu,c\chi_A)=\Sh(\mu,c\chi_A)=c\mu(A)$.
\item Positive homogeneousness:\ For any $c\geq 0$ and $f\in\calF_0^+(X)$,
it follows that $\Ch(\mu,cf)=c\,\Ch(\mu,f)$ and $\Sh(\mu,cf)=c\,\Sh(\mu,f)$.
\item Elementariness:\ If $h\in\calS(X)$ is represented by
\[
h=\sum_{k=1}^n(c_k-c_{k-1})\chi_{A_k}=\bigvee_{k=1}^n c_k\chi_{A_k},
\]
where $\seqn$, $c_0=0<c_1<c_2<\dots<c_n<\infty$, and
$A_1\supset A_2\supset\dots\supset A_n$, then it follows that
\[
\Ch(\mu,h)=\sum_{k=1}^n (c_k-c_{k-1})\mu(A_k)\;\mbox{ and }\;\Sh(\mu,h)=\bigvee_{k=1}^n c_k\mu(A_k).
\]
\item Relaxed subadditivity:\ If $\mu$ is $K$-relaxed subadditive for some $K\geq 1$,
then for any $f,g\in\calF_0^+(X)$ it follows that
\[
\Ch(\mu,f+g)\leq 2K\bigl\{\Ch(\mu,f)+\Ch(\mu,g)\bigr\}
\]
and
\[
\Sh(\mu,f+g)\leq 2K\bigl\{\Sh(\mu,f)+\Sh(\mu,g)\bigr\}.
\]
\item Upper marginal continuity:\ For any $f\in\calF_0^+(X)$, it follows that
\[
\Ch(\mu,f)=\sup_{r>0}\Ch(\mu,f\land r)\;\mbox{ and }\;\Sh(\mu,f)=\sup_{r>0}\Sh(\mu,f\land r).
\]
\item Transformation formula:\ Let $0<p<\infty$. For any $f\in\calF_0^+(X)$ it follows that
\[
\Ch(\mu,f^p)=\int_0^\infty pt^{p-1}\mu(\{f>t\})dt
\]
and
\[
\Sh(\mu,f^p)=\Sh(\mu^{1/p},f)^p.
\]
\end{itemize}
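To illustrate the elementariness, let, for instance, $X:=\{1,2,3\}$ and let $\mu$ be any nonadditive measure on $2^X$ with $\mu(\{3\})=0.2$, $\mu(\{2,3\})=0.5$, and $\mu(X)=1$.
For $h:=\chi_X\lor 2\chi_{\{2,3\}}\lor 3\chi_{\{3\}}$, that is, $h(1)=1$, $h(2)=2$, and $h(3)=3$, it follows that
\begin{align*}
\Ch(\mu,h)&=(1-0)\mu(X)+(2-1)\mu(\{2,3\})+(3-2)\mu(\{3\})=1.7,\\
\Sh(\mu,h)&=1\cdot\mu(X)\lor 2\cdot\mu(\{2,3\})\lor 3\cdot\mu(\{3\})=1,
\end{align*}
so that the two integrals may differ even for simple functions.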
\subsection{Various modes of convergence of measurable functions}\label{mode}
Let $\{f_n\}_\seqn\subset\calF_0(X)$ and $f\in\calF_0(X)$.
There are several ways to define the convergence of sequences of measurable functions.
We say that $\{f_n\}_\seqn$ converges \emph{$\mu$-almost everywhere}\/ to $f$,
denoted by $f_n\to f$ $\mu$-a.e.,
if there is $N\in\calA$ such that $\mu(N)=0$ and $f_n(x)\to f(x)$ for every $x\not\in N$.
We also say that $\{f_n\}_\seqn$ converges \emph{$\mu$-almost uniformly}\/ to $f$, denoted by
$f_n\to f$ $\mu$-a.u., if for any $\ep>0$ there is $E_\ep\in\calA$ such that $\mu(E_\ep)<\ep$
and $f_n$ converges to $f$ uniformly on $X\setminus E_\ep$.
Another concept of convergence is not quite intuitive, but it has
some advantages in analysis.
We say that $f_n$ converges \emph{in $\mu$-measure}\/ to $f$, denoted by $f_n\mconv f$,
if $\mu(\{|f_n-f|>\ep\})\to 0$ for every $\ep>0$.
The sequence $\{f_n\}_\seqn\subset\calF_0(X)$ is simply said to converge $\mu$-almost everywhere,
$\mu$-almost uniformly, and in $\mu$-measure, if there is a function $f\in\calF_0(X)$ such that
$f_n\to f$ $\mu$-a.e., $f_n\to f$ $\mu$-a.u., and $f_n\mconv f$, respectively.
Every sequence of measurable functions converging $\mu$-almost uniformly converges
$\mu$-almost everywhere and in $\mu$-measure to the same limit function.
\par
The three modes of convergence introduced above
require that the differences between the elements $f_n$
of the sequence and the limit function $f$ should become small in some sense as $n$ increases.
The following definition involves only the elements of the sequence.
We say that $\{f_n\}_\seqn$
is \emph{Cauchy in $\mu$-measure}\/ if for any $\ep>0$ and $\delta>0$
there is $n_0\in\bbN$ such that $\mu(\{|f_m-f_n|>\ep\})<\delta$
whenever $m,n\in\bbN$ and $m,n\geq n_0$.
\par
The relation between convergence in measure and almost everywhere convergence
is made precise in nonadditive versions of the Lebesgue and the Riesz theorem.
The former states that any sequence converging $\mu$-almost everywhere converges in $\mu$-measure
if and only if $\mu$ is strongly order continuous~\cite[Theorem~5.2]{L-M-P-K},
and the latter states that any sequence converging in $\mu$-measure
has a subsequence converging $\mu$-almost everywhere
if and only if $\mu$ satisfies property (S)~\cite[Theorem~5.17]{L-M-P-K}.
Furthermore, any sequence converging in $\mu$-measure has a subsequence converging $\mu$-almost
uniformly if and only if $\mu$ satisfies property ($\mbox{S}_1$)~\cite[Theorem~4]{L-L}.
As to Cauchy in $\mu$-measure sequences, it can be found in~\cite[Theorem~7.2]{L-M-P-K} that
any sequence converging in $\mu$-measure is Cauchy in $\mu$-measure if and only if
$\mu$ satisfies the (p.g.p.).
By~\cite[Theorem~7.3]{L-M-P-K}
any Cauchy in $\mu$-measure sequence always converges in $\mu$-measure
if $\mu$ is continuous from below and satisfies the (p.g.p.).
Thus the conjunction of the continuity of $\mu$ from below and the (p.g.p.)~is a sufficient condition
for the Cauchy criterion to hold for convergence in $\mu$-measure of measurable functions.
\par
See a survey paper~\cite{L-M-P-K} for further information on various modes
of convergence of measurable functions in nonadditive measure theory.
\subsection{Equivalence relation and quotient space}\label{equiv}
The quotient space of $\calF_0(X)$ is constructed by an equivalence relation
determined by a nonadditive measure $\mu$.
The proof of the following statements is routine and is left to the reader.
\begin{itemize}
\item Assume that $\mu$ is weakly null-additive. Given $f,g\in\calF_0(X)$,
define the binary relation $f\sim g$ on $\calF_0(X)$ by $\mu(\{|f-g|>c\})=0$ for every $c>0$;
it is then an equivalence relation on $\calF_0(X)$ (the transitivity is verified after this list).
For every $f\in\calF_0(X)$ the equivalence class of $f$
is the set $\{g\in\calF_0(X)\colon f\sim g\}$ and is denoted by $[f]$.
Then the quotient space of $\calF_0(X)$ is defined by $F_0(X):=\{[f]\colon f\in\calF_0(X)\}$.
\item Assume that $\mu$ is weakly null-additive.
Given equivalence classes $[f],[g]\in F_0(X)$ and $c\in\bbR$, define addition and scalar multiplication
on $F_0(X)$ by $[f]+[g]:=[f+g]$ and $c[f]:=[cf]$.
They are well-defined, that is, they are independent of which member of an equivalence class we choose
to define them.
Then $F_0(X)$ is a real linear space.
\end{itemize}
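For instance, the transitivity of $\sim$ follows from the weak null-additivity of $\mu$: if $f\sim g$ and $g\sim h$, then for every $c>0$
\[
\{|f-h|>c\}\subset\{|f-g|>c/2\}\cup\{|g-h|>c/2\},
\]
where the right-hand side is the union of two $\mu$-null sets, so that $\mu(\{|f-h|>c\})=0$, that is, $f\sim h$.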
\par
The binary relation on $\calF_0(X)$ defined above may not
be transitive unless $\mu$ is weakly null-additive; see~\cite[Example~5.1]{Kawabe2021}.
In what follows, let $S(X):=\{[h]\colon h\in\calS(X)\}$.
\subsection{Prenorms}\label{prenorm}
Let $V$ be a real linear space.
A \emph{prenorm} on $V$ is a nonnegative real-valued function $\norm{\cdot}$
defined on $V$ such that $\norm{0}=0$ and $\norm{-x}=\norm{x}$ for every $x\in V$.
Then the pair $(V,\norm{\cdot})$ is called a \emph{prenormed space}.
A prenorm $\norm{\cdot}$ is called \emph{homogeneous}\/ if it follows that
$\norm{cx}=|c|\norm{x}$ for every $x\in V$ and $c\in\bbR$ and
\emph{truncated subhomogeneous}\/ if it follows that
$\norm{cx}\leq\max(1,|c|)\norm{x}$ for every $x\in V$ and $c\in\bbR$.
A \emph{seminorm}\/ is a prenorm that is homogeneous and satisfies the triangle inequality, that is,
$\norm{x+y}\leq\norm{x}+\norm{y}$ for every $x,y\in V$.
Then a \emph{norm} is a seminorm that separates points of $V$, that is,
for any $x\in V$, if $\norm{x}=0$ then $x=0$.
Following~\cite{D-D}, a prenorm $\norm{\cdot}$ is called relaxed if it
satisfies a \emph{relaxed triangle inequality}, that is, there is a constant $K\geq 1$
such that $\norm{x+y}\leq K\left\{\norm{x}+\norm{y}\right\}$ for every $x,y\in V$
(in this case, $\norm{\cdot}$ is called a \emph{$K$-relaxed prenorm}).
A \emph{quasi-seminorm}\/ on $V$ is a prenorm that is homogeneous
and satisfies a relaxed triangle inequality. Then a \emph{quasi-norm}\/ is a quasi-seminorm
that separates points of $V$.
\par
To associate with similar characteristics of nonadditive measures,
a prenorm $\norm{\cdot}$ is called \emph{weakly null-additive}\/ if $\norm{x+y}=0$
whenever $x,y\in V$ and $\norm{x}=\norm{y}=0$ and
\emph{null-additive}\/ if $\norm{x+y}=\norm{x}$
whenever $x,y\in V$ and $\norm{y}=0$.
\par
Let $(V,\norm{\cdot})$ be a prenormed space.
Let $\{x_n\}_\seqn\subset V$ and $x\in V$.
We say that $\{x_n\}_\seqn$ \emph{converges}\/ to $x$, denoted by $x_n\to x$,
if $\norm{x_n-x}\to 0$.
We may simply say that $\{x_n\}_\seqn$ converges
if the limit $x$ need not be specified.
The notion of a Cauchy sequence involves only the elements of the sequence and
we say that $\{x_n\}_\seqn$ is \emph{Cauchy}\/ if for
any $\varepsilon>0$ there is $n_0\in\bbN$
such that $\norm{x_m-x_n}<\varepsilon$ whenever $m,n\in\bbN$ and $m,n\geq n_0$.
Note that a converging sequence need not be Cauchy, since a prenorm satisfies neither
the triangle inequality nor a relaxed version of it in general.
A subset $B$ of $V$ is called \emph{bounded}\/ if $\sup_{x\in B}\norm{x}<\infty$.
\par
A prenormed space $(V,\norm{\cdot})$ is called \emph{complete}\/
if every Cauchy sequence in $V$ converges to an element in $V$.
It is called \emph{quasi-complete}\/ if every bounded Cauchy sequence in $V$ converges to an element in $V$.
The denseness and the separability can be defined in the same way as in ordinary normed spaces.
We say that $V$ is \emph{separable}\/ if there is a countable subset $D$ of $V$ such that
$D$ is \emph{dense} in $V$, that is,
for any $x\in V$ and $\ep>0$ there is $y\in D$ such that $\norm{x-y}<\ep$.
\par
If the prenorm $\norm{\cdot}$ needs to be emphasized in the above terms,
then the phrase ``with respect to $\norm{\cdot}$'' is added to each term.
\section{The Cauchy criterion for convergence in measure}\label{Cauchy}
Given a $\sigma$-additive measure $\mu$, a sequence of measurable functions
converges in $\mu$-measure if and only if it is Cauchy
in $\mu$-measure~\cite[Theorems~C and~E in Chapter~IV, Section~22]{Halmos}.
This fact is referred to as the Cauchy criterion for convergence in measure
and was already extended to nonadditive measures that are continuous from below and satisfy the (p.g.p.).
Therefore, the discussion in this section starts with recalling some
already known results associated with the Cauchy criterion.
\begin{theorem}[{\cite[Theorems~7.2 and 7.3]{L-M-P-K} and \cite[Theorems~1 and~2]{J-S-W-K-L-Y}}]\label{base}
Let $\mu\in\calM(X)$.
\begin{enumerate}
\item[\us{(1)}] The following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ satisfies the (p.g.p.).
\item[\us{(ii)}] Any sequence $\{f_n\}_\seqn\subset\calF_0(X)$ converging in $\mu$-measure
is Cauchy in $\mu$-measure.
\end{enumerate}
\item[\us{(2)}] Assume that $\mu$ satisfies the (p.g.p.).
If a sequence $\{f_n\}_\seqn\subset\calF_0(X)$ is Cauchy in $\mu$-measure
and has a subsequence converging in $\mu$-measure to a function $f\in\calF_0(X)$,
then $f_n\mconv f$.
\item[\us{(3)}] Assume that $\mu$ is continuous from below and satisfies the (p.g.p.).
Any Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn\subset\calF_0(X)$ has a subsequence
converging $\mu$-almost uniformly.
\item[\us{(4)}]
Assume that $\mu$ is continuous from below and satisfies the (p.g.p.).
Then, for any sequence $\{f_n\}_\seqn\subset\calF_0(X)$
the following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
\item[\us{(ii)}] $\{f_n\}_\seqn$ converges in $\mu$-measure.
\end{enumerate}
\end{enumerate}
\end{theorem}
\par
The purpose of this section is to find a weaker characteristic of a nonadditive measure $\mu$,
under which every Cauchy in $\mu$-measure sequence of measurable functions
converges in $\mu$-measure.
To this end we first introduce a new characteristic of nonadditive measures.
\begin{definition}\label{C-pro}
Let $\mu\in\calM(X)$. We say that $\mu$ satisfies \emph{property~(C)} if
for any sequence $\{E_n\}_\seqn\subset\calA$,
it follows that $\mu\left(\bigcup_{n=k}^\infty E_n\right)\to 0$
whenever $\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0$.
\end{definition}
It is easy to see that every nonadditive measure that is continuous from below satisfies property~(C).
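Indeed, if $\mu$ is continuous from below, then for each $\seqk$ the sets $\bigcup_{n=k}^{k+l}E_n$ increase to $\bigcup_{n=k}^\infty E_n$ as $l\to\infty$, so that
\[
\mu\left(\bigcup_{n=k}^\infty E_n\right)=\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0.
\]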
Other examples of nonadditive measures satisfying property~(C) are as follows.
\begin{proposition}\label{ex-C-pro}
Let $\mu\in\calM(X)$.
\begin{enumerate}
\item[\us{(1)}] Assume that for any $\ep>0$ there is $\delta>0$ such that condition
\[
\sup_\seqn\mu(E_n)<\delta\,\Longrightarrow\,
\mu\left(\bigcup_{n=1}^\infty E_n\right)<\ep \tag{$*$}
\]
holds for every nondecreasing sequence $\{E_n\}_\seqn\subset\calA$.
Then $\mu$ satisfies property (C).
In particular, the above assumption holds if there is a constant $M\geq 1$ such that
\[
\mu\left(\bigcup_{n=1}^\infty E_n\right)\leq M\sup_\seqn\mu(E_n)
\]
for every nondecreasing sequence $\{E_n\}_\seqn\subset\calA$.
\item[\us{(2)}] Assume that for any $\ep>0$ there is $\delta>0$ such that condition ($*$)
holds for every sequence $\{E_n\}_\seqn\subset\calA$.
Then $\mu$ satisfies the (p.g.p.) and property~(C).
\item[\us{(3)}] Let $\lambda\in\calM(X)$.
Let $\theta\colon [0,\infty]\to [0,\infty]$ be a nondecreasing function with $\theta(0)=0$
that is continuous and increasing on a neighborhood of $0$.
Let $\theta\circ\lambda\colon\calA\to [0,\infty]$ be the nonadditive measure
defined by $\theta\circ\lambda(A):=\theta(\lambda(A))$ for every $A\in\calA$.
If $\lambda$ is continuous from below and satisfies the (p.g.p.),
then $\theta\circ\lambda$ satisfies property~(C) and the (p.g.p.).
\item[\us{(4)}] If $\mu$ satisfies property~(C), then it is null-continuous.
\end{enumerate}
\end{proposition}
\begin{proof}
(1)\ Let $\{E_n\}_\seqn\subset\calA$ be a sequence satisfying
\begin{equation}\label{eqn:ex-C-pro1}
\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0.
\end{equation}
Let $\ep>0$ and take $\delta>0$ such that condition ($*$) holds for
every nondecreasing sequence $\{D_l\}_\seql\subset\calA$.
For this $\delta$, by~\eqref{eqn:ex-C-pro1} there is $k_0\in\bbN$ such that
$\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)<\delta$ for every $k\geq k_0$.
Fix $k\geq k_0$ and let $D_l:=\bigcup_{n=k}^{k+l}E_n$ for every $\seql$.
Then $\{D_l\}_\seql$ is nondecreasing and $\sup_\seql\mu(D_l)<\delta$.
Hence it follows from condition ($*$) that
\[
\mu\left(\bigcup_{n=k}^\infty E_n\right)=\mu\left(\bigcup_{l=1}^\infty D_l\right)<\ep,
\]
which implies $\mu\left(\bigcup_{n=k}^\infty E_n\right)\to 0$.
Thus $\mu$ satisfies property~(C). The rest is easy to prove.
\par
(2)\ It suffices to show that $\mu$ satisfies the (p.g.p.).
Let $\ep>0$ and take $\delta>0$ such that condition ($*$) holds
for every sequence $\{E_n\}_\seqn\subset\calA$.
Let $A,B\in\calA$ and assume that $\mu(A)\lor\mu(B)<\delta$.
Let $E_1:=A$, $E_2:=B$, and $E_n:=\eset$ for every $n\geq 3$.
Then $\sup_\seqn\mu(E_n)=\mu(A)\lor\mu(B)<\delta$,
hence $\mu(A\cup B)=\mu\left(\bigcup_{n=1}^\infty E_n\right)<\ep$
by condition ($*$).
Thus $\mu$ satisfies the (p.g.p.).
\par
(3)\ Let $\theta$ be continuous and increasing on $[0,\delta]$ for some $\delta>0$.
By~\cite[Theorem~2.5.3]{Friedman}, the function $\theta$ has a continuous and increasing
inverse $\theta^{-1}\colon [0,\theta(\delta)]\to [0,\delta]$.
Let $\{E_n\}_\seqn\subset\calA$ be a sequence
satisfying~\eqref{eqn:ex-C-pro1} for $\mu=\theta\circ\lambda$.
Let $\ep>0$. Then there is $k_0\in\bbN$ such that for any $k\geq k_0$ it follows that
\[
\sup_\seql\theta\left(\lambda\left(\bigcup_{n=k}^{k+l}E_n\right)\right)<\ep\land\theta(\delta).
\]
Hence the continuity of $\lambda$ from below yields
\[
\lambda\left(\bigcup_{n=k}^\infty E_n\right)
=\lambda\left(\bigcup_{l=1}^\infty\bigcup_{n=k}^{k+l}E_n\right)
=\sup_\seql\lambda\left(\bigcup_{n=k}^{k+l}E_n\right)\leq\theta^{-1}(\ep\land\theta(\delta)).
\]
Thus
\[
\theta\circ\lambda\left(\bigcup_{n=k}^\infty E_n\right)\leq\ep\land\theta(\delta)\leq\ep,
\]
which implies $\theta\circ\lambda\left(\bigcup_{n=k}^\infty E_n\right)\to 0$.
Hence $\theta\circ\lambda$ satisfies property~(C).
\par
Let $A_n,B_n\in\calA$ for every $\seqn$ and assume that
$\theta\circ\lambda(A_n)\lor\theta\circ\lambda(B_n)\to 0$.
Then $\lambda(A_n)\lor\lambda(B_n)\to 0$, so that $\lambda(A_n\cup B_n)\to 0$
since $\lambda$ satisfies the (p.g.p.), and finally that $\theta\circ\lambda(A_n\cup B_n)\to 0$.
Therefore, $\theta\circ\lambda$ satisfies the (p.g.p.).
\par
(4)\ Take a nondecreasing sequence $\{N_n\}_\seqn\subset\calA$ and assume that $\mu(N_n)=0$ for
every $\seqn$. Then
\[
\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}N_n\right)=\sup_\seql\mu(N_{k+l})=0
\]
for every $\seqk$, hence $\mu\left(\bigcup_{n=k}^\infty N_n\right)\to 0$ by property (C).
Thus, for any $\ep>0$, there is $k_0\in\bbN$ such that $\mu\left(\bigcup_{n=k_0}^\infty N_n\right)<\ep$,
hence $\mu\left(\bigcup_{n=1}^\infty N_n\right)=\mu\left(\bigcup_{n=k_0}^\infty N_n\right)<\ep$ since $\{N_n\}_\seqn$ is nondecreasing.
Letting $\ep\downarrow 0$ yields $\mu\left(\bigcup_{n=1}^\infty N_n\right)=0$.
Therefore, $\mu$ is null-continuous.
\end{proof}
Our first issue is to extend assertion (3) of Theorem~\ref{base} to nonadditive measures
satisfying property~(C) and the (p.g.p.).
Example~\ref{counter2} below gives nonadditive measures that are not continuous from below,
but satisfy property~(C) and the (p.g.p.), so that the following theorem
is a sharpened version of (3) of Theorem~\ref{base}.
Since the idea of the proof is the same as that of Theorem~1 of~\cite{J-S-W-K-L-Y},
only a sketch of the proof is given here for readers' convenience.
\begin{theorem}\label{comp}
Let $\mu\in\calM(X)$.
If $\mu$ satisfies property~(C) and the (p.g.p.),
then any Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn\subset\calF_0(X)$
has a subsequence converging $\mu$-almost uniformly.
\end{theorem}
\begin{proof}
Since $\mu$ satisfies the (p.g.p.),
there is a decreasing sequence $\{\delta_i\}_\seqi$ such that
\begin{itemize}
\item $\delta_0=1/2$ and $0<\delta_i<\delta_{i-1}\land\frac{1}{2^i}$ for every $\seqi$,
\item $\mu(A\cup B)<\delta_{i-1}$ whenever $\seqi$, $A,B\in\calA$,
and $\mu(A)\lor\mu(B)<\delta_i$.
\end{itemize}
\par
Take a Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn\subset\calF_0(X)$.
Then there is an increasing sequence $\{n_i\}_\seqi\subset\bbN$ such that
for any $\seqi$ and $\seqn$, if $n\geq n_i$ then
\begin{equation}\label{eqn:comp1}
\mu(\{|f_n-f_{n_i}|\geq 1/2^i\})<\delta_i.
\end{equation}
For each $i\in\bbN$, let $E_i:=\{|f_{n_{i+1}}-f_{n_i}|\geq 1/2^i\}$. Then
\begin{equation}\label{eqn:comp2}
\mu\left(\bigcup_{i=k}^{r+1}E_i\right)<\delta_{k-1}
\end{equation}
for every $r\in\bbN$ and $k\in\{1,2,\dots,r+1\}$.
For each $k\in\bbN$, let $A_k:=\bigcup_{i=k}^\infty E_i$
and $E:=\bigcap_{k=1}^\infty A_k$.
Then $\{f_{n_k}(x)\}_\seqk$ is a Cauchy sequence in $\bbR$ for every $x\not\in E$.
Therefore, the $\calA$-measurable function $f\colon X\to\bbR$ can be defined by
\[
f(x):=\begin{cases}
\lim\limits_{k\to\infty}f_{n_k}(x) & \mbox{if }x\not\in E,\\
\;0 & \mbox{otherwise}.
\end{cases}
\]
\par
Fix $\seqk$. For any $\seql$, let $r:=k+l-1$. Then \eqref{eqn:comp2} yields
\[
\mu\left(\bigcup_{i=k}^{k+l}E_i\right)=\mu\left(\bigcup_{i=k}^{r+1}E_i\right)<\delta_{k-1},
\]
so that
\[
\sup_\seql\mu\left(\bigcup_{i=k}^{k+l}E_i\right)\leq\delta_{k-1}.
\]
Since $\delta_{k-1}\to 0$ as $k\to\infty$, it follows that
\[
\sup_\seql\mu\left(\bigcup_{i=k}^{k+l}E_i\right)\to 0.
\]
Since $\mu$ satisfies property~(C), if $k\to\infty$ then
\begin{equation}\label{eqn:comp3}
\mu(A_k)=\mu\left(\bigcup_{i=k}^\infty E_i\right)\to 0.
\end{equation}
\par
Let $\ep>0$. By~\eqref{eqn:comp3} there is $k_0\in\bbN$ such that $\mu(A_{k_0})<\ep$.
Now it is routine work to show that $f_{n_k}$ converges to $f$ uniformly on $X\setminus A_{k_0}$.
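In fact, if $x\not\in A_{k_0}$, then $x\not\in E_i$ for every $i\geq k_0$, so that for any $k\geq k_0$ it follows that
\[
|f_{n_k}(x)-f(x)|\leq\sum_{i=k}^\infty |f_{n_{i+1}}(x)-f_{n_i}(x)|
\leq\sum_{i=k}^\infty\frac{1}{2^i}=\frac{1}{2^{k-1}},
\]
and the right-hand side does not depend on $x$.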
\end{proof}
\begin{definition}\label{C0}
Let $\mu\in\calM(X)$. We say that $\mu$ satisfies \emph{property~($\mbox{C}_0$)} if
for any sequence $\{E_n\}_\seqn\subset\calA$,
it follows that $\mu\left(\bigcap_{k=1}^\infty\bigcup_{n=k}^\infty E_n\right)=0$
whenever $\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0$.
\end{definition}
\par
By definition, property~(C) always implies property~($\mbox{C}_0$).
It is easy to see that both properties are equivalent if $\mu$ is strongly order continuous.
Furthermore, for every $r>0$, it follows that $\mu$ satisfies property~(C) if and only if $\mu^r$
satisfies property~(C). The same holds for property~($\mbox{C}_0$).
The following corollary can be proved by the same reasoning as Theorem~\ref{comp}.
\begin{corollary}\label{AE}
Let $\mu\in\calM(X)$.
If $\mu$ satisfies property~($\mbox{C}_0$) and the (p.g.p.),
then any Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn\subset\calF_0(X)$
has a subsequence converging $\mu$-almost everywhere.
\end{corollary}
Now, let us proceed to the Cauchy criterion for convergence in measure.
\begin{theorem}\label{fund}
Let $\mu\in\calM(X)$.
Assume that $\mu$ satisfies property~(C) and the (p.g.p.).
Then, for any sequence $\{f_n\}_\seqn\subset\calF_0(X)$
the following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
\item[\us{(ii)}] $\{f_n\}_\seqn$ converges in $\mu$-measure.
\end{enumerate}
\end{theorem}
\begin{proof}
(i)$\Rightarrow$(ii) By Theorem~\ref{comp},
$\{f_n\}_\seqn$ has a subsequence $\{f_{n_k}\}_\seqk$ converging $\mu$-almost uniformly,
hence in $\mu$-measure, to some $f\in\calF_0(X)$, that is, $f_{n_k}\mconv f$. Let $\ep>0$ and $\sigma>0$.
Since $\mu$ satisfies the (p.g.p.), there is $\delta>0$ such that
\begin{equation}\label{eqn:fund1}
\mu(A\cup B)<\ep
\end{equation}
whenever $A,B\in\calA$ and $\mu(A)\lor\mu(B)<\delta$.
Furthermore, $\{f_n\}_\seqn$ being Cauchy in $\mu$-measure, there is $n_0\in\bbN$ such that
for any $m,n\in\bbN$, if $m,n\geq n_0$ then
\begin{equation}\label{eqn:fund2}
\mu(\{|f_m-f_n|>\sigma/2\})<\delta.
\end{equation}
Since $f_{n_k}\mconv f$, there is $k_0\in\bbN$ such that $n_{k_0}\geq n_0$ and
\begin{equation}\label{eqn:fund3}
\mu(\{|f_{n_{k_0}}-f|>\sigma/2\})<\delta.
\end{equation}
Therefore, it follows from \eqref{eqn:fund1}--\eqref{eqn:fund3} that if $n\geq n_0$ then
\[
\mu(\{|f_n-f|>\sigma\})\leq\mu(\{|f_n-f_{n_{k_0}}|>\sigma/2\}\cup
\{|f_{n_{k_0}}-f|>\sigma/2\})<\ep,
\]
which implies $f_n\mconv f$.
\par
(ii)$\Rightarrow$(i) It follows from (1) of Theorem~\ref{base}; see also~\cite[Theorem~7.2]{L-M-P-K}.
\end{proof}
Next we give a structural consequence of the conjunction of property~(C) and the (p.g.p.).
The following technical lemma is prepared for this purpose and for later use.
\begin{lemma}\label{au}
Let $\mu\in\calM(X)$. Assume that $\mu$ is null-continuous and satisfies the (p.g.p.).
Let $\{f_n\}_\seqn\subset\calF_0(X)$ and $f,g\in\calF_0(X)$.
If $f_n\mconv f$ and $f_n\to g$ $\mu$-a.u., then $f=g$ $\mu$-a.e.~and $f_n\to f$ $\mu$-a.u.
\end{lemma}
\begin{proof}
Let $N:=\{|f-g|>0\}$. Since $f_n\mconv f$ and $f_n\mconv g$, the (p.g.p.)~of $\mu$
implies that $\mu(\{|f-g|>c\})=0$ for every $c>0$.
Therefore, $\mu(N)=0$ by the null-continuity of $\mu$, hence $f=g$ $\mu$-a.e.
\par
Next we prove that $f_n\to f$ $\mu$-a.u.
Since $f_n\to g$ $\mu$-a.u., there is $\{E_k\}_\seqk\subset\calA$ such that
$\mu(E_k)\to 0$ and $\sup_{x\not\in E_k}|f_n(x)-g(x)|\to 0$ for every $\seqk$.
For each $\seqk$, let $D_k:=E_k\cup N$. Then $\mu(D_k)\to 0$
since $\mu$ satisfies the (p.g.p.). Furthermore, for any $\seqk$, letting $\ninfty$ gives
\[
\sup_{x\not\in D_k}|f_n(x)-f(x)|\leq\sup_{x\not\in E_k}|f_n(x)-g(x)|\to 0,
\]
which implies that $f_n\to f$ $\mu$-a.u.
\end{proof}
\begin{proposition}\label{structure}
Let $\mu\in\calM(X)$. Assume that $\mu$ satisfies property~(C) and the (p.g.p.).
Then $\mu$ satisfies property~($\mbox{S}_1$), hence
it is null-continuous and satisfies property~(S).
\end{proposition}
\begin{proof}
Suppose, contrary to our claim, that $\mu$ does not satisfy property ($\mbox{S}_1$).
Then, by~\cite[Theorem~4]{L-L} there are $\{f_n\}_\seqn\subset\calF_0(X)$ and $f\in\calF_0(X)$
such that $f_n\mconv f$ and $\{f_n\}_\seqn$ does not have any subsequence
converging $\mu$-almost uniformly to $f$.
Meanwhile, Theorems~\ref{comp} and~\ref{fund} imply that $\{f_n\}_\seqn$ has a subsequence
$\{f_{n_k}\}_\seqk$ and $g\in\calF_0(X)$ such that $f_{n_k}\to g$ $\mu$-a.u.
By (4) of Proposition~\ref{ex-C-pro}, $\mu$ is null-continuous, hence
Lemma~\ref{au} implies that $f_{n_k}\to f$ $\mu$-a.u.;
this contradicts the fact that
$\{f_n\}_\seqn$ does not have any subsequence converging $\mu$-almost uniformly to $f$.
Hence $\mu$ satisfies property~($\mbox{S}_1$).
See Subsection~\ref{measure} for the fact that $\mu$ satisfies property~(S).
\end{proof}
As a supplementary result of Theorem~\ref{fund},
the notion of Cauchy in $\mu$-measure can be characterized by the convergence of subsequences.
\begin{corollary}\label{subseq}
Let $\mu\in\calM(X)$. Let $\{f_n\}_\seqn\subset\calF_0(X)$.
Assume that $\mu$ satisfies property~(C) and the (p.g.p.).
Then the following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
\item[\us{(ii)}] There is a function $f\in\calF_0(X)$ such that any subsequence of $\{f_n\}_\seqn$ has a
further subsequence converging $\mu$-almost uniformly to $f$.
\item[\us{(iii)}] There is a function $f\in\calF_0(X)$ such that any subsequence of $\{f_n\}_\seqn$ has a
further subsequence converging in $\mu$-measure to $f$.
\end{enumerate}
\end{corollary}
\begin{proof}
Observe that $\mu$ satisfies property~($\mbox{S}_1$) by Proposition~\ref{structure}.
\par
(i)$\Rightarrow$(ii)\ Since $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure,
by Theorem~\ref{fund} there is a function $f\in\calF_0(X)$ such that $f_n\mconv f$.
Assertion (ii) thus follows from~\cite[Theorem~2]{L-L}.
\par
(ii)$\Rightarrow$(iii)\ It is obvious since $\mu$-almost uniform convergence implies convergence in $\mu$-measure.
\par
(iii)$\Rightarrow$(i)\ Let $\ep>0$. For each $\seqn$, let $a_n:=\mu(\{|f_n-f|>\ep\})$.
Take any subsequence $\{a_{n_k}\}_\seqk$ of $\{a_n\}_\seqn$.
Then, $\{f_{n_k}\}_\seqk$ being a subsequence of $\{f_n\}_\seqn$, it follows from assertion (iii) that
$\{f_{n_k}\}_\seqk$ has a further subsequence $\{f_{n_{k_i}}\}_\seqi$ converging
in $\mu$-measure to $f$, so that $a_{n_{k_i}}=\mu(\{|f_{n_{k_i}}-f|>\ep\})\to 0$.
Hence, $a_n\to 0$, that is, $f_n\mconv f$.
Since $\mu$ has the (p.g.p.), $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure
by (1) of Theorem~\ref{base}.
\end{proof}
As the following proposition shows, property~(C) cannot be dropped in
Theorems~\ref{comp} and~\ref{fund}, Proposition~\ref{structure},
and Corollary~\ref{subseq}.
\begin{proposition}\label{counter1}
Let $X:=\bbN$ and $\calA:=2^X$. Let $\mu\colon\calA\to [0,2]$ be the nonadditive measure
defined by
\[
\mu(A):=\begin{cases}
0 & \mbox{if }A=\eset,\\
\sum_{i\in A}1/2^i & \mbox{if $A$ is a nonempty finite subset of $\bbN$},\\[1mm]
1+\sum_{i\in A}1/2^i & \mbox{if $A$ is an infinite subset of $\bbN$}.
\end{cases}
\]
\begin{enumerate}
\item[\us{(1)}] $\mu$ is subadditive, null-continuous, and satisfies the (p.g.p.)~and property~(S).
\item[\us{(2)}] $\mu$ is neither continuous from below nor order continuous.
\item[\us{(3)}] $\mu$ satisfies neither property~(C) nor property~($\mbox{S}_1$).
\item[\us{(4)}] For each $\seqn$, let $A_n:=\{1,2,\dots,n\}$ and $f_n:=\chi_{A_n}$.
Then the sequence $\{f_n\}_\seqn\subset\calF_0(X)$
is Cauchy in $\mu$-measure, but it neither converges in $\mu$-measure
nor has a subsequence converging in $\mu$-measure.
\item[\us{(5)}] For each $\seqn$, let $A_n:=\{n\}^c$ and $f_n:=\chi_{A_n}$.
Then the sequence $\{f_n\}_\seqn\subset\calF_0(X)$ converges in $\mu$-measure to the constant
function $1$
and $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure, but it does not have any subsequence converging
$\mu$-almost uniformly.
\end{enumerate}
\end{proposition}
\begin{proof}
(1)\ We only show that $\mu$ satisfies property~(S). The rest is easy to prove.
Take $\{A_n\}_\seqn\subset\calA$ and assume that $\mu(A_n)\to 0$.
Then there is $n_1\in\bbN$ such that $\mu(A_{n_1})<1/2$, hence $A_{n_1}\subset\{2,3,\dots\}$.
Again by $\mu(A_n)\to 0$, there is $n_2\in\bbN$ such that $n_2>n_1$ and $\mu(A_{n_2})<1/2^2$,
hence $A_{n_2}\subset\{3,4,\dots\}$.
We continue in this fashion to obtain a subsequence $\{A_{n_i}\}_\seqi$
such that $A_{n_i}\subset\{i+1,i+2,\dots\}$ for every $\seqi$.
Then $\mu$ satisfies property~(S) since it follows that
\[
\bigcap_{k=1}^\infty\bigcup_{i=k}^\infty A_{n_i}\subset\bigcap_{k=1}^\infty\{k+1,k+2,\dots\}
=\eset.
\]
\par
(2) For each $\seqn$, let $A_n:=\{1,2,\dots,n\}$.
Then $A_n\uparrow\bbN$ and $\mu(A_n)\to 1$, but $\mu(\bbN)=2$.
Hence $\mu$ is not continuous from below.
Next, for each $\seqn$, let $A_n:=\{n,n+1,\dots\}$. Then $A_n\downarrow\eset$,
but $\mu(A_n)>1$ for every $\seqn$. Hence $\mu$ is not order continuous.
\par
(3) For each $\seqn$, let $A_n:=\{n\}$. Then, letting $k\to\infty$ shows
\[
\sup_{\seql}\mu\left(\bigcup_{n=k}^{k+l}A_n\right)=\sum_{i=k}^\infty\frac{1}{2^i}\to 0,
\]
but
\[
\mu\left(\bigcup_{n=k}^\infty A_n\right)=1+\sum_{i=k}^\infty\frac{1}{2^i}\to 1.
\]
Hence $\mu$ does not satisfy property~(C).
Furthermore, $\mu(A_n)=1/2^n\to 0$,
but for any subsequence $\{A_{n_k}\}_\seqk$ of $\{A_n\}_\seqn$ it follows that
\[
\mu\left(\bigcup_{i=k}^\infty A_{n_i}\right)=\mu(\{n_k,n_{k+1},\dots\})>1
\]
for every $\seqk$. Therefore, $\mu$ does not satisfy property~($\mbox{S}_1$).
\par
(4)\ Let $\ep>0$ and find $n_0\in\bbN$ such that for any $n,l\in\bbN$, if $n\geq n_0$ then
$\sum_{i=n+1}^{n+l}1/2^i<\ep$. Then
\begin{align*}
\mu(\{|f_{n+l}-f_n|>\ep\})
&\leq\mu(A_{n+l}\setminus A_n)\\
&=\mu(\{n+1,\dots,n+l\})\\
&=\sum_{i=n+1}^{n+l}\frac{1}{2^i}<\ep,
\end{align*}
which means that $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
\par
Next we prove that $\{f_n\}_\seqn$ does not converge in $\mu$-measure.
Suppose, contrary to our claim, that there is $f\in\calF_0(X)$ such that $f_n\mconv f$.
If there were $n_0\in X$ such that $f(n_0)\ne 1$, then for any $n\geq n_0$
we would have $n_0\in\{|f_n-f|>\ep_0\}$, where $\ep_0:=|1-f(n_0)|/2>0$, hence
\[
\mu(\{|f_n-f|>\ep_0\})\geq\frac{1}{2^{n_0}}>0,
\]
which contradicts our assumption. Hence $f(x)=1$ for every $x\in X$.
Therefore, letting $\ninfty$ shows
\[
\mu(\{|f_n-f|>1/2\})=\mu(\{n+1,n+2,\dots\})=1+\sum_{i=n+1}^\infty\frac{1}{2^i}\to 1,
\]
which contradicts the fact that $f_n\mconv f$.
\par
Finally, suppose that $\{f_n\}_\seqn$ has a subsequence $\{f_{n_k}\}_\seqk$ converging
in $\mu$-measure to a function $f\in\calF_0(X)$.
Since $\mu$ has the (p.g.p.)~and $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure,
it follows from (2) of Theorem~\ref{base}
that $f_n\mconv f$, which contradicts the fact
that $\{f_n\}_\seqn$ does not converge in $\mu$-measure.
\par
(5)\ For any $\ep>0$ and $\seqn$, it follows that
\[
\mu(\{|f_n-1|>\ep\})\leq\mu(A_n^c)=\frac{1}{2^n},
\]
hence $f_n\mconv 1$. Since $\mu$ satisfies the (p.g.p.),
$\{f_n\}_\seqn$ is Cauchy in $\mu$-measure by~(1) of Theorem~\ref{base}.
Suppose, contrary to our claim, that $\{f_n\}_\seqn$ has a subsequence $\{f_{n_k}\}_\seqk$
converging $\mu$-almost uniformly to a function $f\in\calF_0(X)$.
Since $f_{n_k}\mconv 1$ and $\mu$ is null-continuous and satisfies the (p.g.p.),
it follows from Lemma~\ref{au} that $f=1$ $\mu$-a.e.~and $f_{n_k}\to 1$ $\mu$-a.u.
Hence, there is $E\in\calA$ such that $\mu(E)<1/2$ and
\begin{equation}\label{eqn:counter1}
\sup_{x\not\in E}|f_{n_k}(x)-1|\to 0.
\end{equation}
Then $E$ is a finite (possibly empty) subset of $X$ since $\mu(E)<1/2$.
If $E=\eset$, then $n_k\not\in E$ for every $\seqk$, hence
\[
1\geq\sup_{x\not\in E}|f_{n_k}(x)-1|\geq |f_{n_k}(n_k)-1|=1,
\]
which implies that $\sup_{x\not\in E}|f_{n_k}(x)-1|=1$.
If $E$ is a nonempty finite subset of $X$, then there is $k_0\in\bbN$ such that
$n_k\not\in E$ for every $k\geq k_0$, hence $\sup_{x\not\in E}|f_{n_k}(x)-1|=1$.
Both cases thus contradict~\eqref{eqn:counter1}.
\end{proof}
The following example shows that Theorems~\ref{comp} and~\ref{fund} are
sharpened versions of the corresponding results in~\cite{J-S-W-K-L-Y}.
\begin{example}\label{counter2}
Let $\nu\in\calM(X)$.
Assume that $\nu$ is continuous from below and subadditive.
This assumption is satisfied, for instance, if $\nu$ is the Lebesgue measure on $\bbR$
in the case where $X=\bbR$ and $\calA$ is the $\sigma$-field of all Borel subsets of $\bbR$,
or if $\nu$ is given by $\nu(A):=\sum_{i\in A}1/2^i$ for every $A\in 2^\bbN$
in the case where $X=\bbN$ and $\calA=2^\bbN$.
Define the nonadditive measure $\mu\colon\calA\to [0,\infty]$ by
\[
\mu(A):=\begin{cases}
\nu(A) & \mbox{if }\nu(A)<1,\\
1+\nu(A) & \mbox{if }\nu(A)\geq 1
\end{cases}
\]
for every $A\in\calA$.
Then $\mu$ satisfies property~(C) and the (p.g.p.).
In addition, it is null-additive and null-continuous, but not continuous from below.
\end{example}
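A brief verification of Example~\ref{counter2} may be in order.
Since $\mu(A)=\nu(A)$ whenever $\nu(A)<1$, we have $\mu(A)<1$ if and only if $\nu(A)<1$.
If $\mu(A_n)\lor\mu(B_n)\to 0$, then $\mu(A_n)\lor\mu(B_n)<1$, and hence $\nu(A_n)\lor\nu(B_n)=\mu(A_n)\lor\mu(B_n)$, for all sufficiently large $n$; the subadditivity of $\nu$ then gives $\nu(A_n\cup B_n)\leq\nu(A_n)+\nu(B_n)\to 0$, so that $\mu(A_n\cup B_n)=\nu(A_n\cup B_n)\to 0$. Thus $\mu$ satisfies the (p.g.p.).
If $\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0$, then for all sufficiently large $k$ these suprema agree with those for $\nu$, so that
\[
\nu\left(\bigcup_{n=k}^\infty E_n\right)=\sup_\seql\nu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0
\]
by the continuity of $\nu$ from below, hence $\mu\left(\bigcup_{n=k}^\infty E_n\right)\to 0$; thus $\mu$ satisfies property~(C).
The null-additivity and the null-continuity of $\mu$ follow from those of $\nu$ in the same way.
Finally, in the Lebesgue-measure instance the sets $A_n:=[0,1-1/n]$ increase to $[0,1)$ and $\mu(A_n)=1-1/n\to 1$, while $\mu([0,1))=2$, so that $\mu$ is not continuous from below.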
\section{The necessity of property (C)}\label{necessity}
In this section, we discuss whether property~(C) is a necessary condition for the Cauchy criterion to hold
in the case where $X$ is countable.
To this end we prepare the following lemmas.
\begin{lemma}\label{countable}
Let $\mu\in\calM(X)$. Assume that $\mu$ is weakly null-additive.
If $X$ is countable, then the following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ satisfies property~(S).
\item[\us{(ii)}] $\mu$ is null-continuous.
\item[\us{(iii)}] For any sequence $\{E_n\}_\seqn\subset\calA$,
if $\mu(E_n)\to 0$ then $\mu\left(\bigcap_{k=1}^\infty\bigcup_{n=k}^\infty E_n\right)=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i)$\Leftrightarrow$(ii)\ It follows from~\cite[Proposition~3.2]{U-M}.
\par
(ii)$\Rightarrow$(iii)\ We may assume that $\bigcup_{n=1}^\infty E_n\ne\eset$.
For each $x\in\bigcup_{n=1}^\infty E_n$, let $\calE_x$ be the collection of all sets $E_n$ containing $x$,
and let $E(x)$ be the intersection of all sets in $\calE_x$.
The set $D:=\bigcap_{k=1}^\infty\bigcup_{n=k}^\infty E_n$ is at most countable,
and we may assume that $D\ne\eset$ since otherwise $\mu(D)=0$ trivially.
It may thus be expressed as $\{a_m\colon m\in T\}$, where
$T=\bbN$ or $T=\{1,2,\dots,N\}$ for some $N\in\bbN$.
Then $D=\bigcup_{m\in T}E(a_m)$.
\par
Fix $m\in T$. Since $a_m\in\bigcup_{n=1}^\infty E_n$, there is $n_1^{(m)}\in\bbN$
such that $a_m\in E_{n_1^{(m)}}$.
Since $a_m\in D$, we also have $a_m\in\bigcup_{n=n_1^{(m)}+1}^\infty E_n$, hence there is $n_2^{(m)}\in\bbN$ such that
$n_2^{(m)}>n_1^{(m)}$ and $a_m\in E_{n_2^{(m)}}$.
We continue in this fashion to obtain an increasing sequence $\{n_j^{(m)}\}_\seqj$ such that
$E(a_m)\subset E_{n_j^{(m)}}$ for every $\seqj$.
Hence $\mu(E(a_m))=0$ for every $m\in T$ since $\mu(E_n)\to 0$.
Therefore, $T$ being at most countable, $\mu(D)=0$
by the weak null-additivity and null-continuity of $\mu$.
\par
(iii)$\Rightarrow$(i)\ It is obvious.
\end{proof}
\begin{remark}
Assertion (iii) of Lemma~\ref{countable} implies the weak null-additivity of $\mu$.
Indeed, take $A,B\in\calA$ and assume that $\mu(A)=\mu(B)=0$.
For each $\seqn$, let
\[
E_n:=\begin{cases}
A & \mbox{if $n$ is even},\\
B & \mbox{if $n$ is odd}.
\end{cases}
\]
Then $\mu(E_n)=0$ for every $\seqn$.
Hence, assertion (iii) yields
$\mu(A\cup B)=\mu\left(\bigcap_{k=1}^\infty\bigcup_{n=k}^\infty E_n\right)=0$,
which implies the weak null-additivity of $\mu$.
\end{remark}
\begin{lemma}\label{null-conti}
Let $\mu\in\calM(X)$. If any Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn$ has
a subsequence converging $\mu$-almost everywhere, then $\mu$ is null-continuous.
\end{lemma}
\begin{proof}
Take a nondecreasing sequence $\{N_n\}_\seqn\subset\calA$ and assume that $\mu(N_n)=0$
for every $\seqn$. Let $N:=\bigcup_{n=1}^\infty N_n$ and assume that $N\ne\eset$
without loss of generality.
For each $\seqn$, let $f_n:=n\chi_{N_n}$.
Then $\{f_n\}_\seqn$ is a Cauchy in $\mu$-measure sequence in $\calF_0(X)$
such that $f_n(x)\to\infty$ for every $x\in N$.
Hence, there are a function $f\in\calF_0(X)$
and a subsequence $\{f_{n_k}\}_\seqk$ of $\{f_n\}_\seqn$ such that $f_{n_k}\to f$ $\mu$-a.e.,
so that there is $N_0\in\calA$ such that $\mu(N_0)=0$ and $f_{n_k}(x)\to f(x)$
for every $x\not\in N_0$.
Since $f_n(x)\to\infty$ for every $x\in N$ and $f$ is real-valued, we have $N\subset N_0$,
hence $\mu(N)=0$. Thus $\mu$ is null-continuous.
\end{proof}
\begin{theorem}\label{NSC0}
Let $\mu\in\calM(X)$. Assume that $\mu$ satisfies the (p.g.p.).
If $X$ is countable, then the following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ satisfies property~($\mbox{C}_0$).
\item[\us{(ii)}] Any Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn\subset\calF_0(X)$
has a subsequence converging $\mu$-almost everywhere.
\end{enumerate}
\end{theorem}
\begin{proof}
(i)$\Rightarrow$(ii)\ It follows from Corollary~\ref{AE}.
\par
(ii)$\Rightarrow$(i)\ Take $\{E_n\}_\seqn\subset\calA$ and assume that
\[
\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0.
\]
Then $\mu(E_n)\to 0$. Since $\mu$ is weakly null-additive (it satisfies the (p.g.p.))
and null-continuous by Lemma~\ref{null-conti}, Lemma~\ref{countable} yields
$\mu\left(\bigcap_{k=1}^\infty\bigcup_{n=k}^\infty E_n\right)=0$.
Hence $\mu$ satisfies property~($\mbox{C}_0$).
\end{proof}
\begin{theorem}\label{NSC}
Let $\mu\in\calM(X)$. Assume that $\mu$ satisfies the (p.g.p.).
If $X$ is countable, then the following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ satisfies property~(C).
\item[\us{(ii)}] Any Cauchy in $\mu$-measure sequence $\{f_n\}_\seqn\subset\calF_0(X)$
has a subsequence converging $\mu$-almost uniformly.
\item[\us{(iii)}] $\mu$ is null-continuous and any Cauchy in $\mu$-measure
sequence $\{f_n\}_\seqn\subset\calF_0(X)$ has a subsequence converging in $\mu$-measure.
\item[\us{(iv)}] $\mu$ is null-continuous and any Cauchy in $\mu$-measure
sequence $\{f_n\}_\seqn\subset\calF_0(X)$ converges in $\mu$-measure.
\end{enumerate}
\end{theorem}
\begin{proof}
(i)$\Rightarrow$(ii)\ It follows from Theorem~\ref{comp}.
\par
(ii)$\Rightarrow$(iii)\ It follows from Lemma~\ref{null-conti}.
\par
(iii)$\Rightarrow$(iv)\ It follows from (2) of Theorem~\ref{base}.
\par
(iv)$\Rightarrow$(i)\ Take $\{E_n\}_\seqn\subset\calA$ and assume that
\begin{equation}\label{eqn:NSC1}
\sup_\seql\mu\left(\bigcup_{n=k}^{k+l}E_n\right)\to 0.
\end{equation}
Let $D:=\bigcap_{k=1}^\infty\bigcup_{n=k}^\infty E_n$.
Since $\mu(E_n)\to 0$ by~\eqref{eqn:NSC1} and $\mu$ is weakly null-additive and null-continuous,
$\mu(D)=0$ follows from Lemma~\ref{countable}.
For each $\seqn$, let
$A_n:=\left(\bigcup_{i=n}^\infty E_i\right)\setminus D$ and $f_n:=\chi_{A_n}$.
Then $A_n\downarrow\eset$, hence $f_n(x)\to 0$ for every $x\in X$.
\par
We first show that $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
To this end, observe that for any $k,l\in\bbN$, if $x\not\in\bigcup_{n=k}^{k+l}E_n$,
then $f_{k+l}(x)=f_k(x)$.
In fact, if $x\not\in A_k$, then $x\not\in A_{k+l}$, hence $f_{k+l}(x)=f_k(x)=0$.
If $x\in A_k$, then the set inclusion
\[
A_k=\left\{\left(\bigcup_{n=k}^{k+l-1}E_n\right)\setminus D\right\}
\cup\left\{\left(\bigcup_{n=k+l}^\infty E_n\right)\setminus D\right\}
\subset\left(\bigcup_{n=k}^{k+l} E_n\right)\cup A_{k+l}
\]
implies that $x\in A_{k+l}$ since $x\not\in\bigcup_{n=k}^{k+l}E_n$.
Hence $f_{k+l}(x)=f_k(x)=1$.
Therefore, for given $\ep>0$ and $\delta>0$, by~\eqref{eqn:NSC1} there is $k_0\in\bbN$
such that for any $k,l\in\bbN$, if $k\geq k_0$ then
\[
\mu(\{|f_{k+l}-f_k|>\delta\})\leq\mu\left(\bigcup_{n=k}^{k+l}E_n\right)<\ep.
\]
Thus $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
\par
Now, by assertion (iv) there is a function $f\in\calF_0(X)$ such that $f_n\mconv f$.
Since $\mu$ satisfies property~(S) by Lemma~\ref{countable}, the Riesz theorem shows that
$\{f_n\}_\seqn$ has a subsequence $\{f_{n_k}\}_\seqk$ such that $f_{n_k}\to f$ $\mu$-a.e.
Hence $\mu(\{f\ne 0\})=0$ since $f_n(x)\to 0$ for every $x\in X$. Thus
\begin{equation}\label{eqn:NSC2}
\mu(\{|f_n-f|>1/2\}\cup\{f\neq 0\})\to 0
\end{equation}
since $f_n\mconv f$ and $\mu$ satisfies the (p.g.p.).
Observe that
\[
A_n=\{|f_n|>1/2\}\subset\{|f_n-f|>1/2\}\cup\{f\ne 0\}
\]
for every $\seqn$. From this observation and~\eqref{eqn:NSC2} it follows that $\mu(A_n)\to 0$.
Since $\mu(D)=0$ and $\bigcup_{n=k}^\infty E_n\subset A_k\cup D$ for every $\seqk$,
the (p.g.p.)~of $\mu$ implies that $\mu\left(\bigcup_{n=k}^\infty E_n\right)\to 0$.
Therefore, $\mu$ satisfies property~(C).
\end{proof}
\section{The space of the measurable functions}\label{MFS}
Let us introduce a prenorm on the real linear space $\calF_0(X)$.
To this end, define the function $\varphi\colon [0,\infty]\to [0,\pi/2]$ by
\[
\varphi(t):=\begin{cases}
\arctan t & \mbox{if }t\ne\infty,\\
\pi/2 & \mbox{if }t=\infty.
\end{cases}
\]
Then $\varphi$ is a continuous and increasing function with the property that $\varphi(t)=0$
if and only if $t=0$.
It satisfies $\varphi(t)\leq t$ and
$\varphi(s+t)\leq\varphi(s)+\varphi(t)$ for every $s,t\in [0,\infty]$.
In addition, given a constant $M>0$, it follows that $\varphi(Mt)\leq\max\{1,M\}\varphi(t)$
for every $t\in [0,\infty]$.
The subadditivity and the last inequality both follow from the concavity of $\varphi$ on $[0,\infty]$
together with $\varphi(0)=0$, the case $M\leq 1$ of the latter being the monotonicity of $\varphi$.
\begin{definition}[\cite{D-S,Rao}]\label{DS}
Let $\mu\in\calM(X)$.
Define the prenorm $\dsn{\cdot}$ on $\calF_0(X)$ by
\[
\dsn{f}:=\inf_{c>0}\varphi(c+\mu(\{|f|>c\}))
\]
for every $f\in\calF_0(X)$.
If the measure $\mu$ needs to be specified, then $\dsn{f}$ is written as $\dsnm{f}{\mu}$.
\end{definition}
In what follows, when the space $\calF_0(X)$ is considered together with the prenorm $\dsn{\cdot}$,
it is written as $\calL^0(\mu)$ since the definition of the prenorm depends on the nonadditive measure $\mu$.
\par
The prenorm $\dsn{\cdot}$ is truncated subhomogeneous, that is,
\[
\dsn{cf}\leq\max\{1,|c|\}\dsn{f}
\]
for every $f\in\calL^0(\mu)$ and $c\in\bbR$,
but it does not satisfy the triangle inequality in general.
Furthermore, for any $f\in\calL^0(\mu)$ it follows that
$\dsn{f}=0$ if and only if $\mu(\{|f|>c\})=0$ for every $c>0$;
they are equivalent to $\mu(\{|f|>0\})=0$ if $\mu$ is null-continuous.
The same argument as that in ordinary measure theory shows that
for any sequence $\{f_n\}_\seqn\subset\calL^0(\mu)$ and $f\in\calL^0(\mu)$,
convergence with respect to $\dsn{\cdot}$ is equivalent to convergence in $\mu$-measure, that is,
it follows that $\dsn{f_n-f}\to 0$ if and only if $f_n\mconv f$.
In the same way, $\{f_n\}_\seqn$ is Cauchy with respect to $\dsn{\cdot}$
if and only if $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure.
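For the reader's convenience, we record the elementary estimates behind these equivalences.
If $\ep>0$ and $\mu(\{|f_n-f|>\ep\})<\ep$, then
\[
\dsn{f_n-f}\leq\varphi(\ep+\mu(\{|f_n-f|>\ep\}))\leq\varphi(2\ep).
\]
Conversely, if $0<\delta\leq\ep$ and $\dsn{f_n-f}<\varphi(\delta)$, then there is $c>0$ such that $c+\mu(\{|f_n-f|>c\})<\delta$, hence $c<\ep$ and
\[
\mu(\{|f_n-f|>\ep\})\leq\mu(\{|f_n-f|>c\})<\delta.
\]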
The prenorm $\dsn{\cdot}$ also has the following properties, some of which are related to
the characteristic of nonadditive measures.
\begin{proposition}\label{PDS}
Let $\mu\in\calM(X)$.
\begin{enumerate}
\item[\us{(1)}] For any $A\in\calA$ and $c\in\bbR$
it follows that $\dsn{c\chi_A}=\min\{\varphi(\mu(A)),\varphi(|c|)\}$.
\item[\us{(2)}] For any $f,g\in\calL^0(\mu)$, if $|f|\leq |g|$,
then $\dsn{f}\leq\dsn{g}$.
\item[\us{(3)}] $\mu$ is weakly null-additive if and only if $\dsn{\cdot}$ is weakly null-additive.
\item[\us{(4)}] $\mu$ is null-additive if and only if $\dsn{\cdot}$ is null-additive.
\item[\us{(5)}] If $\mu$ is $K$-relaxed subadditive for some $K\geq 1$,
then $\dsn{\cdot}$ satisfies the $K$-relaxed triangle inequality.
In particular, if $\mu$ is subadditive, then $\dsn{\cdot}$ satisfies the triangle inequality.
\item[\us{(6)}] $\mu$ is null-additive if and only if it follows that $\dsn{f}=\dsn{g}$ whenever
$f,g\in\calL^0(\mu)$ and $f\sim g$.
\end{enumerate}
\end{proposition}
\begin{proof}
Assertions (1)--(5) are easy to prove.
Some of their proofs can be found in~\cite[Proposition~3.2]{Kawabe2021}.
See also~\cite{H-O}.
\par
(6)\ The ``only if'' part. Let $f,g\in\calL^0(\mu)$ and assume that $f\sim g$.
Then $\mu(\{|f-g|>c\})=0$ for every $c>0$, so that $\dsn{f-g}=0$.
Since $\mu$ is null-additive, so is $\dsn{\cdot}$ by (4), hence
$\dsn{f}=\dsn{g+(f-g)}=\dsn{g}$ follows.
\par
The ``if'' part. Take $A,B\in\calA$ and assume that $\mu(B)=0$.
For each $r>0$, let $f:=r\chi_{A\cup B}$ and $g:=r\chi_A$.
Then $\mu(\{|f-g|>c\})=0$ for every $c>0$, hence $f\sim g$, and finally $\dsn{f}=\dsn{g}$.
It thus follows from (1) that
\[
\min\left\{\varphi(\mu(A\cup B)),\varphi(r)\right\}=\dsn{f}=
\dsn{g}=\min\left\{\varphi(\mu(A)),\varphi(r)\right\}
\]
for every $r>0$. Letting $r\uparrow\infty$ yields $\varphi(\mu(A\cup B))=\varphi(\mu(A))$, which implies
$\mu(A\cup B)=\mu(A)$ since $\varphi$ is injective.
Thus $\mu$ is null-additive.
\end{proof}
Next let us construct the quotient space of $\calL^0(\mu)$ by the equivalence relation
defined in Subsection~\ref{equiv}.
Let
\[
L^0(\mu):=\{[f]\colon f\in\calL^0(\mu)\}.
\]
Given an equivalence class $[f]\in L^0(\mu)$, define the prenorm on $L^0(\mu)$ by
$\dsn{[f]}:=\dsn{f}$, which is well-defined if $\mu$ is null-additive by (6) of Proposition~\ref{PDS}.
This prenorm has the same properties as the prenorm on $\calL^0(\mu)$
and separates points of $L^0(\mu)$, that is, for any $[f]\in L^0(\mu)$,
if $\dsn{[f]}=0$ then $[f]=0$.
\par
Now, the Cauchy criterion with respect to $\dsn{\cdot}$ can be deduced from Theorem~\ref{fund}.
\begin{theorem}\label{calL0}
Let $\mu\in\calM(X)$. Assume that $\mu$ satisfies property~(C) and the (p.g.p.).
\begin{enumerate}
\item[\us{(1)}] A sequence $\{f_n\}_\seqn\subset\calL^0(\mu)$ is Cauchy if and only if it converges.
\item[\us{(2)}] Additionally, assume that $\mu$ is null-additive.
A sequence $\{[f_n]\}_\seqn\subset L^0(\mu)$ is Cauchy if and only if it converges.
\end{enumerate}
\end{theorem}
\begin{corollary}\label{L0}
Let $\mu\in\calM(X)$. Assume that $\mu$ satisfies property~(C) and the (p.g.p.).
Then $\calL^0(\mu)$ is complete.
If $\mu$ is additionally assumed to be null-additive, then $L^0(\mu)$ is complete.
\end{corollary}
\begin{remark}
Every nonadditive measure that is continuous from below satisfies property (C).
Furthermore, by Example~\ref{counter2} there is a nonadditive measure
that satisfies property~(C) and the (p.g.p.), but is not continuous from below.
Consequently, Theorem~\ref{calL0} and Corollary~\ref{L0} are sharpened versions
of those given in~\cite{Kawabe2021}.
\end{remark}
\begin{example}
Let $X:=\bbN$ and $\calA:=2^X$.
Let $\mu$ be the nonadditive measure given in Proposition~\ref{counter1}.
Then $\mu$ satisfies the (p.g.p.)~but not property~(C).
For each $\seqn$, let $A_n:=\{1,2,\dots,n\}$ and $f_n:=\chi_{A_n}$.
Then by (4) of Proposition~\ref{counter1} the sequence
$\{f_n\}_\seqn\subset\calL^0(\mu)$ is Cauchy but does not converge.
Hence $\calL^0(\mu)$ is not complete.
This fact shows that property~(C) cannot be dropped
in Theorem~\ref{calL0} and Corollary~\ref{L0}.
\end{example}
The results for dense subsets and the separability of $\calL^0(\mu)$ and $L^0(\mu)$ immediately follow
from~\cite[Theorems~7.2 and~7.7]{Kawabe2021}.
Recall that a nonadditive measure $\mu$ has a \emph{countable basis}\/ if there is a countable subset
$\calD$ of $\calA$, which is called a \emph{countable basis} of $\mu$,
such that for any $A\in\calA$ and $\ep>0$
there is $D\in\calD$ for which $\mu(A\triangle D)<\ep$.
Recall that $S(X)=\{[h]\colon h\in\calS(X)\}$.
\begin{theorem}
Let $\mu\in\calM(X)$. Assume that $\mu$ is order continuous.
Then $\calS(X)$ is dense in $\calL^0(\mu)$.
If $\mu$ is additionally assumed to be null-additive, then $S(X)$ is dense in $L^0(\mu)$.
\end{theorem}
\begin{theorem}\label{L0sep}
Let $\mu\in\calM(X)$. Assume that $\mu$ is order continuous and satisfies the (p.g.p.).
Assume that $\mu$ has a countable basis.
Then there is a countable subset $\calE$ of $\calL^0(\mu)$
such that for any $f\in\calL^0(\mu)$ and $\ep>0$ one can find $h\in\calE$ for which $\dsn{f-h}<\ep$.
Hence $\calL^0(\mu)$ is separable.
If $\mu$ is additionally assumed to be null-additive, then $L^0(\mu)$ is separable.
\end{theorem}
\section{The Choquet-Lorentz space}\label{CLspace}
In this section, a type of Lorentz space is defined by using the Choquet integral instead of the Lebesgue integral,
and its completeness and separability are discussed.
\begin{definition}\label{C-L}
Let $\mu\in\calM(X)$ and let $0<p<\infty$ and $0<q<\infty$. Define the function $\pqn{\cdot}\colon\calF_0(X)\to [0,\infty]$ by
\[
\pqn{f}:=\left(\frac{p}{q}\right)^{1/q}\Ch(\mupq,|f|^q)^{1/q}
\]
for every $f\in\calF_0(X)$ and let
\[
\calLpq(\mu):=\{f\in\calF_0(X)\colon\pqn{f}<\infty\}.
\]
If the measure $\mu$ needs to be specified, then $\pqn{f}$ is written as $\pqnm{f}{\mu}$.
The space $\calLpq(\mu)$ is called the \textit{Choquet-Lorentz space}\/
and the prenorm $\pqn{\cdot}$ on $\calLpq(\mu)$ is called the
\textit{Choquet-Lorentz prenorm}.
In particular, if $p=q$ then the space $\calL^{q,q}(\mu)$ is
called the \textit{Choquet $\calL^q$ space}
and $\|\cdot\|_{q,q}$ is called the \textit{Choquet $\calL^q$ prenorm}.
They are denoted by $\calL^q(\mu)$ and $\qn{\cdot}$, respectively.
In other words,
\[
\qn{f}=\Ch(\mu,|f|^q)^{1/q}
\]
for every $f\in\calF_0(X)$ and
\[
\calLq(\mu)=\{f\in\calF_0(X)\colon\qn{f}<\infty\}.
\]
\end{definition}
\par
If the prenorm $\pqn{\cdot}$ on $\calLpq(\mu)$ is a quasi-seminorm,
then $\calLpq(\mu)$ is a real linear subspace of $\calF_0(X)$, but this is not the case in general.
When $\mu$ is $\sigma$-additive, the Choquet-Lorentz space coincides with the Lorentz space
and the Choquet $\calL^q$ space coincides with the Lebesgue space of all $q$-th order integrable functions,
both of which are defined by the abstract Lebesgue integral~\cite[Theorem~6.6]{C-R}.
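Indeed, the transformation formula in Subsection~\ref{integrals}, applied to the measure $\mupq$, yields
\[
\pqn{f}^q=\frac{p}{q}\,\Ch(\mupq,|f|^q)
=p\int_0^\infty t^{q-1}\mu(\{|f|>t\})^{q/p}dt
\]
for every $f\in\calF_0(X)$, which is a familiar expression of the Lorentz norm in terms of the distribution function when $\mu$ is $\sigma$-additive.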
By definition it follows that
\[
\pqnm{f}{\mu}=\left(\frac{p}{q}\right)^{1/q}\qnm{f}{\mupq}
\]
for every $f\in\calL^{p,q}(\mu)$ and that $\calL^{p,q}(\mu)=\calL^q(\mupq)$.
This observation is important and shows that some properties
of the Choquet-Lorentz space may be deduced from those of the Choquet $\calL^q$ space;
see the proofs of Theorems~\ref{L-comp1}, \ref{L-dense}, and~\ref{L-sep}.
The prenorm $\pqn{\cdot}$ does not satisfy the triangle inequality in general
even if $\mu$ is $\sigma$-additive~\cite[Theorem~6.5]{C-R}.
\begin{proposition}\label{pqnorm}
Let $\mu\in\calM(X)$. Let $0<p<\infty$ and $0<q<\infty$.
\begin{enumerate}
\item[\us{(1)}] For any $A\in\calA$ it follows that
\[
\pqn{\chi_A}=\left(\dfrac{p}{q}\right)^{1/q}\mu(A)^{1/p}.
\]
\item[\us{(2)}] For any $f\in\calLpq(\mu)$ it follows that $\pqn{f}=0$
if and only if $\mu(\{|f|>c\})=0$ for every $c>0$;
they are equivalent to $\mu(\{|f|>0\})=0$ if $\mu$ is null-continuous.
\item[\us{(3)}] For any $f\in\calLpq(\mu)$ and $c>0$ it follows that $\pqn{cf}=|c|\pqn{f}$.
Hence $\pqn{\cdot}$ is homogeneous.
\item[\us{(4)}] For any $f\in\calLpq(\mu)$ and $c>0$ it follows that
\[
\mu(\{|f|>c\})\leq\frac{1}{c^p}\left(\frac{q}{p}\right)^{p/q}\pqn{f}^p.
\]
\item[\us{(5)}] For any $f,g\in\calLpq(\mu)$, if $|f|\leq |g|$ then $\pqn{f}\leq\pqn{g}$.
\item[\us{(6)}] $\mu$ is weakly null-additive if and only if $\pqn{\cdot}$ is weakly null-additive.
\item[\us{(7)}] $\mu$ is null-additive if and only if $\pqn{\cdot}$ is null-additive.
\item[\us{(8)}] If $\mu$ is $K$-relaxed subadditive for some $K\geq 1$, then $\pqn{\cdot}$ satisfies
the $2^{1+\frac{1}{p}+\frac{1}{q}}K^\frac{1}{p}$-relaxed triangle inequality.
Hence $\pqn{\cdot}$ is a quasi-seminorm.
\item[\us{(9)}] $\mu$ is null-additive if and only if it follows that $\pqn{f}=\pqn{g}$
whenever $f,g\in\calLpq(\mu)$ and $f\sim g$.
\end{enumerate}
\end{proposition}
\begin{proof}
Assertions (1)--(5) are easy to prove
and assertions (6) and (7) can be derived
in the same manner as~\cite[Proposition~3.2]{Kawabe2021}.
\par
(8)\ Let $f,g\in\calLpq(\mu)$. For any $t>0$, we have
\[
\{|f+g|^q>t\}\subset\{2^q|f|^q>t\}\cup\{2^q|g|^q>t\},
\]
hence
\begin{equation}\label{eqn:pqnorm1}
\mu(\{|f+g|^q>t\})\leq K\bigl\{\mu(\{2^q|f|^q>t\})+\mu(\{2^q|g|^q>t\})\bigr\}
\end{equation}
by the $K$-relaxed subadditivity of $\mu$. It thus follows that
\begin{align*}
&\hspace*{-7mm}\left(\frac{q}{p}\right)^{1/q}\pqn{f+g}\\
&=\left(\int_0^\infty\mu(\{|f+g|^q>t\})^{q/p}dt\right)^{1/q}\\
&\leq K^\frac{1}{p}\left(\int_0^\infty\bigl\{\mu(\{2^q|f|^q>t\})
+\mu(\{2^q|g|^q>t\})\bigr\}^{q/p}dt\right)^{1/q}\\
&\leq 2^\frac{1}{p}K^\frac{1}{p}\left(\int_0^\infty\mu(\{2^q|f|^q>t\})^{q/p}dt
+\int_0^\infty\mu(\{2^q|g|^q>t\})^{q/p}dt\right)^{1/q}\\
&=2^\frac{1}{p}K^\frac{1}{p}\left(2^q\Ch(\mupq,|f|^q)+2^q\Ch(\mupq,|g|^q)\right)^{1/q}\\
&\leq 2^\frac{1}{p}K^\frac{1}{p}2^{1+\frac{1}{q}}\left(\Ch(\mupq,|f|^q)^{1/q}
+\Ch(\mupq,|g|^q)^{1/q}\right)\\
&=2^{1+\frac{1}{p}+\frac{1}{q}}K^\frac{1}{p}\left(\frac{q}{p}\right)^{1/q}\left(\pqn{f}+\pqn{g}\right),
\end{align*}
where the first inequality is due to~\eqref{eqn:pqnorm1} and the second and third inequalities
are due to the elementary inequality
\begin{equation}\label{elementary}
|a+b|^r\leq 2^r(|a|^r+|b|^r)
\end{equation}
that holds for every $a,b\in\bbR$ and $0<r<\infty$.
\par
(9)\ It can be proved in the same manner as (6) of Proposition~\ref{PDS}.
\end{proof}
The quotient space
\[
L^{p,q}(\mu):=\{[f]\colon f\in\calLpq(\mu)\}
\]
is defined by the equivalence relation stated in Subsection~\ref{equiv}.
Given an equivalence class $[f]\in L^{p,q}(\mu)$,
define the prenorm on $L^{p,q}(\mu)$ by $\pqn{[f]}:=\pqn{f}$,
which is well-defined if $\mu$ is null-additive by (9) of Proposition~\ref{pqnorm}.
This prenorm has the same properties as the prenorm on $\calLpq(\mu)$ and
separates points of $L^{p,q}(\mu)$, that is, for any $[f]\in L^{p,q}(\mu)$,
if $\pqn{[f]}=0$ then $[f]=0$.
\par
To show the completeness of the spaces $\calL^{p,q}(\mu)$ and $L^{p,q}(\mu)$
some suitable convergence theorems of
the Choquet integral are needed.
Recall that every nonadditive measure that is monotone autocontinuous from below is null-additive.
\begin{proposition}\label{Ch-conv}
Let $\mu\in\calM(X)$. Let $0<q<\infty$.
The following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ is monotone autocontinuous from below.
\item[\us{(ii)}] For any $\{f_n\}_\seqn\subset\calF_0(X)$ and $f\in\calF_0(X)$,
if they satisfy
\begin{enumerate}
\item[\us{(a)}] for each $\seqn$ there is $N_n\in\calA$ such that $\mu(N_n)=0$
and $f_n(x)\leq f_{n+1}(x)\leq f(x)$ for every $x\not\in N_n$,
\item[\us{(b)}] $f_n\mconv f$,
\end{enumerate}
then $\mu(\{f_n>t\})\uparrow\mu(\{f>t\})$ for every continuity point $t$ of the function $t\mapsto\mu(\{f>t\})$.
\item[\us{(iii)}] The Choquet $q$-th monotone nondecreasing almost uniform convergence theorem holds
for $\mu$, that is, for any $\{f_n\}_\seqn\subset\calF_0^+(X)$ and $f\in\calF_0^+(X)$,
if they satisfy condition \us{(a)}\/ and
\begin{enumerate}
\item[\us{(c)}] $f_n\to f$ $\mu$-a.u.,
\end{enumerate}
then $\Ch(\mu,f_n^q)\uparrow\Ch(\mu,f^q)$.
\end{enumerate}
\end{proposition}
\begin{proof}
In this proof, for each $t\geq 0$ and $\seqn$, let $\varphi_n(t):=\mu(\{f_n>t\})$
and $\varphi(t):=\mu(\{f>t\})$.
\par
(i)$\Rightarrow$(ii)\ Let $t_0\in\bbR$ be a continuity point of $\varphi$.
The null-additivity of $\mu$ and condition (a) imply that
\[
\varphi_n(t)\leq\varphi_{n+1}(t)\leq\varphi(t)
\]
for every $t\in\bbR$ and $\seqn$, which yields $\sup_\seqn\varphi_n(t_0)\leq\varphi(t_0)$.
It thus suffices to show
\begin{equation}\label{eqn:Ch-conv1}
\varphi(t_0)\leq\sup_\seqn\varphi_n(t_0).
\end{equation}
To see this, fix $\ep>0$ and let $A:=\{f>t_0+\ep\}$ and $B_n:=\{|f_n-f|>\ep\}$
for every $\seqn$.
Then $\mu(B_n)\to 0$ by condition (b).
By condition~(a) and the null-additivity of $\mu$, one can find a nondecreasing sequence $\{N_n\}_\seqn$
of $\mu$-null sets such that $f_n(x)\leq f_{n+1}(x)\leq f(x)$ for every $x\not\in N_n$.
Then it is easy to verify that $\{B_n\setminus N_n\}_\seqn$ is nonincreasing and
$A\setminus (B_n\setminus N_n)\subset\{f_n>t_0\}\cup N_n$, so that
\begin{equation}\label{eqn:Ch-conv2}
\mu(A\setminus (B_n\setminus N_n))\leq\mu(\{f_n>t_0\})
\end{equation}
for every $\seqn$.
Furthermore, the monotone autocontinuity of $\mu$ from below yields
\begin{equation}\label{eqn:Ch-conv3}
\mu(A)=\sup_\seqn\mu(A\setminus (B_n\setminus N_n))
\end{equation}
since $\mu(B_n\setminus N_n)\to 0$.
Therefore it follows from~\eqref{eqn:Ch-conv2} and~\eqref{eqn:Ch-conv3} that
\[
\varphi(t_0+\ep)\leq\sup_\seqn\varphi_n(t_0),
\]
which implies~\eqref{eqn:Ch-conv1} since $t_0$ is a continuity point of $\varphi$.
\par
(ii)$\Rightarrow$(iii)\ Let $D$ be the set of all discontinuity points of $\varphi$;
since $\varphi$ is nonincreasing, $D$ is at most countable, hence Lebesgue negligible.
Condition (c) always implies condition (b), so that
$\varphi_n(t)\uparrow\varphi(t)$ for every $t\not\in D$ by assertion (ii). Hence it follows that
\[
\Ch(\mu,f_n^q)=\int_0^\infty qt^{q-1}\varphi_n(t)dt\uparrow
\int_0^\infty qt^{q-1}\varphi(t)dt=\Ch(\mu,f^q)
\]
by the Lebesgue monotone convergence theorem and the transformation formula of the Choquet integral.
\par
(iii)$\Rightarrow$(i)\ Take $A,B_n\in\calA$ and assume that $\{B_n\}_\seqn$ is nonincreasing
and $\mu(B_n)\to 0$. For any $\seqn$, let $f_n:=\chi_{A\setminus B_n}$ and $f:=\chi_A$.
Then they satisfy conditions~(a) and~(c), hence it follows from assertion (iii) that
\[
\mu(A\setminus B_n)=\Ch(\mu,f_n^q)\to\Ch(\mu,f^q)=\mu(A).
\]
Therefore, $\mu$ is monotone autocontinuous from below.
\end{proof}
\begin{remark}
See~\cite[Theorem~3.5]{Rebille} for a primitive form of Proposition~\ref{Ch-conv}.
\end{remark}
\begin{proposition}\label{C-Fatou}
Let $\mu\in\calM(X)$. Let $0<q<\infty$.
The following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ is monotone autocontinuous from below.
\item[\us{(ii)}] The Choquet $q$-th Fatou almost uniform convergence lemma holds for $\mu$, that is,
for any $\{f_n\}_\seqn\subset\calF_0^+(X)$ and $f\in\calF_0^+(X)$,
if $f_n\to f$ $\mu$-a.u., then $\Ch(\mu,f^q)\leq\liminf_\ninfty\Ch(\mu,f_n^q)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i)$\Rightarrow$(ii)\ For each $\seqn$, let $g_n:=\inf_{k\geq n}f_k$.
Since $\{g_n\}_\seqn$ and $f$ satisfy conditions (a) and (c) of Proposition~\ref{Ch-conv}, it follows that
\[
\Ch(\mu,f^q)
=\lim_\ninfty\Ch(\mu,g_n^q)=\lim_\ninfty\Ch(\mu,\inf_{k\geq n}f_k^q)\leq\liminf_\ninfty\Ch(\mu,f_n^q).
\]
\par
(ii)$\Rightarrow$(i)\ Take $A,B_n\in\calA$ and assume that $\{B_n\}_\seqn$ is nonincreasing and
$\mu(B_n)\to 0$. For any $\seqn$, let $f_n:=\chi_{A\setminus B_n}$ and $f:=\chi_A$.
Then $f_n\to f$ $\mu$-a.u., so that assertion (ii) yields
\begin{align*}
\mu(A)
&=\Ch(\mu,f^q)\leq\liminf_\ninfty\Ch(\mu,f_n^q)\\
&=\liminf_\ninfty\mu(A\setminus B_n)\leq\limsup_\ninfty\mu(A\setminus B_n)\leq\mu(A).
\end{align*}
Therefore $\mu$ is monotone autocontinuous from below.
\end{proof}
\begin{theorem}\label{L-comp1}
Let $\mu\in\calM(X)$. Let $0<p<\infty$ and $0<q<\infty$.
Assume that $\mu$ is monotone autocontinuous from below and satisfies property~(C) and the (p.g.p.).
Then $\calL^{p,q}(\mu)$ and $L^{p,q}(\mu)$ are quasi-complete.
\end{theorem}
\begin{proof}
We first prove the conclusion for $\calLq(\mu)$.
Let $\{f_n\}_\seqn\subset\calLq(\mu)$ be bounded and Cauchy.
By (4) of Proposition~\ref{pqnorm}, the sequence $\{f_n\}_\seqn$ is Cauchy in $\mu$-measure,
so that by Theorem~\ref{comp} one can find a subsequence $\{f_{n_k}\}_\seqk$ of $\{f_n\}_\seqn$
and a function $f\in\calF_0(X)$ such that $f_{n_k}\to f$ $\mu$-a.u.
\par
Let $\ep>0$.
Since $\{f_n\}_\seqn$ is Cauchy,
there is $n_0\in\bbN$ such that if $m,n\geq n_0$ then
\begin{equation}\label{eqn:L-comp1}
\Ch(\mu,|f_m-f_n|^q)<\ep.
\end{equation}
For any fixed $n$ with $n\geq n_0$, we have $|f_{n_k}-f_n|\to |f-f_n|$ $\mu$-a.u.,
hence it follows from the monotone autocontinuity of $\mu$ from below
and Proposition~\ref{C-Fatou} that
\begin{align}\label{eqn:L-comp2}
\Ch(\mu,|f-f_n|^q)
&\leq\liminf_\kinfty\Ch(\mu,|f_{n_k}-f_n|^q)\nonumber\\
&\leq\limsup_\kinfty\Ch(\mu,|f_{n_k}-f_n|^q)\nonumber\\
&\leq\sup_{k\geq l}\Ch(\mu,|f_{n_k}-f_n|^q)
\end{align}
for every $\seql$.
Since $n_k\to\infty$, there is $k_0\in\bbN$ such that $n_k\geq n_0$ for every $k\geq k_0$,
hence it follows from \eqref{eqn:L-comp1} and \eqref{eqn:L-comp2} that
\[
\Ch(\mu,|f-f_n|^q)\leq\sup_{k\geq k_0}\Ch(\mu,|f_{n_k}-f_n|^q)\leq\ep,
\]
which yields $\qn{f-f_n}\to 0$.
\par
Next we show that $f\in\calLq(\mu)$.
Since $\{f_n\}_\seqn$ is bounded and $|f_{n_k}|\to |f|$ $\mu$-a.u., Proposition~\ref{C-Fatou} implies that
\[
\qn{f}^q=\Ch(\mu,|f|^q)\leq\liminf_\kinfty\Ch(\mu,|f_{n_k}|^q)\leq\sup_\seqn\qn{f_n}^q<\infty.
\]
Hence $f\in\calLq(\mu)$. Thus $\calLq(\mu)$ is quasi-complete.
Since every nonadditive measure that is monotone autocontinuous from below is null-additive,
the quotient space $L^q(\mu)$ and the quotient prenorm $\qn{\cdot}$ are well-defined,
and it turns out that $L^q(\mu)$ is quasi-complete.
\par
Recall that
\[
\pqnm{f}{\mu}=\left(\frac{p}{q}\right)^{1/q}\qnm{f}{\mupq}
\]
for every $f\in\calLpq(\mu)$ and that $\calLpq(\mu)=\calLq(\mupq)$.
The conclusion for $\calLpq(\mu)$ and $L^{p,q}(\mu)$ thus follows
since $\mupq$ is monotone autocontinuous from below
and satisfies property~(C) and the (p.g.p.)~if and only if so is $\mu$, respectively.
\end{proof}
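To illustrate the identity just recalled, both sides can be computed directly for a characteristic function (here $\mupq$ is the power of $\mu$, so that $\mupq(A)=\mu(A)^{q/p}$): for every $A\in\calA$,
\[
\pqn{\chi_A}=\left(\frac{p}{q}\right)^{1/q}\qnm{\chi_A}{\mupq}
=\left(\frac{p}{q}\right)^{1/q}\Ch(\mupq,\chi_A)^{1/q}
=\left(\frac{p}{q}\right)^{1/q}\mu(A)^{1/p},
\]
in accordance with (1) of Proposition~\ref{pqnorm} used in the example below.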
We now reach the completeness of the spaces $\calLpq(\mu)$ and $L^{p,q}(\mu)$.
\begin{corollary}\label{L-comp2}
Let $\mu\in\calM(X)$. Let $0<p<\infty$ and $0<q<\infty$.
Assume that $\mu$ is relaxed subadditive, monotone autocontinuous from below,
and satisfies property~(C).
Then $\calLpq(\mu)$ is complete with respect to the quasi-seminorm $\pqn{\cdot}$
and $L^{p,q}(\mu)$ is complete with respect to the quasi-norm $\pqn{\cdot}$.
\end{corollary}
\begin{proof}
By assumption, $\mu$ is also null-additive and satisfies the (p.g.p.).
Furthermore, by (8) of Proposition~\ref{pqnorm} the prenorm $\pqn{\cdot}$ is a quasi-seminorm
on $\calLpq(\mu)$ and hence a quasi-norm on $L^{p,q}(\mu)$, so that every Cauchy sequence is bounded
with respect to $\pqn{\cdot}$.
The conclusion thus follows from Theorem~\ref{L-comp1}.
\end{proof}
\begin{example}
Let $X:=\bbN$ and $\calA:=2^X$.
Let $\mu$ be the nonadditive measure given in Proposition~\ref{counter1}.
Then $\mu$ is subadditive, hence relaxed subadditive, monotone autocontinuous from below, and satisfies the (p.g.p.),
while it does not satisfy property~(C).
For each $\seqn$, let $A_n:=\{1,2,\dots,n\}$ and $f_n:=\chi_{A_n}$.
Then it follows from (4) of Proposition~\ref{counter1} that $\{f_n\}_\seqn$ does not converge in $\mu$-measure.
Let $0<p<\infty$ and $0<q<\infty$. Then it follows from (1) of Proposition~\ref{pqnorm} that
\[
\pqn{f_n}=\left(\frac{p}{q}\right)^{1/q}\left(\sum_{i=1}^n\frac{1}{2^i}\right)^{1/p}
\]
and
\[
\pqn{f_{n+l}-f_n}=\left(\frac{p}{q}\right)^{1/q}\left(\sum_{i=n+1}^{n+l}\frac{1}{2^i}\right)^{1/p}
\]
for every $n,l\in\bbN$, hence the sequence
$\{f_n\}_\seqn\subset\calLpq(\mu)$ is bounded and Cauchy.
Suppose, contrary to our claim, that $\calLpq(\mu)$ is quasi-complete.
Then, $\{f_n\}_\seqn$ converges with respect to $\pqn{\cdot}$,
hence it converges in $\mu$-measure by (4) of Proposition~\ref{pqnorm},
which is impossible.
Therefore $\calLpq(\mu)$ and $L^{p,q}(\mu)$ are not quasi-complete.
This means that property~(C) cannot be dropped
in Theorem~\ref{L-comp1} and Corollary~\ref{L-comp2}.
\end{example}
We turn to the separability of the Choquet-Lorentz space.
For the proof, a suitable convergence theorem is again needed.
\begin{proposition}\label{monotone}
Let $\mu\in\calM(X)$. The following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ is conditionally order continuous.
\item[\us{(ii)}] For any $\{f_n\}_\seqn\subset\calF_0^+(X)$, if it satisfies
\begin{enumerate}
\item[\us{(a)}] $f_{n+1}(x)\leq f_n(x)$ for every $x\in X$ and $\seqn$,
\item[\us{(b)}] $f_n(x)\to 0$ for every $x\in X$,
\item[\us{(c)}] $\Ch(\mu,f_1)<\infty$,
\end{enumerate}
then $\Ch(\mu,f_n)\to 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i)$\Rightarrow$(ii)\ For any $t>0$, condition (c) yields
\[
\mu(\{f_1\geq t\})\leq\frac{\Ch(\mu,f_1)}{t}<\infty,
\]
hence conditions (a) and (b) imply that $\mu(\{f_n\geq t\})\downarrow 0$
since $\mu$ is conditionally order continuous.
Furthermore, by condition (c) we have
\[
\int_0^\infty\mu(\{f_1\geq t\})\,dt=\Ch(\mu,f_1)<\infty,
\]
so that
\[
\Ch(\mu,f_n)=\int_0^\infty\mu(\{f_n\geq t\})dt\to 0
\]
by the monotone convergence theorem for the Lebesgue integral.
\par
(ii)$\Rightarrow$(i)\ Take $A_n\in\calA$ and assume that $A_n\downarrow\eset$ and $\mu(A_1)<\infty$.
For each $\seqn$, let $f_n:=\chi_{A_n}$.
Then the sequence $\{f_n\}_\seqn$ satisfies conditions (a)--(c), so that $\mu(A_n)=\Ch(\mu,f_n)\to 0$.
Hence $\mu$ is conditionally order continuous.
\end{proof}
\begin{theorem}\label{L-dense}
Let $\mu\in\calM(X)$. Let $0<p<\infty$ and $0<q<\infty$.
Assume that $\mu$ is conditionally order continuous.
Then $\calS(X)\cap\calL^{p,q}(\mu)$ is dense in $\calL^{p,q}(\mu)$.
If $\mu$ is additionally assumed to be null-additive, then $S(X)\cap L^{p,q}(\mu)$ is dense in $L^{p,q}(\mu)$.
\end{theorem}
\begin{proof}
We first prove the conclusion for $\calLq(\mu)$.
Let $f\in\calLq(\mu)$ and take a sequence $\{h_n\}_\seqn\subset\calS(X)$ such that
$|h_n(x)|\leq|f(x)|$ for every $\seqn$ and that $h_n(x)\to f(x)$ for every $x\in X$.
For any $x\in X$ and $\seqn$, let
$g_n(x):=\sup_{k\geq n}|f(x)-h_k(x)|^q$.
Then the sequence $\{g_n\}_\seqn$ satisfies conditions (a)--(c) of Proposition~\ref{monotone},
hence $\Ch(\mu,g_n)\to 0$.
For any $\seqn$,
\begin{align*}
\liminf_\ninfty\Ch(\mu,|f-h_n|^q)
&\leq\limsup_\ninfty\Ch(\mu,|f-h_n|^q)\\
&\leq\sup_{k\geq n}\Ch(\mu,|f-h_k|^q)\\
&\leq\Ch(\mu,g_n),
\end{align*}
hence $\qn{f-h_n}=\Ch(\mu,|f-h_n|^q)^{1/q}\to 0$ since $\Ch(\mu,g_n)\to 0$.
The fact that $h_n\in\calS(X)\cap\calLq(\mu)$ follows since $|h_n|\leq |f|$ and $f\in\calLq(\mu)$.
It thus follows that $\calS(X)\cap\calLq(\mu)$ is dense in $\calLq(\mu)$.
If $\mu$ is additionally assumed to be null-additive, then the quotient spaces $S(X)$ and $L^q(\mu)$
and the prenorm $\qn{\cdot}$ are well-defined,
and it turns out that $S(X)\cap L^q(\mu)$ is dense in $L^q(\mu)$.
\par
The conclusion for $\calLpq(\mu)$ and $L^{p,q}(\mu)$ thus follows
since $\mupq$ is conditionally order continuous and null-additive if and only if so is $\mu$,
respectively.
\end{proof}
\begin{theorem}\label{L-sep}
Let $\mu\in\calM(X)$. Let $0<p<\infty$ and $0<q<\infty$.
Assume that $\mu$ is conditionally order continuous and relaxed subadditive.
Assume that the restriction of $\mu$ to the collection $\calA_0:=\{A\in\calA\colon\mu(A)<\infty\}$
has a countable basis. Then there is a countable subset $\calE$ of $\calLpq(\mu)$ such that
for any $f\in\calLpq(\mu)$ and $\ep>0$ there is $\psi\in\calE$ such that $\pqn{f-\psi}<\ep$.
Hence $\calLpq(\mu)$ is separable.
If $\mu$ is additionally assumed to be null-additive, then $L^{p,q}(\mu)$ is separable.
\end{theorem}
\begin{proof}
We first prove the conclusion for $\calLq(\mu)$.
To this end, observe that there is a constant $L>2$ such that
\begin{equation}\label{eqn:L-sep1}
\qn{f_1+f_2+\dots+f_m}\leq L^m\sum_{k=1}^m\qn{f_k}
\end{equation}
for every $\seqm$ and $f_1,f_2,\dots,f_m\in\calF_0(X)$.
Indeed, if $\mu$ is $K$-relaxed subadditive for some $K\geq 1$, then by the relaxed subadditivity
of the Choquet integral stated in Subsection~\ref{integrals},
for any $f,g\in\calF_0^+(X)$ it follows that
\[
\Ch(\mu,f+g)\leq 2K\bigl\{\Ch(\mu,f)+\Ch(\mu,g)\bigr\}.
\]
Hence, repeated application of the inequality
\[
|a+b|^q\leq 2^q(|a|^q+|b|^q),
\]
which holds for every $a,b\in\bbR$, yields
\begin{equation}\label{eqn:L-sep2}
\qn{f+g}\leq L(\qn{f}+\qn{g}),
\end{equation}
where $L:=2^{1+\frac{2}{q}}K^{\frac{1}{q}}>2$.
Therefore, for any $\seqm$ and $f_1,f_2,\dots,f_m\in\calF_0(X)$,
we may apply~\eqref{eqn:L-sep2} $m$ times to obtain~\eqref{eqn:L-sep1}.
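In more detail, one way to carry out this computation is the following sketch, which uses the monotonicity and positive homogeneity of the Choquet integral together with the elementary bound $a^q+b^q\leq 2(a+b)^q$ for $a,b\geq 0$:
\begin{align*}
\qn{f+g}^q
&=\Ch(\mu,|f+g|^q)\leq 2^q\,\Ch(\mu,|f|^q+|g|^q)\\
&\leq 2^{q+1}K\bigl\{\Ch(\mu,|f|^q)+\Ch(\mu,|g|^q)\bigr\}
=2^{q+1}K\bigl(\qn{f}^q+\qn{g}^q\bigr)\\
&\leq 2^{q+2}K\bigl(\qn{f}+\qn{g}\bigr)^q.
\end{align*}
Taking $q$-th roots then gives~\eqref{eqn:L-sep2} with $L=2^{1+2/q}K^{1/q}$.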
\par
Since the restriction of $\mu$ to $\calA_0$ has a countable basis,
there is a countable set $\calD$ of $\calA$-measurable sets with finite $\mu$-measure
such that for any $\ep>0$ and $A\in\calA$ with $\mu(A)<\infty$,
there is $D\in\calD$ for which $\mu(A\triangle D)<\ep$.
Let $\calE$ be the set of all finite linear combinations of the characteristic functions of sets in $\calD$
with rational coefficients. Then $\calE$ is countable.
If $f\in\calE$, then $f$ can be expressed by
\[
f=\sum_{k=1}^n r_k\chi_{D_k},
\]
where $n\in\bbN$, $r_1,\dots,r_n$ are rational numbers, and $D_1,\dots,D_n\in\calD$.
It thus follows from~\eqref{eqn:L-sep1} that
\[
\qn{f}=\biggl\|\sum_{k=1}^n r_k\chi_{D_k}\biggr\|_q\leq L^n\sum_{k=1}^n\qn{r_k\chi_{D_k}}
=L^n\sum_{k=1}^n|r_k|\mu(D_k)^{1/q}<\infty,
\]
which shows that $\calE$ is a subset of $\calLq(\mu)$.
\par
Let $f\in\calLq(\mu)$ and $\ep>0$.
Then by Theorem~\ref{L-dense} there is $\xi\in\calS(X)\cap\calLq(\mu)$ such that
\begin{equation}\label{eqn:L-sep3}
\qn{f-\xi}<\frac{\ep}{3L^3}.
\end{equation}
Since $\xi$ is a simple function, it can be expressed by
\[
\xi=\sum_{k=1}^n c_k\chi_{A_k},
\]
where $\seqn$, $c_1,\dots,c_n\in\bbR$, $A_1,\dots,A_n\in\calA$, $A_i\cap A_j=\eset\;\,(i\ne j)$,
and $c_k\ne 0$ for $k=1,2,\dots,n$. Since
\[
|\xi|^q=\biggl|\sum_{k=1}^n c_k\chi_{A_k}\biggr|^q=\sum_{k=1}^n|c_k|^q\chi_{A_k}\geq |c_k|^q\chi_{A_k},
\]
it follows that
\[
\qn{\xi}=\Ch(\mu,|\xi|^q)^{1/q}\geq\Ch(\mu,|c_k|^q\chi_{A_k})^{1/q}=|c_k|\mu(A_k)^{1/q},
\]
which yields $\mu(A_k)<\infty$ for $k=1,2,\dots,n$.
\par
Next take rational numbers $r_1,\dots,r_n$ such that
\begin{equation}\label{eqn:L-sep4}
\max_{1\leq k\leq n}|c_k-r_k|<\frac{\ep}{3L^{n+3}\left(1+\sum_{k=1}^n\mu(A_k)^{1/q}\right)}
\end{equation}
and let
\[
\varphi:=\sum_{k=1}^n r_k\chi_{A_k}.
\]
Since
\[
|\xi-\varphi|=\sum_{k=1}^n|c_k-r_k|\chi_{A_k},
\]
it follows from~\eqref{eqn:L-sep1} that
\begin{align}\label{eqn:L-sep5}
\qn{\xi-\varphi}
&\leq L^n\sum_{k=1}^n\qn{|c_k-r_k|\chi_{A_k}}\nonumber\\
&=L^n\sum_{k=1}^n|c_k-r_k|\mu(A_k)^{1/q}\nonumber\\
&<L^n\sum_{k=1}^n\mu(A_k)^{1/q}\frac{\ep}{3L^{n+3}\left(1+\sum_{k=1}^n\mu(A_k)^{1/q}\right)}\nonumber\\
&<\frac{\ep}{3L^3}.
\end{align}
\par
Finally, since $\mu(A_k)<\infty$ for $k=1,2,\dots,n$, take $D_1,\dots,D_n\in\calD$ such that
\begin{equation}\label{eqn:L-sep6}
\max_{1\leq k\leq n}\mu(A_k\triangle D_k)<\left\{\frac{\ep}{3L^{n+3}(1+\sum_{k=1}^n|r_k|)}\right\}^q
\end{equation}
and let
\[
\psi:=\sum_{k=1}^n r_k\chi_{D_k}.
\]
In the same way as calculating~\eqref{eqn:L-sep5} we obtain
\begin{equation}\label{eqn:L-sep7}
\qn{\varphi-\psi}<\frac{\ep}{3L^3}.
\end{equation}
It thus follows from~\eqref{eqn:L-sep1}, \eqref{eqn:L-sep3}, \eqref{eqn:L-sep5}, and~\eqref{eqn:L-sep7}
that $\qn{f-\psi}<\ep$.
Hence $\calLq(\mu)$ is separable. The separability of $L^q(\mu)$ is now obvious.
\par
The conclusion for $\calLpq(\mu)$ and $L^{p,q}(\mu)$ thus follows since $\mupq$ is conditionally order continuous,
relaxed subadditive, and null-additive if and only if so is $\mu$,
respectively.
\end{proof}
\section{The Lorentz space of weak type}
In this section, the Lorentz space of weak type is defined by using the Shilkret integral
and its completeness and dense subsets are considered.
\begin{definition}\label{wL}
Let $\mu\in\calM(X)$. Define the function $\pinftyn{\cdot}\colon\calF_0(X)\to [0,\infty]$ by
\[
\pinftyn{f}:=\Sh(\mu^{1/p},|f|)
\]
for every $f\in\calF_0(X)$ and let
\[
\calL^{p,\infty}(\mu):=\{f\in\calF_0(X)\colon\pinftyn{f}<\infty\}.
\]
If the measure $\mu$ needs to be specified, then $\pinftyn{f}$ is written as $\pinftynm{f}{\mu}$.
Then the space $\calL^{p,\infty}(\mu)$ is called the \textit{Lorentz space of weak type}\/ and
the prenorm $\pinftyn{\cdot}$ is called the \textit{Lorentz prenorm of weak type}\/ on $\calL^{p,\infty}(\mu)$.
\end{definition}
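\par
Unpacking the Shilkret integral, the prenorm just defined can be written explicitly as
\[
\pinftyn{f}=\Sh(\mu^{1/p},|f|)=\sup_{t\in [0,\infty)}t\,\mu(\{|f|>t\})^{1/p}
\]
for every $f\in\calF_0(X)$; this explicit form is used repeatedly in what follows.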
\par
If the prenorm $\pinftyn{\cdot}$ on $\calL^{p,\infty}(\mu)$ is a quasi-seminorm,
then $\calL^{p,\infty}(\mu)$ is a real linear subspace of $\calF_0(X)$, but this is not the case in general.
When $\mu$ is $\sigma$-additive, the space $\calL^{p,\infty}(\mu)$
is nothing but the ordinary Lorentz space of weak type (this is also referred to as the weak $\calL^p$ space);
see~\cite[Theorem~6.6]{C-R}.
The prenorm $\pinftyn{\cdot}$ does not satisfy the triangle inequality in general even if $\mu$ is $\sigma$-additive.
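For instance, let $X:=(0,1)$ with the Borel $\sigma$-field, let $\lambda$ be the Lebesgue measure, and let $p=1$. For $f(x):=1/x$ and $g(x):=1/(1-x)$ one computes $\pinftynm{f}{\lambda}=\pinftynm{g}{\lambda}=1$, while $f(x)+g(x)=1/(x(1-x))\geq 4$ for every $x\in X$, so that
\[
\pinftynm{f+g}{\lambda}\geq\sup_{0\leq t<4}t\,\lambda(\{f+g>t\})=\sup_{0\leq t<4}t=4
>2=\pinftynm{f}{\lambda}+\pinftynm{g}{\lambda}.
\]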
\begin{proposition}\label{P-wL}
Let $\mu\in\calM(X)$. Let $0<p<\infty$ and $0<q<\infty$.
\begin{enumerate}
\item[\us{(1)}] For any $A\in\calA$ it follows that $\pinftyn{\chi_A}=\mu(A)^{1/p}$.
\item[\us{(2)}] For any $f\in\calL^{p,\infty}(\mu)$
it follows that $\pinftyn{f}=0$ if and only if $\mu(\{|f|>c\})=0$ for every $c>0$;
they are equivalent to $\mu(\{|f|>0\})=0$ if $\mu$ is null-continuous.
\item[\us{(3)}] For any $f\in\calL^{p,\infty}(\mu)$ and $c>0$ it follows that $\pinftyn{cf}=|c|\pinftyn{f}$.
Hence $\pinftyn{\cdot}$ is homogeneous.
\item[\us{(4)}] For any $f\in\calL^{p,\infty}(\mu)$
and $c>0$ it follows that $\mu(\{|f|\geq c\})\leq\pinftyn{f}^p/c^p$.
\item[\us{(5)}] For any $f\in\calL^{p,\infty}(\mu)$ it follows that $\pinftyn{f}\leq\pqn{f}$.
\item[\us{(6)}] For any $f,g\in\calL^{p,\infty}(\mu)$, if $|f|\leq |g|$ then $\pinftyn{f}\leq\pinftyn{g}$.
\item[\us{(7)}] $\mu$ is weakly null-additive if and only if $\pinftyn{\cdot}$ is weakly null-additive.
\item[\us{(8)}] $\mu$ is null-additive if and only if $\pinftyn{\cdot}$ is null-additive.
\item[\us{(9)}] If $\mu$ is $K$-relaxed subadditive for some $K\geq 1$, then $\pinftyn{\cdot}$ satisfies
the $2^{1+\frac{1}{p}}K^{\frac{1}{p}}$-relaxed triangle inequality.
\item[\us{(10)}] $\mu$ is null-additive if and only if it follows that $\pinftyn{f}=\pinftyn{g}$
whenever $f,g\in\calL^{p,\infty}(\mu)$ and $f\sim g$.
\end{enumerate}
\end{proposition}
\begin{proof}
Assertions (1)--(6) are easy to prove
and assertions (7) and (8) can be derived in the same manner as~\cite[Proposition~3.2]{Kawabe2021}.
\par
(9)\ Let $f,g\in\calL^{p,\infty}(\mu)$. For any $t>0$, we have
\[
\{|f+g|>t\}\subset\{2|f|>t\}\cup\{2|g|>t\},
\]
hence
\begin{equation}\label{eqn:P-wL1}
\mu(\{|f+g|>t\})\leq K\bigl\{\mu(\{2|f|>t\})+\mu(\{2|g|>t\})\bigr\}
\end{equation}
by the $K$-relaxed subadditivity of $\mu$. Therefore it follows that
\begin{align*}
\pinftyn{f+g}
&=\sup_{t\in [0,\infty)}t\mu(\{|f+g|>t\})^{1/p}\\
&\leq\sup_{t\in [0,\infty)}t\Bigl[K\bigl\{\mu(\{2|f|>t\})+\mu(\{2|g|>t\})\bigr\}\Bigr]^{1/p}\\[1mm]
&\leq 2^\frac{1}{p}K^\frac{1}{p}\sup_{t\in [0,\infty)}t\bigl\{\mu(\{2|f|>t\})^{1/p}+\mu(\{2|g|>t\})^{1/p}\bigr\}\\
&\leq 2^\frac{1}{p}K^\frac{1}{p}\biggl(\sup_{t\in [0,\infty)}t\mu(\{2|f|>t\})^{1/p}
+\sup_{t\in [0,\infty)}t\mu(\{2|g|>t\})^{1/p}\biggr)\\[1mm]
&=2^\frac{1}{p}K^\frac{1}{p}\Bigl(\Sh(\mu^{1/p},2|f|)+\Sh(\mu^{1/p},2|g|)\Bigr)\\[1mm]
&=2^{1+\frac{1}{p}}K^\frac{1}{p}\bigl(\pinftyn{f}+\pinftyn{g}\bigr),
\end{align*}
where the first inequality is due to~\eqref{eqn:P-wL1} and the second is due to~\eqref{elementary}.
\par
(10)\ It can be proved in the same manner as (6) of Proposition~\ref{PDS}.
\end{proof}
The quotient space
\[
L^{p,\infty}(\mu):=\{[f]\colon f\in\calL^{p,\infty}(\mu)\}
\]
is defined by the equivalence relation stated in Subsection~\ref{equiv}.
Given an equivalence class $[f]\in L^{p,\infty}(\mu)$,
define the prenorm on $L^{p,\infty}(\mu)$ by $\pinftyn{[f]}:=\pinftyn{f}$,
which is well-defined if $\mu$ is null-additive by (10) of Proposition~\ref{P-wL}.
This prenorm has the same properties as the prenorm on $\calL^{p,\infty}(\mu)$
and separates points of $L^{p,\infty}(\mu)$, that is, for any $[f]\in L^{p,\infty}(\mu)$,
if $\pinftyn{[f]}=0$ then $[f]=0$.
\par
The following convergence theorems of the Shilkret integral are needed to show the completeness
of $\calL^{p,\infty}(\mu)$ and $L^{p,\infty}(\mu)$ and of interest in themselves.
\begin{proposition}\label{Sh-conv}
Let $\mu\in\calM(X)$. The following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ is monotone autocontinuous from below.
\item[\us{(ii)}] The Shilkret monotone nondecreasing almost uniform convergence theorem holds
for $\mu$, that is, for any $\{f_n\}_\seqn\subset\calF_0^+(X)$ and $f\in\calF_0^+(X)$,
if they satisfy
\begin{enumerate}
\item[\us{(a)}] $f_n(x)\leq f_{n+1}(x)\leq f(x)$ for every $x\in X$ and $\seqn$,
\item[\us{(b)}] $f_n\to f$ $\mu$-a.u.,
\end{enumerate}
then it follows that $\Sh(\mu,f_n)\uparrow\Sh(\mu,f)$.
\item[\us{(iii)}] The Shilkret Fatou almost uniform convergence lemma holds for $\mu$, that is,
for any $\{f_n\}_\seqn\subset\calF_0^+(X)$ and $f\in\calF_0^+(X)$, if $f_n\to f$ $\mu$-a.u.,
then it follows that
\[
\Sh(\mu,f)\leq\liminf_\ninfty\Sh(\mu,f_n).
\]
\end{enumerate}
\end{proposition}
\begin{proof}
(i)$\Rightarrow$(ii)\ We first show the conclusion in the case where $\mu$ is finite.
For each $r>0$, let $g:=f\land r$ and $g_n:=f_n\land r$ for every $\seqn$.
Fix $\seqk$ and take the continuity points $c_1,c_2,\dots,c_k$ of the function $\mu(\{g>t\})$
such that
\begin{itemize}
\item $0=c_0<c_1<c_2<\dots<c_{k-1}<c_k<r$,
\item $|c_i-c_{i-1}|<2r/k\;\,(i=1,2,\dots,k)$ and $|r-c_k|<r/k$.
\end{itemize}
For each $\seqn$, let
\[
h_{n,k}:=\bigvee_{i=1}^k c_i\chi_{\{g_n>c_i\}}\quad\mbox{and}\quad
h_k:=\bigvee_{i=1}^k c_i\chi_{\{g>c_i\}}.
\]
Then $0\leq h_{n,k}(x)\leq r$, $0\leq h_k(x)\leq r$,
$|h_{n,k}(x)-g_n(x)|<2r/k$, and $|h_k(x)-g(x)|<2r/k$ for every $x\in X$ and $\seqn$.
\par
Now, since $\{g_n\}_\seqn$ and $g$ satisfy conditions (a) and (b) of assertion (ii) of Proposition~\ref{Ch-conv},
it follows that
\[
\mu(\{g_n>c_i\})\uparrow\mu(\{g>c_i\})
\]
for every $i\in\{1,2,\dots,k\}$. Hence
\begin{equation}\label{eqn:Sh-conv1}
\Sh(\mu,h_{n,k})=\bigvee_{i=1}^k c_i\mu(\{g_n>c_i\})\to\bigvee_{i=1}^k c_i\mu(\{g>c_i\})
=\Sh(\mu,h_k).
\end{equation}
Since
\[
|\Sh(\mu,g)-\Sh(\mu,h_k)|\leq\frac{2r}{k}\mu(X)\mbox{ and }
|\Sh(\mu,g_n)-\Sh(\mu,h_{n,k})|\leq\frac{2r}{k}\mu(X)
\]
for every $\seqn$, by~\eqref{eqn:Sh-conv1} we have
\[
\limsup_\ninfty|\Sh(\mu,g_n)-\Sh(\mu,g)|\leq\frac{4r}{k}\mu(X).
\]
Since $\seqk$ is arbitrary, letting $\kinfty$ yields
\[
\Sh(\mu,g)=\lim_\ninfty\Sh(\mu,g_n)=\sup_\seqn\Sh(\mu,g_n).
\]
It thus follows from the upper marginal continuity of the Shilkret integral stated in Subsection~\ref{integrals} that
\begin{align*}
\Sh(\mu,f)
&=\sup_{r>0}\Sh(\mu,f\land r)=\sup_{r>0}\sup_\seqn\Sh(\mu,f_n\land r)\\
&=\sup_\seqn\sup_{r>0}\Sh(\mu,f_n\land r)=\sup_\seqn\Sh(\mu,f_n).
\end{align*}
\par
We turn to the general case.
For any $s>0$, since the nonadditive measure $\mu\land s$, which is defined by
$(\mu\land s)(A):=\mu(A)\land s$ for every $A\in\calA$, is finite and monotone
autocontinuous from below, it follows from what has been shown above that
\[
\sup_\seqn\Sh(\mu\land s,f_n)=\Sh(\mu\land s,f),
\]
hence
\begin{align*}
\Sh(\mu,f)
&=\sup_{s>0}\sup_\seqn\Sh(\mu\land s,f_n)
=\sup_\seqn\sup_{s>0}\Sh(\mu\land s,f_n)=\sup_\seqn\Sh(\mu,f_n).
\end{align*}
\par
(ii)$\Rightarrow$(iii)\ For each $\seqn$, let $g_n:=\inf_{k\geq n}f_k$.
Then, $\{g_n\}_\seqn$ and $f$ satisfy conditions (a) and (b) of assertion (ii).
It thus follows that
\[
\Sh(\mu,f)
=\lim_\ninfty\Sh(\mu,g_n)\leq\lim_\ninfty\inf_{k\geq n}\Sh(\mu,f_k)=\liminf_\ninfty\Sh(\mu,f_n).
\]
\par
(iii)$\Rightarrow$(i)\ Take $A,B_n\in\calA$ and assume that $\{B_n\}_\seqn$ is nonincreasing and
$\mu(B_n)\to 0$. For each $\seqn$, let $f_n:=\chi_{A\setminus B_n}$ and $f:=\chi_A$.
Then $f_n\to f$ $\mu$-a.u. Hence, assertion (iii) yields
\begin{align*}
\mu(A)
&=\Sh(\mu,f)\leq\liminf_\ninfty\Sh(\mu,f_n)\\
&=\liminf_\ninfty\mu(A\setminus B_n)\leq\limsup_\ninfty\mu(A\setminus B_n)\leq\mu(A).
\end{align*}
Therefore $\mu$ is monotone autocontinuous from below.
\end{proof}
\par
The proofs of the following two results
are left to the reader, since the same proofs as for Theorem~\ref{L-comp1}
and Corollary~\ref{L-comp2} work by using Proposition~\ref{Sh-conv} instead of Proposition~\ref{C-Fatou}.
\begin{theorem}\label{Sh-comp1}
Let $\mu\in\calM(X)$. Let $0<p<\infty$.
Assume that $\mu$ is monotone autocontinuous from below and satisfies property~(C) and the (p.g.p.).
Then $\calL^{p,\infty}(\mu)$ and $L^{p,\infty}(\mu)$ are quasi-complete.
\end{theorem}
\begin{corollary}\label{Sh-comp2}
Let $\mu\in\calM(X)$. Let $0<p<\infty$.
Assume that $\mu$ is relaxed subadditive, monotone autocontinuous from below,
and satisfies property~(C).
Then $\calL^{p,\infty}(\mu)$ is complete with respect to the quasi-seminorm $\pinftyn{\cdot}$
and $L^{p,\infty}(\mu)$ is complete with respect to the quasi-norm $\pinftyn{\cdot}$.
\end{corollary}
\begin{example}
Let $X:=\bbN$ and $\calA:=2^X$.
Let $\mu$ be the nonadditive measure given in Proposition~\ref{counter1}.
Then $\mu$ is subadditive, hence relaxed subadditive, monotone autocontinuous from below,
and satisfies the (p.g.p.), while it does not satisfy property~(C).
For each $\seqn$, let $A_n:=\{1,2,\dots,n\}$ and $f_n:=\chi_{A_n}$.
Then it follows from (4) of Proposition~\ref{counter1} that $\{f_n\}_\seqn$ does not converge in $\mu$-measure.
Let $0<p<\infty$. Then it follows from (1) of Proposition~\ref{P-wL} that
\[
\pinftyn{f_n}=\left(\sum_{i=1}^n\frac{1}{2^i}\right)^{1/p}\quad\mbox{and}\quad
\pinftyn{f_{n+l}-f_n}=\left(\sum_{i=n+1}^{n+l}\frac{1}{2^i}\right)^{1/p}
\]
for every $n,l\in\bbN$, hence the sequence
$\{f_n\}_\seqn\subset\calL^{p,\infty}(\mu)$ is bounded and Cauchy.
Suppose, contrary to our claim, that $\calL^{p,\infty}(\mu)$ is quasi-complete.
Then, $\{f_n\}_\seqn$ converges with respect to $\pinftyn{\cdot}$, hence it converges
in $\mu$-measure by (4) of Proposition~\ref{P-wL}, which is impossible.
Hence $\calL^{p,\infty}(\mu)$ and $L^{p,\infty}(\mu)$ are not quasi-complete.
This means that property~(C) cannot be dropped in Theorem~\ref{Sh-comp1} and Corollary~\ref{Sh-comp2}.
\end{example}
According to Theorem~\ref{L-dense}, the set $\calS(X)\cap\calLpq(\mu)$ is dense in $\calLpq(\mu)$.
The following proposition shows that this is not the case for the Lorentz space of weak type.
\begin{proposition}\label{wLex}
Let $X:=(0,1]$ and $\calA$ be the $\sigma$-field of all Borel subsets of $X$.
Let $\lambda$ be the Lebesgue measure on $\bbR$.
Let $f(x):=1/x$ for every $x\in X$. Then $f\in\calL^{1,\infty}(\lambda)$
and $\calS(X)\subset\calL^{1,\infty}(\lambda)$.
Nevertheless, $\|f-h\|_{1,\infty}\geq 1$ for every $h\in\calS(X)$.
Thus $\calS(X)$ is not dense in $\calL^{1,\infty}(\lambda)$.
\end{proposition}
\begin{proof}
Elementary computation yields $\|f\|_{1,\infty}=1$, hence $f\in\calL^{1,\infty}(\lambda)$.
Take an arbitrary $h\in\calS(X)$ and let $r_0:=\max_{x\in X}|h(x)|<\infty$.
Then it follows that
\[
\|h\|_{1,\infty}=\Sh(\lambda,|h|)\leq\Sh(\lambda,r_0)=r_0<\infty,
\]
hence $\calS(X)\subset\calL^{1,\infty}(\lambda)$.
\par
We proceed to show that $\|f-h\|_{1,\infty}\geq 1$.
In the case where $0\leq r_0\leq 1$, for any $t\geq 0$,
\[
\{(f-r_0)^+>t\}=\begin{cases}
(0,1] & \mbox{if }0\leq t<1-r_0,\\[1mm]
\Bigl(0,\frac{1}{t+r_0}\Bigr) & \mbox{if }t\geq 1-r_0,
\end{cases}
\]
hence $\Sh(\lambda,(f-r_0)^+)=1$.
Meanwhile, in the case where $r_0>1$, for any $t\geq 0$,
\[
\{(f-r_0)^+>t\}=\left(0,\frac{1}{t+r_0}\right),
\]
hence $\Sh(\lambda,(f-r_0)^+)=1$.
Since $|f-h|\geq (f-r_0)^+$, it follows that
\[
\|f-h\|_{1,\infty}=\Sh(\lambda,|f-h|)\geq\Sh(\lambda,(f-r_0)^+)=1,
\]
which implies that $\calS(X)$ is not dense in $\calL^{1,\infty}(\lambda)$.
\end{proof}
\section{The space of the $\bm{\mu}$-essentially bounded functions}\label{EBF}
Unlike the spaces we have considered so far, the space of all $\mu$-essentially bounded functions
is complete under fairly weak conditions.
\begin{definition}\label{Linfty}
Let $\mu\in\calM(X)$. Define the function $\inftyn{\cdot}\colon\calF_0(X)\to [0,\infty]$ by
\[
\inftyn{f}:=\inf\{c>0\colon\mu(\{|f|>c\})=0\}
\]
for every $f\in\calF_0(X)$ and let
\[
\calL^\infty(\mu):=\{f\in\calF_0(X)\colon\inftyn{f}<\infty\}.
\]
If the measure $\mu$ needs to be specified, then $\inftyn{f}$ is written as $\inftyn{f\colon\!\mu}$.
The functions in $\calLinfty(\mu)$ are called \emph{$\mu$-essentially bounded functions}.
\end{definition}
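\par
For instance, if $\lambda$ is the Lebesgue measure on $\bbR$ and $f$ is the characteristic function of the set of all rational numbers, then $\{|f|>c\}$ is $\lambda$-null for every $c>0$, so that
\[
\inftyn{f}=\inf\{c>0\colon\lambda(\{|f|>c\})=0\}=0
\]
although $\sup_{x\in\bbR}|f(x)|=1$; the prenorm measures the essential bound of $f$, disregarding $\lambda$-null sets.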
\par
When $\mu$ is $\sigma$-additive, the space $\calLinfty(\mu)$ is nothing but the ordinary seminormed
space of all $\mu$-essentially bounded, $\calA$-measurable functions on $X$ with seminorm $\inftyn{\cdot}$.
In general, the prenorm $\inftyn{\cdot}$ on $\calLinfty(\mu)$ does not satisfy the triangle inequality.
\begin{proposition}\label{Linftyn}
Let $\mu\in\calM(X)$.
\begin{enumerate}
\item[\us{(1)}] For any $A\in\calA$ it follows that
\[
\inftyn{\chi_A}=\begin{cases}
0 & \mbox{if }\mu(A)=0,\\
1 & \mbox{if }\mu(A)>0.
\end{cases}
\]
\item[\us{(2)}] For any $f\in\calLinfty(\mu)$
it follows that $\inftyn{f}=0$ if and only if $\mu(\{|f|>c\})=0$ for every $c>0$;
they are equivalent to $\mu(\{|f|>0\})=0$ if $\mu$ is null-continuous.
\item[\us{(3)}] For any $f\in\calLinfty(\mu)$ and $c\in\bbR$
it follows that $\inftyn{cf}=|c|\inftyn{f}$.
Hence $\inftyn{\cdot}$ is homogeneous.
\item[\us{(4)}] For any $f,g\in\calLinfty(\mu)$, if $|f|\leq |g|$ then $\inftyn{f}\leq\inftyn{g}$.
\item[\us{(5)}] If $\mu$ is null-continuous, then for any $f\in\calLinfty(\mu)$
it follows that $|f|\leq\inftyn{f}$ $\mu$-a.e.
\item[\us{(6)}] The following assertions are equivalent.
\begin{enumerate}
\item[\us{(i)}] $\mu$ is weakly null-additive.
\item[\us{(ii)}] $\inftyn{\cdot}$ satisfies the triangle inequality.
\item[\us{(iii)}] $\inftyn{\cdot}$ is null-additive.
\item[\us{(iv)}] $\inftyn{\cdot}$ is weakly null-additive.
\end{enumerate}
\item[\us{(7)}] $\mu$ is weakly null-additive if and only if it follows that $\inftyn{f}=\inftyn{g}$
whenever $f,g\in\calLinfty(\mu)$ and $f\sim g$.
\end{enumerate}
\end{proposition}
\begin{proof}
Assertions (1)--(5) are easy to prove.
\par
(6) (i)$\Rightarrow$(ii)\ Let $f,g\in\calLinfty(\mu)$.
For any $a,b\in\bbR$, if $a>\inftyn{f}$ and $b>\inftyn{g}$, then
$\mu(\{|f|>a\})=\mu(\{|g|>b\})=0$, hence $\mu(\{|f+g|>a+b\})=0$ by the weak null-additivity of $\mu$,
and finally $\inftyn{f+g}\leq a+b$.
Letting $a\downarrow\inftyn{f}$ and $b\downarrow\inftyn{g}$
yields $\inftyn{f+g}\leq\inftyn{f}+\inftyn{g}$.
\par
The implications (ii)$\Rightarrow$(iii)$\Rightarrow$(iv) are obvious.
\par
(iv)$\Rightarrow$(i)\ Take $A,B\in\calA$ and assume that $\mu(A)=\mu(B)=0$.
Let $f:=\chi_A$ and $g:=\chi_{B\setminus A}$. Then $\inftyn{f}=\inftyn{g}=0$,
hence $\inftyn{f+g}=0$ by (iv), which shows that $\mu(A\cup B)=0$ since $f+g=\chi_{A\cup B}$.
Therefore, $\mu$ is weakly null-additive.
\par
(7)\ The ``only if'' part:\ Let $f,g\in\calLinfty(\mu)$ and assume that $f\sim g$.
Then $\inftyn{f-g}=0$.
Since $\mu$ is weakly null-additive, $\inftyn{\cdot}$ is null-additive by (6),
so that $\inftyn{f}=\inftyn{g+(f-g)}=\inftyn{g}$.
\par
The ``if'' part:\ Take $A,B\in\calA$ and assume that $\mu(A)=\mu(B)=0$.
Let $f:=\chi_{A\cup B}$ and $g:=\chi_A$. Then $\inftyn{f-g}=\inftyn{\chi_{B\setminus A}}=0$,
hence $f\sim g$. Thus $\inftyn{f}=\inftyn{g}$, so that $\inftyn{f}=0$ since $\inftyn{g}=0$.
Hence $\mu(A\cup B)=0$, which implies that $\mu$ is weakly null-additive.
\end{proof}
The quotient space
\[
L^{\infty}(\mu):=\{[f]\colon f\in\calL^{\infty}(\mu)\}
\]
is defined by the equivalence relation stated in Subsection~\ref{equiv}.
Given an equivalence class $[f]\in L^{\infty}(\mu)$,
define the prenorm on $L^{\infty}(\mu)$ by $\inftyn{[f]}:=\inftyn{f}$,
which is well-defined if $\mu$ is weakly null-additive by (7) of Proposition~\ref{Linftyn}.
This prenorm has the same properties as the prenorm on $\calLinfty(\mu)$
and separates points of $L^{\infty}(\mu)$, that is, for any $[f]\in L^{\infty}(\mu)$,
if $\inftyn{[f]}=0$ then $[f]=0$.
\par
The following theorem shows that it is not necessary to assume property~(C) and the (p.g.p.)
to show the completeness of the spaces $\calLinfty(\mu)$ and $L^\infty(\mu)$.
\begin{theorem}\label{comp1}
Let $\mu\in\calM(X)$.
Assume that $\mu$ is weakly null-additive and null-continuous.
Then $\calL^\infty(\mu)$ is complete with respect to the seminorm $\inftyn{\cdot}$
and $L^\infty(\mu)$ is complete with respect to the norm $\inftyn{\cdot}$.
\end{theorem}
\begin{proof}
Let $\{f_n\}_\seqn\subset\calLinfty(\mu)$ be a Cauchy sequence.
Then, $\mu$ being weakly null-additive, by (6) of Proposition~\ref{Linftyn},
for any $m,n\in\bbN$ we have $f_m-f_n\in\calLinfty(\mu)$, so that
\begin{equation}\label{eq:comp1}
\mu(\{|f_m-f_n|>\inftyn{f_m-f_n}\})=0
\end{equation}
by (5) of Proposition~\ref{Linftyn} and the null-continuity of $\mu$.
Let
\[
E:=\bigcup_{m=1}^\infty\bigcup_{n=1}^\infty\{|f_m-f_n|>\inftyn{f_m-f_n}\}.
\]
For each $\seqk$ let
\[
E_k:=\bigcup_{m=1}^k\bigcup_{n=1}^k\{|f_m-f_n|>\inftyn{f_m-f_n}\}.
\]
Then $E_k\uparrow E$.
Since $\mu$ is weakly null-additive, \eqref{eq:comp1} implies that $\mu(E_k)=0$ for every $\seqk$,
hence $\mu(E)=0$ by the null-continuity of $\mu$.
\par
For any $x\not\in E$, the sequence $\{f_n(x)\}_\seqn$ is Cauchy in $\bbR$, so that the $\calA$-measurable
function $f\colon X\to\bbR$ can be defined by
\[
f(x):=\begin{cases}
\lim\limits_\ninfty f_n(x) & \mbox{if }x\not\in E,\\[1.5mm]
\;0 & \mbox{otherwise}.
\end{cases}
\]
Then it is easy to see that $\inftyn{f_n-f}\to 0$ and $f\in\calL^\infty(\mu)$.
Therefore $\calLinfty(\mu)$ is complete.
Furthermore, $\mu$ being weakly null-additive,
the quotient space $L^\infty(\mu)$ and the prenorm $\inftyn{\cdot}$ on $L^\infty(\mu)$
are well-defined and it turns out that $L^\infty(\mu)$ is complete.
\par
Finally, by (3) and (6) of Proposition~\ref{Linftyn}
the prenorm $\inftyn{\cdot}$ is a seminorm on $\calLinfty(\mu)$ and a norm on $L^\infty(\mu)$.
\end{proof}
The following theorem shows that $\calS(X)$ is dense in $\calLinfty(\mu)$
for any nonadditive measure $\mu$.
\begin{theorem}
Let $\mu\in\calM(X)$. Then $\calS(X)$ is dense in $\calLinfty(\mu)$.
If $\mu$ is weakly null-additive, then $S(X)$ is dense in $L^\infty(\mu)$.
\end{theorem}
\begin{proof}
Let $f\in\calLinfty(\mu)$. Then there is $N\in\calA$ such that $\mu(N)=0$
and $f$ is bounded on $X\setminus N$,
hence one can find a sequence $\{h_n\}_\seqn\subset\calS(X)$ such that $\sup_{x\not\in N}|f(x)-h_n(x)|\to 0$.
Let $\ep>0$. Then there is $n_0\in\bbN$ such that $\sup_{x\not\in N}|f(x)-h_{n_0}(x)|\leq\ep/2$,
so that $\mu(\{|f-h_{n_0}|>\ep/2\})=0$, and finally that $\inftyn{f-h_{n_0}}\leq\ep/2<\ep$.
This means that $\calS(X)$ is dense in $\calLinfty(\mu)$.
The denseness of $S(X)$ in $L^\infty(\mu)$ is now obvious.
\end{proof}
\begin{remark}
As is well-known, the spaces $\calLinfty(\mu)$ and $L^\infty(\mu)$
are not separable even if $\mu$ is $\sigma$-additive.
\end{remark}
\section{Summary of results and future tasks}\label{conclusion}
In this paper, given a nonadditive measure $\mu$,
the space of all measurable functions $\calL^0(\mu)$, the Choquet-Lorentz space $\calLpq(\mu)$,
the Lorentz space of weak type $\calL^{p,\infty}(\mu)$, the space of all $\mu$-essentially bounded functions
$\calLinfty(\mu)$,
and their quotient spaces are defined by using the Choquet and Shilkret integrals.
The completeness and separability of those spaces are also discussed in relation to the characteristics of $\mu$.
Some of our results are as follows.
\begin{itemize}
\item Assume that $\mu$ satisfies property~(C) and the (p.g.p.).
Then $\calL^0(\mu)$ is complete.
If $\mu$ is additionally assumed to be null-additive, then $L^0(\mu)$ is complete.
\item Let $0<p<\infty$ and $0<q\leq\infty$.
If $\mu$ is monotone autocontinuous from below and satisfies property~(C) and the (p.g.p.),
then $\calLpq(\mu)$ and $L^{p,q}(\mu)$ are quasi-complete.
If $\mu$ is relaxed subadditive, monotone autocontinuous from below, and satisfies property~(C),
then $\calLpq(\mu)$ is complete with respect to the quasi-seminorm $\pqn{\cdot}$
and $L^{p,q}(\mu)$ is complete with respect to the quasi-norm $\pqn{\cdot}$.
\item If $\mu$ is weakly null-additive and null-continuous, then $\calLinfty(\mu)$
is complete with respect to the seminorm $\inftyn{\cdot}$ and $L^\infty(\mu)$
is complete with respect to the norm $\inftyn{\cdot}$.
\end{itemize}
\par
All the results listed above hold for every subadditive nonadditive measure that is continuous from below
since such a nonadditive measure is relaxed subadditive, monotone autocontinuous from below, null-continuous,
null-additive, weakly null-additive, and satisfies property~(C) and the (p.g.p.).
\par
As to dense subsets and the separability, the following results are shown among others.
\begin{itemize}
\item Assume that $\mu$ is order continuous. Then $\calS(X)$ is dense in $\calL^0(\mu)$.
If $\mu$ is additionally assumed to be null-additive, then $S(X)$ is dense in $L^0(\mu)$.
\item Assume that $\mu$ is order continuous and satisfies the (p.g.p.).
Assume that $\mu$ has a countable basis.
Then $\calL^0(\mu)$ is separable.
If $\mu$ is additionally assumed to be null-additive, then $L^0(\mu)$ is separable.
\item Let $0<p<\infty$ and $0<q<\infty$.
Assume that $\mu$ is conditionally order continuous.
Then $\calS(X)\cap\calLpq(\mu)$ is dense in $\calLpq(\mu)$.
If $\mu$ is additionally assumed to be null-additive, then $S(X)\cap L^{p,q}(\mu)$
is dense in $L^{p,q}(\mu)$.
\item Let $0<p<\infty$ and $0<q<\infty$.
Assume that $\mu$ is conditionally order continuous and relaxed subadditive.
Assume that there is a countable set $\calD$ of $\calA$-measurable sets with finite $\mu$-measure
such that for any $\ep>0$ and $A\in\calA$, if $\mu(A)<\infty$
then there is $D\in\calD$ for which $\mu(A\triangle D)<\ep$.
Then $\calLpq(\mu)$ is separable.
If $\mu$ is additionally assumed to be null-additive, then $L^{p,q}(\mu)$ is separable.
\item The set $\calS(X)$ is dense in $\calLinfty(\mu)$ for any nonadditive measure $\mu$.
If $\mu$ is weakly null-additive, then $S(X)$ is dense in $L^\infty(\mu)$.
\end{itemize}
\par
In this manner, the study of function spaces in the framework of
nonadditive measure theory has the advantage of significantly expanding the scope of application of the theory.
Another advantage is noticeable when studying the Choquet-Lorentz space.
As already mentioned in Section~\ref{CLspace}, for any $\mu\in\calM(X)$ and $f\in\calLpq(\mu)$,
it follows that
\[
\pqnm{f}{\mu}=\left(\frac{p}{q}\right)^{1/q}\qnm{f}{\mupq}
\]
and that $\calLpq(\mu)=\calLq(\mupq)$.
This fact means that many properties of the Choquet-Lorentz space $\calLpq(\mu)$ can be immediately derived from
those of the space $\calLq(\mupq)$.
Although $\mupq$, which is the power of $\mu$, is not additive in general even if $\mu$ is additive,
it preserves nonadditive characteristics of $\mu$
such as the weak null-additivity, the null-additivity,
the (p.g.p.), properties~(C), ($\mbox{C}_0$), (S), and ($\mbox{S}_1$),
the relaxed subadditivity, and many kinds of continuity and autocontinuity.
This observation suggests the significance of formulating the results of ordinary measure theory
in terms of the characteristics of nonadditive measures.
This is because the above-mentioned preservation of the characteristics of nonadditive measures
cannot be exploited if one deals only with additive measures.
\par
The properties established in this paper will be fundamental and important when studying
various function spaces appearing in nonadditive measure theory.
It is a future task to investigate further profound properties of those function spaces.
\par
According to Theorem~\ref{NSC}, in the case where $X$ is countable,
if a nonadditive measure $\mu$ is null-continuous and satisfies the (p.g.p.),
then property~(C) is a necessary and sufficient condition for the Cauchy criterion to hold
for convergence in $\mu$-measure.
Even in the case where $X$ is not countable, it would be desirable to find such a condition,
but we have not been able to do this. This is another future task.
\section*{References}\label{ref}
\section{Introduction}
Quantum physics is a technically difficult and abstract subject.~\cite{griffiths}
The subject matter makes instruction quite challenging and students perpetually struggle to master basic
concepts.~\cite{galvez,zollman,styer,narst,theme,my1,phystoday,my2,my3,my4,my5,fischler}
Here I discuss the development and evaluation of Quantum Interactive Learning Tutorials (QuILTs) that help
advanced undergraduate students learn quantum mechanics.
QuILTs are designed to create an active learning environment where students have an opportunity to
confront their misconceptions, interpret the material learned,
draw qualitative inferences from quantitative tools learned in quantum mechanics and
build links between new material and prior knowledge.
They are designed to be easy to implement regardless of the lecturer's teaching style.
A unique aspect of QuILTs is that they are research-based, targeting specific
difficulties and misconceptions students have in learning various concepts in
quantum physics.~\cite{zollman,styer,narst,theme,my1,phystoday,my2,my3,my4,my5,fischler}
They often employ computer-based visualization tools~\cite{mario,pqp,others,mcintyre,zollman1,styer_cups} to help students
build physical intuition about
quantum processes and keep students consistently engaged in the learning process by asking them
to predict what should happen in a particular situation, and then providing appropriate feedback.
They attempt to bridge the gap between the abstract quantitative formalism of quantum mechanics and the qualitative
understanding necessary to explain and predict diverse physical phenomena. They
can be used in class by the instructors once or twice a week as supplements to lectures or outside of the class
as homework or as a self-study tool by students.
\section{Details of the QuILTs}
The QuILTs use a learning cycle approach~\cite{wiki} in which students engage with the topic via examples
that focus their attention, explore the topic through facilitated questioning and observation, explain what they have
learned with the instructor facilitating further discussion to help refine their understanding, and
extend what they have learned by applying the
same concepts in different contexts.
The guidance provided by the tutorials is decreased gradually and students
assume more responsibility in order to develop self-reliance.
In addition to the main tutorial, QuILTs often have a ``warm-up" component and a tutorial ``homework".
Students work on the ``warm-up" component of a QuILT at home before working on the main tutorials in class.
These warm-ups typically review the prior knowledge necessary for optimizing the benefits of the main
tutorial related to that topic. The ``tutorial homework" associated with a QuILT can be given
as part of students' homework to reinforce concepts after they have worked on the main tutorial.
The tutorial homework helps students apply the topic of a particular QuILT to many specific situations
to learn about its applicability in diverse cases and learn to generalize the concept appropriately.
We design a pre-test and post-test to accompany each QuILT.
The pre-test assesses students' initial knowledge before they have worked on the corresponding QuILT, but typically
after lecture on relevant concepts.
The QuILT, together with the preceding pre-test, often makes students' difficulties related to relevant
concepts clear not only to the instructors but also to students themselves.
The pre-test can also have motivational benefits and can
help students better focus on the concepts covered in the QuILT that follows it.
Pre-/post-test performances are also useful for refining and modifying the QuILT.
An integral component of the QuILTs is the adaptation of visualization tools
for helping students develop physical intuition about quantum processes.
A visualization tool can be made much more pedagogically effective if it is embedded in a learning environment such as QuILT.
A simulation, preceded by a prediction and followed by questions, can help students reflect upon what they
visualized. Such reflection can be useful for understanding and remembering concepts.
They can also be invaluable in helping students better understand the differences between classical and quantum concepts.
We have adapted simulations from a number of sources as appropriate~\cite{mario,others,mcintyre,zollman1,styer_cups}
including the open source JAVA simulations developed by Belloni and Christian.~\cite{mario}
Some of the QuILTs, e.g., the QuILT on double-slit experiment which uses simulations developed by Klaus
Muthsam,~\cite{others} are also
appropriate for modern physics courses. The double-slit QuILT uses simulations to teach students about
the wave nature of particles manifested via the double slit experiment with single particles,
the importance of the phase of the probability amplitude for the occurrence of the interference pattern and
the connection between having information about which slit a ``particle" went through (``which-path" information)
and the loss of interference pattern.
For the QuILTs based on simulations, students must first predict what they
expect to happen in a particular situation, and why, before exploring the relevant concepts with the simulations.
For example, students can learn about the stationary states of a single particle in
various types of potential wells. Students can change the model parameters and learn how those parameters affect stationary states and
the probability of finding the electron at a particular position. They can also take various linear combinations of stationary states
to learn how the probability of finding the electron at a particular position is affected.
They can calculate and compare the expectation values of various operators in different states for a given potential.
They can also better appreciate why classical physics may be a good approximation to quantum physics under certain conditions.
Students can also develop intuition about the differences between bound states and scattering states by using visual simulations.
Guided visualization tools can also help students understand the changes that take
place when a system containing one particle is extended to many particles.~\cite{styer_cups}
Similar to the development of tutorials for introductory and modern physics,~\cite{lillian,redish}
the development of each QuILT goes through a cyclical iterative process.
Preliminary tutorials are developed based upon common difficulties in learning a particular
topic,~\cite{zollman,styer,narst,theme,my1,phystoday,fischler}
and how that topic fits within the overall structure of quantum mechanics.
The preliminary tutorials are implemented in one-on-one interviews with student volunteers, and modifications are
made. These modifications are essential for making the QuILTs effective.
After such one-on-one implementation with at least half a dozen students, tutorials are tested and evaluated
in classroom settings and refined further.
Working through QuILTs in groups is an effective way
of learning because formulating and articulating thoughts can provide students with an opportunity to solidify concepts and
benefit from one another's strengths. It can also provide an opportunity to monitor their own learning because mutual
discussions can help students rectify their knowledge deficiencies.
Students typically finish a QuILT at home if they cannot finish it in class and
take the post-test associated with it individually in the following class for which no help is provided.
\section{Case Studies}
Below, we briefly discuss case studies related to the development and evaluation of three QuILTs on time development
of wave function, uncertainty principle, and Mach-Zehnder interferometer. The development of each QuILT starts with
a careful analysis of the difficulties students have in learning related concepts. After the preliminary development
of the tutorials and the pre-/post-tests associated with them, we conduct one-on-one 1.5 hour interviews with
6-7 student volunteers for each tutorial using a think-aloud protocol.~\cite{chi} In this protocol, students are asked
to work on a tutorial while talking aloud so that we could follow their thought processes. Hints are provided
as appropriate. These individual interviews provide an opportunity to probe students' thinking as they work through
a tutorial and gauge the extent to which students are able to benefit from them. After each of these interviews,
the tutorials are modified based upon the feedback obtained.
Then, they are administered in the classroom and are modified further based upon the feedback. Table 1 shows the
pre-/post-test performance of advanced undergraduate students in a quantum mechanics course on the last version of each tutorial.
The pre-test was given after traditional instruction on relevant concepts but before the tutorial.
Below we summarize each tutorial and discuss student performance in the case studies. We note that the pre-test
and post-test for a QuILT were not identical but often had some identical questions.
\subsection{Time-development QuILT}
One difficulty with the time development of wave functions stems from the fact that many students believe that the only
possible wave functions for a system are stationary states.~\cite{phystoday,quilt} Since the Hamiltonian
of a system governs its time development, we may
expand a non-stationary state wave function $\Psi(x,0)$ at the initial time $t=0$ in terms of the stationary states and then multiply
appropriate time dependent phase factors $e^{-iE_n t/\hbar}$ with each term (they are in general
different for different stationary states because the energies $E_n$ are
different) to find the wave function $\Psi(x,t)$ at time $t$. Students often append an overall
time-dependent phase factor even if the wave function is in a linear superposition of
the stationary states.~\cite{phystoday} To elicit this difficulty, the pre-test of this QuILT begins by asking students about
the time dependence of a non-stationary state wave function for
an electron in a one-dimensional infinite square well. If the students choose an overall phase factor similar to that for a stationary
state, they are asked for the probability density, i.e., the absolute square of the wave function.
As noted above, when a non-stationary state is expanded in terms of stationary states,
the probability density at time $t$, $\vert \Psi(x,t) \vert^2$, is generally non-stationary due to a different time-dependent
phase factor in each term.
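To illustrate this point with a generic example (not necessarily the specific state used in the QuILT), consider an equal superposition of the two lowest stationary states $\psi_1$ and $\psi_2$ (taken real) of a one-dimensional infinite square well:
\[
\Psi(x,t)=\frac{1}{\sqrt{2}}\left[\psi_1(x)e^{-iE_1 t/\hbar}+\psi_2(x)e^{-iE_2 t/\hbar}\right],
\]
\[
\vert\Psi(x,t)\vert^2=\frac{1}{2}\left[\psi_1(x)^2+\psi_2(x)^2\right]
+\psi_1(x)\psi_2(x)\cos\left[(E_2-E_1)t/\hbar\right].
\]
Only the cross term, which carries the relative phase, survives the absolute square, so the probability density oscillates in time with period $2\pi\hbar/(E_2-E_1)$.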
If students incorrectly choose that the wave function is time-independent even for a non-stationary state, arguing that
overall phase factors cancel out, the tutorial asks them to watch the simulations for the time evolution of the probability densities.
Simulations for this QuILT are adapted from the Open Source Physics simulations developed by Belloni and Christian.~\cite{mario,pqp}
These simulations are highly effective in challenging students' beliefs. Students are often taken aback when they find that the
probability density oscillates back and forth for the non-stationary state. Figure 1 shows snapshots adapted in the QuILT from
an Open Source Physics simulation by Belloni and Christian for the probability density for
a non-stationary state wave function for a one-dimensional harmonic oscillator well.
In the actual simulation, students watch the probability density evolve in time.
When students observe that the probability
density does not depend on time for the stationary-state wave function but depends on time for the non-stationary-state
wave function, they are challenged
to resolve the discrepancy between their initial prediction and observation.
In our model, this is a good time to provide students guidance and feedback to help them build a robust knowledge structure.
Students then work through the rest of the QuILT which provides appropriate support and
helps solidify basic concepts related to time development.
Students respond to time development questions with stationary and non-stationary state wave functions in
problems involving different potential energies (e.g., harmonic oscillator, free-particle etc.)
to reinforce concepts, and they receive timely feedback to build their knowledge structure.
For each case, they check their calculations and predictions for the time-dependence of the probability density in each case
with the simulations.
Within an interactive environment, they learn that the Hamiltonian governs the time development of the system, and that
the eigenstates of the Hamiltonian are special with regard to the time evolution of the system. They learn that not
all possible wave functions are stationary-state wave functions, and they learn about
the difference between the time-independent and time-dependent Schroedinger equation.
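In symbols, the distinction is between the time-dependent Schroedinger equation, which governs the evolution of any state, and the time-independent (eigenvalue) equation, which singles out the stationary states:
\[
i\hbar\frac{\partial\Psi(x,t)}{\partial t}=\hat{H}\Psi(x,t)
\qquad\mbox{and}\qquad
\hat{H}\psi_n(x)=E_n\psi_n(x).
\]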
Table 1 shows that in the case study in which nine students took both the pre-/post-tests, the average student performance
improved from $53\%$ to $85\%$ after working on the QuILT.
As discussed earlier, the most common difficulty on the pre-test was treating the time evolution of
non-stationary states as though those states were stationary states.
Moreover, two students who were absent on the day the pre-test
and tutorial were administered in the class but were present for the post-test in the following class obtained $30\%$ and
$0\%$ on the post-test respectively.
\subsection{Uncertainty Principle QuILT}
The QuILT on the uncertainty principle contains three parts with increasing levels of sophistication.
Depending upon the level of students, the instructors may choose to use only one or all parts.
The first part of this QuILT helps students understand that this fundamental
principle is due to the wave nature of particles. With the help of the de Broglie relation, the QuILT helps students understand that a
sinusoidal extended wave has a well-defined wavelength and momentum but does not have a well-defined position. On the other hand, a wave
pulse with a well-defined position does not have a well-defined wavelength or momentum.
Students gain further insight into the
uncertainty principle in the second part of the QuILT by Fourier transforming the position-space wave function and noticing how the
spread of the position-space wave function affects its spread in momentum space.
Computer simulations involving Fourier transforms are exploited in
this part of the QuILT, and students Fourier transform various position-space wave functions with different spreads and check the
corresponding changes in the momentum-space wave function.
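A concrete case students can check with the simulations is the Gaussian wave packet; with one common normalization convention (the simulations may use another),
\[
\psi(x)=(2\pi\sigma^2)^{-1/4}e^{-x^2/(4\sigma^2)}
\quad\mbox{has Fourier transform}\quad
\phi(p)\propto e^{-\sigma^2 p^2/\hbar^2},
\]
so that $\Delta x=\sigma$ and $\Delta p=\hbar/(2\sigma)$: halving the spread in position space doubles the spread in momentum space, with $\Delta x\,\Delta p=\hbar/2$ in this minimal-uncertainty case.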
The third part of the QuILT helps students generalize the
uncertainty principle for position and momentum operators to any two incompatible observables whose corresponding operators do not
commute. This part of the QuILT also helps students bridge this new treatment with their earlier encounter with the
uncertainty principle for position and momentum in the context of the spread of a wave function in position and momentum space. The
QuILT also helps students understand why a measurement of one observable immediately followed by the measurement of another
incompatible observable does not guarantee a definite value for the second observable.
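In standard notation, the generalized relation treated in this part reads
\[
\Delta A\,\Delta B\geq\frac{1}{2}\left|\left\langle[\hat{A},\hat{B}]\right\rangle\right|,
\]
which reduces to $\Delta x\,\Delta p\geq\hbar/2$ for $[\hat{x},\hat{p}]=i\hbar$; whenever the expectation value of the commutator is nonzero in a given state, the two observables cannot both have arbitrarily well-defined values in that state.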
Table 1 shows that the average performance of 12 students who took the last version of the QuILT
improved from $42\%$ to $83\%$ from pre-test to post-test. In a question that was common for both the pre-test and post-test,
students were asked to make a qualitative sketch of the absolute value of the Fourier transform of a delta function. They were
asked to explain their reasoning and label the axes appropriately. Only one student in the pre-test drew a correct diagram. In
the post-test, 10 out of 12 students were able to draw correct diagrams with labeled axes and explain why the Fourier transform
should be a constant extended over all space. Also, in the post-test, 10 out of 12 students were able to draw the Fourier transform of
a Gaussian position space wave function and were able to discuss the relative changes in the spread of the position and the
corresponding momentum space wave functions. These were concepts they had explored using computer simulations while working
on the QuILT. Similar results were found in individual interviews conducted earlier with other students during the development
of the QuILT.
One of the questions on both the pre-/post-test of this tutorial was the following:\\
Consider the following statements: ``Uncertainty principle makes sense.
When the particle is moving fast, the position measurement has uncertainty
because you cannot determine the particle's position precisely..it is a blur....that's
exactly what we learn in quantum mechanics..if the particle has a large speed, the
position measurement cannot be very precise."
Explain why you agree or disagree with the statement.\\
Out of the 12 students who took both pre-/post-tests, 7 students provided incorrect responses
on the pre-test.
The following are examples of incorrect student responses on the pre-test:
\begin{enumerate}
\item {\it I agree...when P is high it is easy to determine, while x is difficult to determine. The opposite is
also true, when P is small it is difficult to determine, while x is easy to determine.}
\item {\it I agree because when a particle has a high velocity it is difficult to measure the position accurately}
\item {\it I agree because I know the uncertainty principle to be true.}
\item {\it agree. When a particle is moving fast, we cannot determine its position exactly-it resembles a wave-at
fast speed, its momentum can be better determined.}
\end{enumerate}
In comparison, one student provided an incorrect response and one did not provide clear reasoning on the post-test.
\subsection{Mach-Zehnder Interferometer QuILT}
The goal of this QuILT is to review
the interference at a detector due to the superposition of light from the two paths of the interferometer.
The tutorial adapts a simulation developed by Albert Huber
(http://www.physik.uni-muenchen.de/didaktik/Computer/interfer/interfere.html) to help students learn the following
important quantum mechanical concepts:
\begin{itemize}
\item interference of a single photon with itself after it passes through the two paths of the MZ.
\item effect of placing detectors and polarizers in the path of the photon in the MZ.
\item how the information about the path along which a photon went (``which-path" information) destroys the interference
pattern.
\end{itemize}
A screen shot from the simulation is shown in Figure 2.
Students were given the following information about the setup:
The basic schematic setup for the Mach-Zehnder interferometer (MZ) used in this QuILT is as follows (see Figure 3) with changes made later in
the tutorial, e.g., changes in the position of the beam splitters, incorporation of polarizers, detectors or a glass piece,
to illustrate various concepts. All angles of incidence are $45^\circ$ with respect to the normal to the surface.
For simplicity, we will assume that light can only reflect from one of the two surfaces of the identical half-silvered mirrors (beam splitters)
$BS_1$ and $BS_2$ because of anti-reflection coatings. The detectors $D_1$ and $D_2$ are \underline{point} detectors
located symmetrically with respect to the other components of the MZ as shown.
The photons originate from a monochromatic coherent point source.
Assume that the light through both the $U$ and $L$ paths travels
the same distance in \underline{vacuum} to reach each detector. \\
In this QuILT, students first learn about the basics of phase changes that take place as light reflects or passes through
different beam splitters and mirrors in the MZ setup by drawing analogy with reflected or transmitted wave on a string
with fixed or free boundary condition at one end. Then, students use the simulation to
learn that a single photon can interfere with itself and produce an interference pattern after it passes through both paths of the MZ.
Students explore and learn using simulations that
``which-path" information is obtained by removing $BS_2$ or by placing detectors or polarizers in certain locations.
Later in the tutorial, point detector $D_1$ is replaced with a screen.
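To make the phase bookkeeping explicit, consider the balanced MZ with both beam splitters in place. Using the common convention in which each reflection at a 50/50 beam splitter contributes a factor $i/\sqrt{2}$ and each transmission a factor $1/\sqrt{2}$ (the mirror phases are common to both paths and can be dropped), the single-photon amplitudes at the two detectors are
\begin{displaymath}
A_{D_1}=\frac{i}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}\cdot\frac{i}{\sqrt{2}}=i,
\qquad
A_{D_2}=\frac{i}{\sqrt{2}}\cdot\frac{i}{\sqrt{2}}+\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}=0,
\end{displaymath}
where each term is the product of the $BS_1$ and $BS_2$ factors along the $U$ and $L$ paths, respectively. Hence, for equal path lengths, every photon reaches $D_1$ and none reaches $D_2$; any modification that tags the path (removing $BS_2$, or inserting detectors or orthogonal polarizers) destroys this single-photon interference.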
Table 1 shows that the average performance of 12 students who took the last version of the MZ QuILT
improved from $48\%$ to $83\%$ from pre-test to post-test. Moreover, all but one of the 12 students in the post-test obtained
perfect scores on the following three questions (correct options (c), (b), and (b) respectively) that were similar (but not necessarily
identical) to the kinds of setups they had explored using the simulation within the guided QuILT approach:
\begin{enumerate}
\item
If you insert polarizers 1 and 2 (one with a horizontal and the other with a $45^\circ$
transmission axis) as in Figure 4, how does the interference pattern compare with the case when the two polarizers have orthogonal transmission axes?\\
(a) The interference pattern is identical to the case when polarizers 1 and 2 have orthogonal axes.\\
(b) The interference pattern vanishes when the transmission axes of polarizers 1 and 2 are horizontal and $45^\circ$.\\
(c) An interference pattern is observed, in contrast to the case
when polarizers 1 and 2 were orthogonal to each other.\\
(d) No photons reach the screen when the transmission axes of polarizers 1 and 2 are horizontal and $45^\circ$.\\
\item
If you insert polarizer 1 with a horizontal transmission
axis and polarizer 2 (between the second
beam splitter and the screen) with a $45^\circ$ transmission axis (Figure 5), how does the interference pattern compare with the case when
only polarizer 1 was present?\\
(a) The interference pattern is identical to the case when only polarizer 1 was present.\\
(b) The intensity of the interference pattern changes but the interference pattern is maintained in the presence of polarizer 2.\\
(c) The interference pattern vanishes when polarizer 2 is inserted but some photons reach the screen.\\
(d) An interference pattern reappears that was absent when only polarizer 1 was present.
\item
If you insert polarizer 2 with a $45^\circ$ transmission axis between the second beam splitter and the screen (Figure 6), how does the interference pattern compare with the case when polarizer 2 was not present?\\
(a) The interference pattern is unchanged regardless of the presence of polarizer 2 because all interference effects occur before
beam splitter 2.\\
(b) The intensity of the interference pattern decreases but the interference pattern is maintained even in the presence of polarizer 2.\\
(c) The intensity of the interference pattern increases in the presence of polarizer 2.\\
(d) The interference pattern vanishes when polarizer 2 is inserted but some photons reach the screen.
\end{enumerate}
\subsection{Survey about QuILTs}
A survey of 12 students whose pre-/post-test data is presented in Table 1 was given to assess the effectiveness
of QuILTs from students' perspective.
Below we provide the questions and student responses:\\
\begin{itemize}
\item Please rate the tutorials for their overall effectiveness where 1 means totally ineffective and 5 means very effective.\\
In response to this question, no student chose 1 or 2, one student chose 3, one chose 3.5, three chose 4, one chose 4.5, and six chose 5.
\item How often did you complete the tutorial at home that you could not complete during the class?
(1) Never, (2) less than half the time, (3) often, (4) most of the time, (5) always.\\
In response to this question, no student chose (1), one student chose (2), two students chose (3), six chose (4), and three chose (5).
\item How often were the hints/solutions provided for the tutorials useful?
(1) Never, (2) less than half the time, (3) often, (4) most of the time, (5) always.\\
In response to this question, no student chose (1) or (2), two students chose (3), five chose (4), and five chose (5).
\item Is it more helpful to do the tutorials in class or would you prefer to do them as homework? Please explain the
advantages and disadvantages as you see it.\\
In response to this question, 10 students felt that doing them in class was more useful.
The students who preferred doing them in class often noted that the tutorials focused on improving their conceptual
understanding which was best done via group discussion and hence in class. They appreciated the fact that any questions
they had could be discussed and they benefited from the reasoning provided by their peers and instructor.
The few students who preferred doing them at home felt that more time and effort would go into them if they did them
at home.
\item How frequently should the tutorials be administered in the class (e.g., every other class,
once a week, once every other week)? Explain your reasoning.\\
A majority of students liked having the tutorials once a week. This frequency was considered to be the best
by some students who felt that the concepts learned in the tutorials made it easier for them to understand the textbook
and homework problems later in the week and integrate the material learned. Others felt that once a week was the best
because tutorials helped them
focus on concepts that were missed in lectures, book, and student/teacher conversation.
\item Do you prefer a multiple-choice or open-ended question format for the tutorial questions?
Explain your reasoning.\\
Students in general seemed to like the questions that were in the multiple-choice format but most of them also
appreciated the open-ended questions. Some students noted that the multiple-choice questions helped focus their attention
on important issues and common difficulties and misconceptions while the open-ended questions stimulated creative thought.
Some students felt that multiple-choice format may be better for the ``warm-up" tutorial done at home and the open-ended
questions may be better for the main tutorial done in the class. Some students felt that a mix of the two types of questions was
best because the multiple-choice format was a good way to get the fundamental concepts across and the open-ended questions gave
them an opportunity to apply these concepts and deepen their understanding of the concepts.
\end{itemize}
\section{Summary}
We have given an overview of the development of QuILTs and discussed the preliminary evaluation of three QuILTs
using pre-/post-tests in a natural classroom setting.
QuILTs adapt visualization tools to help students build physical intuition about quantum processes.
They are designed to help undergraduates sort through challenging quantum mechanics concepts.
They target misconceptions and common difficulties explicitly,
focus on helping students integrate qualitative and quantitative understanding,
and learn to discriminate between concepts that are often confused.
They strive to help students develop the ability to apply quantum principles in different
situations, explore differences between classical and quantum ideas, and organize knowledge hierarchically.
Their development is an iterative process.
During the development of the existing QuILTs, we have conducted more than 100 hours of
interviews with individual students to assess the aspects of the QuILTs that work well and those that require refinement.
QuILTs naturally lend themselves to dissemination via the web.
They provide appropriate feedback to students based upon their need and
are suited as an on-line learning tool for both undergraduates and beginning graduate students,
in addition to being suitable as supplements to lectures for one- or two-semester undergraduate quantum mechanics courses.
\section{Acknowledgments}
We are very grateful to Mario Belloni and Wolfgang Christian for their help in developing and adapting
their Open Source Physics simulations for QuILTs. We also thank
Albert Huber for the Mach-Zehnder interferometer simulation and Klaus Muthsam for the double-slit simulation.
We thank all the faculty who have administered different versions of QuILTs in their classrooms.\\
\bibliographystyle{aipproc}
\section{Introduction}
Globular Clusters (GCs) are ideal laboratories to study the formation and properties of exotic interacting stellar systems such as Blue Stragglers (BSSs), X-ray Binaries, and Cataclysmic Variables. BSSs are stars located above the main sequence turn-off (MSTO) in the Color-Magnitude Diagram (CMD) \citep{Sandage}. They are brighter and bluer than the upper main sequence (MS) stars in the cluster (see e.g. \cite{Mirko2016}). The two leading scenarios proposed for their formation are stellar collisions leading to mergers in high density environments \citep{Hills} and mass transfer (MT) from an evolved donor to a lower-mass star in a binary system in low density environments \citep{Mccrea, Chen}.\\
NGC 5466 is a metal-poor ($[Fe/H]=-2.0$ \citep{Caretta2009}) Galactic GC containing a large fraction of binaries ($\sim$6 $\%$) and BSSs \citep{Beccari}. Being a low density GC (log$_{10}$ $\rho_{c} \sim$ 0.84 $L_{\odot}/pc^{3}$, \cite{Mclaugh2005}) as compared to other Galactic GCs of similar luminosity, mass transfer is expected to be the dominant BSS formation mechanism, since primordial binaries can evolve in isolation in such environments \citep{Beccari}.
Ultraviolet (UV) images are very effective in identifying BSS binaries with a hot companion, as such systems show excess emission in the UV which, in general, is not expected from BSSs alone. Based on FUV spectroscopy and Spectral Energy Distributions (SEDs) of 48 blue objects in 47 Tuc obtained with the Hubble Space Telescope (HST), \cite{Knigge2008} discovered several interesting binary objects, including one BSS-WD binary in the cluster. \cite{Gosnell2014,Gosnell2015} detected white dwarf (WD) companions to seven BSSs in the open cluster NGC 188 based on Far-UV (F140LP, F150LP, and F165LP) observations with the HST. \cite{Subramaniam2016} detected a hot companion (post-AGB/HB) to a BSS in NGC 188 using UVIT data on ASTROSAT, thus showing the importance of UV observations of BSSs.
Using a large sample of GC Color-Magnitude Diagrams (CMDs) from HST observations, \cite{Knigge2009} and \cite{Leigh2011} showed that the number of BSSs in the cores of GCs is strongly correlated with the total stellar mass of the core of the cluster. \cite{Leigh2007} and \cite{Leigh2013} found that there is little or no correlation of the BSS population with the collision rate in the core of the cluster, thus favouring binary evolution as the dominant channel for the formation of the BSSs. It was also found that the frequency of BSSs in GCs is correlated with the binary fraction \citep{Knigge2009, Milone2012}.
\begin{table*}
\centering
\caption{Parameters of NGC 5466 used in this paper.}
\begin{tabular}{ccc}
\hline
Parameter & Value & Reference\\\hline
R.A. (J2000) & 14 05 27.29 & \cite{Goldsbury2010} \\
Dec (J2000) & +28 32 04.0 & \cite{Goldsbury2010} \\
$[Fe/H]$ & $-$2.0 dex & \cite{Caretta2009, Ferro}\\
Distance & 16.0 $\pm$ 0.6 kpc & \cite{Ferro}\\
Core radius, $r_{c}$ & $1\farcm43$ & \cite{Mclaugh2005}\\
Half-light radius, $r_{h}$ & $2\farcm3$ & \cite{Mclaugh2005}\\
$\mu_{RA}$ & $-5.404\pm0.004$ mas yr$^{-1}$ & \cite{Helmi2018} \\
$\mu_{Dec}$ & $-0.791\pm0.004$ mas yr$^{-1}$ & \cite{Helmi2018}\\\hline
\end{tabular}
\label{cluster}
\end{table*}
\begin{figure*}
\centering
\includegraphics[scale=0.37]{uv_cmd_pm.pdf}
\caption{F169M, (F169M$-$N245M) UV CMD (right panel) and its corresponding V, (V$-$I) optical CMD (left panel) with BSSs detected in all the UVIT filters. Open symbols are UVIT cross-matched ground (CFHT) detections and closed symbols are the UVIT cross-matched HST detections. Cyan dots are the objects detected down to F169M = 24 mag. The cross symbol is an anomalous Cepheid whereas the lower triangles are RR Lyrae variables. The UV CMD is overplotted with a Padova model isochrone (black line and dots) of age 12.6 Gyr and metallicity [Fe/H]= $-$1.98 \citep{Caretta2009}.}
\label{cmd}
\end{figure*}
\cite{Ferraro2009} discovered two BSS sequences in the optical CMD of GC M30, suggesting that the redder ones arise from the evolution of close binaries that are still experiencing MT, which is in agreement with binary evolution models. Another explanation for the two BSS sequences in M30 was given by \cite{Jiang2017}, who showed that binary evolution contributes to the formation of BSSs in both sequences. Thus, the identification of BSSs with hot companions using UV observations is crucial to understanding their formation mechanism in binary systems.
In this paper, we present the SED analysis of a BSS candidate of GC NGC 5466 based on UVIT and Gemini observations. We outline the observations and data reduction in Section \ref{obs}, the UV CMD in Section \ref{uvcmd}, spectroscopic analysis in Section \ref{specdata}, SED of the BSS in Section \ref{bssed}, followed by discussion, summary and conclusions in Sections \ref{discus} and \ref{sum}.\\
\section{Observations and Data Reduction}
\label{obs}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{bss_nuv.pdf}
\caption{14 BSSs detected by UVIT are overlaid on top of an N245M filter image of UVIT. The red square symbol in the figure is the BSS target in our study. The yellow and magenta circles mark the core radius ($\sim 1\farcm43$) and half-light radius ($\sim 2\farcm3$) of the cluster, measured from the center \citep{Mclaugh2005}.}
\label{image}
\end{figure}
The cluster was observed with the UVIT telescope as a part of the GT proposal (G05$\_$009) during 3--4 June 2016. UVIT is one of five payloads onboard AstroSat, which is operated by the Indian Space Research Organization (ISRO). The calibration of the instrument can be found in \cite{Tandon2017a}. Full details of the telescope and instrument are available in \cite{Tandon2017b} and \cite{Subramaniam_c2016}.
The data were acquired in four filters of UVIT: two in the FUV channel (F148W and F169M) and two in the NUV channel (N245M and N263M). The images were processed using the CCDLAB software \citep{Postma2017}, which corrects for the satellite drift, flat field and distortion. Isolated stellar sources in the UVIT images have FWHM $\sim$1\farcs5 and $\sim$1\farcs2 in the FUV and NUV channels, respectively. In terms of angular resolution, the UVIT images are thus far superior to those from GALEX (4\farcs5-5\farcs5).
Crowded-field photometry was performed using DAOPHOT/IRAF tasks and packages \citep{Stetson}. Aperture and saturation corrections were applied to obtain the final magnitudes in the AB system; details can be found in \cite{Tandon2017a}. The photometric errors in magnitude for all the UVIT filters are found to be within 0.4 mag down to the 24th magnitude.\\
\section{UV Color-Magnitude Diagram}
\label{uvcmd}
We cross-matched UVIT data with HST-ACS survey data of the GC NGC 5466 \citep{Sarajedini2007} for the central regions, with an FOV of $3\farcm4 \times 3\farcm4$, and with ground-based data provided by Peter Stetson for the region beyond the FOV of HST. We separated the stars into various stellar populations, such as HB and BSS, based on their locations in both the optical and UV CMDs. The parameters of the cluster adopted in this study are given in Table \ref{cluster}.
To check their cluster membership, we used the GAIA DR2 Proper Motion (PM) catalog of the cluster NGC 5466 given by \cite{Helmi2018}. Their catalog consists of the list of PM member stars of the cluster; the procedure for the member selection is described in detail in Appendix A.1 of their paper. The vector-point diagram of BSSs (blue squares) relative to other cluster members (grey dots) in the PM catalog is shown in Figure \ref{bspm}, where we clearly notice that the FUV-detected BSSs are grouped around the mean PM derived by \cite{Helmi2018} (Table \ref{cluster}), except for BSS NH 48. According to the HST PM study by \cite{Mirko2016}, this BSS is a PM member. In total, we found 14 BSSs detected in all the UVIT filters to be PM members. In addition, 63 HB stars are also found to be PM members. The typical uncertainties in the PMs are $\sim$ 0.12 and 0.42 mas yr$^{-1}$ for the HB stars and BSSs, respectively. These 14 BSSs are marked in the FUV (F169M) and NUV (N245M) images as shown in Figure \ref{uvit_bss}, where we can clearly see that the BSSs are spatially resolved and PSF photometry can be successfully performed.
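For illustration only, such a membership selection can be sketched as a simple circular cut around the mean cluster PM of Table \ref{cluster} (this is not the actual procedure of \cite{Helmi2018}, which is more elaborate; the selection radius below is hypothetical):
\begin{verbatim}
import numpy as np

def pm_members(pm_ra, pm_dec, mu_ra=-5.404, mu_dec=-0.791, r_max=1.0):
    # Distance of each star from the cluster mean PM (mas/yr);
    # keep stars inside a circular cut of radius r_max (mas/yr).
    sep = np.hypot(pm_ra - mu_ra, pm_dec - mu_dec)
    return sep < r_max
\end{verbatim}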
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{bss_pm1.pdf}
\caption{Vector-point diagram of the 14 BSSs (blue squares) detected by UVIT relative to other cluster members (grey) in the catalog given by \cite{Helmi2018}. The red triangle in the figure is BSS NH 84.}
\label{bspm}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.255]{nh7.pdf}
\hspace{0.1cm}
\vspace{0.2cm}
\includegraphics[scale=0.255]{nh9.pdf}\\
\includegraphics[scale=0.25]{nh17.pdf}
\hspace{0.1cm}
\vspace{0.2cm}
\includegraphics[scale=0.25]{nh19.pdf}\\
\includegraphics[scale=0.25]{nh21.pdf}
\hspace{0.1cm}
\vspace{0.2cm}
\includegraphics[scale=0.25]{nh22.pdf}\\
\includegraphics[scale=0.25]{nh26.pdf}
\hspace{0.1cm}
\includegraphics[scale=0.25]{nh30.pdf}\\
\caption{Location of BSSs on FUV-F169M (Left) and NUV-N245M (Right) images of UVIT.}
\label{uvit_bss}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.25]{nh31.pdf}
\hspace{0.1cm}
\vspace{0.2cm}
\includegraphics[scale=0.25]{nh48.pdf}\\
\includegraphics[scale=0.25]{nh49.pdf}
\hspace{0.1cm}
\vspace{0.2cm}
\includegraphics[scale=0.25]{nh84.pdf}\\
\includegraphics[scale=0.25]{nh90.pdf}
\hspace{0.1cm}
\includegraphics[scale=0.25]{uv-bss1.pdf}\\
\caption{Continuation of Figure \ref{uvit_bss}}
\end{figure*}
The PM-cleaned F169M vs (F169M$-$N245M) CMD (right panel) along with the optical CMD (left panel) are shown in Figure \ref{cmd}, where the detected BSSs are shown as blue squares. The UV CMD is overplotted with a Padova model \citep{Marigo2007, Marigo2008} isochrone (black line and dots) of age 12.6 Gyr and metallicity [Fe/H]= $-$1.98 \citep{Caretta2009} for a distance modulus of 16.0 \citep{Ferro}. The isochrone is generated by convolving Padova models with UVIT filter response curves \citep{Tandon2017a} using the Flexible Stellar Population Synthesis (FSPS) code \citep{Conroy, Conroy2010}. The FSPS generates a locus of BSSs, assuming them to be MS stars with masses in excess of the turn-off mass, which uniformly populates the region from 0.5 magnitudes above the MSTO to 2.5 magnitudes brighter than the MSTO, as shown in the figure. Reddening is not corrected as it is close to zero, E(B$-$V) = 0.00 \citep{Zinn1985}.
The cross symbol in the figures is a known Anomalous Cepheid \citep{Zinn1976}. The bright star in the UV CMD at F169M, F169M$-$N245M $\sim$ 17.5, -0.4, lying close to the PAGB evolutionary model track, is classified as an AGB-manqu\'e star by \cite{Schiavon2012}.
\cite{Sandquist} presented BVI photometry of red giants and BSSs based on their observations with the KPNO 0.9m telescope. They identified 94 BSS candidates based on their locations in optical CMDs. Of the 14 BSSs detected by UVIT, which are listed in Table \ref{bss}, 13 were found to be in common with their catalog \citep{Sandquist}, suggesting that we have detected an additional BSS, named UV-BSS1 (Table \ref{bss}). The GALEX magnitudes of the BSSs given in the table are obtained by performing PSF photometry on the GALEX FUV and NUV intensity maps (Object ID: GI1\_056017\_NGC5466). We used the BSS nomenclature from \cite{Nemec1987} for our study.
Among the UV-detected BSSs, NH 17 is the brightest BSS, as shown in the left panel of Figure \ref{cmd}. NH 19 and NH 30 are known W-UMa type contact binaries and NH 31 is an eclipsing binary \citep{Mateo}. BSS NH 49 is a known SX Phe variable \citep{Jeon2004}. One of the BSSs from the \cite{Sandquist} catalog, NH 87, is found to be bluer than the BSS model track in the UV CMD (empty blue box in the right panel of Figure \ref{cmd}), whereas the other BSSs are distributed around the track. This BSS does not have PM information from GAIA DR2. We obtained the GMOS-N spectra of this object (see Section~\ref{specdata}).
The locations of the FUV-detected BSSs from Table \ref{bss} are shown as blue circles in Figure \ref{image}, overlaid on UVIT's N245M filter image of the cluster. The red square in the figure is BSS NH 84. We found that most of the BSSs (12 out of 14) are located inside the half-light radius of the cluster.
\begin{table*}
\centering
\caption{UV magnitudes, in all the UVIT and GALEX filters, of the FUV-detected BSS candidates that are PM members \citep{Helmi2018}. UV-BSS1 does not have a counterpart in the BSS catalog of \cite{Sandquist}.}
\setlength{\tabcolsep}{7pt}
\scalebox{0.63}{%
\begin{tabular}{cccccccccccc}
\hline
& & &\multicolumn{5}{c}{UVIT} & \multicolumn{2}{c}{GALEX} & \multicolumn{2}{c}{Proper motion}\\
BSS ID & RA (J2000) & DEC (J2000) & r & F148W & F169M & N245M & N263M & FUV & NUV & $\mu_{RA}$ & $\mu_{Dec}$\\
& [deg] & [deg] & [$\arcmin$] & [mag] & [mag] & [mag] & [mag] & [mag] & [mag] & [mas yr$^{-1}$] & [mas yr$^{-1}$] \\\hline
UV-BSS 1 & 211.3393 & 28.51122 & 1.83 & 23.08 $\pm$ 0.30 & 22.55 $\pm$ 0.19 & 21.02 $\pm$ 0.06 & 21.09 $\pm$ 0.09 & 23.24 $\pm$ 0.20 & 21.13 $\pm$ 0.07 & -5.63 $\pm$ 0.58 & -0.69 $\pm$ 0.49 \\
NH 7 & 211.32196 & 28.53361 & 2.22 & 22.88 $\pm$ 0.30 & 22.26 $\pm$ 0.18 & 20.80 $\pm$ 0.05 & 20.63 $\pm$ 0.08 & 22.63 $\pm$ 0.15 & 20.81 $\pm$ 0.04 & -5.98 $\pm$ 0.34 & -0.38 $\pm$ 0.33 \\
NH 9 & 211.33525 & 28.55766 & 2.06 & 23.15 $\pm$ 0.31 & 23.14 $\pm$ 0.28 & 21.05 $\pm$ 0.07 & 21.65 $\pm$ 0.18 & - & 21.10 $\pm$ 0.04 & -6.09 $\pm$ 0.56 & -0.80 $\pm$ 0.55 \\
NH 17 & 211.35456 & 28.52768 & 0.64 & 20.05 $\pm$ 0.08 & 19.91 $\pm$ 0.06 & 19.45 $\pm$ 0.03 & 19.43 $\pm$ 0.04 & 19.96 $\pm$ 0.08 & 19.56 $\pm$ 0.04 & -4.95 $\pm$ 0.26 & -1.21 $\pm$ 0.25 \\
NH 19\textsuperscript{a} & 211.35267 & 28.53305 & 0.6 & 21.66 $\pm$ 0.16 & 21.76 $\pm$ 0.12 & 20.51 $\pm$ 0.05 & 19.97 $\pm$ 0.10 & 21.68 $\pm$ 0.11 & 20.30 $\pm$ 0.06 & -6.07 $\pm$ 0.37 & -1.34 $\pm$ 0.35 \\
NH 21 & 211.38859 & 28.51381 & 1.79 & 22.39 $\pm$ 0.20 & 22.36 $\pm$ 0.16 & 20.55 $\pm$ 0.05 & 20.37 $\pm$ 0.09 & 22.60 $\pm$ 0.18 & 20.33 $\pm$ 0.04 & -5.52 $\pm$ 0.31 & -0.47 $\pm$ 0.28 \\
NH 22 & 211.37913 & 28.4753 & 3.64 & 23.59 $\pm$ 0.35 & 23.33 $\pm$ 0.26 & 21.48 $\pm$ 0.09 & 21.47 $\pm$ 0.13 & - & 21.59 $\pm$ 0.08 & -5.14 $\pm$ 0.62 & -0.61 $\pm$ 0.57 \\
NH 26 & 211.34568 & 28.54491 & 1.15 & 22.23 $\pm$ 0.18 & 22.12 $\pm$ 0.17 & 20.60 $\pm$ 0.07 & 20.39 $\pm$ 0.08 & - & 20.61 $\pm$ 0.09 & -5.69 $\pm$ 0.46 & -1.51 $\pm$ 0.44 \\
NH 30\textsuperscript{a} & 211.37085 & 28.52038 & 0.92 & 23.32 $\pm$ 0.36 & 22.84 $\pm$ 0.20 & 21.26 $\pm$ 0.09 & 21.66 $\pm$ 0.14 & - & - & -5.73 $\pm$ 0.45 & -0.30 $\pm$ 0.44 \\
NH 31\textsuperscript{a} & 211.39603 & 28.52215 & 1.84 & 22.15 $\pm$ 0.22 & 21.70 $\pm$ 0.13 & 20.55 $\pm$ 0.06 & 20.48 $\pm$ 0.06 & 21.82 $\pm$ 0.12 & 20.42 $\pm$ 0.07 & -5.68 $\pm$ 0.36 & -0.90 $\pm$ 0.35 \\
NH 48 & 211.37318 & 28.54648 & 0.87 & 23.17 $\pm$ 0.33 & 22.84 $\pm$ 0.25 & 21.03 $\pm$ 0.07 & 20.97 $\pm$ 0.13 & - & 21.06 $\pm$ 0.07 & -4.97 $\pm$ 0.50 & 1.43 $\pm$ 0.52 \\
NH 49\textsuperscript{b} & 211.35802 & 28.51746 & 1.07 & 23.68 $\pm$ 0.39 & 23.17 $\pm$ 0.25 & 21.30 $\pm$ 0.07 & 21.31 $\pm$ 0.10 & 23.53 $\pm$ 0.34 & - & -5.56 $\pm$ 0.52 & -0.83 $\pm$ 0.55 \\
NH 90 & 211.3281 & 28.54438 & 1.92 & 22.84 $\pm$ 0.28 & 22.81 $\pm$ 0.25 & 20.87 $\pm$ 0.09 & 20.56 $\pm$ 0.10 & - & - & -5.10 $\pm$ 0.38 & -1.32 $\pm$ 0.35 \\
NH 84 & 211.22649 & 28.69703 & 12.15 & 22.47 $\pm$ 0.25 & 22.04 $\pm$ 0.14 & 20.67 $\pm$ 0.06 & 20.39 $\pm$ 0.09 & 22.28 $\pm$ 0.13 & 20.68 $\pm$ 0.03 & -5.17 $\pm$ 0.44 & -0.87 $\pm$ 0.36 \\\hline
\label{bss}
\end{tabular}
}
\\\textsuperscript{a}Eclipsing and contact binaries \citep{Mateo}
\\\textsuperscript{b}SX Phe variable \citep{Jeon2004, Ferro}\\
\end{table*}
\section{GMOS-N spectroscopic data}
\label{specdata}
We obtained spectroscopic data using the GMOS-N spectrograph mounted on the 8.1-meter Gemini-North telescope for two sources detected in the FUV filters, namely NH 87 and NH 84 (see Table \ref{bss}). The observations were part of the Gemini program GN-2018A-FT-113 (PI: M. Simunovic) and were taken in June 2018. We used the R400\_G5305 grating and a 0\farcs75 long slit, which yielded a dispersion of 0.074 nm/pix and a spectral resolution R $\approx$ 1300 for the $\sim$ 460-900 nm spectral range. We took 4$\times$330 sec exposures at each central wavelength (700 and 705 nm) in order to cover the GMOS-N detector chip gaps. The data were reduced using standard IRAF routines available in the Gemini/GMOS package, which resulted in flux-calibrated spectra at a signal-to-noise of $\sim$60, shown in Figure \ref{specsed} for NH 84.
The spectrum of NH 87 showed a flat continuum and strong emission lines consistent with an HII region, suggesting it to be a star-forming galaxy at redshift z $\sim$ 0.09. This also supports the previous classification by SDSS of this object as a galaxy. Hence we discard this object as a contaminant and focus on the spectroscopic analysis of NH 84, which is confirmed to be a stellar source.
\subsection{Radial velocity and spectral fitting of NH 84}
\label{spec_fit}
The stellar parameters T$_{\mathrm{eff}}$ and log $g$ were obtained by fitting the shape of the H$_\alpha$ line, which is commonly used as a T$_{\mathrm{eff}}$ and log $g$ indicator that is also independent of rotational broadening. The spectral fitting method was a $\chi^2$ minimization using the pPXF python package \citep{capellari17} with a grid of synthetic spectra from the Coelho library \citep{coelho14}, fixed at [Fe/H] = $-2.0$ dex. The grid was limited to T$_{\mathrm{eff}}$ values between 7000$-$12000\,K in 250\,K steps, whereas log $g$ values were taken between 2.0 and 5.0 in 0.5 steps. The synthetic spectra were then degraded to the spectral resolution of the GMOS-N data and the pPXF spectral fitting was performed allowing only a radial velocity shift and no kinematic broadening. The observed spectrum of NH 84 and the best-fit model are shown in Figure~\ref{Halpha_fit}. We obtained T$_{\mathrm{eff}}$ = 8000 K and log $g$ = 4.0 for the best-fit parameters of NH 84. As can be seen in the lower panel of Figure~\ref{Halpha_fit}, the $\chi^2$ values are not distributed symmetrically around the minimum, and hence asymmetric uncertainties are present. To obtain robust uncertainties, we take the best-fit synthetic model, add random Gaussian noise such that its signal-to-noise ratio is 60, as in the observed data, and run pPXF to obtain the best-fit model of this artificial data sample. We run 1000 iterations and obtain probability distributions for the best-fit parameters. This way, we adopt T$_{\mathrm{eff}}$ = 8000$^{+1000}_{-250}$ K and log $g$ = 4.0$^{+0.5}_{-0.5}$, with uncertainties obtained from the parameter distribution interval that contains 95\% of the probability, as found with our Monte Carlo approach.
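A minimal sketch of this Monte Carlo procedure, with a plain grid $\chi^2$ fit standing in for the pPXF call (the grid steps and the signal-to-noise follow the text; all function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def best_fit(flux, grid):
    # grid: dict mapping (Teff, logg) -> synthetic flux on the same
    # wavelength sampling; return the grid point with minimum chi^2.
    return min(grid, key=lambda p: np.sum((flux - grid[p]) ** 2))

def mc_errors(best_model, grid, snr=60.0, n_iter=1000, seed=0):
    # Add Gaussian noise at the S/N of the data to the best-fit model
    # and refit; the spread of the recovered parameters gives the
    # (possibly asymmetric) uncertainties.
    rng = np.random.default_rng(seed)
    sigma = best_model / snr
    fits = [best_fit(best_model + rng.normal(0.0, sigma), grid)
            for _ in range(n_iter)]
    return np.array(fits)  # e.g., take the central 95% interval
\end{verbatim}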
We used the Fourier cross-correlation method to derive the radial velocity of NH 84. The data were cross-correlated against the best-fit synthetic spectra using the FXCOR routine in IRAF. The measured heliocentric radial velocity for NH 84 is $v_{helio} = 128 \pm 30$ km s$^{-1}$, which is consistent with previous measurements of the systemic radial velocity of NGC\,5466 found in the literature. \cite{Harris} reports 110.7 km s$^{-1}$, while \cite{shetrone2010} measured a weighted average value of 118.0 $\pm$ 0.4 km s$^{-1}$ for 67 stars, and \cite{lamb2015} obtain an average value of 121.05 km s$^{-1}$ from 3 stars in NGC\,5466. Hence our results are consistent with NH 84 being a kinematic cluster member.
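For illustration, a bare-bones version of such a cross-correlation velocity measurement is sketched below (FXCOR itself does considerably more, e.g., Fourier filtering and fitting of the correlation peak; the names are illustrative):
\begin{verbatim}
import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def rv_xcorr(wave, flux, template):
    # Resample both spectra onto a common log-wavelength grid, where a
    # shift of one pixel corresponds to a constant velocity step.
    logw = np.linspace(np.log(wave[0]), np.log(wave[-1]), wave.size)
    f = np.interp(logw, np.log(wave), flux)
    g = np.interp(logw, np.log(wave), template)
    f -= f.mean()
    g -= g.mean()
    cc = np.correlate(f, g, mode="full")
    lag = cc.argmax() - (f.size - 1)          # pixel lag of the CCF peak
    return C_KMS * lag * (logw[1] - logw[0])  # velocity in km/s
\end{verbatim}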
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Halpha_fit.pdf}
\caption{Top: GMOS-N spectrum of NH 84 around the H$_{\alpha}$ line. The best-fit model is shown in the red solid line and the residual values in green points. Bottom: Color map and contour plot of the reduced $\chi^2$ as a function of T$_{\mathrm{eff}}$ and log $g$. The best-fit value is marked as a red cross in the upper left corner.}
\label{Halpha_fit}
\end{figure}
\section{SED of BSS NH 84}\label{bssed}
In order to understand the multi-wavelength energy budget of the FUV-detected BSSs, we generated their SEDs and estimated their temperatures, luminosities and radii. We used the virtual observatory tool VOSA (VO SED Analyzer, \cite{Bayo}) for the SED analysis. VOSA calculates synthetic photometry for a selected theoretical model using filter transmission curves. It performs a $\chi^{2}$ minimization test by comparing the synthetic photometry with the observed data to get the best-fit parameters of the SED. We estimated the reduced $\chi^{2}$ value using the expression given by
\begin{equation}
\small
\chi^{2}_{red} =\frac{1}{N-N_{f}} \sum_{i=1}^{N}\Big\{\frac{(F_{o,i}-M_{d}F_{m,i})^{2}}{\sigma_{o,i}^{2}}\Big\}
\label{chi2}
\end{equation}
where N is the number of photometric data points, N$_{f}$ is the number of free parameters in the model, $F_{o,i}$ is the observed flux, $M_{d}F_{m,i}$ is the model flux of the star, $\displaystyle{M_{d}=\bigg(\frac{R}{D}\bigg)^{2}}$ is the scaling factor corresponding to the star (where R is the radius of the star and D is the distance to the star) and $\sigma_{o,i}$ is the error in the observed flux. We used the Kurucz stellar atmospheric models \citep{Castelli, Castelli2004} for the BSSs, which cover the UV to IR wavelength range. The model's free parameters are [Fe/H], T$_{eff}$ and log $g$. We fixed the metallicity to [Fe/H] = $-$2.0, close to the cluster metallicity, and varied the other two parameters ($T_{eff}$ and log $g$) in the Kurucz models to fit the SEDs of the BSSs.
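In code, Equation \ref{chi2} amounts to the following sketch (VOSA fits the scaling factor internally; here $M_{d}$ is computed from an assumed radius and distance expressed in the same unit):
\begin{verbatim}
import numpy as np

def reduced_chi2(f_obs, sig_obs, f_model, radius, distance, n_free=2):
    # Md = (R/D)^2 scales the model surface fluxes to observed fluxes;
    # radius and distance must be in the same unit.
    md = (radius / distance) ** 2
    chi2 = np.sum(((f_obs - md * f_model) / sig_obs) ** 2)
    return chi2 / (len(f_obs) - n_free)
\end{verbatim}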
The SED of the BSS NH 84 was constructed by combining the flux measurements of UVIT (4 passbands) with those of the GALEX (FUV and NUV), GAIA DR2 (3 passbands) \citep{Gaia2016,Gaia2018}, KPNO (3 passbands), SDSS (4 passbands) \citep{sdss2012}, and PAN-STARRS (2 passbands) \citep{pan3-2016} surveys (upper panel, Figure \ref{sednh84}) obtained from VO photometry. The number of photometric points used for constructing the SED of NH 84 is 16. The UV flux measurements of NH 84 along with the exposure times are given in Table \ref{uvflux}. We found that fitting the full SED with a single Kurucz model spectrum of T$_{eff}$ = 8000 $\pm$ 125 K and log $g$ = 4.0 $\pm$ 0.5 resulted in a large $\chi_{red}^{2} \sim $ 5.75 for the given degrees of freedom (14). This is clear from the residual plot shown in the lower panel of Figure \ref{sednh84}, which shows the difference between the observed flux and the synthetic flux, normalised with respect to the observed flux, corresponding to the flux measurements in each passband. We find that the residual plot shows a rise in flux in the UV wavelengths for a single-spectrum fit (shown as light-red empty triangles in the figure). Similarly, we checked the SEDs and the residual plots of the other 13 BSSs. If we find the residual to be more than 50\% in the FUV wavelengths, we classify the BSS as having UV excess. We found that 6 out of the 14 BSSs show UV excess. Our focus is on BSS NH 84 in this study, as we have the radial velocity membership confirmation from our spectroscopic study (Section \ref{specdata}).
\begin{table}
\centering
\caption{UV Flux measurements of BSS NH 84}
\setlength{\tabcolsep}{0.5pt}
\begin{tabular}{cccc}
\hline
Filter & Exposure time & Flux & Flux Error \\
& sec & erg cm$^{-2}$ s$^{-1}$ $\mathrm{\AA^{-1}}$ & erg cm$^{-2}$ s$^{-1}$ $\mathrm{\AA^{-1}}$\\\hline
\multicolumn{4}{c}{UVIT}\\\hline
F148W & 2609 & 5.12E-17 & 1.06E-17 \\
F169M & 6036 & 6.41E-17 & 7.96E-18 \\
N245M & 6091 & 9.87E-17 & 5.65E-18 \\
N263M & 4258 & 1.10E-16 & 8.29E-18 \\\hline
\multicolumn{4}{c}{GALEX}\\\hline
FUV & 1838 & 5.80E-17 & 6.45E-18 \\
NUV & 3529 & 1.19E-16 & 3.79E-18 \\\hline
\label{uvflux}
\end{tabular}
\end{table}
In order to address the UV-excess found in BSS NH 84, we generated a composite spectrum by combining the fluxes of Kurucz models for BSS \citep{Castelli, Castelli2004} and Koester WD models \citep{Tremblay2009} for the hot component. We independently obtained the SED fit parameters of the BSS using Kurucz models for a fixed metallicity ([Fe/H] = $-$2.0) considering wavelengths longer than 2000 $\AA$ and found that it is in agreement with the parameters obtained from spectra (Section \ref{spec_fit}). The SED fit parameters for the BSS are given in Table \ref{sedpar}. The Gemini spectrum is consistent with the photometric flux measurements and the best fit composite model (grey line), as shown in Figure \ref{specsed}. Note that the observed absorption features redward of the H$_{\alpha}$ line are telluric bands. Note also that the shown model (grey line) comes from very low resolution spectral models, which explains why the Balmer line shapes are not well matched, as compared to Figure \ref{Halpha_fit}. We found T$_{eff}$ = 8000 K as the best fit value for the cool component from both the SED and the spectrum. Keeping the BSS parameters fixed, we varied the parameters of the Koester WD models assuming a log $g$ = 7.5, to get the best fit combination for the full SED as given in Table \ref{sedpar}.
\begin{figure}
\includegraphics[width=\columnwidth]{sed_nh84_new.pdf}
\caption{SED of NH 84 with a composite spectrum (gray color) consisting of a Kurucz model (light-red color) and a Koester WD model (green color). The zoomed-in plot shows the FUV part of the SED fitted with a single and a composite spectrum, where the light-red empty triangles indicate the Kurucz synthetic flux and the grey empty squares indicate the combined synthetic flux. The residuals obtained with the single and composite spectrum fits are shown as light-red empty triangles and grey empty squares in the lower panel. See Section \ref{bssed} for details.}
\label{sednh84}
\end{figure}
The UV-excess part of the SED, fitted with a single Kurucz spectrum and with the composite spectrum, is shown in a zoomed-in plot of the SED (upper panel of Figure \ref{sednh84}), where the light-red empty triangles indicate the single-component Kurucz synthetic flux and the grey empty squares indicate the combined synthetic flux (Kurucz + Koester) in the respective FUV filters. When inspecting the zoomed-in panel in Figure \ref{sednh84}, the reader should focus their attention on the synthetic flux points (light-red empty triangles and grey empty squares) when comparing to the observed data points, instead of comparing to the solid-line model spectra, which give the misleading impression of a bad fit. The large residuals found for the single-spectrum fit reduce to almost zero with the composite spectrum fit, in particular the residuals in the UV filters (shown as grey empty squares in the lower panel of Figure \ref{sednh84}). Thus, NH 84 is found to have a hotter WD companion of temperature 32,000 $\pm$ 2000 K. The $\chi^{2}_{red}$ value for the composite spectrum of NH 84 is $\sim$ 1.62, corresponding to a 95\% confidence level. We note that the largest non-zero residuals are still at the far blue end, where the WD fit is supposed to compensate.
We estimated the basic parameters (luminosities and radii) of the components of BSS NH 84 using the values (T$_{eff}$, M$_{d}$) obtained from the SED fitting. For estimating the radii of the components, we used the definition of M$_{d}$ given in Equation \ref{chi2}, adopting a distance of 16 $\pm$ 0.6 kpc \citep{Ferro}. The radius of the cool component of BSS NH 84 is $\sim$ 1.44 $\pm$ 0.05 R$_{\odot}$, whereas that of the hot component is $\sim$ 0.021 $\pm$ 0.001 R$_{\odot}$, which is close to the typical radii of WDs \citep{Tremblay}. The uncertainties in the radii are estimated using the equation $\displaystyle{\Delta R=\frac{R\,\Delta D}{D}}$, where $\Delta D$ = 0.6 kpc is taken from \cite{Ferro}. We calculated the luminosities of the components of the BSS using the relation:
\begin{equation}
\frac{L}{L_{\odot}}= \bigg(\frac{R}{R_{\odot}}\bigg)^{2} \bigg(\frac{T_{\mathrm{eff}}}{T_{\odot}}\bigg)^{4}
\end{equation}
which are given in Table \ref{sedpar}. The hot component has a luminosity of $\sim$ 0.42 $\pm$ 0.11 L$_{\odot}$ whereas the cool component has $\sim$ 7.58 $\pm$ 1.10 L$_{\odot}$.
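These estimates can be reproduced with a few lines (a sketch in solar units, assuming $T_{\odot} \simeq 5772$ K):
\begin{verbatim}
import numpy as np

T_SUN = 5772.0  # K (assumed solar effective temperature)

def radius(md, dist, d_dist):
    # R = sqrt(Md) * D, with the distance-dominated error dR = R dD/D.
    r = np.sqrt(md) * dist
    return r, r * d_dist / dist

def luminosity(r_sun, teff):
    # L/Lsun = (R/Rsun)^2 * (Teff/Tsun)^4
    return r_sun ** 2 * (teff / T_SUN) ** 4

# E.g., the cool component: luminosity(1.44, 8000.) ~ 7.6 Lsun,
# and the hot component: luminosity(0.021, 32000.) ~ 0.42 Lsun.
\end{verbatim}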
\begin{figure}
\includegraphics[width=\columnwidth]{spec_bss84.pdf}
\caption{Gemini spectrum of NH 84 (cyan color) overplotted on the SED of NH 84. The wavelengths corresponding to H$_{\alpha}$ and H$_{\beta}$ absorption lines of the BSS are marked in the figure. Note that the absorption features redward of the H$_{\alpha}$ line are telluric bands.}
\label{specsed}
\end{figure}
\subsection{Uncertainties in the WD parameters}
\label{error}
In order to evaluate the upper limit of the uncertainty found in the T$_{eff}$ estimate of the BSS from spectroscopy (Section \ref{specdata}), we checked the SED fits for BSS temperatures ranging from 8000 to 9000 K using Kurucz models. We found that fitting the SED of the BSS with T$_{eff}$ = 8250 K gives a lower $\chi^{2}_{red}$ than with 8000 K. Though it brings down the residuals in the UV wavelengths from 50$\%$ to 30$\%$, the individual $\chi^{2}$ for the passband near the Balmer jump (KPNO B) increases with increasing T$_{eff}$. As the Balmer jump is very sensitive to the T$_{eff}$ of the cooler component, 8000 K is more appropriate for the T$_{eff}$ of the BSS from the SED fits. We also note that, if we consider a T$_{eff}$ of 8250 K for the BSS, then the best fitting parameters using Koester models (T$_{eff}$ and R) are 30,000 K and 0.014 R$_{\odot}$, which are also consistent with the hot component being a WD. The total $\chi^{2}_{red}$ value increases for temperatures larger than 8250 K for the BSS.
The typical log $g$ values for the WDs reported in GCs (NGC 6397, NGC 6752, 47 Tuc), based on the spectra from earlier studies \citep{Moehler2000, Moehler2004, Knigge2008}, lie in the range 7.5 - 7.8. The log $g$ values available in the Koester models that fall in this range are 7.5 and 7.75. We found that, for a fixed temperature and scaling factor, the SED fit is insensitive to the above two log $g$ values. This shows that the log $g$ value is not well constrained by the SED fit. We calculated the mass of the WD for the two different log $g$ values (7.5 and 7.75) using the relation:
\begin{equation}
\frac{M}{M_{\odot}}= \bigg(\frac{g}{g_{\odot}}\bigg) \bigg(\frac{R} {R_{\odot}}\bigg)^{2}
\end{equation}
For a fixed $T_{eff}$ and R of the WD given in Table \ref{sedpar}, we found that the mass of the WD varies from 0.5 to 0.9 $M_{\odot}$ for log $g$ values of 7.5 and 7.75. We assumed log $g$ = 7.5 in the SED fit as it corresponds to a WD mass of $\sim$ 0.51 $M_{\odot}$ for the given radius, which is close to the average mass of WDs (0.53 $\pm$ 0.02 $M_{\odot}$) suggested in GCs \citep{Renzini1988, Renzini1996}.
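Equivalently, in code (a sketch assuming $\log g_{\odot} \simeq 4.44$ in cgs units):
\begin{verbatim}
LOG_G_SUN = 4.438  # assumed solar log surface gravity (cgs)

def wd_mass(logg, r_sun):
    # M/Msun = (g/gsun) * (R/Rsun)^2
    return 10.0 ** (logg - LOG_G_SUN) * r_sun ** 2

# wd_mass(7.5, 0.021) ~ 0.51 Msun; wd_mass(7.75, 0.021) ~ 0.90 Msun
\end{verbatim}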
\begin{table}
\centering
\caption{SED fit parameters of NH 84}
\setlength{\tabcolsep}{8pt}
\begin{tabular}{ccc}
\hline
Parameters & BSS & WD \\\hline
$T_{eff}$ & 8000 $\pm$ 250 K & 32000 $\pm$ 2000 K\\
log $g$ & 4.0 $\pm$ 0.5 & 7.5 - 7.75\\
$M_{d}$ & 4.1E-24 & 8.8E-28\\\hline
\label{sedpar}
\end{tabular}
\end{table}
\section{Discussion}
\label{discus}
From the SED analysis, we found that 6 out of 14 BSSs show significant excess in the UV, among which one is a known contact binary (NH 19) and one is an SX Phe variable (NH 49). We studied the UV excess of BSS NH 84 in detail, as we have radial velocity measurements from the Gemini spectra in addition to the PM from GAIA DR2. The rest of the UV-excess BSSs will be studied in detail in the future.
The mass of the BSS NH 84 is $\sim$ 1.1 M$_{\odot}$ when compared with the Padova isochrone (Figure \ref{cmd}). The mass of the WD ranges from 0.5 to 0.9 M$_{\odot}$, as described in Section \ref{error}. This suggests that the hot component of the BSS NH 84 is more likely to be a C/O WD, as inferred from the fit parameters and the associated uncertainties.
A comparison of the L and $T_{eff}$ of the hot companion of BSS NH 84 with the Bergeron WD cooling models, which are computed for C/O WDs \citep{Tremblay2011}, suggests that the mass of the WD varies from 0.45 to 0.62 M$_{\odot}$, with a cooling age of $\sim$ 15 Myr. This indicates that the system might have recently undergone MT.
According to the initial-final mass relationship of \cite{Althaus2015} for low metallicity systems (Z=0.0001), the progenitor mass corresponds to 0.8 M$_{\odot}$ for a final WD mass of $\sim$ 0.51 M$_{\odot}$ (log $g$ = 7.5). This suggests that the progenitor mass is likely to be only slightly higher than the MSTO mass of the cluster. Thus, we speculate that the BSS could have formed as a result of a Case B or Case C MT \citep{Paczy1971}.
The WD parameters obtained for BSS NH 84 are similar to the parameters derived by \cite{Knigge2008} for a BSS-WD system in 47 Tuc. This is the second BSS-WD system to be detected in a GC after the first detection of one such system in 47 Tuc \citep{Knigge2008}.
We checked the PM membership of all the BSSs available in the catalog given by \cite{Sandquist} using GAIA DR2 and found 8 of them to be non-members. Of the 8 non-members, 3 (NH 64, 83, 86) are classified as quasars by \cite{sdss2012, quasar2015}. 3 BSSs (NH 85, 87, 89), which do not have PM information from GAIA DR2, are classified as galaxies by the SDSS survey \citep{sdss2015}. These 6 sources (galaxies and quasars) are mainly located outside 2$r_{h}$ of the cluster. Thus, we find that $\sim$ 15\% of the optically identified BSS population (12 sources) resides outside 1$r_{h}$. In this study, where we have identified 14 BSSs in the FUV, $\sim$ 14\% (2 BSSs) of the BSS population lie outside 1$r_{h}$ of the cluster. This shows that the distribution of FUV-detected BSSs is consistent with the distribution of optically identified BSSs in the cluster.
\cite{Beccari} found the radial distribution of BSSs in NGC 5466 to be bimodal, with a centrally-concentrated and an outer BSS sub-population, and a minimum in the radial surface density distribution at about $r\approx180\arcsec$. They estimated the binary fraction in the cluster outskirts ($400\arcsec < r < 800\arcsec$) to be $\sim$ 5$\%$, and concluded that the unperturbed evolution of primordial binaries could be the dominant formation mechanism for the BSSs in low density environments. NH 84 is located at $r\approx730\arcsec$ ($\sim 8.5 r_{c}$, \cite{Mclaugh2005}) from the cluster center, which is about half the tidal radius of the cluster ($r_{t} \sim 1580\arcsec$ \citep{Miocchi2013}). According to \cite{Ferraro2012}, NGC 5466 is in its dynamical infancy: the binaries present in the cluster outskirts have only recently begun to segregate towards the cluster center. In light of the bimodal radial BSS density distribution, we speculate that NH 84 might be a MT binary system that has not yet experienced any significant dynamical interaction with the ambient stellar population and has evolved in relative isolation so far. The consistent picture of the location of NH 84 in NGC 5466 and its dynamical age, together with the radially bimodal density distribution, suggests that MT is one of the primary BSS formation mechanisms in low density environments \citep{Knigge2009, Geller2011, Leigh2013, Gosnell2014}.
\section{Summary and Conclusions}
\label{sum}
The first results for the metal-poor globular cluster NGC 5466 from UVIT are presented here. The results are based on our observations in four filters of UVIT (2 FUV and 2 NUV) along with Gemini spectra.
Our study has led us to the following conclusions:
1. We detected 14 BSSs in NGC 5466, all of which have measured fluxes in all four UVIT filters and are likely proper motion members according to GAIA DR2.
2. The parameters of the BSS NH 84 obtained from the GMOS-N spectrum are T$_{eff}=$ 8000$^{+1000}_{-250}$ K and log $g=$ 4.0 $\pm$ 0.5. It is a radial velocity ($\sim$ 128 $\pm$ 30 km s$^{-1}$) member.
3. The SED decomposition analysis found the presence of a hot component in the SED of BSS NH 84. The hot component is found to have a temperature of T$_{eff}=$ 32000 $\pm$ 2000 K and a radius $\sim$ 0.02 R$_{\odot}$ suggesting it to be a WD.
4. NH 84 is the first BSS-WD candidate found in the outskirts of a low density GC. This is the second BSS-WD system reported in a GC. As NGC 5466 is a dynamically young cluster, this result suggests a MT pathway for BSS formation in low density environments.
\section{Acknowledgements}
We thank the anonymous referee for a very insightful and helpful report which improved the paper. We thank Avinash Singh for helping in python programming. This publication uses the data from the AstroSat mission of the Indian Space Research Organisation (ISRO), archived at the Indian Space Science Data Centre (ISSDC) which is a result of collaboration between IIA, Bengaluru, IUCAA, Pune, TIFR, Mumbai, several centres of ISRO, and CSA. Additionally, this research is based on the observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). This publication makes use of VOSA, developed under the Spanish Virtual Observatory project supported by the Spanish MINECO through grant AyA2017-84089. This research also made use of Topcat, Matplotlib, IPython, and Astropy, a community-developed core Python package for Astronomy.
\bibliographystyle{aasjournal}
\section{Introduction}
\label{sec:intro}
\begin{figure}[ht]
\centering
{\includegraphics[height=5.9cm, width=8.5cm]{mainpaper/images/teaser-new.png}}
\caption{\textbf{Motivation for MulT.} Our MulT model, which is a transformer-based encoder-decoder model with shared attention to learn task inter-dependencies, produces better results than both the dedicated 1-task transformer models (1-task Swin~\cite{swin}) and the state-of-the-art multitask CNN baseline~\cite{standley2019}.
}\label{fig:teaser}\vspace{-10pt}
\end{figure}
First proposed in~\cite{attention-is-all-you-need}, transformers have made great strides in a wide range of domains. For instance, previous works~\cite{devlin2019bert, Radford2018ImprovingLU, radford2019language, liu2019roberta, JMLR:v21:20-074, NEURIPS2019_dc6a7e65} have demonstrated that transformers trained on large datasets learn strong representations for many downstream language tasks; and models based on transformers have achieved promising results on image classification, object detection, and panoptic segmentation~\cite{pmlr-v80-parmar18a, dosovitskiy2021an, Bello_2019_ICCV, hu2018genet, ramachandran2019standalone, Wang_2018_CVPR, carion2020endtoend, zhu2021deformable, pmlr-v139-touvron21a}. In contrast to these works that focus on a single task, in this paper, we investigate the use of transformers for multitask learning.
Although a few works have studied the use of transformers to handle multiple \emph{input modalities}, such as images and text, they typically focus on a single \emph{task}, e.g., visual question
answering~\cite{hu2021unit,li2019visualbert, Lu_2020_CVPR, DBLP:journals/corr/abs-1911-06258}, with the exception of~\cite{hu2021unit}, which tackles several language tasks but a single vision one. By contrast, our goal is to connect multiple vision tasks covering the 2D, 3D, and semantic domains. To this end, we address the following questions: Can a transformer model trained jointly across tasks improve the performance in each task relative to single-task transformers? Can one explicitly encode dependencies across tasks in a transformer-based framework? Can a multitask transformer generalize to unseen domains?
To the best of our knowledge, only~\cite{IPT, spatiotemporalMTL, video-multitask-transformer} have touched upon the problem of addressing multiple tasks with transformers. However, none of these works aims to encode strong dependencies across the tasks beyond the use of a shared backbone.
Furthermore, IPT~\cite{IPT} handles solely low-level vision tasks, such as denoising, super-resolution and deraining, whereas~\cite{spatiotemporalMTL} focuses uniquely on the tasks of object detection and semantic segmentation and~\cite{video-multitask-transformer} on scene recognition and importance score prediction in videos. Here, we cover a much wider range of high-level vision tasks and explicitly model their dependencies.
To this end, we introduce MulT, which consists of a transformer-based encoder to transform the input image into a latent representation shared by the tasks, and transformer decoders with task-specific
heads producing the final predictions for each of the tasks. While the MulT encoder mainly utilizes the self-attention mechanism~\cite{bahdanau2016neural, parikh-etal-2016-decomposable} to extract intrinsic features, as do most transformers, we equip the decoders with a shared attention mechanism across the different vision tasks, thus allowing the overall framework to encode task dependencies.
Specifically, we leverage the query and key vectors from the encoder, along with the task-specific values in the decoder, to predict the task-specific outputs.
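To illustrate the idea, below is a minimal single-head PyTorch sketch of this shared attention (the actual MulT decoder blocks are windowed and multi-headed; all names are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn

class SharedAttention(nn.Module):
    # Queries/keys come from the shared encoded representation and the
    # resulting attention map is reused by every task; only the value
    # projections are task-specific.
    def __init__(self, dim, num_tasks):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.ModuleList([nn.Linear(dim, dim)
                                   for _ in range(num_tasks)])
        self.scale = dim ** -0.5

    def forward(self, shared, task_feats):
        # shared: (B, N, dim); task_feats: list of (B, N, dim) tensors.
        q, k = self.to_q(shared), self.to_k(shared)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return [attn @ v(t) for v, t in zip(self.to_v, task_feats)]
\end{verbatim}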
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose an end-to-end multitask transformer architecture that handles multiple high-level vision tasks in a single model.
\item We introduce a shared attention between the transformer decoders of the multiple tasks. This shared attention mechanism further improves the performance of each vision task.
\item Our framework lets us learn the inter-dependencies across high-level vision tasks.
\item We show that our model generalizes and adapts to new domains with a lower average error on the different vision tasks than the existing multitask convolutional models~\cite{zamir2020consistency, standley2019}.
\end{itemize}
Our exhaustive experiments and analyses across a variety of tasks show our MulT model not only improves the performance over single-task architectures, but also outperforms the state-of-the-art multitask CNN-based models (as shown in Figure~\ref{fig:teaser}) on standard benchmarks, such as Taskonomy~\cite{taskonomy2018}, Replica~\cite{replica}, NYU~\cite{NYU} and CocoDoom~\cite{cocodoom}.
\begin{figure*}[ht]
\centering
{\includegraphics[ width=0.90\linewidth ]{mainpaper/images/MulT_detailed.png}}
\caption{{\textbf{Detailed overview of our MulT architecture.} Our MulT model builds upon the Swin transformer~\cite{swin} backbone and models the dependencies between multiple vision tasks via a shared attention mechanism (shown in the bottom left), which we introduce in this work. The encoder module (in green) embeds a shared representation of the input image, which is then decoded by the transformer decoders (in blue) for the respective tasks. Note that the transformer decoders have the same architecture but different task heads. The overall model is jointly trained in a supervised manner using a weighted loss~\cite{gradnorm} of all the tasks involved. For clarity, only three tasks are depicted here. }
}\label{fig:detailed-model}\vspace{-10pt}
\end{figure*}
\section{Related Work}
\paragraph{Multitasking.}
In its most conventional form,
multi-task learning predicts multiple outputs out of a shared encoder/representation for an input~\cite{zhang2021survey}. Prior works~\cite{taskonomy2018, zamir2020consistency, standley2019, strezoski2019taskrouting, endtoendMTL} follow this architecture to jointly learn multiple vision tasks using a CNN. Leveraging this encoder-decoder architecture, IPT~\cite{IPT} was the first transformer-based multitask network aiming to solve low-level vision tasks after fine-tuning a large pre-trained network. This was followed by~\cite{spatiotemporalMTL}, which jointly addressed the tasks of object detection and semantic segmentation. Recently, \cite{video-multitask-transformer} used a similar architecture for scene and action understanding and score prediction in videos. However, none of these works connect such a wide range of vision tasks as we do, including 2D, 3D, and semantic domains. Furthermore, they do not explicitly model the dependencies between the tasks, which we achieve via our shared attention mechanism.
\vspace{-10pt}
\paragraph{Transformers.}
Transformers~\cite{attention-is-all-you-need} were originally introduced for language tasks, in particular for machine translation where they showed impressive improvements over recurrent-based encoder-decoder
architectures. Since then they have been widely applied to a great range of problems, including speech recognition~\cite{gulati2020conformer} and language modeling~\cite{dai2019transformerxl, devlin2019bert}.
In the vision domain, transformers have been used to extract visual features, replacing CNNs for object detection, image classification, segmentation and video representation learning~\cite{carion2020endtoend, zhu2021deformable, pmlr-v80-parmar18a, dosovitskiy2021an, Bello_2019_ICCV, hu2018genet, mttransunet}. Recently, several works, such as UniT~\cite{hu2021unit} and VILBERT-MT~\cite{li2019visualbert}, have learned multiple tasks from multimodal domains, such as vision and text. Here, however, we focus on a single input modality: images.
\vspace{-10pt}
\paragraph{Learning task inter-dependencies.}
Taskonomy~\cite{taskonomy2018} studied the relationships between multiple visual tasks for transfer learning and introduced a dataset with 4 million images and corresponding labels for 26 tasks. Following this, a number of recent works have further studied tasks relationships for transfer learning~\cite{NEURIPS2019_f490c742, DBLP:journals/corr/abs-1903-01092, dwivedi2019representation, achille2019task2vec}. However, these works differ from multitask learning, in the sense that they analyze a network trained on a source task and applied to a different target task, whereas we study the effect of leveraging multiple tasks during training. In~\cite{standley2019}, Standley et al. found notable differences between transfer task affinity and multi-task affinity and showed the benefits of leveraging structural similarities between tasks at all levels for multitask learning. In this work, we further study the task inter-dependencies, but by designing a multitask transformer model instead of a CNN one. Our MulT model lets us learn the inter-dependencies across high-level vision tasks and further improves the task inter-dependencies seen in CNN-based models.
\vspace{-10pt}
\paragraph{Attention mechanisms.}
While there have been a myriad of attention mechanisms~\cite{chu2021Twins, wang2021pvtv2, xu2021coscale, yang2021focal, wang2021crossformer, chen2021regionvit} to exploit long range dependencies using transformers, none of the prior works utilize a cross-task shared attention for multitask learning. This is what we propose in this work to handle multiple vision tasks.
\section{MulT: A Multitask Transformer}
Our model, MulT, follows the principle of a transformer encoder-decoder architecture~\cite{attention-is-all-you-need}. It consists of a transformer-based encoder to map the input image to a latent representation shared by the tasks, followed by transformer decoders with task-specific heads producing the predictions for the respective tasks. Figure~\ref{fig:detailed-model} shows an
overview of our MulT framework.
For our transformer-based encoder, we use a pyramidal backbone, namely the Swin Transformer~\cite{swin},
to embed the visual features into a list of hidden states that incorporates global contextual information. We then apply the transformer decoders to progressively decode and upsample the tokenized maps from the encoded image. Finally, the representation from the transformer decoder is passed to a task-specific head, such as a simple two layer classifier (in the case of segmentation), which outputs the final predictions. Given the simplicity of MulT, it can be extended easily to more tasks. We empirically show that our model can jointly learn 6
different tasks and generalizes well to new domains. The following sections
describe the details of each component in MulT.
\subsection{Encoder Module}
For the encoder, we adopt Swin-L~\cite{swin}, which applies stacked transformers to features of gradually decreasing resolution in a pyramidal manner, hence producing hierarchical multi-scale encoded features, as shown in Figure~\ref{fig:detailed-model}. In particular, following the ResNet~\cite{resnet} structure and design rules, four stages are defined in succession: each of them contains a patch embedding step, which reduces the spatial resolution and increases the channel dimension, and a columnar sequence of transformer blocks. The initial patch embedding in the first stage is performed with square patches of size $p_H = p_W = 4$ and with channel size $C = 192$, without the addition of the `class' token; the patch merging in all three subsequent stages takes the output tokens of the previous stage, reshapes them into a 2D representation and aggregates neighboring tokens in non-overlapping patches of size $p_H = p_W = 2$ through channel-wise concatenation and a linear transformation that halves the resulting number of channels (hence doubles the number of channels with respect to the input tokens). This approach halves the resolution and doubles the channel dimension at every intermediate stage, matching the behavior of typical fully-convolutional backbones and producing a feature pyramid (with output sizes of $\{1/4, 1/8, 1/16, 1/32\}$ of the original resolution) compatible with most previous architectures for vision tasks.
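To make the patch-merging step concrete, the following PyTorch sketch aggregates each $2\times2$ token neighborhood by channel-wise concatenation and halves the resulting $4C$ channels with a linear layer. This is a minimal illustration of the mechanism described above, not the paper's code; the class name and the pre-projection LayerNorm are our assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    """Downsampling between stages: concatenate each 2x2 neighborhood
    channel-wise (4C channels), then project linearly to 2C channels,
    halving the resolution and doubling the channels."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(4 * dim)
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x):
        # x: (B, H, W, C), with H and W even
        x0 = x[:, 0::2, 0::2, :]  # top-left token of each 2x2 patch
        x1 = x[:, 1::2, 0::2, :]  # bottom-left
        x2 = x[:, 0::2, 1::2, :]  # top-right
        x3 = x[:, 1::2, 1::2, :]  # bottom-right
        x = torch.cat([x0, x1, x2, x3], dim=-1)  # (B, H/2, W/2, 4C)
        return self.reduction(self.norm(x))      # (B, H/2, W/2, 2C)
\end{verbatim}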
Following~\cite{resnet}, most of the computation is concentrated in the third stage: out of a total of $N = 24$ transformer blocks, 2 blocks are in each of the first, second and fourth stages and 18 are in the third stage. In each block, the self-attention is repeated according to the number of heads, which grows with the stage to match the increase in channel dimension: $M = \{6, 12, 24, 48\}$ heads in the first, second, third and fourth stage, respectively. However, the high resolution in the first two stages does not allow the use of global self-attention, due to its quadratic complexity with respect to the token sequence length. To solve this issue, in all stages, the tokens, reshaped into a 2D representation, are divided into non-overlapping square windows of size $h = w = 7$, and the intra-window self-attention is independently computed for each of them. This means that each token attends only to the tokens in its own window, both as a query and as a key/value. A possible downside of this approach is that the restriction to fixed local windows completely stops any type of global or long-range interaction. The adopted solution is to alternate regular window partitioning with another non-overlapping partitioning in which the windows are shifted by half their size, $\lfloor h/2 \rfloor = \lfloor w/2 \rfloor = 3$, both in the height and width dimensions. This has the effect of gradually increasing the virtual receptive field of the subsequent attention computations.
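A minimal sketch of the regular and shifted window partitioning is given below, assuming channels-last feature maps whose spatial dimensions are divisible by the window size; the function names are ours, and padding, attention masking for shifted windows and the inverse operations are omitted for brevity.

\begin{verbatim}
import torch

def window_partition(x, ws=7):
    """Split a (B, H, W, C) map into non-overlapping ws x ws windows,
    returning (num_windows * B, ws * ws, C) sequences on which the
    intra-window self-attention is computed independently."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def shift_windows(x, ws=7):
    """Cyclically shift the map by half the window size so that the
    next block's windows straddle the previous partitioning, letting
    information flow across window boundaries."""
    return torch.roll(x, shifts=(-(ws // 2), -(ws // 2)), dims=(1, 2))
\end{verbatim}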
\subsection{Decoder Module}
Inspired by the two CNN-based decoders proposed in~\cite{SETR}, we develop conceptually similar transformer-based versions. The general idea is to replace convolutional layers with windowed transformer blocks. Specifically, our decoder architecture consists of four stages, each containing a sequence of 2 transformer blocks, for a total of 8. In each stage, the two sequential transformer blocks let us leverage inter-window connectivity by alternating regular and shifted window configurations, as in the encoder. Between consecutive stages, we use an upsampling layer to double the spatial resolution and halve the channel dimension; we therefore adjust the number of attention heads accordingly to $\{48, 24, 12, 6\}$ in the first, second, third and fourth stage, respectively. The spatial/channel shape of the resulting feature maps matches the outputs of the encoder stages, which are delivered to the corresponding decoder stages by skip connections. This yields an hourglass structure with mirrored encoder-decoder communication: the lower-resolution stages of the decoder are guided by the higher-level, deeper encoded features and the higher-resolution stages of the decoder are guided by the lower-level, shallower encoded features, allowing the network to gradually recover information in a coarse-to-fine manner and to exploit the different semantic levels where they are most relevant. Note that the first transformer block in each stage of the decoder uses a regular window partitioning while the second uses a shifted one; this can easily be extended to a longer sequence of transformer blocks, as long as its length is a multiple of 2, which makes it possible to alternate between the two configurations.
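The sketch below illustrates one such decoder stage under our reading of the text. Here \texttt{window\_block\_cls} stands in for the windowed transformer block (regular or shifted), which we assume to exist; bilinear upsampling followed by a $1\times1$ convolution is our assumption for the otherwise unspecified upsampling layer.

\begin{verbatim}
import torch.nn as nn

class DecoderStage(nn.Module):
    """One decoder stage: upsample (2x spatial, 1/2 channels), add the
    skip-connected encoder features of matching shape, then run a pair
    of windowed transformer blocks (regular + shifted partitioning)."""
    def __init__(self, dim, num_heads, window_block_cls):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)
        self.reduce = nn.Conv2d(dim, dim // 2, kernel_size=1)
        self.block_regular = window_block_cls(dim // 2, num_heads,
                                              shifted=False)
        self.block_shifted = window_block_cls(dim // 2, num_heads,
                                              shifted=True)

    def forward(self, x, skip):
        # x:    (B, dim,   H,  W) from the previous (deeper) stage
        # skip: (B, dim/2, 2H, 2W) from the encoder at this resolution
        x = self.reduce(self.up(x)) + skip
        return self.block_shifted(self.block_regular(x))
\end{verbatim}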
To perform multitask prediction, we share the encoder across all tasks and use task-specific decoders with the same architecture but different parameter values. We then simply
append task-specific heads to the decoder. For instance, a model jointly trained for semantic segmentation and depth prediction will have two task-specific heads: one predicting $K$ channels followed by a softmax for semantic segmentation and one predicting a single channel followed by a sigmoid for depth estimation.
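A plausible realization of these heads is sketched below; the text mentions both a two-layer classifier and a single linear layer for segmentation, so we show the simplest single-layer ($1\times1$ convolution) variant.

\begin{verbatim}
import torch.nn as nn

def make_task_head(task, dim, num_classes=None):
    """Task-specific heads as described in the text: K channels plus
    softmax for segmentation, one sigmoid channel for depth; other
    tasks follow the same pattern with the right channel count."""
    if task == 'segmentation':
        return nn.Sequential(nn.Conv2d(dim, num_classes, 1),
                             nn.Softmax(dim=1))
    if task == 'depth':
        return nn.Sequential(nn.Conv2d(dim, 1, 1), nn.Sigmoid())
    raise ValueError(f'unknown task: {task}')
\end{verbatim}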
\subsection{Shared Attention}
To account for the task dependencies beyond sharing encoder parameters, we develop a shared attention mechanism that integrates the information contained in the encoded features into the decoding stream.
Let us now describe how this mechanism works for one particular decoder stage. Note that we apply the same procedure for all decoder stages.
Formally, for one task $t$ and one particular decoder stage, let $x^t$ denote the upsampled output of the previous stage, and $x_{sa}$ the output of the encoding stage operating at the same resolution. As illustrated in Figure~\ref{fig:shared-attention-detailed}, the decoder stage takes both $x^t$ and $x_{sa}$ as input. The standard way to compute self-attention for task $t$ would be to obtain the key, query and value vectors from its own decoder output $x^t$ only. By contrast, for our shared attention, we use only one of the task streams to calculate the attention. That is, we compute a query $q_{sa}^{r}$ and a key $k_{sa}^r$ from $x_{sa}$ (coming from the encoder) by using the linear layers, shown in Figure~\ref{fig:shared-attention-detailed}, of the decoder of one particular reference task $r$. To nonetheless reflect the fact that the output of the decoder for task $t$ should be related to this particular task, we compute the values $v^t$ using the previous stage output $x^t$ for task $t$. Thus, we compute attention values from the reference task $r$ as
\begin{equation} \label{eq:method_shared_attention}
\begin{aligned}
A^r_{sa}=\text{softmax}\left(\frac{q^r_{sa}\cdot{k^r_{sa}}^{T}}{\sqrt{C^r_{qkv}}}+B^r\right),
\end{aligned}
\end{equation}
where $C^r_{qkv}$ is the number of channels of the query/key/value embeddings and $B^r$ is a learnable bias. For any task $t$, we then compute $\tilde{x}^t=A^r_{sa}v^t$. This $\tilde{x}^t$ is then used by the self-attention head $\text{head}^t_i(\cdot,\cdot)$ to compute $\text{head}^t_i(\tilde{x}^t_i,W^t_i)= \tilde{x}^t_i\cdot W^t_i$, where $W^t_i$ is the learnt attention weight for task $t$ and $\tilde{x}^t_i$ is the $i^{th}$ channel. Note that this formulation represents the $i^{th}$ instance of the self-attention, which is repeated $M$ times to obtain the multihead attention $\text{MHA}^t(\cdot,\cdot)$ for task $t$.
We then compute $x^t_{linear}$ as the output of $\text{MHA}^t(\tilde{x}^t,W)$, whose concatenated heads are linearly projected by $\mathbf{W}$. Finally, we obtain $y^t$ as follows:
\begin{equation} \label{eq:method_shared_attention-MHA}
\begin{aligned}
&\text{MHA}^t(\tilde{x}^t,W)=\text{Concat}(\text{head}^t_1,\cdots, \text{head}^t_M)\mathbf{W}\;,\\
& x^t_{linear}= \text{MHA}^t(\tilde{x}^t,W)\;,\\
& y^t = x^t + x^t_{linear}\;,
\end{aligned}
\end{equation}
where $\mathbf{W}$ indicates the multi-head attention weight. Empirically, we have found that the attention from the surface normal task stream benefits our 6-task MulT model, and we thus take this task as reference task $r$, whose attention is shared across the tasks. As shown in Figure~\ref{fig:shared-attention-detailed}, $x^r$ is the upsampled output of the previous stage of a particular decoder for the reference task, taken here as surface normal prediction.
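The following PyTorch sketch summarizes the mechanism of Equations~\ref{eq:method_shared_attention} and~\ref{eq:method_shared_attention-MHA} under simplifying assumptions: queries and keys come from the encoder features $x_{sa}$ through the reference task's projections, values from each task's own stream $x^t$, and the bias $B^r$ is reduced to a single learnable tensor. It is an illustration of our reading, not the exact implementation.

\begin{verbatim}
import math
import torch
import torch.nn as nn

class SharedAttention(nn.Module):
    """Shared attention: q, k from the encoder features through the
    reference task's linear layers; v from the task's own decoder
    stream; residual shortcut as in y^t = x^t + x^t_linear."""
    def __init__(self, dim, num_heads, num_tokens):
        super().__init__()
        self.h, self.d = num_heads, dim // num_heads
        self.q_ref = nn.Linear(dim, dim)  # reference task's query proj.
        self.k_ref = nn.Linear(dim, dim)  # reference task's key proj.
        self.v = nn.Linear(dim, dim)      # per-task value projection
        self.proj = nn.Linear(dim, dim)   # output projection (W)
        self.bias = nn.Parameter(
            torch.zeros(num_heads, num_tokens, num_tokens))  # B^r

    def forward(self, x_t, x_sa):
        # x_t, x_sa: (B, N, dim) token sequences at the same resolution
        B, N, D = x_t.shape
        q = self.q_ref(x_sa).view(B, N, self.h, self.d).transpose(1, 2)
        k = self.k_ref(x_sa).view(B, N, self.h, self.d).transpose(1, 2)
        v = self.v(x_t).view(B, N, self.h, self.d).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / math.sqrt(self.d) + self.bias
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, D)
        return x_t + self.proj(out)  # y^t
\end{verbatim}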
\begin{figure}[t]
\centering
{\includegraphics[height=6cm, width=8cm ]{mainpaper/images/shared-attention-new.png}}
\caption{Overview of our \textbf{shared attention} mechanism.
}\label{fig:shared-attention-detailed}\vspace{-10pt}
\end{figure}
Note that our shared attention differs from the co-attention introduced in prior work~\cite{Chefer_2021_CVPR}, where the value and key are passed via a skip connection from the encoder layers. Figure~\ref{fig:effect-of-shared-attention} shows the effect of adding our shared attention mechanism: MulT with shared attention improves the results on all tasks in comparison with our MulT model without it.
\begin{figure}[t]
\centering
{\includegraphics[ height= 4.8cm, width=8.5cm ]{mainpaper/images/Effect-of-SA.png}}
\caption{{\textbf{Motivation of the shared attention mechanism in our MulT model.} The shared attention mechanism learns the task inter-dependencies and improves the prediction for each task. For instance, the yellow circled region shows how our MulT model with shared attention across tasks improves the semantic segmentation performance, where the chair mask is correctly classified in our predictions, as in the ground truth. However, in the MulT model without the shared attention, the chair is predicted as a couch. Best viewed on screen and when zoomed in.}
}\label{fig:effect-of-shared-attention}\vspace{-10pt}
\end{figure}
\vspace{-15pt}
\paragraph{Task Heads and Loss.} The feature maps from the transformer decoder modules are fed to different task-specific heads to make the final predictions. Each head includes a single linear layer to output an $H \times W \times 1$ map, where $H$, $W$ are the input image dimensions. We employ a weighted sum~\cite{gradnorm} of task-specific losses to jointly train the network, where each loss is computed between the ground truth and the final predictions of the corresponding task. In particular, we use the cross-entropy loss for segmentation, the rotate loss~\cite{taskonomy2018} for depth, and the $L1$ loss for surface normals, 2D keypoints, 2D edges and reshading. Note that we employ these losses to maintain consistency with the baselines~\cite{taskonomy2018, standley2019, zamir2020consistency}.
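A sketch of the resulting training objective is shown below; it assumes raw logits for the segmentation head and substitutes an $L1$ term for the rotate loss of depth purely for illustration, since the latter is defined in~\cite{taskonomy2018}.

\begin{verbatim}
import torch.nn.functional as F

def multitask_loss(preds, targets, weights):
    """Weighted sum of task-specific losses: cross-entropy for
    segmentation, L1 for normals, keypoints, edges and reshading.
    The depth term is a placeholder for Taskonomy's rotate loss.
    `weights` would be set (or learned) as in GradNorm."""
    losses = {
        'segmentation': F.cross_entropy(preds['segmentation'],
                                        targets['segmentation']),
        'depth': F.l1_loss(preds['depth'], targets['depth']),
        'normals': F.l1_loss(preds['normals'], targets['normals']),
        'keypoints': F.l1_loss(preds['keypoints'],
                               targets['keypoints']),
        'edges': F.l1_loss(preds['edges'], targets['edges']),
        'reshading': F.l1_loss(preds['reshading'],
                               targets['reshading']),
    }
    return sum(weights[t] * losses[t] for t in losses)
\end{verbatim}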
\section{Experiments and Results}
To provide a thorough analysis of MulT and also compare it with well-established prior work, we
experiment with jointly learning prominent, high-level vision tasks.
\subsection{Datasets}
We evaluate MulT using the following datasets:
\vspace{-13pt}
\paragraph{Taskonomy~\cite{taskonomy2018}} is used as our main training dataset. It comprises 4 million real images of indoor scenes with multi-task annotations for each image. The experiments were performed using the following 6 tasks: \textit{semantic segmentation ($\mathcal{S}$), depth (zbuffer) ($\mathcal{D}$), surface normals ($\mathcal{N}$), 2D keypoints ($\mathcal{K}$), 2D (Sobel) texture edges (E) and reshading ($\mathcal{R}$)}. The tasks were selected to cover 2D, 3D, and semantic domains and to have sensor-based/semantic ground truth. We report results on the Taskonomy test set.
\vspace{-13pt}
\paragraph{Replica~\cite{replica}} comprises high-resolution 3D ground truth and enables more reliable evaluations of fine-grained details. We test all the networks on 1227 images from Replica (with and without fine-tuning).
\vspace{-13pt}
\paragraph{NYU~\cite{NYU}} comprises 1449 images from 464 different indoor scenes. We test all the networks on NYU (with and without fine-tuning).
\vspace{-13pt}
\paragraph{CocoDoom~\cite{cocodoom}} contains synthetic images from the Doom video game. We use it as an out-of-training-distribution dataset.
\subsection{Training Details}
We jointly train MulT on multiple tasks, including semantic segmentation, depth estimation, 2D keypoint detection, 2D edge detection, surface normal estimation and reshading. In our implementation, we train with a
batch size of 32 on 32 Nvidia V100-SXM2-32GB
GPUs in a distributed fashion, using PyTorch.
We use the AdamW optimizer~\cite{adam-w} with a
learning rate of $5\times10^{-5}$ and a warm-up cosine learning rate schedule (with 2000 warm-up iterations). The optimizer updates the model parameters based on the gradients of the weighted task losses.
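A minimal sketch of this recipe using PyTorch's built-in schedulers is given below; the paper does not specify the weight decay, so the default is kept.

\begin{verbatim}
import torch

def build_optimizer(model, total_iters, warmup_iters=2000, lr=5e-5):
    """AdamW with linear warm-up over 2000 iterations followed by
    cosine decay, matching the stated training recipe."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    warmup = torch.optim.lr_scheduler.LinearLR(
        opt, start_factor=1e-3, total_iters=warmup_iters)
    cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
        opt, T_max=total_iters - warmup_iters)
    sched = torch.optim.lr_scheduler.SequentialLR(
        opt, schedulers=[warmup, cosine], milestones=[warmup_iters])
    return opt, sched
\end{verbatim}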
\subsection{Baselines}
We compare our MulT model with the following state-of-the-art baselines.
\vspace{-15pt}
\paragraph{Baseline UNet (for single-task or independent learning)} constitutes our CNN-based baseline. We use it as a reference for all the multitask models.
\vspace{-15pt}
\paragraph{Baseline Swin transformer~\cite{swin} (for single-task or independent learning)} constitutes the single-task transformer baseline. It is almost identical to our MulT model, except for not including shared attention and for being trained on only one dedicated task. We use it to evaluate the benefits of our multitask learning strategy.
\vspace{-15pt}
\paragraph{Multi-task learning~\cite{kokkinos2016ubernet} (MTL)} comprises a network with one shared encoder and multiple decoders, each dedicated to a task. This baseline further identifies whether tasks are inter-dependent, such that a shared representation can give comparable performance across multiple tasks, without explicitly adding task constraints.
\vspace{-13pt}
\paragraph{Taskonomy~\cite{taskonomy2018}} studies the relationships between multiple visual tasks for transfer learning.
\vspace{-13pt}
\paragraph{Taskgrouping~\cite{standley2019}} studies task compatibility in multitask learning, thus providing a framework for determining which tasks should be trained jointly and which tasks should be trained separately.
\vspace{-13pt}
\paragraph{Cross-task consistency~\cite{zamir2020consistency}} presents a general and data-driven framework for augmenting standard supervised learning with cross-task consistency. It is inspired by Taskonomy~\cite{taskonomy2018} but adds a consistency constraint to learn multiple tasks jointly.
Note that we do not compare our method with the contemporary work~\cite{hu2021unit} as it focuses on \emph{bimodal} multitask learning for vision- and language-related tasks. By contrast, in this work, we tackle \emph{unimodal} multitask learning for high-level vision tasks. All the multitask baselines were trained using their best model configurations as in~\cite{kokkinos2016ubernet, taskonomy2018, standley2019, zamir2020consistency}, respectively.
\begin{table}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.85}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{7}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~~\textbf{ ~ Relative Performance On}} \\
& $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ \\
\arrayrulecolor{black}\hline
$\mathcal{S}$ & - & +3.83\% & -1.42\% & -1.33\% & +33.9\% & -0.80\% \\
$\mathcal{D}$ & +4.83\% & - & +2.77\% & -1.92\% & +35.2\% & +3.93\% \\
$\mathcal{N}$ & +11.3\% & +8.35\% & - & +91.2\% & +77.1\% & +9.09\% \\
$\mathcal{K}$ & +5.11\% & +0.57\% & -6.88\% & - & +70.1\% & +0.21\% \\
\textit{E} & +6.09\% & +4.33\% & -0.73\% & +4.75\% & - & +5.11\% \\
$\mathcal{R}$ & +8.61\% & +4.45\% & +5.91\% & +1.95\% & +33.9\% & - \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Quantitative comparison of our MulT model with a single-task dedicated Swin transformer baseline~\cite{swin}.} Our MulT model is jointly trained in a pairwise manner on the Taskonomy benchmark~\cite{taskonomy2018}. For instance, in the first row, second column, we show the results of our MulT model trained with semantic segmentation and depth in a pairwise manner, and tested on the task of depth estimation. The relative performance percentage for each task is evaluated by taking the percentage increase or decrease w.r.t. the single-task baseline. The results here are reported on the Taskonomy test set. The columns show the task tested on, and the rows show the other task used for
training.}
\label{tb:MulT-pairwise-results}%
\vspace{-10pt}
\end{table}%
\begin{table*}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.85}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{7}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ \textbf{ ~ Relative Performance On}} \\
\hline
\multicolumn{7}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ \textbf{ ~ Taskonomy Test Set~\cite{taskonomy2018}}} \\
& $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ \\
\arrayrulecolor{black}\hline
MTL~\cite{kokkinos2016ubernet} vs 1-task CNN~\cite{unet} & +2.05\% & +3.11\% & +4.38\% & -1.29 \% & +45.22\% & +2.99\% \\
Taskonomy~\cite{taskonomy2018} vs 1-task CNN~\cite{unet} & +2.63\% & -3.82 \% & +2.95\% & +10.13 \% & +59.05\% & +4.52\% \\
Taskgrouping~\cite{standley2019} vs 1-task CNN~\cite{unet} & +6.24\% & +3.36\% & +4.23\% & +21.77\% & +73.6 \% & +5.79\% \\
Cross-task~\cite{zamir2020consistency} vs 1-task CNN~\cite{unet} & +9.01\% & +6.77\% & +5.61\% & +23.20\% &+75.8 \% & +11.1\% \\
\hline
\textbf{MulT} vs 1-task Swin~\cite{swin} & \underline{+19.7\%} & \underline{+10.2\%} & \underline{+8.72\%} & \underline{+94.75\%} & \underline{+88.8\%} & \underline{+16.4\%} \\
\textbf{MulT} vs 1-task CNN~\cite{unet} & \textbf{+21.6\%} & \textbf{+11.5\%} & \textbf{+9.71\%} & \textbf{+97.04\%} & \textbf{+92.9\%} & \textbf{+21.0\%} \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Quantitative comparison of our MulT model with baselines when jointly trained for six tasks on the Taskonomy benchmark~\cite{taskonomy2018}.} Our six-task MulT model consistently outperforms all the baselines, including the multitasking CNN baselines and the single-task CNN and Swin baselines. The relative performance percentage for each task is evaluated by taking the percentage increase or decrease w.r.t. the single-task baseline. The results here are reported on the Taskonomy test set. Bold and underlined values show the best and second-best results, respectively. }
\label{tb:MulT-sixstream-results-taskonomy}%
\vspace{-10pt}
\end{table*}%
\begin{table*}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.8}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{black}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{6}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ \textbf{ ~ Relative Performance On}} \\ \hline
& \multicolumn{3}{l}{~ ~ ~ ~ ~ \textbf{ ~ Replica Dataset~\cite{replica}}} & \multicolumn{2}{l}{~ ~ \textbf{ ~ NYU Dataset~\cite{NYU}}} \\
& $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{R}$ & $\mathcal{S}$ & $\mathcal{D}$ \\
\arrayrulecolor{black}\hline
MTL~\cite{kokkinos2016ubernet} vs 1-task CNN~\cite{unet} & +2.53\% &+3.03\% & +1.87\% &+1.13\% & +2.72\% \\
Taskonomy~\cite{taskonomy2018} vs 1-task CNN~\cite{unet} &-4.55\% &+1.99\% &+3.33\% & +2.05\% &-4.07\% \\
Taskgrouping~\cite{standley2019} vs 1-task CNN~\cite{unet} & +2.75\% & +4.09\% & +5.47\% & +6.01\% & +2.91\% \\
Cross-task~\cite{zamir2020consistency} vs 1-task CNN~\cite{unet} & +5.10\% & +4.33\% & +9.55\% & +8.10\% & +5.71\% \\
\hline
\textbf{MulT} vs 1-task Swin~\cite{swin} & \underline{+8.33\%} & \underline{+7.05\%} & \underline{+14.2\%} & \underline{+13.3\%} & \underline{+8.54\%} \\
\textbf{MulT} vs 1-task CNN~\cite{unet} & \textbf{+10.1\%} & \textbf{+8.59\%} & \textbf{+19.6\%} & \textbf{+15.7\%} & \textbf{+10.4\%} \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Quantitative comparison of our MulT model with baselines on the Replica benchmark and the NYU benchmark.} We apply our MulT model, jointly trained on 6 tasks on the Taskonomy dataset, to test the depth, normals and reshading prediction performances on the Replica dataset~\cite{replica}, and the segmentation and depth prediction performance on the NYU dataset~\cite{NYU}. Our six-task MulT model consistently outperforms all the baselines, including the multitasking CNN baselines and the single-task CNN and Swin baselines. The relative performance percentage for each task is evaluated by taking the percentage increase or decrease w.r.t. the single-task baseline. Bold and underlined values show the best and second-best results, respectively. }
\label{tb:MulT-sixstream-results-replica-nyu}%
\vspace{-10pt}
\end{table*}%
\subsection{Quantitative Results}
\begin{figure*}[ht]
\centering
{\includegraphics[ height= 8cm, width=13.3cm]{mainpaper/images/QR-Taskonomy.png}}
\caption{\textbf{Qualitative comparison on the six vision tasks} of the Taskonomy benchmark~\cite{taskonomy2018}. From top to bottom, we show qualitative results using MTL~\cite{kokkinos2016ubernet}, Taskonomy~\cite{taskonomy2018}, Taskgrouping~\cite{standley2019}, Cross-task consistency~\cite{zamir2020consistency}, the single-task dedicated Swin transformer~\cite{swin} and our six-task \textbf{MulT} model. We show, from left to right, the input image, the semantic segmentation results, the depth predictions, the surface normal estimations, the 2D keypoint detections, the 2D edge detections and the reshading results for all the models. All models are jointly trained on the six vision tasks, except for the Swin transformer baseline, which is trained on the independent single tasks. Our MulT model outperforms both the single-task Swin baselines and the multitask CNN-based baselines. Best seen on screen and zoomed in within the yellow circled regions.
}\label{fig:qualitative-result-taskonomybenchmark}\vspace{-10pt}
\end{figure*}
The results in Table~\ref{tb:MulT-pairwise-results} show the relative performance of our MulT model when trained on pairs of tasks and tested on one of the two tasks. We observe that, out of the pairwise-trained multitask models, surface normals help the other vision tasks. However, the performance of normals tends to decrease w.r.t. its single-task dedicated model, except when used in conjunction with either depth prediction or reshading. Note that the trends we observe are similar to those shown in~\cite{standley2019} for the CNN case. This suggests that transformers follow a similar behavior to that of CNNs in the presence of multiple tasks.
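For reference, the relative performance figures reported throughout can be read under the following plausible formalization, stated for a quality metric $m_{\mathcal{T}}$ where higher is better (the sign is flipped for error metrics); the tables' captions describe it only in words:
\begin{equation*}
\Delta^{\text{rel}}_{\mathcal{T}} = \frac{m^{\text{multi}}_{\mathcal{T}} - m^{\text{single}}_{\mathcal{T}}}{m^{\text{single}}_{\mathcal{T}}} \times 100\%.
\end{equation*}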
For more than two tasks, we observe, as in~\cite{standley2019}, that effectively leveraging between 3 and 6 tasks requires increasing the size of the decoder modules. Altogether, reporting results for all possible task combinations would require training $(2^6-1)$ models. Here, we focus on the 6-task case, but provide 3-task, 4-task, and 5-task results in the supplementary material.
The results of our 6-task MulT model and of the baselines are reported in Table~\ref{tb:MulT-sixstream-results-taskonomy} and Table~\ref{tb:MulT-sixstream-results-replica-nyu} for the Taskonomy test set~\cite{taskonomy2018}, and the Replica~\cite{replica} and NYU~\cite{NYU} dataset, respectively. Our MulT model outperforms the multitask CNN baselines as well as the 1-task CNN and Swin ones.
Furthermore, as can be verified from the results in the supplementary material, increasing the number of tasks improves the results of our MulT model, e.g., a 6-task network outperforms a 5-task one, which in turn outperforms a 4-task network.
\subsection{Qualitative Results}
We qualitatively compare the results of our MulT model with the different CNN-based multitask baselines~\cite{kokkinos2016ubernet, taskonomy2018, standley2019, zamir2020consistency}, as well as with the single-task dedicated Swin transformer~\cite{swin}. The results in Figure~\ref{fig:qualitative-result-taskonomybenchmark} show the performance of the different networks on all six vision tasks.
All the multitask models are jointly trained on the six tasks of the Taskonomy benchmark, and the single-task dedicated Swin models are trained on the respective tasks. Our MulT model yields higher-quality predictions than both the single-task Swin baselines and the multitask CNN baselines. We provide additional qualitative results in the supplementary material.
\subsection{Generalization to New Domains}
In this section, we demonstrate how well MulT generalizes to new domains without any fine-tuning,
and how efficiently MulT can adapt to a new domain by fine-tuning on a small set of training examples from the new domain. To this end, we compare our MulT model and the two baselines of Taskgrouping (TG)~\cite{standley2019} and Cross-task consistency (CT)~\cite{zamir2020consistency} on two new domains, namely, Gaussian-blurred images from Taskonomy~\cite{gaussian} and images from the Cocodoom~\cite{cocodoom} dataset. Note that all the networks were trained on the vanilla Taskonomy dataset~\cite{taskonomy2018}. When fine-tuning the networks, we use either 16 or 128 images from the new domain. The original training data (Taskonomy) is retained during fine-tuning to prevent the networks from forgetting the original domain.
The results in Table~\ref{tb:MulT-generalization-results} and Figure~\ref{fig:generalization} show that our MulT model yields better generalization and adaptation to new domains, both with and without fine-tuning.
These findings confirm the observations made in~\cite{transformer-more-robust-than-cnns} for the single-task scenario. The cross-task consistency model~\cite{zamir2020consistency} performs better than the Taskgrouping baseline~\cite{standley2019} because of its explicitly enforced consistency constraint, whereas the Taskgrouping model suffers from its joint task pairings and the lack of an attention mechanism or additional constraint. Nevertheless, our MulT model outperforms both these baselines and shows better generalization.
\begin{figure*}[ht]
\centering
{\includegraphics[ height=6cm, width=13.5cm]{mainpaper/images/generalisation.png}}
\caption{{\textbf{Generalization to new domains.} Our MulT model generalizes better to new domains than the Cross-task~\cite{zamir2020consistency} baseline, both when fine-tuned and not fine-tuned, across the tasks of surface normal prediction and reshading. This shows the benefits of our shared attention module. We test the models on two target domains, Gaussian blur applied to the Taskonomy images~\cite{taskonomy2018} and the out-of-distribution CocoDoom dataset~\cite{cocodoom}. Best viewed on screen and when zoomed in the yellow circled regions.}
}\label{fig:generalization}\vspace{-10pt}
\end{figure*}
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\scalebox{0.8}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{black}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{black}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{8}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ ~ Generalization to New Domains}} \\ \hline
& No. of &\multicolumn{3}{l}{\textbf{~ Error (w/ Fine-tuning)$\downarrow$}} & \multicolumn{3}{l}{ \textbf{Error (w/o Fine-tuning)$\downarrow$}} \\
Domains & images & MulT & ~ ~CT~\cite{zamir2020consistency} & TG~\cite{standley2019} & MulT& ~ ~CT~\cite{zamir2020consistency} & TG~\cite{standley2019} \\
\arrayrulecolor{black}\hline
Blur~\cite{gaussian} & 128 & \textbf{12.6} & \underline{17.4} & 21.9 & \multirow{2}{*}{\textbf{27.0}} & \multirow{2}{*}{\underline{46.2}} & \multirow{2}{*} {55.1} \\
\small{(Taskonomy)} & 16 & 17.5 & 22.2 & 26.3 & & & \\
\hline
CocoDoom & 128 & \textbf{13.3} & \underline{18.5} & 25.3 & \multirow{2}{*}{\textbf{39.3}} &\multirow{2}{*}{\underline{ 54.3}} & \multirow{2}{*}{67.7} \\
~\cite{cocodoom} & 16 & 20.9 & 27.1 & 39.9 & & & \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Domain generalization on Taskonomy blur data~\cite{gaussian} and CocoDoom~\cite{cocodoom}.} Our MulT model shows better abilities to generalize and adapt to new domains, both with and without fine-tuning. Bold and underlined values show the best and second-best results, respectively. }
\label{tb:MulT-generalization-results}%
\vspace{-10pt}
\end{table}%
\vspace{-15pt}
\paragraph{Supplementary Material.} We defer to the supplementary material additional discussions and experiments, in particular an analysis of the effect of the shared attention in our MulT model and of the effect of the network size for different task combinations, as well as additional qualitative results. There, we also analyze the number of parameters required by each model and the environmental impact of training such models.
\section{Conclusion and Limitations}
In this work, we have shown that the transformer framework can be applied to jointly handle multiple tasks within a single end-to-end encoder-decoder framework. Our MulT model simultaneously addresses 6 different vision tasks, learning them in a single training step and
outperforming an independent single task model on each task with a compact set of shared parameters. This allows us to use a single network to handle multiple vision tasks instead of multiple single task networks, thereby reducing the computational cost, for both training and inference. Furthermore, our MulT model outperforms the state-of-the-art CNN-based multitasking models, in terms of both performance in the original domain and generalization/adaptation to new domains.
Our current framework nonetheless suffers from some limitations:
\vspace{-15pt}
\paragraph{Data dependency.} Although we validated our findings using various architectures and benchmarks, the results of our approach, like those of any deep learning method, are in principle data-specific. In particular, MulT is a data-intensive architecture, and when trained on a limited amount of data, it may thus not achieve the performance reported in this work. Note, however, that this is also the case for both single-task transformers and the CNN-based multitask baselines.
\vspace{-15pt}
\paragraph{Unpaired Data.} Our current framework, as the CNN-based multitask baselines, requires paired training data. Extending our approach to unlabeled/unpaired data, as in~\cite{zhu2020unpaired, Bhattacharjee_2020_CVPR}, appears feasible and remains open for future work.
\vspace{-15pt}
\paragraph{Modeling efficient attention.} Our current framework makes use of shared attention across the visual tasks. Extending this concept to incorporate local versus global attention, as in~\cite{yang2021focal}, appears feasible and remains open for future work.
Besides addressing these limitations, in the future, we plan to extend our methodology to learning different types of tasks, such as edge occlusions, principal curvatures and unsupervised segmentation, and to performing zero-shot learning on new tasks. In addition, it would be worthwhile to explore the robustness of large-scale multitask transformers to adversarial tasks, which could become increasingly problematic as the number and variety of tasks grow.
\vspace{2pt}
\textbf{Acknowledgement.} This work was supported in part by the Swiss National Science Foundation via the Sinergia grant CRSII5$-$180359.
\section{Effect of shared attention}
To account for the task dependencies beyond sharing encoder parameters, we develop a shared attention mechanism that integrates the information contained in the encoded features into the decoding stream. Empirically, we have found that the attention from the surface normal task stream benefits our 6-task MulT model, and we thus take this task as the reference task $r$, whose attention is shared across the tasks.
In Table~\ref{tb:MulT-shared-attention}, we show the relative performance of our 6-task MulT model w.r.t. a single-task dedicated Swin transformer baseline~\cite{swin} under two settings. In the first setting, the 6-task MulT model is trained \textit{without} the shared attention across the 6 tasks, whereas in the second setting our MulT model is trained \textit{with} the shared attention. The shared attention mechanism benefits the performance of our MulT model, allowing it to learn task inter-dependencies. Under both settings, the models use the increased network size.
\begin{table*}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.95}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{7}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ ~ Relative Performance On}} \\
& $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ \\
\arrayrulecolor{black}\hline
6-task MulT w/o shared attention & \underline{+15.0\%} & \underline{+8.13\% } & \underline{+6.92\% } & \underline{+42.9\% } & \underline{+81.3\% } & \underline{+14.8\% } \\
6-task MulT w/ shared attention & \textbf{+19.7\% } & \textbf{+10.2\% } & \textbf{+8.72\%} & \textbf{+94.75\% } & \textbf{+88.8\% } & \textbf{+16.4\% } \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Effect of shared attention on our MulT model.} We show the relative performance of our 6-task MulT model w.r.t. a single-task dedicated Swin transformer baseline~\cite{swin} under two settings: \textit{without} and \textit{with} the shared attention mechanism. Note that, under both settings, our MulT model uses the increased network size. The relative performance percentage for each task is evaluated by taking the percentage increase or decrease w.r.t. the single-task dedicated Swin transformer baseline~\cite{swin}. The shared attention mechanism benefits the performance of our MulT model, allowing it to learn task inter-dependencies. The results here are reported on the Taskonomy test set. Bold and underlined values show the best and second-best results, respectively. }
\label{tb:MulT-shared-attention}%
\end{table*}%
\paragraph{Feature fusion method.}
We further explore different feature fusion methods, such as concatenation and cross-attention. Concatenating all the features did not help our network learn task inter-dependencies in our preliminary experiments and is thus not reported. Our method \emph{is} a learnable fusion strategy, relying on a learnable shared attention (SA) mechanism. We also tried the cross-attention (CA) mechanism from CrossViT~\cite{crossvit}, but it did not outperform our SA mechanism in the given multitask setting, as seen in Table~\ref{tb:MulT-cross-attention}.
\vspace{-12pt}
\begin{table}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.75}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{7}{l}{ ~\textbf{ ~ Relative Performance for 6 task MulT vs 1-task SWIN on Taskonomy}} \\
& $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ \\
\arrayrulecolor{black}\hline
MulT w/ CA & +1.06\% & +5.11\% & -3.33\% & +13.3\% & +25.9\% & +0.06\% \\
MulT w/ SA & \textbf{+19.7\% } & \textbf{+10.2\% } & \textbf{+8.72\%} & \textbf{+94.75\% } & \textbf{+88.8\% } & \textbf{+16.4\% } \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Quantitative comparison of training the 6-task MulT model on the Taskonomy benchmark~\cite{taskonomy2018} with Cross attention (CA)~\cite{crossvit} and our proposed shared attention (SA).} Our shared attention mechanism benefits MulT where it consistently outperforms the MulT with CA method. Bold values show the best results. }
\label{tb:MulT-cross-attention}%
\end{table}%
\section{Task combinations}
We now show the effect of different task combinations on the relative performance of each task. From our experiments in Table~\ref{tb:task-combinations-network-size}, we observe that the performance of 2D keypoints, 2D edges and segmentation benefits from the inclusion of other tasks like surface normal estimation, depth and reshading. In particular, surface normal estimation is the most beneficial task for the other tasks: any task combined with surface normal estimation leverages the surface statistics to improve its performance.
We also observe that increasing the number of tasks improves the results of our MulT model, e.g., a 6-task network outperforms a 5-task one, which in turn outperforms a 4-task network. Note that all the models in Table~\ref{tb:task-combinations-network-size} are trained with shared attention to learn task inter-dependencies.
\begin{table*}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.75}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{black}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{14}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ ~ Effect of the network size on different task combinations for Taskonomy test~\cite{taskonomy2018}}} \\
\hline
& &\multicolumn{6}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ w/ increased network size}}& \multicolumn{6}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ w/o increased network size}} \\
No. of
Tasks & Trained on & $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ & $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ \\
\arrayrulecolor{black}\hline
\multirow{15}{*}{4-task MulT} & $\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}$ & +13.8\% & +8.36\% & +6.91\% & +82.2\% & - & - &+7.84\% &+6.95\% &+5.07\% &+75.4\% &- & - \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\textit{E}$ &+14.0\% & +8.38\% & +7.05\% & - & +74.9\% &- &+8.08\% &+7.11\% &+5.10\% &- &+63.3\% &- \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{R}$ & +14.2\% & +8.55\% & +7.17\% & - & - & +9.13\% &+8.11\% &+7.20\% &+5.33\% &- &- & +6.77\%
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}+\textit{E}$ & +13.5\% & +8.08\% & - & +73.0\% & +74.6\% & - &+7.41\% &+6.84\% &- &+67.7\% &+62.7\% & -
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}+\mathcal{R}$ & +14.0\% & +8.22\% & - & +72.4\% & - & +8.91\% &+8.03\% &+6.95\% &- &+66.2\% &- & +6.39\%
\\
&$\mathcal{S}+\mathcal{D}+\textit{E}+\mathcal{R}$ & +14.3\% & +8.30\% & - & - & +73.1\% & +9.04\% &+8.22\% &+6.98\% &- &- &+61.5\% & +6.73\% \\
&$\mathcal{S}+\mathcal{N}+\textit{E}+\mathcal{R}$ & +15.0\% & - & +7.13\% & - & +73.9\% & +9.17\% &+8.80\% &- &+5.28\% &- &+61.8\% & +6.80\% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}+\mathcal{R}$ & +14.9\% & - & +7.01\% & +87.5\% & - & +8.99\% &+8.61\% &- &+5.12\% &+79.0\% &- & +6.45\% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}+\textit{E}$ & +14.7\% & - & +6.89\% & +88.4\% & +75.4\% & - &+8.55\% &- &+5.05\% &+79.7\% &+66.9\% & - \\
&$\mathcal{S}+\mathcal{K}+\textit{E}+\mathcal{R}$ & +13.7\% & - & - & +73.5\% & +74.5\% & +8.97\% &+7.72\% &- &- &+68.9\% &+62.5\% & +6.42\% \\
&$\mathcal{D}+\mathcal{K}+\textit{E}+\mathcal{R}$ & - & +7.91\% & - & +73.3\% & +74.8\% & +9.88\% &- &+6.63\% &- &+68.4\% &+63.0\% & +7.00\% \\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}+\mathcal{R}$ &- & +8.44\% & +7.20\% & +87.0\% & - & +10.3\% &- &+7.15\% &+5.40\% &+78.8\% &- & +7.33\% \\
&$\mathcal{D}+\mathcal{N}+\textit{E}+\mathcal{R}$ & - & +8.63\% & +7.25\% & - & +75.5\% & +11.1\% &- &+7.29\% &+5.49\% &-&+66.8\% & +8.12\% \\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}$ & - & +8.40\% & +7.10\% & +87.2\% & +75.8\% & - &- &+7.12\% &+5.20\% &+79.2\% &+67.0\% & - \\
&$\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ & - & - & +7.12\% & +88.8\% & +75.0\% & +10.6\% &- &- &+5.27\% &+80.1\% &+66.1\% & +7.74\% \\
\hline
\multirow{6}{*}{5-task MulT} & $\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}$ &+17.2\% & +9.07\% &+8.11 \% & +92.5 \% & +82.6\% & - &+11.6\% &+7.75\% &+6.16\% &+89.9\% &+72.5\% &- \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}+\mathcal{R}$ & +17.7\% & +9.10\% & +7.59\% & +92.0\% &- & +12.9\% &+12.0\% &+7.91\% &+5.94\% &+89.5\% &- &+10.0 \% \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\textit{E}+\mathcal{R}$ & +16.9\% & +9.22\% & \underline{+8.26\%} & - & \underline{+82.9\%} & +12.7\% &+10.8\% &+8.08\% &\underline{+6.47\%} &- &\underline{+72.9\%} &+9.71\%
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}+\textit{E}+\mathcal{R}$ & +15.1\% & +8.86\% &- & +75.0\% & +78.8\% & +10.2\% & +9.10\% &+7.47\% &- &+70.7\% &+67.7\% &+7.80\% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ & \underline{+18.3\%} &- & +7.33\% & \underline{+94.1\%} & +82.2\% & +13.0\% &\underline{+12.5\%} &- &+5.55\% &\underline{+91.9\%} &+72.2\% & +10.3\%
\\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ & - & +\underline{9.77\%} & +8.06\% & +93.9\% &+82.6\% & \underline{+13.8\% } &- &\underline{+8.33\%}&+6.11\% &+91.6\% &+72.5\% &\underline{+10.7\% }
\\
\hline
6-task MulT& $\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ &\textbf{+19.7\%} &\textbf{+10.2\%} &\textbf{+8.72\%} &\textbf{+94.7\%} &\textbf{+88.8\%}&\textbf{+16.4\%} &\textbf{+13.8\%}&\textbf{+9.11\%}&\textbf{+6.99\%} &\textbf{+92.5\%} &\textbf{+78.3\%} &\textbf{+12.9\%} \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Quantitative comparison of training different task combinations in our MulT model on the Taskonomy benchmark~\cite{taskonomy2018}.} Increasing the number of tasks improves the results of our MulT models: a 6-task network outperforms a 5-task one, which in turn outperforms a 4-task network. Note that all the models are trained with shared attention to learn task inter-dependencies. The relative performance percentage for each task is evaluated by taking the percentage increase or decrease w.r.t. the single-task Swin~\cite{swin} baseline. Bold and underlined values show the best and second-best results, respectively. }
\label{tb:task-combinations-network-size}%
\vspace{-10pt}
\end{table*}%
\mycomment{
\begin{table*}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.75}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{black}\vrule}l!{\color{black}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{black}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{14}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ ~ Effect of the network size on different task combinations for Taskonomy test~\cite{taskonomy2018}}} \\
\hline
& &\multicolumn{6}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ w/ increased network size}}& \multicolumn{6}{l}{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textbf{ w/o increased network size}} \\
No. of
Tasks & Trained on & $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ & $\mathcal{S}$ & $\mathcal{D}$ & $\mathcal{N}$ & $\mathcal{K}$ & \textit{E} & $\mathcal{R}$ \\
\arrayrulecolor{black}\hline
\multirow{20}{*}{3-task MulT} & $\mathcal{S}+\mathcal{D}+\mathcal{N}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{S}+\mathcal{D}+\textit{E}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \%
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \%
\\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \%
\\
&$\mathcal{S}+\mathcal{N}+\textit{E}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{S}+\textit{E}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{S}+\mathcal{K}+\textit{E}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{S}+\mathcal{K}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{D}+\mathcal{N}+\textit{E}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{D}+\mathcal{N}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{D}+\mathcal{K}+\textit{E}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{D}+\mathcal{K}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{D}+\textit{E}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{N}+\textit{E}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{N}+\mathcal{K}+\textit{E}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{N}+\mathcal{K}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
&$\mathcal{K}+\textit{E}+\mathcal{R}$ & \% & \% & \% & \% & \% & \% &\% &\% &\% &\% &\% & \% \\
\hline
\multirow{15}{*}{4-task MulT} & $\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}$ & +13.8\% & +8.36\% & +6.91\% & +82.2\% & - & - &+7.84\% &+6.95\% &+5.07\% &+75.4\% &- & - \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\textit{E}$ &+14.0\% & +8.38\% & +7.05\% & - & +74.9\% &- &+8.08\% &+7.11\% &+5.10\% &- &+63.3\% &- \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{R}$ & +14.2\% & +8.55\% & +7.17\% & - & - & +9.13\% &+8.11\% &+7.20\% &+5.33\% &- &- & +6.77\%
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}+\textit{E}$ & +13.5\% & +8.08\% & - & +73.0\% & +74.6\% & - &+7.41\% &+6.84\% &- &+67.7\% &+62.7\% & -
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}+\mathcal{R}$ & +14.0\% & +8.22\% & - & +72.4\% & - & +8.91\% &+8.03\% &+6.95\% &- &+66.2\% &- & +6.39\%
\\
&$\mathcal{S}+\mathcal{D}+\textit{E}+\mathcal{R}$ & +14.3\% & +8.30\% & - & - & +73.1\% & +9.04\% &+8.22\% &+6.98\% &- &- &+61.5\% & +6.73\% \\
&$\mathcal{S}+\mathcal{N}+\textit{E}+\mathcal{R}$ & +15.0\% & - & +7.13\% & - & +73.9\% & +9.17\% &+8.80\% &- &+5.28\% &- &+61.8\% & +6.80\% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}+\mathcal{R}$ & +14.9\% & - & +7.01\% & +87.5\% & - & +8.99\% &+8.61\% &- &+5.12\% &+79.0\% &- & +6.45\% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}+\textit{E}$ & +14.7\% & - & +6.89\% & +88.4\% & +75.4\% & - &+8.55\% &- &+5.05\% &+79.7\% &+66.9\% & - \\
&$\mathcal{S}+\mathcal{K}+\textit{E}+\mathcal{R}$ & +13.7\% & - & - & +73.5\% & +74.5\% & +8.97\% &+7.72\% &- &- &+68.9\% &+62.5\% & +6.42\% \\
&$\mathcal{D}+\mathcal{K}+\textit{E}+\mathcal{R}$ & - & +7.91\% & - & +73.3\% & +74.8\% & +9.88\% &- &+6.63\% &- &+68.4\% &+63.0\% & +7.00\% \\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}+\mathcal{R}$ &- & +8.44\% & +7.20\% & +87.0\% & - & +10.3\% &- &+7.15\% &+5.40\% &+78.8\% &- & +7.33\% \\
&$\mathcal{D}+\mathcal{N}+\textit{E}+\mathcal{R}$ & - & +8.63\% & +7.25\% & - & +75.5\% & +11.1\% &- &+7.29\% &+5.49\% &-&+66.8\% & +8.12\% \\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}$ & - & +8.40\% & +7.10\% & +87.2\% & +75.8\% & - &- &+7.12\% &+5.20\% &+79.2\% &+67.0\% & - \\
&$\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ & - & - & +7.12\% & +88.8\% & +75.0\% & +10.6\% &- &- &+5.27\% &+80.1\% &+66.1\% & +7.74\% \\
\hline
\multirow{6}{*}{5-task MulT} & $\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}$ &+17.2\% & +9.07\% &+8.11 \% & +92.5 \% & +82.6\% & - &+11.6\% &+7.75\% &+6.16\% &+89.9\% &+72.5\% &- \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}+\mathcal{R}$ & +17.7\% & +9.10\% & +7.59\% & +92.0\% &- & +12.9\% &+12.0\% &+7.91\% &+5.94\% &+89.5\% &- &+10.0 \% \\
&$\mathcal{S}+\mathcal{D}+\mathcal{N}+\textit{E}+\mathcal{R}$ & +16.9\% & +9.22\% & \underline{+8.26\%} & - & \underline{+82.9\%} & +12.7\% &+10.8\% &+8.08\% &\underline{+6.47\%} &- &\underline{+72.9\%} &+9.71\%
\\
&$\mathcal{S}+\mathcal{D}+\mathcal{K}+\textit{E}+\mathcal{R}$ & +15.1\% & +8.86\% &- & +75.0\% & +78.8\% & +10.2\% & +9.10\% &+7.47\% &- &+70.7\% &+67.7\% &+7.80\% \\
&$\mathcal{S}+\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ & \underline{+18.3\%} &- & +7.33\% & \underline{+94.1\%} & +82.2\% & +13.0\% &\underline{+12.5\%} &- &+5.55\% &\underline{+91.9\%} &+72.2\% & +10.3\%
\\
&$\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ & - & \underline{+9.77\%} & +8.06\% & +93.9\% &+82.6\% & \underline{+13.8\%} &- &\underline{+8.33\%}&+6.11\% &+91.6\% &+72.5\% &\underline{+10.7\%}
\\
\hline
6-task MulT& $\mathcal{S}+\mathcal{D}+\mathcal{N}+\mathcal{K}+\textit{E}+\mathcal{R}$ &\textbf{+19.7\%} &\textbf{+10.2\%} &\textbf{+8.72\%} &\textbf{+94.7\%} &\textbf{+88.8\%}&\textbf{+16.4\%} &\textbf{+13.8\%}&\textbf{+9.11\%}&\textbf{+6.99\%} &\textbf{+92.5\%} &\textbf{+78.3\%} &\textbf{+12.9\%} \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Quantitative comparison of training different task combinations in our MulT model on the Taskonomy benchmark~\cite{taskonomy2018}.} Increasing the number of tasks improves the results of our MulT models, where a 6-task network outperforms a 5-task one, which in turn outperforms a 4-task network. Note that all the models are trained with shared attention to learn task inter-dependencies. The relative performance percentage for each task is evaluated by taking the percentage increase or decrease w.r.t. the single-task Swin~\cite{swin} baseline. Bold and underlined values show the best and second-best results, respectively.}
\label{tb:task-combinations-network-size-updated}%
\vspace{-10pt}
\end{table*}%
}
\vspace{-5pt}
\section{Effect of network size}
As more tasks are added to our MulT model, we observe, as in~\cite{standley2019}, that effectively leveraging between 3 and 6 tasks requires increasing the size of the network modules. Altogether, reporting results for all possible task combinations requires training $(2^6-1)$ models.
We find that increasing the network size has a significant effect on the relative performance of the different tasks. We quantitatively evaluate all the task combinations in the
4-task, 5-task and 6-task settings, with and without an increase in the network size. For the normal network size, we use Swin-T as the backbone~\cite{swin}, containing ${(2,2,6,2)}$ transformer blocks in the respective stages of the encoder, whereas for the increased network size we use Swin-L~\cite{swin} as the backbone, containing ${(2,2,18,2)}$ transformer blocks in the respective stages of the encoder. The increase in the number of transformer blocks in the third stage of the encoder in the Swin-L backbone helps to accommodate the increased number of tasks. Our best performing MulT model comprises the increased network size and shared attention.
In Table~\ref{tb:task-combinations-network-size-updated}, we observe that increasing the number of tasks improves the results of our MulT model, where a 6-task network outperforms a 5-task one, which in turn outperforms a 4-task network. Note that all the models in Table~\ref{tb:task-combinations-network-size-updated} are trained with shared attention to learn task inter-dependencies.
\section{Parameter comparison}
We show the number of parameters learnt by our 6-task MulT model \textit{without} an increased network size and compare it to the number of parameters learnt by the multitasking Resnet50 baseline and the single dedicated Swin-Tiny (Swin-T) baseline. Further, we show the number of parameters learnt by our 6-task MulT model \textit{with} an increased network size and compare it to the number of parameters learnt by the multitasking Resnet152 baseline and the single dedicated Swin-Large (Swin-L) baseline. We see that our MulT model, both without and with an increased network size, is more parameter-efficient than the dedicated 1-task Swin-T and Swin-L models, respectively. Note that the parameters and inference times of six 1-task Swin-T models and six 1-task Swin-L models are added to obtain the totals for all six tasks. Our MulT model learns more parameters than the multitasking CNN baselines~\cite{resnet} but infers the final predictions across the six tasks in comparable time.
\begin{table}[ht]
\setlength\tabcolsep{3pt}
\centering
\scalebox{0.8}{
\arrayrulecolor{black}
\begin{tabular}{!{\color{white}\vrule}l!{\color{white}\vrule}c!{\color{white}\vrule}c}
\hline
\multicolumn{3}{c}{\textbf{Parameter Comparison}} \\
Model & No. of Params (M) & Inference time (ms) \\
\arrayrulecolor{black}\hline
Multitasking Resnet50~\cite{resnet} & 153.6 & 12 \\
six 1-task Swin-T~\cite{swin} & 344 & 90 \\
MulT w/o increased network & 231 & 13 \\
\hline
Multitasking Resnet152~\cite{resnet} & 361.2 & 27 \\
six 1-task Swin-L~\cite{swin} & 728 & 198 \\
MulT w/ increased network & 545 & 29 \\
\arrayrulecolor{black}\hline
\end{tabular}}%
\setlength{\abovecaptionskip}{1mm}
\caption{\textbf{Parameter comparison of our 6-task MulT model with the baselines.} We see that our MulT model, both without and with an increased network size, is more parameter-efficient than the dedicated 1-task Swin-T and Swin-L models, respectively. Note that the parameters and inference times of six 1-task Swin-T models and six 1-task Swin-L models are added to obtain the totals for all six tasks. Further, our MulT model learns more parameters than the multitasking CNN baselines~\cite{resnet} but infers the final predictions across the six tasks in comparable time.}
\label{tb:parameter-comparison}%
\vspace{-10pt}
\end{table}%
\section{Additional qualitative results}
We qualitatively compare the results of our MulT model with different CNN-based multitask baselines~\cite{kokkinos2016ubernet, taskonomy2018, standley2019, zamir2020consistency}, as well as with the single task dedicated Swin transformer~\cite{swin}. The results in Figure~\ref{fig:qualitative-result-taskonomybenchmark1} and Figure~\ref{fig:qualitative-result-replica} show the performance of the different networks across multiple vision tasks on the Taskonomy benchmark~\cite{taskonomy2018} and Replica test set~\cite{replica}, respectively.
All the multitasking models are jointly trained on the six tasks on the Taskonomy benchmark, and the single task dedicated Swin models are trained on the respective tasks. Our MulT model yields higher-quality predictions than both the single task Swin baselines and the multitask CNN baselines.
\begin{figure*}[ht]
\centering
{\includegraphics[ height= 9.5cm, width=14.3cm]{mainpaper/images/QR-taskonomy-1.png}}
\vspace{13pt}
{\includegraphics[ height= 9.5cm, width=14.3cm]{mainpaper/images/QR-Taskonomy-2.png}}
\caption{\textbf{Qualitative comparison on the six vision tasks} of the Taskonomy benchmark~\cite{taskonomy2018}. From top to bottom, we show qualitative results using MTL~\cite{kokkinos2016ubernet}, Taskonomy~\cite{taskonomy2018}, Taskgrouping~\cite{standley2019}, Cross-task consistency~\cite{zamir2020consistency}, the single-task dedicated Swin transformer~\cite{swin} and our six-task \textbf{MulT} model. We show, from left to right, the input image, the semantic segmentation results, the depth predictions, the surface normal estimations, the 2D keypoint detections, the 2D edge detections and the reshading results for all the models. All models are jointly trained on the six vision tasks, except for the Swin transformer baseline, which is trained on the independent single tasks. Our MulT model outperforms both the single task Swin baselines and the multitask CNN based baselines. Best seen on screen and zoomed within the yellow circled regions.
}\label{fig:qualitative-result-taskonomybenchmark1}\vspace{-10pt}
\end{figure*}
\begin{figure*}[ht]
\centering
{\includegraphics[ height= 8.5cm, width=10.3cm]{mainpaper/images/QR-replica-1.png}}
\vspace{13pt}
{\includegraphics[ height= 9.7cm, width=10.3cm]{mainpaper/images/QR-replica-3.png}}
\caption{\textbf{Qualitative comparison on the three vision tasks} of the Replica benchmark~\cite{replica}. From top to bottom, we show qualitative results using MTL~\cite{kokkinos2016ubernet}, Taskonomy~\cite{taskonomy2018}, Taskgrouping~\cite{standley2019}, Cross-task consistency~\cite{zamir2020consistency}, the single-task dedicated Swin transformer~\cite{swin} and our six-task \textbf{MulT} model. We show, from left to right, the input image, the depth predictions, the surface normal estimations and the reshading results for all the models. All models are jointly trained on the \textit{six} vision tasks of the Taskonomy benchmark and are then fine-tuned to the Replica official training set, except for the Swin transformer baseline, which is trained on the independent \textit{single} tasks. Our MulT model outperforms both the single task Swin baselines and the multitask CNN based baselines. Best seen on screen and zoomed within the yellow circled regions.
}\label{fig:qualitative-result-replica}\vspace{-10pt}
\end{figure*}
\section{Environmental impact}
Models consume power both during training and during inference. However, a bigger source of energy consumption today arises after the models are deployed, i.e., during the inference stage~\cite{consumption-study}. Nvidia estimated that in 2019, 80--90\% of the cost of a model is incurred during inference. Compounding this, machine learning practitioners waste substantial resources on redundant training~\cite{consumption-study}.
By being a multitask framework, our MulT model helps to reduce the power consumption during inference, unlike the single task baselines that need to be run multiple times to obtain the predictions for the different tasks. As shown in Table~\ref{tb:parameter-comparison}, our MulT model requires less inference time than the single task transformer baselines while reporting better performance.
Nonetheless, training our MulT model in the cloud takes 21 hours on 32 Nvidia V100-SXM2-32GB GPUs, where a single GPU emits 3.11~kg of CO$_2$ with a CO$_2$ offset of 1.55~kg~\cite{mlco2-calculator}. This is equivalent to 12.5 kilometers driven by an average car~\cite{lacoste2019quantifying}.
To mitigate the carbon footprint of training our model, we use reputable carbon offsets as well as follow a centralised cloud infrastructure with sustainable power supplies. Furthermore, by employing an efficient shared attention mechanism, such as that of~\cite{yang2021focal}, which operates in linear time, we can extend our mitigation efforts and reduce the overall hours of GPU computation.
\newpage
\clearpage
\subsection{Steady-state Hyper-elasticity}
The first of our examples is a hyper-elastic material model in the
context of large deformation, applied to a cylindrical tube. The
strong form can be expressed as: find $U_i$ such that
\begin{equation}
\begin{cases}
\nabla \cdot \mathbf{P} = \boldsymbol{0}\ \ &\text{on}\ \Omega\\
U_i=G_i\ \ &\text{on}\ \Gamma_{D_i}\\
\mathbf{P}\mathbf{N} = \boldsymbol{0}\ \ &\text{on}\ \Gamma_{N_i}\\
\end{cases}
\end{equation}
where $\mathbf{P}$ is the first Piola-Kirchhoff stress tensor, $G_i$
is the prescribed displacement on the Dirichlet boundary
$\Gamma_{D_{i}}$, and $\mathbf{N}$ is the outward normal of the Neumann
boundary $\Gamma_{N_i}$ (see~\cite{SimoHughes,CompMech} for more
details). We use a Neo-Hookean material model, which relates the
second Piola-Kirchhoff stress tensor $\mathbf{S}$ to the right
Cauchy-Green strain tensor, $\mathbf{C}$, by the relationship
\[\mathbf{S} = \frac{\lambda}{2}\left(J^2-1\right)\mathbf{C}^{-1} + \mu\left(\mathbf{I}-\mathbf{C}^{-1}\right)\]
where $\mathbf{P}=\mathbf{F}\mathbf{S}$, $\mathbf{F}$ is the gradient
of the deformation, $J=\det(\mathbf{F})$, and $\lambda,\mu$ are the
Lam\'{e} constants from linear elasticity. We solve the linearized
weak form of these equations using Newton's method in an
updated-Lagrangian approach. In figure~\ref{f:tube}(a) we depict the
undeformed geometry, a circular tube discretized with a mesh of
$16\times 64\times 4$ quadratic B-spline functions. The right side of
the tube is fixed and the left side is displaced to the left over 15
load steps. In this case we configured PETSc to interface with MUMPS
and solved the system using this parallel direct solver. The deformed
configuration is shown in figure~\ref{f:tube}(b).
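For concreteness, the constitutive evaluation above can be sketched in a
few lines of C. The following routine is illustrative only: the function
and variable names are ours (not part of PetIGA), and arrays are assumed
to be $3\times3$, indexed by row and column. The first Piola-Kirchhoff
stress then follows as $\mathbf{P}=\mathbf{F}\mathbf{S}$.
\begin{verbatim}
/* Sketch: Neo-Hookean second Piola-Kirchhoff stress S from the
 * deformation gradient F; lambda and mu are the Lame constants. */
static double Det3(const double A[3][3])
{
  return A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
       - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
       + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]);
}

void NeoHookeanStress(const double F[3][3], double lambda, double mu,
                      double S[3][3])
{
  double C[3][3], Cinv[3][3], J, detC;
  int i, j, k;
  /* Right Cauchy-Green tensor C = F^T F */
  for (i = 0; i < 3; i++)
    for (j = 0; j < 3; j++) {
      C[i][j] = 0.0;
      for (k = 0; k < 3; k++) C[i][j] += F[k][i]*F[k][j];
    }
  J    = Det3(F);
  detC = J*J;                      /* det(C) = det(F)^2 */
  /* Inverse of C via the adjugate, with cyclic cofactor indices */
  for (i = 0; i < 3; i++)
    for (j = 0; j < 3; j++) {
      int i1 = (i+1)%3, i2 = (i+2)%3, j1 = (j+1)%3, j2 = (j+2)%3;
      Cinv[j][i] = (C[i1][j1]*C[i2][j2] - C[i1][j2]*C[i2][j1])/detC;
    }
  /* S = lambda/2 (J^2 - 1) C^{-1} + mu (I - C^{-1}) */
  for (i = 0; i < 3; i++)
    for (j = 0; j < 3; j++) {
      double Iij = (i == j) ? 1.0 : 0.0;
      S[i][j] = 0.5*lambda*(J*J - 1.0)*Cinv[i][j]
              + mu*(Iij - Cinv[i][j]);
    }
}
\end{verbatim}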
\begin{figure}
\centering
\subfloat[Undeformed tube]{\includegraphics[width=0.48\textwidth]{./figs/tube0.png}}
\subfloat[Deformed tube]{\includegraphics[width=0.48\textwidth]{./figs/tube1.png}}
\caption{Unscaled deformation of an aluminum tube, modeled using a Neo-Hookean material model.}\label{f:tube}
\end{figure}
\subsection{Time-dependent Problems}
The following three examples discretize time-dependent, nonlinear
problems. We first detail our solution strategy for these
problems. For simplicity, we consider a scalar problem. We seek to
find $\dot{u},u \in \mathcal{U}$ such that
\[R(w;\dot{u},u)=0\ \ \forall w \in \mathcal{W}\]
We use a semi-discrete approach by discretizing $u,\dot{u}$ in space
with finite dimensional subspaces $\mathcal{U}^h \subset \mathcal{U}$,
leaving the problem continuous in time. The span of the set of basis
functions $\{N_B(x)\}_{B=1\dots n}$, $x \in \Omega$, defines the chosen
subspace $\mathcal{U}^h$. Similarly, we choose $\mathcal{W}^h \subset
\mathcal{W}$ where $\text{span}\left(\{N_A(x)\}_{A=1\dots n}\right)$
defines $\mathcal{W}^h$.
\begin{align*}
u^h(x,t) &= \sum_B U_B(t)N_B(x)\\
\dot{u}^h(x,t) &= \sum_B \dot{U}_B(t)N_B(x)
\end{align*}
We denote $\mathbf{U}=\{U_B\},\mathbf{\dot{U}}=\{\dot{U}_B\}$. We
can then write the residual vector,
\[\mathbf{R}\left(\mathbf{U},\mathbf{\dot{U}}\right) = \{R_A\}\]
where
\[R_A=R(N_A;\dot{u}^h,u^h)\]
We discretize in time using the generalized-$\alpha$ method for first
order systems~\cite{Jansen2000}. Given
$\mathbf{U}_n,\mathbf{\dot{U}}_n$, we seek
$\mathbf{U}_{n+1},\mathbf{\dot{U}}_{n+1},\mathbf{U}_{n+\alpha_f},\mathbf{\dot{U}}_{n+\alpha_m}$
such that
\begin{equation}
\begin{aligned}
\mathbf{R}\left(\mathbf{U}_{n+\alpha_f},\dot{\mathbf{U}}_{n+\alpha_m}\right)&=0\\
\mathbf{U}_{n+\alpha_f}&=\mathbf{U}_{n}+\alpha_f\left(\mathbf{U}_{n+1}-\mathbf{U}_{n}\right)\\
\mathbf{\dot{U}}_{n+\alpha_m}&=\mathbf{\dot{U}}_{n}+\alpha_m\left(\mathbf{\dot{U}}_{n+1}-\mathbf{\dot{U}}_{n}\right)\\
\mathbf{U}_{n+1}&=\mathbf{U}_n + \Delta t\left(\left(1-\gamma\right)\dot{\mathbf{U}}_{n} + \gamma \dot{\mathbf{U}}_{n+1}\right)
\end{aligned}\label{e:ga}
\end{equation}
where $\Delta t=t_{n+1}-t_{n}$, and $\alpha_f,\alpha_m,\gamma$ are
parameters which define the method. The generalized-$\alpha$ method
was designed to filter (damp) high frequency modes of the solution
which are under-approximated. As opposed to linear problems, high and
low frequency modes interact in nonlinear problems. Spurious high
frequency modes lead to contamination of the resolved modes of the
problem. The method parameters $\alpha_f,\alpha_m,\gamma$ can be
chosen using the spectral radius $\rho_{\infty}\in[0,1]$ of the
amplification matrix as $\Delta t\rightarrow\infty$ by
\begin{align*}
\alpha_m &= \frac{1}{2}\left(\frac{3-\rho_{\infty}}{1+\rho_{\infty}}\right)\\
\alpha_f &= \frac{1}{1+\rho_{\infty}}\\
\gamma & = \frac{1}{2} + \alpha_m - \alpha_f
\end{align*}
which leads to a second order, unconditionally stable method. This
$\rho_{\infty}$ uniquely defines the method and can be chosen to
filter a desired amount of high frequency modes.
It is the authors' experience that when approaching time-dependent,
nonlinear problems, the generalized-$\alpha$ method is effective. The
impact of the interaction of low and high frequency modes is problem
dependent and not something necessarily understood in advance. In the
case where no filtering is needed ($\rho_{\infty}=1$), the
generalized-$\alpha$ method reduces to Crank-Nicolson's method. Due to
the popularity of the generalized-$\alpha$ method particularly among
researchers in the isogometric community, we have added it to PETSc's
time stepping algorithms and is available apart from the PetIGA
framework. In the examples that follow, we solve equation~\eqref{e:ga}
iteratively using Newton's method, which requires the computation of
the Jacobian of the residual vector $\mathbf{R}$.
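As an illustration, the parameter selection and the stage updates of
equation~\eqref{e:ga} can be written compactly in C. The sketch below is
ours and is not PETSc's API; the nonlinear solve for
$\mathbf{\dot{U}}_{n+1}$ (the Newton iteration mentioned above) is left
abstract, so the routine only recovers the remaining stage vectors once a
candidate $\mathbf{\dot{U}}_{n+1}$ is available.
\begin{verbatim}
/* Method parameters from the spectral radius rho in [0,1]. */
void GAlphaParams(double rho, double *am, double *af, double *gamma)
{
  *am    = 0.5*(3.0 - rho)/(1.0 + rho);
  *af    = 1.0/(1.0 + rho);
  *gamma = 0.5 + *am - *af;
}

/* Given U_n (U0), Udot_n (V0), and a candidate Udot_{n+1} (V1),
 * recover U_{n+1} (U1), U_{n+alpha_f} (Ua), Udot_{n+alpha_m} (Va). */
void GAlphaUpdate(int n, double dt, double af, double am, double gamma,
                  const double *U0, const double *V0, const double *V1,
                  double *U1, double *Ua, double *Va)
{
  int i;
  for (i = 0; i < n; i++) {
    U1[i] = U0[i] + dt*((1.0 - gamma)*V0[i] + gamma*V1[i]);
    Ua[i] = U0[i] + af*(U1[i] - U0[i]);
    Va[i] = V0[i] + am*(V1[i] - V0[i]);
  }
}
\end{verbatim}
At each Newton iteration one updates the candidate
$\mathbf{\dot{U}}_{n+1}$, recomputes the stage vectors with the routine
above, and evaluates the residual
$\mathbf{R}(\mathbf{U}_{n+\alpha_f},\mathbf{\dot{U}}_{n+\alpha_m})$ until
convergence.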
\subsubsection{Cahn-Hilliard Equation}
We solve the dimensionless Cahn-Hilliard equation, adapted from
\cite{Gomez2008}. The strong form is expressed as
\begin{equation}
\begin{cases}
\dfrac{\partial c}{\partial t} - \nabla \cdot \left( M_c\nabla\left(\mu_c-\Delta c\right)\right) = 0\ \ &\text{on}\ \Omega \times (0,T]\\
c=c_0\ \ &\text{on}\ \Omega \times \{t=0\}\\
M_c \nabla\left( \mu_c-\Delta c\right)\cdot \boldsymbol{n} = s\ \ &\text{on}\ \Gamma_s \times (0,T]\\
M_c \lambda \nabla c \cdot \boldsymbol{n} = 0\ \ &\text{on}\ \Gamma \times (0,T]\\
c = g\ \ &\text{on}\ \Gamma_D \times (0,T]
\end{cases}
\end{equation}
where $c$ is the concentration, $M_c$ is the mobility and $\mu_c$ is
the chemical potential. We show snapshots of the isocontours of the
solution at different times during the simulation in
figure~\ref{f:ch}.
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{./figs/ch3d0000.png}
\includegraphics[width=0.32\textwidth]{./figs/ch3d0047.png}
\includegraphics[width=0.32\textwidth]{./figs/ch3d0077.png}\\
\includegraphics[width=0.32\textwidth]{./figs/ch3d0128.png}
\includegraphics[width=0.32\textwidth]{./figs/ch3d0186.png}
\includegraphics[width=0.32\textwidth]{./figs/ch3d0235.png}
\caption{Transient solution to the Cahn-Hilliard problem in three
spatial dimensions, subject to a random initial condition and
periodic boundary conditions. The weak form is discretized in space
by a mesh of $256^3$ elements of $C^1$ quadratic
B-splines.}\label{f:ch}
\end{figure}
\subsubsection{Navier-Stokes-Korteweg System}
We solve the isothermal Navier-Stokes-Korteweg equation as detailed in
\cite{Gomez2010}. The strong form is expressed as
\begin{equation}
\begin{cases}
\dfrac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\boldsymbol{u}\right) = 0\ \ &\text{on}\ \Omega \times (0,T]\\
\dfrac{\partial \left(\rho\boldsymbol{u}\right)}{\partial t} + \nabla \cdot \left( \rho\boldsymbol{u}\otimes\boldsymbol{u}+p\boldsymbol{I}\right) - \nabla\cdot \boldsymbol{\tau} - \nabla \cdot \boldsymbol{\zeta}= \rho \boldsymbol{f}\ \ &\text{on}\ \Omega \times (0,T]\\
\boldsymbol{u} = 0\ \ &\text{on}\ \Gamma \times (0,T]\\
\nabla\rho \cdot \boldsymbol{n} = 0\ \ &\text{on}\ \Gamma \times (0,T]\\
\boldsymbol{u}\left(\boldsymbol{x},0\right) = \boldsymbol{u}_0\left(\boldsymbol{x}\right)\ \ &\text{on}\ \bar{\Omega}\\
\rho\left(\boldsymbol{x},0\right) = \rho_0\left(\boldsymbol{x}\right)\ \ &\text{on}\ \bar{\Omega}
\end{cases}
\end{equation}
where $\boldsymbol{u}$ is the velocity, and $\rho$ is the density
along with their initial values $\boldsymbol{u}_0$ and $\rho_0$,
respectively. The function $\boldsymbol{f}$ represents the body force
per unit mass. The viscous stress tensor is given as
\[\boldsymbol{\tau}=\bar{\mu}\left(\nabla \boldsymbol{u} +\nabla^T \boldsymbol{u}\right) +\bar{\lambda}\left(\nabla\cdot\boldsymbol{u}\right)\boldsymbol{I}\]
where $\bar{\mu}$ and $\bar{\lambda}$ are the viscosity coefficients
and $\boldsymbol{I}$ is the identity tensor. The Korteweg tensor is
defined as
\[\boldsymbol{\zeta}=\lambda\left(\rho\Delta\rho+\frac{1}{2}\|\nabla\rho\|^2\right)\boldsymbol{I}-\lambda\nabla\rho\otimes\nabla\rho\]
where $\lambda$ is the capillarity coefficient. The pressure is given by the van~der~Waals equation
\[p=Rb\frac{\rho\theta}{b-\rho}-a\rho^2\]
We solve the three-bubble test problem from the referenced work and
plot the density and the magnitude of the velocity for the 2D
solution in figure~\ref{f:nsk} and for the 3D solution in figure~\ref{f:nsk3d}.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{./figs/rho.png}\\
\subfloat[Density]{\includegraphics[width=0.8\textwidth]{./figs/nsk_pressure.png}}\\\vspace{0.5cm}
\includegraphics[width=0.3\textwidth]{./figs/vel.png}\\
\subfloat[Magnitude of the velocity]{\includegraphics[width=0.8\textwidth]{./figs/nsk_velocity.png}}
\caption{Time evolution (left to right) of the density and the magnitude of the velocity for the isothermal NSK equations on the three bubble test case problem.}\label{f:nsk}
\end{figure}
\begin{figure}
\centering
\subfloat[Near the beginning of the simulation]{\includegraphics[width=0.8\textwidth]{./figs/vel1.png}}\\
\subfloat[Just prior to the collapse of the smallest bubble]{\includegraphics[width=0.8\textwidth]{./figs/vel4.png}}\\
\caption{Three dimensional version of the three bubble problem for the Navier-Stokes-Korteweg equation. Isocontoured surfaces reflect density values $\rho=\{0.15,0.55\}$ revealing the location of the three bubbles. Velocity vectors are shown on each isocontour and are colored by the velocity magnitude.}\label{f:nsk3d}
\end{figure}
\subsubsection{Residual-based turbulence modeling}
We solve the incompressible Navier-Stokes equations stabilized with
the variational multiscale method as in~\cite{Bazilevs2007}. Again the
strong form is
\begin{equation}
\begin{cases}
\dfrac{\partial \boldsymbol{u}}{\partial t} + \nabla \cdot \left(\boldsymbol{u} \otimes \boldsymbol{u}\right) + \nabla p = \nu \Delta \boldsymbol{u} + \boldsymbol{f}\ \ &\text{on}\ \Omega \times (0,T]\\
\nabla\cdot\boldsymbol{u} = 0\ \ &\text{on}\ \Omega \times (0,T]\\
\boldsymbol{u}\left(\boldsymbol{x},0\right) = \boldsymbol{u}_0\left(\boldsymbol{x}\right)\ \ &\text{on}\ \bar{\Omega}\\
p\left(\boldsymbol{x},0\right) = p_0\left(\boldsymbol{x}\right)\ \ &\text{on}\ \bar{\Omega}
\end{cases}
\end{equation}
where $\boldsymbol{u}$ is the velocity, $p$ is the pressure,
$\boldsymbol{f}$ represents the body force per unit volume, and $\nu$
is the kinematic viscosity.
We solve a turbulent flow in a concentric pipe as presented in
\cite{Motlagh2013}, the results of which we plot in figure~\ref{f:vms}.
The domain is periodic in the streamwise direction. No-slip boundary
conditions are set at the inner and outer cylinder surfaces and the
initial condition is set using a laminar flow profile. The simulation
is forced using a pressure gradient in the form of a body force in the
streamwise direction.
\begin{figure}
\centering
\subfloat[Perspective view]{\includegraphics[width=0.575\textwidth]{./figs/nsvms.png}}
\subfloat[Front view]{\includegraphics[width=0.33\textwidth]{./figs/nsvms_tube.png}}
\caption{Turbulent flow through concentric cylinders as in~\cite{Motlagh2013}. The top-left quadrant is a pseudocolor plot of the streamwise velocity. The top-right quadrant shows isocontours of the vorticity magnitude for smaller values. The bottom quadrants show isocontours of the vorticity magnitude for larger values. Both sets of contours are colored by the streamwise velocity.}\label{f:vms}
\end{figure}
\subsection*{B-splines}
These basis functions are polynomial splines based on the Bernstein
basis. A spline space is defined by specifying a knot vector, a
one-dimensional set of non-decreasing locations in parametric
space. The knot vector is denoted by
\[\Xi=\{\xi_1,\xi_2,\hdots,\xi_{n+p+1}\},\] where $\xi_i \in
\mathbb{R}$ is the $i^{th}$ knot, $p$ is the polynomial order, and $n$
is the number of basis functions. The knot vector encodes the basis
functions, which can be evaluated using the Cox-DeBoor recursion
formula~\cite{Cox1971,DeBoor1972}, described in what follows. The
zeroth order functions are defined as
\[N_{i,0}(\xi)=\begin{cases}
1\ \ \text{if } \xi_i \le \xi < \xi_{i+1},\\
0\ \ \text{otherwise,}\end{cases}\]
while for $p>0$,
\[N_{i,p}(\xi)=\frac{\xi-\xi_i}{\xi_{i+p}-\xi_i}N_{i,p-1}(\xi)+
\frac{\xi_{i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}N_{i+1,p-1}(\xi).\]
While this is the standard way of expressing the basis functions,
there are more computationally efficient algorithms detailed
in \cite{Piegl1995}. By using a tensor product structure, the basis
can be extended to multi-dimensions. Here we write a three dimensional
extension of the one dimensional B-spline basis.
\begin{align*}
M_{abc}(\xi_1,\xi_2,\xi_3) &= N_a^{[\xi_1]}(\xi_1)N_b^{[\xi_2]}(\xi_2)N_c^{[\xi_3]}(\xi_3)
\end{align*}
or
\begin{align*}
M_{A}(\xi_1,\xi_2,\xi_3) &= N_a^{[\xi_1]}(\xi_1)N_b^{[\xi_2]}(\xi_2)N_c^{[\xi_3]}(\xi_3)
\end{align*}
where $A$ is a multi-index.
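For reference, the recursion admits a direct (if inefficient) C
implementation; the sketch below adopts the usual convention that $0/0=0$
and is meant for illustration only, the triangular-table algorithm
of~\cite{Piegl1995} being preferable in practice.
\begin{verbatim}
/* Sketch: evaluate N_{i,p}(xi) by direct Cox-DeBoor recursion.
 * U is the knot vector; repeated knots are handled by the 0/0 = 0
 * convention. Cost grows exponentially in p; use only as a check. */
double BasisFun(int i, int p, double xi, const double *U)
{
  double left = 0.0, right = 0.0, den;
  if (p == 0)
    return (U[i] <= xi && xi < U[i+1]) ? 1.0 : 0.0;
  den = U[i+p] - U[i];
  if (den > 0.0)
    left = (xi - U[i])/den * BasisFun(i, p-1, xi, U);
  den = U[i+p+1] - U[i+1];
  if (den > 0.0)
    right = (U[i+p+1] - xi)/den * BasisFun(i+1, p-1, xi, U);
  return left + right;
}
\end{verbatim}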
\subsection*{NURBS basis and parametric derivatives}
Given the B-spline basis functions, denoted $M_A$, we can define the
corresponding NURBS \cite{Piegl1995} basis as:
\begin{align}
R_A\brac{\boldsymbol\xi}=\dfrac{w_AM_A\brac{\boldsymbol\xi}}{w\brac{\boldsymbol\xi}}\label{eq:rational}
\end{align}
where $w_A$ are the projective weights and the weighting function
appearing in the denominator is
\begin{align}
w\brac{\boldsymbol\xi}=\sum_B w_BM_B\brac{\boldsymbol\xi}
\end{align}
Derivatives of $w$ with respect to the parametric coordinates are
simply
\begin{align}
w_{,\alpha}&= \dfrac{\partial w}{\partial\xi_\alpha} =\sum_B w_BM_{B,\alpha} \\
w_{,\alpha\beta}&=\dfrac{\partial^2 w}{\partial\xi_\alpha\partial\xi_\beta} =\sum_B w_BM_{B,\alpha\beta}\\
w_{,\alpha\beta\gamma}&=\dfrac{\partial^3 w}{\partial\xi_\alpha\partial\xi_\beta\partial\xi_{\gamma}} =\sum_B w_BM_{B,\alpha\beta\gamma}
\end{align}
Using the chain rule, the derivative of the rational $R_A$ defined
in~\eqref{eq:rational} with respect to $\xi_\alpha$ can be expressed
as:
\begin{align}
R_{A,\alpha}&=\dfrac{w_AM_{A,\alpha}}{w}-\dfrac{w_AM_A}{w^2}w_{,\alpha}
\end{align}
which using~\eqref{eq:rational} can be simplified to
\begin{align}
R_{A,\alpha}&=\dfrac{w_AM_{A,\alpha}-R_Aw_{,\alpha}}{w}
\label{eq:drational}
\end{align}
Similarly higher-order derivatives of~\eqref{eq:rational} can be
computed by applying the chain rule to~\eqref{eq:drational}. For
second order derivatives we obtain:
\begin{align}
R_{A,\alpha\beta}&=\dfrac{w_AM_{A,\alpha\beta}-\left(R_{A,\beta}w_{,\alpha}+R_Aw_{,\alpha\beta}\right)}{w}-\dfrac{\left(w_AM_{A,\alpha}-R_Aw_{,\alpha}\right)w_{,\beta}}{w^2}
\end{align}
which can be simplified to
\begin{align}
R_{A,\alpha\beta}&=\dfrac{w_AM_{A,\alpha\beta}-R_Aw_{,\alpha\beta}-R_{A,\beta}w_{,\alpha}-R_{A,\alpha}w_{,\beta}}{w}
\label{eq:ddrational}
\end{align}
using equation~\eqref{eq:drational}. Continuing with the same
procedure, we can apply the chain rule to~\eqref{eq:drational} to obtain:
\begin{align}
R_{A,\alpha\beta\gamma}&=\dfrac{w_AM_{A,\alpha\beta\gamma}-R_Aw_{,\alpha\beta\gamma}}{w}\nonumber\\
&-\dfrac{R_{A,\alpha}w_{,\beta\gamma}+R_{A,\beta}w_{,\alpha\gamma}+R_{A,\gamma}w_{,\alpha\beta}}{w}\nonumber\\
&-\dfrac{R_{A,\beta\gamma}w_{,\alpha}+R_{A,\alpha\gamma}w_{,\beta}+R_{A,\alpha\beta}w_{,\gamma}}{w}
\label{eq:dddrational}
\end{align}
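In practice, equations~\eqref{eq:rational} and~\eqref{eq:drational} are
evaluated from tabulated B-spline values. The following C sketch shows one
way to do so for the values and first parametric derivatives; the function
name and flat array layout are ours (not part of PetIGA), and at most
three parametric dimensions are assumed.
\begin{verbatim}
/* Sketch: form the rational basis R[a] and its first parametric
 * derivatives dR[a*dim+d] from the B-spline values M[a], their
 * gradients dM[a*dim+d], and the projective weights wgt[a]. */
void Rationalize(int n, int dim, const double *wgt,
                 const double *M, const double *dM,
                 double *R, double *dR)
{
  double W = 0.0, dW[3] = {0.0, 0.0, 0.0};
  int a, d;
  for (a = 0; a < n; a++) {        /* weighting function w and w_,d */
    W += wgt[a]*M[a];
    for (d = 0; d < dim; d++) dW[d] += wgt[a]*dM[a*dim+d];
  }
  for (a = 0; a < n; a++) {
    R[a] = wgt[a]*M[a]/W;                      /* Eq. (rational)  */
    for (d = 0; d < dim; d++)                  /* Eq. (drational) */
      dR[a*dim+d] = (wgt[a]*dM[a*dim+d] - R[a]*dW[d])/W;
  }
}
\end{verbatim}
Second and third derivatives follow by applying~\eqref{eq:ddrational}
and~\eqref{eq:dddrational} with the same loop structure.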
\subsection*{Higher-order spatial derivatives}
In~\eqref{eq:rational} we gave a definition of the rational basis
function in terms of its parametric coordinates. However, when using
the isoparametric concept \cite{Hughes2000}, we need to express
derivatives in terms of spatial coordinates, not their parametric
counterparts. Thus, in this section we derive formulas for
higher-order derivatives of basis functions in terms of their spatial
coordinates. The geometric mapping is computed using the isoparametric
concept, that is,
\begin{align}
\boldsymbol x(\boldsymbol\xi)=\sum_B \boldsymbol x_BR_B(\boldsymbol\xi)
\end{align}
where $R_B$ was defined in~\eqref{eq:rational} and $\boldsymbol x_B$ are the
control point locations in physical space. Thus, the parametric
derivative of this mapping is
\begin{align}
\dfrac{\partial \boldsymbol x\left(\boldsymbol \xi\right)}{\partial
\xi_\alpha} = \sum_B \boldsymbol x_B\dfrac{\partial
R_B(\boldsymbol\xi)}{\partial \xi_\alpha} = \sum_B \boldsymbol x_B R_{B,\alpha}
\end{align}
which for simplicity we write in index notation as $x_{i,\alpha}$.
To begin, let us state the following identity, which is key to
all further developments.
\begin{align}
\dfrac{\partial x_m} {\partial x_n}=\dfrac{\partial x_m}{\partial\xi_\gamma}\dfrac{\partial\xi_\gamma}{\partial x_n}
\end{align}
or alternatively in index notation
\begin{align}
\delta_{mn}=x_{m,\gamma}\ \xi_{\gamma,n}\label{eq:identity}
\end{align}
where $\delta_{mn}$ is Kronecker's delta which is one when $m=n$ and
zero otherwise. Then, we can write
\begin{align}
\xi_{\gamma,n}=\dfrac{\partial\xi_\gamma}{\partial x_n}=\delta_{mn}\brac{\dfrac{\partial x_m}{\partial\xi_\gamma}}^{-1}=\brac{\dfrac{\partial x_n}{\partial\xi_\gamma}}^{-1}\label{eq:derividentity}
\end{align}
Identity~\eqref{eq:identity} is the one conventionally used in
isoparametric finite elements to compute the spatial derivative given
the geometric mapping in parametric coordinates.
To differentiate the basis functions \eqref{eq:rational} with respect
to spatial coordinates, we apply the chain rule, that is,
\begin{align}
\dfrac{\partial R_A}{\partial x_i}&=\dfrac{\partial R_A}{\partial\xi_\alpha}\dfrac{\partial\xi_\alpha}{\partial x_i}
\end{align}
where the parametric derivative is defined in~\eqref{eq:drational} and
the gradient of the inverse mapping
in~\eqref{eq:derividentity}. Higher-order spatial derivatives are
obtained by further application of the chain rule. Here we write the
first, second, and third spatial derivatives in index notation.
\begin{align}
R_{A,i}&=R_{A,\alpha}\ {\xi_{\alpha,i}}\label{eq:dspatial}\\
R_{A,ij}&=R_{A,\alpha\beta}\ {\xi_{\alpha,i}}\ {\xi_{\beta,j}}+R_{A,\alpha}\ {\xi_{\alpha,ij}}\label{eq:ddspatial}\\
R_{A,ijk}&=R_{A,\alpha\beta\gamma}\ \xi_{\alpha,i}\ \xi_{\beta,j}\ \xi_{\gamma,k}\nonumber\\
&+R_{A,\alpha\beta}\brac{
\xi_{\alpha,i}\ \xi_{\beta,jk} + \xi_{\beta,j}\ \xi_{\alpha,ik} + \xi_{\beta,k}\ \xi_{\alpha,ij}}\nonumber\\
& +R_{A,\alpha}\ {\xi_{\alpha,ijk}}\label{eq:dddspatial}
\end{align}
In order to compute~\eqref{eq:ddspatial} and~\eqref{eq:dddspatial}, we
need higher-order derivatives of the inverse mapping. The first
derivative may be written by differentiating the identity
\eqref{eq:identity} with respect to spatial coordinates.
\begin{align*}
\delta_{mn,\ell}&=\left(x_{m,\epsilon}\ \xi_{\epsilon,n}\right)_{,\ell}\nonumber\\
0 &=x_{m,\epsilon\mu}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}+x_{m,\epsilon}\ \xi_{\epsilon,n\ell}\nonumber\\
x_{m,\epsilon}\ \xi_{\epsilon,n\ell} &=-x_{m,\epsilon\mu}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}\nonumber\\
\xi_{\nu,m}\ x_{m,\epsilon}\ \xi_{\epsilon,n\ell} &=-x_{m,\epsilon\mu}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}\ \xi_{\nu,m}\nonumber\\
\delta_{\nu\epsilon}\ \xi_{\epsilon,n\ell} &=-x_{m,\epsilon\mu}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}\ \xi_{\nu,m}\nonumber\\
\xi_{\nu,n\ell} &=-x_{m,\epsilon\mu}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}\ \xi_{\nu,m}
\end{align*}
The second derivative of the inverse mapping may be written by
differentiating the previous expression.
\begin{align*}
\xi_{\nu,n\ell o} =& \left(-x_{m,\epsilon\mu}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}\ \xi_{\nu,m}\right)_{,o}\nonumber\\
=&-x_{m,\epsilon\mu\omega}\ \xi_{\epsilon,n}\ \xi_{\mu,\ell}\ \xi_{\nu,m}\ \xi_{\omega,o}\nonumber\\
&-x_{m,\epsilon\mu}\left(\xi_{\epsilon,no}\ \xi_{\mu,\ell}\ \xi_{\nu,m}+
\xi_{\epsilon,n}\ \xi_{\mu,\ell o}\ \xi_{\nu,m}+
\xi_{\epsilon,n}\ \xi_{\mu,\ell}\ \xi_{\nu,mo}\right)
\end{align*}
Subsequent derivatives of the inverse map can be taken by applying
these ideas recursively. These higher-order derivatives of the
geometrical mapping are not standard in the literature and are
required if one is to solve a higher-order PDE on a mapped geometry.
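As a sketch, the first of these operations, inverting the geometric
Jacobian and applying~\eqref{eq:dspatial}, may be written in C as follows;
the names and array layout are ours, and the higher-order
terms~\eqref{eq:ddspatial} and~\eqref{eq:dddspatial} follow the same
pattern once the derivatives of the inverse map derived above are
assembled.
\begin{verbatim}
/* Sketch: X[i][a] = dx_i/dxi_a is the geometric Jacobian. Invert it
 * via the adjugate to get Xi[a][i] = dxi_a/dx_i, then push the n
 * parametric gradients dR[A*3+a] to spatial gradients dRx[A*3+i]. */
void SpatialGradients(int n, const double X[3][3],
                      const double *dR, double *dRx)
{
  double det, Xi[3][3];
  int i, a, A;
  det = X[0][0]*(X[1][1]*X[2][2] - X[1][2]*X[2][1])
      - X[0][1]*(X[1][0]*X[2][2] - X[1][2]*X[2][0])
      + X[0][2]*(X[1][0]*X[2][1] - X[1][1]*X[2][0]);
  for (a = 0; a < 3; a++)
    for (i = 0; i < 3; i++) {
      int i1 = (i+1)%3, i2 = (i+2)%3, a1 = (a+1)%3, a2 = (a+2)%3;
      Xi[a][i] = (X[i1][a1]*X[i2][a2] - X[i1][a2]*X[i2][a1])/det;
    }
  for (A = 0; A < n; A++)
    for (i = 0; i < 3; i++) {
      dRx[A*3+i] = 0.0;
      for (a = 0; a < 3; a++) dRx[A*3+i] += dR[A*3+a]*Xi[a][i];
    }
}
\end{verbatim}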
\subsection{Periodicity}
Due to the prevalent use of open knot vectors by the IGA community,
applications which require periodic boundary conditions typically
enforce them by building a system of constraints on the
coefficients. For example, in \cite{Liu2013}, the authors detail
constraint equations which enforce $C^1$ periodicity.
However, we prefer to build periodicity into the function space for
its simplicity and generality. We do this by \emph{unclamping} the
knot vector as in the parlance of~\cite{Piegl1995}. Given an open (or
\emph{clamped}) knot vector, $U=\{u_0,\ldots,u_m\}$, which encodes a
set of B-spline basis functions, $\{N_{i,p}\}_{i=0\dots n}$, of
polynomial degree $p$, with $m=n+p+1$, algorithm~\ref{a:periodic}
unclamps the knot vector on the left and right ends for a desired
order of continuity $C^k$, for $0 \leq k \le p-1$. In
figure~\ref{f:period} we present a $C^2$ cubic B-spline space with
varying orders of continuity across the periodic boundary. Each of
these knot vectors was obtained by unclamping the open knot vector
$U=(0,0,0,0,0.2,0.4,0.6,0.8,1,1,1,1)$ using
algorithm~\ref{a:periodic}. Each unique basis is labeled with its
global number and colored distinctly such that basis functions which
pass the periodic interface may be identified.
Algorithm~\ref{a:periodic} is sufficient when utilizing the basis in
the parametric domain. In cases where the parametric domain is mapped,
the original control points of the mapping which correspond to the
open knot vector need to be unclamped. In~\cite{Piegl1995}, algorithm
A12.1 performs this operation but is limited to $C^{p-1}$
unclamping. We present a generalization in
algorithm~\ref{a:periodic-curve} for $C^k$ unclamping where {\tt Pw[]}
is the array of control points in homogeneous coordinates. In
figure~\ref{f:unclamp}, we present the effect of our algorithm applied
to a quarter circular arc.
\begin{algorithm}
\caption{Pseudocode for unclamping knot vectors}\label{a:periodic}%
\begin{verbatim}
UnclampKnots(n,p,k,U)
{ /* Unclamp a knot vector */
/* Input: n,p,k,U */
/* Output: U modified in-place*/
m = n + p + 1;
for (i=0; i<=k; i++)
{
U[k-i] = U[p] - U[n+1] + U[n-i];
U[m-k+i] = U[n+1] - U[p] + U[p+i+1];
}
}
\end{verbatim}
\end{algorithm}
\begin{figure}
\centering
\subfloat[$C^0$ periodicity, $U=(-0.2,0,0,0,0.2,0.4,0.6,0.8,1,1,1,1.2)$]{\includegraphics[width=0.85\textwidth]{./figs/period0.pdf}}\\
\subfloat[$C^1$ periodicity, $U=(-0.4,-0.2,0,0,0.2,0.4,0.6,0.8,1,1,1.2,1.4)$]{\includegraphics[width=0.85\textwidth]{./figs/period1.pdf}}\\
\subfloat[$C^2$ periodicity, $U=(-0.6,-0.4,-0.2,0,0.2,0.4,0.6,0.8,1,1.2,1.4,1.6)$]{\includegraphics[width=0.85\textwidth]{./figs/period2.pdf}}
\caption{Three cases of periodicity for a $C^2$ cubic B-spline space. In each case, the open knot vector was unclamed using algorithm~\ref{a:periodic}. Unique basis functions are labeled by their global numbering and colored distinctly.}\label{f:period}
\end{figure}
\begin{algorithm}
\caption{Pseudocode for unclamping curves}\label{a:periodic-curve}%
\begin{verbatim}
UnclampCurve(n,p,k,U,Pw)
{ /* Unclamp a curve */
/* Input: n,p,k,U,Pw */
/* Output: U,Pw modified in-place*/
m = n + p + 1;
for (i=0; i<=k; i++) /* Unclamp at left end */
{
U[k-i] = U[p] - U[n+1] + U[n-i];
}
for (i=p-k-1; i<=p-2; i++)
for (j=i; j>=0; j--)
{
alpha = (U[p]-U[p+j-i-1])/(U[p+j+1]-U[p+j-i-1]);
Pw[j] = (Pw[j]-alpha*Pw[j+1])/(1-alpha);
}
for (i=0; i<=k; i++) /* Unclamp at right end */
{
U[m-k+i] = U[n+1] - U[p] + U[p+i+1];
}
for (i=p-k-1; i<=p-2; i++)
for (j=i; j>=0; j--)
{
alpha = (U[n+1]-U[n-j])/(U[n-j+i+2]-U[n-j]);
Pw[n-j] = (Pw[n-j]-(1-alpha)*Pw[n-j-1])/alpha;
}
\end{verbatim}
\end{algorithm}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{./figs/unclamp.pdf}
\caption{A quarter circular arc of $p=4$ repeatedly unclamped by algorithm~\ref{a:periodic-curve}.}\label{f:unclamp}
\end{figure}
\subsection{Adjacency Graph}
In any numerical method employing basis functions with local support,
it is important to compute the adjacency graph of interacting degrees
of freedom. This graph contains information required for the proper
preallocation of sparse matrices implemented in compressed sparse row
(CSR) or column (CSC) formats. This preallocation is crucial for
efficient matrix assembly. Furthermore, prior knowledge of the sparse
matrix nonzero pattern enables the use of specialized differentiation
techniques, such as the approximation of Jacobian matrices by colored
finite differences.
Given a knot vector $U$ encoding a set of B-spline basis functions of
degree~$p$, algorithm~\ref{a:stencil} computes the left-most ($\ell$)
and right-most ($r$) indices of the basis functions interacting with
the $i$-th basis function. That is, the support of the basis function
$N_{i,p}$ has non-empty intersection with the support of $N_{j,p}$,
for $\ell \leq j \leq r$. In one dimension, all basis functions with
indices in the set $\{\ell,\ell+1,\ldots,r-1,r\}$ are adjacent to the
$i$-th basis function. For dimensions higher than one, the adjacency
graph is computed by tensor-product of the index sets
$\{\ell_d,\ell_d+1,\ldots,r_d-1,r_d\}$ in each dimension $d$.
In figure~\ref{f:stencil}(a) we present a cubic basis which varies in
inter-element continuity. Each basis is labeled by a global number and
colored distinctly. In figure~\ref{f:stencil}(b) we show the
corresponding nonzero pattern for the sparse matrix obtained by
applying algorithm~\ref{a:stencil} to each basis.
\begin{algorithm}
\caption{Pseudocode for computing the adjacency graph}\label{a:stencil}%
\begin{verbatim}
BasisStencil(i,p,U,l,r)
{ /* Compute the indices of the leftmost */
/* and rightmost overlapping basis */
/* Input: i,p,U */
/* Output: l,r */
k = i;
while (U[k] == U[k+1]) k++;
l = k - p;
k = i + p + 1;
while (U[k] == U[k-1]) k--;
r = k - 1;
}
\end{verbatim}
\end{algorithm}
\begin{figure}
\centering
\subfloat[A representative cubic B-spline basis consisting of four elements where the interelement continuity progressively decreases, $U=(0,0,0,0,2,4,4,6,6,6,8,8,8,8)$]{\includegraphics[width=0.85\textwidth]{./figs/graph_basis.pdf}}\\
\subfloat[Corresponding nonzero structure where the function space represented in (a) is used as test and trial functions. Nonzero entries are indicated by a small circle.]
{\makebox[0.85\textwidth]{ $\begin{bmatrix}
\circ & \circ & \circ & \circ & & & & & & \\
\circ & \circ & \circ & \circ & \circ & & & & & \\
\circ & \circ & \circ & \circ & \circ & & & & & \\
\circ & \circ & \circ & \circ & \circ & \circ & \circ & & & \\
& \circ & \circ & \circ & \circ & \circ & \circ & & & \\
& & & \circ & \circ & \circ & \circ & & & \\
& & & \circ & \circ & \circ & \circ & \circ & \circ & \circ \\
& & & & & & \circ & \circ & \circ & \circ \\
& & & & & & \circ & \circ & \circ & \circ \\
& & & & & & \circ & \circ & \circ & \circ
\end{bmatrix}$}}
\caption{Generation of the nonzero structure of matrices using algorithm~\ref{a:stencil}}\label{f:stencil}
\end{figure}
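As a hypothetical usage example of algorithm~\ref{a:stencil}, the exact
per-row nonzero counts needed to preallocate a one-dimensional CSR matrix
can be obtained as follows (the wrapper name is ours, and the output
arguments of the pseudocode are taken as pointers):
\begin{verbatim}
/* Sketch: nnz[i] receives the number of basis functions whose
 * support overlaps that of N_{i,p}, i.e. the nonzeros in row i.
 * Basis indices run over 0..n as in the pseudocode above. */
void CountRowNonzeros(int n, int p, const double *U, int *nnz)
{
  int i, l, r;
  for (i = 0; i <= n; i++) {
    BasisStencil(i, p, U, &l, &r);
    nnz[i] = r - l + 1;
  }
}
\end{verbatim}
In higher dimensions, the per-row count is the product of the
one-dimensional counts, as follows from the tensor-product structure of
the adjacency graph.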
\section{Introduction}\label{s:intro}\input{intro.tex}
\section{Isogeometric Analysis}\label{s:iga}\input{iga.tex}
\section{PetIGA}\label{s:impl}\input{impl.tex}
\section{Applications}\label{s:app}\input{app.tex}
\section{Performance}\label{s:perf}\input{perf.tex}
\clearpage
\section{Conclusion}\input{conc.tex}
\section*{Acknowledgements}
We would like to acknowledge the open source software packages that
made this work possible: {\tt PETSc}~\cite{petsc1,petsc2}, {\tt
NumPy}~\cite{numpy}, {\tt matplotlib}~\cite{matplotlib}. We would
like to thank Lina Mar\'ia Bernal Martinez, Gabriel Andres Espinosa
Barrios, Federico Fuentes Caycedo, Juan Camilo Mahecha Zambrano for
their work on the hyper-elasticity implementation as a final project
to the {\em Non-linear Finite Element} class taught by
VMC and NC for the Mechanical Engineering Department at Universidad de
Los Andes in Bogot\'{a}, Colombia in July 2012. We would like to thank
Adel Sarmiento for the visualization work on figures~\ref{f:nsk3d} and
\ref{f:vms}. The work of Lisandro Dalcin was partially supported by
NumPor.
\input{main.bbl}
\end{document}
\section{Introduction}
The Sudbury Neutrino Observatory (SNO) \cite{SNO1} is a heavy water
Cherenkov detector located at a depth of 2010 m in INCO's Creighton Mine near
Sudbury, Canada. The detector uses 1000 tonnes of ultrapure heavy water as a target
material contained in a 12 m diameter acrylic sphere to detect solar
neutrinos. An array of $\sim$9500 photomultiplier tubes (PMTs), mounted on a
17.8 m diameter stainless-steel geodesic support structure which is immersed
in 7000 tonnes of shielding light water, is used to
observe Cherenkov photons produced in the D$_{2}$O region.
In recent studies,
the SNO collaboration has provided strong evidence that neutrinos change flavor as they
travel from the Sun to Earth \cite{SNO2,SNO3,SNO4,SNO5,SNO6}, independently of solar
model flux predictions.
Wavelength shifters are generally fluorescent organic chemicals containing
polyaromatic hydrocarbons or heterocycles in their molecules which absorb
photons and re-emit them at longer wavelengths. Previous studies have shown that
adding WLS into Cherenkov detectors increases the amount of detected light by a
factor of 1.2 using amino G \cite{Badino1} and by a factor of 3 using umbelliferone \cite{Willis1}.
Fig.~1 illustrates why the use of a wavelength shifter is attractive for increasing the light
yield in the SNO detector. The Cherenkov light production rises strongly in the violet end of
the spectrum, but this light is lost due to attenuation in 5.5~cm of acrylic at normal incidence.
The addition of WLS in the D$_{2}$O can boost the Cherenkov signal
without changing the background photons from outside the D$_{2}$O region. This
should significantly increase the detector efficiency and improve the energy resolution of SNO,
which would allow for a better sensitivity of the detector to spectral distortions caused by
neutrino interactions with matter in the core
of the Sun. In this paper, we studied
the viability of introducing WLS into the SNO detector. Initially, the chemical
and optical properties of several WLS candidates were tested. Subsequently, a detailed simulation of the
SNO experiment was performed and a WLS cosmic ray telescope was designed to check the response
to Cherenkov radiation.
Even though it was subsequently decided that no WLS would be used for a future phase of the SNO
experiment, we believe the new water-based
wavelength shifters found in this study may be useful for some
future Cherenkov detectors and other applications.
\section{Properties of wavelength shifters}
To meet the many requirements for the safe and reliable operation of the SNO detector, a desirable
WLS should have the following characteristics:
\begin{enumerate}
\item soluble and stable in water;
\item removable from heavy water, as the SNO D$_{2}$O has been loaned by Atomic Energy of
Canada Limited (AECL) and it should be returned additive-free at the end of the
SNO experimental program;
\item no adverse interaction with the materials used in the detector and water
circulation system, including acrylic, polypropylene, and the MnOx beads
\cite{MnOx1} and HTiO absorbent \cite{HTiO1} which are used for the radium assays
of the heavy water;
\item high absorbency below 350 nm and high re-emission probability in the range of 350-500 nm,
with a high quantum efficiency in the neutral pH range and no significant overlap
between the excitation and emission regions;
\item to achieve the best light gain, an emission spectrum that matches, as closely as possible,
the optimal PMT sensitive region (see Fig.~1);
\item a short fluorescence decay time (a few nanoseconds), as a longer re-emission time
would potentially introduce trigger ambiguity and could also increase the uncertainty in the
reconstructed position of the event.
\end{enumerate}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=80mm]{fig1.eps}
\caption{Transmission of the SNO acrylic and PMT quantum efficiency as function
of wavelength superimposed on the Cherenkov spectrum (in arbitrary units).}
\label{fig:snorsp}
\end{center}
\end{figure}
Five water-soluble WLS compounds were initially tested according to the above criteria.
They included four coumarin derivatives (7-hydroxy-4-methylcoumarin,
7-hydroxy-4-methylcoumarin-3-acetic acid, 7-hydroxycoumarin-3-carboxylic acid,
and Alexa Fluor 350, all commercially available at Molecular Probes) and
one carbostyril (carbostyril 124, purchased from Sigma-Alderich Co.). The results are
described as follows.
\subsection{Measurements of optical properties}
The results of the optical measurements for all the wavelength shifters are
summarized in Table 1. Alexa Fluor 350 (AF350) and carbostyril 124 (CS124)
are pH-insensitive in the neutral pH range, whereas the other three coumarin
derivatives show an obvious pH dependence
and their absorption shifts to longer wavelengths as
the pH values increase from 5 to 11. The pH-sensitive coumarins would not be
fully deprotonated, and thus would not be very efficient in converting ultraviolet
photons, until their pH values rise to above 9. The pH value of
the SNO heavy water is 7. Consequently, a large amount of buffer needs be added
into the detector to alter the water pH if a pH-sensitive WLS is chosen. This is not practical as
it would introduce unnecessary materials and greatly
increase the risk of contamination.
\begin{table*}
\caption{Summary of optical properties of the wavelength shifters. Typical uncertainties on the
quantum efficiency and fluorescence lifetime measurements are less than 10\% and 5\%, respectively.}
\protect\label{tbl:1}
\newcommand{\hphantom{$-$}}{\hphantom{$-$}}
\begin{tabular}{lccccccccccccccccc} \hline
& \multicolumn{5}{c}{7-hydroxy-4-} & Alexa
& \multicolumn{5}{c}{7-hydroxy-4-methyl} & carbostyril
& \multicolumn{5}{c}{7-hydroxycoumarin-} \\
& \multicolumn{5}{c}{methylcoumarin} & Fluor 350
& \multicolumn{5}{c}{coumarin-3-acetic acid} & 124
& \multicolumn{5}{c}{3-carboxylic acid} \\ \hline
pH &
5 & 8 & 9 & 10 & 11 & 5-9 &
5 & 8 & 9 & 10 & 11 & 5-9 &
5 & 8 & 9 & 10 & 11 \\ \cline{2-6} \cline{8-12} \cline{14-18}
Maximum Absorption & & & & & & & & & & & & & & & & & \\
\hphantom{$-$} $\lambda_{Abs}$ (nm) &
320 & 336 & 362 & 360 & 360 & 340 &
324 & 326 & 360 & 360 & 360 & 340 &
336 & 386 & 386 & 386 & 386 \\
\hphantom{$-$} $\epsilon$ (10$^{4}$ L/mol/cm) &
1.4 & 1.1 & 1.6 & 1.8 & 1.8 & 2.0 &
1.5 & 1.3 & 1.7 & 1.8 & 1.8 & 1.6 &
1.0 & 1.1 & 3.2 & 3.3 & 3.3 \\
Maximum Emission & & & & & & & & & & & & & & & & & \\
\hphantom{$-$} $\lambda_{Em}$ (nm) &
450 & 450 & 447 & 449 & 450 & 443 &
458 & 455 & 455 & 455 & 455 & 417 &
446 & 446 & 446 & 446 & 446 \\
Quantum Efficiency &
78\% & 78\% & 87\% & 86\% & 87\% & 92\% &
80\% & 78\% & 82\% & 82\% & 84\% & 97\% &
52\% & 70\% & 89\% & 85\% & 79\% \\
Fluorescence lifetime (ns) &
7.0 & - & - & 6.7 & - & 5.6 &
6.5 & - & - & 6.4 & - & 6.2 &
4.6 & - & - & 4.8 & - \\
\hline \hline
\end{tabular}\\[2pt]
$\lambda_{Abs}$: maximum absorption wavelength; \\
$\epsilon$: molar absorptivity at maximum absorption wavelength; \\
$\lambda_{Em}$: maximum emission wavelength. \\
\end{table*}
\subsubsection{Stability tests of WLS solution}
The stability tests on the WLS solutions were performed by examining the changes
of UV/VIS absorption and fluorescence excitation/emission
over several months of storage in glass vials in the dark.
Two pH-insensitive 1.0 ppm WLS (CS124 and AF350) solutions were tested,
and no significant optical change was observed at neutral
pH condition over a period of more than 6 months.
The solutions of three pH-sensitive coumarins were also tested in UPW at a pH of 5
and in phosphate buffers at a pH of 9. Obvious decreases
were found in the emission intensities of
7-hydroxy-4-methylcoumarin-3-acetic acid both at pH of 5 and 9, and in UV/VIS
absorption intensities of 7-hydroxycoumarin-3-carboxylic acid at a pH of 9 within
two months, indicating a lifetime much shorter than what is required
for the SNO experiment.
Therefore, the pH-sensitive coumarins are
not good candidates for the SNO experiment. In the following sections,
discussion will be mainly focused on the two pH-insensitive compounds AF350 and CS124.
\subsubsection{Absorption and fluorescence emission spectra}
The absorption and fluorescence emission spectra were measured using a
Perkin Elmer Lambda 800 ultraviolet and visible (UV/VIS) spectrometer and
a Cary Eclipse fluorescence spectrophotometer made by Varian. Samples were tested
in 1-cm quartz
cuvettes using ultrapure water (UPW) as a blank. As seen in Fig.~2,
both AF350 and CS124 show strong light absorption below 350 nm and
re-emission in the region of 350-500 nm. At a concentration of 1 $\mu$g/g (ppm),
the attenuation lengths for AF350 and CS124 are respectively 9.3 and 4.8~cm
at 340 nm. However, CS124 seems to be a better choice for
the SNO experiment as its absorption and re-emission spectra closely match the
response region of the detector. The presence of absorption
at a wavelength higher than 350 nm for AF350 would lead to unnecessary conversion of
Cherenkov photons in the detectable wavelength range.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=80mm]{fig2.eps}
\caption{The UV/VIS absorption (left) and fluorescence emission spectra (right) for carbostyril
124 and Alexa Fluor 350.}
\label{fig:spectra}
\end{center}
\end{figure}
\subsubsection{Quantum efficiency measurements}
The quantum efficiency measurements were done by following the Parker-Reas
method \cite{Parker1,Fery1}, which compares the spectral yields of the
WLS solutions to those of standard fluorescence solutions, in our case
quinine sulfate in 0.5 mol/L H$_2$SO$_4$ (quantum efficiency 54\% \cite{Quinine1})
and harmane in 0.1 mol/L H$_2$SO$_4$ (quantum efficiency 83\% \cite{Harmane1}). The quantum
efficiencies for both AF350 and CS124 are higher than 90\% (see Table~1) with an estimated
uncertainty of less than 10\%.
\subsubsection{Fluorescence lifetime measurements}
\label{sec:time}
The fluorescence lifetime measurements were carried out using a TimeMaster Laser-based
Fluorescence Lifetime Spectrometer made by Photon Technology International.
As shown in Table~1, the lifetimes are 5.6 ns for AF350 and
6.2 ns for CS124 with an uncertainty of 5\%. Both are sufficiently
short to meet the reconstruction requirements of the SNO experiment.
\subsection{Chemical compatibility and removal tests}
The tests of chemical compatibility of the WLS with acrylic were done by immersing
4$\times$1$\times$0.5~cm$^3$ acrylic pieces in 25 ml of the WLS solutions and
examining their optical changes on an approximately biweekly basis over
three months. No obvious variations were seen for any of the five WLS
solutions, suggesting that the loss of WLS on the wall of the acrylic
vessel or the interaction of WLS with acrylic should not be an issue in
the selection of a candidate.
The impact of the WLS on the two SNO radium assay techniques \cite{MnOx1} \cite{HTiO1}
was also studied. Small-scale tests were carried out by passing 1 ppm of WLS
solutions through a small plastic column filled with MnOx beads or a small
syringe filter loaded with HTiO absorbent, and the optical properties of
the feed and permeate were measured. All the three pH-sensitive coumarins
show an obvious accumulation and interaction with the MnOx beads and HTiO
absorbent, whereas no change was observed for AF350 or CS124. This indicates
that addition of AF350 or CS124 would not affect the SNO Ra assay techniques.
The Biobeads and activated charcoal used in Milli-Q water systems were tested to extract the
WLS compounds from water. Small scale experiments showed that a reduction factor of about 1000 could
be achievable for the removal of 1.0 ppm WLS from 1000 tonnes of heavy water with activated
charcoal at an equivalent flow rate of 10 l/(min m$^2$) in a single pass. Therefore, it would be
feasible to remove the WLS from the SNO detector after completion of the experiment.
\section{Monte Carlo simulation for SNO}
A detailed Monte Carlo simulation was performed using
the SNO software (SNOMAN) to account for the
full light propagation and attenuation, together with
the complete geometrical acceptance and detector response.
EGS4 \cite{EGS4} is used in SNOMAN to provide accurate
propagation of electromagnetic showers.
The light and heavy water attenuations were set and extrapolated in
the range of 180-620 nm according to the measurements of Ref. \cite{boivin}.
The WLS absorption spectrum, quantum efficiency, and wavelength of the
re-emission peak were parameterized in order to simulate
the interaction between photons and WLS molecules. For each
simulation, SNOMAN interpolated the absorption spectrum
of Fig.~2 with a fifteen-point piecewise-linear function.
It also assumed a Gaussian distribution for the re-emission
wavelength spectrum using the experimentally measured mean
and width. According to the measurements of Section
\ref{sec:time}, the fluorescence lifetimes were assumed
to be 5.6~ns and 6.2~ns for AF350 and CS124, respectively,
and the WLS light was emitted isotropically. Furthermore,
SNOMAN used the WLS concentration to scale the absorption
coefficient accordingly.
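As an illustration of this parameterization (and not of the actual SNOMAN
implementation), a single photon-WLS interaction step could be sketched in
C as follows; all names and numerical inputs are placeholders.
\begin{verbatim}
#include <stdlib.h>
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Uniform deviate in (0,1), avoiding the endpoints. */
static double urand(void)
{ return (rand() + 1.0)/((double)RAND_MAX + 2.0); }

/* One photon-WLS step: the absorption length scales inversely with
 * the concentration, re-emission occurs with probability Q, the
 * re-emitted wavelength is Gaussian (Box-Muller), and the delay is
 * exponential with the measured fluorescence lifetime tau_ns.
 * Returns 1 if the photon is re-emitted, 0 if it is lost. */
int WLSStep(double len_1ppm_cm, double conc_ppm, double Q,
            double mean_nm, double sigma_nm, double tau_ns,
            double *step_cm, double *wl_nm, double *dt_ns)
{
  double u1, u2;
  *step_cm = -(len_1ppm_cm/conc_ppm)*log(urand());
  if (urand() > Q) return 0;
  u1 = urand(); u2 = urand();
  *wl_nm = mean_nm + sigma_nm*sqrt(-2.0*log(u1))*cos(2.0*M_PI*u2);
  *dt_ns = -tau_ns*log(urand());
  return 1;
}
\end{verbatim}
The re-emission direction, sampled isotropically, is omitted for brevity.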
The simulations were performed in order to determine the
gain of light as a function of the concentration of WLS.
The concentration values were chosen in the range
0.01 to 10~ppm based on the mean free path of ultraviolet
photons in the detector. A high concentration maximizes
the number of photons converted and reduces the uncertainty
in the event position by shortening their mean free path.
On the other hand, it is important to minimize the quantity
of WLS for obvious financial reasons, and an over-saturation of
WLS could result in self-absorption cycles
when there is an overlap between the excitation and emission
regions.
\begin{table}
\begin{center}
\caption{Light gain as a function of the WLS concentration for both Alexa Fluor 350 and
carbostyril 124. The results come from a Monte Carlo simulation of the SNO detector. The
last column shows the estimated mean free path of Cherenkov light in a WLS solution for
both candidates.}
\protect\label{tbl:2}
\begin{tabular}{cccc} \\ \hline
Concentration & Gain & Gain & Mean Free Path \\
(ppm) & AF350 & CS124 & approx. (cm) \\ \hline
0.01 & 1.4 & 1.6 & 1000 \\
0.05 & 2.0 & 2.4 & 200 \\
0.10 & 2.3 & 2.6 & 100 \\
0.50 & 2.7 & 3.0 & 20 \\
1.00 & 2.8 & 3.0 & 10 \\
5.00 & 2.9 & 3.0 & 2 \\
10.00 & 2.9 & 3.1 & 1 \\
\hline \hline
\end{tabular}\\[2pt]
\end{center}
\end{table}
The simulation procedure was straightforward: 10,000 electrons
with an energy of 10~MeV were generated isotropically
inside the heavy water. The gain is defined as
the ratio of the mean of the number of PMT hits with
and without WLS. The gain and the mean free path of Cherenkov light in a WLS solution
are shown in Table~2 for different WLS concentrations. At
concentrations above 0.50~ppm, there was a saturation
plateau in the number of detected photons. Therefore,
there is no significant advantage in choosing a higher
concentration in terms of the number of PMT hits.
However, in order to keep the spatial resolution at a
reasonable level and bring the absorption mean free path
below 10~cm, the concentration should be
higher or equal to 1~ppm. At a few ppm, the light gain
from the Monte Carlo simulation is estimated to be 2.9
and 3.0 for the AF350 and CS124, respectively.
Finally, the energy thresholds with and without WLS were
identified by simulating SNO data in the pure D$_2$O configuration.
The simulation incorporated both the solar neutrino signal events and all
internal and external low energy backgrounds.
In principle, adding a wavelength shifter could allow processes whereby
particles below the Cherenkov threshold produce light through direct
excitation of the wavelength shifter. In a good scintillator, the
production of light is typically 30,000 photons per MeV of incident
energy. At a concentration of a few ppm, we concluded that this is not
likely to produce more than a few photons and neglected this effect,
even for the naturally occurring alpha particles of several MeV.
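The scale of this conclusion can be checked with a rough estimate. Assuming, for this estimate only, that the direct-excitation light yield scales linearly with the fluor concentration relative to a fully loaded scintillator, a 5~MeV alpha particle in a 5~ppm solution would yield on the order of
\[
N_{\gamma} \sim \left(3\times10^{4}~\tfrac{\text{photons}}{\text{MeV}}\right)\times\left(5~\text{MeV}\right)\times\left(5\times10^{-6}\right)\approx 1~\text{photon},
\]
which is negligible compared with a typical Cherenkov signal.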
The result showed a clear
shift in the mean number of PMT hits recorded for a typical Cherenkov event.
Matching the statistical significance of the signal over the background
achieved at a 5~MeV analysis threshold without WLS~\cite{SNO3,SNO4},
the addition of 1~ppm of WLS would allow for a 3.7~MeV low-energy
threshold.
\section{WLS Cosmic Ray Telescope}
In order to test the WLS compounds in an actual experiment, an apparatus was built that
uses cosmic rays as the source of Cherenkov light.
The telescope allows a direct comparison between the
Cherenkov light produced by cosmic rays and the WLS light.
\subsection{Design of the apparatus}
Two cylindrical barrels made from polyvinylchloride (PVC)
are placed one above the other allowing a vertical cosmic
ray to go through both, as shown in Fig.~\ref{fig:telescope}.
Three scintillator panels placed on the top, the center
and the bottom of the apparatus are used in coincidence to
trigger a cosmic ray event. The top and bottom scintillator
panels have larger diameters (30 cm) to maximize the
trigger rate, while the middle one is
smaller (20 cm) to ensure fully
contained events. Two layers of lead bricks with a total
thickness of 20 cm are placed above the bottom scintillator
panel to eliminate the soft component of the cosmic ray
shower in the data. The two barrels are identical in
size, with a height of 42.5~cm and a diameter of 22.9~cm.
They both contain a hermetic sample cell that can either be filled
with pure water or a WLS solution. These cells are
separated from a Hamamatsu R1408 PMT by an ultraviolet
transparent (UVT) acrylic window, using the same type of PMT and
acrylic as in the SNO detector. The sample cells are
cylindrical with a diameter of 20.4~cm and a height of 15~cm,
for a total volume of 4.9 liters of optical medium.
In order to reduce the variation of gain of the PMTs due to their orientation with respect
to the Earth's magnetic field, a cylindrical envelope of mu-metal shields both PMTs
from any variation of the magnetic field.
Depending on the orientation of the cells (PMT looking upward or downward) and the type of
optical medium present in each sample cell, it is possible
to isolate the proportion of WLS light and direct Cherenkov light detected by the PMTs and
perform a light gain measurement. The Cherenkov cone of light always points downward; therefore,
only a PMT in the upward-facing position would detect it, while the isotropically distributed WLS
light is detected by both downward- and upward-looking PMTs.
Although the apparatus could have been used to analyze many concentrations of different WLS
candidates, only CS124 at a concentration of 15.4~ppm was tested, since it appeared to be the most
promising candidate for addition to the SNO detector. This
concentration is higher than the optimal concentration of a few ppm obtained from the Monte Carlo
analysis for SNO \cite{Rollin} because it was necessary to compensate for the smaller scale of
the apparatus.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=75mm]{fig3.eps}
\caption{Schematic drawing of WLS cosmic ray telescope.}
\label{fig:telescope}
\end{center}
\end{figure}
\subsection{Calibration of the apparatus}
Two types of calibration were performed. Since the resolution
and sensitivity of the SNO PMTs allow the detection of a single photon,
the first method consists of measuring
the charge induced by a single photon hitting the PMT. Thus, the absolute scale
of the apparatus was measured, showing good linearity up to a few
thousand photons, which is the luminosity expected from a cosmic ray event.
The second method uses the data itself to check if both PMTs respond the same way to the
Cherenkov light as a function of the voltage.
The voltages of the two PMTs were adjusted so as to equalize their gains.
Such calibration allows
consideration of the intrinsic sensitivity of each PMT and any other cell asymmetry.
\subsection{WLS timing measurements}
The WLS re-emission time measurement consists of fitting the PMT pulse shape,
assuming two Gaussian-distributed time components: the direct Cherenkov light
and the WLS light. The fit of the re-emission time of CS124 gave a value of 6.1~$\pm$~0.5~ns, which
is in good agreement with 6.2~$\pm$~0.3~ns found in the fluorescence lifetime measurement using a
laser system (see Table~1). Although a more precise measurement would have been possible with
a faster data acquisition system and a more sophisticated pulse shape analysis, it was
unnecessary since the laser system was faster and easier to operate, and much less susceptible
to systematic errors. Based on the agreement between the timing measurements performed, the WLS
telescope seems to be well calibrated for the gain measurement described in the next section.
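A minimal version of such a pulse-shape fit is sketched below, assuming the summed PMT pulse can be modeled as two Gaussian time components; the amplitudes, time constants, and synthetic data are illustrative only and are not the values used in the actual analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, t1, s1, a2, t2, s2):
    """Prompt Cherenkov component plus a delayed WLS component."""
    return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

# Synthetic pulse for illustration: prompt peak + WLS peak delayed ~6 ns.
t = np.linspace(-10.0, 40.0, 500)
rng = np.random.default_rng(1)
y = two_gauss(t, 1.0, 0.0, 2.0, 0.6, 6.2, 3.0) + rng.normal(0.0, 0.01, t.size)

p0 = [1.0, 0.0, 2.0, 0.5, 5.0, 3.0]        # rough initial guesses
popt, _ = curve_fit(two_gauss, t, y, p0=p0)
print(f"fitted WLS delay: {popt[4] - popt[1]:.2f} ns")
\end{verbatim}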
\subsection{WLS light gain measurement}
Depending on the orientation of the Cherenkov cell, there are two ways to measure the mean
amplitude of pulses with and without WLS. If a WLS solution is in the upper cell and water is
in the lower cell, only WLS light is detected in the first ($A_W$) and only Cherenkov light
is detected in the second ($A_C$).
After inverting the configuration, both Cherenkov and WLS light
are detected in the lower cell ($A_{W+C}$), while in the upper cell only the dark noise
should be detected ($A_{\rm{noise}}$). Based on the
ratio $\frac{A_{W+C}}{A_C}$ between the mean pulse amplitudes of the PMTs, an increase of 2.0~$\pm$~0.2 in the
number of detected photons was obtained with WLS. The error is dominated by the level of noise
in the PMTs and some residual asymmetry in the cells. A fully consistent result is obtained
when the gain is computed from $\frac{A_W+A_C}{A_C}$.
The main interest of the WLS cosmic ray telescope was to perform a light gain measurement and
to ultimately determine not only the number of photons detected by the PMTs but the total
number of photons produced.
Since the PMT in a given cell detects only a fraction of the light produced and this fraction is
different for Cherenkov and WLS light, a calculation is required to obtain the increase in the number of
photons produced within a $4\pi$ solid angle.
The Monte Carlo method was used for the determination of the WLS cosmic ray telescope acceptance
and the light gain correction factor.
The simulation generated Cherenkov photons, calculated their mean free path in the WLS solution,
and, if the interaction point was within the cell volume, changed the direction of the photon
randomly. It was found that 61.0\% of the Cherenkov photons propagating toward the PMT
were hitting it, while only 18.1\% of the isotropic WLS photons ended their path on the PMT. Therefore,
adding 15.4 ppm of carbostyril 124 into the WLS cosmic ray telescope
resulted in a net light gain of 4.4~$\pm$~0.5.
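One simple way to reproduce the quoted number, assuming the excess detected light relative to the Cherenkov-only configuration is entirely WLS light, is to correct each component by its simulated acceptance:
\[
G_{4\pi} = 1 + \left(G_{\mathrm{det}}-1\right)\frac{\varepsilon_{C}}{\varepsilon_{W}}
= 1 + (2.0-1)\times\frac{0.610}{0.181} \approx 4.4,
\]
where $G_{\mathrm{det}}=2.0$ is the measured gain in detected photons and $\varepsilon_{C}=61.0\%$ and $\varepsilon_{W}=18.1\%$ are the simulated acceptances for Cherenkov and WLS light, respectively.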
\section{Conclusions}
The optical and chemical properties of two new water-based wavelength shifter (WLS) molecules,
carbostyril 124 (CS124) and Alexa Fluor 350 (AF350), were tested as candidates to increase the
detection efficiency of Cherenkov light
of the Sudbury Neutrino Observatory (SNO). The tests indicated that these
pH-insensitive WLS chemicals have strong absorbency below 350 nm and high re-emission probability
between 350 and 500~nm with a short fluorescence decay time. This can significantly boost
the Cherenkov signals detected by the SNO PMTs which surround the D$_{2}$O region. Their long
lifetime stability and good chemical compatibility with the materials used in the
SNO detector as well as in the heavy water system allow addition of WLS with no major
modification to the detector. Monte Carlo simulations allowed a detailed study
of the response of a full-scale solar neutrino
experiment to the addition of WLS. It was shown that a 5 ppm concentration of AF350 or CS124 would
increase the light yield detected by the SNO PMTs by a factor of 2.9 and 3.0, respectively. A cosmic
ray telescope was also built to test the WLS compounds in a well controlled experimental setup, and
an increase in the detected Cherenkov light was found upon adding carbostyril 124. Taking into
account the geometrical acceptance of the detector, an increase in light yield of 4.4~$\pm$~0.5 was
measured. The measured increase in the number of photons obtained with the WLS telescope should be
compared with the light gain of 3.0 obtained with the full simulation of SNO.
It is important to note that
a Cherenkov detector with a low sensitivity to ultraviolet light will greatly benefit from the addition
of WLS. While a larger light gain is obtained in a small-scale detector, where the attenuation is
negligible, a reduced increase in the number of detected Cherenkov photons is expected if a wavelength
shifter were added to the heavy water of the Sudbury Neutrino Observatory.
Ultimately the properties
of the wavelength shifter candidates surveyed in this study can be used toward the conceptual design
of future water based Cherenkov detectors.
\section{Acknowledgments}
The authors would like to thank Alex Davis, Pascal Elahi, Rachel Faust, and
Christian Mallet for their contributions to the experiments.
We wish to thank Mark Chen and Alex Wright for the measurements
of fluorescence decay time of the WLS chemicals using the laser system in the Department of Physics
at Queen's University.
This research was supported in Canada by
the Canadian Foundation for Innovation (CFI), the Canada Research Chair (CRC) program, and
the Natural Sciences and Engineering
Research Council (NSERC).
\section{Introduction}
Differential privacy (DP), proposed by \citet{dwork2006calibrating}, is the state-of-the-art framework in formal privacy protection and is being implemented by tech companies, government agencies, and academic institutions. Over time, the DP community has developed many new DP mechanisms as well as new frameworks. Recently, $f$-DP \citep{dong2021gaussian} was proposed as a generalization of DP, allowing for tight calculations of group privacy, composition, subsampling, and post-processing. It was shown in \citet{dong2021gaussian} that $f$-DP is provably the tightest version of DP that respects the post-processing property of DP. In particular, $f$-DP can be losslessly converted to R\'enyi-DP (or any $f$-divergence version of DP) as well as $(\epsilon,\delta)$-DP, but not vice-versa \citep{dong2021gaussian}. Furthermore, $f$-DP is equivalent (can be losslessly converted back and forth) to the privacy profile \citep{balle2018privacy,balle2020privacy} and the privacy loss random variables \citep{sommer2019privacy,zhu2022optimal}.
$f$-DP is defined in terms of a \emph{tradeoff function} or \emph{receiver operator curve (ROC)}, which encapsulates the difficulty of conducting a hypothesis test between two distributions. If $f=T(P,Q)$ is the tradeoff function for testing between the distributions $P$ and $Q$, then if a mechanism $M$ satisfies $f$-DP, this means that given the output of $M$ when run on one of two adjacent databases, it is at least as hard to determine which database was used, as it is to test between $P$ and $Q$.
While $f$-DP has the many desirable theoretical properties listed above in its favor, there are limited techniques for working with $f$-DP, and few constructive mechanisms for an arbitrary $f$-DP guarantee. A notable exception is a \emph{canonical noise distribution} (CND) from the recent paper \citet{awan2021canonical}, which builds a one-dimensional additive noise mechanism designed to exactly satisfy $f$-DP, with no wasted privacy budget. Along with the intuitive idea that a CND is optimal in that it optimizes the privacy loss budget, \citet{awan2021canonical} showed that CNDs are crucial to the construction of optimal DP hypothesis tests and free DP $p$-values. However, the CND construction given in \citet{awan2021canonical} does not result in a smooth distribution, and in particular is not \emph{log-concave}.
Log-concavity is a desirable property because it implies that the distribution has a monotone likelihood ratio; this means that higher observed values are always more likely to have come from a higher input value than a lower one. This makes the DP output much more interpretable and easily analyzed, and also makes the calculation of the privacy cost simpler \citep{dong2021central}. Furthermore, the results of \citet{awan2021canonical} are limited to 1-dimensional distributions.
In this paper, we develop new properties of CNDs and $f$-DP, motivated by the following two questions,
\begin{center}
1. Can we construct log-concave CNDs?\qquad
2. Can we construct multivariate CNDs?
\end{center}
We found that the existence of both log-concave 1-dimensional CNDs and multivariate CNDs was intricately linked with properties related to functional composition of tradeoff functions (which is known to be related to group privacy) and tensor products of tradeoff functions (related to composition of mechanisms). We found that two highly desirable properties of a tradeoff function are \emph{infinite divisibility} and \emph{infinite decomposability}, meaning that the tradeoff function can be exactly achieved by $n$-fold group privacy or $n$-fold composition (respectively). We show that $(\epsilon,0)$-DP does not satisfy either property and in fact has neither a log-concave CND nor any multivariate CND. In contrast to $(\epsilon,0)$-DP, two families that satisfy both infinite divisibility and infinite decomposability are $\mu$-GDP and $(0,\delta)$-DP. While $(0,\delta)$-DP has limited applicability due to its weak protection for events with small probability, $\mu$-GDP and related DP definitions (such as zero concentrated DP) have been gaining popularity. The results of this paper provide a new perspective supporting the adoption of GDP as the default privacy measure instead of $(\epsilon,0)$-DP. Our results are more precisely summarized as follows:
{\bf Our Contributions and Organization }
In Section \ref{s:background}, we review concepts in $f$-DP and canonical noise distributions. In Section \ref{s:1d}, we study 1-dimensional CNDs. In Section \ref{s:pure}, we prove that the Tulap distribution is the \emph{unique} CND for $(\epsilon,0)$-DP. In Section \ref{s:logconcave}, we propose the concept of infinite divisibility and prove that a tradeoff function has a log-concave CND if and only if it is infinitely divisible; we also give a construction to produce the log-concave CND from a family of infinitely divisible tradeoff functions. We prove that piece-wise linear tradeoff functions are generally not infinitely divisible in Section \ref{s:piece}, and in particular $(\epsilon,0)$-DP and several related tradeoff functions do not have log-concave CNDs. In Section \ref{s:multi}, we propose a multivariate extension of CND. We give two general constructions of multivariate CNDs in Section \ref{s:construction} depending on whether a tradeoff function is decomposable or infinitely divisible. We give several examples of multivariate CNDs in Sections \ref{s:gdp}-\ref{s:laplace} for Gaussian DP, $(0,\delta)$-DP, $(\epsilon,\delta)$-DP, and Laplace-DP. In Section \ref{s:nopure}, we show that there is no multivariate CND for $(\epsilon,0)$-DP, which implies that $(\epsilon,0)$-DP is not decomposable. We conclude with discussion in Section \ref{s:discussion}. Proofs and technical details are found in the Supplementary Materials.
{\bf Related Work } While there are many complex DP mechanisms, many use the fundamental building block of additive mechanisms. For example, functional mechanism \citep{zhang2012functional}, objective perturbation \citep{chaudhuri2011differentially,kifer2012private}, stochastic gradient descent \citep{abadi2016deep}, and the sparse vector technique \citep{dwork2009complexity,zhu2020improving}, to name a few. There have been many different additive mechanisms proposed in the literature, for different privacy purposes. We highlight the works that show some optimality property for the proposed noise distributions. This work is most directly building off of \citet{awan2021canonical}, who proposed the concept of canonical noise distributions as a method of quantifying what it means to fully use the privacy budget. There are also other works, which derive optimal mechanisms with respect to other metrics.
\citet{ghosh2012universally} showed that a discrete Laplace distribution is the universal utility maximizer for a general class of utility functions in pure-DP.
\citet{geng2015optimal} proposed the staircase mechanism which they showed optimizes the $\ell_1$ or $\ell_2$ error for pure-DP. For $(\epsilon,\delta)$-DP, \citet{geng2015approx} showed that either the staircase or a uniform distribution can achieve the optimal rate in terms of $\ell_1$ and $\ell_2$ error. \citet{steinke2016between} showed that the $\ell_\infty$-mechanisms is rate optimal when measuring utility in terms of $\ell_\infty$ error.
\citet{awan2020structure} derive optimal mechanisms among the class of $K$-Norm Mechanisms, proposed by \citet{hardt2010geometry}, in terms of various scale-independent measures, for a fixed statistic and sample size.
\section{Differential privacy basics}\label{s:background}
Differential privacy ensures that given the output of a private mechanism, it is difficult for an adversary to determine whether an individual is present in the database or not.
To satisfy DP, a privacy expert employs a \emph{mechanism} $M$, which is a set of probability distributions $M_D$ on a common space $\mscr Y$, indexed by possible databases $D\in \mscr D$.
Let $d(D,D')$ be an integer-valued metric on the space of databases $\mscr D$, which represents the number of entries that $D$ and $D'$ differ in. We call $D$ and $D'$ \emph{adjacent} if $d(D,D')\leq 1$.
While there are now many variants of DP, they all center around the idea that given a randomized algorithm $M$, for any two adjacent databases $D$, $D'$, the distributions of $M(D)$ and $M(D')$ should be ``similar.'' While many DP variants measure similarity in terms of divergences, $f$-DP formalizes similarity in terms of hypothesis tests.
Intuitively, for two adjacent databases $D$ and $D'$, a mechanism $M$ satisfies $f$-DP if given the output of $M$, it is difficult to determine whether the original database was $D$ or $D'$. This is formalized in terms \emph{tradeoff functions}.
For two distributions $P$ and $Q$, the \emph{tradeoff function} (or ROC) between $P$ and $Q$ is $T(P,Q):[0,1]\rightarrow [0,1]$, where $T(P,Q)(\alpha)=\inf \{1-\mathbb{E}_Q \phi \mid \mathbb{E}_P(\phi)\geq 1-\alpha\}$, where the infimum is over all measurable tests $\phi$. The tradeoff function returns the optimal type II error for testing $H_0=P$ versus $H_1=Q$ at specificity (one minus type I error) $\alpha$, and captures the difficulty of distinguishing between $P$ and $Q$.%
\footnote{In \citet{dong2021gaussian}, the tradeoff function was originally defined as a function of type I error. Our choice to flip the tradeoff function along the $x$-axis is for mathematical convenience. The ROC function is usually defined as the power (one minus type II error) as a function of type I error.}
A function $f:[0,1]\rightarrow [0,1]$ is a tradeoff function if and only if $f$ is convex, continuous, non-decreasing, and $f(x) \leq x$ for all $x \in [0,1]$ \citep[Proposition 2.2]{dong2021gaussian}. We say that a tradeoff function $f$ is \emph{nontrivial} if $f(\alpha)<\alpha$ for some $\alpha\in (0,1)$.
\begin{defn}[$f$-DP: \citealp{dong2021gaussian}]\label{def:fDP}
Let $f$ be a tradeoff function. A mechanism $M$ satisfies $f$-DP if $T(M(D),M(D'))\geq f$ for all $D,D'\in \mscr D$ which satisfy $d(D,D')\leq 1$.
\end{defn}
Intuitively, a mechanism satisfies $f$-DP, where $f=T(P,Q)$, if testing $H_0:M(D)$ versus $H_1: M(D')$ is at least as hard as testing $H_0:P$ versus $H_1:Q$. Without loss of generality we can assume that $f$ is \emph{symmetric}, meaning that if $f=T(P,Q)$, then $f=T(Q,P)$. This is due to the fact that adjacency of databases is a symmetric relation \citep[Proposition 2.4]{dong2021gaussian}. So, we limit the focus of this paper on symmetric tradeoff functions.
The traditional framework of $(\epsilon,\delta)$-DP is a subclass of $f$-DP: Let $\epsilon\geq 0$ and $\delta\in [0,1]$. A mechanism satisfies $(\epsilon,\delta)$-DP if it satisfies $f_{\epsilon,\delta}$-DP, where $f_{\epsilon,\delta}(\alpha) =\max\{0,1-\delta-e^{\epsilon}+e^{\epsilon}\alpha,\exp(-\epsilon)(\alpha-\delta)\}$. We call $(\epsilon,0)$-DP \emph{pure DP}.
Another popular subclass is Gaussian-DP (GDP): For $\mu\geq 0$, a mechanism satisfies $\mu$-GDP if it satisfies $G_\mu$-DP, where $G_\mu = T(N(0,1), N(\mu,1))$. Gaussian-DP was proposed in \citet{dong2021gaussian} and has several desirable properties, such as being closed under group privacy and closed under composition. \citet{dong2021gaussian} also established a central limit theorem for tradeoff functions as the number of compositions approaches infinity, showing that under general assumptions the tradeoff function of the composed mechanisms approaches $G_\mu$ for some $\mu$.
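In this paper's convention (type II error as a function of the specificity $\alpha$), one can check directly that $G_\mu(\alpha)=\Phi(\Phi^{-1}(\alpha)-\mu)$, where $\Phi$ is the standard normal cdf. The following sketch is a direct transcription of this formula; the numerical example is only for illustration.
\begin{verbatim}
from scipy.stats import norm

def G(mu, alpha):
    """Gaussian tradeoff function G_mu, specificity convention."""
    return norm.cdf(norm.ppf(alpha) - mu)

# Optimal type II error for testing N(0,1) vs N(1,1) at 95% specificity:
print(G(1.0, 0.95))   # approximately 0.74
\end{verbatim}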
Two properties of differential privacy are that it implies privacy guarantees for groups, as well as cumulative privacy guarantees for multiple DP outputs. \citet[Theorem 2.14]{dong2021gaussian} showed that if a mechanism is $f$-DP, then it satisfies $f^{\circ k}$-DP (where $f^{\circ k}$ means the functional composition of $f$ with itself, $k$ times), when the adjacency measure is changed to allow for a difference in $k$ entries. We call this \emph{group privacy}, which is a central topic in differential privacy.
Note that the bound $f^{\circ k}$ is not necessarily the tightest privacy guarantee for a particular mechanism.
\emph{Mechanism Composition} quantifies the cumulative privacy cost of the output of $k$ mechanisms. To express the tradeoff function resulting from composition, \citet{dong2021gaussian} proposed the \emph{tensor product} of tradeoff functions: if $f=T(P,Q)$ and $g=T(P',Q')$, then $f\otimes g \vcentcolon= T(P\times P',Q\times Q')$, which they show is well defined, commutative, and associative. They prove that if we have $k$
mechanisms $M_1,\ldots, M_k$, which each satisfy $f_1$-DP, $f_2$-DP,$\ldots, f_k$-DP respectively, then the composition $(M_1,\ldots, M_k)$ satisfies $f_1\otimes\cdots\otimes f_k$-DP (see \citet[Theorem 3.2]{dong2021gaussian} for a more precise statement).
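As a concrete instance of this operation, Gaussian tradeoff functions compose cleanly \citep{dong2021gaussian}:
\[
G_{\mu_1}\otimes G_{\mu_2} = G_{\sqrt{\mu_1^2+\mu_2^2}},
\]
so the composition of $k$ mechanisms, each of which is $\mu$-GDP, is $\sqrt{k}\,\mu$-GDP.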
\subsection{Canonical noise distributions}
To satisfy DP, additive mechanisms must introduce noise proportional to the \emph{sensitivity} of the statistic of interest. Let $\lVert \cdot \rVert$ be a norm on $\mathbb{R}^d$. A statistic $S:\mscr D\rightarrow \mathbb{R}^d$ has $\lVert \cdot \rVert$-\emph{sensitivity} $\Delta>0$ if $\lVert S(D)-S(D')\rVert\leq \Delta$ for all $d(D,D')\leq 1$. When $d=1$, we use $|\cdot|$ as the default norm. Any additive mechanism, which releases $S(D)+\Delta N$, satisfies $f$-DP if $T(N,N+v)\geq f$ for all $\lVert v \rVert\leq 1$. The concept of a \emph{canonical noise distribution} (CND) was proposed by \citet{awan2021canonical} to capture when an additive mechanism satisfies $f$-DP and ``fully uses the privacy budget.''
\begin{defn}[Canonical noise distribution: \citet{awan2021canonical}]\label{def:CND}
Let $f$ be a symmetric tradeoff function. A random variable $N$ with cumulative distribution function (cdf) $F$ is a \emph{canonical noise distribution} (CND) for $f$ if
\begin{enumerate}
\item
For any $m\in [0,1]$, $T(N,N+m)\geq f$,
\item $f(\alpha)=T(N,N+1)(\alpha)$ for all $\alpha \in (0,1)$,
\item $T(N,N+1)(\alpha) = F(F^{-1}(\alpha)-1)$ for all $\alpha \in (0,1)$,
\item $F(x) = 1-F(-x)$ for all $x\in \mathbb{R}$; that is, $N$ is symmetric about zero.
\end{enumerate}
\end{defn}
In Definition \ref{def:CND}, property 1 ensures that the additive mechanism using a CND satisfies $f$-DP, property 2 ensures that the privacy guarantee is tight, property 3 gives a closed form for the tradeoff function in terms of the CND's cdf, which is equivalent to enforcing a monotone likelihood ratio property, and property 4 imposes symmetry which is mostly for convenience.
An important property of CNDs is that they satisfy the following recurrence relation:
\begin{lemma}[\citet{awan2021canonical}]\label{lem:recurrence}
Let $f$ be a symmetric nontrivial tradeoff function and let $F$ be a CND for $f$. Then $F(x)=1-f(1-F(x-1))$ when $F(x-1)>0$ and $F(x)=f(F(x+1))$ when $F(x+1)<1$.
\end{lemma}
In \citet{awan2021canonical}, it was shown that the above recurrence relation can be used to construct a CND for any nontrivial symmetric tradeoff function.
\begin{prop}[CND construction: \citet{awan2021canonical}]\label{prop:CNDsynthetic}
Let $f$ be a symmetric nontrivial tradeoff function, and let $c\in [0,1]$ be the solution to $f(1-c)=c$. We define $F_f:\mathbb{R}\rightarrow \mathbb{R}$ as
\[ F_f(x) = \begin{cases}
f(F_f(x+1))&x<-1/2\\
c(1/2-x) + (1-c)(x+1/2)&-1/2\leq x\leq 1/2\\
1-f(1-F_f(x-1))&x>1/2.\\
\end{cases}\]
Then $N\sim F_f$ is a canonical noise distribution for $f$.
\end{prop}
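The recursion in Proposition \ref{prop:CNDsynthetic} can be evaluated directly. The sketch below computes $F_f$ pointwise for any symmetric nontrivial tradeoff function supplied as a Python function; the bisection step solves $f(1-c)=c$. The example tradeoff function is $f_{\epsilon,0}$, in which case the output is the Tulap cdf discussed in Section \ref{s:pure}.
\begin{verbatim}
import math

def cnd_cdf(f):
    """CND cdf F_f from the CND construction, for a tradeoff function f."""
    lo, hi = 0.0, 1.0               # bisection for c solving f(1 - c) = c
    for _ in range(60):
        c = 0.5 * (lo + hi)
        if f(1.0 - c) > c:
            lo = c
        else:
            hi = c
    c = 0.5 * (lo + hi)

    def F(x):
        if x < -0.5:
            return f(F(x + 1.0))
        if x > 0.5:
            return 1.0 - f(1.0 - F(x - 1.0))
        return c * (0.5 - x) + (1.0 - c) * (x + 0.5)

    return F

eps = 1.0
f_pure = lambda a: max(0.0, 1.0 - math.exp(eps) + math.exp(eps) * a,
                       math.exp(-eps) * a)
F = cnd_cdf(f_pure)
print(F(0.0), F(1.0))   # 0.5 and about 0.816
\end{verbatim}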
While Proposition \ref{prop:CNDsynthetic} gives a general construction of a CND for an arbitrary $f$, the resulting distribution is generally not smooth or log-concave. \citet{awan2021canonical} showed that in the case of $G_\mu$, this construction does not recover the Gaussian distribution, which is the log-concave CND.
\section{One-dimensional CNDs}\label{s:1d}
In this section, we expand on the results of \citet{awan2021canonical} by producing new results for one-dimensional CNDs. In Section \ref{s:pure}, we show that the Tulap distribution is the \emph{unique} CND for $(\epsilon,0)$-DP. In Section \ref{s:logconcave}, we propose the concept of an \emph{infinitely divisible tradeoff function} and show that a tradeoff function has a log-concave CND if and only if it is infinitely divisible. We also give a construction to produce the unique log-concave CND for an infinitely divisible family of tradeoff functions. In Section \ref{s:piece}, we determine when a piece-wise linear tradeoff function is divisible, and show that $f_{\epsilon,0}$ and related tradeoff functions are not infinitely divisible, and hence do not have log-concave CNDs.
\subsection{CNDs for \texorpdfstring{$(\epsilon,0)$-DP}{pure-DP}}\label{s:pure}
In \citet{awan2021canonical}, it was shown that in general, the CND is not unique, but it was not clear whether there existed alternative CNDs for $f_{\epsilon,0}$ or $f_{\epsilon,\delta}$. We begin this section by showing that the Tulap distribution, which was shown to be a CND for $f_{\epsilon,\delta}$ by \citet{awan2021canonical} is in fact the \emph{unique} CND for $f_{\epsilon,0}$. The Tulap distribution was proposed by \citet{awan2018differentially} for the purpose of designing uniformly most powerful hypothesis tests for Bernoulli data. In the case of $(\epsilon,0)$-DP, the Tulap distribution coincides with one of the staircase mechanisms \citep{geng2015optimal}. It is also closely related to the discrete Laplace distribution (also known as the geometric mechanism), which is optimal for a wide range of utility functions in \citet{ghosh2012universally}.
\begin{restatable}{prop}{propuniqueCND}\label{prop:uniqueCND}
Let $\epsilon>0$. The distribution $\mathrm{Tulap}(0,\exp(-\epsilon),0)$ is the unique CND for $f_{\epsilon,0}$.
\end{restatable}
\begin{proof}[Proof Sketch.]
By Lemma \ref{lem:recurrence}, the only choice in a CND is on $[-1/2,1/2]$. If the density is non-constant on $[-1/2,1/2]$, we show that the likelihood ratio is not bounded by $e^{\epsilon}$, violating $\epsilon$-DP.
\end{proof}
Proposition \ref{prop:uniqueCND} is a surprising result in that one may expect a more natural CND than the Tulap distribution, which has a discontinuous density. However, we now know that there are no other CNDs for $(\epsilon,0)$-DP. In particular, there is no log-concave CND, which is the topic of the next subsection.
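For reference, one standard way to sample from $\mathrm{Tulap}(0,b,0)$ with $b=e^{-\epsilon}$ uses its representation as a discrete Laplace variable plus an independent $U(-1/2,1/2)$ \citep{awan2018differentially}; we state this representation as an assumption of the sketch.
\begin{verbatim}
import numpy as np

def tulap(rng, eps, size=1):
    """Sample Tulap(0, b, 0), b = exp(-eps), assuming the representation
    as (discrete Laplace) + Uniform(-1/2, 1/2)."""
    b = np.exp(-eps)
    g1 = rng.geometric(1.0 - b, size=size)   # support {1, 2, ...}
    g2 = rng.geometric(1.0 - b, size=size)
    u = rng.uniform(-0.5, 0.5, size=size)
    return (g1 - g2) + u                     # g1 - g2 is discrete Laplace

rng = np.random.default_rng(0)
print(tulap(rng, eps=1.0, size=5))
\end{verbatim}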
\subsection{Infinite divisibility and log-concavity}\label{s:logconcave}
It has been shown in \citet{dong2021gaussian} and \citet{dong2021central} that tradeoff functions built from location family log-concave distributions have very nice properties for $f$-DP. Log-concave distributions have a monotone likelihood ratio property, which gives a simple closed form expression for the tradeoff function in terms of the cdf of the log-concave distribution.
It is easily observed that a tradeoff function with a log-concave CND satisfies a property that we call \emph{infinite divisibility}. We prove that in fact a tradeoff function has a log-concave CND if and only if it is infinitely divisible. Our proof also results in a construction to produce the unique log-concave CND.
A continuous random variable $X$ is \emph{log-concave} if its density can be written as $g_X(x)\propto \exp(C(x))$, where $C$ is a concave function. We call a (symmetric) tradeoff function $f$ \emph{log-concave} if there exists a log-concave CND $N$ for $f$. Recall that if $N\sim F$ is a CND for $f$, then
$f(\alpha) = F(F^{-1}(\alpha)-1)$. If $N$ is also log-concave, then $f_t(\alpha)\vcentcolon= F(F^{-1}(\alpha)-t)$ is a tradeoff function for every $t\in [0,\infty)$, and the family $\{f_t\mid t\in [0,\infty)\}$ is a monoid satisfying the assumptions of Definition \ref{def:divisible}.
\begin{defn}\label{def:divisible}
A tradeoff function $f$ is \emph{infinitely divisible} if there exists a monoid, under the operation of functional composition, $\{f_t\in \mscr F\mid t\geq 0\}$ containing $f$ such that
\begin{enumerate}
\item $f_t\circ f_s=f_{t+s}$ for all $s,t\geq 0$,
\item $f_s$ is nontrivial for all $s>0$, and
\item $f_s\rightarrow f_0=\mathrm{Id}$ as $s\downarrow 0$.
\end{enumerate}
\end{defn}
The discussion above established that log-concave CNDs are infinitely divisible. The key result of this section is that a tradeoff function is log-concave if and only if it is infinitely divisible. We saw that it is easy to construct the infinitely divisible family given a log-concave CND. Surprisingly, we give a construction to derive the log-concave CND from the infinitely divisible family as well. This result shows an intimate relationship between properties of a tradeoff function and the possible CNDs for that tradeoff function. We will see in Section \ref{s:construction} that the property of infinite divisibility shows up again in the construction of multivariate CNDs.
\begin{restatable}{thm}{thmCNDlimit}\label{thm:CNDlimit}
A nontrivial tradeoff function $f\in \mscr F$ is log-concave if and only if it is infinitely divisible. In particular,
\begin{enumerate}
\item If $f$ is log-concave with log-concave CND $N\sim F$, then $\{f_t\mid t\geq 0\}$ defined by $f_t(\alpha) = F(F^{-1}(\alpha)-t)$ satisfies the assumptions of Definition \ref{def:divisible}.
\item Let $f$ be infinitely divisible, with monoid $\{f_t\in \mscr F\mid t\geq 0\}$, as defined in Definition \ref{def:divisible}, such that $f=f_1$. Let $F_s$ be any CND for $f_s$ (such as constructed in Proposition \ref{prop:CNDsynthetic}). Then the following limit exists $F^*(t) \vcentcolon= \lim_{s\rightarrow 0} F_s(\frac{1}{s} t)$ and $N\sim F^*$ is the unique log-concave CND for $f$.
Furthermore, $F^*(st)$ is the unique log-concave CND for $f_s$, for all $s>0$.
\end{enumerate}
\end{restatable}
\begin{proof}[Proof Sketch.]
It is easy to verify property 1. For property 2, we consider a subsequence $s_n=1/n!$ and observe that $F_{1/n!}(n! t)$ is a CND for $f$ for any $n$, but that as $n$ increases, the number of points at which the CND is uniquely determined also increases, by Lemma \ref{lem:recurrence}. In the limit, this sequence converges to a unique cdf, which we show has the properties of a log-concave CND.
\end{proof}
\begin{example}
We will illustrate the limit of Theorem \ref{thm:CNDlimit} on $G_1$. Let $F_{G_{2^{-n}}}$ be the constructed cdf from Proposition \ref{prop:CNDsynthetic} for $n=0,1,2,3$. The density functions corresponding to $F_{G_{2^{-n}}}(2^{n}t)$ are plotted in Figure \ref{fig:CNDlimit}. We see that as $n$ increases, the pdfs approach that of a standard normal, which we know is the log-concave CND for $f=G_1$.
When the construction of Theorem \ref{thm:CNDlimit} is applied to $f_{\epsilon,0}$, the cdf $F^*$ converges to a Laplace cdf. This seems to reflect the fact that under the limit of group privacy, $(\epsilon,0)$-DP converges to Laplace-DP \citep[Proposition 2.15]{dong2021gaussian}.
\begin{figure}
\centering
\includegraphics[width=.24\linewidth]{CNDlimit1.pdf}
\includegraphics[width=.24\linewidth]{CNDlimit2.pdf}
\includegraphics[width=.24\linewidth]{CNDlimit4.pdf}
\includegraphics[width=.24\linewidth]{CNDlimit8.pdf}
\caption{An illustration of Theorem \ref{thm:CNDlimit} when applied to $G_1$. From left to right, we have the density corresponding to $F_{G_{2^{-n}}}(2^n t)$ for $n=0,1,2,3$.}
\label{fig:CNDlimit}
\end{figure}
\end{example}
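The convergence illustrated above can be reproduced numerically by reusing cnd_cdf from the earlier sketch: for the monoid $f_s=G_s$ we evaluate $F_s(t/s)$ for shrinking $s$ and compare against $\Phi(t)$, the unique log-concave CND for $G_1$.
\begin{verbatim}
from scipy.stats import norm

def G(mu):
    """G_mu in the specificity convention, with the exact boundary values."""
    def f(a):
        if a <= 0.0:
            return 0.0
        if a >= 1.0:
            return 1.0
        return norm.cdf(norm.ppf(a) - mu)
    return f

t = 1.3
for n in range(5):
    s = 2.0 ** (-n)
    Fs = cnd_cdf(G(s))          # cnd_cdf from the earlier sketch
    print(f"s = 2^-{n}: F_s(t/s) = {Fs(t / s):.4f}")
print(f"target Phi({t}) = {norm.cdf(t):.4f}")
\end{verbatim}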
Finally, we illustrate why properties 2 and 3 of Definition \ref{def:divisible} are necessary for Theorem \ref{thm:CNDlimit}.
\begin{example}
[Non examples for Theorem \ref{thm:CNDlimit}]
First consider why it is necessary to have $f_s\rightarrow \mathrm{Id}$. Set $f_s(\alpha) = I(\alpha=1)$
for all $s>0$. Note that $f_s\circ f_t=f_{s+t}$, but that the construction of Theorem \ref{thm:CNDlimit} results in a point mass at zero, which is not a CND as it is not continuous.
Next, suppose that all of the tradeoff functions are trivial, then $f_s(\alpha)=\alpha$ for all $s>0$, and $f_s\circ f_t=f_{s+t}$. However, there are no CNDs in this case.
\end{example}
\subsection{Piece-wise linear tradeoff functions are not infinitely divisible}\label{s:piece}
We showed in Theorem \ref{thm:CNDlimit} that if a tradeoff function is infinitely divisible, then we can construct a log-concave CND. However, it is not always obvious whether a tradeoff function is infinitely divisible or not. We show that in the case of piece-wise linear tradeoff functions, we can upper bound the number of possible divisions in terms of the number of break points. In particular, the piece-wise linear tradeoff functions considered in this section are not infinitely divisible.
We can characterize piece-wise linear convex functions in terms of their second-derivative behavior: a convex function is piece-wise linear if and only if its second derivative is defined everywhere except at finitely many points, and is zero whenever it is defined.
Part 1 of Proposition \ref{prop:piecewise} shows that a piece-wise linear tradeoff function $f$ for which $f(x)=0$ implies $x=0$ can be sub-divided only a finite number of times. A consequence of this is that $f_{\epsilon,0}$ and several related tradeoff functions are not infinitely divisible and hence do not have log-concave CNDs. In fact, not only is $f_{\epsilon,0}$ not infinitely divisible, but there is \emph{no division} $f_{\epsilon,0} = f\circ g$ into symmetric tradeoff functions, except where either $f$ or $g$ is the identity!
\begin{restatable}{prop}{proppiecewise}\label{prop:piecewise}
\begin{enumerate}
\item Let $f$ be a nontrivial piece-wise linear tradeoff function with $k\geq 1$ breakpoints and such that $f(x)=0$ implies that $x=0$. Then there is no tradeoff function $g$ such that $g^{\circ (k+1)}=f$.
\item Let $\epsilon>0$. There does not exist nontrivial symmetric tradeoff functions $f_1$ and $f_2$ such that $f_{\epsilon,0}=f_1\circ f_2$.
\item Let $f$ be the tradeoff function obtained by an arbitrary sequence of mechanism compositions, functional compositions, or subsampling (without replacement) of $f_{\epsilon,0}$ (could be different $\epsilon$ values for each). Then $f$ is not infinitely divisible and so does not have a log-concave CND.
\end{enumerate}
\end{restatable}
\begin{proof}[Proof Sketch.]
We show in Lemma \ref{lem:piecewise} that divisions of a piece-wise linear tradeoff function are themselves piece-wise linear, and that the functional composition of piece-wise linear tradeoff functions increases the number of breakpoints. This then limits the number of divisions a piece-wise linear tradeoff function can have in terms of the number of its breakpoints.
\end{proof}
\begin{example}[$f_{0,\delta}$ is log-concave]
What if $f(x)=0$ does not imply that $x=0$? The tradeoff functions $f_{0,\delta}$ fit within this setting, and the results of Proposition \ref{prop:piecewise} do \emph{not} apply here. In fact, $f_{0,\delta}$ is infinitely divisible with log-concave CND $U(-1/(2\delta),1/(2\delta))$. That is $f_{0,\delta} = T(U,U+\delta)$ where $U\sim U(-1/2,1/2)$.
While $f_{\epsilon,\delta}$ for $\delta>0$ also does not satisfy the assumption that $f_{\epsilon,\delta}(x)=0$ implies $x=0$, it is not clear at this time whether $f_{\epsilon,\delta}$ is log-concave or not.
\end{example}
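The claimed divisibility can also be checked directly from the closed form: setting $\epsilon=0$ in the expression for $f_{\epsilon,\delta}$ gives $f_{0,\delta}(\alpha)=\max\{0,\alpha-\delta\}$, and hence
\[
(f_{0,\delta_1}\circ f_{0,\delta_2})(\alpha)
=\max\{0,\max\{0,\alpha-\delta_2\}-\delta_1\}
=\max\{0,\alpha-\delta_1-\delta_2\}
=f_{0,\min\{\delta_1+\delta_2,1\}}(\alpha).
\]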
\section{Multivariate CNDs}\label{s:multi}
In this section, we generalize the definition of CND to dimensions greater than one. While in the univariate case, \emph{sensitivity} is measured using the absolute distance between two statistic values, in $\mathbb{R}^d$, there are many choices of norms which can be used to measure the sensitivity \citep{awan2020structure}. So, we will specify the sensitivity norm when talking about a multivariate CND. In Definition \ref{def:CND_MVT} we define a multivariate CND to be a natural generalization of properties 1-4 of Definition \ref{def:CND}.
\begin{defn}\label{def:CND_MVT}
Let $f$ be a symmetric tradeoff function, and let $\lVert\cdot \rVert$ be a norm on $\mathbb{R}^d$. A random vector $N$ with density $g$ is a \emph{canonical noise distribution} (CND) for $f$, with respect to $\lVert\cdot \rVert$, if
\begin{enumerate}
\item
For all $v\in \mathbb{R}^d$ such that $\lVert v\rVert\leq 1$ we have that $T(N,N+v)\geq f$,
\item there exists $\lVert v^*\rVert\leq 1$ such that $T(N,N+v^*)(\alpha)=f(\alpha)$ for all $\alpha \in (0,1)$,
\item for all $v^*$ which satisfy property 2, and all $w\in \mathbb{R}^d$, we have that the likelihood ratio $g(w+tv^*-v^*)/g(w+tv^*)$ is a non-decreasing function of $t\in \mathbb{R}$,
\item $N$ is symmetric about zero: $g(x)=g(-x)$ for all $x\in \mathbb{R}^d$.
\end{enumerate}
\end{defn}
When restricted to $d=1$, Definition \ref{def:CND_MVT} recovers Definition \ref{def:CND}. This is clear for properties 1, 2, and 4. Property 3 of Definition \ref{def:CND} can be interpreted as requiring that an optimal rejection set for $T(N,N+1)$ is of the form $[x,\infty)$ for some $x$. By the Neyman--Pearson Lemma, we know that this holds if and only if the likelihood ratio $F'(x-1)/F'(x)$ is non-decreasing in $x$. We see that when $d=1$, property 3 of Definition \ref{def:CND_MVT} is equivalent to property 3 of Definition \ref{def:CND}. We can interpret property 3 of Definition \ref{def:CND_MVT} as enforcing a monotone likelihood ratio in directions parallel to $v^*$.
\subsection{Constructions of multivariate CNDs}\label{s:construction}
Composition gives a simple method to construct a multivariate CND whenever a tradeoff function can be decomposed into the composition of $k$ tradeoff functions:
\begin{restatable}{prop}{propCNDcomposition}\label{prop:CNDcomposition}
Suppose that $f = f_1\otimes f_2\otimes\cdots\otimes f_k$, where $f_1,\ldots,f_k$ are all nontrivial and symmetric tradeoff functions, and let $F_1,F_2,\ldots, F_k$ be CNDs for $f_1,\ldots, f_k$ respectively. Let $N=(N_1,\ldots, N_k)$ be the random vector where $N_i \sim F_i$ are independent.
Then $N$ is a CND for $f$ with respect to $\lVert \cdot \rVert_\infty$.
\end{restatable}
Interestingly, when a tradeoff function is infinitely divisible, and hence has a log-concave CND by Theorem \ref{thm:CNDlimit}, we can create a multivariate CND with respect to $\lVert \cdot \rVert_1$-sensitivity.
\begin{restatable}{thm}{thmlonemech}\label{thm:lonemech}
Let $f$ be a nontrivial and symmetric log-concave tradeoff function with log-concave CND $F$. Let $N=(N_1,\ldots, N_k)$ be the random vector where $N_i\sim F$ are independent.
Then $N$ is a (log-concave) CND for $f$ with respect to $\lVert \cdot \rVert_1$.
\end{restatable}
\begin{proof}[Proof Sketch.]
Since the noise added is i.i.d., we can rephrase the tradeoff function as the tensor product of the individual tradeoff functions. We apply Lemma \ref{lem:otimesCirc}, which lower bounds the tensor product of tradeoff functions by the functional composition.
\end{proof}
Theorem \ref{thm:lonemech} was inspired by the i.i.d. Laplace mechanism. In Section \ref{s:laplace}, we show that the i.i.d. Laplace mechanism is a special case of Theorem \ref{thm:lonemech} and gives a multivariate CND for Laplace-DP.
Note that Theorem \ref{thm:lonemech} results in a log-concave multivariate CND, and if each of $N_1,\ldots, N_k$ are log-concave in Proposition \ref{prop:CNDcomposition}, then that constructed multivariate CND is log-concave as well. \cite{dong2021central} showed that log-concave distributions have many nice properties in multivariate settings as well. We leave it to future work to investigate when multivariate log-concave CNDs exist.
\subsection{Multivariate CND for GDP}\label{s:gdp}
Recall that if $N\sim N(0,I)$ is a $d$-dimensional Gaussian random vector, and $v\in \mathbb{R}^d$ is any vector, then $T(N,N+v)=T(N(0,1),N(\lVert v\rVert_2,1))$ \citep[Proposition D.1(5)]{dong2021gaussian}. This implies that $N(0,I)$ is a multivariate CND for GDP under $\lVert \cdot \rVert_2$-sensitivity. In fact, we show in Proposition \ref{prop:Gauss} that for GDP, any multivariate Gaussian is a CND with respect to any norm.
\begin{restatable}{prop}{propGauss}\label{prop:Gauss}
Let $\Sigma$ be a $d\times d$ positive definite matrix. Let $v^*\in \argmax_{\lVert u\rVert\leq 1} \lVert \Sigma^{-1/2} u\rVert_2$. Then $N(0,\Sigma)$ is a $d$-dimensional CND for $\lVert \Sigma^{-1/2}v^*\rVert_2$-GDP with respect to the norm $\lVert\cdot \rVert$.
\end{restatable}
\begin{remark}
While a multivariate Gaussian is always a multivariate CND for GDP, there is still possibly room for improvement. For Definition \ref{def:CND_MVT}, we only need a single vector to satisfy property 2. However, we could potentially ask that the bound is achieved at all $u$ such that $\lVert u \rVert=1$. Note that if $\lVert \cdot \rVert$ is an elliptical norm, then we do get this stronger property for the multivariate Gaussian, when we choose $\Sigma$ to align with the sensitivity norm.
\end{remark}
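As an illustration of Proposition \ref{prop:Gauss}, when $\lVert\cdot\rVert=\lVert\cdot\rVert_\infty$ the maximizer $v^*$ can be found by brute force, since the maximum of the convex function $u\mapsto\lVert\Sigma^{-1/2}u\rVert_2$ over the unit ball is attained at a vertex of the cube. The sketch below is only practical for small $d$; the example $\Sigma$ is arbitrary.
\begin{verbatim}
import itertools
import numpy as np

def gdp_level_linf(Sigma):
    """mu such that N(0, Sigma) is a CND for mu-GDP w.r.t. the
    l-infinity norm, by searching the vertices of the unit cube."""
    d = Sigma.shape[0]
    L_inv = np.linalg.inv(np.linalg.cholesky(Sigma))
    best_mu, best_v = -np.inf, None
    for signs in itertools.product([-1.0, 1.0], repeat=d):
        v = np.array(signs)
        mu = np.linalg.norm(L_inv @ v)   # equals ||Sigma^{-1/2} v||_2
        if mu > best_mu:
            best_mu, best_v = mu, v
    return best_mu, best_v

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
print(gdp_level_linf(Sigma))
\end{verbatim}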
\subsection{Multivariate CND for \texorpdfstring{$(0,\delta)$-DP}{zero-delta-DP}}
We first review a few facts about $(0,\delta)$-DP, also known as $f_{0,\delta}$-DP. Note that $U(\frac{-1}{2\delta},\frac{1}{2\delta})$ is a (log-concave) CND for $f_{0,\delta}$, so we can write $f_{0,\delta} = T(U,U+\delta)$, where $U\sim U(-1/2,1/2)$. Because of this, $f_{0,\delta}$ is infinitely divisible, and $f_{0,\delta_1}\circ f_{0,\delta_2}=f_{0,\min\{\delta_1+\delta_2,1\}}$. Furthermore, $f_{0,\delta_1}\otimes f_{0,\delta_2} = f_{0,1-(1-\delta_1)(1-\delta_2)}$, as observed in \citet{dong2021gaussian}. This means that $f_{0,\delta}$ is also infinitely decomposable, a property that we had previously seen only for GDP. This decomposability implies, by Proposition \ref{prop:CNDcomposition}, that we can build a multivariate CND for $f_{0,\delta}$ under $\lVert \cdot\rVert_\infty$-sensitivity. In fact, this construction is a multivariate CND for any sensitivity norm.
\begin{restatable}{prop}{propCNDTVDP}\label{prop:CND_TV_DP}
Let $0<\delta\leq 1$, $d\geq 1$, and $\lVert \cdot \rVert$ be a norm on $\mathbb{R}^d$. Let $v^* \in \underset{\lVert v \rVert\leq 1}{\arg\min}\prod_{i=1}^d (1-\delta|v_i|)$ and $A = \prod_{i=1}^d(1-\delta|v_i^*|)$. Then $U(\frac{-1}{2\delta},\frac{1}{2\delta})^d$ is a CND for $f_{0,1-A}$ under $\lVert \cdot\rVert$-sensitivity. In the special case of $\lVert \cdot \rVert=\lVert\cdot \rVert_\infty$, this simplifies to $A=(1-\delta)^d$.
\end{restatable}
\subsection{Multivariate CND for \texorpdfstring{$f_{\epsilon,\delta}$}{approximate DP} when \texorpdfstring{$\delta>0$}{delta is positive}}
Let $\epsilon>0$ and $\delta\in (0,1]$. Recall that $f_{\epsilon,\delta}=f_{\epsilon,0}\otimes f_{0,\delta}$ \citep{dong2021gaussian}. Since $f_{0,\delta}$ is infinitely decomposable, we can write $f_{\epsilon,\delta} = f_{\epsilon,0}\otimes f_{0,\delta_1}\otimes \cdots\otimes f_{0,\delta_k}$ where $1-\delta = \prod_{i=1}^k (1-\delta_i)$. By Proposition \ref{prop:CNDcomposition}, we construct a multivariate CND for $f_{\epsilon,\delta}$ with respect to $\lVert \cdot \rVert_\infty$-sensitivity by using $\mathrm{Tulap}(0,\exp(-\epsilon),0)$ in one coordinate, and the uniform distributions $U(\frac{-1}{2\delta_i},\frac{1}{2\delta_i})$ in the other $k$ coordinates.
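A sketch of the resulting noise vector, reusing the tulap sampler from the earlier sketch and splitting $\delta$ evenly across the $k$ uniform coordinates via $\delta_i = 1-(1-\delta)^{1/k}$ (one valid choice among many):
\begin{verbatim}
import numpy as np

def approx_dp_noise(rng, eps, delta, k):
    """(k+1)-dimensional CND for f_{eps,delta} w.r.t. the l-infinity
    norm: one Tulap coordinate plus k uniform coordinates."""
    delta_i = 1.0 - (1.0 - delta) ** (1.0 / k)  # prod(1 - delta_i) = 1 - delta
    t = tulap(rng, eps, size=1)                 # tulap() from the earlier sketch
    u = rng.uniform(-0.5 / delta_i, 0.5 / delta_i, size=k)
    return np.concatenate([t, u])

rng = np.random.default_rng(0)
print(approx_dp_noise(rng, eps=1.0, delta=0.1, k=3))
\end{verbatim}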
\subsection{Two multivariate CNDs for Laplace-DP}\label{s:laplace}
Many mechanisms designed to satisfy $(\epsilon,0)$-DP actually satisfy the stronger privacy guarantee of Laplace-DP. In particular, variations on the Laplace mechanism are very common additive mechanisms used to achieve $(\epsilon,0)$-DP. In this section, we show that two multivariate versions of the Laplace mechanism, the $\ell_1$ and $\ell_\infty$ mechanisms, are multivariate CNDs for Laplace-DP.
The Laplace distribution, denoted $\mathrm{Laplace}(m,s)$, is a distribution on $\mathbb{R}$ with density $\frac{1}{2s}\exp(\frac{-1}{s} |x-m|)$. We say a mechanism satisfies $\epsilon$-Laplace-DP if it satisfies $L_\epsilon$-DP, where $L_\epsilon\vcentcolon= T(N,N+\epsilon)$ and $N\sim \mathrm{Laplace}(0,1)$. It is easily seen that $\mathrm{Laplace}(0,1/\epsilon)$ is a log-concave CND for $\epsilon$-Laplace-DP.
{\bf i.i.d. Laplace Mechanism}
The i.i.d. Laplace mechanism is defined as follows: Let $\epsilon>0$ be given. If $T:\mscr X\rightarrow \mathbb{R}^k$ has $\lVert\cdot\rVert_1$-sensitivity of $\Delta$, then the i.i.d. Laplace mechanism releases $T(X)+\Delta N$, where $N=(N_1,\ldots, N_k)$ is the random vector with i.i.d. entries $N_i\sim \mathrm{Laplace}(0,1/\epsilon)$. It is well known that the i.i.d. Laplace mechanism satisfies $f_{\epsilon,0}$-DP \citep[Theorem 3.6]{dwork2014algorithmic}. Since $N_1$ is a log-concave CND for $L_\epsilon$, Theorem \ref{thm:lonemech} shows that $N$ is a CND for $L_\epsilon$, with respect to $\lVert \cdot \rVert_1$-sensitivity. As $L_\epsilon\geq f_{\epsilon,0}$ and $L_\epsilon(\alpha)>f_{\epsilon,0}(\alpha)$ for some values of $\alpha$, we can more precisely capture the privacy cost of the i.i.d. Laplace mechanism using tradeoff functions rather than $\epsilon$-DP.
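A minimal sketch of the i.i.d. Laplace mechanism follows; by Theorem \ref{thm:lonemech}, when $T$ has $\lVert\cdot\rVert_1$-sensitivity $\Delta$, the release below satisfies $L_\epsilon$-DP and not merely $f_{\epsilon,0}$-DP. The statistic values in the example are arbitrary.
\begin{verbatim}
import numpy as np

def iid_laplace_mechanism(rng, stat_value, sensitivity_l1, eps):
    """Release T(X) + Delta * N with N i.i.d. Laplace(0, 1/eps)."""
    noise = rng.laplace(loc=0.0, scale=1.0 / eps, size=len(stat_value))
    return np.asarray(stat_value) + sensitivity_l1 * noise

rng = np.random.default_rng(0)
print(iid_laplace_mechanism(rng, [0.3, 0.7], sensitivity_l1=1.0, eps=1.0))
\end{verbatim}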
{\bf $\ell_\infty$-Mechanism} The $\ell_\infty$-mechanism, proposed in \citet{steinke2016between}, is a special case of the $K$-norm mechanisms \citep{hardt2010geometry}, with density proportional to $\exp(-\epsilon \lVert x\rVert_\infty)$. \citet{steinke2016between} showed that the $\ell_\infty$ mechanism can improve the sample complexity of answering multiple queries, when accuracy is measured by the $\ell_\infty$-norm. \citet{awan2020structure} showed that the $\ell_\infty$ mechanism is near optimal in certain applications to private linear and logistic regression. It is well known that when using $\ell_\infty$-sensitivity, the $\ell_\infty$-mechanism satisfies $\epsilon$-DP. In this section, we show that the $\ell_\infty$-mechanism is a CND for $L_\epsilon$ with respect to $\ell_\infty$-sensitivity.
\begin{restatable}{prop}{propLinfty}\label{prop:Linfty}
Let $\epsilon>0$, and $d\geq 1$. Let $X$ be a $d$-dimensional random vector with density $g(x)=\frac{\exp(-\epsilon \lVert x\rVert_\infty)}{d!(2/\epsilon)^d}$. Then $X$ is a CND for the tradeoff function $L_\epsilon$ with respect to $\lVert \cdot \rVert_\infty$.
\end{restatable}
\begin{proof}[Proof Sketch.]
First we show that with the shift of $v^*=(1,1,\ldots, 1)$, the privacy loss random variable coincides with that of $L_\epsilon$. Then, we show that $v^*=(1,1,\ldots, 1)^\top$ is the worst case of any shift $v$ to minimize the tradeoff functions. To deal with the case that some of the entries of $v$ are zero, we establish a convergence theorem for tradeoff functions in Theorem \ref{thm:limitTradeoff} of the Supplementary Materials.
\end{proof}
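Sampling from the $\ell_\infty$-mechanism is straightforward using the usual $K$-norm-mechanism decomposition into a radius and a direction: from the density in Proposition \ref{prop:Linfty}, one can check that $R=\lVert X\rVert_\infty\sim\mathrm{Gamma}(d,1/\epsilon)$ and that, given $R$, $X$ is uniform on the surface of the cube $[-R,R]^d$ (a uniformly chosen face, filled uniformly). The sketch below implements this decomposition.
\begin{verbatim}
import numpy as np

def linf_mechanism_noise(rng, eps, d):
    """One draw from the density exp(-eps * ||x||_inf) / (d! (2/eps)^d)."""
    r = rng.gamma(shape=d, scale=1.0 / eps)  # ||X||_inf ~ Gamma(d, 1/eps)
    x = rng.uniform(-r, r, size=d)           # fill the cube uniformly ...
    i = rng.integers(d)                      # ... then push one coordinate
    x[i] = r if rng.uniform() < 0.5 else -r  # to a uniformly chosen face
    return x

rng = np.random.default_rng(0)
print(linf_mechanism_noise(rng, eps=1.0, d=3))
\end{verbatim}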
\subsection{No multivariate CND for \texorpdfstring{$f_{\epsilon,0}$}{pure-DP}}\label{s:nopure}
By the construction of Proposition \ref{prop:CNDsynthetic}, we know that a one-dimensional CND exists for any nontrivial tradeoff function. It turns out that the same cannot be said for the multivariate setting. In Theorem \ref{thm:noCNDpure}, we show that there is \emph{no multivariate CND for $f_{\epsilon,0}$ with respect to any norm}. In fact, we prove the stronger result that it is not even possible to satisfy properties 1 and 2 of Definition \ref{def:CND_MVT}.
\begin{restatable}{thm}{thmnoCNDpure}\label{thm:noCNDpure}
Let $d\geq 2$ and let $\lVert\cdot \rVert$ be any norm on $\mathbb{R}^d$. Then for any $\epsilon>0$, there is no random vector satisfying properties 1 and 2 of Definition \ref{def:CND_MVT} for $f_{\epsilon,0}$ with respect to the norm $\lVert \cdot \rVert$. In particular, there is no multivariate CND for $f_{\epsilon,0}$.
\end{restatable}
\begin{proof}[Proof Sketch.]
Suppose to the contrary, then $(\epsilon,0)$-DP imposes strict bounds on the likelihood ratio of the distribution. These bounds allow us to find an arbitrarily long sequence of points, sufficiently far apart, where the density is bounded below. This ultimately shows that the density is not integrable.
\end{proof}
Combining Theorem \ref{thm:noCNDpure} with Proposition \ref{prop:CNDcomposition}, we infer in Corollary \ref{cor:pureDecomp} that $f_{\epsilon,0}$ cannot be written as the tensor product of any two nontrivial tradeoff functions. This means that if we want to design two independent mechanisms such that the joint release exactly satisfies $(\epsilon,0)$-DP, then one of the mechanisms must be perfectly private.
\begin{restatable}{cor}{corpureDecomp}\label{cor:pureDecomp}
Let $\epsilon>0$ be given. There does not exist nontrivial symmetric tradeoff functions $f_1$ and $f_2$ such that $f_{\epsilon,0}=f_1\otimes f_{2}$.
\end{restatable}
\begin{remark}
Theorem \ref{thm:noCNDpure} along with Theorem \ref{thm:lonemech} gives an alternative argument that $f_{\epsilon,0}$ is not log-concave/infinitely decomposable.
\end{remark}
\section{Discussion}\label{s:discussion}
Motivated by the goals of constructing log-concave CNDs and multivariate CNDs, we found some fundamental connections between these constructions and the operations of mechanism composition and functional composition of tradeoff functions. Surprisingly, the constructions for both log-concave and multivariate CNDs relied on whether a tradeoff function could be decomposed either according to functional composition or according to mechanism composition. An interesting result of our work was that for pure DP there is a unique 1-dimensional CND and no multidimensional CNDs, which implies that $f_{\epsilon,0}$ can be decomposed neither according to functional composition nor according to mechanism composition. This highlights the limitations of pure-DP as a privacy definition. On the other hand, Gaussian-DP, Laplace-DP, and $(0,\delta)$-DP were seen to have much better properties.
We showed that a multivariate extension of CND can capture the same properties as in the 1-dimensional case. \citet{awan2021canonical} showed that in one dimension, CNDs can be used to obtain DP hypothesis tests with optimal properties. An open question is whether our definition of a multivariate CND has any connections to optimal hypothesis testing.
Most of the constructions of multivariate CNDs presented in this paper are product distributions. Even the multivariate CNDs for GDP are a linear transformation of i.i.d. random variables. The $\ell_\infty$-mechanism is the exception, providing a truly nontrivial CND for Laplace-DP. It is worth exploring whether there are general techniques to produce nontrivial multivariate CNDs like the $\ell_\infty$-mechanism, as well as exploring the merits of such CNDs.
\bibliographystyle{plainnat}
\section{Introduction}
Differential privacy (DP), proposed by \citet{dwork2006calibrating}, is the state-of-the-art framework in formal privacy protection and is being implemented by tech companies, government agencies, and academic institutions. Over time, the DP community has developed many new DP mechanisms as well as new frameworks. Recently, $f$-DP (\citet{dong2021gaussian} was proposed as a generalization of DP, allowing for tight calculations of group privacy, composition, subsampling, and post-processing. It was shown in \citet{dong2021gaussian} that $f$-DP is provably the tightest version of DP that respects the post-processing property of DP. In particular, $f$-DP can be losslessly converted to R\'enyi-DP (or any $f$-divergence version of DP) as well as $(\epsilon,\delta)$-DP, but not vice-versa \citep{dong2021gaussian}. Furthermore, $f$-DP is equivalent (can be losslessly converted back and forth) to the privacy profile \citep{balle2018privacy,balle2020privacy} and the privacy loss random variables \citep{sommer2019privacy,zhu2022optimal}.
$f$-DP is defined in terms of a \emph{tradeoff function} or \emph{receiver operator curve (ROC)}, which encapsulates the difficulty of conducting a hypothesis test between two distributions. If $f=T(P,Q)$ is the tradeoff function for testing between the distributions $P$ and $Q$, then if a mechanism $M$ satisfies $f$-DP, this means that given the output of $M$ when run on one of two adjacent databases, it is at least as hard to determine which database was used, as it is to test between $P$ and $Q$.
While $f$-DP has the many desirable theoretical properties listed above in its favor, there are limited techniques for working with $f$-DP, and few constructive mechanisms for an arbitrary $f$-DP guarantee. A notable exception is a \emph{canonical noise distribution} (CND) from the recent paper \citet{awan2021canonical}, which builds a one-dimensional additive noise mechanism designed to exactly satisfy $f$-DP, with no wasted privacy budget. Along with the intuitive idea that a CND is optimal in that it optimizes the privacy loss budget, \citet{awan2021canonical} showed that CNDs are crucial to the construction of optimal DP hypothesis tests and free DP $p$-values. However, the CND construction given in \citet{awan2021canonical} does not result in a smooth distribution, and in particular is not \emph{log-concave}.
Log-concavity is a desirable property because it implies that the distribution has a monotone likelihood ratio; this means that higher observed values are always more likely to have come from a higher input value than a lower one. This makes the DP output much more interpretable, easily analyzed, and also has makes the calculation of the privacy cost simpler \citep{dong2021central}. Furthermore, the results of \citet{awan2021canonical} are limited to 1-dimensional distributions.
In this paper, we develop new properties of CNDs and $f$-DP, motivated by the following two questions:
\begin{center}
1. Can we construct log-concave CNDs?\qquad
2. Can we construct multivariate CNDs?
\end{center}
We found that the existence of both log-concave 1-dimensional CNDs and multivariate CNDs is intricately linked with properties related to functional composition of tradeoff functions (which is known to be related to group privacy) and tensor products of tradeoff functions (related to composition of mechanisms). We found that two highly desirable properties of a tradeoff function are \emph{infinite divisibility} and \emph{infinite decomposability}, meaning that the tradeoff function can be exactly achieved by $n$-fold group privacy or $n$-fold composition (respectively). We show that $(\epsilon,0)$-DP satisfies neither property and in fact has neither a log-concave CND nor any multivariate CND. In contrast to $(\epsilon,0)$-DP, two families that satisfy both infinite divisibility and infinite decomposability are $\mu$-GDP and $(0,\delta)$-DP. While $(0,\delta)$-DP has limited applicability due to its weak protection for events with small probability, $\mu$-GDP and related DP definitions (such as zero concentrated DP) have been gaining popularity. The results of this paper provide a new perspective supporting the adoption of GDP as the default privacy measure instead of $(\epsilon,0)$-DP. Our results are more precisely summarized as follows:
{\bf Our Contributions and Organization }
In Section \ref{s:background}, we review concepts in $f$-DP and canonical noise distributions. In Section \ref{s:1d}, we study 1-dimensional CNDs. In Section \ref{s:pure}, we prove that the Tulap distribution is the \emph{unique} CND for $(\epsilon,0)$-DP. In Section \ref{s:logconcave}, we propose the concept of infinite divisibility and prove that a tradeoff function has a log-concave CND if and only if it is infinitely divisible; we also give a construction to produce the log-concave CND from a family of infinitely divisible tradeoff functions. We prove that piece-wise linear tradeoff functions are generally not infinitely divisible in Section \ref{s:piece}, and in particular $(\epsilon,0)$-DP and several related tradeoff functions do not have log-concave CNDs. In Section \ref{s:multi}, we propose a multivariate extension of CND. We give two general constructions of multivariate CNDs in Section \ref{s:construction} depending on whether a tradeoff function is decomposable or infinitely divisible. We give several examples of multivariate CNDs in Sections \ref{s:gdp}-\ref{s:laplace} for Gaussian DP, $(0,\delta)$-DP, $(\epsilon,\delta)$-DP, and Laplace-DP. In Section \ref{s:nopure}, we show that there is no multivariate CND for $(\epsilon,0)$-DP, which implies that $(\epsilon,0)$-DP is not decomposable. We conclude with discussion in Section \ref{s:discussion}. Proofs and technical details are found in the Supplementary Materials.
{\bf Related Work } While there are many complex DP mechanisms, many use the fundamental building block of additive mechanisms: for example, the functional mechanism \citep{zhang2012functional}, objective perturbation \citep{chaudhuri2011differentially,kifer2012private}, stochastic gradient descent \citep{abadi2016deep}, and the sparse vector technique \citep{dwork2009complexity,zhu2020improving}, to name a few. There have been many different additive mechanisms proposed in the literature, for different privacy purposes. We highlight the works that show some optimality property for the proposed noise distributions. This work builds most directly on \citet{awan2021canonical}, who proposed the concept of canonical noise distributions as a method of quantifying what it means to fully use the privacy budget. There are also other works which derive optimal mechanisms with respect to other metrics.
\citet{ghosh2012universally} showed that a discrete Laplace distribution is the universal utility maximizer for a general class of utility functions in pure-DP.
\citet{geng2015optimal} proposed the staircase mechanism, which they showed optimizes the $\ell_1$ or $\ell_2$ error for pure-DP. For $(\epsilon,\delta)$-DP, \citet{geng2015approx} showed that either the staircase or a uniform distribution can achieve the optimal rate in terms of $\ell_1$ and $\ell_2$ error. \citet{steinke2016between} showed that the $\ell_\infty$-mechanism is rate optimal when measuring utility in terms of $\ell_\infty$ error.
\citet{awan2020structure} derive optimal mechanisms among the class of $K$-Norm Mechanisms, proposed by \citet{hardt2010geometry}, in terms of various scale-independent measures, for a fixed statistic and sample size.
\section{Differential privacy basics}\label{s:background}
Differential privacy ensures that given the output of a private mechanism, it is difficult for an adversary to determine whether an individual is present in the database or not.
To satisfy DP, a privacy expert employs a \emph{mechanism} $M$, which is a set of probability distributions $M_D$ on a common space $\mscr Y$, indexed by possible databases $D\in \mscr D$.
Let $d(D,D')$ be an integer-valued metric on the space of databases $\mscr D$, which represents the number of entries that $D$ and $D'$ differ in. We call $D$ and $D'$ \emph{adjacent} if $d(D,D')\leq 1$.
While there are now many variants of DP, they all center around the idea that given a randomized algorithm $M$, for any two adjacent databases $D$, $D'$, the distributions of $M(D)$ and $M(D')$ should be ``similar.'' While many DP variants measure similarity in terms of divergences, $f$-DP formalizes similarity in terms of hypothesis tests.
Intuitively, for two adjacent databases $D$ and $D'$, a mechanism $M$ satisfies $f$-DP if given the output of $M$, it is difficult to determine whether the original database was $D$ or $D'$. This is formalized in terms of \emph{tradeoff functions}.
For two distributions $P$ and $Q$, the \emph{tradeoff function} (or ROC) between $P$ and $Q$ is $T(P,Q):[0,1]\rightarrow [0,1]$, where $T(P,Q)(\alpha)=\inf \{1-\mathbb{E}_Q \phi \mid \mathbb{E}_P(\phi)\geq 1-\alpha\}$, where the infimum is over all measurable tests $\phi$. The tradeoff function returns the optimal type II error for testing $H_0=P$ versus $H_1=Q$ at specificity (one minus type I error) $\alpha$, and captures the difficulty of distinguishing between $P$ and $Q$.
\footnote{In \citet{dong2021gaussian}, the tradeoff function was originally defined as a function of type I error. Our choice to flip the tradeoff function along the $x$-axis is for mathematical convenience. The ROC function is usually defined as the power (one minus type II error) as a function of type I error.}
A function $f:[0,1]\rightarrow [0,1]$ is a tradeoff function if and only if $f$ is convex, continuous, non-decreasing, and $f(x) \leq x$ for all $x \in [0,1]$ \citep[Proposition 2.2]{dong2021gaussian}. We say that a tradeoff function $f$ is \emph{nontrivial} if $f(\alpha)<\alpha$ for some $\alpha\in (0,1)$.
\begin{defn}[$f$-DP: \citealp{dong2021gaussian}]\label{def:fDP}
Let $f$ be a tradeoff function. A mechanism $M$ satisfies $f$-DP if $T(M(D),M(D'))\geq f$ for all $D,D'\in \mscr D$ which satisfy $d(D,D')\leq 1$.
\end{defn}
Intuitively, a mechanism satisfies $f$-DP, where $f=T(P,Q)$, if testing $H_0:M(D)$ versus $H_1: M(D')$ is at least as hard as testing $H_0:P$ versus $H_1:Q$. Without loss of generality we can assume that $f$ is \emph{symmetric}, meaning that if $f=T(P,Q)$, then $f=T(Q,P)$. This is due to the fact that adjacency of databases is a symmetric relation \citep[Proposition 2.4]{dong2021gaussian}. So, we limit the focus of this paper to symmetric tradeoff functions.
The traditional framework of $(\epsilon,\delta)$-DP is a subclass of $f$-DP: Let $\epsilon\geq 0$ and $\delta\in [0,1]$. A mechanism satisfies $(\epsilon,\delta)$-DP if it satisfies $f_{\epsilon,\delta}$-DP, where $f_{\epsilon,\delta}(\alpha) =\max\{0,1-\delta-e^{\epsilon}+e^{\epsilon}\alpha,\exp(-\epsilon)(\alpha-\delta)\}$. We call $(\epsilon,0)$-DP \emph{pure DP}.
Another popular subclass is Gaussian-DP (GDP): For $\mu\geq 0$, a mechanism satisfies $\mu$-GDP if it satisfies $G_\mu$-DP, where $G_\mu = T(N(0,1), N(\mu,1))$. Gaussian-DP was proposed in \citet{dong2021gaussian} and has several desirable properties, such as being closed under group privacy and closed under composition. \citet{dong2021gaussian} also established a central limit theorem for tradeoff functions as the number of compositions approaches infinity, showing that under general assumptions the tradeoff function of the composed mechanisms approaches $G_\mu$ for some $\mu$.
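Both families are straightforward to evaluate numerically. The following Python sketch (the function names are ours, for illustration) computes $f_{\epsilon,\delta}$ from the formula above and $G_\mu$ via the standard closed form $G_\mu(\alpha)=\Phi(\Phi^{-1}(\alpha)-\mu)$, written as a function of specificity to match our convention:
\begin{verbatim}
# Sketch: evaluating f_{eps,delta} and G_mu from their closed forms.
import numpy as np
from scipy.stats import norm

def f_eps_delta(alpha, eps, delta):
    # f_{eps,delta}(a) = max{0, 1-delta-e^eps+e^eps*a, e^{-eps}(a-delta)}
    return np.maximum.reduce([np.zeros_like(alpha),
                              1 - delta - np.exp(eps) + np.exp(eps) * alpha,
                              np.exp(-eps) * (alpha - delta)])

def G_mu(alpha, mu):
    # Tradeoff function of N(0,1) vs N(mu,1), as a function of specificity.
    return norm.cdf(norm.ppf(alpha) - mu)

alpha = np.linspace(0, 1, 5)
print(f_eps_delta(alpha, eps=1.0, delta=0.05))
print(G_mu(alpha, mu=1.0))
\end{verbatim}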
Two properties of differential privacy are that it implies privacy guarantees for groups, as well as cumulative privacy guarantees for multiple DP outputs. \citet[Theorem 2.14]{dong2021gaussian} showed that if a mechanism is $f$-DP, then it satisfies $f^{\circ k}$-DP (where $f^{\circ k}$ means the functional composition of $f$ with itself, $k$ times), when the adjacency measure is changed to allow for a difference in $k$ entries. We call this \emph{group privacy}, which is a central topic in differential privacy.
Note that the bound $f^{\circ k}$ is not necessarily the tightest privacy guarantee for a particular mechanism.
\emph{Mechanism Composition} quantifies the cumulative privacy cost of the output of $k$ mechanisms. To express the tradeoff function resulting from composition, \citet{dong2021gaussian} proposed the \emph{tensor product} of tradeoff functions: if $f=T(P,Q)$ and $g=T(P',Q')$, then $f\otimes g \vcentcolon= T(P\times P',Q\times Q')$, which they show is well defined, commutative, and associative. They prove that if we have $k$
mechanisms $M_1,\ldots, M_k$, which each satisfy $f_1$-DP, $f_2$-DP,$\ldots, f_k$-DP respectively, then the composition $(M_1,\ldots, M_k)$ satisfies $f_1\otimes\cdots\otimes f_k$-DP (see \citet[Theorem 3.2]{dong2021gaussian} for a more precise statement).
\subsection{Canonical noise distributions}
To satisfy DP, additive mechanisms must introduce noise proportional to the \emph{sensitivity} of the statistic of interest. Let $\lVert \cdot \rVert$ be a norm on $\mathbb{R}^d$. A statistic $S:\mscr D\rightarrow \mathbb{R}^d$ has $\lVert \cdot \rVert$-\emph{sensitivity} $\Delta>0$ if $\lVert S(D)-S(D')\rVert \leq \Delta$ for all $d(D,D')\leq 1$. When $d=1$, we use $|\cdot|$ as the default norm. Any additive mechanism, which releases $S(D)+\Delta N$, satisfies $f$-DP if $T(N,N+v)\geq f$ for all $\lVert v \rVert\leq 1$. The concept of a \emph{canonical noise distribution} (CND) was proposed by \citet{awan2021canonical} to capture when an additive mechanism satisfies $f$-DP, and ``fully uses the privacy budget.''
\begin{defn}[Canonical noise distribution: \citet{awan2021canonical}]\label{def:CND}
Let $f$ be a symmetric tradeoff function. A continuous random variable $N$ with cumulative distribution function (cdf) $F$ is a \emph{canonical noise distribution} (CND) for $f$ if
\begin{enumerate}
\item
For any $m\in [0,1]$, $T(N,N+m)\geq f$,
\item $f(\alpha)=T(N,N+1)(\alpha)$ for all $\alpha \in (0,1)$,
\item $T(N,N+1)(\alpha) = F(F^{-1}(\alpha)-1)$ for all $\alpha \in (0,1)$,
\item $F(x) = 1-F(-x)$ for all $x\in \mathbb{R}$; that is, $N$ is symmetric about zero.
\end{enumerate}
\end{defn}
In Definition \ref{def:CND}, property 1 ensures that the additive mechanism using a CND satisfies $f$-DP, property 2 ensures that the privacy guarantee is tight, property 3 gives a closed form for the tradeoff function in terms of the CND's cdf, which is equivalent to enforcing a monotone likelihood ratio property, and property 4 imposes symmetry which is mostly for convenience.
An important property of CNDs is that they satisfy the following recurrence relation:
\begin{lemma}[\citet{awan2021canonical}]\label{lem:recurrence}
Let $f$ be a symmetric nontrivial tradeoff function and let $F$ be a CND for $f$. Then $F(x)=1-f(1-F(x-1))$ when $F(x-1)>0$ and $F(x)=f(F(x+1))$ when $F(x+1)<1$.
\end{lemma}
In \citet{awan2021canonical}, it was shown that the above recurrence relation can be used to construct a CND for any nontrivial symmetric tradeoff function.
\begin{prop}[CND construction: \citet{awan2021canonical}]\label{prop:CNDsynthetic}
Let $f$ be a symmetric nontrivial tradeoff function, and let $c\in [0,1]$ be the solution to $f(1-c)=c$. We define $F_f:\mathbb{R}\rightarrow \mathbb{R}$ as
\[ F_f(x) = \begin{cases}
f(F_f(x+1))&x<-1/2\\
c(1/2-x) + (1-c)(x+1/2)&-1/2\leq x\leq 1/2\\
1-f(1-F_f(x-1))&x>1/2.\\
\end{cases}\]
Then $N\sim F_f$ is a canonical noise distribution for $f$.
\end{prop}
While Proposition \ref{prop:CNDsynthetic} gives a general construction of a CND for an arbitrary $f$, the resulting distribution is generally not smooth or log-concave. \citet{awan2021canonical} showed that in the case of $G_\mu$, this construction does not recover the Gaussian distribution, which is the log-concave CND.
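For concreteness, the recursion of Proposition \ref{prop:CNDsynthetic} is straightforward to evaluate numerically. The following Python sketch (the helper names are ours, and $f_{\epsilon,0}$ serves as the example tradeoff function) solves $f(1-c)=c$ by bisection and evaluates $F_f$ recursively:
\begin{verbatim}
# Sketch of the CND construction F_f, instantiated with f = f_{eps,0}.
import numpy as np

def f_pure(alpha, eps=1.0):
    return max(0.0, 1 - np.exp(eps) + np.exp(eps) * alpha,
               np.exp(-eps) * alpha)

def fixed_point_c(f):
    # Solve f(1-c) = c by bisection; g(c) = f(1-c) - c is decreasing on [0,1].
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(1 - mid) > mid else (lo, mid)
    return 0.5 * (lo + hi)

def F_f(x, f):
    # Recursion of Proposition [CND construction]; depth is about |x|.
    c = fixed_point_c(f)
    if x < -0.5:
        return f(F_f(x + 1, f))
    if x > 0.5:
        return 1 - f(1 - F_f(x - 1, f))
    return c * (0.5 - x) + (1 - c) * (x + 0.5)

print([round(F_f(x, f_pure), 4) for x in (-1.5, -0.5, 0.0, 0.5, 1.5)])
\end{verbatim}
For $f_{\epsilon,0}$ the fixed point is $c=1/(1+e^{\epsilon})$, and by the uniqueness result of the next section the resulting cdf is necessarily that of the Tulap distribution.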
\section{One-dimensional CNDs}\label{s:1d}
In this section, we expand on the results of \citet{awan2021canonical} by producing new results for one-dimensional CNDs. In Section \ref{s:pure}, we show that the Tulap distribution is the \emph{unique} CND for $(\epsilon,0)$-DP. In Section \ref{s:logconcave}, we propose the concept of an \emph{infinitely divisible tradeoff function} and show that a tradeoff function has a log-concave CND if and only if it is infinitely divisible. We also give a construction to produce the unique log-concave CND for an infinitely divisible family of tradeoff functions. In Section \ref{s:piece}, we determine when a piece-wise linear tradeoff function is divisible, and show that $f_{\epsilon,0}$ and related tradeoff functions are not infinitely divisible, and hence do not have log-concave CNDs.
\subsection{CNDs for \texorpdfstring{$(\epsilon,0)$-DP}{pure-DP}}\label{s:pure}
In \citet{awan2021canonical}, it was shown that in general the CND is not unique, but it was not clear whether there existed alternative CNDs for $f_{\epsilon,0}$ or $f_{\epsilon,\delta}$. We begin this section by showing that the Tulap distribution, which was shown to be a CND for $f_{\epsilon,\delta}$ by \citet{awan2021canonical}, is in fact the \emph{unique} CND for $f_{\epsilon,0}$. The Tulap distribution was proposed by \citet{awan2018differentially} for the purpose of designing uniformly most powerful hypothesis tests for Bernoulli data. In the case of $(\epsilon,0)$-DP, the Tulap distribution coincides with one of the staircase mechanisms \citep{geng2015optimal}. It is also closely related to the discrete Laplace distribution (also known as the geometric mechanism), which is optimal for a wide range of utility functions in \citet{ghosh2012universally}.
\begin{restatable}{prop}{propuniqueCND}\label{prop:uniqueCND}
Let $\epsilon>0$. The distribution $\mathrm{Tulap}(0,\exp(-\epsilon),0)$ is the unique CND for $f_{\epsilon,0}$.
\end{restatable}
\begin{proof}[Proof Sketch.]
By Lemma \ref{lem:recurrence}, the only choice in a CND is on $[-1/2,1/2]$. If the density is non-constant on $[-1/2,1/2]$, we show that the likelihood ratio is not bounded by $e^{\epsilon}$, violating $\epsilon$-DP.
\end{proof}
Proposition \ref{prop:uniqueCND} is a surprising result in that one may expect a more natural CND than the Tulap distribution, which has a discontinuous density. However, we now know that there are no other CNDs for $(\epsilon,0)$-DP. In particular, there is no log-concave CND, which is the topic of the next subsection.
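Before moving on, we record a convenient sampling representation: with $q=0$, the Tulap distribution can be written as a discrete Laplace variable plus independent $U(-1/2,1/2)$ noise. The following Python sketch assumes this representation (a sanity check is the variance $\tfrac{1}{12}+\tfrac{1}{2\sinh^2(\epsilon/2)}$):
\begin{verbatim}
# Hedged sketch: sampling Tulap(0, b, 0), b = exp(-eps), as the sum of a
# discrete Laplace variable (difference of two geometrics) and U(-1/2,1/2).
import numpy as np

rng = np.random.default_rng(0)

def sample_tulap(eps, size=1):
    b = np.exp(-eps)
    g1 = rng.geometric(1 - b, size) - 1   # numpy geometrics start at 1
    g2 = rng.geometric(1 - b, size) - 1
    return g1 - g2 + rng.uniform(-0.5, 0.5, size)

s = sample_tulap(eps=1.0, size=200000)
print(s.mean(), s.var())  # mean ~ 0; var ~ 1/12 + 1/(2 sinh^2(eps/2))
\end{verbatim}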
\subsection{Infinite divisibility and log-concavity}\label{s:logconcave}
It has been shown in \citet{dong2021gaussian} and \citet{dong2021central} that tradeoff functions built from location family log-concave distributions have very nice properties for $f$-DP. Log-concave distributions have a monotone likelihood ratio property, which gives a simple closed form expression for the tradeoff function in terms of the cdf of the log-concave distribution.
It is easily observed that a tradeoff function with a log-concave CND satisfies a property that we call \emph{infinite divisibility}. We prove that in fact a tradeoff function has a log-concave CND if and only if it is infinitely divisible. Our proof also results in a construction to produce the unique log-concave CND.
A continuous random variable $X$ is \emph{log-concave} if its density can be written as $g_X(x)\propto \exp(C(x))$, where $C$ is a concave function. We call a (symmetric) tradeoff function $f$ \emph{log-concave} if there exists a log-concave CND $N$ for $f$. Recall that if $N\sim F$ is a CND for $f$, then
$f(\alpha) = F(F^{-1}(\alpha)-1)$. If $N$ is also log-concave, then $f_t(\alpha)\vcentcolon= F(F^{-1}(\alpha)-t)$ is a tradeoff function for every $t\in [0,\infty)$, and the family $\{f_t\mid t\in [0,\infty)\}$ is a monoid satisfying the assumptions of Definition \ref{def:divisible}.
\begin{defn}\label{def:divisible}
A tradeoff function $f$ is \emph{infinitely divisible} if there exists a monoid, under the operation of functional composition, $\{f_t\in \mscr F\mid t\geq 0\}$ containing $f$ such that
\begin{enumerate}
\item $f_t\circ f_s=f_{t+s}$ for all $s,t\geq 0$,
\item $f_s$ is nontrivial for all $s>0$, and
\item $f_s\rightarrow f_0=\mathrm{Id}$ as $s\downarrow 0$.
\end{enumerate}
\end{defn}
The discussion above established that log-concave CNDs are infinitely divisible. The key result of this section is that a tradeoff function is log-concave if and only if it is infinitely divisible. We saw that it is easy to construct the infinitely divisible family given a log-concave CND. Surprisingly, we give a construction to derive the log-concave CND from the infinitely divisible family as well. This result shows an intimate relationship between properties of a tradeoff function and the possible CNDs for that tradeoff function. We will see in Section \ref{s:construction} that the property of infinite divisibility shows up again in the construction of multivariate CNDs.
\begin{restatable}{thm}{thmCNDlimit}\label{thm:CNDlimit}
A nontrivial tradeoff function $f\in \mscr F$ is log-concave if and only if it is infinitely divisible. In particular,
\begin{enumerate}
\item If $f$ is log-concave with log-concave CND $N\sim F$, then $\{f_t\mid t\geq 0\}$ defined by $f_t(\alpha) = F(F^{-1}(\alpha)-t)$ satisfies the assumptions of Definition \ref{def:divisible}.
\item Let $f$ be infinitely divisible, with monoid $\{f_t\in \mscr F\mid t\geq 0\}$, as defined in Definition \ref{def:divisible}, such that $f=f_1$. Let $F_s$ be any CND for $f_s$ (such as constructed in Proposition \ref{prop:CNDsynthetic}). Then the following limit exists $F^*(t) \vcentcolon= \lim_{s\rightarrow 0} F_s(\frac{1}{s} t)$ and $N\sim F^*$ is the unique log-concave CND for $f$.
Furthermore, $F^*(st)$ is the unique log-concave CND for $f_s$, for all $s>0$.
\end{enumerate}
\end{restatable}
\begin{proof}[Proof Sketch.]
It is easy to verify property 1. For property 2, we consider a subsequence $s_n=1/n!$ and observe that $F_{1/n!}(n! t)$ is a CND for $f$ for any $n$, but that as $n$ increases, the number of points at which the CND is uniquely determined also increases, by Lemma \ref{lem:recurrence}. In the limit, this sequence converges to a unique cdf, which we show has the properties of a log-concave CND.
\end{proof}
\begin{example}
We will illustrate the limit of Theorem \ref{thm:CNDlimit} on $G_1$. Let $F_{G_{2^{-n}}}$ be the constructed cdf from Proposition \ref{prop:CNDsynthetic} for $n=0,1,2,3$. The density functions corresponding to $F_{G_{2^{-n}}}(2^{n}t)$ are plotted in Figure \ref{fig:CNDlimit}. We see that as $n$ increases, the pdfs approach that of a standard normal, which we know is the log-concave CND for $f=G_1$.
When the construction of Theorem \ref{thm:CNDlimit} is applied to $f_{\epsilon,0}$, the cdf $F^*$ converges to a Laplace cdf. This seems to reflect the fact that under the limit of group privacy, $(\epsilon,0)$-DP converges to Laplace-DP \citep[Proposition 2.15]{dong2021gaussian}.
\begin{figure}
\centering
\includegraphics[width=.24\linewidth]{CNDlimit1.pdf}
\includegraphics[width=.24\linewidth]{CNDlimit2.pdf}
\includegraphics[width=.24\linewidth]{CNDlimit4.pdf}
\includegraphics[width=.24\linewidth]{CNDlimit8.pdf}
\caption{An illustration of Theorem \ref{thm:CNDlimit} when applied to $G_1$. From left to right, we have the density corresponding to $F_{G_{2^{-n}}}(2^n t)$ for $n=0,1,2,3$.}
\label{fig:CNDlimit}
\end{figure}
\end{example}
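The convergence can also be checked numerically. The following self-contained Python sketch (names ours) rebuilds the construction of Proposition \ref{prop:CNDsynthetic} for $f=G_{2^{-n}}$ and compares the rescaled cdf $F_{G_{2^{-n}}}(2^n t)$ with the standard normal cdf $\Phi(t)$:
\begin{verbatim}
# Sketch: the rescaled constructed cdfs F_{G_{2^-n}}(2^n t) approach Phi(t).
import numpy as np
from scipy.stats import norm

def G(alpha, mu):  # Gaussian tradeoff function
    return float(norm.cdf(norm.ppf(min(max(alpha, 0.0), 1.0)) - mu))

def F(x, f):       # construction of Proposition [CND construction]
    lo, hi = 0.0, 1.0
    for _ in range(60):                      # bisection for f(1-c) = c
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(1 - mid) > mid else (lo, mid)
    c = 0.5 * (lo + hi)
    if x < -0.5:
        return f(F(x + 1, f))
    if x > 0.5:
        return 1 - f(1 - F(x - 1, f))
    return c * (0.5 - x) + (1 - c) * (x + 0.5)

ts = (0.5, 1.0, 2.0)
for n in (0, 1, 2, 3):
    mu = 2.0 ** (-n)
    print(n, [round(F(t / mu, lambda a: G(a, mu)), 4) for t in ts])
print('Phi', [round(float(norm.cdf(t)), 4) for t in ts])
\end{verbatim}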
Finally, we illustrate why properties 2 and 3 of Definition \ref{def:divisible} are necessary for Theorem \ref{thm:CNDlimit}.
\begin{example}
[Non-examples for Theorem \ref{thm:CNDlimit}]
First consider why it is necessary to have $f_s\rightarrow \mathrm{Id}$. Set $f_s(\alpha) = I(\alpha=1)$
for all $s>0$. Note that $f_s\circ f_t=f_{s+t}$, but that the construction of Theorem \ref{thm:CNDlimit} results in a point mass at zero, which is not a CND as it is not continuous.
Next, suppose that all of the tradeoff functions are trivial, then $f_s(\alpha)=\alpha$ for all $s>0$, and $f_s\circ f_t=f_{s+t}$. However, there are no CNDs in this case.
\end{example}
\subsection{Piece-wise linear tradeoff functions are not infinitely divisible}\label{s:piece}
We showed in Theorem \ref{thm:CNDlimit} that if a tradeoff function is infinitely divisible, then we can construct a log-concave CND. However, it is not always obvious whether a tradeoff function is infinitely divisible or not. We show that in the case of piece-wise linear tradeoff functions, we can upper bound the number of possible divisions in terms of the number of break points. In particular, the piece-wise linear tradeoff functions considered in this section are not infinitely divisible.
We can characterize the piece-wise linear convex functions in terms of the 2nd derivative behavior: A convex function is piece-wise linear if and only if its 2nd derivative is defined everywhere except for finitely many points, and is zero whenever it is defined.
Part 1 of Proposition \ref{prop:piecewise} shows that a piece-wise linear tradeoff function $f$, which satisfies $f(x)=0$ implies $x=0$, can be sub-divided only a finite number of times. A consequence of this is that $f_{\epsilon,0}$ and several related tradeoff functions are not infinitely divisible and hence do not have log-concave CNDs. In fact, not only is $f_{\epsilon,0}$ not infinitely divisible, but there is in fact \emph{no division} $f_{\epsilon,0} = f\circ g$ into symmetric tradeoff functions, except where either $f$ or $g$ is the identity!
\begin{restatable}{prop}{proppiecewise}\label{prop:piecewise}
\begin{enumerate}
\item Let $f$ be a nontrivial piece-wise linear tradeoff function with $k\geq 1$ breakpoints and such that $f(x)=0$ implies that $x=0$. Then there is no tradeoff function $g$ such that $g^{\circ (k+1)}=f$.
\item Let $\epsilon>0$. There do not exist nontrivial symmetric tradeoff functions $f_1$ and $f_2$ such that $f_{\epsilon,0}=f_1\circ f_2$.
\item Let $f$ be the tradeoff function obtained by an arbitrary sequence of mechanism compositions, functional compositions, or subsampling (without replacement) of $f_{\epsilon,0}$ (could be different $\epsilon$ values for each). Then $f$ is not infinitely divisible and so does not have a log-concave CND.
\end{enumerate}
\end{restatable}
\begin{proof}[Proof Sketch.]
We show in Lemma \ref{lem:piecewise} that divisions of a piece-wise linear tradeoff function are themselves piece-wise linear, and that the functional composition of piece-wise linear tradeoff functions increases the number of breakpoints. This then limits the number of divisions a piece-wise linear tradeoff function can have in terms of the number of its breakpoints.
\end{proof}
\begin{example}[$f_{0,\delta}$ is log-concave]
What if $f(x)=0$ does not imply that $x=0$? The tradeoff functions $f_{0,\delta}$ fit within this setting, and the results of Proposition \ref{prop:piecewise} do \emph{not} apply here. In fact, $f_{0,\delta}$ is infinitely divisible with log-concave CND $U(-1/(2\delta),1/(2\delta))$. That is $f_{0,\delta} = T(U,U+\delta)$ where $U\sim U(-1/2,1/2)$.
While $f_{\epsilon,\delta}$ for $\delta>0$ also does not satisfy the assumption that $f_{\epsilon,\delta}(x)=0$ implies $x=0$, it is not clear at this time whether $f_{\epsilon,\delta}$ is log-concave or not.
\end{example}
\section{Multivariate CNDs}\label{s:multi}
In this section, we generalize the definition of CND to dimensions greater than one. While in the univariate case, \emph{sensitivity} is measured using the absolute distance between two statistic values, in $\mathbb{R}^d$, there are many choices of norms which can be used to measure the sensitivity \citep{awan2020structure}. So, we will specify the sensitivity norm when talking about a multivariate CND. In Definition \ref{def:CND_MVT} we define a multivariate CND to be a natural generalization of properties 1-4 of Definition \ref{def:CND}.
\begin{defn}\label{def:CND_MVT}
Let $f$ be a symmetric tradeoff function, and let $\lVert\cdot \rVert$ be a norm on $\mathbb{R}^d$. A continuous random vector $N$ with density $g$ is a \emph{canonical noise distribution} (CND) for $f$, with respect to $\lVert\cdot \rVert$, if
\begin{enumerate}
\item
For all $v\in \mathbb{R}^d$ such that $\lVert v\rVert\leq 1$ we have that $T(N,N+v)\geq f$,
\item there exists $\lVert v^*\rVert\leq 1$ such that $T(N,N+v^*)(\alpha)=f(\alpha)$ for all $\alpha \in (0,1)$,
\item for all $v^*$ which satisfy property 2, and all $w\in \mathbb{R}^d$, we have that the likelihood ratio $g(w+tv^*-v^*)/g(w+tv^*)$ is a non-decreasing function of $t\in \mathbb{R}$,
\item $N$ is symmetric about zero: $g(x)=g(-x)$ for all $x\in \mathbb{R}^d$
\end{enumerate}
\end{defn}
When restricted to $d=1$, Definition \ref{def:CND_MVT} recovers Definition \ref{def:CND}. This is clear for properties 1, 2, and 4. Property 3 of Definition \ref{def:CND} can be interpreted as requiring that an optimal rejection set for $T(N,N+1)$ is of the form $[x,\infty)$ for some $x$. By the Neyman-Pearson Lemma, we know that this holds if and only if the likelihood ratio $F'(x-1)/F'(x)$ is non-decreasing in $x$. We see that when $d=1$, property 3 of Definition \ref{def:CND_MVT} is equivalent to property 3 of Definition \ref{def:CND}. We can interpret property 3 of Definition \ref{def:CND_MVT} as enforcing a monotone likelihood ratio in directions parallel to $v^*$.
\subsection{Constructions of multivariate CNDs}\label{s:construction}
Composition gives a simple method to construct a multivariate CND whenever a tradeoff function can be decomposed into the composition of $k$ tradeoff functions:
\begin{restatable}{prop}{propCNDcomposition}\label{prop:CNDcomposition}
Suppose that $f = f_1\otimes f_2\otimes\cdots\otimes f_k$, where $f_1,\ldots,f_k$ are all nontrivial and symmetric tradeoff functions, and let $F_1,F_2,\ldots, F_k$ be CNDs for $f_1,\ldots, f_k$ respectively. Let $N=(N_1,\ldots, N_k)$ be the random vector where $N_i \sim F_i$ are independent.
Then $N$ is a CND for $f$ with respect to $\lVert \cdot \rVert_\infty$.
\end{restatable}
Interestingly, when a tradeoff function is infinitely divisible, and hence has a log-concave CND by Theorem \ref{thm:CNDlimit}, we can create a multivariate CND with respect to $\lVert \cdot \rVert_1$-sensitivity.
\begin{restatable}{thm}{thmlonemech}\label{thm:lonemech}
Let $f$ be a nontrivial and symmetric log-concave tradeoff function with log-concave CND $F$. Let $N=(N_1,\ldots, N_k)$ be the random vector where $N_i\sim F$ are independent.
Then $N$ is a (log-concave) CND for $f$ with respect to $\lVert \cdot \rVert_1$.
\end{restatable}
\begin{proof}[Proof Sketch.]
Since the noise added is i.i.d., we can rephrase the tradeoff function as the tensor product of the individual tradeoff functions. We apply Lemma \ref{lem:otimesCirc}, which lower bounds the tensor product of tradeoff functions by their functional composition.
\end{proof}
Theorem \ref{thm:lonemech} was inspired by the i.i.d. Laplace mechanism. In Section \ref{s:laplace}, we show that the i.i.d. Laplace mechanism is a special case of Theorem \ref{thm:lonemech} and gives a multivariate CND for Laplace-DP.
Note that Theorem \ref{thm:lonemech} results in a log-concave multivariate CND, and if each of $N_1,\ldots, N_k$ is log-concave in Proposition \ref{prop:CNDcomposition}, then the constructed multivariate CND is log-concave as well. \citet{dong2021central} showed that log-concave distributions have many nice properties in multivariate settings as well. We leave it to future work to investigate when multivariate log-concave CNDs exist.
\subsection{Multivariate CND for GDP}\label{s:gdp}
Recall that if $N\sim N(0,I)$ is a $d$-dimensional Gaussian random vector, and $v\in \mathbb{R}^d$ is any vector, then $T(N,N+v)=T(N(0,1),N(\lVert v\rVert_2,1))$ \citep[Proposition D.1(5)]{dong2021gaussian}. This previous result implies that $N(0,I)$ is a multivariate CND for GDP under $\lVert \cdot \rVert_2$-sensitivity. In fact, we show in Proposition \ref{prop:Gauss} that for GDP, any multivariate Gaussian is a CND with respect to any norm.
\begin{restatable}{prop}{propGauss}\label{prop:Gauss}
Let $\Sigma$ be a $d\times d$ positive definite matrix. Let $v^*\in \argmax_{\lVert u\rVert\leq 1} \lVert \Sigma^{-1/2} u\rVert_2$. Then $N(0,\Sigma)$ is a $d$-dimensional CND for $\lVert \Sigma^{-1/2}v^*\rVert_2$-GDP with respect to the norm $\lVert\cdot \rVert$.
\end{restatable}
\begin{remark}
While a multivariate Gaussian is always a multivariate CND for GDP, there is still possibly room for improvement. For Definition \ref{def:CND_MVT}, we only need a single vector to satisfy property 2. However, we could potentially ask that the bound is achieved at all $u$ such that $\lVert u \rVert=1$. Note that if $\lVert \cdot \rVert$ is an elliptical norm, then we do get this stronger property for the multivariate Gaussian, when we choose $\Sigma$ to align with the sensitivity norm.
\end{remark}
\subsection{Multivariate CND for \texorpdfstring{$(0,\delta)$-DP}{zero-delta-DP}}
First let's review a few facts about $(0,\delta)$-DP, also known as $f_{0,\delta}$-DP. First, note that $U(\frac{-1}{2\delta},\frac{1}{2\delta})$ is a (log-concave) CND for $f_{0,\delta}$. So, we can write $f_{0,\delta} = T(U,U+\delta)$ where $U\sim U(-1/2,1/2)$. Because of this, we have that $f_{0,\delta}$ is infinitely divisible, and $f_{0,\delta_1}\circ f_{0,\delta_2}=f_{0,\min\{\delta_1+\delta_2,1\}}$. Furthermore, $f_{0,\delta_1}\otimes f_{0,\delta_2} = f_{0,1-(1-\delta_1)(1-\delta_2)}$, as observed in \citet{dong2021gaussian}. This means that $f_{0,\delta}$ is also infinitely decomposable, a property that we had only seen for GDP before. This decomposability implies, by Proposition \ref{prop:CNDcomposition} that we can build a multivariate CND for $f_{0,\delta}$ under $\lVert \cdot\rVert_\infty$-sensitivity. In fact, this construction is a multivariate CND for any sensitivity norm.
\begin{restatable}{prop}{propCNDTVDP}\label{prop:CND_TV_DP}
Let $0<\delta\leq 1$, $d\geq 1$, and $\lVert \cdot \rVert$ be a norm on $\mathbb{R}^d$. Let $v^* \in \underset{\lVert v \rVert\leq 1}{\arg\min}\prod_{i=1}^d (1-\delta|v_i|)$ and $A = \prod_{i=1}^d(1-\delta|v_i^*|)$. Then $U(\frac{-1}{2\delta},\frac{1}{2\delta})^d$ is a CND for $f_{0,1-A}$ under $\lVert \cdot\rVert$-sensitivity. In the special case of $\lVert \cdot \rVert=\lVert\cdot \rVert_\infty$, this simplifies to $A=(1-\delta)^d$.
\end{restatable}
\subsection{Multivariate CND for \texorpdfstring{$f_{\epsilon,\delta}$}{approximate DP} when \texorpdfstring{$\delta>0$}{delta is positive}}
Let $\epsilon>0$ and $\delta\in (0,1]$. Recall that $f_{\epsilon,\delta}=f_{\epsilon,0}\otimes f_{0,\delta}$ \citep{dong2021gaussian}. Since $f_{0,\delta}$ is infinitely decomposable, we can write $f_{\epsilon,\delta} = f_{\epsilon,0}\otimes f_{0,\delta_1}\otimes \cdots\otimes f_{0,\delta_k}$, where $\delta = 1-\prod_{i=1}^k (1-\delta_i)$. By Proposition \ref{prop:CNDcomposition} we construct a multivariate CND for $f_{\epsilon,\delta}$ with respect to $\lVert \cdot \rVert_\infty$-sensitivity by using $\mathrm{Tulap}(0,\exp(-\epsilon),0)$ in one coordinate, and the uniform distributions $U(\frac{-1}{2\delta_i},\frac{1}{2\delta_i})$ in the other $k$ coordinates.
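A short sketch of this product construction, reusing \texttt{sample\_tulap} from the sketch in Section \ref{s:pure} (the particular split $\delta_1,\ldots,\delta_k$ is a free choice subject to $\delta=1-\prod_i(1-\delta_i)$):
\begin{verbatim}
# Sketch: the product CND for f_{eps,delta} under l_infty-sensitivity,
# reusing sample_tulap from the earlier sketch.
import numpy as np

rng = np.random.default_rng(1)

def sample_cnd_eps_delta(eps, deltas, size):
    cols = [sample_tulap(eps, size)]                 # CND for f_{eps,0}
    cols += [rng.uniform(-0.5 / d, 0.5 / d, size)    # CNDs for f_{0,delta_i}
             for d in deltas]
    return np.column_stack(cols)

deltas = [0.01, 0.02]          # gives delta = 1 - (1-0.01)*(1-0.02)
print(sample_cnd_eps_delta(1.0, deltas, size=3))
\end{verbatim}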
\subsection{Two multivariate CNDs for Laplace-DP}\label{s:laplace}
Many mechanisms designed to satisfy $(\epsilon,0)$-DP actually satisfy the stronger privacy guarantee of Laplace-DP. In particular, variations on the Laplace mechanism are very common additive mechanisms used to achieve $(\epsilon,0)$-DP. In this section, we show that two multivariate versions of the Laplace mechanism, the $\ell_1$ and $\ell_\infty$ mechanisms, are multivariate CNDs for Laplace-DP.
The Laplace distribution, denoted $\mathrm{Laplace}(m,s)$, is a distribution on $\mathbb{R}$ with density $\frac{1}{2s}\exp(\frac{-1}{s} |x-m|)$. We say a mechanism satisfies $\epsilon$-Laplace-DP if it satisfies $L_\epsilon$-DP, where $L_\epsilon\vcentcolon= T(N,N+\epsilon)$ and $N\sim \mathrm{Laplace}(0,1)$. It is easily seen that $\mathrm{Laplace}(0,1/\epsilon)$ is a log-concave CND for $\epsilon$-Laplace-DP.
{\bf i.i.d. Laplace Mechanism}
The i.i.d. Laplace mechanism is defined as follows: Let $\epsilon>0$ be given. If $T:\mscr X\rightarrow \mathbb{R}^k$ has $\lVert\cdot\rVert_1$-sensitivity of $\Delta$, then the i.i.d. Laplace mechanism releases $T(X)+\Delta N$, where $N=(N_1,\ldots, N_k)$ is the random vector with i.i.d. entries $N_i\sim \mathrm{Laplace}(0,1/\epsilon)$. It is well known that the i.i.d. Laplace mechanism satisfies $f_{\epsilon,0}$-DP \citep[Theorem 3.6]{dwork2014algorithmic}. Since $N_1$ is a log-concave CND for $L_\epsilon$, Theorem \ref{thm:lonemech} shows that $N$ is a CND for $L_\epsilon$, with respect to $\lVert \cdot \rVert_1$-sensitivity. As $L_\epsilon\geq f_{\epsilon,0}$ and $L_\epsilon(\alpha)>f_{\epsilon,0}(\alpha)$ for some values of $\alpha$, we can more precisely capture the privacy cost of the i.i.d. Laplace mechanism using tradeoff functions rather than $\epsilon$-DP.
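A minimal sketch of this mechanism (the function names are ours):
\begin{verbatim}
# Sketch: i.i.d. Laplace mechanism, T(X) + Delta * N, N_i ~ Laplace(0,1/eps).
import numpy as np

rng = np.random.default_rng(2)

def iid_laplace_mechanism(stat, l1_sensitivity, eps):
    noise = rng.laplace(0.0, 1.0 / eps, size=len(stat))
    return np.asarray(stat) + l1_sensitivity * noise

print(iid_laplace_mechanism([10.0, 3.0, 7.0], l1_sensitivity=1.0, eps=1.0))
\end{verbatim}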
{\bf $\ell_\infty$-Mechanism } The $\ell_\infty$-mechanism, proposed in \citet{steinke2016between}, is a special case of the $K$-norm mechanisms \citep{hardt2010geometry}, with density proportional to $\exp(-\epsilon \lVert x\rVert_\infty)$. \citet{steinke2016between} showed that the $\ell_\infty$-mechanism can improve the sample complexity of answering multiple queries, when accuracy is measured by the $\ell_\infty$-norm. \citet{awan2020structure} showed that the $\ell_\infty$-mechanism is near optimal in certain applications to private linear and logistic regression. It is well known that when using $\ell_\infty$-sensitivity, the $\ell_\infty$-mechanism satisfies $\epsilon$-DP. In this section, we show that the $\ell_\infty$-mechanism is a CND for $L_\epsilon$, with respect to $\ell_\infty$-sensitivity.
\begin{restatable}{prop}{propLinfty}\label{prop:Linfty}
Let $\epsilon>0$, and $d\geq 1$. Let $X$ be a $d$-dimensional random vector with density $g(x)=\frac{\exp(-\epsilon \lVert x\rVert_\infty)}{d!(2/\epsilon)^d}$. Then $X$ is a CND for the tradeoff function $L_\epsilon$ with respect to $\lVert \cdot \rVert_\infty$.
\end{restatable}
\begin{proof}[Proof Sketch.]
First we show that with the shift of $v^*=(1,1,\ldots, 1)^\top$, the privacy loss random variable coincides with that of $L_\epsilon$. Then, we show that $v^*=(1,1,\ldots, 1)^\top$ is the worst-case shift, minimizing the tradeoff function over all $\lVert v\rVert_\infty\leq 1$. To deal with the case that some of the entries of $v$ are zero, we establish a convergence theorem for tradeoff functions in Theorem \ref{thm:limitTradeoff} of the Supplementary Materials.
\end{proof}
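For completeness, we note one standard way to sample from this density, via the radial decomposition used for $K$-norm mechanisms. The recipe below (radius distributed as $\mathrm{Gamma}(d,1/\epsilon)$, direction uniform on the boundary of the unit cube, which for the cube agrees with the cone measure) is our own sketch rather than a construction from the works cited above:
\begin{verbatim}
# Hedged sketch: sampling g(x) = exp(-eps ||x||_inf) / (d! (2/eps)^d).
import numpy as np

rng = np.random.default_rng(3)

def sample_linf(d, eps, size):
    r = rng.gamma(shape=d, scale=1.0 / eps, size=size)  # ||x||_inf
    x = rng.uniform(-1.0, 1.0, size=(size, d))
    i = rng.integers(0, d, size)                        # face of the cube
    x[np.arange(size), i] = rng.choice([-1.0, 1.0], size)
    return x * r[:, None]

noise = sample_linf(d=3, eps=1.0, size=100000)
print(np.abs(noise).max(axis=1).mean())  # ~ d/eps, mean of Gamma(d, 1/eps)
\end{verbatim}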
\subsection{No multivariate CND for \texorpdfstring{$f_{\epsilon,0}$}{pure-DP}}\label{s:nopure}
By the construction of Proposition \ref{prop:CNDsynthetic}, we know that a one-dimensional CND exists for any nontrivial tradeoff function. It turns out that the same cannot be said for the multivariate setting. In Theorem \ref{thm:noCNDpure}, we show that there is \emph{no multivariate CND for $f_{\epsilon,0}$ with respect to any norm}. In fact, we prove the stronger result that it is not even possible to satisfy properties 1 and 2 of Definition \ref{def:CND_MVT}.
\begin{restatable}{thm}{thmnoCNDpure}\label{thm:noCNDpure}
Let $d\geq 2$ and let $\lVert\cdot \rVert$ be any norm on $\mathbb{R}^d$. Then for any $\epsilon>0$, there is no random vector satisfying properties 1 and 2 of Definition \ref{def:CND_MVT} for $f_{\epsilon,0}$ with respect to the norm $\lVert \cdot \rVert$. In particular, there is no multivariate CND for $f_{\epsilon,0}$.
\end{restatable}
\begin{proof}[Proof Sketch.]
Suppose to the contrary, then $(\epsilon,0)$-DP imposes strict bounds on the likelihood ratio of the distribution. These bounds allow us to find an arbitrarily long sequence of points, sufficiently far apart, where the density is bounded below. This ultimately shows that the density is not integrable.
\end{proof}
Combining Theorem \ref{thm:noCNDpure} with Proposition \ref{prop:CNDcomposition}, we infer in Corollary \ref{cor:pureDecomp} that $f_{\epsilon,0}$ cannot be written as the tensor product of any two nontrivial tradeoff functions. This means that if we want to design two independent mechanisms such that the joint release exactly satisfies $(\epsilon,0)$-DP, then one of the mechanisms must be perfectly private.
\begin{restatable}{cor}{corpureDecomp}\label{cor:pureDecomp}
Let $\epsilon>0$ be given. There do not exist nontrivial symmetric tradeoff functions $f_1$ and $f_2$ such that $f_{\epsilon,0}=f_1\otimes f_{2}$.
\end{restatable}
\begin{remark}
Theorem \ref{thm:noCNDpure} along with Theorem \ref{thm:lonemech} gives an alternative argument that $f_{\epsilon,0}$ is not log-concave (equivalently, not infinitely divisible).
\end{remark}
\section{Discussion}\label{s:discussion}
Motivated by the goals of constructing log-concave CNDs and multivariate CNDs, we found some fundamental connections between these constructions and the operations of mechanism composition and functional composition of tradeoff functions. Surprisingly, the constructions for both log-concave and multivariate CNDs relied on whether a tradeoff function could be decomposed either according to functional composition or according to mechanism composition. An interesting result of our work was that for pure DP there is a unique 1-dimensional CND and no multidimensional CND, which implies that $f_{\epsilon,0}$ can be decomposed neither according to functional composition nor according to mechanism composition. This highlights the limitations of pure-DP as a privacy definition. On the other hand, Gaussian-DP, Laplace-DP, and $(0,\delta)$-DP were seen to have much better properties.
We showed that a multivariate extension of CND can capture the same properties as in the 1-dimensional case. \citet{awan2021canonical} showed that in one dimension, CNDs can be used to obtain DP hypothesis tests with optimal properties. An open question is whether our definition of a multivariate CND has any connections to optimal hypothesis testing.
Most of the constructions of multivariate CNDs presented in this paper are product distributions. Even the multivariate CNDs for GDP are a linear transformation of i.i.d. random variables. The $\ell_\infty$-mechanism is the exception, providing a truly nontrivial CND for Laplace-DP. It is worth exploring whether there are general techniques to produce nontrivial multivariate CNDs like the $\ell_\infty$-mechanism, as well as exploring the merits of such CNDs.
\bibliographystyle{plainnat}
\section{Introduction}
Much has been achieved in our understanding of 2d quantum
gravity, both from the point of view of Liouville theory \cite{ddk}
and from a discretized point of view \cite{david,adf,adfo,kkm}.
However, the simplest
and most fundamental concept in gravity, the concept of
{\it distance}, has only recently been analyzed.
In Liouville theory the tool has been the diffusion
equation for a random walk on the ensemble of 2d manifolds
weighted by the Liouville action \cite{watabiki1}.
In the framework of dynamical
triangulations the tool has been the transfer matrix formalism
developed in \cite{transfer}. In this article we show
that standard scaling relations known from
statistical mechanics follow unambiguously even in
quantum gravity once the geodesic distance is used to
set the length scale in the problem.
\section{Scaling relations}
Let us define 2d quantum gravity as the scaling limit of the
so-called simplicial quantum gravity theory. Simplicial quantum
gravity can be defined in any dimension, but we will here restrict
ourselves to 2d. The partition function will be given by:
\begin{equation}\label{1}
Z(\mu,G_E)= \sum_{T \in {\cal T}} \frac{1}{C_T} {\rm e}^{-S[T]},
\end{equation}
where $S[T]$ is the Einstein-Hilbert action:
\begin{equation}\label{2}
S[T] = \mu N_T -\frac{1}{4\pi G_E} (2 - 2g_T).
\end{equation}
In eqs. \rf{1} and \rf{2} ${\cal T}$ denotes a suitable class of
triangulations of closed 2-manifolds, $T$ a triangulation in ${\cal T}$,
$N_T$ the number of triangles in $T$, $C_T$ a symmetry factor
and $g_T$ the genus of the manifold.
If we fix the topology, as we will always do in the following,
we can drop the last term since it is a topological invariant.
It is known that the partition function $Z(\mu)$
(for a fixed topology)
has a critical bare cosmological constant $\mu_c$ such that
the continuum limit of \rf{1} should be taken for $\mu \to \mu_c$ from above.
Define $\triangle \m \equiv \mu-\mu_c$, then
\begin{equation}\label{2a}
Z (\mu) \sim {\rm const.} (\triangle \m)^{2-\gamma} + {\rm less~singular~terms}
\end{equation}
For 2d gravity $\gamma=-1/2$ for spherical topology.
In the following we will consider only triangulations with
spherical topology.
We define the geodesic distance {\it between
two links}\footnote{It is convenient here to use the geodesic distance
between links rather than between vertices. As will be clear later the
results are independent of this definition. We choose it because it is
technically convenient in the analytic calculations.} as
the shortest path of links connecting the two links in the dual
lattice, i.e. the shortest ``triangle-path'' between the two
links on the original triangulation.
Let ${\cal T}_2(r)$ denote the ensemble of (spherical)
triangulations with two marked links separated by a geodesic
distance $r$ (we assume for the moment that the link length is $a=1$).
We can now define the 2-point function of quantum gravity by
\begin{equation}\label{3}
G_\mu(r) = \sum_{T \in {\cal T}_2(r)} {\rm e}^{-\mu N_T}.
\end{equation}
The 2-point function falls off exponentially for $r \to \infty$.
\begin{equation}\label{5}
\lim_{r \to \infty} \frac{-\log G_\mu (r)}{r} = m(\triangle \m) \geq 0.
\end{equation}
This trivial but important relation follows from the
fact that
\begin{equation}\label{6}
G_\mu(r_1+r_2) \geq G_\mu(r_1)\, G_\mu(r_2),
\end{equation}
simply because each term on the rhs of eq. \rf{6} can
be given an interpretation as a term belonging to the lhs of
eq. \rf{6}. This is illustrated in fig.\,\ref{figsc1}.
\begin{figure}
\unitlength=1.00mm
\linethickness{0.6pt}
\begin{picture}(143.00,80.00)
\put(20.00,20.00){\line(1,1){10.00}}
\put(30.00,30.00){\line(1,-1){10.00}}
\put(40.00,20.00){\line(-1,0){20.00}}
\put(40.00,20.00){\line(1,1){10.00}}
\put(50.00,30.00){\line(1,-1){10.00}}
\put(60.00,20.00){\line(-1,0){20.00}}
\put(40.00,20.00){\line(1,1){10.00}}
\put(50.00,30.00){\line(1,-1){10.00}}
\put(60.00,20.00){\line(-1,0){20.00}}
\put(60.00,20.00){\line(1,1){10.00}}
\put(70.00,30.00){\line(1,-1){10.00}}
\put(80.00,20.00){\line(-1,0){20.00}}
\put(80.00,20.00){\line(1,1){10.00}}
\put(100.00,20.00){\line(-1,0){20.00}}
\put(80.00,20.00){\line(1,1){10.00}}
\put(100.00,20.00){\line(-1,0){20.00}}
\put(30.00,30.00){\line(1,0){60.00}}
\bezier{72}(90.00,30.00)(90.00,22.00)(100.00,20.00)
\put(100.00,20.00){\line(0,1){15.00}}
\put(100.00,35.00){\line(-2,-1){10.00}}
\put(90.00,30.00){\line(0,1){15.00}}
\put(90.00,45.00){\line(1,-1){10.00}}
\put(100.00,35.00){\line(0,1){20.00}}
\put(100.00,55.00){\line(-1,-1){10.00}}
\put(90.00,45.00){\line(0,1){15.00}}
\put(90.00,60.00){\line(2,-1){10.00}}
\bezier{292}(60.00,5.00)(10.00,2.00)(10.00,25.00)
\bezier{268}(60.00,5.00)(99.00,7.00)(120.00,25.00)
\bezier{216}(90.00,30.00)(72.00,54.00)(90.00,70.00)
\bezier{308}(10.00,25.00)(12.00,52.00)(60.00,40.00)
\bezier{100}(60.00,40.00)(75.00,36.00)(84.00,40.00)
\bezier{256}(112.00,46.00)(143.00,49.00)(120.00,25.00)
\put(91.00,30.00){\circle*{0.00}}
\put(93.00,29.00){\circle*{0.00}}
\put(95.00,28.00){\circle*{0.00}}
\put(96.00,27.00){\circle*{0.00}}
\put(99.00,22.00){\circle*{0.00}}
\put(98.00,24.00){\circle*{0.00}}
\put(97.00,26.00){\circle*{0.00}}
\put(20.00,50.00){\makebox(0,0)[cc]{{\Large $T_1$}}}
\put(118.00,73.00){\makebox(0,0)[cc]{{\Large $T_2$}}}
\put(50.00,15.00){\makebox(0,0)[lc]{$r_1=7$}}
\put(101.00,47.00){\makebox(0,0)[lc]{$r_2=4$}}
\put(23.00,26.00){\makebox(0,0)[rc]{$l_1$}}
\put(96.00,60.00){\makebox(0,0)[rb]{$l_2$}}
\put(90.00,24.00){\makebox(0,0)[rt]{$l$}}
\bezier{244}(110.00,70.00)(122.00,57.00)(100.00,20.00)
\bezier{112}(90.00,70.00)(101.00,80.00)(110.00,70.00)
\end{picture}
\caption[figsc1]{The inequality \rf{6}. Two triangulations with marked
links separated by distances $r_1$ and $r_2$ can be glued together
to a triangulation where the marked links have a distance $r_1+r_2$
but the same total number of triangles, by cutting open a marked link
in each of the triangulations to a 2-loop boundary and gluing together the
two boundaries.}
\label{figsc1}
\end{figure}
Eq. \rf{6} shows that $-\log G_\mu (r)$ is sub-additive and this
ensures the existence of the limit \rf{5}. In addition
$m(\triangle \m) \geq 0$ because $G_\mu (r)$ is a decreasing function
of $r$. Again this follows from general arguments which allow
us to bound the number of triangulations with $N_T$ triangles and two
marked links separated by a distance $r$ in terms of the number
of triangulations with $N_T$ triangles and two marked links
separated by a distance $r' < r$. The same kind of arguments lead
to the conclusion that $m'(\triangle \m) >0$ for $\triangle \m > 0$, i.e. $m(\triangle \m)$
is a decreasing function as $\mu \to \mu_c$.
Similarly we can define ${\cal T}(l_1,l_2;r)$ as the class of triangulations with
an entrance boundary loop $l_1$ (with one marked link)
and an exit boundary loop $l_2$, separated by a geodesic distance $r$.
$l_1$ and $l_2$ are the number of links at the entrance boundary loop
and that at the exit boundary loop, respectively.
We say that $l_1$ and $l_2$ are separated by a geodesic distance $r$
if {\it all} links $l \in l_2$ have geodesic distance $r$ to $l_1$.
Finally the geodesic distance between a link $l$ and a set of links
(like the loop $l_1$) is the minimum of the geodesic distances between the
link $l$ and the links in the set.
We define\footnote{Sometimes a different notation is used and
the entrance loop is unmarked while the exit loop is marked. The
difference in these functions will be some trivial factors of
$l$: $l_2G_\mu (l_1,l_2;r) = l_1 G'_\mu(l_1,l_2;r)$ where $G'$ denotes
the 2-loop function where the exit loop is marked.}
\begin{equation}\label{4}
G_\mu(l_1,l_2;r) = \sum_{T \in {\cal T}(l_1,l_2;r)} {\rm e}^{-\mu N_T}.
\end{equation}
The 2-loop functions fall off exponentially for $r \to \infty$.
Again this follows from the subadditivity argument since one has
\cite{transfer,watabiki}:
\begin{equation}\label{4a}
G_\mu(l_1,l_2;r_1+r_2) = \sum_{l=1}^{\infty}\; G_\mu(l_1,l;r_1)G_\mu(l,l_2;r_2).
\end{equation}
It is easy to show, by arguments identical to the ones used
originally for random surfaces on hypercubic lattices \cite{dfj},
that the mass defined by the exponential decay
of the two-loop function is independent of the length of the
boundary loops and consequently identical to the mass defined by the
2-point function.
The important point is that the ``mass'' $m(\triangle \m)$ dictates the scaling in
quantum gravity. We can view $G_\mu(r)$ as the partition function
for universes of linear extension $r$ and in order that this partition
function survives in the continuum limit it is necessary that we have
\begin{equation}\label{7}
m(\triangle \m) r = M\, R,
\end{equation}
where $M$ and $R$ are kept fixed in the continuum limit where
the number of lattice steps $r$ goes to infinity. There can only be
a continuum limit if $m(\triangle \m) \to 0$ for $\triangle \m \to 0$. In addition
almost all critical properties of quantum gravity can be read off
directly from this function in the scaling limit. This has
already been emphasized in the more general context of higher dimensional
quantum gravity \cite{adj} and more specifically in
two dimensions \cite{adj1} (where an explicit solution was
given for a toy model of branched polymers),
but it is worth repeating the arguments.
First one would in general expect the exponential decay of $G_\mu(r)$ to
be replaced by a power fall off when $m(\triangle \m) =0$, or more precisely
in the region where $1 \ll r \ll 1/m(\triangle \m)$. The behavior of $G_\mu (r)$
is assumed to be:
\begin{eqnarray}
G_\mu (r) &\sim & {\rm e}^{-m(\triangle \m) r}~~~~~~~~~~~~
{\rm for}~~ m(\triangle \m) r \gg 1\label{8} \\
G_\mu (r) &\sim & r^{1-\eta} ~~~~~~~~~~~~~~~~~~{\rm for}~~
1 \ll r \ll \frac{1}{m(\triangle \m)}
\label{9} \\
\chi_\mu & \equiv & \int \! dr \;G_\mu (r)
\sim \frac{\partial^2 Z(\mu)}{\partial \mu^2} \sim
{\rm const.} \, (\triangle \m)^{-\gamma}.\label{10}
\end{eqnarray}
These are standard definitions in statistical mechanics. The exponent $\eta$
is called the anomalous scaling dimension, since a free propagator in any
space-time dimension has $\eta =0$. This follows by integrating the usual
free propagator over the angular variables corresponding to a fixed value of
$r$.
Two scaling relations follow directly from the definitions if we assume
that $m(\triangle \m) \to 0$ for $\triangle \m \to 0$ as:
\begin{equation}\label{11}
m(\triangle \m) \sim (\triangle \m)^\nu.
\end{equation}
From eq. \rf{10} it follows, after differentiating a sufficient number
of times with respect to $\mu$, that
\begin{equation}\label{12}
\gamma = \nu (2-\eta),
\end{equation}
a relation known in statistical mechanics as {\it Fisher's scaling relation}.
In that case it will typically be a relation between the spin susceptibility
exponent $\gamma$, the critical exponent $\nu$ of the spin-spin
correlation length and
the anomalous scaling exponent of the spin-spin correlation function.
It is remarkable that it is still valid in quantum gravity.
The other scaling relation is
\begin{equation}\label{13}
\nu = \frac{1}{d_H},
\end{equation}
where $d_H$ denotes the (internal) Hausdorff dimension of
the ensemble of random surfaces given by eq. \rf{3}.
To be more precise we define the (internal) Hausdorff dimension
of this ensemble by
\begin{equation}\label{14}
\left\langle N \right\rangle_r \sim r^{d_H},~~~~r \to \infty ,~~~m(\triangle \m)r = {\rm const.}
\end{equation}
where
\begin{equation}\label{15}
\left\langle N \right\rangle_r \equiv
\frac{\sum_{T \in {\cal T}_2(r)} N_T\; {\rm e}^{-\mu N_T}}{\sum_{T \in {\cal T}_2(r)}
\,{\rm e}^{-\mu N_T}}.
\end{equation}
It follows from the definitions that:
\begin{equation}\label{16}
\left\langle N\right\rangle_r \sim -\frac{1}{G_\mu (r)}\;\frac{\partial G_\mu (r)}{\partial \mu} \sim
m'(\triangle \m) r \sim r^{1/\nu}.
\end{equation}
It is interesting to give a direct physical interpretation
of the short distance behavior of $G_\mu (r)$ as defined
by \rf{9}. In order to do so let us change from the {\it grand canonical
ensemble} given by \rf{3} to the {\it canonical ensemble} defined by
\begin{equation}\label{hx1}
G(r,N) = \sum_{T \in {\cal T}_2(r,N)} 1,
\end{equation}
where ${\cal T}_2(r,N)$ denotes the triangulations with $N$ triangles and
two marked links separated by a distance $r$.
$G(r,N)$ is the 2-point function where the number of triangles is
fixed to $N$. For $r=0$ we have the following
$N$ dependence (for the $r$ dependence see \rf{hx6} below)
\begin{equation}\label{hx2}
G(0,N) \sim N^{\gamma-2}\, {\rm e}^{\mu_c N}.
\end{equation}
The reason is that the partition function for a finite volume $N$
is assumed to behave like
\begin{equation}\label{hx2a}
Z(N) \sim N^{\gamma -3} \, {\rm e}^{\mu_c N}.
\end{equation}
The 1-point function is for large $N$ proportional to
$N Z(N)$, since it counts the triangulations with one marked
link, triangle or vertex, depending on the precise definition,
and for $r=0$ (or just small) there is essentially no difference
between the 1-point function and $G(0,N)$.
$G(r,N)$ is related to $G_\mu(r)$ by a (discrete) Laplace transformation:
\begin{equation}\label{hx3}
G_\mu(r) = \sum_N G(r,N)\, {\rm e}^{-\mu N}.
\end{equation}
The {\it long distance behavior} of $G(r,N)$ is determined by the
long distance behavior of $G_\mu(r)$. Close to the scaling
limit it follows by direct calculation (e.g. a saddle point calculation) that
\begin{eqnarray}
G_\mu(r) &\sim &{\rm e}^{-r\,(\triangle \m)^{1/d_H} }~~~ \Rightarrow \nonumber \\
G(r,N) &\sim & {\rm e}^{-c \left(r^{d_H}/N\right)^{\frac{1}{d_H-1}}}\; {\rm e}^{\mu_c N}
{}~~~~{\rm for}~~~~ r^{d_H} > N, \label{hx4}
\end{eqnarray}
where $c=(d_H-1)/d_H^{d_H/(d_H-1)}$.
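The constant $c$ can be checked numerically: the exponent in \rf{hx4} is the extremal value of $h(x)=xN-rx^{1/d_H}$ over $x=\triangle \m>0$, which the following Python sketch (ours, for illustration) compares with $-c\,(r^{d_H}/N)^{1/(d_H-1)}$:
\begin{verbatim}
# Sketch: saddle point check of (hx4): min_x [x*N - r*x^(1/dH)] should equal
# -c*(r^dH/N)^(1/(dH-1)) with c = (dH-1)/dH^(dH/(dH-1)).
import numpy as np
from scipy.optimize import minimize_scalar

dH, r, N = 4, 20.0, 1000.0
res = minimize_scalar(lambda x: x * N - r * x ** (1.0 / dH),
                      bounds=(1e-12, 10.0), method='bounded')
c = (dH - 1) / dH ** (dH / (dH - 1))
print(res.fun, -c * (r ** dH / N) ** (1.0 / (dH - 1)))  # should agree
\end{verbatim}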
On the other hand the {\it short distance behavior} of $G_\mu (r)$
is determined by the short distance behavior of $G(r,N)$ which is
simple. Eqs. \rf{14} and \rf{15} define the concept of
Hausdorff dimension in the grand canonical ensemble. A
definition in the canonical ensemble would
be: Take $N^{1/d_H} \gg r$ and simply count the volume
(here number of triangles) of a ``spherical shell'' of thickness 1 and
radius $r$ from a marked link,
sum over all triangulations with one marked link and $N$ triangles, and
divide by the total number of triangulations with one marked link
and $N$ triangles. Call this number $\left\langle n(r)\right\rangle_N$. The Hausdorff
dimension is then defined by
\begin{equation}\label{hx5}
\left\langle n(r) \right\rangle_N \sim r^{d_H-1}~~~{\rm for}~~~1 \ll r \ll N^{1/d_H}.
\end{equation}
It follows from the definitions that we can write
\begin{eqnarray}
\left\langle n(r) \right\rangle_N &\sim & \frac{G(r,N)}{G(0,N)}, ~~~~{\rm i.e.}~~~\nonumber \\
G(r,N)& \sim& r^{d_H-1} N^{\gamma-2} {\rm e}^{\mu_c N}~~~{\rm for}~~~
1 \ll r \ll N^{1/d_H}. \label{hx6}
\end{eqnarray}
We can finally calculate the short distance behavior of $G_\mu(r)$
from eq. \rf{hx3}. From \rf{hx4} the contributions with $N \lesssim r^{d_H}$
are exponentially suppressed, so the sum is effectively cut off below at
$N \sim r^{d_H}$. For $\mu \to \mu_c$, i.e. $\triangle \m$ small, we get:
\begin{equation}\label{hx7}
G_\mu(r) \sim r^{d_H-1} \sum_{N \gtrsim r^{d_H}} N^{\gamma-2} \sim r^{\gamma d_H -1}.
\end{equation}
This is actually an independent derivation of Fisher's scaling
relation since it shows directly that $\eta = 2-\gamma d_H$, and it has
the advantage that it gives a physical interpretation of the
anomalous scaling dimension $\eta$. In addition it proves that
the canonical and grand canonical definitions of Hausdorff dimension
in fact agree.
The model of branched polymers ($BP$)
provides us with a simple, but non-trivial
example of the above scenario \cite{adj1}.
Here we will define branched polymers as the sum over all
tree graphs (no loops in the graphs) with certain weights
given to the graphs according to the following definition of the
partition function:
\begin{equation}\label{k2}
Z(\mu) = \sum_{BP} \frac{1}{C_{BP}}\rho(BP) \; {\rm e}^{-\mu |BP|},
\end{equation}
where $|BP|$ is the number of links in a $BP$ and $\mu$ is a chemical
potential for the number of links, while
\begin{equation}\label{k3}
\rho(BP)= \prod_{i \in BP} f(n_i),
\end{equation}
where $i$ denotes a vertex, $n_i$ the number of links joining at
vertex $i$ and $f(n_i)$ is non-negative. $f(n_i)$ can be viewed
as the unnormalized branching weight for one link branching into $n_i-1$
links at vertex $i$. Finally $C_{BP}$ is a symmetry factor such
that rooted branched polymers, i.e. polymers with the first link marked,
are counted only once.
This model can be solved \cite{adfo,adj1}.
It has a critical point $\mu_c$ (depending on $f$)
and close to the critical point we have:
\begin{equation}\label{k4}
Z''(\mu) \sim (\triangle \m)^{-1/2},~~~~\triangle \m \equiv \mu -\mu_c,
\end{equation}
i.e. $\gamma =1/2$ for branched polymers.
On the branched polymers we define the ``geodesic distance'' between
two vertices as the shortest link path, which is unique since we
consider tree-graphs. The graphical representation of the
2-point function is shown in fig.\,\ref{fig3}.
\begin{figure}
\unitlength=1.00mm
\linethickness{0.6pt}
\begin{picture}(127.00,52.00)
\put(15.00,25.00){\circle*{2.00}}
\put(40.00,25.00){\circle*{2.00}}
\put(15.00,25.00){\dashbox{1.00}(25.00,0.00)[cc]{}}
\put(60.00,25.00){\circle{12.00}}
\put(57.00,23.00){\line(4,5){4.00}}
\put(58.00,21.00){\line(4,5){4.00}}
\put(66.00,25.00){\circle*{2.00}}
\put(66.00,25.00){\dashbox{1.00}(49.00,0.00)[cc]{}}
\put(115.00,25.00){\circle*{2.00}}
\put(75.00,25.00){\line(0,1){15.00}}
\put(75.00,40.00){\circle*{1.50}}
\put(75.00,25.00){\circle*{1.50}}
\put(90.00,25.00){\circle*{1.50}}
\put(90.00,25.00){\line(-1,-1){10.00}}
\put(80.00,15.00){\circle*{1.50}}
\put(90.00,25.00){\line(1,-1){10.00}}
\put(100.00,15.00){\circle*{1.50}}
\put(90.00,25.00){\line(1,3){5.00}}
\put(95.00,40.00){\circle*{1.50}}
\put(105.00,25.00){\circle*{1.50}}
\put(75.00,46.00){\circle{12.00}}
\put(72.00,44.00){\line(4,5){4.00}}
\put(73.00,42.00){\line(4,5){4.00}}
\put(97.00,46.00){\circle{12.00}}
\put(94.00,44.00){\line(4,5){4.00}}
\put(95.00,42.00){\line(4,5){4.00}}
\put(76.00,10.00){\circle{12.00}}
\put(74.00,8.00){\line(4,5){4.00}}
\put(75.00,6.00){\line(4,5){4.00}}
\put(105.00,11.00){\circle{12.00}}
\put(102.00,9.00){\line(4,5){4.00}}
\put(103.00,7.00){\line(4,5){4.00}}
\put(121.00,25.00){\circle{12.00}}
\put(118.00,23.00){\line(4,5){4.00}}
\put(119.00,21.00){\line(4,5){4.00}}
\put(50.00,25.00){\makebox(0,0)[cc]{{\large $=$}}}
\bezier{56}(10.00,30.00)(5.00,25.00)(10.00,20.00)
\bezier{52}(45.00,30.00)(49.00,25.00)(45.00,20.00)
\bezier{76}(28.00,35.00)(39.00,35.00)(45.00,30.00)
\bezier{80}(10.00,30.00)(15.00,35.00)(28.00,35.00)
\bezier{80}(10.00,20.00)(15.00,15.00)(28.00,15.00)
\bezier{76}(28.00,15.00)(40.00,15.00)(45.00,20.00)
\end{picture}
\caption[fig3]{The graphical representation of the 2-point
function for branched polymers. The dashed line represents the
unique shortest path between the two marked vertices. The ``blobs''
represent the contribution from all rooted polymers branching
out from a vertex.}
\label{fig3}
\end{figure}
Had it not been for the ability to branch, the 2-point function
would simply be
\begin{equation}\label{k5}
G_\mu(r) = {\rm e}^{-\mu r}.
\end{equation}
However, the insertion of 1-point functions at any vertex leads
to a non-analytic coupling constant renormalization and the
result is changed to \cite{adj1}
\begin{equation}\label{k6}
G_\mu (r) = {\rm const.}\; {\rm e}^{-\kappa\, r\sqrt{\triangle \m}}~~~
{\rm for}~~~\triangle \m \to 0,
\end{equation}
where $\kappa$ is a positive constant depending on $f$.
We can now find $G(r,N)$ by an inverse Laplace transformation:
\begin{equation}\label{k7}
G(r,N) = {\rm const.} \, N^{-3/2} r \,{\rm e}^{-\kappa^2r^2/4N}.
\end{equation}
We confirm from this explicit expression
that the (internal) Hausdorff dimension of
branched polymers is 2 (like a smooth surface!)
and that $\gamma = 1/2$ since the
prefactor of $G(r,N)$ for small $r$ should be $N^{\gamma-2} r^{d_H-1}$.
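The relation between \rf{k6} and \rf{k7} can also be checked directly: Laplace
transforming \rf{k7} numerically reproduces the exponential decay of \rf{k6}.
A minimal sketch in Python (with illustrative values of $\kappa$ and $\triangle \m$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

kappa, dmu = 1.0, 1.0e-2   # illustrative values

def G_mu(r):
    # Laplace transform of (k7), cf. (hx3)
    f = lambda N: N**-1.5 * r * np.exp(-kappa**2 * r**2 / (4.0*N) - dmu*N)
    return quad(f, 0.0, np.inf)[0]

for r in (50.0, 100.0, 200.0):
    print(r, np.log(G_mu(r)) / r)   # tends to -kappa*sqrt(dmu) = -0.1
\end{verbatim}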
It should be emphasized again that these definitions and scaling
relations are valid for simplicial gravity in three and
four dimensions as defined in \cite{adj,aj,am1}.
In the rest of this paper we will study how they are realized in
2d simplicial quantum gravity where the exact solution can be found.
\section{The 2-point function}
Let us first define the generating function for 2-loop amplitudes \rf{4} by:
\begin{equation}\label{x1}
G_\mu(x,y;r) = \sum_{l_1,l_2 =1}^\infty
x^{l_1} y^{l_2} G_\mu(l_1,l_2;r) .
\end{equation}
We can reconstruct $G_\mu(l_1,l_2;r)$ by
\begin{equation}\label{x2}
G_\mu(l_1,l_2;r) = \oint_{C_x} \frac{dx}{2\pi {\rm i}x} \, x^{-l_1} \oint_{C_y} \frac{dy}{2\pi {\rm i}y}\, y^{-l_2} \; G_\mu(x,y;r),
\end{equation}
where the contours $C_x$ and $C_y$ surround the origin and avoid
the cuts of $G_\mu(x,y;r)$.
The fundamental composition law \rf{4a} reads:
\begin{equation}\label{x3}
G_\mu(x,y;r_1+r_2) = \oint_C \frac{dz}{2\pi {\rm i} \,z} \;
G_\mu(x,\frac{1}{z};r_1)\, G_\mu(z,y;r_2).
\end{equation}
The boundary condition to be imposed is that
\begin{equation}\label{x4}
G_\mu (l_1,l_2;r=0) = \delta_{l_1,l_2}~~~~~{\rm or}~~~~~
G_\mu(x,y;r=0)=\frac{xy}{1-xy}.
\end{equation}
The important insight obtained in \cite{transfer,watabiki}
is that the 2-loop function satisfies a simple differential equation.
Using the so-called peeling decomposition defined in \cite{watabiki}
the differential equation has the form \cite{watabiki},
\begin{equation}\label{x5}
\frac{\partial }{\partial r} G_\mu(x,y;r) = x\frac{\partial}{\partial x}
\left(2x^2 f_\mu(x) G_\mu(x,y;r)\right),
\end{equation}
which gives the same differential equation in the limit $\triangle \m \rightarrow 0$
as was obtained by combinatorial arguments in \cite{transfer}.
Let $F_\mu(x)$ denote
the generating functional for 1-loop functions with one marked link:
\begin{equation}\label{x6}
F_\mu(x) = \sum_l x^l F_\mu(l) .
\end{equation}
It is well known that
\begin{eqnarray}
F_\mu(x) &=& \frac{1}{2} \left( \frac{1}{x^2} - \frac{g}{x^3} \right) + f_\mu(x),
{}~~~~g\equiv{\rm e}^{-\mu},\label{x7} \\
f_\mu(x) &=&
\frac{g}{2x} (\frac{1}{x}-c_2)
\sqrt{(\frac{1}{x}-c_1)(\frac{1}{x}-c_0)}, \label{x7a}
\end{eqnarray}
where $c_0 < 0 < c_1 < c_2$ as long as $\mu > \mu_c$. The only thing we need
to know is that {\it at} the critical point $\mu_c$ we have $c_2=c_1$,
(which we denote $1/x_c$) and away from the critical point
\begin{eqnarray}
c_2(\mu) &=& 1/x_c + \frac{\a}{2}\sqrt{\triangle \m} + {\cal O}(\triangle \m),\label{xb7}\\
c_1(\mu) &=& 1/x_c - \a \sqrt{\triangle \m} + {\cal O} (\triangle \m), \label{xb7a} \\
c_0(\mu) &=& c_0(\mu_c) + {\cal O}(\triangle \m).\label{xb7b}
\end{eqnarray}
Here $\a$ is a positive constant\footnote{The values of the constants
which enter are as follows: ${\rm e}^{-\mu_c} = g_c = 1/(2{\cdot}3^{\frac{3}{4}})$,
$x_c = \frac{1}{2} (3^{\frac{1}{4}}-3^{-\frac{1}{4}})$,
$c_0(\mu_c) = 3^{\frac{3}{4}}(1-3^{\frac{1}{2}})$ and $\a = 4 \cdot 3^{-\frac{1}{4}}$.}
of order ${\cal O}(1)$ and the scaling limit is obtained
when $x = x_c - {\cal O}(\sqrt{\triangle \m})$. In this region it is seen that
\begin{equation}\label{x8}
f_\mu(x) \sim (\triangle \m)^{3/4}
\end{equation}
and this is the reason the difference equation originating
from \rf{x3} with $r_1=1$ can be replaced with a differential
equation for $\triangle \m \to 0$ even if $r$ is discrete.
The solution to \rf{x4} and \rf{x5} is:
\begin{equation}\label{x9}
G_\mu(x,y;r) = \frac{\hat{x}^2 f_\mu(\hat{x})}{x^2 f_\mu(x)} \; \frac{\hat{x} y}{1-\hat{x} y}.
\end{equation}
Here $\hat{x}(x,r)$ is the solution to the characteristic equation
of the partial differential equation \rf{x5}.
The integral of the characteristic equation is
\begin{equation}\label{x10}
r = \int_x^{\hat{x}(x,r)} \frac{dx'}{ 2 x'^3 f_\mu(x')} =
\left[ \frac{1}{\delta_0}\,
\sinh^{-1}\sqrt{\frac{\delta_1}{1-c_2 x'}-\delta_2} \;
\right]^{x'=\hat{x}(x,r)}_{x'=x} ,
\end{equation}
and this expression can be inverted to give:
\begin{equation}\label{y2}
\hat{x}(x,r)= \frac{1}{c_2} \, - \, \frac{\delta_1}{c_2}\;
\frac{1}{\sinh^{2}\left(\delta_0 r + \sinh^{-1}\sqrt{\frac{\delta_1}{1-c_2 x}-\delta_2}\right)
+ \delta_2} ,
\end{equation}
where $\delta_0$, $\delta_1$ and $\delta_2$ are all positive and defined by
\begin{eqnarray}
\delta_0 &=& \frac{g}{2} \sqrt{(c_2-c_1)(c_2-c_0)}
= {\cal O}((\triangle \m)^{\frac{1}{4}}),\label{x11a} \\
\delta_1 &=& \frac{(c_2-c_1)(c_2-c_0)}{c_2(c_1-c_0)}
= {\cal O}(\sqrt{\triangle \m}),\label{x11b} \\
\delta_2 &=& - \, \frac{c_0(c_2-c_1)}{c_2(c_1-c_0)}
= {\cal O}(\sqrt{\triangle \m}).\label{x11c}
\end{eqnarray}
It is readily checked that $\hat{x} \to 1/c_2$ for $r \to \infty$ and
$\hat{x}(x,r=0)=x$.
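These two limits of \rf{y2} are purely algebraic and hold for any values of the
constants; a minimal numerical sketch in Python with illustrative (non-critical)
values of $c_0, c_1, c_2$ and $g$:
\begin{verbatim}
import numpy as np

c0, c1, c2, g = -2.0, 0.8, 1.0, 0.3      # illustrative, non-critical values
d0 = 0.5 * g * np.sqrt((c2-c1)*(c2-c0))
d1 = (c2-c1)*(c2-c0) / (c2*(c1-c0))
d2 = -c0*(c2-c1) / (c2*(c1-c0))

def xhat(x, r):
    s = np.arcsinh(np.sqrt(d1/(1.0 - c2*x) - d2))
    return (1.0 - d1/(np.sinh(d0*r + s)**2 + d2)) / c2

x = 0.5
print(xhat(x, 0.0), x)        # r = 0 reproduces x
print(xhat(x, 1.0e3), 1/c2)   # r -> infinity gives the zero of f_mu
\end{verbatim}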
In principle we can calculate $G_\mu(l_1,l_2;r)$ from eqs. \rf{x2}, \rf{x9}
and \rf{y2}. Let us only here verify that the
exponential decay of $G_\mu(l_1,l_2;r)$ is independent of $l_1$ and $l_2$.
For $r \to \infty$ one gets
\begin{equation}\label{y4}
G_\mu(l_1,l_2;r)= {\rm const.\,} \delta_0 \delta_1 \, {\rm e}^{-2\delta_0 r}
+ {\cal O}({\rm e}^{-4 \delta_0 r}),
\end{equation}
where {\it const.} is a function of order ${\cal O}(1)$ which depends
on $c_0, c_1, c_2, l_1$ and $l_2$.
We can express the 2-point function $G_\mu(r)$ in terms of the
2-loop function and the 1-loop function. Let us consider a marked
link. For a given triangulation we can systematically work
our way out to the links having a distance $r$ from the marked link
by peeling off layers of triangles having the distances $1,2,\ldots,r$
to the marked link. After $r$ steps we have a boundary consisting
of a number of disconnected boundary loops, all with a distance $r$ to the
marked link. One of these is the exit loop described by the 2-loop function
and we get the 2-point function by closing the exit loop of length $l_2$
by multiplying the 2-loop function $G_\mu(l_1=1,l_2;r)$ by the 1-loop function
$F_\mu(l_2)$\footnote{To be more precise we have to multiply the 2-loop function
$G_\mu(l_1,l_2;r)$ by $l_2$ since the exit loop is unmarked and we
can glue the marked one-loop cap to it in $l_2$ ways.}
and then perform the sum over $l_2$, i.e., as shown in fig.\,\ref{fig2},
\begin{eqnarray}
\lefteqn{G_\mu(r) = \sum_{l_2=1}^\infty G_\mu(l_1=1,l_2;r)\, l_2 F_\mu(l_2)}\nonumber \\
& &= \frac{\partial}{\partial x} \oint_{C_y} \frac{dy}{2\pi {\rm i}y} \, G_\mu(x,\frac{1}{y};r)
y \frac{\partial}{\partial y} F_\mu(y) \Big|_{x=0} \nonumber \\
& &= \frac{\partial}{\partial x} F_\mu(\hat{x}) \Big|_{x=0}
= \frac{1}{g} \frac{\partial}{\partial r} F_\mu(\hat{x}) \Big|_{x=0} .
\label{x12}
\end{eqnarray}
\begin{figure}
\unitlength=1.00mm
\linethickness{0.6pt}
\begin{picture}(140.00,70.00)
\put(55.00,30.00){\line(0,1){10.00}}
\put(55.00,40.00){\line(1,-1){10.00}}
\put(65.00,30.00){\line(0,1){10.00}}
\put(65.00,40.00){\line(1,-1){10.00}}
\put(75.00,30.00){\line(0,1){10.00}}
\put(75.00,40.00){\line(1,-1){10.00}}
\put(85.00,30.00){\line(0,1){10.00}}
\put(85.00,40.00){\line(1,-1){10.00}}
\put(95.00,30.00){\line(0,1){10.00}}
\put(95.00,40.00){\line(1,-1){10.00}}
\put(105.00,30.00){\line(0,1){10.00}}
\put(55.00,40.00){\line(1,0){50.00}}
\put(55.00,30.00){\line(1,0){50.00}}
\bezier{56}(105.00,40.00)(105.00,49.00)(100.00,50.00)
\bezier{52}(105.00,30.00)(105.00,22.00)(100.00,20.00)
\bezier{344}(40.00,35.00)(42.00,68.00)(95.00,70.00)
\bezier{368}(40.00,35.00)(40.00,8.00)(105.00,5.00)
\bezier{260}(95.00,70.00)(128.00,70.00)(100.00,55.00)
\bezier{48}(100.00,55.00)(95.00,50.00)(100.00,50.00)
\bezier{80}(115.00,15.00)(107.00,17.00)(95.00,15.00)
\bezier{72}(95.00,15.00)(89.00,16.00)(100.00,20.00)
\put(100.00,50.00){\circle*{0.00}}
\put(99.00,49.00){\circle*{0.00}}
\put(98.00,48.00){\circle*{0.00}}
\put(97.00,46.00){\circle*{0.00}}
\put(96.00,44.00){\circle*{0.00}}
\put(96.00,42.00){\circle*{0.00}}
\put(96.00,28.00){\circle*{0.00}}
\put(96.00,26.00){\circle*{0.00}}
\put(97.00,24.00){\circle*{0.00}}
\put(98.00,22.00){\circle*{0.00}}
\put(99.00,21.00){\circle*{0.00}}
\put(100.00,20.00){\circle*{0.00}}
\bezier{188}(100.00,50.00)(127.00,64.00)(135.00,50.00)
\bezier{88}(135.00,50.00)(140.00,40.00)(135.00,30.00)
\bezier{172}(135.00,30.00)(131.00,19.00)(100.00,20.00)
\put(52.00,34.00){\makebox(0,0)[cc]{$l_1$}}
\put(108.00,35.00){\makebox(0,0)[lc]{$l$}}
\put(70.00,27.00){\makebox(0,0)[lt]{$r=10$}}
\put(82.00,53.00){\makebox(0,0)[rc]{{\large $G_\mu(l_1=1,l_2;r)$}}}
\put(135.00,60.00){\makebox(0,0)[rc]{{\large $l_2 \;F_\mu (l_2)$}}}
\put(118.00,45.00){\vector(-1,0){13.00}}
\put(121.00,45.00){\makebox(0,0)[rc]{$l_2$}}
\put(32.00,34.00){\makebox(0,0)[rc]{{\Huge $\Sigma$}{\LARGE $_{l_2}$}}}
\bezier{220}(105.00,5.00)(137.00,7.00)(115.00,15.00)
\end{picture}
\caption[fig2]{The 2-point function represented as a summation over 2-loop
functions times 1-loop functions.}
\label{fig2}
\end{figure}
As long as $c_2-c_1$ is small
and $r$ is larger than a few lattice spacings, we get:
\begin{equation}\label{x14}
G_\mu (r) = {\rm const.} \delta_0 \delta_1
\frac{\cosh( \delta_0 r)}{\sinh^3 (\delta_0 r)}
\left(1+{\cal O}(\delta_0)\right).
\end{equation}
Formula \rf{x14} shows
how to take the scaling limit: Let us return to the original formulation
and write in the limit $\triangle \m \to 0$:
\begin{equation}\label{x15}
G_\mu(r) = {\rm const.} \; (\triangle \m)^{3/4}
\frac{\cosh \left[(\triangle \m)^{\frac{1}{4}} \b r\right]}{\sinh^3
\left[(\triangle \m)^{\frac{1}{4}} \b r\right]},
\end{equation}
where {\it const.} and $\b$ are positive constants
of order ${\cal O}(1)$ ($\b = \sqrt{6} g_c$).
We conclude the following:
\begin{enumerate}
\item $G_\mu(r)$ falls off like ${\rm e}^{-2(\triangle \m)^{\frac{1}{4}} \b r}$ for $r \to \infty$,
i.e. the critical exponent $\nu = \frac{1}{4}$ and the Hausdorff dimension
$d_H =4$.
\item $G_\mu(r)$ behaves like $r^{-3}$ for $1 \ll r \ll (\triangle \m)^{-\frac{1}{4}}$, i.e.
the scaling exponent $\eta = 4$.
\item From Fisher's scaling relation we get $\gamma = \nu (2-\eta) = -1/2$.
This well known result can of course also be derived directly from
\begin{equation}\label{yy1}
\chi_\mu = \sum_{r=1}^\infty G_\mu(r) = {\rm const.} -c^2 (\triangle \m)^{\frac{1}{2}} + \cdots
\end{equation}
by use of \rf{x15} (the elementary integral is spelled out below, after this
list), but it should be clear that the explicit calculation
in \rf{yy1} is nothing but a specific example of the general calculation
used in proving Fisher's scaling relation. What is somewhat
unusual compared to ordinary statistical systems is that the
anomalous scaling dimension $\eta >2$. $\eta =0$ is the ordinary
free field result, while $\eta =2$ is the infinite temperature limit,
and for statistical systems we expect $\eta < 2$. A final comment
to \rf{yy1} is that the constant in front of $(\triangle \m)^{\frac{1}{2}}$ is negative,
as indicated by the notation. This has a direct physical interpretation:
$G_\mu(r)$ is by definition positive and the same is the case for $\chi_\mu$.
However, since $\gamma = -1/2$ it follows that
$\chi$ will not be divergent at the critical point $\mu_c$. But
\begin{equation}\label{yy2}
\tilde{\chi}_\mu \equiv -\frac{d \chi_\mu}{d\mu} \sim \frac{c^2}{(\triangle \m)^{\frac{1}{2}}}
+ \cdots
\end{equation}
{\it is} divergent for $\mu \to \mu_c$ and {\it has} to be positive
since it, close to the scaling limit,
has the interpretation as the sum over all triangulations
with three marked links.
\item Any 2-loop function
$G_\mu(l_1,l_2;r)$ has the same behavior as $G_\mu(r)$ as long
as $l_1,l_2$ stay finite as $\triangle \m \to 0$.
\end{enumerate}
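For completeness we note the elementary integral behind \rf{yy1} (and behind the
continuum result \rf{yy4} below): writing $b \equiv \b \, (\triangle \m)^{\frac{1}{4}}$ and
introducing a short distance cutoff $r_0$ of the order of one lattice spacing,
\begin{displaymath}
\int_{r_0}^\infty dr \; \frac{\cosh (b r)}{\sinh^3 (b r)} =
\frac{1}{2 b \sinh^2 (b r_0)} = \frac{1}{2 b^3 r_0^2} - \frac{1}{6 b} + {\cal O}(b),
\end{displaymath}
so multiplication by the prefactor $(\triangle \m)^{3/4}$ of \rf{x15} indeed yields a
non-universal constant plus a universal term $\propto -(\triangle \m)^{\frac{1}{2}}$.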
\noindent
It is clear that we could have taken the continuum limit almost at
any point in the above calculations, and in fact already in the
basic equation \rf{x5} by the substitution:
\begin{equation}\label{x17}
\frac{1}{x} = \frac{1}{x_c} + \a \xi a,~~~R = \b r \sqrt{a} ,~~~~\triangle \m = \mu_r
a^2,
\end{equation}
\begin{equation}\label{x18}
f_\mu(x) \sim a^{3/2}{\cal F}_{\mu_r}(\xi)+ {\cal O}(a^{5/2}),~~~~
{\cal F}_{\mu_r}(\xi) = (\xi - \frac{1}{2} \sqrt{\m_r})\sqrt{\xi+\sqrt{\m_r}},
\end{equation}
where ${\cal F}_{\mu_r}(\xi)$ is the universal disk-amplitude \cite{david3,am}.
The reason we kept the discretized version throughout the calculation
was to avoid any ambiguity in going from the 2-loop function to
the 2-point function. Properties of the continuum 2-loop function
have already been studied \cite{transfer,watabiki,gk}, but in the
continuum version the length of the boundary loops $l_i$ is
already taken to infinity by $L = l a$, $a \to 0$, $l\to \infty$, $L$ fixed.
To get the 2-point function we would have to take the limit $L_1 \to 0$
in the continuum version $G_{\mu_r}(L_1,L_2;R)$ of $G_\mu(l_1,l_2;r)$.
We avoid this ambiguity and can write directly for the continuum
2-point function:
\begin{equation}\label{x19}
G_{\mu_r}(R) = \lim_{a\to 0} (\sqrt{a})^{\eta-1} G_\mu (r) \sim
(\mu_r)^{3/4}\frac{\cosh [(\mu_r)^{\frac{1}{4}} R]}{\sinh^3 [(\mu_r)^{\frac{1}{4}} R]}.
\end{equation}
The factor in front of $G_\mu(r)$ is the usual ``wave function renormalization''
present in the path integral representation of the propagator.
In this way the ``mass'' \rf{7} is $M=2 \mu_r^{1/4}$, the unusual
power being due to $d_H=4$. Again we find by explicit calculation the
continuum version of \rf{yy1}
\begin{equation}\label{yy4}
\chi_{\mu_r} = \int_0^\infty dR\; G_{\mu_r} (R) \sim
\frac{{\rm const.}}{a} - \frac{1}{6} \mu_r^{1/2},
\end{equation}
where the constant in front of $\mu_r^{\frac{1}{2}}$ has to be negative for the
reasons mentioned above.
It is interesting to note that the zero of $f_\mu(x)$ (or ${\cal F}_{\mu_r}(\xi)$)
determines the infinite $r$ limit of the 2-loop (or 2-point) function.
This follows directly from the solution \rf{x10} to
the characteristic equation for \rf{x5}. The distance $r$ can only
diverge if $\hat{x} \to 1/c_2$, the zero of $f_\mu(x)$. This zero is usually
uniquely determined by the requirement that the generating functional
$F_\mu (x)$ is analytic away from a cut on the real axis and
goes to a constant for $|x| \to 0$. We now see a direct
physical interpretation: The zero of $f_\mu(x)$, or more suggestively,
the pole of $1/f_\mu(x)$, determines the mass of the 2-point function.
Recall that for the branched polymer model we found that the
2-point function is $G^{(BP)}_\mu (r) \sim {\rm e}^{-\kappa \sqrt{\triangle \m}\, r}$,
or introducing continuous variables $\sqrt{\triangle \m} = M a$ and
$R = \kappa r a$:
\begin{equation}\label{yy5}
G_M^{(BP)} (R) = {\rm e}^{-M R },
\end{equation}
and compare this to the 2d quantum gravity 2-point function \rf{x19}:
\begin{equation}\label{yy6}
G_M (R) = \frac{1}{8} M^3 \frac{\cosh M R/2}{\sinh^3 M R/2}=
\frac{1}{2} M^3 \sum_{n=1}^\infty n^2\; {\rm e}^{-n MR}.
\end{equation}
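The equality of the closed form and the series in \rf{yy6} is a geometric series
identity (with $q={\rm e}^{-MR}$ one has $\cosh (MR/2)/\sinh^3 (MR/2) =
4\sum_n n^2 q^n$); it is easily verified numerically, e.g. in Python
(with illustrative values of $M$ and $R$):
\begin{verbatim}
import numpy as np

M, R = 1.0, 0.7                  # illustrative values
n = np.arange(1, 200)
lhs = 0.125 * M**3 * np.cosh(M*R/2.0) / np.sinh(M*R/2.0)**3
rhs = 0.5 * M**3 * np.sum(n**2 * np.exp(-n*M*R))
print(lhs, rhs)                  # agree to machine precision
\end{verbatim}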
While there is only a single mass excitation for the branched polymer
model, which from this point of view seems rather trivial, the
2d gravity model seems to contain an infinite tower of equidistant
mass excitations. Since we get the susceptibility $\chi(\mu_r)$
by integrating $G_M(R)$ with respect to $R$ we get the following
formal expression\footnote{The divergence of the sum comes from
the short distance behavior of $G_M(R)$. In the discretized version
of the theory this singular behavior is modified for
$R \sim 1/\sqrt{a}$ and a ``physical cut off'' would be present
such that $$\chi(\mu_r) = \frac{1}{2} M^2 \sum_n n \,{\rm e}^{-nM\sqrt{a}} =
\frac{1}{2a} - \frac{1}{6} \sqrt{\mu_r}$$ in agreement with \rf{yy4}.}
for $\chi(\mu_r)$
\begin{equation}\label{yy7}
Z''(\mu_r) = \chi(\mu_r) = \frac{1}{2} M^2 \sum_{n=1}^\infty n ~~
(= -\frac{1}{6} \mu_r^{1/2}),
\end{equation}
where the last equality sign uses $\sum_{n=1}^\infty n = -1/12$.
It agrees of course with the universal part of \rf{yy4} but is interesting
since integration (ignoring non-universal parts of $Z(\mu_r)$) leads
to the following representation of $Z(\mu_r)$ as a sum over
``mass excitations'':
\begin{equation}\label{yy8}
Z(\mu_r) = \frac{4}{3} \mu_r^{9/4} \sum_{n=1}^\infty nM.
\end{equation}
\section{Discussion}
We have shown that the 2-point function $G_\mu(r)$ is a natural
variable which allows us to extract scaling relations in a simple
way. We tested the procedure in 2d quantum gravity which can be solved
analytically and found $\nu=1/4$ and $\eta=4$. From the scaling relations
\rf{11} and \rf{12} we conclude that $\gamma=-1/2$ (which is of course well known)
and $d_H=4$. At first sight it might be surprising that the Hausdorff
dimension is 4, which implies that the geodesic distances scale like
$r\sqrt{a} $ rather than like $a\,r $, where $a$ is the lattice spacing
in the triangulations. However, such non-trivial scaling is unavoidable
if $d_H \neq 2$ and here it has the following simple interpretation:
A boundary of $l$ links will have the discrete length $l$ in lattice
units, but if we view the boundary from the interior of the surface
its true linear extension $r$ will only be $\sqrt{l}$ since the
boundary can be viewed as a random walk from the interior. If we
insist on a continuum limit where we have surfaces with macroscopic boundaries
of length $L = a\,l$ and ``physical'' area $A=Na^2$, $a$ being the length
unit of the links, such that $L^2 \sim A$, we are led to
\begin{equation}\label{35}
A \sim L^2 \sim a^2 l^2 \sim a^2 r^4 \sim R^4.
\end{equation}
The above calculation is independent of the fact that we used triangulations
as the underlying discretization. Any reasonable distribution of
polygons will give the same results. Once the distribution of polygons
is fixed there is a unique critical point $\mu_c$ of the chemical
potential for polygons. The discretized equation will still be given
by \rf{x5}, only $f_\mu(x)$ will be a higher order (even infinite order)
polynomial times a square root cut. Close to the critical point
it will still maintain the structure \rf{x7a} as follows from
the general analysis of matrix models \cite{ackm} and the results will
be unchanged for $\triangle \m \to 0$. In particular this shows that we would
have obtained the same results if we used the shortest link length
as geodesic distance, rather than the shortest link length on the dual lattice,
since we could instead perform the summation over $\phi^3$ graphs. These
would be in one-to-one correspondence with the triangulations and our
definition
of geodesic distance on this class of graphs would correspond to the
link-length
definition on the triangulations.
We find formula \rf{yy8} quite interesting. It should be possible to
understand the mass excitations $nM$ in terms of Liouville theory.
It is not clear that the program will work well in the case of
the so-called multicritical matrix models. In these models
the different polygons used in the discretization are not
assigned a positive weight in the summation and the basic inequalities
like \rf{6} are not valid.
In the Liouville formulation the multicritical models correspond to
the non-unitary conformal field theories coupled to 2d quantum gravity
and we will face the problem of correlation functions growing with distance.
This problem is currently under investigation.
In the case of unitary models coupled to gravity we expect
our philosophy to apply. These models usually have a representation
at the discretized level as short range interacting spin models
which at specific temperatures become critical. For these models
there is a chance that estimates like \rf{6} might be valid.
The standard example is the Ising model on dynamical triangulations.
We hope to be able to solve this model by the technique outlined
above.
Finally it would be very interesting to generalize the above calculations
to higher dimensional simplicial quantum gravity. Work in this direction
is in progress \cite{aj1}.
\vspace{12pt}
\noindent
{\bf Acknowledgment} It is a pleasure to thank Jerzy Jurkiewicz
for many interesting discussions.
\section{Introduction}
\label{sec:intro}
\glspl{frb} are short-duration, broad-band, dispersed pulses that are detected
at radio frequencies. They are mostly classified by virtue of their dispersion
being far in excess of the expected Galactic contribution. As for radio pulsars,
for FRBs, where we observe the pulse over a frequency band ranging from $\nu_1$
to $\nu_2$, the resulting dispersion delay
\begin{equation}
\Delta t \propto {\rm DM} \, (\nu_1^{-2} - \nu_2^{-2}),
\end{equation}
where the \gls{dm} is the line integral of the electron column
density along the line of sight to the source.
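For reference, a short Python sketch of this delay; the constant is the standard
dispersion constant $4.148808\times 10^3$~s~MHz$^2$~pc$^{-1}$~cm$^3$, while the DM
and band edges below are illustrative assumptions:
\begin{verbatim}
KDM = 4.148808e3   # dispersion constant, s MHz^2 pc^-1 cm^3

def dispersion_delay(dm, f_lo_mhz, f_hi_mhz):
    """Delay (s) of the low band edge relative to the high one."""
    return KDM * dm * (f_lo_mhz**-2 - f_hi_mhz**-2)

# e.g. DM = 281 pc cm^-3 across an assumed 56 MHz band at L-band:
print(dispersion_delay(281.0, 1375.0, 1431.0))   # ~0.047 s
\end{verbatim}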
Although the physical process that gives rise to FRBs is unknown, the
possibility that they originate at cosmological distances, and their potential
use as natural probes of large-scale structure and magneto-ionic content of the
Universe makes them worthy of attention. They appear as bright sources at the
telescopes on Earth, which indicates high luminosities given the implied
distance. As short duration bursts, probably emanating from point-like sources,
they offer a unique opportunity to probe the inter-galactic medium
\citep[IGM;][]{2013ApJ...776..125M}, as pulsars do for the Galactic interstellar
medium.
Since the first reported detection \citep{2007Sci...318..777L}, a number of
surveys using a range of radio telescopes have attempted to detect further
bursts. At the time of writing, 25 \glspl{frb} have been reported \citep[for an
up-to-date list, see][]{2016PASA...33...45P}. While the majority of these have
been detected with the Parkes Radio Telescope at 1.4~GHz (L-band), other
telescopes are making important contributions. FRB~121102 was detected in the
Pulsar Arecibo L-band Feed Array (PALFA) survey \citep{2014ApJ...790..101S}. This
\gls{frb} is the only known \gls{frb} to repeat \citep{2016ApJ...833..177S}.
FRB~110523 was detected with the Green Bank Telescope (GBT) at 820~MHz,
confirming \glspl{frb} are observable outside L-band
\citep{2015Natur.528..523M}. Recently, a number of very bright \glspl{frb} have
been detected with UTMOST at 843~MHz
\citep{2017MNRAS.468.3746C,atel10697,atel10867} and ASKAP at 1.4~GHz
\citep{2017ApJ...841L..12B}.
Even with the current small sample of the FRB population, it is clear that their
properties vary significantly. The measured \glspl{dm} range from
176~pc~cm$^{-3}$ (FRB~170827) to 2596 pc~cm$^{-3}$ (FRB~160102), with pulse
widths ranging from sub-ms (unresolved) to 26~ms, and apparent flux densities
covering four orders of magnitude. If the population is extragalactic then the
sky distribution is expected to be isotropic. But, there is an apparent observational
disparity in the \gls{frb} event rate between high and low Galactic latitudes,
possibly due to diffractive interstellar scintillation
\citep{2015MNRAS.451.3278M}.
Single dish telescopes have been essential to the detection of \glspl{frb} and
continue to be useful for population statistics. But, these telescopes provide
limited localization. The unknown detection position in the primary beam and the
one-off nature of most of the \glspl{frb} also prevent precise
determination of the absolute flux density or the spectral index. Only the
repeater FRB~121102 has been localized using \gls{vlbi}
\citep{2017ApJ...834L...8M, 2017ApJ...834L...7T}. Localization is key to
understanding \glspl{frb}. This requires the use of interferometric arrays with
arc-second accuracy, such as MeerKAT, ASKAP, and the SKA.
Apart from localization, \gls{frb} spectra offer important clues on the nature
of the emission process. Low frequency searches with LOFAR
\citep{2015MNRAS.452.1254K}, MWA \citep{2015AJ....150..199T}, and the GBT
\citep{2017arXiv170107457C} have reported non-detections.
A limited number of \gls{frb} surveys have been performed above L-band frequencies.
This is, in part, due to the narrowing of beam size which limits sky coverage.
V-FASTR \citep{2016ApJ...826..223B}, a commensal survey on the VLBA, has
reported a non-detection on observations up to 100~GHz.
\cite{2017arXiv170507553L} ran a coordinated-in-time, multi-telescope campaign
of the repeater \gls{frb}. They report non-detection of pulses at VHF, C-band,
and Ku-band during periods of detected bursts in L-band and S-band. \cite{atel10675}
report detections of FRB121102 from 4-8 GHz (C-band). In summary, our current
understanding of \glspl{frb} spectra is limited; however, they appear not to
follow the steep power-law example of radio pulsars, and may not even be smooth
and continuous with frequency.
For single dish telescopes there is a trade-off of sensitivity for survey speed.
Small dishes, such as those in the ATA `Fly's Eye' survey
\citep{2012ApJ...744..109S}, allow for a large sky coverage, but have low
sensitivity. ASKAP dishes with \glspl{paf} provide a large sky coverage with a
significant enough sensitivity to detect bright \glspl{frb}. Conversely,
Arecibo provides the highest sensitivity, but with a very narrow beam.
The majority of \glspl{frb} have been discovered with Parkes using the
multi-beam system. The high sensitivity, large number of survey hours, and
increased field of view from using multiple beams all contribute to the large
number of detections. Interferometric arrays such as CHIME and MeerKAT will
provide both sensitivity and sky coverage. One important question relating to
the nature of the \gls{frb} population is what are the statistics of source
numbers versus source flux density, and whether or not the cumulative flux
density distribution is consistent with a population of cosmologically
distributed standard candles. To answer this question, it is particularly
interesting to sample both extreme ends of the flux density axis: the brightest
\glspl{frb} discovered using small telescopes in long duration and large
sky-coverage surveys, as well as the weakest \glspl{frb} sampled through
high-sensitivity observations with large telescopes, necessarily sacrificing
survey time and sky coverage.
In this paper, we describe results from the ALFABURST survey, which has enabled
high sensitivity observations to better sample the low flux density end of the
population. ALFABURST makes use of the large amount of time spent by the ALFA
receiver for other astronomical surveys. In Section \ref{sec:overview}, we
summarize the survey parameters and observations carried out so far. A
wide-feature, learned model was used to classify each dataset in order to
filter out radio-frequency interference and create a priority queue for visual
examination. This model and the post-processing procedures are discussed in
Section \ref{sec:event_classify}. Although no FRBs have been found in
observations carried out so far, we did detect one pulse that is consistent
with an origin in the Galactic plane. This source is discussed in Section
\ref{sec:18062017}. We discuss the expected event rates in Section
\ref{sec:event_rates}. Finally, in Section \ref{sec:discuss}, we consider
possible explanations for our non-detection of FRBs so far and speculate on
future developments.
\section{Observations}
\label{sec:overview}
\subsection{ALFABURST description}
ALFABURST is an \gls{frb} search instrument which has been used to commensally
observe since July 2015 with other \gls{alfa} observations at the Arecibo
Observatory. This system is a component of the SETIBURST back-end
\citep{2017ApJS..228...21C} and uses ARTEMIS \citep{2015MNRAS.452.1254K} for
automated, real-time pulse detection. We perform inline radio-frequency
interference (RFI) removal, baselining using zero-DM removal
\citep{2009MNRAS.395..410E}, and spectrum normalization before single pulse
detection. During this time period a \gls{sps} was performed from \gls{dm} 0 to
10000~pc~cm$^{-3}$, pulse widths from $256~\mu s$ to $16$~ms (using a logarithmic
decimation factor $D=1,2,4,\ldots,64$), across a 56~MHz bandwidth for
all 7 beams. We return to the effective \gls{dm} range of the search in Section
\ref{sec:event_rates}. The gain of Arecibo allows for the most sensitive
\gls{frb} search to date.
Detections above a peak signal-to-noise ratio (S/N) of 10 were recorded along
with an $8.4$~s dynamic spectrum window around the event. When multiple events
were detected in the same time window, these events were pooled together and
recorded to disk. Approximately $2.5 \times 10^5$ 8.4~s datasets were recorded
between July 2015 and August 2017, the vast majority of which are false
detections due to \gls{rfi} signals passing the real-time \gls{rfi} exciser. We
have detected no \glspl{frb} in our commensal survey.
\subsection{Inline RFI Excision}
\label{sec:rfi_excise}
An inline RFI exciser is implemented in the pipeline to mitigate strong RFI
sources. This leads to a significant reduction in the number of false-positive
detections in the dedispersion search. Individual frequency channels in a
spectrum are replaced when the power exceeds a threshold $T_{\textrm{chan}}$
after the spectrum is normalized to zero mean, unity standard deviation
($\mu=0$, $\sigma=1$). Entire spectra are also clipped when their
frequency-integrated power exceeds a threshold $T_{\textrm{spectra}}$. For
standard ALFABURST operation $T_{\textrm{chan}} = 5$ and $T_{\textrm{spectra}} =
10$. The \gls{rfi} exciser operates on data prior to any time decimation and
integration ($D=1$).
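A minimal sketch of this two-threshold exciser in Python, assuming the dynamic
spectrum arrives as a (spectra $\times$ channels) array already normalized per
channel; the normalization of the integrated power is an assumption on our part:
\begin{verbatim}
import numpy as np

def excise(dyn, t_chan=5.0, t_spec=10.0, rng=np.random.default_rng(0)):
    """dyn: (n_spectra, n_chan) array, already normalized to mu=0, sigma=1."""
    dyn = dyn.copy()
    bad_chan = dyn > t_chan                      # bright individual channels
    dyn[bad_chan] = rng.standard_normal(bad_chan.sum())
    n_chan = dyn.shape[1]
    power = dyn.sum(axis=1) / np.sqrt(n_chan)    # integrated power, unit sigma
    bad_spec = power > t_spec
    dyn[bad_spec] = rng.standard_normal((bad_spec.sum(), n_chan))
    return dyn
\end{verbatim}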
For very bright pulses, the \gls{rfi} exciser will erroneously replace channels
or spectra, reducing the overall flux. For the sensitivity of the \gls{alfa}
receiver, individual channels with flux density greater than 2.8~Jy and spectra with
frequency-integrated flux density greater than $\sim$250~mJy are excised. The peaks of
bright \glspl{frb} such as FRB~150807 and FRB~170827 would be significantly
clipped by the exciser. But, the edges of the pulse would not. Both of these
\glspl{frb} would still be detected at a significant peak S/N. All previously
reported \glspl{frb} would be detected with ALFABURST at high peak S/N even if
partially clipped.
The zero-DM removal and spectral replacement affects low-DM pulses. For
reference, the minimum \gls{dm} before the total dispersive delay across the
band is equal to a time sample is \gls{dm}~$=1.8$~pc~cm$^{-3}$ for the
typical ALFABURST observing band (using Eq. 5.1 of \cite{2004hpa..book.....L}).
The minimum \gls{dm} before the dispersive delay within a single channel equals
the sampling time (also known as the diagonal \gls{dm}) is
\gls{dm}~$=976$~pc~cm$^{-3}$. Single pulses from low-DM pulsars such
as B0834+06 are often clipped by the exciser, but are still detected at
significant peak S/N (see Table \ref{tab:knpsrtab}). As ALFABURST is focused on
detecting high-DM pulses, spectral replacement does not affect the survey
sensitivity.
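Both DM scales follow from the standard dispersion constant; in the Python
sketch below the band placement and channel count are assumptions chosen only to
reproduce the order of magnitude of the values quoted above:
\begin{verbatim}
KDM = 4.148808e3                               # s MHz^2 pc^-1 cm^3
f_lo, f_hi, tsamp = 1447.0, 1503.0, 256.0e-6   # assumed band placement

# DM at which the delay across the full band equals one sample:
dm_band = tsamp / (KDM * (f_lo**-2 - f_hi**-2))

# DM at which smearing within one channel equals one sample,
# assuming (for illustration) 512 channels across the 56 MHz band:
df, fc = (f_hi - f_lo) / 512.0, 0.5 * (f_lo + f_hi)
dm_diag = tsamp / (KDM * ((fc - df/2)**-2 - (fc + df/2)**-2))
print(dm_band, dm_diag)                        # ~1.8 and ~9e2 pc cm^-3
\end{verbatim}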
\subsection{Single Pulse Search Verification}
\label{sec:system_verify}
The PALFA survey schedule includes regular observations of known pulsars to
verify their data analysis pipeline. This provides a consistent verification of
the ability of our \gls{sps} to detect dispersed pulses. As the PALFA survey is targeted at the
Galactic plane, a number of high \gls{dm} pulsars were observed. Single pulses
from B1859+03 (\gls{dm}: 402), B1900+01 (\gls{dm}: 245) (Figure
\ref{fig:B1900}), B2002+31 (\gls{dm}: 234), B1933+16 (\gls{dm}: 158), among
others were detected. Table \ref{tab:knpsrtab} lists the parameters for the
known pulsars detected by the \gls{sps}.
\begin{table}
\begin{center}
\begin{tabular}{ld{2.1}d{3.2}d{3.0}d{4.0}d{2.1}}
\hline
\multicolumn{1}{c}{PSR} & \multicolumn{1}{c}{S\subs{1400}} & \multicolumn{1}{c}{DM\subs{cat}} & \multicolumn{1}{c}{DM\subs{obs}} & \multicolumn{1}{c}{N\subs{pulses}} & \multicolumn{1}{c}{S/N\subs{max}} \\
& \multicolumn{1}{c}{(mJy)} & \multicolumn{1}{c}{pc cm\sups{--3}} & \multicolumn{1}{c}{pc cm\sups{--3}} & & \\
\hline
B0525+21 & 9.0 & 50.87 & 50 & 1 & 72.3 \\
B0540+23 & 9.0 & 77.70 & 77 & 1 & 11.7 \\
B0611+22 & 2.2 & 96.91 & 101 & 5192 & 48.8 \\
J0631+1036 & 0.9 & 125.36 & 125 & 7 & 10.2 \\
B0834+06 & 4.0 & 12.86 & 9 & 223 & 35.0 \\
B1133+16 & 32.0 & 4.84 & 7 & 291 & 15.5 \\
B1737+13 & 3.9 & 48.67 & 46 & 1880 & 49.4 \\
B1859+03 & 4.2 & 402.08 & 402 & 2 & 20.4 \\
B1900+01 & 5.5 & 245.17 & 246 & 151 & 35.4 \\
J1908+0457 & 0.9 & 360.00 & 352 & 3 & 12.9 \\
J1908+0500 & 0.8 & 201.42 & 202 & 160 & 18.5 \\
J1910+0728 & 0.9 & 283.70 & 288 & 2 & 10.2 \\
J1913+0904 & 0.2 & 95.30 & 97 & 1524 & 44.7 \\
B1913+10 & 1.3 & 241.69 & 245 & 2 & 16.1 \\
B1933+16 & 42.0 & 158.52 & 154 & 10 & 30.5 \\
B1937+24 & * & 142.88 & 146 & 37 & 24.6 \\
B2002+31 & 1.8 & 234.82 & 250 & 4 & 27.6 \\
\hline
\end{tabular}
\end{center}
\caption{Parameters for known pulsars detected in the ALFABURST survey. The
columns from left to right are, pulsar name, mean flux density at 1400~MHz,
catalog DM, observed DM of the strongest pulse, number of detected
single-pulses, and maximum single-pulse S/N. The mean flux density at 1400
MHz and catalog DM were obtained from the ATNF pulsar catalog (version
1.56).}
\label{tab:knpsrtab}
\end{table}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/B1900_01.pdf}
\caption{Detection of a single pulse from PSR B1900+01 (DM 245~pc~cm$^{-3}$). The
baseline dip before and after the pulse is due to zero-DM removal
\citep{2009MNRAS.395..410E}. }
\label{fig:B1900}
\end{figure}
\subsection{Survey Coverage}
\label{sec:survey_coverage}
Since ALFABURST was installed, the majority of ALFA observation time is
allocated for the AGES \citep{2006MNRAS.371.1617A} and PALFA
\citep{2006ApJ...637..446C} surveys (Figure \ref{fig:sky_coverage}). The AGES
survey pointings are off the Galactic Plane, which is ideal for \gls{frb}
surveys. PALFA is a pulsar search survey with pointings near the Galactic
Plane. These lines of sight can introduce significant dispersion due to the
\gls{ism}. We search out to a DM of $10^{4}$~pc~cm$^{-3}$ which is well beyond
the maximum Galactic dispersion, but within the technical capabilities of our
system.
\begin{figure*}
\includegraphics[width=1.0\linewidth]{figures/cartview_sky_coverage.pdf}
\caption{Sky coverage during ALFA usage between July 2015 and June 2017,
shown in a Cartesian projection in Galactic coordinates along with
declination pointing limits (blue dashed). Color represents total time
pointing in a log scale. The majority of ALFA usage during this time was for
the PALFA survey along the Galactic Plane (dot-dashed boxes) and the AGES
survey (dashed box). The S-shaped arcs across the plot are due to fixed
pointings in local azimuth and altitude.
}
\label{fig:sky_coverage}
\end{figure*}
Approximately 65\% of the ALFABURST survey time has been in pointings out of the
Galactic Plane ($|b| > 5^{\circ}$). These pointings are primarily from the
ongoing AGES survey. Pointings in the plane are primarily from the PALFA
survey. The PALFA survey detected the repeating \gls{frb} FRB121102
\citep{2014ApJ...790..101S}, the only \gls{frb} detected with Arecibo thus far.
As ALFABURST has been running commensally with the PALFA survey since 2015 these
two back-ends act as independent single-pulse search pipelines, useful for
detection verification. Since the beginning of ALFABURST observations no
\glspl{frb} have been reported by PALFA. No follow-up observations of FRB~121102
have been conducted using ALFA.
\subsection{Observing Time}
\label{sec:obs_time}
From the beginning of July 2015 to the end of April 2017 \gls{alfa} has been
used for approximately 1400 hours of observing, with all seven beams functional.
Due to pipeline development and hardware reliability, ALFABURST was active and
functional for, on average, 322 hours per beam. The current system is set up to
be reliably in use for all beams any time \gls{alfa} is active and in the
correct receiver turret position. Since April 2017 this stable version of the
pipeline has run for an additional 196 hours. This has resulted in a total of
518 hours of processed observing time since ALFABURST began commensal observations.
\section{Event Classification Strategy}
\label{sec:event_classify}
The significant DM trial range, variety of \gls{rfi} events, and commensal
nature of the survey, leads to a large number of false detections.
Approximately $2 \times 10^5$ unique 8.4~s datasets were recorded with at least
one detection above the minimum peak S/N threshold of 10. In order to reduce the
number of events we need to visually inspect we have developed a prioritizer
model based on a trained probabilistic classifier. The use of trained
classifier models is becoming a common post-processing technique in \gls{frb}
surveys \citep{2016PASP..128h4503W} in order to manage the large number of
detected events. Our model can be found in the survey git
repository\footnote{https://github.com/griffinfoster/alfaburst-survey}.
Building the model involved inspecting and labelling a sample of the events. We
used a sample set of approximately 15,000 event windows. For each event window,
a diagnostic plot was generated which contained the original dynamic spectrum,
the dedispersed dynamic spectrum of the S/N-maximized DM, along with a frequency
collapsed time series of the detection. During figure generation 409 features
were also computed to be used in the model. These features include statistics
such as the number of triggers in the event window, the DM range of these
triggers, and the median, mean, and standard deviation of a coarse pixelization
of the dynamic spectrum ($4 \times 16$) and S/N maximized dedispersed time
series (16 segments). A complete list of the features can be found in the
survey git repository. These raw features were reduced during model
pre-processing to 398 features.
In order to build a classifier model using the derived event statistics, a
sample of events was visually inspected and labelled into 9 classes of RFI,
systematic effects, and astrophysical sources (pulsars) (Table
\ref{tbl:event_classes}). These heuristic classes were based on multiple,
iterative inspections of the sampled events. A simple binary astrophysical
classifier of events leads to a poor model because the types of events which
are non-astrophysical take on a variety of forms.
\begin{table}
\centering
\begin{tabularx}{\linewidth}{clX}
\hline
Class ID & $N_{\textrm{events}}$ & Description \\
\hline
1 & 151 & Unclipped low-level RFI \\
2 & 4159 & Wide-band, duration \textgreater 1 second clipped RFI (2016+) \\
3 & 1898 & Wide-band, duration \textless 1 second clipped RFI (2016+) \\
4 & 448 & Wide-band, short duration clipped RFI (2015) \\
5 & 617 & Sharp bandpass transition \\
6 & 4649 & Wide-band, bursty, clipped RFI (2015) \\
7 & 863 & Error in spectra capture or replacement \\
8 & 1594 & Systematic int/float overflow \\
9 & 691 & Astrophysical pulse or unknown event \\
\hline
Total & 15070 &
\end{tabularx}
\caption{Event classes and distribution from the sample of labelled events used
to train the priority classifier model.}
\label{tbl:event_classes}
\end{table}
The class distribution is time-dependent as the detection pipeline has been
updated, the RFI environment has changed, and the telescope observing schedule
has changed over the time the survey has run. Classes 2 and 3 occur after the
inline RFI exciser was improved in 2016. Whereas classes 4 and 6 are events that
occur with the original RFI exciser. Because ALFABURST operates in
commensal mode, the band can change unexpectedly due to a change in the
observing frequency; these event windows are labelled as class 5 events. Class 7
and 8 are due to packet loss and incorrect digital gain settings. We found that
class 8 events can be removed simply by checking for overflow values in the
spectra, and therefore this class is dropped before building a classifier model.
Pulses from known pulsars were used as a proxy class for the FRB class. The
number of astrophysical pulse detections was low compared to the total number of
false-positive detections. It was necessary to use a large number of classes
as RFI and systematic effects took on a variety of forms. This had the
additional effect of balancing out the number of events in each class, making
model training more robust.
These features along with the labels were used to build a random forest
probabilistic classifier model \citep{Ho:1995:RDF:844379.844681,Breiman2001}
using the \texttt{scikit-learn} package \citep{scikit-learn}.
This model is then used to probabilistically predict which class an
unlabelled data set belongs to. A one vs. the rest multi-class classifier strategy was used
for training. Before training, the features of the labelled data sets were
median removed and standard deviation normalized using an interquartile robust
scaler. A random forest of 80 trees and 20 random features per node split was
found to produce the best score in a hyper-parameter grid search using a
log-loss scoring metric. During training and hyper-parameter optimization a
stratified k-fold cross-validation (3 splits)
\citep{Kohavi:1995:SCB:1643031.1643047} procedure was used.
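A compact sketch of this training setup with \texttt{scikit-learn}; the feature
matrix \texttt{X} and label vector \texttt{y} are assumed to exist, and the
hyper-parameter values are the quoted best-fit ones:
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

model = make_pipeline(
    RobustScaler(),   # median removal, interquartile scaling
    OneVsRestClassifier(
        RandomForestClassifier(n_estimators=80, max_features=20)),
)
cv = StratifiedKFold(n_splits=3)
# scores = cross_val_score(model, X, y, scoring="neg_log_loss", cv=cv)
# model.fit(X_train, y_train); proba = model.predict_proba(X_test)
\end{verbatim}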
The trained model is successful at predicting the majority of the astrophysical
events to be astrophysical with high probability, as shown in the confusion
matrix (Figure \ref{fig:confuse}) when using 75\% of the labelled events for
training, and 25\% for testing. Of the non-astrophysical events, only 13 events
were misclassified as being likely astrophysical, a reasonably small number of
false-positive events to inspect. However, of the 163 astrophysical pulses in the
testing set, 6 events were misclassified. This is a more serious issue as we
would like to minimize the number of false-negative events for astrophysical
event windows.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/confusion_matrix.pdf}
\caption{Confusion matrix of labelled testing data set after training the
random forest model with the labelled training data set.
}
\label{fig:confuse}
\end{figure}
In searching for \glspl{frb} we are inclined to allow for a large number
of false-positive events (detection due to RFI or systematics) as long as there
are no false-negative events (pulses classified as RFI), i.e. a high recall for
astrophysical pulses. But, the confusion matrix is computed based on a discrete
class classification. In the test set the predicted probabilities of all the
astrophysical pulses to belong to the astrophysical class are above 0.25
(Figure \ref{fig:class_hist}), while 20 events are reported as
false-positive for class 9 above this probability. We use this threshold to
select the top candidates from the survey. The events are sorted into a priority
queue based on the probability the event is astrophysical.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/class9_hist.pdf}
\caption{Histogram of the probability that a given astrophysical event
(class 9, blue) and all other classes (green) is predicted to be an
astrophysical event using our probabilistic classifier model.
}
\label{fig:class_hist}
\end{figure}
Using a probabilistic multi-label classifier allows us to prioritize the order
and amount of time we spend on examining event datasets. Those with high
probability of belonging to a single class can be examined as a group quickly.
Datasets which fall into multiple classes are examined more thoroughly, they are
labelled by hand, and the set of features extracted during the figure generation
process is refined to further differentiate classes. This model building,
prioritizing, and examination process was iterated on multiple times to improve
the classifier. We continue to iterate on this model and will use it for future
prioritization of examining events.
We have not used the classifier model directly in our pipeline as the black-box
nature of the model can lead to misclassification, rather we have used it to
create a priority queue. We have also used our classifier model as a data
exploration tool to add and refine procedural filters to the data. An output of
the random forest model is a ranking of the features by `importance' for
classification. For example, the most important feature for correctly
classifying a class 1 event (long duration replaced RFI) was the length of the
longest period of the dynamic spectrum with a derivative of zero. This makes
sense, as wideband RFI is replaced by a mean-zero noise spectrum. The most
important features for correctly predicting an astrophysical pulse were the
statistics from coarse pixelization of the dedispersed time series. This can be
attributed to the detection of a high S/N event in an otherwise noisy time
series.
Understanding the feature importance has led to the development of a
number of simple filters to reduce the number of false-positive detections
without relying on the classifier model. Data sets were cut if any of the
following criteria were met:
\begin{itemize}
\item The maximum DM of events was less than 50~pc~cm$^{-3}$.
\item Given the optimal dispersion measure, DM$_{\textrm{opt}}$, obtained
from the S/N-maximized DM trial, if the DM range exceeds $(0.5 \times {\rm
DM}_{\textrm{opt}}, 1.5 \times {\rm DM}_{\textrm{opt}})$, then the event is
due to long duration RFI.
\item More than 50\% of the spectra were replaced in the dataset.
\item Any values in the dataset exceed the \texttt{int32} maximum value.
These are class 8 events, caused by errors in receiving packets.
\end{itemize}
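A compact sketch of these cuts in Python; the per-dataset field names are
hypothetical:
\begin{verbatim}
import numpy as np

INT32_MAX = np.iinfo(np.int32).max

def keep(ds):
    """ds: dict with hypothetical fields summarizing one 8.4 s dataset."""
    if ds["dm_max"] < 50.0:                            # low-DM events
        return False
    lo, hi = 0.5 * ds["dm_opt"], 1.5 * ds["dm_opt"]
    if ds["dm_lo"] < lo or ds["dm_hi"] > hi:           # long-duration RFI
        return False
    if ds["frac_replaced"] > 0.5:                      # mostly excised data
        return False
    if np.any(ds["data"] >= INT32_MAX):                # class 8 overflow
        return False
    return True
\end{verbatim}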
These filters were applied to each dataset in post-processing to reduce the
number of datasets to approximately 30,000. The windows were sorted by S/N, and
the top S/N events were examined first. During this process all datasets were
labelled. Astrophysical events were identified based on the beam ID and
pointing information.
\section{The event of 2017, June 18}
\label{sec:18062017}
Though we report no detection of FRBs in the first two years of observations
with ALFABURST, we have made an initial detection of an as-yet-unknown broad-band
(within our band) pulse (Figure \ref{fig:D20170618_spectrum}) at a peak S/N of
18. The peak S/N is maximized by dedispersion using a DM of 281~pc~cm$^{-3}$ and
time decimation factor 8. The main pulse width is approximately 3~ms. The pulse
occurred in beam 5, and there were no other detections in the other beams at the
time.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/Beam5_fb_D20170618T005616_buffer2_spectrum.pdf}
\caption{A broad band pulse (S/N maximized at DM~$=281$~pc~cm$^{-3}$)
detected in beam 5 while the telescope was slewing during a PALFA
observation. There is no known source which has been associated with this
detection. As the observation was in the Galactic Plane it is likely
Galactic in origin.
}
\label{fig:D20170618_spectrum}
\end{figure}
The pulse is made up of two clear components, with the secondary
pulse arriving approximately 20~ms after the primary pulse, as seen in the
dynamic spectrum (Figure \ref{fig:D20170618_spectrum}). In DM-time space the
event is compact, consistent with a $\nu^{-2}$ dispersion relation (Figure
\ref{fig:D20170618_dmspace}), though such a fit has large error bars due to the
small fractional bandwidth that is processed with ALFABURST.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/Beam5_fb_D20170618T005616_buffer2_dmspace.pdf}
\caption{DM-time plot of the 2017 June 18 pulse. The pulse is compact in
DM-time space, consistent with an astrophysical event. The secondary pulse
20~ms after the primary pulse causes the intensity to be slightly elongated
to higher trial DMs.
}
\label{fig:D20170618_dmspace}
\end{figure}
The detection occurred at 04:56:16~UT on 2017, June 18 (MJD 57922) during a
PALFA observing run. The event was not seen by the PALFA collaboration as it
occurred when the telescope was slewing between fields and the PALFA
spectrometers were not running. This is the first known detection of a
transient, broad-band pulse using ALFA during such a slew. However, this makes
it challenging to determine the accurate source position. Pointing information
from Arecibo is reported every second. During the detection the pointing was
changing by approximately $5'$ per second in right ascension $2'$
per second in declination. This rate gives us a conservative estimate
of the error in pointing at the time the pulse was detected. Based on the time
stamp of the pulse and the pointing data the pulse occurred when beam 5 of ALFA
was pointing at right ascension: 18~h 45~m $10 \pm 20$~s, and declination: +00 d
$38 \pm 2$' (Galactic coordinates $l: 32.78 \pm 0.05^{\circ}, ~b: +1.68 \pm
0.05^{\circ}$).
This beam 5 pointing is close to the Galactic plane in the first quadrant. The
DM distance estimated from the NE2001 model \citep{2002astro.ph..7156C} is
approximately 6 kpc, which is well within the Galaxy. The maximum Galactic
contribution along this line of sight would produce a DM of
$\sim800$~pc~cm$^{-3}$. A search of the ATNF pulsar
database\footnote{http://www.atnf.csiro.au/people/pulsar/psrcat}
\citep{2005AJ....129.1993M}, Rotating Radio Transient (RRAT)
catalog\footnote{http://astro.phys.wvu.edu/rratalog}, and recent PALFA
discoveries \footnote{http://www.naic.edu/$\sim$palfa/newpulsars/} revealed no
known source with a DM near 281 pc~cm$^{-3}$ within a degree of the pointing.
As the telescope was slewing at the time, the source was only in the primary
lobe for a fraction of a second (assuming it was in the primary lobe and not a
side lobe). A source on the edge of the \gls{fwhm} beam would transit the beam
in a maximum of 500~ms for the slew rate of the telescope (at the ALFABURST
observing frequency this corresponds to a dispersed pulse with a maximum DM of
3500~pc~cm$^{-3}$). It could therefore be a RRAT which we serendipitously
detected at the correct moment, or it could be an individual pulse from a
pulsar. This event is similar to FRB010621 \citep{2011MNRAS.415.3065K}
which is likely of Galactic origin \citep{2014MNRAS.440..353B}.
\cite{2012MNRAS.425L..71K} suggest FRB010621 is due to a pulsar giant pulse or
annihilating black holes. The second component would seem to rule out the latter
interpretation in this instance. This region has been previously surveyed with
PALFA and the Parkes Multi-beam Survey \citep{2001MNRAS.328...17M} with no
significant detection of a pulsar at this DM.
The pulse appears brighter at higher frequencies, which could be due to
scintillation. Another reason for this frequency-dependent structure is that the
pointing of the telescope is changing during the total dispersion time of the
pulse within the observed band. As the pointing moves, the corresponding
telescope gain also changes. There was a higher beam gain at the beginning of
the pulse compared to the end of the pulse, inducing a frequency-dependent gain
response due to the beam, also known as \emph{spectral colorization}. A more
detailed analysis of this event and the results of follow-up observations will
be presented elsewhere.
\section{Expected FRB Events}
\label{sec:event_rates}
The currently known 25 FRBs vary significantly in \gls{dm}, pulse width, and
flux density. Despite this, we assume a simple model to derive an expected event
rate with our survey\footnote{Jupyter notebooks used to carry out this work are
freely available and are hosted at
https://github.com/griffinfoster/alfaburst-initial-survey}. We use a model
\citep[see equation 9 of][]{2013MNRAS.436L...5L} which assumes \gls{frb} sources
are standard candles with a fixed spectral index, uniformly distributed in
co-moving volume. The event rates in this model have been updated to the
event rates reported in \cite{2016MNRAS.460.3370C}. For an observed pulse of
typical width 4~ms (see below), these event rates are in the range 1100--7000
bursts per sky per day, where the range indicates statistical errors for the
99\% credible region.
Taking advantage of the large forward gain of Arecibo, we account for the
sensitivity of the 7 \gls{alfa} beams out to the outer edge of the first side lobe.
In practice we do this by splitting the beam and first side lobe into shells of
progressively lower gain but larger sky coverage, and integrate to obtain the
totals. An \gls{alfa} beam is approximately 3.8'~$\times$~3.3' at \gls{fwhm}
across the band. The ALFA beam is known to be relatively fixed in size
across the band due to the optics \citep{GALFAbeam}. Given the average
observing time per beam of 518 hours this results in a survey coverage of $\sim
10 \; \textrm{deg}^2 \; \textrm{hours}$ when accounting for all 7 beams. This is
a small survey coverage compared to most other \gls{frb} surveys, primarily due
to the narrow beam size of Arecibo. The combined Parkes multi-beam surveys have
a total of 8231 observation hours \citep{2016MNRAS.460.3370C}, and a FWHM survey
metric of $\sim 4500 \; \textrm{deg}^2$ hours. ALFABURST does not compete with
other surveys on sky coverage, rather it competes on sensitivity. This results
in probing a greater redshift range than for Parkes. Using Equation 6 of
\cite{2015MNRAS.452.1254K}, a single-pulse-search pipeline is sensitive to
pulses with a minimum flux density
\begin{equation}
S_{\rm min} = \textrm{SEFD} \frac{\textrm{S/N}_{\rm min}}{\sqrt{D \; \Delta \tau \;
\Delta \nu}}
\end{equation}
which is a function of the telescope \gls{sefd}, the minimum S/N detection level
$\textrm{S/N}_{\rm min}$, and the decimation factor $D$ relative to the native
instrumental time resolution $\Delta\tau$; the decimation arises because the
search pipeline averages together spectra to search for scattered pulses.
ALFABURST has a native resolution of $\Delta \tau = 256 \; \mu s$, effective
bandwidth $\Delta \nu = 56\; \textrm{MHz}$, and $\textrm{S/N}_{\rm min} = 10$.
The FWHM \gls{sefd} of the
\gls{alfa} receiver is approximately 3~Jy across the band for all beams.
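For concreteness, the short sketch below (in the spirit of the publicly
released notebooks) evaluates this expression for the quoted ALFABURST
parameters; the decimation factor $D=64$ for the widest searched width is our
assumption of the nearest power of two to 16~ms:
\begin{verbatim}
import math

# Minimum detectable flux density (Jy) for a single-pulse search,
# following the radiometer-equation form given above.
def s_min(sefd_jy, snr_min, D, dtau_s, dnu_hz):
    return sefd_jy * snr_min / math.sqrt(D * dtau_s * dnu_hz)

SEFD, SNR, DTAU, DNU = 3.0, 10.0, 256e-6, 56e6  # ALFABURST values

print(s_min(SEFD, SNR, 1, DTAU, DNU))   # ~0.25 Jy at 256 us (D = 1)
print(s_min(SEFD, SNR, 64, DTAU, DNU))  # ~0.031 Jy at ~16 ms (D = 64)
\end{verbatim}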
The \gls{sps} pipeline is configured to search for pulses from 256~$\mu$s to 16
ms. Considering only the main beam lobe, a perfect matched filter would result
in a sensitivity to pulses with a minimum frequency-averaged flux of $S_{256
\mu\textrm{s}} = 250$ mJy to $S_{16 \; \textrm{ms}} = 31$~mJy
\citep{2015MNRAS.452.1254K}. Figure \ref{fig:fwhm_sefd_z} shows the peak flux
density of the standard candle \gls{frb} model as a function of source
redshift for different model spectral indices. The dashed lines of constant flux
show the sensitivity of the ALFABURST search pipeline to pulses of different
widths. Assuming a positive spectral index model ($\alpha=1.4$) results in a
sensitivity out to the maximum redshift/\gls{dm} for pulses with widths of at
least 1 ms. A flat spectral index model results in sensitivity from $z \sim 1.5$
(256~$\mu$s) out to $z \sim 5$ (16~ms) depending on pulse width. A negative
spectral index model ($\alpha \sim -1.4$) limits the survey to $z < 3$ for all
pulse widths.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/fwhm_sefd_z_relation.pdf}
\caption{Sensitivity of the ALFABURST search pipeline (dashed) to FRB pulses
assuming a standard candle model using different spectral index models
(solid).
}
\label{fig:fwhm_sefd_z}
\end{figure}
If we assume a simple model of $\alpha=0$ as we have limited information about
the source spectral index, and a pulse width of 4 ms as that is an approximate
median pulse width of reported \glspl{frb}, then this results in a maximum
redshift of $z=3.4$ (a co-moving distance of 6.8 Gpc) and a survey volume of $6
\times 10^5$ Mpc$^3$ when using all 7 \gls{alfa} beams. The number of galaxies
sampled in this volume is $6 \times 10^3$ assuming a constant galaxy number
density of $10^{-2}$ per Mpc$^3$. The volumetric event rate from
\cite{2013Sci...341...53T} is stated to be $R_{\textrm{FRB}} = 10^{-3}$
\glspl{frb} per galaxy per year. Based on the more realistic lower rates found
by \cite{2016MNRAS.460.3370C} from a larger sample of discoveries, we adopt
$R_{\textrm{FRB}}$ to be in the range $1.1 \times 10^{-4}$--$7.0 \times
10^{-4}$ \glspl{frb} per galaxy per year. With these assumptions, we do not
expect any FRB detections based on the current observation time. We note once
again that the areal coverage used in this calculation is only based on the
sensitivity and size of the main beam lobe.
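The arithmetic behind this statement can be written out directly; the sketch
below is illustrative only and uses the values quoted in this section:
\begin{verbatim}
# Expected main-lobe FRB count from the quoted survey numbers.
V_mpc3 = 6e5             # survey volume out to z = 3.4 [Mpc^3]
n_gal  = 1e-2            # galaxy number density [Mpc^-3]
T_yr   = 518.0 / 8766.0  # average on-sky time [yr]

N_gal = n_gal * V_mpc3   # ~6e3 galaxies sampled
for R in (1.1e-4, 7.0e-4):    # FRBs per galaxy per year
    print(R * N_gal * T_yr)   # ~0.04 to ~0.25 expected events
\end{verbatim}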
As mentioned above, it is also worth taking into account the entire first side
lobes of the beams, as Arecibo would be sensitive enough to detect most previously
reported \glspl{frb} in them. Using the parameterized \gls{alfa} beam model (Figure
\ref{fig:alfa_beam}) \citep{GALFAbeam} we can compute the \gls{frb} survey
metric and expected rates as a function of beam sensitivity. The first side
lobes peak at around $-9$ dB and provide a significant increase in sky coverage
compared to just the primary lobes.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/ALFA_beam_1425MHz_dB.pdf}
\caption{Primary and first side lobe model of the ALFA receiver in
decibels, cut off at $-30$ dB. The first side lobes peak at around $-9$ dB.
}
\label{fig:alfa_beam}
\end{figure}
The total survey metric can be computed as a function of the beam sensitivity by
integrating over the beam (Figure \ref{fig:survey_metric_sense}). We convert the
beam model to units of Jy by assuming that the $-3$ dB point corresponds to the
\gls{fwhm} SEFD of 3~Jy. The survey metric increases to approximately $26 \;
\textrm{deg}^2$ hours by including more of the primary beam beyond the
\gls{fwhm} point. The steep further increase in the survey metric seen in Figure
\ref{fig:survey_metric_sense} arises from including the first side lobes. The
long tail comes from the residual sensitivity by integrating over the remaining
beam. The beam model and polynomial fits to the survey metric curves are
included in the event rate notebooks.
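To illustrate the shell integration described above, the toy sketch below
assumes a circular Gaussian main lobe purely for demonstration (the analysis
itself uses the parameterized \gls{alfa} beam model); with the $-3$ dB point
pinned to the 3~Jy FWHM SEFD it roughly reproduces the $\sim 10$ and $\sim 26
\; \textrm{deg}^2$ hour figures quoted above:
\begin{verbatim}
import numpy as np

# Toy shell integration: convert a beam profile to SEFD(theta) and
# accumulate sky coverage down to a chosen sensitivity floor.
theta = np.linspace(0.0, 6.0, 600)  # radius [arcmin]
fwhm  = 3.5                         # approximate ALFA FWHM [arcmin]
gain  = np.exp(-4 * np.log(2) * (theta / fwhm) ** 2)

sefd = 3.0 * 0.5 / gain   # -3 dB point pinned to the 3 Jy FWHM SEFD
beams, hours = 7, 518.0
for floor in (3.0, 6.0, 12.0):             # SEFD floor [Jy]
    r = theta[sefd <= floor].max() / 60.0  # shell radius [deg]
    print(floor, beams * np.pi * r**2 * hours)  # metric [deg^2 hr]
\end{verbatim}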
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/full_survey_metric_sense.pdf}
\caption{Survey metric as a function of the ALFA receiver minimum
sensitivity using the ALFA primary and first side lobes. The $-9$ dB point
(green circle) which is the beginning of the first side lobe sensitivity and
$-12$~dB point (red square) which is the FWHM of the first side lobe are
marked.
}
\label{fig:survey_metric_sense}
\end{figure}
The survey volume is significantly increased by including a large portion of the
beam. It is not possible to put together a figure similar to Figure
\ref{fig:fwhm_sefd_z} when considering the full beam. It is however possible,
under the assumption of flat intrinsic FRB spectra, to compute the maximum
redshift as a function of beam size and sensitivity. Plotting the survey metric
as a function of maximum redshift (Figure \ref{fig:full_sefd_z}) shows how the
full beam model increases the survey metric as a function of redshift. The total
survey volume is computed by integrating over redshift. Including additional
ALFA side lobes beyond the first side lobes results in only a minimal increase
in the survey volume.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/full_sefd_z_relation.pdf}
\caption{Survey metric as a function of redshift using the standard candle
model with a flat spectral index ($\alpha=0$) and pulse width of 4 ms. The
bump out to $z=1.5$ is due to including the ALFA first side lobes.
Markers indicate the $-9$ dB (green circle) and $-12$ dB (red square) of the
ALFA beam.
}
\label{fig:full_sefd_z}
\end{figure}
The integrated survey volume out to the first side lobe is $5.2 \times
10^6$~Mpc$^3$. The expected number of \glspl{frb} in the survey is 0--2 when
using the galaxy number density and range of $R_{\textrm{FRB}}$ stated above.
Though this event rate model is more complex, it provides a more realistic
assessment of the expected detection rates based on the apparent fluxes
of previously reported \glspl{frb} and a flat spectral index.
Figure \ref{fig:sensitivity_range} shows the ALFABURST sensitivity based on
pulse width and peak flux, assuming detection at boresight. The ALFABURST
sensitivity region (purple) indicates the survey would be able to detect all
previously reported \glspl{frb}. Bright \glspl{frb} such as FRB150807 and
FRB170827 would be partially clipped by the inline RFI exciser (Section
\ref{sec:rfi_excise}), but they would still be detected at a high peak S/N.
Additionally, in a multiple beam system a bright \gls{frb} would be
picked up at a lower flux in the side lobes of nearby beams. Recent detections
with UTMOST \citep{2017MNRAS.468.3746C,atel10697,atel10867} indicate that the
parameter space in pulse width should be extended. FRB170827 has a measured
pulse width of 26~ms. Currently the pipeline decimates in time out to 16~ms. The
pipeline is still sensitive to wider pulses, but at a loss in S/N as indicated
in the slope on the right side of the shaded region of Figure
\ref{fig:sensitivity_range}. Similarly, the left side of the region is sloped
as ALFABURST is sensitive to bright pulses with widths narrower than $256~\mu$s.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figures/sensitivity_range.pdf}
\caption{ALFABURST single pulse sensitivity (purple region).
Previously detected FRBs from Parkes (black triangle), GBT
(red circle), Arecibo (white diamond), UTMOST (teal pentagon), and ASKAP
(yellow-green hexagon) are plotted for reference. Lines of constant fluence
(solid) are plotted for reference. The fluence completeness (dashed) is
0.5~Jy~ms out to pulse widths of 16~ms.
}
\label{fig:sensitivity_range}
\end{figure}
The fluence completeness of the survey \citep{2015MNRAS.447.2852K} is determined
by the minimum detectable fluence at the maximum sampled pulse width in the
survey. ALFABURST has a fluence completeness of $0.5$ Jy ms up to a pulse width
of 16 ms (Figure \ref{fig:sensitivity_range}). All previously reported FRBs are
within this completeness sample except FRB160317 and FRB170827.
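The completeness value follows directly from the radiometer expression above;
as a quick check (sketch only, again assuming $D=64$ for the 16~ms width):
\begin{verbatim}
import math

# Minimum detectable fluence at the widest searched pulse width.
w = 16e-3                                       # pulse width [s]
s = 3.0 * 10.0 / math.sqrt(64 * 256e-6 * 56e6)  # ~31 mJy
print(s * w * 1e3)                              # ~0.5 Jy ms
\end{verbatim}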
\section{Discussion}
\label{sec:discuss}
In addition to the small searched volume, there may be other factors
contributing to our non-detection of \glspl{frb} with the ALFABURST survey. We
derived an expected event rate based on the telescope sensitivity, observing
time, and a standard candle model \citep{2013MNRAS.436L...5L} where the rate of
FRBs per host galaxy is independent of redshift. This is a simple model based on
updates to the empirical event rates from detections in the \gls{htru} survey
\citep{2013Sci...341...53T} by \cite{2016MNRAS.460.3370C}, and assumes that FRBs
are singular events. All of these assumptions are subject to uncertainty. As
shown by recent statistical studies of the Parkes FRBs, there is growing
evidence that they are not standard candles, and their event rate is redshift
dependent \citep{2016MNRAS.458..708C,ranethesis}. In addition, the recent
detections of bright, high-DM \glspl{frb} with ASKAP \citep{2017ApJ...841L..12B}
and UTMOST \citep{2017MNRAS.468.3746C,atel10697} also call into question the
assumption that \glspl{frb} are standard candles. The repeating nature of
FRB121102 \citep{2016Natur.531..202S} indicates that there could be multiple
classes of \gls{frb} progenitors, or this standard candle model does not
accurately model event rates. The fact that our simple estimate of 0--2
detections so far is broadly consistent with our actual null detection indicates
that our results are not highly sensitive to these assumptions, but that further
ALFABURST observations will begin to probe the variety of currently highly
uncertain features of the FRB population. We discuss these issues, and other
potentially relevant factors, further below.
The limited processing bandwidth of ALFABURST may be a cause of the survey
non-detection. Multiple detected \glspl{frb} show apparent scintillation and
steep spectral indices. It is not possible to differentiate between an apparent
spectral index induced by the beam or an absolute spectral index from the
source. The localization and repeated detections of FRB121102, however, show
there is significant spectral variation, either intrinsic to the source or due
to the intervening medium. Other \glspl{frb} show frequency-dependent structure
which could be due to beam colorization, intrinsic structure, or due to an
intermediate effect. Plasma lenses in the \gls{frb} progenitor host galaxy could
be modulating the pulse amplitude as a function of frequency and time (if the
source repeats) \citep{2017ApJ...842...35C}. This effect introduces an
additional uncertainty in the \gls{frb} rate modeling as the apparent spectral
indices of detected \glspl{frb} may not be intrinsic. Thus, the observed
frequency structure in an \gls{frb} (repeating or not) would be dependent on
multiple factors including observing frequency, bandwidth, epoch, and even sky
direction. If an \gls{frb} did occur in the field of view of the telescope while
ALFABURST was in operation we could have been unlucky, scintillation or lensing
having caused the pulse in the band to go below the detection threshold.
Assuming no scintillation or lensing, an increase to the full \gls{alfa} band
would result in a $\sqrt{6}$ increase in sensitivity compared to what we
currently have. Equally important, however, is a more complete sampling of the
frequency space if these effects are modulating the pulse.
\cite{2015MNRAS.451.3278M} conclude that the apparent deficit of \glspl{frb} at
low Galactic latitudes is due to diffractive interstellar scintillation. Their
model shows that the true event rate is a factor of $\sim 4$ lower than the rate
reported in \cite{2013Sci...341...53T}, which is also the rate used in the
standard candle model \citep{2013MNRAS.436L...5L}. The ALFABURST survey is
evenly split across high and low Galactic latitudes. \cite{2015MNRAS.451.3278M}
predict that the increase in sensitivity of Arecibo compared to Parkes should
result in a factor of 14 increase in detections, assuming a similar bandwidth
($\sim 300$ MHz). Accounting for the smaller bandwidth of ALFABURST means there
should still be a factor of a few increase in rates. This non-detection result
indicates that the \cite{2015MNRAS.451.3278M} flux density distribution is not
as steep as predicted, or that the source count distribution begins to
flatten below the Parkes sensitivity threshold \citep[for further discussion on
FRB source counts, see][]{2017arXiv171011493M}.
The sensitivity of Arecibo allows the ALFABURST survey to probe a search volume
out to higher redshifts than other surveys. Our number estimates have assumed
that the density of sources per unit co-moving volume is constant. If FRBs
are standard candles and there is a peak similar to the star formation
rate around $z=2$ \citep{2014ARA&A..52..415M}, then the event rate that
our deeper ALFABURST survey probes would actually be higher than our simple
estimates. \citet{2016MNRAS.458..708C} and \citet{ranethesis} show that a
larger sample of FRBs in the Parkes surveys is currently required in order to
distinguish between a constant density versus a redshift dependent model.
Neglecting other factors that might hinder detection, and keeping in mind
the standard candle assumption, our null result suggests that the density of
FRBs per unit co-moving volume does not change substantially.
If \glspl{frb} are inherently not flat-spectrum sources, then their fluxes will
be modified substantially: a steeper negative spectrum population would be
harder to detect, while a rising spectrum population would be more readily
detectable. \cite{2017arXiv170507553L} report FRB121102 to be band limited
during simultaneous observation campaigns using multiple telescopes to cover a
broad range of the radio band. \cite{atel10675} observed 15 pulses from
FRB121102 across the 4-8 GHz band and reported spectral variation over a brief
period of time. A high redshift, band-limited \gls{frb}, which ALFABURST is
sensitive to, could be shifted below L-band. Such a pulse would not be detected
with ALFABURST.
\section{Conclusions and Future Work}
\label{sec:future_work}
We have described the implementation and initial operations of a commensal
search for transient dispersed pulses using the ALFA receiver on the Arecibo
telescope. In our observations carried out so far, we have detected 17
previously known pulsars and found one new high-DM transient in the Galactic
Plane. Follow-up observations will hopefully reveal the true nature
of the source. This serendipitous discovery during a slew shows the importance
of developing commensal backends for transient searches on large radio
telescopes.
No new FRBs were found in our observations to date. This appears to be broadly
consistent with the expectations from a simple model in which FRBs are treated
as flat-spectrum standard candles uniformly distributed per unit co-moving
volume. We expect continued observations with ALFABURST to run commensally with
other ALFA projects, leading to an improvement on the event rate limits of
low-fluence \glspl{frb}. Quadrupling the current time on-sky, for example,
would lead to an expectation of several FRBs and allow us to more
quantitatively test the validity of our assumptions about their underlying
population, especially the rate dependence on redshift.
The current \gls{sps} pipeline is undergoing a significant upgrade. The input
bandwidth is limited to 56 MHz of the full 336 MHz digital band due to IO
limitations. A new pipeline developed for \gls{ska} \gls{nip} will be used to
process the full \gls{alfa} band. This will increase sensitivity, and improve
detection rates for scintillating or lensed \glspl{frb}. An improved version
of the real-time \gls{rfi} exciser is currently being developed and will be
deployed to reduce the false detection rate. The post-processing classifier and
prioritizer model is being updated to make use of an auto-encoder to select
deep features and auto-generate classes. This will allow for an improved
follow-up and analysis cycle.
Over the time period ALFABURST has been active, the use of \gls{alfa} has
decreased as a number of surveys carried out with it have come to an end. We
are currently generalizing the \gls{alfa}-specific \gls{sps} pipeline to be
used when other feeds are active. This would increase our survey time and
allow us to sample a larger portion of frequency space.
\section*{ACKNOWLEDGEMENTS}
We thank the referee for constructive comments on the manuscript. ALFABURST
activities are supported by a National Science Foundation (NSF) award
AST-1616042. MAM was supported by NSF award number AST-1517003. MPS and DRL
were supported by NSF award number OIA-1458952. A.K., J.C., G.F. would like to
thank the Leverhulme Trust for supporting this work. G.F., D.M., A.S.
acknowledge support from the Breakthrough Listen Initiative. Breakthrough
Listen is managed by the Breakthrough Initiatives, sponsored by the Breakthrough
Prize Foundation\footnote{breakthroughinitiatives.org}.
\bibliographystyle{mnras}
\section{Introduction}
In the theory of quantum fields on curved space-times
one considers gravity as a classical background
and investigates quantum fields propagating
on this background.
The structure of spacetime is described by a
manifold ${\cal M}$ with metric $g_{\mu\nu}$.
Because of the
large difference between the Planck scale ($10^{-33}$cm)
and scales relevant for the present standard model
($\geq 10^{-17}$cm) the range of validity
of this approximation should include a wide variety
of interesting phenomena, such as
particle creation near a black hole with Schwarzschild
radius much greater than the Planck length.
The difficulties in the transition from flat
to curved spacetime lie in the absence of the notion
of global inertial observers or of Poincar\'e
transformations which underlie the concept of
particles in Minkowski spacetime.
In flat spacetime, Poincar\'e symmetry is
used to pick out a preferred irreducible representation
of the canonical commutation relations.
This is achieved by selecting an
invariant vacuum state and hence a particle notion.
In a general curved spacetime
there does not appear to be any preferred concept of
particles. If one
accepts that quantum field theory
on general curved spacetime is a quantum theory of \textit{fields},
not particles, then the existence of global
inertial observers is irrelevant for the formulation of
the theory. For linear fields a satisfactory
theory can be constructed.
Recently Brunetti and Fredenhagen \cite{Fredenhagen}
extended the Epstein-Glaser scheme to curved
space-times (generalising
an earlier attempt by Bunch \cite{Bunch})
and proved perturbative
renormalizability of $\lambda\phi^4$.
The framework and structure of Quantum field theory
in curved space-times emerged from Parker's analysis
of particle creation in the very early universe
\cite{Parker}. The theory received enormous impetus
from Hawking's discovery that black holes radiate
as black bodies due to particle creation \cite{Hawking}.
A comprehensive summary of the work
can be found in the books
\cite{Birrell}.
\section{Quantum Fields in Curved Spacetime}
In a general spacetime no analogue of a 'positive frequency
subspace' is available and as a consequence the states
of the quantum field will not possess a physically
meaningful particle interpretation. In addition,
there are spacetimes, e.g. those with time-like singularities,
in which solutions of the wave equation cannot
be characterised by their initial values.
The condition of \textit{global hyperbolicity}
of $({\cal M},g_{\mu\nu})$ excludes such 'pathological'
spacetimes and ensures that
the field equations have a well posed
initial value formulation.
Let $\Sigma\subset{\cal M}$ be a hypersurface
whose points cannot be joined
by time-like curves. We define the \textit{domain
of dependence of} $\Sigma$ by
\eqnn{
\hbox{D}(\Sigma)=\{p\in{\cal M}\vert\hbox{every inextendible
causal curve through $p$ intersects } \Sigma\}.}
If D$(\Sigma)={\cal M}$, $\Sigma$ is called a
\textit{Cauchy surface} for the spacetime and
${\cal M}$ is called \textit{globally hyperbolic}.
Globally hyperbolic spacetimes can be
\textit{foliated} by a one-parameter family of smooth
Cauchy surfaces $\Sigma_t$, i.e. a smooth 'time
coordinate' $t$ can be chosen on ${\cal M}$ such that
each surface of constant $t$ is a Cauchy surface \cite{Geroch}.
There is a \textit{well posed initial
value problem} for linear wave equations \cite{HawEl}.
For example, given smooth initial data
$\phi_0,\dot\phi_0$, then there
exists a unique solution $\phi$ of the \textit{Klein-Gordon
equation}
\eqnl{
\Box_g\phi+m^2\phi=0,\qquad \Box_g={1\over \sqrt{-g}}\partial_\mu(\sqrt{-g}
g^{\mu\nu}\partial_\nu)}{KK}
which is smooth
on all of ${\cal M}$, such that on $\Sigma$ we have
$\phi=\phi_0\mtxt{and}n^\mu\nabla_\mu\phi=\dot\phi_0,$
where $n^\mu$ is the unit future-directed normal to
$\Sigma$. In addition, $\phi$
varies continuously with the initial data.
\par\noindent
For the phase-space formulation
we slice ${\cal M}$ by space-like
Cauchy surfaces $\Sigma_t$ and introduce unit normal
vector fields $n^\mu$ to $\Sigma_t$. The spacetime metric
$g_{\mu\nu}$ induces a spatial metric $h_{\mu\nu}$ on
each $\Sigma_t$ by the formula
\eqnn{
g_{\mu\nu}=n_\mu n_\nu-h_{\mu\nu}.}
Let $t^\mu$ be a 'time evolution' vector field on ${\cal M}$
satisfying $t^\mu \nabla_\mu t=1$. We decompose it into
its parts normal and tangential to $\Sigma_t$,
\eqnn{
t^\mu=Nn^\mu+N^\mu,}
where we have defined the \textit{lapse function} $N$ and
the \textit{shift vector} $N^\mu$ tangential to the $\Sigma_t$.
Now we introduce adapted coordinates $x^\mu=(t,x^i), i=1,2,3$ with
$t^\mu\nabla_\mu x^i=0$, so that $t^\mu\nabla_\mu=\partial_t$
and $N^\mu\partial_\mu=N^i\partial_i$. The metric coefficients in
this coordinate system are
\eqnn{
g_{00}=g(\partial_t,\partial_t)=N^2-N^iN_i\mtxt{and}
g_{0i}=g(\partial_t,\partial_i)=-N_i,}
where $N_i=h_{ij}N^j$,
so that
\eqngr{
ds^2&=&(Ndt)^2-h_{ij}(N^idt+dx^i)(N^jdt+dx^j)}
{(\partial\phi)^2&=&{1\over N^2}(\partial_0\phi-N^i\partial_i\phi)^2-
h^{ij}\partial_i\phi\partial_j\phi.}
The determinant $g$ of the $4$-metric is
related to the determinant $h$ of the $3$-metric as $g=-N^2h$.
Inserting these results into the Klein-Gordon
action
\eqnn{
S=\int L dt={1\over 2}\int \eta\Big(g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi
-m^2\phi^2\Big),\qquad \eta=\sqrt{\vert g\vert}d^4x,}
one obtains for the momentum density, $\pi$, conjugate
to the configuration variable $\phi$ on $\Sigma_t$
\eqnn{
\pi={\partial L\over \partial\dot\phi}={\sqrt{h}\over N}\big(
\dot\phi-N^i\partial_i\phi\big)=\sqrt{h}\big(n^\mu\partial_\mu\phi\big).}
A point in classical phase space
consists of the specification of functions
$(\phi,\pi)$ on a Cauchy surface.
By the result of Hawking and Ellis, smooth $(\phi,\pi)$
give rise to a unique solution to \refs{KK}.
The space of solutions is independent on the choice
of the Cauchy surface.\par
For two (complex) solutions of the Klein-Gordon equation
the inner product
\eqnn{(u_1,u_2)\equiv
i\int\limits_{\Sigma}\Big(\bar u_1 n^\mu\nabla_\mu u_2-
(n^\mu\nabla_\mu \bar u_1)u_2\Big)\sqrt{h}\,d^3x=
i\int\big(\bar u_1\pi_2-\bar\pi_1 u_2\big)d^3x}
defines a natural symplectic structure.
Natural means, that $(u_1,u_2)$ is independent of the
choice of $\Sigma$.
This inner product is not positive definite.
Let us introduce a complete set of conjugate pairs
of solutions $(u_k,\bar u_k)$ of the
Klein-Gordon equation\footnote{the $k$ are
any labels, not necessarily the momentum}
satisfying the following ortho-normality conditions
\eqnn{
(u_k,u_{k^\prime})=\delta(k,k^\prime)\Rightarrow
(\bar u_k,\bar u_{k^\prime})=-\delta(k,k^\prime)
\mtxt{and}
(u_k,\bar u_{k^\prime})=0.}
There will be an infinity of such sets. Now we expand
the field operator in terms of these modes:
\eqnn{
\phi=\int d\mu(k)\Big(a_k u_k+a_k^\dagger \bar u_k\Big)
\mtxt{and}
\pi=\int d\mu(k)\Big(a_k \pi_k+a_k^\dagger\bar\pi_k\Big),}
so that
\eqnn{
(u_k,\phi)=a_k\mtxt{and}(\bar u_k,\phi)=-a^\dagger_k.}
By using the completeness of the $u_k$ and the
canonical commutation relations
one can show that the operator-valued
coefficients $(a_k,a^\dagger_k)$ satisfy the
usual commutation relations
\eqnl{
[a_k,a_{k^\prime}]=[a^\dagger_k,a^\dagger_{k^\prime}]=0\mtxt{and}
[a_k,a^\dagger_{k^\prime}]=\delta(k,k^\prime).}{comrel}
We choose the Hilbert space ${\cal H}$ to be the Fock space
built from a 'vacuum' state $\Omega_u$ satisfying
\eqnl{
a_k\Omega_u=0\mtxt{for all}k,\qquad (\Omega_u,\Omega_u)_{{\cal H}}
=1.}{comrel1}
The 'vectors' $\Omega_u,a^\dagger_k\Omega_u,\dots$
comprise a basis of ${\cal H}$. The scalar product
given by (\ref{comrel},\ref{comrel1}) is positive-definite.\par\noindent
If $(v_p,\bar v_p)$ is a
second set of basis functions, we may
as well expand the field operator in terms of this
set
\eqnn{
\phi=\int d\mu(p)\Big(b_p v_p+
b_p^\dagger\bar v_p\Big).}
The second set will be linearly related to the first one by
\eqnn{
v_p=\int d\mu(k)\Big((u_k,v_p)u_k
-(\bar u_k,v_p)\bar u_k\Big)\equiv
\int d\mu(k)\Big(\alpha(p,k)u_k+\beta(p,k)\bar u_k
\Big).}
The inverse transformation reads
\eqnn{
u_k=\int d\mu(p)\Big(v_p\bar\alpha(p,k)-\bar v_p\beta(p,k)\Big).}
As a consequence, the Bogolubov coefficients are related by
\eqnl{
\alpha\al^\dagger-\beta\beta^\dagger=1\mtxt{and}
\alpha\beta^t-\beta\alpha^t=0.}{bogrel}
If the $\beta(k,p)$ vanish, then the 'vacuum' is left
unchanged, but if they do not, we have
a nontrivial \textit{Bogolubov transformation}
\eqnl{
\pmatrix{a&a^\dagger}=\pmatrix{b&b^\dagger}\pmatrix{\alpha&\beta\cr
\bar\beta&\bar\alpha}\mtxt{and}
\pmatrix{b\cr b^\dagger}=\pmatrix{\bar\alpha&-\bar\beta\cr
-\beta&\alpha}\pmatrix{a\cr a^\dagger}}{bogtrans}
which mixes the annihilation and creations operators.
If one defines a Fock space and a 'vacuum' corresponding
to the first mode expansion, $a_k\Omega_u=0$,
then the expectation of the number operator $b^\dagger_p b_p$ defined
with respect to the second mode expansion is
\eqnn{
\big(\Omega_u,b_p^\dagger b_p\Omega_u\big)
=\int d\mu(k)\vert \beta(p,k)\vert^2.}
That is, the old vacuum contains new particles. It may
even contain an infinite number of new particles, in
which case the two Fock spaces cannot be related
by a unitary transformation.
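As a minimal illustration of these formulae, consider a single pair of modes
mixed with $\alpha=\cosh r$ and $\beta=\sinh r$, the simplest solution of
\refs{bogrel}; the short numerical sketch below verifies the constraint and
the induced particle number:
\begin{verbatim}
import numpy as np

# Single-mode Bogolubov transformation b = alpha a - beta a^dagger
# with alpha = cosh(r), beta = sinh(r), r a squeezing parameter.
r = 0.8
alpha, beta = np.cosh(r), np.sinh(r)
print(alpha**2 - beta**2)  # = 1, the constraint from (bogrel)
print(beta**2)             # <n_b> in the a-vacuum: |beta|^2 quanta
\end{verbatim}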
\textbf{Stationary and static spacetimes.}
A spacetime is \textit{stationary} if there exist
coordinates for which the metric
is time-independent.
This property holds iff spacetime admits
a time-like Killing field $K=K^\mu\partial_\mu$
and hence a natural choice for the mode functions $u_k$:
We may scale $K$ such that the Killing time $t$
is the proper time measured by at least one comoving
clock. Now we may choose as basis functions $u_k$ the
eigenfunctions of the Lie derivative,
\eqnn{
iL_Ku_k=\omega(k)u_k\mtxt{and}iL_K\bar u_k=-\omega(k)\bar u_k,}
where the $\omega(k)>0$ are constant.
The $\omega(k)$ are the frequencies relative to
the particular comoving clock and the $u_k$ and $\bar u_k$
are the positive and negative frequency solutions,
respectively. Now the construction of the vacuum and
Fock space is done as described above.\par\noindent
In a \textit{static spacetime}, $K$
is everywhere orthogonal to a family
of hyper-surfaces and hence
satisfies the Frobenius condition
$\tilde K\wedge d\tilde K=0,\quad \tilde K=K_\mu dx^\mu.$
We may introduce adapted coordinates:
$t$ along the congruence ($K=\partial_t$) and $x^i$ in one hypersurface
such that the metric is time-independent
and the shift vector $N_i$ vanishes,
\eqnn{
(g_{\mu\nu})=\pmatrix{N^2(x^i)&0\cr 0&-h_{ij}(x^i)}.}
As modes we use
\eqnn{
u_k={1\over \sqrt{2\omega(k)}}e^{-i\omega(k)t}\phi_k(x^i)}
which diagonalise $L_K$ and for which
the Klein-Gordon equation simplifies to
\eqnn{{\cal K}\phi_k\equiv
\Big(-{N\over \sqrt{h}}\partial_i\big(N\sqrt{h}
h^{ij}\partial_j\big)+N^2m^2\Big)\phi_k=\omega_k^2\phi_k.}
Since $n^\mu\partial_\mu =N^{-1}\partial_t$,
the inner product of two mode functions is
\eqnn{
(u_1,u_2)={\omega_1+\omega_2\over 2\sqrt{\omega_1\omega_2}}\;e^{i(\omega_1-\omega_2)t}
\underbrace{\int \bar\phi_1\phi_2\;N^{-1}\sqrt{h}\,d^3x}_{(\phi_1,
\phi_2)_2}.}
The elliptic operator ${\cal K}$ is symmetric
with respect to the $L_2$ scalar product $(.,.)_2$
and may be diagonalised. Its positive eigenvalues are the
$\omega^2(k)$ and its
eigenfunctions form a complete 'orthonormal' set on $\Sigma$,
$(\phi_k,\phi_{k^\prime})_2=\delta(k,k^\prime)$. It follows then
that the $u_k$ form a complete set with the properties
discussed earlier.
Ashtekar and Magnon \cite{ashmag}
and Kay \cite{kay} gave a rigorous construction
of the Hilbert space and Hamiltonian in a stationary spacetime.
They started with a \textit{conserved positive scalar product} $(.,.)_E$
\eqnn{
(\phi_1,\phi_2)_E=\int\limits_\Sigma T_{\mu\nu}(\phi_1,\phi_2)
K^\nu n^\mu\sqrt{h}d^3x,}
where the bilinear-form on the space of complex solutions
is defined by the metric 'stress tensor':
\eqnn{
T_{\mu\nu}(\phi,\psi)
={1\over 2}\Big(\phi^\dagger,_\mu\psi,_\nu+\phi^\dagger,_\nu\psi,_\mu
-g_{\mu\nu}\big(\nabla\phi^\dagger \nabla\psi-
m^2\phi^\dagger\psi\big)\Big).}
This 'stress tensor' is symmetric and conserved
and hence $\nabla_\mu(T^{\mu\nu}K_\nu)=0$.
It follows that the norm is invariant
under the time-translation map
\eqnn{
\alpha_t^*(\phi)=\phi\circ\alpha_t\mtxt{or}
\big(\alpha_t^*(\phi)\big)(x)=\phi\big(\alpha_t(x)\big),}
generated by the Killing field $K$. When completing the
space of complex solutions in the 'energy-norm'
one gets a complex (auxiliary) Hilbert space $\tilde {\cal H}$.
The time translation map extends to $\tilde{\cal H}$ and defines
a one-parameter unitary group
\eqnn{
\alpha^*_t=e^{i\tilde ht},\qquad \tilde h\mtxt{self-adjoint.}}
Note, that from the definition of the Lie derivative,
\eqnn{
{d\over dt}\big(\alpha^*_t\phi\big)\vert_{t=0}=-L_K\phi=
i\tilde h\phi.}
The conserved inner product
$(\phi_1,\phi_2)$
can be bounded
by the energy norm and hence extends to a quadratic
form on $\tilde{\cal H}$.
Let $\tilde {{\cal H}}^+\subset \tilde {\cal H}$ be the positive spectral subspace
in the spectral decomposition of $\tilde h$
and let $P$ be the projection map
$P:\tilde{\cal H}\to \tilde{\cal H}^+$. For all real solutions
we may now define the \textit{scalar product} as
the inner product of the projected solutions, which
are complex. The one-particle Hilbert space ${\cal H}$ is just the completion
of the space $\tilde {\cal H}^+$ of 'positive frequency solutions'
in the Klein-Gordon inner product.\par\noindent
\textbf{Hadamard states.}
For a black hole the global Killing field is not
everywhere time-like. One
may exclude the non-time-like region from space time
which corresponds to the imposition of boundary
conditions. One may also try to retain
this region but attempt to define
a meaningful vacuum by invoking physical arguments.
In general spacetimes there is no Killing vector at all.
One probably has to give up the particle
picture in this generic situation.
In (globally hyperbolic) spacetimes without any
symmetry one can still construct a well-defined
Fock space over a quasifree vacuum state,
provided that the two-point functions satisfies the
so-called Hadamard condition.
Hadamard states are states, for which the two-point
function has the following singularity structure
\eqnl{
\omega\big(\phi(x)\phi(y)\big)\equiv
\omega_2(x,y)={u\over\sigma}+v\log\sigma +w,}{hadamard}
where $\sigma(x,y)$ is the square of the geodesic distance
of $x$ and $y$ and $u,v,w$ are smooth functions on
${\cal M}$. It has been shown
that if $\omega_2$ has the Hadamard singularity structure
in a neighbourhood of a Cauchy-surface, then it has
this form everywhere \cite{fullingsweeny}. To show that,
one uses that $\omega_2$ satisfies the wave equation.
This result can then be used to show that on a
globally hyperbolic spacetime there is a wide class of
states whose two-point functions have the Hadamard singularity
structure.\par\noindent
The two-point function $\omega_2$ must be positive,
\eqnn{
\omega \big(\phi(f)^\dagger \phi(f)\big)=
\int d\mu(x)d\mu(y)\;\bar f(x)\omega_2\big(x,y\big)f(y)\geq 0,}
and must obey the Klein-Gordon equation.
These requirements determine $u$ and $v$ uniquely and put
stringent conditions on the form of $w$.
In a globally hyperbolic spacetime there are unique retarded and advanced
Green functions
\eqnn{
\Delta_{ret}(x,y)\mtxt{,} \Delta_{adv}(x,y)\mtxt{with }
\hbox{ supp}(\Delta_{ret})=\{(x,y);x\in J_+(y)\},}
where $J_+(y)$ is the causal future of $y$.
The \textit{Feynman Green function} is related
to $\omega_2$ and the advanced Green function as
\eqnn{
i\Delta_F(x,y)=\omega_2(x,y)+\Delta_{adv}(x,y).}
Since $\Delta_{adv}$ is unique,
the ambiguities of $\Delta_F$ are the same as those of
$\omega_2$. The \textit{propagator function}
\eqnn{
i\Delta(x,y)=[\phi(x),\phi(y)]=\Delta_{ret}(x,y)-\Delta_{adv}(x,y)}
determines the antisymmetric part of $\omega_2$,
\eqnn{
\omega_2(x,y)-\omega_2(y,x)=i\Delta(x,y),}
so that this part is without ambiguities.
For a scalar field without self-interaction we expect that
\eqnn{
\omega\big(\phi(x_1)\dots\phi(x_{2n-1})\big)=0,\qquad
\omega\big(\phi(x_1)\dots\phi(x_{2n})\big)=\sum_{i_1<i_2\dots <i_n\atop
j_1<j_2\dots <j_n}\prod_{k=1}^n\omega\big(
\phi(x_{i_k})\phi(x_{j_k})\big).}
A state $\omega$ fulfilling these conditions is called
\textit{quasifree}.
Now one can show that any choice of $\omega_2(x,y)$
fulfilling the properties listed above gives rise to a well-defined
Fock-space ${\cal F}=\oplus {\cal F}_n$
over a quasifree vacuum state.
The scalar-product on the 'n-particle subspace'
${\cal F}_n$ in
\eqnl{
{\cal F}_n=\{\psi\in {\cal D}({\cal M}^n)_{symm}/{\cal N}\}^{completion}
,\qquad n=0,1,2,\dots,}{hilbertn}
where ${\cal D}({\cal M}^n)$ denotes the smooth symmetric functions on
${\cal M}\times \cdots\times {\cal M}$ ($n$ factors) with compact support,
is
\eqnn{
(\psi_1,\psi_2)=\int d\mu(x_1,..,x_n,y_1,..,y_n)
\prod_{i=1}^n\omega_2(x_i,y_i)\bar\psi_1(x_1,..,x_n)\psi_2(y_1,..,
y_n),}
where $d\mu(x_1,x_2,..)=
d\mu(x_1)d\mu(x_2)\dots$. Since $\omega_2$ satisfies the
wave equation,
the functions in the image of $\Box+m^2$ have zero norm. The
set of zero-norm states ${\cal N}$ has been
divided out in order to end up with a
positive definite Hilbert space.\par\noindent
The smeared field operator is now defined in the usual way:
$\phi(f)=a(f)^\dagger+a(\bar f)$,
where
\eqngr{
\big(a(\bar f)\psi\big)_n(x_1,..,x_n)&=&
\sqrt{n+1}\int d\mu(x,y)\omega_2(x,y)f(x)\psi_{n+1}(y,x_1,..,x_n)}
{\big(a(f)^\dagger\psi\big)_n(x_1,..,x_n)&=&{1\over\sqrt{n}}
\sum\limits_{k=1}^n
f(x_k)\psi_{n-1}(x_1,..,x_{k-1},x_{k+1}..,x_n),\quad n>0}
and $(a(f)^\dagger\psi)_0=0$.
It is now easy to see that $\omega_2$ is just the Wightman function
of $\phi$ in the vacuum state $\psi_0$:
$\omega_2(x,y)=\big(\psi_0,\phi(x)\phi(y)\psi_0\big)$.
\section{The Unruh Effect}
We may ask the question how quantum fluctuations
appear to an accelerating observer? In particular,
if the observer was carrying with him a robust detector,
what would this detector register?
If the motion of the observer undergoing
constant (proper) acceleration is confined
to the $x^3$ axis, then
the world line is a hyperbola in the $x^0,x^3$ plane
with asymptotics $x^3=\pm x^0$. These asymptotics
are \textit{event horizons} for the accelerated observer.
To find a natural comoving frame we
consider a family of accelerating observers,
one for each hyperbola with asymptotics $x^3=\pm x^0$.
The coordinate system is then the comoving
one in which along each hyperbola the space coordinate
is constant while the time coordinate $\tau$ is proportional
to the proper time as measured from an
initial instant $x^0=0$ in some inertial frame.
The world lines of the uniformly accelerated particles are
the orbits of one-parameter group of Lorentz boost isometries
in the $3$-direction:
\eqnn{
\pmatrix{x^0\cr x^3}=\rho\pmatrix{\sinh \kappa t\cr \cosh \kappa t}
=e^{\kappa\omega t}\pmatrix{0\cr \rho},
\qquad(\omega^\mu_{\,\nu})=\pmatrix{0&1\cr 1&0}.}
In the comoving coordinates $(t,x^1,x^2,\rho)$
\eqnn{
ds^2=\kappa^2\rho^2 dt^2-d\rho^2-(dx^1)^2-(dx^2)^2,}
so that the proper time along a hyperbola $\rho=$const
is $\kappa\rho t$.
The orbits are tangential to the \textit{Killing field}
\eqnl{
K=\partial_t=\kappa(x^3\partial_0+x^0\partial_3)\mtxt{with}(K,K)=(\kappa\rho)^2=g_{00}.}{kill}
Some typical orbits are depicted in figure \refs{rindler1}.
\begin{figure}[ht]
\begin{minipage}[t]{15cm}
\centerline{\epsfysize=7 cm\epsffile{rindler1.eps}}
\caption{\label{rindler1}\textsl{A
Rindler-observer sees only a quarter of Minkowski
space}}
\end{minipage}
\end{figure}
Since the proper acceleration on the orbit with
$(K,K)=1$ or $\rho=1/\kappa$ is $\kappa$, it is
conventional to view the orbits of $K$ as corresponding
to a family of observers associated with an observer
who accelerates uniformly with acceleration $a=\kappa$.\par\noindent
The coordinate system $t,\rho$ covers the Rindler
wedge $R$ on which $K$ is time-like future directed.
The boundaries $H^+$ and $H^-$ of the wedge
are given by $\rho=0$ and appear as a
\textit{Killing horizon}, on which $K$ becomes null.
Beyond this event horizon the Killing vector field
becomes space-like in the regions
$F,P$ and time-like past directed in $L$.
The parameter $\kappa$ plays the role of the \textit{surface
gravity}. To see that, we set $r-2M=\rho^2/8M$
in the Schwarzschild solution
and linearise the metric near the horizon $r\sim 2M$.
One finds that
\eqnn{
ds^2\sim\underbrace{
(\kappa \rho)^2dt^2-d\rho^2}_{\hbox{\tiny 2-dim Rindler}\atop
\hbox{\tiny spacetime}}
-\underbrace{{1\over 4\kappa^2}d\Omega^2}_{\hbox{\tiny 2-sphere of} \atop
\hbox{\tiny radius }1/2\kappa}}
contains the line element of two-dimensional Rindler spacetime,
where $\kappa=1/4M$ is indeed the surface gravity of the Schwarzschild
black hole.\par\noindent
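Restoring units, $\kappa=1/4M$ corresponds to the Hawking temperature
$T_H=\hbar c^3/8\pi GMk$; a quick numerical check (an illustrative sketch in
SI units with standard constants):
\begin{verbatim}
import math

# Hawking temperature of a Schwarzschild black hole in SI units.
hbar, c, G, kB = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
M_sun = 1.989e30
print(hbar * c**3 / (8 * math.pi * G * M_sun * kB))  # ~6.2e-8 K
\end{verbatim}
Thus a solar-mass black hole radiates at $T_H\approx 6\cdot 10^{-8}\,$K,
decreasing as $1/M$.\par\noindent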
\textbf{Killing horizons
and surface gravity.}
The notion of Killing horizons is relevant
for the Hawking radiation and the thermodynamics
of black holes and can already be illustrated in
Rindler spacetime.
Let $S(x)$ be a smooth function
and consider a family of hyper-surfaces $S(x)=\,$const.
The vector fields normal to the hyper-surfaces are
\eqnn{
l=g(x)(\partial^\mu S)\partial_\mu,}
with arbitrary non-zero function $g$. If $l$ is null,
$l^2=0$, for a
particular hypersurface ${\cal N}$ in the family,
${\cal N}$ is said to be a \textit{null hypersurface}.
For example, the normal vectors to the
surfaces $S=r-2M=\,$const in Schwarzschild spacetime have norm
\eqnn{
l^2=g^2 g^{\mu\nu}\partial_\mu S\partial_\nu S=g^2\Big(1-{2M\over r}\Big),}
and the horizon at $r=2M$ is a null hypersurface.\par\noindent
Let ${\cal N}$ be a null hypersurface with normal $l$. A vector
$t$ tangent to ${\cal N}$ is characterised by $(t,l)=0$.
But since $l^2=0$, the vector $l$ is itself a tangent
vector, i.e.
\eqnn{
l^\mu={dx^\mu\over d\lambda},\mtxt{where} x^\mu(\lambda)\mtxt{is a null curve on}
{\cal N}.}
Now one can show that $\nabla_l l^\mu\vert_{{\cal N}}\sim l^\mu$,
which means that $x^\mu(\lambda)$ is a geodesic with tangent
$l$. The function $g$ can be chosen such that $\nabla_l l=0$,
i.e. so that $\lambda$ is an affine parameter.
A null hypersurface ${\cal N}$ is a \textit{Killing horizon} of a Killing
field $K$ if $K$ is normal to ${\cal N}$.\par\noindent
Let $l$ be normal to ${\cal N}$ such that $\nabla_l l=0$.
Then, since on the Killing horizon $K=fl$
for some function $f$, it follows that
\eqnl{
\nabla_K K^\mu=fl^\nu\nabla_\nu(fl^\mu)=fl^\mu l^\nu\partial_\nu f
=(\nabla_K\log \vert f\vert)K^\mu\equiv \kappa K^\mu\mtxt{on}
{\cal N}.}{surfgrav}
One can show that the \textit{surface gravity}
$\kappa={1\over 2}\nabla_K\log f^2$ is constant on orbits of $K$.
If $\kappa\neq 0$, then ${\cal N}$ is a bifurcate
Killing horizon of $K$ with bifurcation $2$-sphere $B$.
In this non-degenerate case $\kappa^2$ is constant on ${\cal N}$.
For example, for the Killing field in Rindler
spacetime \refs{kill} $\nabla_K K=\pm\kappa K$
on the Killing horizon and the bifurcation 'sphere' is at
$\rho=0$.
If ${\cal N}$ is a Killing horizon of $K$ with
surface gravity $\kappa$, then it is also a Killing
horizon of $cK$ with surface gravity $c^2\kappa$.
Thus the surface gravity depends on the normalisation
of $K$. For asymptotically flat spacetimes there
is the natural normalisation
$K^2\to 1$ and $K$ future directed as $r\to \infty$.
With this normalisation the surface gravity is the acceleration of
a static particle near the horizon as measured
at spatial infinity.\par\noindent
A Killing field is uniquely determined by its value and
the value of its derivative $F_{\mu\nu}=\nabla_{[\mu}K_{\nu]}$
at any point $p\in M$. At the bifurcation point $p$ of a bifurcate
Killing horizon $K$ vanishes and hence is
determined by $F_{\mu\nu}(p)$. In two dimensions $F_{\mu\nu}(p)$
is unique up to scaling. The infinitesimal action of the isometries
$\alpha_t$ generated by $K$ takes a vector $v^\mu$ at $p$ into
\eqnl{
L_Kv^\mu=F^\mu_{\;\nu}v^\nu.}{infboost}
The nature of this map on $T_p$ depends upon the signature
of the metric. For Riemannian signature it is an infinitesimal
rotation and the orbits of $\alpha_t$ are closed
with a certain period. For Lorentz signature \refs{infboost}
is an infinitesimal Lorentz boost and the orbits of
$\alpha_t$ have the same structure as in the Rindler case.
A similar analysis applies to higher dimensions.\par\noindent
\par\noindent
The Rindler wedge $R$ is globally hyperbolic with
Cauchy hypersurface $\Sigma_R$ (see fig. \refs{rindler1}). Thus it
may be viewed as a spacetime in its own right,
and we may construct a quantum field theory
on it. When we do that, we obtain a remarkable
conclusion, namely that the standard Minkowski vacuum
$\Omega_M$ corresponds to a thermal state
in the new construction. This means, that an
accelerated observer will feel himself to be
immersed in a thermal bath of particles with
temperature proportional to his acceleration $a$
\cite{Unruh},
\eqnn{
kT=\hbar a/2\pi c.}
The noise along a
hyperbola is greater than that along a geodesic, and
this excess noise excites the Rindler detector:
A uniformly accelerated detector in its ground state
may jump spontaneously to an excited state.
Note that the temperature tends to zero when
$\hbar$ tends to zero. Such a radiation has non-zero
entropy. Since the use of an accelerated
frame seems to be unrelated to any statistical
average, the appearance of a non-vanishing entropy
is rather puzzling.
The Unruh effect shows, that at the quantum level
there is a deep relation between the theory of
relativity and the theory of fluctuations associated
with states of thermal equilibrium, two major aspects
of Einstein's work: The distinction between quantum
zero-point and thermal fluctuations is not an invariant one,
but depends on the motion of the observer.
Note that the temperature is proportional to the acceleration
$a$ of the observer. Since $a=1/\rho$ this means that
$T\rho=\hbox{const}\Longleftrightarrow T\sqrt{g_{00}}=\hbox{const.}$
This is just the \textit{Tolman-Ehrenfest relation} \cite{tolman}
for the temperature in a fluid in hydrostatic equilibrium
in a gravitational field. The factor $\sqrt{g_{00}}$ guarantees
that no work can be gained by transferring radiation
between two regions at different gravitational potentials.
Let us calculate the number of 'Rindler-particles'
in Minkowski vacuum.
To simplify the analysis, we consider a
zero-mass scalar field in
two-dimensional Minkowski space.
In the Heisenberg picture, the expansions in terms
of annihilation and creation operators are
\eqnn{
\phi=\int dk\Big(a_k u_k+h.c.\Big),
\mtxt{where}u_k={1\over\sqrt{4\pi\omega}}e^{-i\omega x^0+ikx^3},\quad
\omega=\vert k\vert}
and
\eqnn{
\phi=\int dp
\Big(b_p v_p+h.c.\Big),\mtxt{where}
v_p={1\over \sqrt{4\pi\epsilon}}\rho^{ip/\kappa}\,e^{-i\epsilon t},\quad
\;\;\epsilon=\vert p\vert.}
The $\beta$-coefficients are found to be
\eqnn{
\beta(p,k)=-(\bar u_k,v_p)=
{1\over 4\pi}\int\limits_0^\infty \Big(\sqrt{\omega\over\epsilon}-\sqrt{\epsilon\over\omega}
{1\over\kappa\rho}\Big)
e^{ik\rho}\rho^{ip}d\rho,}
where we have evaluated the time-independent 'scalar-product'
at $t=0$ for which $x^0=0$.
Using the formula
\eqnl{
\int\limits_0^\infty dx\, x^{\nu-1}e^{-(\alpha+i\beta)x}=
\Gamma(\nu)(\alpha^2+\beta^2)^{-\nu/2}e^{-i\nu\arctan(\beta/\alpha)}}{integral}
we arrive at
\eqnn{
\beta(p,k)=-{\Gamma(ip/\kappa)\over 4\pi\kappa}\,
\omega^{-i p/\kappa}\Big(\sqrt{\epsilon\over \omega}\pm{p\over\sqrt{\epsilon\omega}}
\Big)e^{\mp \pi p/2\kappa}\mtxt{for}{k\over\omega}=\pm 1,}
or at
\eqnn{
\vert \beta(p,k)\vert^2={1\over 2\pi\kappa\omega}{1\over e^{2\pi\epsilon/\kappa}-1}.}
The Minkowski spacetime vacuum is characterised by
$a_k\Omega_M=0\mtxt{for all}k$.
Assuming that this is the state of the system, the
expectation value of the occupation number as
defined by the Rindler observer,
$n_p\equiv b^\dagger_pb_p$, is found to be
\eqnl{
\big(\Omega_M,n_p\Omega_M\big)=
\int dk\vert\beta(p,k)\vert^2=
\hbox{volume}\times
{1\over e^{2\pi \epsilon/\kappa}-1}.}{rindlerequ}
Thus for an accelerated observer the quantum field
seems to be in an equilibrium state with temperature
$T=\kappa/2\pi=a/2\pi$.
An observer with $a=10^{21}\,$cm/sec$^2$ feels a
temperature $T\sim 4\cdot 10^{-2}\,$K.
Since $T$ tends to zero as $\rho\to\infty$
the Hawking temperature (i.e. temperature as measured
at spatial $\infty$) is actually zero. This is expected,
since there is nothing inside which could radiate.
But for a black hole
$T_{local}\to T_H$ at infinity
and the black hole must radiate at this temperature.\par\noindent
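The value quoted above for $a=10^{21}\,$cm/sec$^2$ follows directly from
$kT=\hbar a/2\pi c$; a quick numerical check (sketch only, SI units):
\begin{verbatim}
import math

# Unruh temperature kT = hbar a / (2 pi c) in SI units.
hbar, c, kB = 1.0546e-34, 2.998e8, 1.381e-23
a = 1.0e19                # 10^21 cm/s^2 expressed in m/s^2
print(hbar * a / (2 * math.pi * c * kB))  # ~4e-2 K
\end{verbatim}
\par\noindent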
Let us finally see how the (massless)
Feynman Green function in Minkowski spacetime,
\eqnn{
i\Delta_F(x,x^\prime)=\langle 0\vert T\big(\phi(x)\phi(x^\prime)\big)\vert 0\rangle
={i\over 4\pi^2}{1\over (x-x^\prime)^2-i\epsilon},}
appears to an accelerated observer.
Let $x=(t,\rho)$ and $x^\prime=(t^\prime,\rho)$ be two events on the world line
of an accelerated observer. Since the invariant
distance of these two events is $2\rho\sinh{\kappa\over 2}(t-t^\prime)$,
one arrives at the following spectral representation
of the Feynman-propagator as seen by this
observer
\eqnl{\Delta_F(x,x^\prime)={1\over (2\pi)^4}({\kappa\over \rho})^2
\int d^4p\, e^{-iE(t-t^\prime)}\Big(
{1\over p^2+i\epsilon}-2\pi i{\delta(p^2)\over e^{\beta\vert E\vert}-1}\Big).}{therm}
This is the finite temperature propagator. It follows,
that atoms dragged along the world
line find their excited levels populated as predicted
by a temperature $\beta^{-1}=a/2\pi$.
\section{The Stress-Energy Tensor}
Semiclassically one would expect that
back-reaction is described by the 'semiclassical
Einstein equation'
\eqnn{
G_{\mu\nu}=8\pi G \langle T_{\mu\nu}\rangle,}
where the right-hand side contains the expectation
value of the energy-momentum tensor of the relevant
quantised field in the chosen state. If the
characteristic curvature radius $L$ in a region
of spacetime is much greater then the Planck
length $l_{pl}$, then in the calculation
of $\langle T_{\mu\nu}\rangle$ one can expand
in the small parameter $\epsilon=(l_{pl}/L)^2$
and retain only the terms up to first order in
$\epsilon$ (one-loop approximation). The term of order
$\epsilon$, containing a factor $\hbar$, represents
the main quantum correction to the classical result.
In the one-loop approximation for free fields the contributions
of all fields to $\langle T_{\mu\nu}\rangle$ are additive and thus
can be studied independently.
The difficulties with defining $\langle T_{\mu\nu}\rangle=\omega(T_{\mu\nu})$
are present already in Minkowski spacetime. The divergences
are due to the vacuum zero-fluctuations.
The methods of extracting a finite, physically
meaningful part, known as renormalisation procedures,
were extensively discussed in the literature
\cite{enmom}.
A simple cure for this difficulty
is (for free fields) the \textit{normal ordering} prescription.
We first consider the ill-defined object $\phi^2(x)$,
which is part of the stress-energy tensor. We may
split the points and consider first the object
$\omega(\phi(x)\phi(y))$ which solves the Klein-Gordon equation.
This bi-distribution makes perfectly good sense.
For physically reasonable states $\omega$ in the Fock space
(e.g. states with a finite number of particles)
the singular behaviour of this bi-distribution is the
same as that belonging to the vacuum state, $\omega_0\big(\phi(x)\phi(y)\big).$
For such states the difference
\eqnn{
F(x,y)=\omega\big(\phi(x)\phi(y)\big)-\omega_0\big(\phi(x)\phi(y)\big)}
is a smooth function of its arguments. Hence, after performing
this 'vacuum subtraction' the coincidence limit may be taken.
We then define
\eqnn{
\omega\big(\phi^2(x)\big)=\lim_{x\to y}F(x,y).}
The same prescription can be used for the stress-energy
tensor. We define
\eqnl{
\omega\big(T_{\mu\nu}(x)\big)=
\lim_{x\to x^\prime}D_{\mu\nu^\prime}F(x,x^\prime),\quad
D_{\mu\nu^\prime}=
\partial_\mu\partial_{\nu^\prime}-{1\over 2} g_{\mu\nu}
\big[\partial_\alpha\partial^{\alpha^\prime}-m^2\big].}{enmomm}
In curved spacetime some restrictions should be expected on
the class of states on which $\langle T_{\mu\nu}\rangle$
can be defined this way. The \textit{Hadamard
condition} provides a restriction of exactly this
sort on the class of states.\par\noindent
Although \refs{enmomm} is not a physical
definition of expectation values
of the stress-energy tensor itself
(no preferred vacuum state, vacuum polarisation),
it sensibly defines
the \textit{differences} of the expected stress energy
between two states.
In the absence of an obvious prescription it is useful to take an
axiomatic approach. Wald showed that a renormalised
stress tensor satisfying certain reasonable physical
requirements is essentially unique
\cite{waldaxiom}. Its ambiguity
can be absorbed into redefinitions of the coupling
constants in the (generalised) gravitational field equation. Wald's
requirements are:\par\noindent
\textbf{Consistency:}
Whenever $\omega_1(\phi(x)\phi(y))-\omega_2(\phi(x)\phi(y))$
is a smooth function, then $\omega_1(T_{\mu\nu})-\omega_2(T_{\mu\nu})$
is well-defined and should be given by the above
'point-splitting' prescription.\par\noindent
\textbf{Conservation:}
There is a regularisation which respects the diffeomorphism
invariance, so that $\nabla_\nu T^{\mu\nu}=0$
holds. This property
is needed for consistency of Einstein's gravitational
field equation.\par\noindent
\textbf{Normalisation:} In Minkowski spacetime,
we have $(\Omega_M,T_{\mu\nu}\Omega_M)=0.$
\par\noindent
\textbf{Causality:}
For a fixed in-state in an asymptotically static
spacetime $\omega_{in}\big(T_{\mu\nu}(x)\big)$ is
independent of variations of $g_{\mu\nu}$ outside
the past light cone of $x$. For a fixed out-state,
$\omega_{out}\big(T_{\mu\nu}\big)$ is independent of
metric variations outside the future light cone of $x$.
\par\noindent
The Causality axiom can be replaced by a locality property,
which does not assume an asymptotically static spacetime.
The first and last properties are the key ones,
since they uniquely determine the expected stress-energy
tensor up to the addition of local curvature terms:\par\noindent
\textbf{Uniqueness theorem:} Let $T_{\mu\nu}$
and $\tilde T_{\mu\nu}$ be operators on globally
hyperbolic spacetime satisfying
the axioms of Wald. Then the difference
$U_{\mu\nu}=T_{\mu\nu}-\tilde T_{\mu\nu}$
is a multiple of the identity operator,
is conserved, $\nabla_\nu U^{\mu\nu}=0$ and
is a local tensor of the metric. That is, it
depends only on the metric and its derivatives,
via the curvature tensor, at the same point $x$.
As a consequence
$\omega(T_{\mu\nu})-\omega(\tilde T_{\mu\nu})$
is independent of the state $\omega$ and depends only
locally on curvature invariants.
The proofs of these properties are rather simple and
can be found in the standard textbooks.\par\noindent
\textbf{Calculating the stress-energy tensor.}
A 'point-splitting' prescription where one subtracts
from $\omega(\phi(x)\phi(y))$ the expectation value
$\omega_0(\phi(x)\phi(y))$ in some fixed state $\omega_0$
fulfils the consistency requirement, but cannot
fulfil the first and third axiom at the same time.
However, if one subtracts a locally constructed
bi-distribution $H(x,y)$ which satisfies the wave equation,
has a suitable singularity structure and is equal
to $(\Omega_M,\phi(x)\phi(y)\Omega_M)$ in Minkowski spacetime,
then all four properties will be satisfied.\par\noindent
To find a suitable bi-distribution one recalls the
singularity structure \refs{hadamard}
of $\omega_2(x,y)$. In Minkowski spacetime and for massless fields
$w=0$ and this suggests that we take the bi-distribution
\eqnn{
H(x,y)={u(x,y)\over \sigma}+v(x,y)\log\sigma.}
For massless fields the resulting stress-energy obeys all
properties listed above (for massive fields a slight modification
is needed). \par\noindent
\textbf{Effective action.}
The classical metric energy momentum
tensor
\eqnn{
^{cl}T_{\mu\nu}(x)=
{2\over \sqrt{\vert g\vert}}{\delta S\over \delta g^{\mu\nu}(x)}}
is symmetric and conserved (for solutions
of the field equation) for a diffeomorphism-invariant
classical action $S$. If we could construct a
diffeomorphism-invariant \textit{effective action }
$\Gamma$, whose variation
with respect to the metric yields an expectation value of
the energy momentum tensor,
\eqnn{
\langle T_{\mu\nu}(x)\rangle ={2\over \sqrt{\vert g\vert}}{\delta \Gamma\over
\delta g^{\mu\nu}(x)},}
then $\langle T_{\mu\nu}\rangle$ would be conserved by construction.
There exists a number of procedures for regularising
$\langle T_{\mu\nu}\rangle$, i.e. dimensional, point-splitting or zeta-function
regularisation, to mention the most popular ones.
Unfortunately the 'divergent part' of
$T_{\mu\nu}$ cannot be completely absorbed into the
parameters already present in the theory, i.e.
gravitational and cosmological constant and parameters
of the field theory under investigation. One finds
that one must introduce new, dimensionless parameters.\par\noindent
The regularisation and renormalisation of the
effective action is more transparent. The divergent geometric parts
of the effective action, $\Gamma=\int \eta \gamma_{div}+\Gamma_{finite}$
have in the one-loop approximation the form
\eqnn{
\gamma_{div}=A+BR+C(\hbox{Weyl})^2+D\big[(\hbox{Ricci})^2-R^2\big]
+E\nabla^2R+FR^2.}
Only the part containing $A$ and $B$ can be absorbed into
the classical action of gravity. The remaining terms
with dimensionless parameters $C-F$ lead,
upon variation with respect to the metric,
to a $2$-parameter ambiguity in the expression
for $T_{\mu\nu}$.\par\noindent
\textbf{Effective actions and $\langle T_{\mu\nu}\rangle$ in two dimensions.}
In two dimensions there are less divergent terms
in the effective action. They have the form
$\gamma_{div}=A+BR$.
The last topological term does not
contribute to $T_{\mu\nu}$ and the first one leads to
an ambiguous term $\sim Ag_{\mu\nu}$ in the energy
momentum tensor.
The symmetric stress-energy tensor has
$3$ components, two of which are (almost) determined
by $T^{\mu\nu}_{\;\; ;\nu}=0$. As independent
component we choose the trace $T=T^\mu_{\;\mu}$
which is a scalar of dimension $L^{-2}$.
The ambiguities in the reconstruction of $T^{\mu\nu}$
from its trace are most transparent if we choose
isothermal coordinates for which
\eqnn{
ds^2=e^{2\sigma}\Big((dx^0)^2-(dx^1)^2)\Big).}
This is possible in two dimensions.
Introducing null-coordinates
\eqnn{
u={1\over 2}(x^0-x^1)\mtxt{and} v={1\over 2}(x^0+ x^1)\Rightarrow
ds^2=4e^{2\sigma}dudv,}
the non-vanishing Christoffel symbols are
$\Gamma^u_{uu}=2\partial_u\sigma,\;
\Gamma^v_{vv}=2\partial_v\sigma$
and the Ricci scalar reads $R=-2e^{-2\sigma}\partial_u\partial_v\sigma$.
Rewriting the conservation in null-coordinates we obtain
\eqnl{
\partial_u \langle T_{vv}\rangle+e^{2\sigma}\partial_v\langle T\rangle=0\mtxt{,}
\partial_v \langle T_{uu}\rangle+e^{2\sigma}\partial_u\langle T\rangle=0,}{conslc}
where $T=T^\mu_{\;\mu}=e^{-2\sigma}T_{uv}$.
The trace $\langle T\rangle$ determines $\langle T_{vv}\rangle$
up to a function $t_v(v)$ and $\langle T_{uu}\rangle$ up
to a function $t_u(u)$. These free functions contain
information about the state of the quantum system.\par\noindent
In the case of a classical conformally invariant
field, $^{cl}T^\mu_{\;\mu}=0$. An important feature of
$\langle T_{\mu\nu}\rangle$ is that its trace does not vanish
any more. This trace-anomaly
is a state-independent local scalar of dimension $L^{-2}$
and hence must be proportional to the Ricci scalar,
\eqnn{
\langle T\rangle={c\over 24\pi}R=-{c\over 12\pi}e^{-2\sigma}\partial_u\partial_v\sigma,}
where $c$ is the \textit{central charge}.
Inserting this trace anomaly into \refs{conslc}
yields
\eqnl{
\langle T_{uu,vv}\rangle=-{c\over 12\pi}e^\sigma \partial^2_{u,v}
e^{-\sigma}+t_{u,v}\mtxt{and} \langle T_{uv}\rangle=-
{c\over 12\pi}\Box_0\sigma.}{enmink}
Formally, the expectation value of the stress-energy
tensor is given by the path integral
\eqnn{
\langle T_{\mu\nu}(x)\rangle=-{1\over Z[g]}\int {\cal D}\phi\;
{2\over \sqrt{g}}{\delta\over \delta g^{\mu\nu}}e^{-S[\phi]}
={2\over \sqrt{g}}{\delta\over \delta g^{\mu\nu}}\Gamma[g],}
where the effective action is given by
\eqnn{
\Gamma[g]=-\log Z[g]=-\log \int {\cal D}\phi\; e^{-S[\phi]}=
{1\over 2}\log\det(-\triangle_c)}
and we made the transition to Euclidean spacetime (which is
allowed for the $2d$ models under investigation).
For arbitrary spacetimes the spectrum of $\triangle_c$ is not
known. However, the variation of $\Gamma$ with respect
to $\sigma$ in
$g_{\mu\nu}=e^{2\sigma}\hat g_{\mu\nu}$
is proportional to the expectation value of
the trace of the stress-energy tensor,
\eqnn{
{\delta\Gamma\over\delta\sigma(x)}=-2g^{\mu\nu}(x){\delta \Gamma
\over \delta g^{\mu\nu}(x)}=-\sqrt{g}\langle T^\mu_{\;\mu}(x)\rangle}
and can be calculated for conformally coupled particles
in conformally flat spacetimes. From the conformal
anomaly one can (almost) reconstruct the effective action.
In particular, in two dimensions the
result is the \textit{Polyakov effective action}
\eqnn{
\Gamma[g]-\Gamma[\delta]={c\over 96\pi}\int \sqrt{g}R{1\over \triangle}R,}
where the central charge $c$ is $1$ for
uncharged scalars and Dirac fermions
\footnote{see \cite{wipfsachs} for modifications of
this result on spacetimes with nontrivial topology.}.
The expectation value $\langle T_{\mu\nu}\rangle$
is found by differentiation with respect to the metric.
The covariant expression is
\eqnl{
\langle T_{\mu\nu}\rangle={c\over 48\pi}\Big(2g_{\mu\nu}R-2\nabla_\mu\nabla_\nu S
+\nabla_\mu S\cdot\nabla_\nu S-
{1\over 2} g_{\mu\nu}\nabla^\alpha S\cdot\nabla_\alpha S\Big),\qquad
S={1\over \triangle }R,}{stressen2d}
and in isothermal coordinates this simplifies to \refs{enmink},
as it must be.
This energy-momentum tensor is consistent and conserved,
and causality restricts the choice of the
Green function $1/\triangle$. The ambiguity in
inverting the wave operator in
\refs{stressen2d} shows up in the free
functions $t_{u,v}$.
A choice of these functions is
equivalent to the choice of a state.\par\noindent
Let us now apply these results
to the $(t,r)$ part of the Schwarzschild
black hole
\eqnn{
ds^2=\alpha(r)dt^2-{1\over \alpha(r)}\,dr^2,\qquad \alpha(r)=1-{2M\over r},\qquad
(G=1)}
which we treat as a two-dimensional black
hole\footnote{The resulting energy-momentum tensor is not
identical to the tensor that one gets when one quantises
only the $s$-modes in the four-dimensional
Schwarzschild metric \cite{wimu}.}.
We use the 'Regge-Wheeler tortoise coordinate'
$r_*=r+2M\log\big(r/M-2\big)$,
such that the metric becomes conformally flat,
$ds^2=\alpha\big(dt^2-dr_*^2\big)$,
and introduce null-coordinates
$2u=t-r_*\mtxt{and}2v=t+r_*.$
Using $\partial_{r_*}=\alpha \partial_r$ we obtain
for the light-cone components \refs{enmink}
of the energy momentum tensor
\eqnn{
\langle T_{uu,vv}\rangle=
-{c\over 12\pi}\Big({2M\alpha\over r^3}+{M^2\over r^4}\Big)+t_{u,v},\quad
\langle T_{uv}\rangle=-{c\over 12\pi}{2M\alpha\over r^3}}
or for $\langle T_{\mu\nu}\rangle$ in the
$x^\mu=(t,r_*)$ coordinate system
\eqnl{
\langle T_\mu^{\;\,\nu}\rangle=-{cM\over 24\pi r^4}\pmatrix{
4r+{M\over\alpha}&0\cr 0&-{M\over\alpha}}
+{1\over 4\alpha}\pmatrix{t_u+t_v&t_u-t_v\cr t_v-t_u&-t_u-t_v}.}{2denergy}
The \textit{Boulware state} is the state appropriate
to a vacuum around a static star and contains no radiation at null
infinity ${\cal J}^\pm$. Hence $t_u$ and $t_v$
must vanish. This state is singular at the horizon. To see
that, we use regular Kruskal coordinates:
\eqnl{
U=-e^{-u/2M}\mtxt{and}V=e^{v/2M}\mtxt{so that}
ds^2={16M^3\over r}e^{-r/2M}dUdV.}{aha}
With respect to these coordinates the energy-momentum
tensor takes the form
\eqnn{
\langle T_{UU}\rangle=4\big({M\over U}\big)^2\langle T_{uu}\rangle,\qquad
\langle T_{VV}\rangle=4\big({M\over V}\big)^2\langle T_{vv}\rangle\mtxt{and}
\langle T_{UV}\rangle=-4{M^2\over UV}\langle T_{uv}\rangle.}
For the Boulware vacuum
$t_u=t_v=0$ and $\langle T_{\mu\nu}\rangle$ is singular at the past horizon $V=0$
and at the future horizon $U=0$.
The component $\langle T_{UU}\rangle$ is regular at the future horizon
if $M^2t_u=c/192\pi$ and $\langle T_{VV}\rangle$ is regular at
the past horizon if $M^2t_v=c/192\pi$.
The state regular at both horizons is the
\textit{Israel-Hartle-Hawking state}. In this state
the asymptotic form of the energy-momentum tensor is
\eqnl{
\langle 0_{HH}\vert T^\mu_{\;\nu}\vert 0_{HH}\rangle\sim
{c\over 384\pi M^2}\pmatrix{1&0 \cr 0&-1}={c\pi\over 6}(kT)^2
\pmatrix{1&0 \cr 0& -1}}{hartlehawing}
with $T=1/8\pi kM= \kappa/2\pi k$. This is the stress-tensor
of a \textit{bath} of thermal radiation at temperature $T$.
Finally, demanding that energy-momentum is regular at
the future horizon and that there is no incoming radiation, i.e.
$M^2 t_u=c/192\pi$ and $t_v=0$, results in
\eqnl{
\langle 0_{U}\vert T^\mu_{\;\nu}\vert 0_{U}\rangle\sim
{c\over 768\pi M^2}\pmatrix{1&1 \cr -1&-1}={c\pi\over 12}(kT)^2
\pmatrix{1&1 \cr - 1& -1}}{unruh}
The \textit{Unruh state} is regular on the future
horizon and singular at the past horizon. It describes
the Hawking evaporation process with only outward
flux of thermal radiation.\par\noindent
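A short numerical illustration may be helpful here. The following Python
sketch (illustrative only, in units $G=c=\hbar=k=1$ and with central
charge $c=1$) tabulates $\langle T_{uu}\rangle$ for the choices of $t_u$
above and checks the horizon and asymptotic behaviour just discussed:
\begin{verbatim}
import numpy as np

M, c = 1.0, 1.0                      # mass and central charge, G=c=hbar=k=1
t_B  = 0.0                           # Boulware: no flux at infinity
t_HH = c / (192.0 * np.pi * M**2)    # M^2 t_u = c/192 pi: regular on H^+

def T_uu(r, t_u):
    """Light-cone component <T_uu> of the 2d Schwarzschild hole."""
    alpha = 1.0 - 2.0 * M / r
    return -c/(12.0*np.pi) * (2.0*M*alpha/r**3 + M**2/r**4) + t_u

print(T_uu(2.0*M + 1e-9, t_B))    # -c/(192 pi M^2): Boulware singular on H^+
print(T_uu(2.0*M + 1e-9, t_HH))   # ~0: Hartle-Hawking/Unruh regular on H^+
# Asymptotic Unruh flux (t_u - t_v)/4 with t_v = 0 reproduces c/(768 pi M^2):
print(t_HH / 4.0, c / (768.0 * np.pi * M**2))
\end{verbatim}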
\textbf{Euclidean Black Holes.}
The most elegant and powerful derivation of the Hawking
radiation involves an adaptation of techniques
due to Kubo to show that the Feynman propagator
of a spacetime with a stationary black hole satisfies
the KMS condition.
Consider a system with time-independent
Hamiltonian $H$.
The time evolution of an observable $A$ in the Heisenberg picture is
$A(z)=e^{izH}Ae^{-izH}$,
where $z=t+i\tau$ is complex time. For $\tau=0$ ($t=0$) it is
the time-evolution in a static spacetime with
Lorentzian (Euclidean) signature.
If $\exp(-\beta H)$, $\beta>0$, is trace class, one can define
the equilibrium state of temperature $T=1/\beta$:
\eqnl{
\langle A\rangle_\beta={1\over Z}\hbox{tr}\, e^{-\beta H}A,\qquad Z=\hbox{tr}\, e^{-\beta H}.}
{therev}
Let us introduce the finite temperature correlation functions
\eqngr{
G^\beta_+(z,\vec{x},\vec{y})&=&\langle \phi(z,\vec{x})\phi(0,\vec{y})\rangle_\beta
={1\over Z}\hbox{tr}\,\Big(e^{i(z+i\beta) H}\phi(0,\vec{x})e^{-izH}\phi(0,\vec{y})\Big)}
{G^\beta_-(z,\vec{x},\vec{y})&=&\langle \phi(0,\vec{y})\phi(z,\vec{x})\rangle_\beta
={1\over Z}\hbox{tr}\,\Big(\phi(0,\vec{y})e^{izH}\phi(0,\vec{x})e^{-i(z-i\beta)H}\Big)}
We have used the cyclicity of the trace.
Both exponents in $G_+$ have negative real parts
if $-\beta<\tau<0$; for $G_-$ the condition reads $0<\tau<\beta$.
Therefore, these formulae define holomorphic functions
in those respective strips with boundary values
$G_\pm^\beta(t,\vec{x},\vec{y})$.
It follows immediately that
\eqnl{
G_-^\beta(z,\vec{x},\vec{y})=G_+^\beta(z-i\beta,\vec{x},\vec{y})}{KMS1}
which is the KMS-condition.
This condition is now accepted
as a definition of 'thermal equilibrium at temperature
$1/\beta$'. \par\noindent
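The KMS-condition \refs{KMS1} is easily verified in the simplest
conceivable setting, a single bosonic mode with $H=\omega a^\dagger a$
(a toy check, not a field-theoretic computation):
\begin{verbatim}
import numpy as np

w, beta = 1.3, 0.7                     # frequency and inverse temperature
n = 1.0 / (np.exp(beta * w) - 1.0)     # Bose occupation number

def G_plus(z):   # <phi(z) phi(0)>_beta for a single mode
    return ((n + 1.0)*np.exp(-1j*w*z) + n*np.exp(1j*w*z)) / (2.0*w)

def G_minus(z):  # <phi(0) phi(z)>_beta
    return ((n + 1.0)*np.exp(1j*w*z) + n*np.exp(-1j*w*z)) / (2.0*w)

for t in np.linspace(-2.0, 2.0, 5):
    assert abs(G_minus(t) - G_plus(t - 1j*beta)) < 1e-12  # G_- = shifted G_+
print("KMS condition holds for the thermal oscillator")
\end{verbatim}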
So far the analytic functions $G_\pm$ have been defined
in disjoint, adjacent strips in the complex time plane.
The KMS-condition states that one of these is the
translate of the other and this allows us to define
a periodic function throughout the complex plane, with
the possible exception of the lines $\tau=\Im(z)=n\beta$.
Because of locality $\phi(x)$ and $\phi(y)$ commute
for space-like separated events and
\eqnn{
[\phi(t,\vec{x}),\phi(0,\vec{y})]=0\mtxt{for}t\in I\subset R.}
Then the boundary values of $G_\pm^\beta$
coincide on $I$ and we conclude (by the edge-of-the-wedge
theorem) that they are restrictions of a single holomorphic,
periodic function, ${\cal G}^\beta(z,\vec{x},\vec{y})$, defined in
a connected region in the complex time plane except parts
of the lines $\tau=n\beta$. \par\noindent
With these preparations we are now ready to show
that the Green function in Schwarzschild spacetime
satisfies the KMS-condition.
Starting with the analytically continued Schwarzschild metric
\eqnn{
ds^2=\alpha dz^2-{1\over \alpha}dr^2-r^2d\Omega^2,\qquad \alpha=1-2M/r,\quad
z=t+i\tau,}
we perform the same
coordinate transformation to (complex) Kruskal coordinates
as we did for the Lorentzian solution:
\eqnn{
Z=V+U=2e^{r_*/4M}\sinh{z\over 4M}\mtxt{and}
X=V-U=2e^{r_*/4M}\cosh{z\over 4M}.}
The line element reads
\eqnn{
ds^2={4M^3\over r}e^{-r/2M}\Big(dZ^2-dX^2\Big)-r^2d\Omega^2}
and the Killing field takes the form
\eqnn{
K=\partial_z={1\over 4M}\Big(Z\partial_X+X\partial_Z\Big)=
{1\over 4M}\Big(V\partial_V-U\partial_U\Big).}
Setting $Z=T+i{\cal T}$ the orbits of $K$ are
\eqnn{
\pmatrix{T\cr X}=2e^{r_*/4M}\pmatrix{\sinh t/4M\cr \cosh t/4M}
\mtxt{and}
\pmatrix{{\cal T}\cr X}=2e^{r_*/4M}\pmatrix{\sin \tau/4M\cr
\cos \tau/4M},}
in the Lorentzian and Euclidean slices, respectively.
As expected from the general properties of
bifurcation spheres, these are Lorentz-boosts and
rotations, respectively. Since the Euclidean slice
is periodic in $\tau$, the
analytic Green function ${\cal G}(z=t+i\tau,\vec{x},\vec{y})$
is periodic in imaginary time $\tau$ with period $8\pi M$.
This corresponds to a temperature $T=1/8\pi M$, the Hawking
temperature.\par\noindent
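The algebra behind the Kruskal form of the metric, and hence the period
$8\pi M$, can also be checked numerically. The sketch below (illustrative,
with $M=1$) verifies that $4\alpha\,du\,dv$ and $(16M^3/r)e^{-r/2M}dU\,dV$
carry the same prefactor; since $dZ^2-dX^2=4\,dU\,dV$, the $(Z,X)$ form of
the line element then carries the prefactor $4M^3/r$:
\begin{verbatim}
import numpy as np
M = 1.0

def prefactors(r, u=0.37):
    rstar = r + 2*M*np.log(r/M - 2.0)     # tortoise coordinate, r > 2M
    v = u + rstar                         # fixes the spacetime point
    alpha = 1.0 - 2.0*M/r
    dUdu = np.exp(-u/(2*M)) / (2*M)       # from U = -exp(-u/2M)
    dVdv = np.exp( v/(2*M)) / (2*M)       # from V =  exp( v/2M)
    return 16*M**3/r * np.exp(-r/(2*M)) * dUdu * dVdv, 4*alpha

for r in (2.5, 3.0, 10.0):
    lhs, rhs = prefactors(r)
    assert abs(lhs - rhs) < 1e-12
print("4 alpha du dv = (16 M^3/r) exp(-r/2M) dU dV verified")
\end{verbatim}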
The vector field (with affine parametrisation)
normal to the Killing horizon ${\cal N}$ (the past
and future horizons) is $l=\partial_V$ on the future horizon and
$l=\partial_U$ on the past horizon.
It follows that the surface gravity $\kappa$ (see \refs{surfgrav})
is $1/4M$ on the future horizon and
$-1/4M$ on the past horizon.\par\noindent
\textbf{Energy-momentum tensor near a black hole.}
In any vacuum spacetime $R_{\mu\nu}$ vanishes and so do
the two local curvature terms which enter the formula for
$T_{\mu\nu}$ with undetermined coefficients.
Hence $T_{\mu\nu}$ is well-defined in the Schwarzschild spacetime.
The symmetry of $\langle T^\mu_{\;\nu}\rangle$ due to the
$SO(3)$ symmetry of the spacetime of a non-rotating black hole
and the conservation $\nabla_\nu\langle T^{\mu\nu}\rangle=0$
reduce the number of independent components
of $\langle T^\mu_{\;\nu}\rangle$. Christensen
and Fulling \cite{crfu} showed that in the coordinates
$(t,r_*,\theta,\phi)$ the tensor is block diagonal.
The $(t,r_*)$ part admits the representation
\eqnl{
\langle T^\mu_{\;\nu}\rangle=\pmatrix{
{T\over 2}-{H+G\over \alpha r^2}-2\Theta&0\cr
0&{H+G\over \alpha r^2}}
+{W\over 4\pi\alpha r^2}\pmatrix{1&-1\cr 1&-1}
+{N\over \alpha r^2}\pmatrix{-1&0\cr 0&1}}{bhten1}
and the $(\theta,\phi)$-part has the form
\eqnl{
\langle T^\mu_{\;\nu}\rangle=\big({T\over 4}+\Theta\big)\pmatrix{1&0\cr 0&1}.}{bhten2}
Here $N$ and $W$ are two constants
and
\eqngr{
\alpha(r)&=&\Big(1-{2M\over r}\Big),\;\quad
T(r)=\langle T^\mu_{\;\mu}\rangle,\quad\;
\Theta(r)=\langle T^\theta_{\;\theta}\rangle-{1\over 4}T(r)}
{H(r)&=&{1\over 2} \int\limits_{2M}^r\big(r^\prime\!-\!M\big)T(r^\prime)dr^\prime,\quad
G(r)=2\int\limits_{2M}^r\big(r^\prime\!-\!3M)\Theta(r^\prime)dr^\prime.}
The energy-momentum tensor is characterised
unambiguously by fixing two functions $T(r),\Theta(r)$
and two constants $N,W$. The constant $W$ gives the intensity
of radiation of the black hole at infinity and $N$
vanishes if the state is regular on the
future horizon. \par\noindent
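The quadratures $H(r)$ and $G(r)$ entering (\ref{bhten1},\ref{bhten2}) are
straightforward to evaluate once $T(r)$ and $\Theta(r)$ are known; the
sketch below uses made-up smooth profiles purely for illustration (the
physical functions must be computed from the field modes):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M = 1.0
T_of_r     = lambda r: 1.0 / r**4    # hypothetical trace profile
Theta_of_r = lambda r: 0.5 / r**5    # hypothetical Theta profile

def H(r):
    return 0.5 * quad(lambda rp: (rp - M) * T_of_r(rp), 2*M, r)[0]

def G(r):
    return 2.0 * quad(lambda rp: (rp - 3*M) * Theta_of_r(rp), 2*M, r)[0]

print(H(10*M), G(10*M))   # enter the (t, r_*) block of <T^mu_nu>
\end{verbatim}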
The radiation intensity $W$ is non-vanishing only in
the \textit{Unruh vacuum}. It has been calculated for the
massless scalar field $(s=0)$, two-component neutrino
field $(s=1/2)$, electromagnetic field $(s=1)$
and gravitational field $(s=2)$ by Page and Elster \cite{pel}:\par\noindent
\centerline{
\begin{tabular}{l|l|l|l}
$M^2W_0$&$M^2W_{1/2}$&$M^2W_1$&$M^2W_2$\\ \hline
& & & \\
$7.4\cdot 10^{-5}$&$8.2\cdot 10^{-5}$&$3.3\cdot 10^{-5}$&$0.4\cdot
10^{-5}$\\
\end{tabular}}\par\noindent
The coefficient $N$ vanishes for the Unruh and
Israel-Hartle-Hawking states.\par\noindent
The calculation of the functions in (\ref{bhten1},\ref{bhten2})
meets technical difficulties connected with the fact that
solutions of the radial mode equation (see below) are not
expressible in terms of known transcendental functions and,
consequently, the renormalisation of divergent integrals
must be carried out numerically.
The results for $\langle T^t_{\;t}\rangle$ and $\langle T^r_{\;r}\rangle$
for the Israel-Hartle-Hawking and the Unruh states
have been calculated by Howard/Candelas and Elster \cite{hoca}.\par\noindent
In the Hartle-Hawking state the Kruskal coordinate components
of $\langle T_{\mu\nu}\rangle$ near the horizon are found to be of order
$1/M^4$. The energy flux into the black hole
is negative, as it must be since the 'Hartle-Hawking vacuum'
is time independent and the energy flux at future infinity
is positive. This is possible since $\langle T_{\mu\nu}\rangle$
need not satisfy the energy conditions.\par\noindent
\textbf{$s$-wave contribution to $\langle T_{\mu\nu}\rangle$.}
The covariant perturbation theory for the $4d$
effective action $\Gamma$ as developed in \cite{efac}
is very involved for concrete calculations. Here we
shall simplify the problem by considering $s$-modes of
a minimally coupled massless scalar field propagating
in an arbitrary (possibly time-dependent)
spherically symmetric four-dimensional spacetime.
The easiest way to perform this task is to compute the contribution
of these modes to the effective action. We choose
adapted coordinates for which the Euclidean metric takes the form
\eqnn{
ds^2=\gamma_{ab}(x^a)\,dx^adx^b+\Omega^2(x^a)\omega_{ij}dx^idx^j,}
where the last term is the metric on $S^2$. Now one
can expand the (scalar) matter field into spherical
harmonics. For $s$-waves, $\phi=\phi(x^a)$,
the action for the coupled gravitational and scalar field is
\eqnn{
S=-{1\over 4}\int\big[\Omega^2\;^\gamma{\cal R}+\;^\omega{\cal R}+2(\nabla\Omega)^2\big]
\sqrt{\gamma}d^2x+2\pi\int \Omega^2(\nabla\phi)^2\sqrt{\gamma}d^2x,}
where $^\gamma{\cal R}$ is the scalar curvature of the $2d$ space
metric $\gamma_{ab}$, $^\omega{\cal R}=2$ is the scalar curvature of $S^2$
and $(\nabla\Omega)^2=\gamma^{ab}\partial_a\Omega\partial_b\Omega.$
The purely gravitational part of the action is almost
the action belonging to $2d$ dilatonic gravity with two
exceptions: first, the numerical coefficient in front
of $(\nabla\Omega)^2$ is different, and second, the action
is not invariant under Weyl transformations due to the
$^\omega{\cal R}$ term. The action is quite different from the
actions usually considered in $2d$ (string-inspired)
field theories, because of the unusual coupling of $\phi$
to the dilaton field $\Omega$. Choosing isothermal
coordinates, $\gamma_{ab}=e^{2\sigma}\gamma^f_{ab}$,
where $\gamma^f_{ab}$ is the metric of the flat $2d$ space,
one arrives with $\zeta$-function methods
at the following exact result for the effective action
for the $s$-modes \cite{wimu}
\eqngrr{
\Gamma_s&=&\,^{(n)}\Gamma+\,^{(i)}\Gamma}
{^{(n)}\Gamma[\sigma,\Omega]&=&{1\over 8\pi}\int\Big(
{1\over 12}{^\gamma{\cal R}}{1\over\triangle_\gamma}{^\gamma{\cal R}}-{\triangle_\gamma\Omega\over
\Omega}{1\over \triangle_\gamma}{^\gamma{\cal R}}\Big)\sqrt{\gamma}d^2x}
{^{(i)}\Gamma[\Omega]&=&\Gamma_s[\sigma=0,\Omega]=
{1\over 2}\log\det\Big(-\triangle_f+{\triangle_f\Omega\over\Omega}\Big).}
The second contribution $^{(i)}\Gamma$ is invariant
under $2d$ Weyl transformation, whereas the first one
is not. Unfortunately, the determinant cannot be calculated
exactly and one must resort to some perturbation expansion.
For details I refer to \cite{wimu}. Ignoring backscattering
one finds
\eqnn{
^{(1)}\Gamma={1\over 8\pi}\int\Big(
{1\over 12}{^\gamma{\cal R}}{1\over\triangle_\gamma}{^\gamma{\cal R}}-{\triangle_\gamma\Omega\over
\Omega}\times\Big[1+\log{\triangle_\gamma\Omega\over
\mu^2\Omega}\Big]\Big)\sqrt{\gamma}d^2x.}
Due to backscattering one needs to add the following term:
\eqnn{
^{(2)}\Gamma=-{\xi\over 12\cdot 8\pi}\int\Big({^\gamma{\cal R}}
{1\over\triangle_\gamma}{^\gamma{\cal R}}+\hbox{local terms}\Big)\sqrt{\gamma}d^2x,}
where $\xi\sim 0.9$. From the action $\Gamma_2=^{(1)}\!\Gamma+
^{(2)}\!\Gamma$ one
obtains $\langle T_{\mu\nu}\rangle$ by variation with respect
to the metric. To get the flux of the Hawking radiation
we need to continue back to Lorentzian spacetime by changing
the signs in the appropriate places. According to
\cite{efac} we arrive at the in-vacuum energy-momentum
tensor by replacing $-1/\triangle$ by the retarded
Green function. Neglecting backscattering, the luminosity
of the black hole is found to be
\eqnn{
L=-{\pi\over 12}{1\over (8\pi M)^2}.}
This coincides with the total $s$-wave flux of the
Hawking radiation obtained with other methods \cite{Birrell}
without taking backscattering effects into account.
With backscattering, the Hawking radiation is modified
and compares well with that obtained by other means
\cite{Simkin}.
\section{Wave equation in Schwarzschild spacetime}
We study the classical wave propagation of a Klein-Gordon
scalar field in the spacetime depicted in fig.\ref{confdiag}.
\begin{figure}[ht]
\begin{minipage}[t]{15cm}
\centerline{\epsfysize=8 cm\epsffile{confdiag.eps}}
\caption{\label{confdiag}\textsl{The propagation
of particles in the geometric optics approximation.}}
\end{minipage}
\end{figure}
At late times, one expects that
every solution will propagate into the black hole
region $II$ and/or propagate to ${\cal J}^+$.\par\noindent
In the spherically symmetric spacetime we may set
\eqnn{
\phi={f(t,r)\over r}Y_{lm}}
and the wave equation $(\Box+m^2)\phi=0$
reduces to the radial equation
\eqnl{
{\partial^2f\over \partial t^2}-{\partial^2f\over \partial r^2_*}+V(r_*)f=0,
\quad V(r_*)=\Big(1-{2M\over r}\Big)
\Big({2M\over r^3}+{l(l+1)\over r^2}+m^2\Big),}{radialeq}
where $M$ is the mass of the black hole and $m$ that
of the Klein-Gordon field.
As $r_*\to -\infty$ (i.e. $r\to 2M$) the potential falls
off exponentially, $V\sim \exp(r_*/2M)$, and as
$r_*\to\infty$ the potential behaves as $\sim m^2-2Mm^2/r_*$ in
the massive case and $\sim l(l+1)/r^2$ in the massless case.
In the asymptotic region $r\to\infty$ this equation
possesses, for harmonic time dependence $\sim e^{-i\omega t}$,
outgoing solutions $\sim e^{i\omega r_*}$ and
ingoing solutions $\sim e^{-i\omega r_*}$. In terms of
the null-coordinates the asymptotic solutions look like
\eqnl{
f_\omega^{out}\sim e^{-2i\omega u}\mtxt{and}
f_\omega^{in}\sim e^{-2i\omega v}.}{inout}
Consider a geometric optics approximation in which
a particle's world line is a null ray, $\gamma$, of constant
phase $u$ and trace this ray backwards in time from
${\cal J}^+$. The later it reaches ${\cal J}^+$ the closer it
must approach $H^+$. As $t\to\infty$ the ray $\gamma$
becomes a null geodesic generator $\gamma_H$ of $H^+$.
We specify $\gamma$ by its affine distance from
$\gamma_H$ along an ingoing null geodesic through $H^+$
\begin{figure}[ht]
\begin{minipage}[t]{15cm}
\centerline{\epsfysize=5 cm\epsffile{hawkingh.eps}}
\caption{\label{hawkingh}\textsl{Tracing a null ray $\gamma$ of constant phase $u$ backwards in time from ${\cal J}^+$.}}
\end{minipage}
\end{figure}
(see fig.\ref{hawkingh}a).
The affine parameter on the ingoing null geodesic is $U$,
so that according to \refs{aha}
\eqnn{
U=-\epsilon\Rightarrow u=-{1\over 2\kappa} \log\epsilon,\quad
f_\omega^{out}\sim \exp\Big({i\omega\over \kappa}\log\epsilon\Big).}
This oscillates rapidly at later times $t$ and
this justifies the geometric optics approximation.
Now we must match $f_\omega^{out}$ onto a solution
near ${\cal J}^-$. In our approximation we just need
to parallel-transport $n$ and $l$ along
the continuation of $\gamma_H$ back to ${\cal J}^-$.
We choose $v$ such that this continuation meets
${\cal J}^-$ at $v=0$. The continuation of $\gamma$ will meet
${\cal J}^-$ at an affine distance $\epsilon$ along an outgoing null
geodesic on ${\cal J}^-$. Since $ds^2=4dudv+\dots$ on ${\cal J}^-$
the coordinate $2v$ is the affine parameter measuring
this distance, so $2v=-\epsilon$ on $\gamma$ and
\eqnn{
f_\omega\sim \exp\Big({i\omega\over \kappa}\log(-2v)\Big)\theta(-v),}
where we took into account that null rays with $v>0$
do not reach ${\cal J}^+$. Now we take the Fourier transform
\eqnn{
\tilde f_\omega(\omega^\prime)=\int\limits_{-\infty}^0e^{2i\omega^\prime v} f_\omega(v)\,dv
={1\over 2}\int\limits_0^\infty \tilde v^{i\omega/\kappa}e^{-i\omega^\prime \tilde v}d\tilde v,\quad
{\omega}^\prime >0.}
Using \refs{integral} one sees that
\eqnn{
\tilde f_\omega(\omega^\prime)=-e^{\pi\omega/\kappa}\tilde f_\omega(-\omega^\prime)\mtxt{for}
\omega^\prime >0.}
It follows that a mode of positive frequency $\omega$ on ${\cal J}^+$
matches onto mixed positive and negative frequency modes
on ${\cal J}^-$. We see that the Bogolubov coefficients are
related by $\beta_{ij}=-\exp(-\pi\omega_i/\kappa)\alpha_{ij}$.
From the Bogolubov relations \refs{bogrel} one then gets
\eqnl{
\Big(\beta\beta^\dagger\Big)_{ii}={1\over e^{2\pi\omega_i/\kappa}-1}.}{borrel1}
For calculating the late time particle flux through
${\cal J}^+$ we need the inverse $\beta$-coefficients, $\beta^\prime=-\beta^t$.
One easily finds that
$\langle N_i\rangle_{{\cal J}^+}=(\beta^{\prime\dagger} \beta^\prime)_{ii}=
(\beta\beta^\dagger)_{ii}$.
This is the Planck-distribution at the Hawking
temperature $T_H=\hbar\kappa/2\pi k$.\par\noindent
The detailed form of the potential in \refs{radialeq}
is irrelevant in the geometric optics
approximation. But the incoming waves will partially
scatter off the gravitational field (on the $l$-dependent
potential $V$ in \refs{radialeq})
to become a superposition of incoming and outgoing waves.
The backscattering is a function of $\omega$ and the spectrum
is not precisely Planckian. The total luminosity of
the hole is given by
\eqnl{
L={1\over 2\pi}\;\sum_{l=0}^\infty \;(2l+1)\int\limits_0^\infty
d\omega \;\omega {\Gamma_{\omega l}\over e^{8\pi M\omega}-1}.}{luminosity}
A black hole is actually grey, not black.
The dependence on the angular momentum (and spin) of
the particles resides in the grey-body factor $\Gamma_{\omega l}$.
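Setting $\Gamma_{\omega l}=1$ for the $l=0$ term only (a crude
illustration; for the true grey-body factors the sum over $l$ is cut off
by the potential barrier in \refs{radialeq}), the integral in
\refs{luminosity} reproduces, up to the sign convention, the $s$-wave
luminosity $(\pi/12)(8\pi M)^{-2}$ quoted earlier:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M = 1.0
# l = 0 term of the luminosity with Gamma_{omega 0} set to one:
L0 = quad(lambda w: w / (np.exp(8*np.pi*M*w) - 1.0), 0.0, np.inf)[0] \
     / (2*np.pi)
print(L0, np.pi/12.0 / (8*np.pi*M)**2)   # both equal 1/(768 pi M^2)
\end{verbatim}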
\section{Back-reaction}
The main effect of the quantum field will be a
decrease of $M$ at the rate at which energy is
radiated to infinity by particle creation.
Since the spacetime is static outside the
collapsing matter, the expected energy current
$J_\mu=\langle T_{\mu\nu}\rangle K^\nu$ is conserved in that region.
The calculation showed that there will be a steady nonzero flux $F$.
In \cite{evap} the contribution of the
different particle species to this flux
has been determined. The contribution of massive particles
of rest mass $m$ is exponentially small if $m>\kappa$.
Black holes of mass $M>10^{17}$g can only emit neutrinos,
photons and gravitons. Black holes of mass $5\cdot 10^{14}$g$\leq M
\leq 10^{17}$g can also emit electrons and positrons.
Black holes of smaller mass can emit heavier particles.
A non-rotating black hole radiates almost like a
body heated to the temperature
\eqnn{
T[^0\hbox{K}]={\hbar\kappa \over 2\pi c k}={\hbar c^3\over 8\pi G kM}\sim
10^{26}{1\over M[\hbox{g}]}.}
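In SI units this is quickly evaluated (a numerical illustration of the
formula above):
\begin{verbatim}
from math import pi
from scipy.constants import hbar, c, G, k

def T_hawking_kelvin(M_gram):
    """Hawking temperature hbar c^3/(8 pi G k M), mass given in grams."""
    return hbar * c**3 / (8.0 * pi * G * k * (M_gram * 1e-3))

print(T_hawking_kelvin(1.0))    # ~1.2e26 K, matching the 10^26/M[g] estimate
print(T_hawking_kelvin(2e33))   # solar-mass hole: ~6e-8 K
\end{verbatim}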
The deviation from thermal radiation is due to the
frequency dependence of the penetration coefficient
$\Gamma_{s\omega l}$. This coefficient is also strongly
spin-dependent,
$\Gamma_{s\omega l}\sim \omega^{2s+1}$.
As spin increases, the contribution of particles
to the radiation of a non-rotating black hole
diminishes. The distribution of the radiated
particles in different mass-intervals is shown
in the following table:
\vskip 0.5truecm
\par\noindent
\centerline{
\begin{tabular}{l|l|l}
$M\;$[g] & $L\;\big[{\hbox{erg}\over \hbox{sec}}\big]$
& particles radiated \\ \hline
& & \\
$M>10^{17}$ & $3.5\times 10^{12}\Big({10^{17}g\over M}\Big)^2$&
$81.4\%\;\,\;\nu_e,\bar\nu_e,\nu_\mu,\bar\nu_\mu$ \\
& & $16.7\%\;\,\gamma\quad 1.9\%\;\,g$ \\ \hline
$10^{17}>M>5\times 10^{14}$ & $ 6.3\times 10^{16}\Big({10^{15}g\over M}\Big)^2$ & $45\%\;\,\;\nu_e,\bar\nu_e,\nu_\mu,\bar\nu_\mu$ \\
& & $9\%\;\,\gamma\quad 1\%\;\,g$ \\
& & $45\%\;\,e^-,e^+$ \\ \hline
$10^{14}>M>10^{13.5}$ & $ 10^{19}\Big({10^{14}g\over M}\Big)^2$ &
$48\%\;\,\nu_e,\bar\nu_e,\nu_\mu,\bar\nu_\mu$ \\
& & $28\%\;\,e^-,e^+\quad 11\%\;\,\gamma$\\
& & $1\%\;\,g\quad 12\%\;\,N,\bar N$ \\ \hline
\end{tabular}}
\vskip 0.5truecm
\par\noindent
The following formula describes the rate of mass loss
\eqnl{
-{dM\over dt}\sim 4\cdot 10^{-5}f\cdot\Big({m_{pl}\over M}\Big)^2
{m_{pl}\over t_{pl}}=7.7\cdot 10^{24}f\cdot\Big({1\over M[\hbox{g}]}\Big)^2
{\hbox{g}\over \hbox{sec}}={\alpha\over M^2}.}{loss}
The contributions of the (massless) particle species
are encoded in $f(M)$. From Page we take
\eqnn{
f=1.02 h({1\over 2})+0.42 h(1)+0.05 h(2),}
where $h(s)$ is the number of distinct polarisations
of spin-$s$ particles.
The rate equation \refs{loss} is easily integrated to yield
\eqnn{
M(t)=\big(M_0^3-3\alpha t\big)^{1/3}.}
We see that a black hole
radiates all of its mass in a finite time
$\tau\sim M_0^3/3\alpha$. Inserting for $\alpha$ yields
\eqnn{
\tau\sim 10^{71}\big({M\over M_\odot}\big)^3\hbox{sec}.}
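For concreteness (a rough sketch; the effective species number $f$ is
mass dependent and only crudely set here), the rate equation gives:
\begin{verbatim}
f = 3.0                               # assumed effective number of species
alpha = 7.7e24 * f                    # g^3/sec, from the mass-loss formula

def lifetime_sec(M0):                 # M0 in grams
    return M0**3 / (3.0 * alpha)

def M_of_t(M0, t):
    return (M0**3 - 3.0 * alpha * t) ** (1.0/3.0)

tau = lifetime_sec(5e14)
print(tau / 3.15e7, "years")          # within an order of magnitude of the
                                      # age of the universe (~1.4e10 years)
print(M_of_t(5e14, 0.9 * tau))        # mass left after 90% of the lifetime
\end{verbatim}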
If primordial black holes of mass $\sim 5\cdot 10^{14}$g were
produced in the early universe, they would be in
the final stages of evaporation now. Primordial
black holes of smaller mass would have already
evaporated and contributed to the $\gamma$-ray background.
See the review of Carr \cite{Carr} for the possibility
of observing quantum explosions of small black holes.\par\noindent
The magnitudes of the Kruskal coordinate components of
$\langle T_{\mu\nu}\rangle_H$ near the black
hole are found to be of order $1/M^4$ in Planck units,
as expected on dimensional grounds. Since the
background curvature is of order $1/M^2$ the quantum
field should only make a small correction to the structure
of the black hole for $M\gg 1$, or $M\gg 10^{-5}$g.
\section{Generalisations and Discussion}
In the previous section we have studied the
Hawking effect in the case of the Schwarzschild
black hole. Let us now consider different
generalisations of this effect and its possible
consequences.\par\noindent
\textbf{Hawking radiation of rotating and charged holes.}
The \textit{Kerr solution} has null-hypersurfaces at
\eqnn{
r=r_\pm=M\pm \sqrt{M^2-a^2},}
where $a=J/M$, which are Killing horizons of the Killing fields
\eqnn{
K_\pm=k+\Omega m=
k+\Big({a\over r^2_\pm+a^2}\Big)m\qquad k=\partial_t,\quad m=\partial_\phi,}
with surface gravities
\eqnn{
\kappa_\pm={r_\pm-r_\mp\over 2(r_\pm^2+a^2)}.}
For the extreme Kerr solution with $a^2=M^2$ the
surface gravity vanishes.
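These quantities are easy to evaluate (an illustrative sketch in units
$G=c=1$):
\begin{verbatim}
import numpy as np

def kerr_horizons(M, a):
    """Horizon radii, angular velocity Omega and surface gravity kappa_+."""
    rp = M + np.sqrt(M**2 - a**2)
    rm = M - np.sqrt(M**2 - a**2)
    Omega = a / (rp**2 + a**2)
    kappa = (rp - rm) / (2.0 * (rp**2 + a**2))
    return rp, rm, Omega, kappa

for a in (0.0, 0.5, 0.999999):
    print(a, kerr_horizons(1.0, a))  # kappa = 1/4M for a=0, -> 0 extremal
\end{verbatim}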
For a Schwarzschild hole the number of particles
per unit time in
the frequency range $\omega$ to $\omega+d\omega$ passing out through
a surface of the sphere is
\eqnn{
{1\over e^{8\pi M\omega}-1}\;{d\omega\over 2\pi}.}
For a Kerr black hole $\omega$ is replaced
by $\omega-m\Omega$ in this formula, where $m$ is the
azimuthal quantum number of the spheroidal
harmonics, and $\Omega$ is the angular speed of the
event horizon. Hence, the Planck factor at ${\cal J}^+$ becomes
\eqnn{
{1\over e^{2\pi(\omega-m\Omega)/\kappa}\pm 1},\qquad
+\,\hbox{fermions},\quad -\hbox{bosons}.}
The emission is stronger for positive $m$ than
for negative $m$. In the boson case the Planck
factor becomes negative when $\omega<m\Omega$
and super-radiance occurs: the radiation
amplifies an incoming classical wave with positive $m$.
The result admits the following interpretation: Consider
a rotating black hole enclosed in a mirror-walled
cavity. A scattering of a 'particle' in a super-radiant
mode by the black hole increases the number of
quanta. After reflection by the mirror, these quanta
are again scattered on the black hole and their number
increases again, and so on. No stationary equilibrium
distribution is possible for such modes. However,
if the size of the cavity is not too large, $r<1/\Omega$,
then the super-radiative modes are absent and equilibrium
is possible.
A related effect is that the rotation of the hole
enhances the emission of particles with higher spins.
For a charged hole with \textit{Reissner-Nordstr\"{o}m metric}
\eqnn{
ds^2=\alpha(r) dt^2-
{1\over \alpha(r)}dr^2-r^2d\Omega^2,\qquad \alpha(r)=
1-{2M\over r}+{q^2\over r^2}}
the event horizon is at $r=r_+=M+\big(M^2-q^2\big)^{1/2}$
and the surface gravity is found to be
\eqnn{
\kappa={1-16\pi^2q^4/A^2\over 4M},}
where $A=4\pi r_+^2$ is the area of the horizon.
It follows that the presence of the charge depresses
the temperature $kT_H=\kappa/2\pi$ of the hole.
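One can check numerically that the area formula for $\kappa$ agrees with
the direct expression $\kappa=(r_+-r_-)/2r_+^2$ (an illustrative sketch):
\begin{verbatim}
import numpy as np

def kappa_RN(M, q):
    rp = M + np.sqrt(M**2 - q**2)
    rm = M - np.sqrt(M**2 - q**2)
    A  = 4.0 * np.pi * rp**2
    k_area   = (1.0 - 16.0*np.pi**2 * q**4 / A**2) / (4.0*M)  # text formula
    k_direct = (rp - rm) / (2.0 * rp**2)
    return k_area, k_direct

for q in (0.0, 0.5, 0.99):
    print(q, kappa_RN(1.0, q))  # both expressions agree; kappa -> 0 as q -> M
\end{verbatim}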
For an extremal hole with charge $q=M$ or with $a^2=M^2$
the Hawking temperature is zero, whereas the
area is not ($A=4\pi M^2$ for the extreme
Reissner-Nordstr\"{o}m hole). In the laws of black
hole thermodynamics the entropy of
a black hole is $S=A/4$ and hence non-vanishing
for extreme black holes. The formulation
of the third law, namely that $S\to 0$ as $T\to 0$,
is not true for extremal holes\footnote{see the
contribution of Claus Kiefer: the canonical theory
of gravity predicts $S(T\to 0)=0$, whereas
superstring-theory predicts $S(T\to 0)=A/4$.}.
The failure
of the formulation of the third law may not be
too disturbing. There are other quantum systems
with a degenerate ground state
for which it fails as well.\par\noindent
\textbf{Loss of Quantum Coherence.}
Consider the behaviour of the quantum field in the
spacetime of a collapse, fig.\ref{collaps},
in which back-reaction effects are not taken
into account. The state of the field at late times
in region $I$, and in particular the flux
of thermal particles reaching infinity, must be described
by a density matrix. The particles which entered
the black hole at early times are correlated with
the particles in region $I$.
\begin{figure}[ht]
\begin{minipage}[t]{14cm}
\centerline{\epsfysize=7 cm\epsffile{collaps.eps}}
\caption{\label{collaps}\textsl{A conformal diagram
of the spacetime resulting from a complete
collapse of a spherical body. The region $II$
lies outside of the chronological past of $J^+$.}}
\end{minipage}
\end{figure}
There is always a
loss of information whenever one performs
an inclusive\footnote{not all commuting observables
are measured} measurement outside the horizon. Such entropy
increase is common to all inclusive measurements in
physics. Perhaps we can understand this situation
better if we recall the resolution of the well-known
question raised by Einstein, Podolsky and Rosen.
A pure quantum state is defined globally; its coherence
may extend over field variables located at well-separated
points on a space-like surface. \par\noindent
Let us distinguish between the set of out-states
corresponding to particles moving away from the black hole (the visible ones)
and those falling into the hole (the invisible ones).
When one calculates expectation values
$\langle A\rangle=(\psi,A\psi)$ of operators $A$
depending only on the creation and
annihilation operators belonging to the visible modes,
this expectation value can be written as
$\langle A\rangle =\hbox{tr}\,\rho A$.
In a Fock space construction one can derive an explicit
formula for the density matrix $\rho$ in
terms of the pure state $\psi$. Here it suffices to
sketch the emergence of a mixed state from a pure
one. Let $\psi^I_i\otimes\psi^{II}_j$ be
orthonormal pure states in the big Hilbert space
${\cal H}={\cal H}_I\otimes {\cal H}_{II}$. Let us further assume
that the observable $A$ is the identity in ${\cal H}_{II}$.
Then the expectation value
\eqnn{
(\psi,A\psi)\mtxt{in the pure state}\psi=\sum \alpha_i\psi_i^I\otimes
\psi_i^{II},\quad
\sum \vert\alpha_i\vert^2=1}
becomes
\eqnn{
(\psi,A\psi)=\sum_{ij}\bar\alpha_i\alpha_j\big(\psi_i^I\otimes \psi^{II}_i,
A\psi^{I}_j\otimes \psi^{II}_j\big)
=\sum p_i(\psi_i^I,A\psi^I_i)=\hbox{tr}\, (\rho A),}
where $p_i=\vert\alpha_i\vert^2$ and $\rho=\sum p_iP_i$.
The $P_i$ are the projectors on the states $\psi^I_i$.
We have used, that the $\psi^{II}_i$ are orthonormal.
Thus, if we are only measuring observables in the region
$I$ outside of the black hole and ignore the information
about the inside, then pure states become indeed mixed
states. For a black hole $\alpha_i\sim \exp(-\pi\omega_i/\kappa)$
(see \refs{borrel1}) and $\rho$ is the thermal state.
As is also clear, for operators $A$ which are not the identity
in ${\cal H}_{II}$ the expectation values $(\psi,A\psi)$ cannot
be written as $\hbox{tr}\,\rho A$.
\par\noindent
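The finite-dimensional version of this statement is easily verified
(a toy sketch with random orthonormal bases, purely for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 4
UI, _  = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
UII, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
alpha = rng.normal(size=d) + 1j*rng.normal(size=d)
alpha /= np.linalg.norm(alpha)                  # sum |alpha_i|^2 = 1

psi = sum(alpha[i] * np.kron(UI[:, i], UII[:, i]) for i in range(d))

AI = rng.normal(size=(d, d)); AI = AI + AI.T    # hermitian A_I
A  = np.kron(AI, np.eye(d))                     # identity on H_II

rho = sum(abs(alpha[i])**2 * np.outer(UI[:, i], UI[:, i].conj())
          for i in range(d))
print(abs(np.vdot(psi, A @ psi) - np.trace(rho @ AI)) < 1e-12)  # True
\end{verbatim}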
Consider now the spacetime fig.\ref{evap} in which back-reaction
causes the black hole to 'evaporate'.
\begin{figure}[ht]
\begin{minipage}[t]{14cm}
\centerline{\epsfysize=7 cm\epsffile{evap.eps}}
\caption{\label{evap}\textsl{A conformal diagram
of a spacetime in which black hole formation and
evaporation occurs. The contour labelled $M=0$
lies at the (retarded) time corresponding
to the final instant of evaporation.}}
\end{minipage}
\end{figure}
The visible
particles propagating to infinity can be described by
a (thermal) density matrix. The particle creation
and scattering will be described by a unitary $S$-matrix,
provided that the invisible particles are represented
in the 'out'-Hilbert space. What happens now when the black
hole disappears from the spacetime? Apparently at late
times, if one takes the 'out'-Hilbert space
to be the Fock space associated with visible particles,
the entire state of the field is mixed. Then one cannot
describe particle creation and scattering by a unitary
$S$-matrix, since an initial pure state evolves into a
density matrix. This is the phenomenon of \textit{loss of
quantum coherence}.
What are the possible ways out of this problem?
A complete calculation including all back-reaction
effects might resolve the issue, but even this is
controversial, since the resolution very probably
requires an understanding of the Planck scale physics.
For example, $QFT$ predicts that $T_{loc}\to\infty$
on the horizon of a black hole. This should not
be believed when $T$ reaches the Planck energy.
The quantum aspects of gravity cannot be any longer
ignored and this temperature is then of the order
of the maximum (Hagedorn) temperature of string theory
\footnote{See the contribution of G. 't Hooft.}.\par\noindent
A natural approach to dealing with this situation
is to consider 'toy models', for example in two spacetime
dimensions, in which the semiclassical
analysis could be done. In lower dimensions one adds
a 'dilaton' field to render gravity non-trivial
(this field naturally arises in low energy
string theory). The resulting two-dimensional theories
are dynamically nontrivial and mimic many
features of four-dimensional general relativity:
they possess black-hole solutions, Hawking radiation
and there exist laws of black hole thermodynamics
which are completely analogous to the laws in
four dimensions. Callan et al.\ \cite{callen} studied the model
\eqnl{
S={1\over 2\pi}\int d^2x\sqrt{-g}\Big(e^{-2\sigma}\big[
R+4(\nabla\sigma)^2+4\lambda^2\big]+{1\over 2}(\nabla f)^2\Big),}{callan}
containing a metric field $g_{\mu\nu}$, a dilaton field
$\sigma$ and a matter field $f$. The Hawking radiation
of the $f$-'particles' can be calculated the way
we explained in our two-dimensional model calculations
above. So far these model calculations
have not resolved the problems with the final stage
of the black hole evaporations (the problems
are the same as those with the Liouville
theory at strong-coupling). A further simplification of \refs{callan}
has been discovered by Russo, Susskind and Thorlacius \cite{RST}.
Rather recent calculations seem to indicate\footnote{
see the contribution of C. Kiefer} that information
is not destroyed, but slowly
released as the black hole decays back to vacuum \cite{strom}.
\section{Introduction}
\label{sec:introduction}
The effort in developing non-linear image reconstruction algorithms for X-ray
computed tomography (CT) has been steadily
increasing over the past couple of decades. The non-linearity arises from incorporation
of some forms of prior information in the reconstruction process or some forms of physics modeling.
For example, edge-preserving
regularization and spectral response modeling both yield image reconstruction algorithms
that produce images depending non-linearly on the CT data \cite{elbakri2002statistical,mccollough2009strategies}. Exploitation
of sparsity or transform sparsity also involves non-linear image reconstruction \cite{sidky2008image,chen2008prior,ritschl2011improved,batenburg2011dart}.
Most recently,
deep-learning based data processing is being investigated for generating tomographic images directly
from CT projection data using convolutional neural networks (CNNs) \cite{gupta2018cnn,adler2018learned}.
Such CNNs also process the tomographic
data in a non-linear fashion.
While non-linear image reconstruction may allow for accurate image reconstruction
in CT systems involving low-dose illumination or sparse sampling, the resulting image characteristics
can depend strongly on the scanned object. This object dependence presents a difficult challenge
for developing meaningful image quality metrics needed to guide algorithm parameter selection in a
non-subjective fashion. As a result, much work on non-linear image reconstruction techniques present
images resulting from algorithms where the parameters are tuned by eye. Such an approach may be fine
for an initial introduction of a new image reconstruction algorithm or if the CT system/reconstruction
parameter space is limited enough that it is feasible to tune by eye. The tune-by-eye method, however,
blunts the impact of advanced image reconstruction because such image reconstruction techniques themselves
involve numerous parameters and they
aim to broaden the scope of possible CT system configurations -- enlarging the parameter space of CT hardware.
Attempting to perform comparisons between different non-linear image reconstruction algorithms only complicates
the matter further. With a large parameter space, the tune-by-eye method becomes impractical.
Avoiding the subjective tune-by-eye method, many researchers in advanced CT image reconstruction turn to
one of three image fidelity metrics in their simulations: root-mean-square-error (RMSE), peak signal-to-noise ratio (PSNR),
or structural similarity index (SSIM). These metrics are useful, in a simulation setting, because they present a measure
of how close a reconstructed image is to a ground truth image. This information in turn is useful for investigating the
underlying inverse problem. When considering clinical imaging tasks that rely on viewing subtle image features, optimizing
system and reconstruction parameters on these global image fidelity metrics
can lead to significantly over-regularized images.
One problem is that
these image fidelity metrics do not provide a sense of image resolution, noise
level, or noise quality. PSNR, from its name, would seem to provide information on the image noise level, but what is
called ``noise'' in PSNR is actually the difference between the reconstructed and truth images, and this difference includes
both image noise and deterministic artifacts from either unmodeled non-stochastic physics or insufficient sampling.
For non-linear image reconstruction algorithms, concepts such as the point-spread function and the noise power spectrum
do not have a simple and direct interpretation as they do for linear systems theory. For example, in
non-linear image reconstruction,
the resulting image cannot be interpreted as a convolution of a reconstructed point-like object and the underlying
true object function. As a result, they are rarely used
in the evaluation of non-linear image reconstruction.
In order to prevent over-smoothing by optimizing non-linear image reconstruction solely on image RMSE, an image quality metric
is needed that is sensitive to subtle features in the image and that is easy to interpret. To develop such a metric, we turn
to signal detection theory, and investigate the use of the ideal observer for a simple signal-known-exactly/background-known-exactly
detection task \cite{barrett2004foundations}. Signal detection theory has been investigated in the context of evaluation of image
reconstruction algorithms \cite{abbey1996observer,abbey2001human,wunderlich2008image,das2010penalized,sanchez2013comparison,gang2014task,sanchez2014task,xu2015task,rose2017investigating}.
For the present work, the signal is chosen to be a point-like object and its amplitude is set so that it is at the limit of detectability in
the CT data space. It is known that image reconstruction and other image processing operations cannot increase signal detectability
(see pages 829-30 in Barrett and Myers\cite{barrett2004foundations}),
but it is possible that image reconstruction can reduce signal detectability. Quantifying this loss of detectability is precisely what
we would like to use as a measure of over-regularization. Having such a measure would allow optimization of image RMSE under the condition
that signal detection is constrained to be at or above a desired set level and thus prevent over-regularization.
The setting for developing this metric, here, is a dedicated breast CT simulation where image reconstruction is performed
by total-variation (TV), least-squares optimization (TV-LSQ). The TV-LSQ algorithm is non-linear and it allows for accurate
image reconstruction from sparse-view CT data under ideal noiseless conditions. When TV-LSQ is employed for noisy, realistic
data it is often reported that the images are patchy or blocky, and one solution to avoid this subject quality is
to generalize the TV-norm \cite{liu2013total,niu2014sparse}. For the present work, however, we argue that the patchiness
resulting from use of TV regularization can also be a side effect of over-regularization due to parameter optimization using
image RMSE. Using the proposed signal detectability metric can help to disallow parameter settings that cause over-regularization
and, specifically, the patchy appearance from over-regularization with the TV-norm.
We point out that the patchy appearance from over-regularization with the TV-norm is a somewhat subjective assessment, and
therefore the claim that the proposed metric characterizes patchiness quantitatively is also subjective and cannot be proven
mathematically. We do attempt to design the simulation so that the subjectivity is limited as much as possible, but in the
end the utility of the proposed metric can only be demonstrated by showing metric correspondences with images and it is left
to the observer to decide whether this correspondence is useful or not.
In Sec. \ref{sec:methods}
we present the parameters of the breast CT simulation, the details of the TV-LSQ
algorithm, and the channelized Hotelling observer (CHO) for the signal-known-exactly/background-known-exactly (SKE/BKE) detection task.
The results, presented in Sec. \ref{sec:results},
demonstrate the correspondences between the proposed signal detection metric and reconstructed images for select parameter
settings of the breast CT simulation and TV-LSQ algorithm. The results are discussed in Sec. \ref{sec:discussion}, and
finally, we conclude the paper in Sec. \ref{sec:conclusion}.
\section{Methods}
\label{sec:methods}
\subsection{Breast CT simulation}
For the studies presented here, we consider
a fixed dose simulation, where the number of projections is varied while keeping the total patient exposure constant.
The configuration is 2D circular, fan-beam scanning and is representative of the mid-plane slice of a 3D circular cone-beam scan.
The mean continuous data function, $g$, is modeled as the X-ray transform of the object function
\begin{linenomath}
\begin{equation}
\label{xray}
g(\theta,\xi)=Pf(\theta,\xi) = \int_0^\infty f(r_0(\theta) + t \, \hat{\phi}(\theta,\xi)) dt,
\end{equation}
\end{linenomath}
where $f$ represents the continuous object function; $Pf$ is the continuous X-ray transform of $f$;
$\theta$ indicate the view angle of the X-ray source;
$\xi$ is the detector bin location on a linear detector; $r_0(\theta)$ indicates the X-ray source position; and the unit
vector $\hat{\phi}$ points from the X-ray source to the detector bin indicated by $\xi$, accordingly $\hat{\phi}$ is a function
of $\theta$ and $\xi$. The data function is sampled at a variable number of views $N_\text{views}$ and 512 detector bins.
The noise level in the measured transmission data is specified by fixing the total number of incident photons to
\begin{linenomath}
\begin{equation*}
N_\text{photons} = 10^{10}.
\end{equation*}
\end{linenomath}
In the simulations we consider varying $N_\text{views}$ between 128 and 512, and for the maximum end of this
range the number of incident photons along each measured ray is approximately 38,000 photons, which is on the low
end of actual breast CT systems \cite{boone2005technique,sanchez2014task}. To model noise due to the detection of finite numbers of quanta,
a Poisson distribution is assumed in the X-ray transmission measurements. Accounting for the logarithm processing
needed to arrive at the line-integration model, Eq. (\ref{xray}), we model the noisy discrete data with a Gaussian
distribution with mean
\begin{linenomath}
\begin{equation}
\label{datamean}
\bar{g}_\ell = g(\theta_\ell,\xi_\ell),
\end{equation}
\end{linenomath}
and variance
\begin{linenomath}
\begin{equation}
\label{datavar}
\text{Var}(g_\ell) = \left(\frac{N_\text{photons}}{N_\text{views}} \exp(-g(\theta_\ell,\xi_\ell))\right)^{-1},
\end{equation}
\end{linenomath}
where $\ell$ is an index for each of the transmission rays in the projection data.
It is clear from Eq. (\ref{datavar}) that the noise variance decreases with decreasing numbers of views,
and there is a tradeoff between $N_\text{views}$ and signal-to-noise ratio in each projection.
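For reference, a minimal sketch of this data model, Eqs. (\ref{datamean})
and (\ref{datavar}), is given below; the noiseless sinogram here is a
made-up stand-in for $Pf$, not the full simulation code:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_photons, N_views, N_bins = 1e10, 256, 512

# Hypothetical noiseless line-integral sinogram standing in for P f:
g_bar = 2.0 * np.exp(-np.linspace(-2, 2, N_bins)**2) * np.ones((N_views, 1))

# Gaussian approximation to the post-log noise, Eq. (datavar):
var = 1.0 / (N_photons / N_views * np.exp(-g_bar))
g_noisy = g_bar + rng.normal(size=g_bar.shape) * np.sqrt(var)
print(g_noisy.shape, var.max())  # variance grows with g and with N_views
\end{verbatim}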
\subsection{TV-LSQ image reconstruction}
In order to formulate the TV-LSQ optimization the continuous data model in Eq. (\ref{xray})
is discretized, taking the form of a large linear system
\begin{linenomath}
\begin{equation*}
g=Xf,
\end{equation*}
\end{linenomath}
where the pixelized 512$\times$512 image is represented by $f$; X-ray projection becomes the matrix $X$;
and the $N_\text{views} \times 512$ data is denoted by $g$. Because we consider $N_\text{views} \le 512$ this linear
system can be under-determined.
The TV-LSQ optimization problem is formulated as
\begin{linenomath}
\begin{equation}
\label{TVLSQ}
f^\star = \argmin_f \frac{1}{2} \| g-Xf\|^2_2 \text{ such that } \|(|Df|_\text{mag})\|_1 \le \gamma,
\end{equation}
\end{linenomath}
where $D$ is the finite differencing approximation to the image gradient; $|\cdot|_\text{mag}$ is the pixelwise
magnitude of the spatial gradient vector $Df$; $\|(|Df|_\text{mag})\|_1$ is the image total variation (TV);
and $\gamma$ is the TV constraint value. When the data $g$ are generated from a test image $f_\text{test}$ with no noise added,
the test image can be recovered with TV-LSQ choosing $\gamma=\|(|Df_\text{test}|_\text{mag})\|_1$ for
sparse-view sampling with $N_\text{views} < 512$. The degree of under-sampling permitted depends on the sparsity
in the gradient magnitude image (GMI) $|D f_\text{test}|_\text{mag}$ \cite{jorgensen2015little}.
This possibility of accurate image reconstruction for sparse-view CT enables the consideration of the CT configurations
described in the breast CT simulation.
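As a point of reference (a sketch of the constraint functional using
simple forward differences; the exact discretization of $D$ used in the
study may differ), the image TV can be computed as:
\begin{verbatim}
import numpy as np

def image_tv(f):
    """l1 norm of the pixelwise gradient magnitude |Df|_mag."""
    dx = np.diff(f, axis=0, append=f[-1:, :])   # forward differences
    dy = np.diff(f, axis=1, append=f[:, -1:])
    return np.sum(np.sqrt(dx**2 + dy**2))

f_test = np.zeros((512, 512)); f_test[200:300, 220:320] = 1.0
print(image_tv(f_test))  # candidate constraint gamma for exact recovery
\end{verbatim}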
The TV-LSQ optimization problem can be efficiently solved by the Chambolle-Pock primal-dual (CPPD)
algorithm \cite{chambolle2011first,Pock2011,SidkyCP:12}.
For completeness we provide the pseudocode for this algorithm in Appendix \ref{app:cppd}.
We do consider early stopping of the CPPD algorithm and allow the total number
of iterations, $N_\text{iter}$, to vary from 10 to 500. At $N_\text{iter}=500$
the TV-LSQ problem is solved to a high degree of numerical accuracy for all scan configurations considered in this work.
In total, three parameters are varied in the breast CT simulation: $N_\text{views}$, $N_\text{iter}$, and the TV constraint $\gamma$.
Even under this restricted simulation with three parameters specifying the image,
it is difficult to tune by eye; not only is the parameter space too large, but the image qualities
are difficult to compare. As will be seen, quantitative image fidelity metrics such as image RMSE,
alone, may not provide a reasonable objective means
of image comparison and optimization, particularly when small subtle signals are the features of interest.
\subsection{SKE/BKE signal detection model}
To provide an objective metric that characterizes the preservation of subtle details in the TV-LSQ reconstructed images,
signal detection theory is employed to measure the loss of signal detectability for an ideal observer model.
The design of the detectability metric involves the following steps:
select the signal properties such that it is on the border
of detectability in the sinogram data domain; generate multiple realizations of signal-present and signal-absent sinograms;
perform TV-LSQ reconstruction of all data realizations; divide the resulting image set into training and testing data;
train the signal-present/signal-absent classifier; and finally, measure the image domain detectability with the testing images.
The data model and data signal detection task is set up so that the ideal observer performance can be analytically computed.
In this way, the data domain detectability serves as a precisely known upper bound to the image domain detectability. The loss in detectability,
passing through image reconstruction, provides a quantitative measure that is an indication of loss of fine details in the image
and may reflect the subjective property of image patchiness.
\begin{figure}[!t]
\centerline{\includegraphics[width=0.5\columnwidth]{detectionTaskWithBox-eps-converted-to.pdf}}
\caption{(Left) Background image used for the signal-known-exactly/background-known-exactly detection task.
The gray scale window is [0.174,0.253] cm$^{-1}$.
(Right) Central 128$\times$128 ROI of the mean difference of 200 filtered back-projection (FBP) reconstructed noise realizations
from the signal-present and signal-absent sinograms. The size of this central ROI is indicated with the yellow box on the background image.
The gray scale window is [-0.0075,0.02] cm$^{-1}$.
}
\label{fig:detection}
\end{figure}
The images in Fig. \ref{fig:detection} illustrate the detection task employed in this work.
The background disk attenuation is representative of fat tissue and is set to 0.194 cm$^{-1}$. The ring at the edge represents
the skin-line with attenuation 0.233 cm$^{-1}$.
The phantom is defined on a 2048$\times$2048 grid and is 18$\times$18 cm$^{2}$
in physical dimensions. The pixel size is chosen much smaller than the detector resolution so that the phantom
can be regarded as quasi-continuous. Projection of this background image yields the mean background sinogram.
The signal is defined as a Gaussian function with full-width-half-maximum of 100 microns (the reconstructed image
grid uses a pixel size of 350 microns) and amplitude of 0.04 cm$^{-1}$. Projection of the background plus signal
yields the mean signal-present sinogram.
To appreciate the difficulty of the detection task, we also show in Fig. \ref{fig:detection}
the mean difference image of both hypotheses
over 200 noise realizations, reconstructed by FBP for $N_\text{views}=512$.
The reconstruction grid is a 512$\times$512 pixel array.
It is apparent that the speckle noise is still visible even after averaging over 200 realizations; the signal
would not be visible in the reconstructed image from a single noise realization.
The data domain ideal observer detectability is computed as a signal-to-noise ratio (SNR) for detection,
see Sec. 13.2.8 in Barrett and Myers\cite{barrett2004foundations},
which is straight-forward for the data model specified in Eqs. (\ref{datamean}) and (\ref{datavar}). For additive Gaussian noise,
using the small signal approximation, the ideal observer and ideal linear observer are equivalent.
The ideal linear observer performance is computed by first solving for the Hotelling template
\begin{linenomath}
\begin{equation*}
w_\text{data} = \frac{\bar{g}_\text{sig+} - \bar{g}_\text{sig--}}{\text{Var}(g_\text{sig--})},
\end{equation*}
\end{linenomath}
where
\begin{linenomath}
\begin{align*}
\bar{g}_\text{sig+} &= P f_\text{sig+},\\
\bar{g}_\text{sig--} &= P f_\text{sig--};
\end{align*}
\end{linenomath}
and the small-signal approximation amounts to assuming
\begin{linenomath}
\begin{equation*}
\text{Var}(g_\text{sig+}) \approx
\text{Var}(g_\text{sig--}).
\end{equation*}
\end{linenomath}
The SNR for detection in the data domain is computed from the dot product of the Hotelling template and the signal projection data
\begin{linenomath}
\begin{equation*}
\text{SNR}^2_\text{data} = w_\text{data}^\top (\bar{g}_\text{sig+} - \bar{g}_\text{sig--}).
\end{equation*}
\end{linenomath}
The SNR metric can be converted to a receiver operating characteristic (ROC) area-under-the-curve (AUC), or equivalently
a percent-correct (PC) on a two-alternative-forced-choice (2-AFC) observer experiment (page 823 in Barrett and Myers\cite{barrett2004foundations})
\begin{linenomath}
\begin{equation*}
\text{PC}_\text{data} = \text{AUC}_\text{data} = \frac{1}{2} + \frac{1}{2} \text{erf} \left(\frac{\text{SNR}_\text{data}}{2}\right).
\end{equation*}
\end{linenomath}
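A minimal sketch of this data-domain computation (assuming the diagonal
Gaussian covariance of the data model above; the inputs here are
hypothetical) is:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def pc_data(g_sig, g_bkg, var):
    """Ideal linear observer 2-AFC percent correct in the data domain."""
    dg = (g_sig - g_bkg).ravel()
    w = dg / var.ravel()                  # Hotelling template
    snr = np.sqrt(np.dot(w, dg))          # SNR^2 = w . (g_sig+ - g_sig-)
    return 0.5 + 0.5 * erf(snr / 2.0)

g_bkg = np.zeros(1000); g_sig = g_bkg.copy(); g_sig[:50] = 0.3
print(pc_data(g_sig, g_bkg, np.ones(1000)))  # between 0.5 and 1.0
\end{verbatim}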
For the equal-dose breast CT simulation at the specified noise level and the given signal properties, the signal detectability
in the data domain corresponds to
\begin{linenomath}
\begin{equation*}
\text{PC}_\text{data} = 86.57 \%,
\end{equation*}
\end{linenomath}
where the possible performance values range from 50\%, corresponding to guessing on the 2-AFC experiment, to 100\%, a perfect 2-AFC
score. That the ideal observer performance is significantly less than 100\% in the data domain is intended by design.
This design requirement is why it is necessary to use the subtle signal shown in Fig. \ref{fig:detection}.
As pointed out in Sec. 13.2.6 of Barrett and Myers\cite{barrett2004foundations}, image reconstruction can only maintain or lose signal detectability with the ideal
observer, and as a result the ideal observer is not commonly used for assessing tomographic images after reconstruction.
Essentially, from the ideal observer perspective, image reconstruction should not be performed at all. Constrained by the fact
that human observers can interpret reconstructed images much more easily than sinograms, there is still potentially useful
knowledge to be gained from the ideal observer in assessing the efficiency of the image reconstruction algorithm; namely,
it can address the question of how well the separability between signal-present and signal-absent hypotheses is preserved
in passing through image reconstruction. In other words, does the image reconstruction algorithm wipe out the signal
in the detection task? This is a particularly relevant question for recent efforts in non-linear image reconstruction where
strong assumptions are being exploited to obtain tomographic images for sparse sampling conditions or low-dose scanning.
The image-domain ideal observer performance is also useful in that it provides a theoretical upper bound on human observer
performance, and no amount of post-processing will allow this bound to be exceeded.
For computing the image-domain detectability, we employ the 2-AFC PC figure-of-merit for the ideal observer in the image domain
because it is easy to interpret; the 2-AFC test intuitively connects image ensemble properties with single image noise realizations;
and we have a hard theoretical upper bound: it cannot exceed $\text{PC}_\text{data}=86.57$\%. This last property, that
\begin{linenomath}
\begin{equation*}
\text{PC}_\text{image} \le \text{PC}_\text{data},
\end{equation*}
\end{linenomath}
also naturally provides a measure for the loss of signal-detectability passing through image reconstruction.
To provide an accurate and precise estimate of $\text{PC}_\text{image}$, 4000 noisy data realizations of both signal-present
and signal-absent hypotheses are generated. All of the data realizations are reconstructed with the TV-LSQ algorithm.
Half of the resulting images under each hypothesis are used to train an ideal-observer classifier, and the remaining
half of the images is used to generate the $\text{PC}_\text{image}$ metric and its error bars.
(Because $\text{PC}_\text{image}$ is computed from noise realizations, its estimate has inherent uncertainty,
and it is therefore necessary to work with a small signal.
If the data-domain PC is close to 100\%,
the resulting drop in going to the image-domain PC may be too small to be significant.)
The large number of image realizations leads to a high precision, and the accuracy results from surveying a number
of classifiers including both ideal linear observer and ideal observer estimators. For the ideal linear observer, we have
investigated the channelized Hotelling observer \cite{gallas2003validating} with different channel formulations and a single-layer neural network (SLNN)
\cite{zhou2019approximating}. For the ideal observer, several implementations of a
convolutional neural network (CNN) \cite{zhou2019approximating} have been explored.
We have found that a hybrid-CHO yields $\text{PC}_\text{image}$ values that agree, within error bars, with the results from the NN classifiers
over the range of simulation parameters investigated.
We present the hybrid-CHO because of its relative simplicity, but the equivalence of the hybrid-CHO with the SLNN and CNN is significant
because the hybrid-CHO exploits approximate rotational symmetry in the detection task while the SLNN and CNN do not.
This approximate symmetry allows for a reduction in the number of channels needed for the hybrid-CHO, and the equivalence with the NN-based
observers indicates that the reduced channel set does not compromise its performance as an observer model.
\subsubsection{Hybrid-CHO}
The theory for estimation of the CHO $\text{PC}_\text{image}$ and its variance is covered in Gallas and Barrett\cite{gallas2003validating} and
Chen {\it et al.}\cite{chen2012classifier}.
The hybrid-CHO developed here exploits approximate rotational symmetry that
results from use of a small rotationally-symmetric signal and uniform angular sampling in the sinogram.
Because the detection task design is approximately rotationally symmetric, it
lends itself well to the use of standard Laguerre-Gauss channels \cite{gallas2003validating}, which are circularly symmetric.
The Laguerre-Gauss channels on their own, however, do not provide an optimal basis because of the small size of the signal
in combination with the fact that the image is discretized on a Cartesian grid. To account for both of these aspects of the
CT imaging set-up, we propose a hybrid channel set composed of Laguerre-Gauss channels combined with single-pixel channels
at the location of the signal. The observer model is referred to as a hybrid-CHO, reflecting this hybrid channel set.
The data for computing the hybrid-CHO performance consist of the central 128$\times$128 region of pixels from each of the 512$\times$512 image realizations;
thus there are a total of 4,000 signal-present and signal-absent 128$\times$128 ROIs for training and testing the hybrid-CHO.
The continuous definition of the Laguerre-Gauss channels is
\begin{linenomath}
\begin{align}
\label{LG}
L_n(x) &= \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k!} x^k \\
u_n(r|a) &= \frac{\sqrt{2}}{a} \exp \left( \frac{-\pi r^2}{a^2} \right) L_n \left(\frac{2 \pi r^2}{a^2} \right), \notag
\end{align}
\end{linenomath}
where the radius $r$ is defined by $r^2=x^2+y^2$; $x,y$ indicate location on the 128$\times$128 ROI; and the units of $x$ and $y$ are scaled
so that $(x,y)=(0,0)$ is the center of the ROI and $(x,y)=(1,1)$ is the upper right corner of the ROI. The parameters
of the Laguerre-Gauss channels are the order $n$ and Gaussian radial decay parameter $a$, which is specified in the same
scaled units as $r$.
The discrete representation of the Laguerre-Gauss channels is obtained by evaluating $u_n(r|a)$ at the center of each of the
pixels in the ROI.
The single-pixel channels are defined as
\begin{linenomath}
\begin{equation}
\label{SP}
u(s,t) =
\begin{cases}
1 & (i,j) = (s,t) \\
0 & (i,j) \neq (s,t)
\end{cases},
\end{equation}
\end{linenomath}
where $(i,j)$ are the integer coordinates of the pixels in
the discrete channel function; $(s,t)$ is the location of the unit impulse;
and the origin of the integer coordinates $(0,0)$ is at the lower left corner of the ROI.
The specific channel set employed for the breast CT simulation consists of fourteen channels.
The first ten are the discrete Laguerre-Gauss channels, $u_n(r|a)$, with $n\in [0,9]$ and $a=0.5$, and the remaining
four are the single-pixel channels, $\{ u(63,63),u(63,64),u(64,63),u(64,64) \}$.
Considering the channel functions as column vectors of length 16,384 (16,384 = 128$\cdot$128), where the pixel elements are in lexicographical order,
the fourteen channels form a channelization matrix $U$ of size 16,384$\times$14.
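A minimal sketch of how such a channelization matrix could be assembled is given below (our own illustrative code, not the authors' implementation; the pixel-center coordinate convention and the array orientation of the impulse channels are assumptions):
\begin{verbatim}
# Sketch: assemble the 16,384 x 14 hybrid channelization matrix
# (10 Laguerre-Gauss channels plus 4 single-pixel channels).
import numpy as np
from scipy.special import eval_laguerre   # Laguerre polynomial L_n

npix, a = 128, 0.5
# pixel-center coordinates, scaled so the ROI spans roughly [-1, 1]
x = (np.arange(npix) + 0.5 - npix / 2) / (npix / 2)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

channels = []
for n in range(10):                       # n = 0, ..., 9
    u = (np.sqrt(2.0) / a) * np.exp(-np.pi * r2 / a**2) \
        * eval_laguerre(n, 2.0 * np.pi * r2 / a**2)
    channels.append(u.ravel())            # lexicographic pixel order
for (s, t) in [(63, 63), (63, 64), (64, 63), (64, 64)]:
    u = np.zeros((npix, npix))
    u[t, s] = 1.0                         # impulse channel; the (s,t)-to-
    channels.append(u.ravel())            # array-index mapping is assumed

U = np.stack(channels, axis=1)            # shape (16384, 14)
\end{verbatim}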
The channelized linear classifier is computed by estimating the mean channelized signal and the channelized image
covariance. To compute these quantities,
the channelized images are first obtained from the reconstructed training images by
\begin{linenomath}
\begin{align*}
[u_\text{sig+}]_i &= U^\top \left[ f^\text{(recon)}_\text{sig+}\right]_i,\\
[u_\text{sig--}]_i &= U^\top \left[ f^\text{(recon)}_\text{sig--}\right]_i,
\end{align*}
\end{linenomath}
where $i$ is the realization index, which runs from 1 to $N_\text{real}=4000$;
and $f^\text{(recon)}_\text{sig+}$ ($f^\text{(recon)}_\text{sig--}$) is a column vector with pixels values from
the central 128x128 ROI extracted from the
reconstruction from signal-present (signal-absent) data.
The first $i=1$ through $N_\text{train}=2000$ realizations are assigned to the training set,
and the rest of the realizations $i=N_\text{train}+1$ through $N_\text{train}+N_\text{test}$ are assigned
to the testing set.
The mean channelized signal is
\begin{linenomath}
\begin{equation*}
s_u = (1/N_\text{train}) \sum_{i=1}^{N_\text{train}} ([u_\text{sig+}]_i -[u_\text{sig--}]_i).
\end{equation*}
\end{linenomath}
Using the small signal approximation, the training images under both hypotheses can be combined to provide
a covariance estimate
\begin{linenomath}
\begin{align*}
K_u = & (1/(2N_\text{train}-1)) \\
& \left[\sum_{i=1}^{N_\text{train}}
([u_\text{sig+}]_i - \bar{u}_\text{sig+})
([u_\text{sig+}]_i - \bar{u}_\text{sig+})^\top \right. \\
+ & \left. \sum_{i=1}^{N_\text{train}}
([u_\text{sig--}]_i - \bar{u}_\text{sig--})
([u_\text{sig--}]_i - \bar{u}_\text{sig--})^\top \right],
\end{align*}
\end{linenomath}
where the barred variables indicate mean over the ensemble of corresponding realizations.
The channelized Hotelling template is computed as
\begin{linenomath}
\begin{equation*}
w_u = K_u^{-1} s_u,
\end{equation*}
\end{linenomath}
and the ROI Hotelling template can be reconstituted by matrix-vector multiplication
\begin{linenomath}
\begin{equation*}
w_\text{image} = U w_u.
\end{equation*}
\end{linenomath}
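For concreteness, the training steps above can be summarized in the following sketch (illustrative code, not the authors' implementation; \texttt{rois\_sig} and \texttt{rois\_bkg} are assumed arrays holding the training ROIs as rows):
\begin{verbatim}
# Sketch: train the channelized Hotelling observer.
# rois_sig, rois_bkg: (N_train, 16384) arrays of lexicographically
# ordered signal-present / signal-absent training ROIs; U: (16384, 14).
import numpy as np

def train_hotelling(rois_sig, rois_bkg, U):
    u_sig = rois_sig @ U                # channelized images, (N_train, 14)
    u_bkg = rois_bkg @ U
    s_u = (u_sig - u_bkg).mean(axis=0)  # mean channelized signal
    d_sig = u_sig - u_sig.mean(axis=0)  # deviations from ensemble means
    d_bkg = u_bkg - u_bkg.mean(axis=0)
    n_train = rois_sig.shape[0]
    # pooled covariance over both hypotheses (small-signal approximation)
    K_u = (d_sig.T @ d_sig + d_bkg.T @ d_bkg) / (2 * n_train - 1)
    w_u = np.linalg.solve(K_u, s_u)     # channelized Hotelling template
    return U @ w_u                      # reconstituted ROI template
\end{verbatim}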
Dotting a test image with the Hotelling template $w_\text{image}$ provides the test statistic,
which can be compared with a set threshold to make the classification into either signal-present
or signal-absent hypotheses.
The detectability metric in the image domain is estimated by running a 2-AFC experiment with
the hybrid-CHO for
every possible combination of signal-present and signal-absent test images
\begin{linenomath}
\begin{align}
\label{pcimage}
\text{PC}_\text{image} &= (1/N_\text{test}^2) \sum_{i=1}^{N_\text{test}}\sum_{j=1}^{N_\text{test}}
c(a_i;b_j),\\
a_i &= w^\top_\text{image} \left[ f^\text{(recon)}_\text{sig+} \right]_{i+N_\text{train}}, \notag \\
b_j &= w^\top_\text{image} \left[ f^\text{(recon)}_\text{sig--} \right]_{j+N_\text{train}}, \notag
\end{align}
\end{linenomath}
where the two-sample kernel function $c(a;b)$ is defined as
\begin{linenomath}
\begin{equation*}
c(a;b) =
\begin{cases}
1 & a>b \\
0.5 &a=b \\
0 &a<b
\end{cases}.
\end{equation*}
\end{linenomath}
In the 2-AFC experiment, the Hotelling template is dotted with a pair of test images, where
one is drawn from the signal-present realizations and the other is drawn from the signal-absent realizations.
The image whose dot product yields the higher value is classified by the hybrid-CHO
as the signal-present image. The summation over the two-sample kernel function counts all of the
times that the hybrid-CHO identified the signal-present image correctly.
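A vectorized sketch of this exhaustive 2-AFC counting (our own illustrative code) is:
\begin{verbatim}
# Sketch: exhaustive 2-AFC estimate of PC_image.
# a, b: Hotelling test statistics for the signal-present and
# signal-absent test images, respectively.
import numpy as np

def pc_2afc(a, b):
    diff = a[:, None] - b[None, :]      # all (i, j) pairings
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / (a.size * b.size)
\end{verbatim}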
Once $\text{PC}_\text{image}$ is computed it can be compared with $\text{PC}_\text{data}$ to provide a measure
of loss of signal detectability. The quantity $\text{PC}_\text{data}$ is known analytically so the corresponding
value does not have error bars. The value $\text{PC}_\text{image}$, on the other hand, is estimated from realizations,
and thus it has variability due to the randomness of the testing set. There is also variability in the training
of the hybrid-CHO because it is computed from the random training images. To account for both sources of variability
we employ the level 2 variability estimation from Chen {\it et al.}\cite{chen2012classifier}, and the 95\% confidence intervals are reported.
\subsection{Test phantom for visual correspondence}
\begin{figure}[!t]
\centerline{\includegraphics[width=0.5\columnwidth]{phantom-eps-converted-to.pdf}}
\caption{
Computerized breast phantom with a contrast-detail (CD) insert.
The displayed images are the image of the phantom (top, left), the ROI
focused on the CD insert (top, right), an unregularized FBP reconstruction
(bottom, left), and a regularized FBP image (bottom, right).
For reference, the RMSE values of the unregularized and regularized FBP images are 0.0198 and 0.01155 cm$^{-1}$,
respectively.
The gray scale window for all panels is [0.174,0.253] cm$^{-1}$.}
\label{fig:phantom}
\end{figure}
In order to illustrate the correspondence between visual image quality and the image quality metrics,
the same simulation parameters and scan configurations are investigated using
a test phantom with a structured fibro-glandular tissue model \cite{Reiser10}, shown in Fig. \ref{fig:phantom}.
This breast phantom is composed of a 16~cm disk containing
background fat tissue at attenuation 0.194 cm$^{-1}$, a skin line, and randomly
generated fibro-glandular tissue at attenuation 0.233 cm$^{-1}$.
These components of the phantom are defined on a 2048$\times$2048 grid of dimensions
18$\times$18 cm$^{2}$.
The structured background allows for visualization of fine details. In order to have a more direct
comparison with a signal detection task, a contrast-detail (CD) insert is included in the phantom
consisting of an 8$\times$8 grid of point-like signals.
The signals are defined as analytic disks so that
the line-integrals through the signals can be computed exactly and their
contribution to the projection data is not subject to pixelization of the test phantom image.
The disk contrast in the CD insert increases linearly from 0.01 cm$^{-1}$ to 0.05 cm$^{-1}$ going from left to right,
and the disk radius starts at 200 microns and increases linearly to 500 microns going from top to bottom.
For reference, the reconstruction grid's image pixel width is 350 microns.
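As an illustrative sketch (our own code, not from the original simulation), the CD-insert parameter grids and the exact line integral through one analytic disk can be written as:
\begin{verbatim}
# Sketch: parameter grids of the 8x8 contrast-detail insert and the
# exact line integral through one analytic disk signal.
import numpy as np

contrasts = np.linspace(0.01, 0.05, 8)  # cm^-1, left to right
radii = np.linspace(200.0, 500.0, 8)    # microns, top to bottom

def disk_line_integral(contrast, R, p):
    # chord length through a disk of radius R at impact parameter p,
    # weighted by the disk contrast (zero if the ray misses the disk)
    return contrast * 2.0 * np.sqrt(max(R**2 - p**2, 0.0))
\end{verbatim}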
To appreciate the noise level of the breast CT simulation, ROIs are shown for images reconstructed
by FBP using a ramp filter and by FBP followed by Gaussian blurring.
For the FBP reconstructions, the $N_\text{views}=512$ scan configuration is used. Due to the high level
of speckle noise in the unregularized FBP image, it is difficult to see even the most conspicuous
of signals in the CD insert. With regularization, the larger, higher contrast corner of the CD insert
becomes visible.
\section{Results}
\label{sec:results}
The hybrid-CHO signal detection figure-of-merit and image RMSE are computed alongside
TV-LSQ reconstructed images of the breast phantom, exploring the three parameters
of the CT-simulation: $N_\text{iter}$, $N_\text{views}$, and TV constraint parameter $\gamma$.
The TV constraint is reported as a fraction of the TV of the ground truth image.
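For reference, a sketch of the (isotropic) image TV computation underlying this scaling is given below (our own code; the original work may use a different discrete-gradient convention):
\begin{verbatim}
# Sketch: isotropic image TV with forward differences
# (one possible discrete-gradient convention).
import numpy as np

def image_tv(f):
    dx = np.diff(f, axis=1)[:-1, :]     # horizontal differences
    dy = np.diff(f, axis=0)[:, :-1]     # vertical differences
    return np.sqrt(dx**2 + dy**2).sum()

# gamma is the TV constraint expressed relative to the phantom:
# gamma = tv_constraint / image_tv(phantom)
\end{verbatim}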
\subsection{Signal detectability as a function of iteration number}
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{traj128images-eps-converted-to.pdf}}
\centerline{~~~}
\centerline{\includegraphics[width=0.5\columnwidth]{traj128RMSErevised-eps-converted-to.pdf}
\includegraphics[width=0.5\columnwidth]{pciter-eps-converted-to.pdf}}
\caption{ (Top row)
Images reconstructed by TV-LSQ for $N_\text{views}=128$ and $\gamma=1.0$ with iteration number
increasing from left to right. The iteration number is indicated in each panel of the figure.
The gray scale window is [0.174,0.253] cm$^{-1}$.
(Bottom, left) Plot of the corresponding image RMSE values.
For reference, the RMSE of the FBP and regularized FBP images from Fig. \ref{fig:phantom}
are 0.0198 and 0.01155 cm$^{-1}$, respectively. Both FBP values are indicated in the plot with dashed lines in red and black, respectively.
(Bottom, right) Plot of the corresponding signal detectability metric, percent correct for an ideal-observer 2-AFC experiment.
The dashed line indicates the theoretical
maximum PC performance inherent in the data domain; it does not depend on iteration number and is indicated
for reference.
}
\label{fig:iterresults}
\end{figure}
The first set of results focuses on $N_\text{views}=128$ and $\gamma=1.0$, i.e. the TV constraint is equal
to the ground truth phantom TV.
A series of ROI images are shown in Fig. \ref{fig:iterresults} as a function of iteration number for the TV-LSQ
reconstruction of the breast phantom. From the perspective of accurate recovery of the phantom, the gray level
estimation appears to improve with increasing iteration number, as a general trend, which is to be expected
because the TV constraint is selected to be the TV of the test phantom. From the perspective of visualizing
the fine details in the image, the trend with iteration is more complex.
The structural detail in the fibro-glandular tissue and many of the signals are visible already
at 20 iterations, where it is clear from the overall gray value that the image is far from the solution
to the TV-LSQ problem. As the iterations progress, the larger signals of the CD insert appear more conspicuous as the
speckle noise amplitude is reduced. On the other hand, some of the more subtle features in the image appear to become
distorted as the iterations progress, and the numerically converged image has a classic patchy look where it
is difficult to distinguish noise from real structures.
Corresponding to the image series in Fig. \ref{fig:iterresults}, quantitative image quality metrics
are also plotted, showing image RMSE and signal-detection
$\text{PC}_\text{image}$. As expected, the RMSE trend shows improvement with iteration number,
and the RMSE converges to a value well below that of the FBP reference images in Fig. \ref{fig:phantom}.
Again, $\gamma$ is set to the truth value and the test phantom has a high degree of gradient sparsity;
thus the solution to the TV-LSQ optimization problem is expected to yield a mathematically accurate solution
and this is reflected in low RMSE values and the fact that the RMSE steadily improves as the TV-LSQ algorithm progresses
toward the solution. This trend coincides with the visual gray-level accuracy seen in the series of images.
It is interesting to note that the RMSE at $N_\text{iter}=500$ is substantially below the value of 0.01155 cm$^{-1}$ corresponding to the regularized
FBP image in Fig. \ref{fig:phantom}.
The iteration number trend for $\text{PC}_\text{image}$, however, runs opposite to the
image RMSE. There is a clear decline in the signal detectability at early iterations, and as
convergence is achieved this metric plateaus to a value well below the data domain signal detectability.
The trend in image detectability coincides with the increasing patchiness of the images shown
in Fig. \ref{fig:iterresults}.
The main point of the $\text{PC}_\text{image}$ metric is that it should reflect the disappearance of small subtle
details in the image, and in this example we see correspondence between this metric
and the overall patchiness of the images.
Thus the quantitative $\text{PC}_\text{image}$ metric appears to capture
the desired image properties, providing a quantitative measure of over-regularization.
How to use this information to determine algorithm parameters depends on the goal of the CT system design.
Clearly, the results of the iteration number study indicate that $\text{PC}_\text{image}$ cannot be used alone
to determine the optimal iteration number, because it attains its largest value after a single iteration.
As an aside, we note that a similar behavior was observed for the maximum likelihood expectation maximization (MLEM) algorithm
using a ROI-observer \cite{abbey1996observer}, and we take up a comparison of these experiments in Sec. \ref{sec:discussion}.
Using $\text{PC}_\text{image}$ in concert with image RMSE, which has the opposite trend, provides complementary information.
As an example of how it can be used, the desired
image could be specified by minimizing RMSE with the constraint that the loss in signal detectability
is bounded by a parameter $\epsilon$, i.e. $\text{PC}_\text{image}/\text{PC}_\text{data} \ge \epsilon$.
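A sketch of such a selection rule (our own illustrative code; \texttt{rmse} and \texttt{pc\_image} are hypothetical arrays over candidate parameter settings) is:
\begin{verbatim}
# Sketch: minimize RMSE subject to a bound on detectability loss.
import numpy as np

def select_setting(rmse, pc_image, pc_data, eps=0.97):
    feasible = pc_image / pc_data >= eps  # detectability-loss bound
    if not feasible.any():
        raise ValueError("no setting satisfies the bound")
    idx = np.flatnonzero(feasible)
    return idx[np.argmin(rmse[idx])]      # index of the chosen setting
\end{verbatim}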
Subjectively, the first frame at 20 iterations, in the series of images shown in Fig. \ref{fig:iterresults},
has the best signal visibility in the CD insert and the most realistic image texture.
The next image at 50 iterations already has a patchy appearance. Visualization of the intermediate frames (not shown)
suggests that a value of $\epsilon = 97.0$\% for this particular example provides a useful bound.
However, the details of how $\epsilon$ is chosen and how the detection task is designed depend on the
desired imaging goal. Here, we only aim to establish correspondence between $\text{PC}_\text{image}$
and the subjective image quality of patchiness or over-regularization with non-linear image reconstruction.
\subsection{Signal detectability as a function of $N_\text{views}$ and $\gamma$}
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{imagegrid-eps-converted-to.pdf}}
\centerline{~~~}
\centerline{\includegraphics[width=0.5\columnwidth]{gridRMSErevised-eps-converted-to.pdf}
\includegraphics[width=0.5\columnwidth]{pcgrid-eps-converted-to.pdf}}
\caption{(Top row)
Images reconstructed by TV-LSQ for $N_\text{iter}=100$, varying $N_\text{views}$ from top to
bottom and varying $\gamma$ from left to right.
These parameters are indicated in the figure panels.
The gray scale window is [0.174,0.253] cm$^{-1}$.
(Bottom, left) Plot of the corresponding image RMSE values.
For reference, the RMSE of the FBP and regularized FBP images from Fig. \ref{fig:phantom}
are 0.0198 and 0.01155 cm$^{-1}$, respectively, and the latter value is indicated in the plot with a dashed line.
(Bottom, right) Plot of the corresponding signal detectability metric, percent correct for an ideal-observer 2-AFC experiment.
The dashed line indicates the theoretical
maximum PC performance inherent in the data domain.
}
\label{fig:gridresults}
\end{figure}
For the next set of results, we fix $N_\text{iter}=100$ and vary the other two parameters of the breast CT simulation.
In Fig. \ref{fig:gridresults}, a grid of images is shown with each row and column corresponding to fixed $N_\text{views}$ and
$\gamma$, respectively.
As a general trend, the lower $\gamma$ values reduce the speckle noise and streaks in the image; however, it is also
clear that the heavy regularization imposed by $\gamma=0.75$ effectively renders the borderline signals in the CD insert invisible.
In terms of conspicuity of the signals in the CD insert, the images for $\gamma=1.5$ and above appear to have similar numbers of signals visible.
The trend in $N_\text{views}$ is more difficult to discern because the conditions of the scan are set up to be equal dose.
For the larger $\gamma$ values, the $N_\text{views}=128$ images appear to have streak artifacts in addition to the speckle noise.
In general, there is a different noise texture for the various equal-dose scan configurations.
The corresponding image RMSE and $\text{PC}_\text{image}$ IQ metrics are also plotted in Fig. \ref{fig:gridresults}.
The image RMSE favors $\gamma=1.0$, the ground truth TV value, although the RMSE for $\gamma=0.75$ is only slightly larger.
Also, the RMSE values decrease weakly with increasing $N_\text{views}$. The $\text{PC}_\text{image}$ values favor an opposite
trend, where the signal detectability increases with $\gamma$. Interestingly, for the different $N_\text{views}$ configurations,
the intermediate value $N_\text{views}=256$ is slightly favored, although the values for 256 and 512
have overlapping error bars.
Again, we point out that the metrics are complementary.
Going by $\text{PC}_\text{image}$ alone, the TV constraint would be abandoned.
Going by image RMSE alone, however, can also lead to an equally pathological situation where the image is over-regularized.
Using $\text{PC}_\text{image}$ in concert with RMSE
yields a more useful picture. We observe that, while it is true that $\text{PC}_\text{image}$ is monotonically increasing with $\gamma$,
there are clearly diminishing returns for $\gamma \ge 1.5$, where this metric appears to plateau. The RMSE, on the other hand,
favors lower $\gamma$ on the $\text{PC}_\text{image}$-plateau. Thus a prescription that combines the two metrics could
reasonably select an intermediate $\gamma$ value such as $\gamma=1.5$, where again
$\epsilon = \text{PC}_\text{image}/\text{PC}_\text{data} \ge 97$\%.
At this setting, we observe that the TV-LSQ reconstructed images in Fig. \ref{fig:gridresults} do not have the patchy appearance of
over-regularization with TV. Also, compared with the FBP images, the image RMSE is lower and more CD insert signals are visible for TV-LSQ at $\gamma=1.5$.
\subsection{Estimation of subject TV and its impact on IQ metric trends}
The dependence of the simulation results on $\gamma$ is referred throughout to the ground truth TV value, which is
object dependent. Thus applying the simulation-based IQ metrics to an actual scanning situation,
where the ground truth is unknown, raises two important questions: (1) how can the subject TV be determined, and
(2) does the subject TV reference value yield a universal IQ-metric dependence on $\gamma$? Two simulations
are performed to address both of these questions.
\begin{figure}[!t]
\centerline{\includegraphics[width=0.5\columnwidth]{validRMSErevised-eps-converted-to.pdf}}
\caption{Plot of the RMSE for the data used in image reconstruction (blue) and for the left-out testing
data set used for validation (red), as a function
of the TV constraint parameter. The validation RMSE has a minimum at $\gamma=0.9$ in units scaled to
the ground truth image TV.
}
\label{fig:validation}
\end{figure}
To estimate
the subject TV, $\gamma_0$, we have successfully applied a validation technique \cite{schmidt2017spectral} where
image reconstruction is performed with a fraction of the available data and the remaining test data
are compared with the projection of the estimated image. The constraint value is estimated to be the value
that yields the smallest discrepancy between the test data and the corresponding estimated data.
We perform this computation in the context of the
present breast CT simulation for $N_\text{views}=128$ and $N_\text{iter}=500$. Image reconstruction is performed with 90\% of the
available line-integration data, chosen from the sinogram at random. This leaves 10\% of the data for independent testing.
The resulting reconstructed image is projected and the RMSEs
for the reconstruction and testing data are plotted in Fig. \ref{fig:validation} as a function of $\gamma$.
From Fig. \ref{fig:validation}, we observe that there is a monotonically decreasing trend in the reconstruction data RMSE
as a function of $\gamma$, but the data RMSE of the testing set shows a minimum at $\gamma=0.9$, which is close to the
true value of $\gamma=1.0$.
This result demonstrates that this validation technique can provide
an estimate of the subject TV to within 10 percent.
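A minimal sketch of this hold-out validation loop is given below (our own code; \texttt{reconstruct\_tv\_lsq} and \texttt{project} are hypothetical stand-ins for the reconstruction and forward-projection operators):
\begin{verbatim}
# Sketch: hold-out validation for the subject TV constraint.
import numpy as np

def validate_gamma(sino, gammas, frac_train=0.9, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(sino.shape) < frac_train  # random 90/10 split
    test_rmse = []
    for g in gammas:
        img = reconstruct_tv_lsq(sino, mask, gamma=g)  # hypothetical
        resid = (project(img) - sino)[~mask]           # hypothetical
        test_rmse.append(np.sqrt(np.mean(resid**2)))
    return gammas[int(np.argmin(test_rmse))]
\end{verbatim}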
\begin{figure}[!t]
\centerline{\includegraphics[width=0.5\columnwidth]{pcscale-eps-converted-to.pdf}}
\caption{Detectability metrics using different background images. The label ``uniform''
refers to the use of the
background image shown in Fig. \ref{fig:detection}, and ``structured'' refers to using the breast
phantom in Fig. \ref{fig:phantom} as the background image. Note that the data domain percent correct
is lower for the structured background because it is more attenuating.
}
\label{fig:ucho}
\end{figure}
To address the universality question, the uniform background used in the
process of estimating $\text{PC}_\text{image}$ is changed to the non-uniform, but known, background of the
breast phantom. This modification alters $\gamma_0$ dramatically; thus it is of interest to compare
the resulting $\text{PC}_\text{image}$ curve as a function of $\gamma$. In Fig. \ref{fig:ucho} this metric
is plotted for $N_\text{views}=128$ and $N_\text{iter}=100$.
From the graph it is clear that there is some discrepancy between the numerical values
of $\text{PC}_\text{image}$ for the same value of the scaled parameter $\gamma$; however, the trend of this
metric as a function of $\gamma$ matches fairly well. That there is discrepancy in the
absolute numerical values is perhaps not too surprising considering the large difference in
background structure. The similarity in trends is further evidence of the potential utility of the
proposed IQ metric.
\section{Discussion}
\label{sec:discussion}
The proposed signal detectability index for non-linear image reconstruction bears some similarity
to the signal-detectability study of MLEM iteration number presented by Abbey {\it et al.}\cite{abbey1996observer}.
In particular, the ROI observer from that investigation showed a steadily decreasing trend with iteration number.
The two detectability indices, however, are different and need to be interpreted differently.
The detection task considered in Abbey {\it et al.}\cite{abbey1996observer} was meant to have direct relevance to a clinical detection
task, and furthermore the authors were seeking correspondence between model and human observers on signal detection.
For the detectability metric presented here, the signal size and amplitude are chosen so that the signal
is on the edge of detectability by the ideal observer in the data space. This signal is much too small to be detected
by a human observer; thus the detection task design itself is not directly relevant to a clinical detection task.
The design and purpose of this detection task is meant to be a surrogate for the subjective image
property of patchiness specific to over-regularization in TV-LSQ reconstructed images.
The reduction of $\text{PC}_\text{image}$ relative to $\text{PC}_\text{data}$ represents an irretrievable loss
of information in distinguishing signal-present and signal-absent hypotheses. No post-processing operations
can improve on $\text{PC}_\text{image}$. This metric, however, only captures loss of detectability
due to non-invertibility of the image reconstruction algorithm. It does not necessarily reflect distortion of the
signal. For example, regularizing FBP images with moderate blurring, such as what is seen in Fig. \ref{fig:phantom},
is invertible and does not cause a reduction in PC even though the signal itself is broadened by the blurring operation.
Reconstructing FBP images onto an image grid of large pixels, on the other hand, is non-invertible and does cause a
loss in detectability \cite{sanchez2013comparison}.
The SKE/BKE detection task paradigm with a small rotationally-symmetric signal and uniform projection-angle sampling
allows for the hybrid-CHO to accurately represent the ideal linear observer with a relatively small set of channels,
because the detection task is well-suited to the rotationally-symmetric LG channels. Considering non-rotationally symmetric
signals or scanning angular ranges less than 2$\pi$ breaks this symmetry. In such cases, a different channel
representation and possibly a larger channel set would need to be developed in order for the hybrid-CHO to represent
the ideal linear observer. The SKE/BKE detection task also considers the signal at one location in the image.
For the presented non-linear image reconstruction algorithm, this limitation does not impact the utility of the
metric because the TV regularization is applied isotropically over the image and results are not expected
to change appreciably for different signal locations. Regularization techniques that involve
spatially varying weighting need to consider either multiple SKE/BKE detection tasks with signals at different locations
or a signal-known-statistically (SKS) detection task where the signal location is drawn from a spatially uniform
probability distribution.
\section{Conclusion}
\label{sec:conclusion}
We have developed and presented an image quality metric that is sensitive to the removal of subtle details
in the image and that can be applied to the non-linear TV-LSQ image reconstruction algorithm. The metric
is based on the detection of a small signal at the border of detectability by the ideal observer, and it
is hypothesized to quantify
the subjective visual removal of subtle image details.
The design of the proposed detection task, the use of the ideal observer, and the connection with the 2-AFC
experiment make the metric easy to interpret.
The detectability index, which is an estimate of a property
of an ensemble of reconstructed images, is connected to single image realizations through the interpretation as
a PC on a 2-AFC experiment.
Loss of detectability through the image reconstruction process, i.e. $\text{PC}_\text{image} < \text{PC}_\text{data}$,
unambiguously represents a quantitative decrease in the ability to distinguish signal-present and signal-absent images.
The bounds on this metric are clear: $0.5 \le \text{PC}_\text{image} \le \text{PC}_\text{data}$, where the lower
limit of 0.5 represents guessing on the 2-AFC experiment and the upper limit is the analytically known $\text{PC}_\text{data}$.
Correspondence between $\text{PC}_\text{image}$ and visual assessment of the reconstructed images of the
breast CT simulation shows that this metric may serve to quantify TV-LSQ over-regularization.
A decrease in this metric is seen to coincide with loss of borderline signals in the CD insert and with
patchiness in the appearance of the images.
This metric is seen to be complementary to widely used image fidelity metrics such as image RMSE, and it may
help to provide an objective means to establish useful tomographic system parameter settings.
The presented methodology may also prove useful for quantifying over-regularization with other non-linear image
reconstruction techniques.
\section{Introduction}
The discovery of several peculiar Type Ia supernovae (SNe~Ia) has drawn the attention both to the photometric and spectroscopic diversity among this class of otherwise homogeneous transients. The dispersion of the luminosity--decline rate relation \citep{phillips:1993,hamuy:1995,hamuya:1996} can be explained
by an additional correlation between the decline rate and the colour at maximum light \citep{hamuyb:1996,tripp:1998,branch:1998,tripp:1999}. Hence SNe~Ia can be arranged into a photometric sequence extending from luminous, blue, slowly declining events, like
SN~1991T, to normal events \citep{branch:1993}, and finally to sub-luminous, red, quickly declining objects, like SN~1991bg \citep{filippenkoa:1992,filippenkob:1992,lei:1993,turatto:1996}.
SNe Ia also appear to form a spectroscopic sequence based on the
systematic variations in the flux ratios of several spectral features near maximum light \citep[e.g. Si~II $\lambda\lambda$5972, 6355, see][]{nugent:1995}.
The common view is that, despite their diversity, peculiar events such as the luminous 1991T-like and sub-luminous 1991bg-like SNe~Ia, just like the normal population of SNe~Ia,
originate from the thermonuclear explosion of a C/O white dwarf (WD) that exceeds the Chandrasekhar mass after accreting mass from a companion star in a binary system.
However, there is a group of peculiar SNe~Ia that challenges the canonical Chandrasekhar-mass explosion channel. The prototype of this class is SN~2002cx \citep{li:2003}, which shows a peak luminosity significantly lower than that of normal SNe~Ia, even though its light curve decline-rate parameter is comparable to normal events. Spectra obtained near maximum light resemble those of over-luminous 1991T-like objects (with a blue continuum and absorption from higher-ionisation species), even if a low ejecta velocity ($\sim6000$~km~s$^{-1}$ at the epoch of $B$-band maximum light) points towards a moderate kinetic energy from the explosion \citep{li:2003}. The late-time spectra show narrow iron and cobalt lines \citep{li:2003,jha:2006}, in stark contrast to normal SNe~Ia at similar epochs. After the pioneering studies by \cite{li:2003} and \cite{jha:2006} on SN~2002cx, both new and old SN discoveries have been classified or reclassified as 2002cx-like events, and it has become clear these transients are not so rare. This class was labelled Type~Iax supernovae (SNe~Iax) by \cite{foley:2013}, who presented a review on the entire group and defined clear observational criteria to classify a Type~Iax event.
A variety of explosion scenarios and potential progenitors or progenitor systems have been proposed to explain each event \citep[see][and references therein for a recent review]{liua:2015}.
Although the leading models for SNe~Iax involve the thermonuclear explosion of a C/O WD \citep{foley:2009,jordan:2012,kromer:2013,fink:2014,stritz:2015,kromer:2015,liub:2015}, a core-collapse scenario has been proposed at least for SN~2008ha \citep{valenti:2009,foley:2009,moriya:2010}, which is the most extreme member of the SN~Iax class to date.
In principle, the best way to shed light on this issue would be the detection of a progenitor in pre-explosion images. Recently, \cite{mccully:2014} reported the detection of a luminous blue source coincident (at the 0.8$\sigma$ level) with the location of Type~Iax SN~2012Z in {\em Hubble Space Telescope} ({\em HST}) pre-explosion images.
Although the photometric properties of this object suggest a C/O WD primary plus a He-star companion progenitor system, the explosion of a single massive star has not definitely been ruled out. In this case, post-explosion imaging, obtained after the SN fades away, should help to distinguish between the two models.
For two other SNe~Iax, no sources were detected in pre-explosion images, but limits were obtained that exclude massive stars as potential progenitors \citep[SNe~2008ge and 2014dt, see][]{foley:2010a,foley:2015}.
Given the diversity of this SN class, one may consider the possibility that multiple progenitor channels may lead to the production of SNe~Iax.
In fact, the forty-some objects classified as SNe~Iax have a number of similarities, but also noteworthy differences.
In particular, they show a large range in luminosity at maximum, from $M_{V}$ $\approx$ $-$14.2~mag of the faint SN~2008ha \citep{valenti:2009,foley:2009,stritz:2014} to $M_{V}$ $\approx$ $-$18.5~mag of SNe~2009ku \citep{narayan:2011} and 2012Z \citep{stritz:2015,yamanaka:2015}. The ejecta velocities near maximum brightness also exhibit a large spread, ranging from $\sim2000$ to $\sim8000$~km~s$^{-1}$.
For the majority of SNe~Iax, there appears to be a correlation between ejecta velocity and peak luminosity, with the higher-velocity objects being also the brighter ones \citep{mcclelland:2010,foley:2013}. However, SN~2009ku, a low-velocity, high-luminosity SN studied by \cite{narayan:2011}, does not follow the trend \citep[note however that the first spectrum was taken long after maximum and so the inferred ejecta velocity is uncertain, see][]{foley:2013}.
In this paper, we present the results of a comprehensive observational campaign of SN~2014ck, which started well before maximum light. It turns out that SN~2014ck is an outlier among SNe~Iax, as it mirrors SN~2002cx from a photometric point of view, while the early spectra exhibit extremely narrow spectral lines, indicating very low expansion velocities of the ejecta.
This paper is organised as follows: in Section~2, we give some basic information about the SN discovery and the host galaxy, and we describe the follow-up campaign. In Section~3, we analyse {\it HST} pre-discovery images.
We discuss data reduction and present the photometric evolution and visual and near-infrared spectroscopic sequences of SN~2014ck in Section~4. In Section~5 the Galactic and host galaxy reddening is estimated.
Descriptions of the photometric and spectroscopic properties of SN~2014ck are reported in Sections~6 and 7, respectively. Expansion velocities of the ejecta, along with the photospheric temperatures, are deduced from the spectra. Spectral modelling with the {\tt SYNOW} code is used to assist in line identification. A final discussion of the available data in the context of the explosion models follows in Section~8.
\section{SN 2014ck discovery and follow-up observations}\label{discovery}
SN 2014ck was discovered by the Lick Observatory Supernova Search \citep[LOSS;][]{filippenko:2001}, on 2014 June 29.47 UT, at an apparent magnitude of 16.4~mag using the Katzman Automatic Imaging Telescope \citep[KAIT;][]{hayakawa:2014}. A marginal detection on 2014 June 24.5 UT was also reported by LOSS with an approximate $R$-band magnitude of 17.0~mag. However, a subsequent analysis of KAIT images on 2014 June 13, 23, 24, 25, 28 and 29, performed independently by the LOSS team (Zheng 2016, private communication) and by us\footnote{We thank WeiKang Zheng and Alex Filippenko for sending us LOSS/KAIT pre-discovery images.}, shifts this marginal detection approximately one day later (2014 June 25.5 UT, with an approximate $r$-band magnitude of 18.15$\pm$0.44 mag, as reported in Table~\ref{sloan}).
The SN is located 4\farcs3~E and 0\farcs5~S from the centre of the spiral galaxy UGC~12182 (Figure~1).
A heliocentric recessional velocity of 1490~km~s$^{-1}$ for UGC~12182 is listed in the NASA/IPAC Extragalactic Database (NED), as taken from ``The Updated Zwicky Catalogue'' \citep{falco:1999}. The distance and distance modulus (adopting $H_0 = 73 \pm 5$~km~s$^{-1}$~Mpc$^{-1}$), corrected for the Virgo, Great Attractor and Shapley infall, are $24.4 \pm 1.7$~Mpc and $\mu = 31.94 \pm 0.15$~mag, respectively \citep{mould:2000}.
We note that the correction for Virgo infall \cite[see Appendix~A in][]{mould:2000} includes two components: the correction for the infall to Virgo plus the vector contribution due to the Local Group's peculiar velocity with respect to Virgo.
We also note that in the Local Group the radial peculiar velocity dispersion is estimated to be $\sim 60$~km~s$^{-1}$ \citep[see for example][]{feldman:2003}, which accounts for about 25\% of the total error budget on $\mu$.
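As a quick check (our own snippet, using the distance quoted above), the distance modulus follows from $\mu = 5\log_{10}(d/10~{\rm pc})$:
\begin{verbatim}
# Sketch: distance modulus from the flow-corrected distance.
import numpy as np

d_mpc = 24.4                               # distance in Mpc
mu = 5.0 * np.log10(d_mpc * 1.0e6 / 10.0)  # mu = 5 log10(d / 10 pc)
print(f"mu = {mu:.2f} mag")                # ~31.94
\end{verbatim}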
Soon after discovery, spectroscopic classifications of SN~2014ck were obtained independently at the Lick Observatory and under the Asiago Classification Program \citep{tomasella:2014}.
The earliest spectrum indicated it was a SN~Iax on the rise \citep{masi:2014}, resembling SN~2005hk \citep{phillips:2007}, SN~2008ha \citep{valenti:2009, foley:2009} and SN~2010ae \citep{stritz:2014}.
Given the relatively small number of well-observed SNe~Iax in the literature and the early detection and classification, we initiated a follow-up campaign aimed to collect detailed optical and near-infrared (NIR) observations using several telescopes available to our collaboration.
Basic information on SN~2014ck and its host galaxy is summarised in Table~\ref{info}. Following the discussion in Section~6, we adopt a $V$-band maximum estimate of ${\rm MJD} = 56845.6\pm0.1$, which we use as a reference epoch throughout this work.
\begin{table}
\caption{Basic information on SN 2014ck and its host galaxy, UGC 12182.
}
\begin{center}
\begin{tabular}{llll}
\hline
Host galaxy & UGC 12182 \\
Galaxy type & Sbc \\
Heliocentric radial velocity & $1490 \pm 19$ km s$^{-1}$\\
Distance modulus & $31.94 \pm 0.15$ mag\\
Galactic extinction A$_{V}$ & $1.26 \pm 0.15$ mag\\
Total extinction A$_{V}$ & $\approx 1.5 \pm 0.3$ mag \\
& \\
SN type & Iax\\
R.A.\ (J2000.0) & $22^{\rm h}45^{\rm m}38^{\rm s}\mathllap{.}88$\\
Dec.\ (J2000.0) & $+73^\circ09'42\farcs7$ \\
Offset from nucleus & 4\farcs3 E, 0\farcs5 S\\
Estimated date of explosion (MJD) & $56828.2^{+2.7}_{-4.5}$ \\
Date of first detection (MJD) & 56832.5 \\
Date of $V$-band maximum (MJD) & $56845.6 \pm 0.1$ \\
$M_V$ at maximum & $-17.29 \pm 0.15$ mag\\
$M_B$ at maximum & $-17.37 \pm 0.15$ mag\\
$L_{\rm bol}$ at maximum & $1.91 \times 10^{42}$ erg s$^{-1}$ \\
\hline
\end{tabular}
\end{center}
\label{info}
\end{table}%
\begin{figure}
\includegraphics[scale=.47,angle=0]{14ck_map.png}
\caption{UGC 12182 and SN~2014ck: $r$-band image taken on 2014 October 28.84 UT with the Copernico 1.82~m Telescope (Asiago), with an inset of the SN region on the bottom left. The local sequence stars used for the calibration of non-photometric nights are indicated.}
\label{map1}
\end{figure}
\section{Hubble Space Telescope pre-explosion images}
\begin{figure}
\includegraphics[scale=.23,angle=0]{hst4.png}
\caption{{\it HST} pre-explosion image (F625W filter). The position of SN~2014ck is marked with ellipses. The outer ellipse corresponds to three times the uncertainty in the SN position.}
\label{map}
\end{figure}
Another transient, SN~2006fp \citep{puckett}, was previously discovered in the host galaxy of SN~2014ck.
The nature of this transient is unclear, but it was likely a SN~IIn or a SN impostor (i.e. the outburst of a luminous blue variable star), the latter being favoured by its spectral characteristics \citep{blondin}.
{\sl HST} imaging of the host of SN~2006fp was obtained with the Ultraviolet-Visible (UVIS) Channel of the Wide Field Camera 3 (WFC3; pixel scale $0\farcs04$ pix$^{-1}$).
Images were taken on 2013 February 22 UT ({\sl HST} proposal ID 13029; PI: A.~Filippenko) with the F625W (roughly $r$) and F814W (roughly $I$) passbands.
The archival flat-fielded images ({\sc flt}) were retrieved from the {\sl HST} MAST
Archive\footnote{https://archive.stsci.edu/hst/} and re-reduced using the WFC3 UVIS CTE correction software\footnote{http://www.stsci.edu/hst/wfc3/}
and the {\sc AstroDrizzle} software from the {\sc DrizzlePac} package \citep{gonzaga:2012}.
Next the absolute astrometry was registered to match the ground-based $g$-band images which were obtained with the LCOGT 2.0~m Telescope (Haleakala, Hawaii, USA) on 2014 August 11.37 UT. Astrometric alignment was accomplished by fitting a second-order Legendre polynomial with the {\sc iraf}\footnote{{\sc iraf} is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.}
tasks {\sc geomap} and {\sc geoxytran}, measuring the position of 16 stars that were visible in both the LCOGT and HST frames. This yielded an astrometric precision of 0\farcs033 and 0\farcs022 in the east-west and north-south directions, respectively.
The position of SN~2014ck in the LCOGT image was determined by fitting a Gaussian to the SN. We estimated the uncertainties in the position of the SN by running a Markov chain Monte Carlo (MCMC) analysis using the {\sc emcee} Python package \citep{emcee}. We found that the uncertainty on the SN position was 50~milliarcseconds.
Adding the astrometric-solution and SN positional uncertainties in quadrature, we adopt total uncertainties on the position in the F625W pre-explosion images of 0\farcs06 and 0\farcs055 in the east-west and north-south directions, respectively (see Figure~\ref{map}).
We next used {\sc dolphot}\footnote{{\sc dolphot} is a stellar photometry package that was adapted from HSTphot \citep{dolphin00}.} to measure the photometry of all the stars in the pre-explosion images. The sky subtraction (which in this case includes the diffuse contribution from the host galaxy) and PSF fits were done using the recommended parameters from the {\sc dolphot} manual.
No source was detected within $3\sigma$ of the position of SN~2014ck. Using the detected sources from a $200\times200$ pixel box centered around the SN position, we found $3\sigma$ limiting magnitudes of $m_{\rm F625W} > 26.95$ and $m_{\rm F814W} > 26.35$~mag in the Vega system. Adopting a distance modulus $\mu = 31.94$~mag and reddening estimate $E(B-V)_{\rm tot} \approx 0.5$~mag (see Section~5), we obtain an absolute magnitude limit $M_{\rm F625W} > -6.5$~mag. In passing, we note that there is no evidence of a stellar source at the position of SN~2006fp.
The search for progenitor candidates in pre-explosion {\it HST} images has previously been performed for the Type~Iax SNe~2008ge \citep{foley:2010a}, 2012Z \citep{mccully:2014}, and 2014dt \citep{foley:2015}.
At the position of SN~2012Z, \cite{mccully:2014} detected a bright ($M_{\rm F435W}=-5.43 \pm 0.15$~mag, $M_{\rm F814W}=-5.24 \pm 0.16$~mag, i.e. $M_{V} \sim -5.3$~mag) blue source, which they interpret as a non-degenerate He-star companion to a C/O WD. The source associated with SN~2012Z is the only probable progenitor system detected in pre-explosion images of SNe~Iax and of any SN~Ia \citep[][and references therein]{li:2011}.
\cite{wang:2014} and \cite{liua:2015,liub:2015} performed binary evolution simulations indicating this could indeed explain the observed photometry.
Yet the possibility that the source associated with SN~2012Z is a massive star cannot be entirely ruled out.
Planned observations after the fading of the SN may help to finally distinguish between these progenitor models \citep{mccully:2014}.
In the cases of SNe~2008ge and 2014dt, non-detections were reported and {\it HST} images
were used by \cite{foley:2010a} and \cite{foley:2015} to place $3\sigma$ limits on the absolute magnitudes of the progenitors, corresponding to $M_{V} > -6.7$~mag
and a relatively deep $M_{F450W} > -5.0$~mag, respectively.
For SN~2014dt, \cite{foley:2015} excluded a massive star as the progenitor and suggested a C/O WD primary plus a He-star companion progenitor system, similar to SN~2012Z. For both SN~2008ge \citep{foley:2010a} and SN~2014ck, the constraints on the luminosity of the undetected progenitor, a magnitude brighter than the SN~2012Z detection, rule out only the most-luminous Wolf-Rayet stars \citep[roughly corresponding to stars with initial masses of $60-65 M_\odot$,][]{crowther:2007}.
\section{Observation and data reduction}
\subsection{Photometry}
\begin{table*}
\caption{List of observing facilities employed for optical and NIR photometry.} \label{telescopes_phot}
\begin{tabular}{lllcc}
\hline
Telescope$^1$ & Instrument & Site & FoV & Scale \\
& & & [arcmin$^{2}$] & [arcsec pix$^{-1}$]\\
\hline
\multicolumn{5}{c}{ \bf Optical facilities }\\
LCOGT & Spectral$^2$ & Haleakala, Hawaii (USA) & $10 \times 10$ & 0.30 \\
LCOGT & SBIG & McDonald Observatory, Texas (USA) & $16 \times 16$ & 0.47 \\
Copernico & AFOSC & Asiago, Mount Ekar (Italy) & $8.8 \times 8.8$ & 0.48 \\
NOT & ALFOSC & Roque de los Muchachos, La Palma, Canary Islands (Spain) & $ 6.4 \times 6.4 $ & 0.19 \\
TNG & LRS & Roque de los Muchachos, La Palma, Canary Islands (Spain) &$8.6 \times 8.6 $ & 0.25 \\
\multicolumn{5}{c}{ \bf NIR facilities }\\
NOT & NOTCam & Roque de los Muchachos, La Palma, Canary Islands (Spain) & $4 \times 4 $ & 0.23\\
TNG & NICS & Roque de los Muchachos, La Palma, Canary Islands (Spain) &$4.2 \times 4.2 $ & 0.25 \\
\hline
\end{tabular}
$^1$ LCOGT = Las Cumbres Observatory Global Telescope Network \citep{brown:2013}; Copernico = INAF Osservatorio Astronomico di Padova 1.82~m Telescope (Mt.~Ekar, Asiago, Italy); NOT = 2.56~m Nordic Optical Telescope (La Palma, Spain); TNG = 3.58~m Telescopio Nazionale Galileo (La Palma, Spain).
$^2$ Spectral is a photometric camera mounted on the Faulkes Telescopes of the LCOGT network.
\end{table*}
\begin{figure}
\includegraphics[scale=.52, angle=0]{14ck_lc.png}
\caption{Light curves of SN~2014ck in the $uBVgriJHK$ bands. Sloan $ugri$ AB magnitudes are plotted here as Vega magnitudes for uniformity with the $BVJHK$ bands, following Blanton \& Roweis (2007). For clarity, the light curves have been shifted vertically as indicated in the legend. The uncertainties for most data points are smaller than the plotted symbols. The last $BVgri$ photometric epoch is an upper limit.
(A colour version of this figure is available in the online journal).} \label{lc}
\end{figure}
Optical ($uBVgri$) and NIR ($JHK$) imaging of SN~2014ck started a few days after discovery and continued over the course of about six months. The telescopes and their associated instruments used for the photometric campaign are listed in Table~\ref{telescopes_phot}.
All frames were pre-processed using standard procedures in {\sc iraf} for bias subtraction and flat fielding. For the NIR exposures, sky background subtraction was also performed. Multiple exposures obtained in the same night were aligned and combined to increase the signal-to-noise ratio.
Over the course of multiple photometric nights, \cite{landolt:1992} and Sloan Digital Sky Survey (SDSS)\footnote{http://www.sdss.org} standard fields were observed in order to calibrate a local sequence of stars in the field of UGC~12182 (see Table~\ref{sequence} and Figure~\ref{map1}). The local sequence was used to compute zero-points for non-photometric nights. In the NIR, stars from the 2MASS catalog were used for the calibration.
We verified that photometry taken at similar phases but with different instrumentation was in excellent
agreement, checking, for all bands, the RMS dispersion of the whole data set against the dispersion of the sub-sets coming from each instrument. Thus no additional $S$-correction \citep{stritz:2002} was applied.
\begin{table*}
\caption{Magnitudes for the local sequence stars, as indicated in Figure~\ref{map1}, with associated errors in parentheses (Vega mag).}\label{sequence}
\begin{tabular}{cccccccc}
\hline
ID & R.A.\ (J2000.0) & Dec.\ (J2000.0) & $U$ & $B$ & $V$ & $R$ & $I$ \\
& & & [mag]&[mag]&[mag]&[mag]&[mag]\\
\hline
a&22:45:38.708 & 73:05:53.33 &19.241 (0.024)& 18.137 (0.016)& 16.848 (0.023)& 16.047 (0.019)& 15.443 (0.013)\\
b&22:45:31.661 &73:06:33.20 &18.992 (0.026)& 18.566 (0.015)& 17.566 (0.019)& 16.918 (0.006)& 16.393 (0.013)\\
c&22:45:23.408 & 73:07:22.70 &19.537 (0.019)& 19.373 (0.008)& 18.496 (0.016)& $-$ &$-$\\
d&22:45:16.752 &73:07:52.77 &18.649 (0.002)& 17.425 (0.008)& 16.042 (0.015)& $-$ &14.471 (0.006)\\
e&22:45:06.405 &73:08:25.75 &18.402 (0.006)& 18.189 (0.002)& 17.295 (0.008)& 16.646 (0.002)& 16.119 (0.014)\\
f&22:45:05.321 &73:08:42.04 &18.833 (0.028)& 18.526 (0.003)& 17.568 (0.012)& 16.867 (0.004)& 16.325 (0.022)\\
g&22:45:51.990 &73:06:56.89 &17.739 (0.024)& 17.501 (0.012)& 16.631 (0.013)& 16.078 (0.004)& 15.570 (0.010)\\
h&22:45:01.180 &73:09:54.94 &18.027 (0.010)& 16.653 (0.004)& 15.310 (0.016)& 14.397 (0.021)& 13.565 (0.027)\\
i&22:45:19.245 &73:09:13.70 &19.239 (0.008)& 18.956 (0.010)& 17.986 (0.011)& 17.408 (0.028)& 16.747 (0.008)\\
j&22:45:21.834 &73:09:53.57 &19.537 (0.008)& 19.171 (0.012)& 18.166 (0.014)& 17.636 (0.018)& 16.975 (0.008)\\
k&22:45:42.885 &73:09:05.03 &18.547 (0.019)& 18.216 (0.008)& 17.381 (0.010)& 16.928 (0.014)& 16.365 (0.012)\\
l&22:45:33.912 &73:09:55.99 &19.115 (0.005)& 18.457 (0.010)& 17.398 (0.006)& 16.804 (0.011)& 16.144 (0.016)\\
m&22:45:32.298 &73:12:45.27& 18.003 (0.002)& 17.157 (0.005)& 15.966 (0.017)& 15.329 (0.012)& 14.566 (0.006)\\
n&22:45:56.437 &73:11:45.34 &17.787 (0.017)& 17.024 (0.009)& 15.908 (0.010)& 15.322 (0.014)& 14.613 (0.013)\\
o&22:45:38.507 &73:11:34.89 &18.715 (0.008)& 18.475 (0.002)& 17.628 (0.026)& 17.171 (0.006)& 16.534 (0.002)\\
p&22:46:05.921 &73:09:59.35 &18.456 (0.005)& 18.117 (0.005)& 17.233 (0.019)& 16.754 (0.012)& 16.144 (0.008)\\
q&22:45:15.643 &73:11:50.85 &15.884 (0.008)& 15.661 (0.011)& 14.865 (0.012)& 14.406 (0.014)& 13.813 (0.016)\\
r&22:45:24.907 &73:11:16.49 &17.944 (0.017)& 17.647 (0.011)& 16.715 (0.012)& 16.195 (0.005)& 15.529 (0.016)\\
s&22:46:07.248 &73:09:17.79 &19.529 (0.001)& 19.220 (0.009)& 18.343 (0.011)& 17.887 (0.017)& 17.313 (0.009)\\
t&22:45:43.815 &73:10:11.37 &18.262 (0.002)& 17.925 (0.013)& 17.024 (0.018)& $ -$ &$-$\\
u&22:45:02.577 &73:10:36.09 &17.689 (0.003)& 17.329 (0.007)& 16.410 (0.005)& 15.770 (0.005)& 15.171 (0.027)\\
v&22:45:15.320 &73:10:35.72 &17.378 (0.000)& 17.130 (0.005)& 16.199 (0.002)& 15.660 (0.004)& 15.066 (0.013)\\
\hline
\end{tabular}
\end{table*}
All photometry was performed via point spread function (PSF) fitting using the {\sc SNOoPY} package \citep{Cappellaro:2014}. SNOoPY is a collection of {\sc python} scripts calling standard {\sc iraf} tasks (through {\sc pyraf}) and specific data analysis tools such as {\sc sextractor} for source extraction and {\sc daophot} for PSF fitting.
The sky background at the SN location is first estimated with a low-order
polynomial fit to data in the surrounding area. Then, the PSF model derived from isolated field stars is simultaneously fitted to the SN and any point source projected nearby (i.e. any star-like source within a radius of $\sim 5\times {\rm FWHM}$ from the SN). The fitted sources are removed from the original images, an improved estimate of the local background derived and the PSF fitting procedure iterated. The residuals are visually inspected to validate the fit.
An alternative approach for the measurement of transient magnitudes is template subtraction. The application of this technique requires the use of exposures of the field obtained before the SN explosion or after the SN has faded. The template images need to be in the same filter and have good signal-to-noise and seeing. Unfortunately, we could not find archival images suitable for use as templates, so only the PSF-fitting procedure was performed. In contrast, for the earlier epochs of LOSS/KAIT imaging, the pre-explosion image obtained on 2014 June 13 was used as a subtraction template (see Section~\ref{discovery}).
Error estimates for the SN magnitudes are obtained through artificial star experiments in which a fake star with a similar magnitude to the SN is placed in the fit residual image at a position close to, but not coincident with, the SN location.
The simulated image is processed through the same PSF fitting procedure and
the standard deviation of magnitudes of the fake stars is taken as an
estimate of the instrumental magnitude error, which is mainly due to the
uncertainty in the background fitting. This is combined (in quadrature) with the PSF fit error returned by {\sc daophot} and the propagated errors from the photometric calibration chain.
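The error combination described above amounts to a quadrature sum, sketched here (our own illustrative code):
\begin{verbatim}
# Sketch: combine the photometric error terms in quadrature.
import numpy as np

def total_mag_error(sigma_fake, sigma_psf, sigma_cal):
    # sigma_fake: scatter of the recovered artificial-star magnitudes
    # sigma_psf:  PSF-fit error returned by daophot
    # sigma_cal:  propagated photometric-calibration error
    return np.sqrt(sigma_fake**2 + sigma_psf**2 + sigma_cal**2)
\end{verbatim}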
Johnson/Bessel and Sloan optical magnitudes of the SN and associated errors are listed in Tables~\ref{joh} and \ref{sloan}, respectively, while the NIR photometry is given in Table~\ref{nir_data}.
Magnitudes are in the Vega system for the Johnson/Bessel filters and are close to the AB system (${\rm SDSS} = {\rm AB} - 0.02$~mag) for the Sloan filters.
The $uBVgriJHK$ light curves of SN~2014ck are plotted in Figure~\ref{lc}. Note
that since only a handful of $RIz$ epochs are available, we list their values in Tables~\ref{joh} and \ref{sloan} but do not plot them.
\subsection{Spectroscopy}
\begin{table*}
\caption{Journal of spectroscopic observations.} \label{telescope_spec}
\begin{tabular}{c c c c c c}
\hline
Date &MJD &Phase$^1$ & Instrumental & Range & Resolution$^3$ \\
& &[d] & configuration$^2$& [\AA] & [\AA] \\
\hline
20140701 & 56839.58 & $-$6.0 &LCOGT+FLOYDS &3200-10000&13\\
20140702 & 56840.52 & $-$5.0 &LCOGT+FLOYDS &3200-10000&13\\
20140703 & 56841.52 & $-$4.0 &LCOGT+FLOYDS &3200-10000&13\\
20140703 & 56841.98 & $-$3.6 &Ekar+AFOSC+gm4 &3500-8200&24 \\
20140704 & 56842.57 & $-$3.0 &LCOGT+FLOYDS &3200-10000&13\\
20140706 & 56844.52 & $-$1.0 &LCOGT+FLOYDS &3200-10000&13 \\
20140706 & 56844.55 & $-$1.0 & Gemini-N+GNIRS & 9800-25000 & 4 \\
20140707 & 56845.50 & $-$0.1 & Gemini-N+GNIRS& 9800-25000 & 4 \\
20140709 & 56847.21 & 1.7 & NOT+ALFOSC+gm4 &3400-9000 &14 \\
20140710 & 56848.49 & 2.9 & LCOGT+FLOYDS &3200-10000 &13 \\
20140711 & 56849.19 & 3.6 & TNG+LRS+LR-B &3200-8000&10 \\
20140711 & 56849.45 & 3.9 & Gemini-N+GNIRS & 9800-25000 & 4 \\
20140712 & 56850.58 & 5.0 & LCOGT+FLOYDS &3200-10000 &13 \\
20140718 & 56856.99 & 11.4 & Ekar+AFOSC+gm4+VPH6 &3500-9300 &24 \\
20140724 & 56862.20 & 16.6 & NOT+ALFOSC+gm4 &3400-9000 &14 \\
20140725 & 56863.58 & 18.0 & LCOGT+FLOYDS &3200-10000 &13 \\
20140726 & 56864.51 & 19.0 & Gemini-N+GNIRS & 9800-25000 & 4 \\
20140727 & 56865.56 & 20.0 & LCOGT+FLOYDS &3200-10000 &13 \\
20140728 & 56866.54 & 21.0 & LCOGT+FLOYDS &3200-10000 &13 \\
20140731 & 56869.28 & 23.7 & Gemini-N+GNIRS & 9800-25000 & 4 \\
20140801 & 56870.18 & 24.6 & NOT+ALFOSC+gm4 &3400-9000 &14 \\
20140805 & 56874.50 & 28.9 & LCOGT+FLOYDS &3200-10000 &13 \\
20140807 & 56876.39 & 30.8 & Gemini-N+GNIRS & 9800-25000 & 4 \\
20140812 & 56881.98 & 36.4 & TNG+LRS+LR-B+LR-R & 3500-10000& 10\\
20140815 & 56884.54 & 39.0 & LCOGT+FLOYDS &3200-10000 &13\\
20140823 & 56892.36 & 46.8 & LCOGT+FLOYDS &3200-10000 &13 \\
20140825 & 56894.00 & 48.4 & NOT+ALFOSC+gm4 &3400-9000 &14 \\
20140831 & 56900.94 & 55.4 & TNG+NICS+IJ+HK & 9000-17500& 6\\
20140926 & 56926.94 & 81.4 & Ekar+AFOSC+gm4+VPH6 &3500-9300 &24 \\
20141025 & 56955.94 & 110.4 & TNG+LRS+LR-B+LR-R & 3500-10000& 10\\
20141220 & 57011.89 & 166.3 & GTC+OSIRIS+R300B & 3500-9000 & 16 \\
\hline
\end{tabular}
$^1$ The phase is relative to the adopted epoch of the $V$-band maximum, ${\rm MJD} = 56845.6 \pm 0.1$.
$^2$ NOT = 2.56~m Nordic Optical Telescope (La Palma, Spain); Ekar = Copernico 1.82~m Telescope (Mt.~Ekar, Asiago, Italy); TNG = 3.58~m Telescopio Nazionale Galileo (La Palma, Spain); LCOGT = LCOGT 2.0~m Telescope (Haleakala, Hawaii, USA); Gemini-N = 8.1~m Telescope (Hilo, Hawaii, USA); GTC = 10.4~m Gran Telescopio Canarias (La Palma, Spain).
$^3$ The resolution is estimated from the FWHM of the night sky lines.
\end{table*}
A sequence of 24 low-resolution visual-wavelength spectra of SN~2014ck was obtained, extending from $-6.0$~d to $+166.3$~d relative to the epoch of $V$-band maximum. Seven epochs of NIR spectra were also taken, extending from $-1.0$~d to $+55.4$~d.
A summary of all spectroscopic observations is provided in Table~\ref{telescope_spec}.
Optical spectra were reduced using standard {\sc iraf} tasks. After bias and flat-field correction, the SN spectrum was extracted and calibrated in wavelength through a comparison to arc lamp spectra. The flux calibration was derived from observations of spectrophotometric standard stars obtained, when possible, on the same night as the SN. All the flux-calibrated spectra were verified against photometry and, when necessary, a correction factor was applied. Corrections for the telluric absorption bands were derived using the spectrophotometric standard star spectra.
In some cases imperfect removal can affect the SN features that overlap with the strongest atmospheric absorptions,
in particular the telluric O$_{2}$ $A$-band at $7590-7650$~\AA\ and the H$_{2}$O, CO$_{2}$ and CH$_{4}$ bands in the NIR spectra (their positions are marked in Figures~8, 9, 10, 11, 13, 14 and 15 with the $\oplus$ symbol and, for the strongest ones, with vertical grey bands).
The NIR spectra obtained with GNIRS attached to the Gemini North telescope were reduced using the {\sc gnirs} Gemini {\sc iraf} package \citep[see][for details]{hsiao:2013}.
The TNG spectrum obtained with the Near Infrared Camera Spectrograph (NICS) was reduced using standard {\sc iraf} packages.
In brief, following the standard infrared technique, each night several pairs of spectra were taken at different positions along the slit, and consecutive pairs were subtracted from each other in order to remove the sky background.
The subtracted images were aligned to match the stellar profile and added together. Finally, the source spectrum was extracted from the combined images. Wavelength calibration, telluric correction and flux calibration were done in the standard manner. Lastly, spectra were corrected to match the broadband photometry.
\section{Galactic and host reddening}\label{extinction}
The Galactic extinction in the direction of UGC~12182, as derived
from the \cite{schlafly:2011} recalibration of the \cite{schlegel:1998} infrared-based dust map, is $E(B-V)_{\rm G} = 0.40 \pm 0.05$~mag (via NED), which corresponds to a Galactic extinction $A_V = 1.26 \pm 0.15$~mag when adopting a standard
$R_V = 3.1$ reddening law \citep{cardelli:1989}.
The extinction within the host galaxy is more uncertain. A standard approach for SNe Ia is to measure the colour excess by comparing the SN colour with that of an unreddened SN template. However, the comparative study of the $B-V$, $V-R$ and $V-I$ colour curves for a sample of
SNe~Iax presented by \cite{foley:2013} shows significant scatter that does not improve after reddening corrections. So far, it is unclear if these objects have similar intrinsic colours or not.
High-dispersion observations of Na~I~D $\lambda\lambda$5890, 5896 are used as an independent means of probing dust extinction to extragalactic sources \citep{poznanski:2012}. However, for medium- to low-resolution spectra, when the doublet is blended, there is a large scatter in the data \citep{turatto:2003,poznanski:2011} and the correlation has less predictive power. Moreover, \cite{phillips:2013} showed that the column density and/or equivalent width (EW) of the Na~I~D lines are, in general, unreliable indicators of the extragalactic dust extinction suffered by SNe~Ia.
The exception to this statement is that weak or undetectable Na~I absorption appears to be consistent with little or no extinction.
With this caveat in mind, the early spectra with the highest signal-to-noise ratio were selected from our spectral sequence and used to measure a mean EW of the Galactic Na~I~D of $2.8 \pm 0.3$~\AA. Following \cite{turatto:2003}, this implies a lower limit on the Milky Way colour excess of $E(B-V)_{\rm G} = 0.44 \pm 0.05$~mag, in fair agreement with the $E(B-V)_{\rm G} = 0.40 \pm 0.05$~mag obtained from the infrared maps of the Galactic dust distribution.
Only a weak absorption line, ${\rm EW} \lesssim 0.3$~\AA, may be attributed to the host Na~I~D. This is consistent with little extinction in the host galaxy
($E(B-V)_{\rm host} \lesssim 0.05$~mag). Therefore, the total colour excess of the SN is estimated to be
$E(B-V)_{\rm tot} \approx 0.5 \pm 0.1$~mag (i.e. $A_V \approx 1.5 \pm 0.3$~mag).
\section{Photometric evolution}\label{photo}
\begin{table*}
\caption{Optical photometry of SN~2014ck in the Johnson/Cousins $UBVRI$ filters (Vega mag), with associated errors in parentheses.} \label{joh}
\begin{tabular}{cccccccc}
\hline
Date & MJD & $U$ & $B$ & $V$ & $R$ & $I$ & Instrument\\
& & [mag]&[mag]&[mag]&[mag]&[mag]&\\
\hline
20140630 & 56838.83 & $-$ & $-$ & $-$ & 16.1 (0.5) & $-$ & Masi$^{1}$ \\
20140701 & 56839.38 & $-$ & $-$ & 17.34 (0.23) & $-$ & $-$ & Brimacombe$^{1,2}$ \\
20140701 & 56839.91 & $-$ & $-$ &$-$ & 16.1 (0.5) &$-$ & James$^{1}$ \\
20140704 & 56842.01 & $-$ & $-$ & 16.64 (0.22) &$-$ &$-$& AFOSC \\
20140705 & 56843.57 & $-$ & $-$ & 16.49 (0.20) & $-$ & $-$ & Spectral$^3$ \\
20140706 & 56844.42 & $-$ & 16.89 (0.06) & 16.44 (0.04) & $-$ & $-$ & SBIG \\
20140706 & 56844.60 & $-$ & $-$ & 16.44 (0.22) & $-$ & $-$ & Spectral$^3$ \\
20140708 & 56846.30 & $-$ & 16.91 (0.03) & 16.43 (0.03) & $-$ & $-$ & SBIG \\
20140711 & 56849.10 & $-$ & 17.22 (0.02) & 16.53 (0.01) & $-$ & $-$ & LRS \\
20140716 & 56854.44 & $-$ & 18.11 (0.04) & 16.85 (0.03) & $-$ & $-$ & Spectral$^3$ \\
20140719 & 56857.07 & $-$ & 18.35 (0.05) & 17.20 (0.07) &$-$&$-$ & AFOSC \\
20140724 & 56862.58 & $-$ & $-$ & 17.31 (0.15) &$-$&$-$ & Spectral$^3$ \\
20140725 & 56863.45 & $-$ & 18.97 (0.04) & 17.47 (0.04) & $-$ & $-$ & Spectral$^3$ \\
20140726 & 56864.31 & $-$ & $-$ & 17.49 (0.39) & $-$ & $-$ & SBIG \\
20140728 & 56866.49 & $-$ & $-$ & 17.62 (0.17) & $-$ & $-$ & Spectral$^3$ \\
20140731 & 56869.54 & $-$ & 19.35 (0.05) & 17.76 (0.04) &$-$&$-$ & Spectral$^3$ \\
20140803 & 56872.97 & $-$ & 19.29 (0.06) & 17.72 (0.03) & $-$ & $-$ & AFOSC \\
20140805 & 56874.13 & 20.18 (0.06) & 19.45 (0.02) & 17.91 (0.01) & 17.27 (0.08) & 16.64 (0.01) & ALFOSC \\
20140812 & 56881.36 & $-$ & $-$ & 18.19 (0.23) &$-$ &$-$& Spectral$^3$ \\
20140813 & 56882.01 & $-$ & 19.69 (0.03) & 18.16 (0.02) &$-$ &$-$ & LRS \\
20140816 & 56885.36 & $-$ & $-$ & 18.29 (0.18) &$-$&$-$& Spectral$^3$ \\
20140820 & 56889.46 & $-$ & 19.69 (0.15) & 18.29 (0.03) & $-$ & $-$ & Spectral$^3$ \\
20140824 & 56893.42 & $-$ & 19.82 (0.14) & 18.40 (0.03) & $-$ &$-$ & Spectral$^3$ \\
20140824 & 56893.97 & 20.95 (0.07) & 19.84 (0.02) & 18.45 (0.03) & 17.80 (0.02) & 17.16 (0.02) & ALFOSC \\
20140825 & 56894.36 & $-$ & 19.67 (0.06) & 18.43 (0.04) & $-$ & $-$ & Spectral$^3$ \\
20140825 & 56894.44 & $-$ & $-$ & 18.53 (0.21) & $-$ &$-$ & SBIG \\
20140829 & 56898.36 & $-$ & 19.92 (0.15) & 18.52 (0.04) & $-$ &$-$ & Spectral$^3$ \\
20140829 & 56898.47 & $-$ & $-$ & 18.56 (0.20) & $-$ & $-$ & SBIG \\
20140902 & 56902.26 & $-$ & 19.84 (0.13) & 18.58 (0.04) &$-$& $-$ & SBIG \\
20140902 & 56902.53 & $-$ & 19.84 (0.05) & 18.58 (0.04) &$-$ &$-$& Spectral$^3$ \\
20140906 & 56906.32 & $-$ & 19.99 (0.07) & 18.59 (0.05) & $-$ & $-$ & Spectral$^3$ \\
20140909 & 56909.44 & $-$ & $-$ & 18.67 (0.09) & $-$ & $-$ & SBIG \\
20140910 & 56910.30 & $-$ & 19.89 (0.10) & 18.74 (0.05) & $-$ &$-$ & Spectral$^3$ \\
20140914 & 56914.44 & $-$ & 20.17 (0.07) & 18.79 (0.04) & $-$ & $-$ & Spectral$^3$ \\
20140918 & 56918.50 & $-$ & 20.06 (0.05) & 18.87 (0.03) & $-$ &$-$& SBIG \\
20140922 & 56922.39 & $-$ & 20.13 (0.04) & 18.93 (0.04) & $-$ &$-$ & Spectral$^3$ \\
20140923 & 56923.39 & $-$ & $-$ & 19.00 (0.04) & $-$ &$-$ & Spectral$^3$ \\
20140927 & 56927.03 & $-$ & 20.11 (0.18) & 18.95 (0.21) &$-$& $-$ & AFOSC \\
20140928 & 56928.32 & $-$ & 20.21 (0.18) & 19.03 (0.11) & $-$ &$-$& Spectral$^3$ \\
20141013 & 56943.83 & 21.94 (0.18) & 20.58 (0.03) & 19.32 (0.02) & 18.86 (0.06) & 18.04 (0.02) & ALFOSC \\
20141024 & 56954.83 & $-$ & 20.47 (0.19) & 19.34 (0.21) &$-$&$-$& AFOSC \\
20141028 & 56958.82 & $-$ & 20.58 (0.21) & 19.41 (0.21) & $-$ & $-$ & AFOSC \\
20141125 & 56986.82 & $-$ & 20.93 (0.07) & 19.93 (0.10) & 19.30 (0.10) & 18.68 (0.15) & ALFOSC \\
20141216 & 57007.85 & $-$ & 21.71 (0.08) & 20.34 (0.05) & 19.89 (0.06) & 18.76 (0.05) & ALFOSC \\
20150110 & 57032.75 & $-$ & 22.41 (0.52) & 21.52 (0.56) & $-$ & $-$ & AFOSC \\
\hline
\end{tabular}
Notes: $^{1}$ From IAU CBET 3949 (Masi et al. 2014); $^{2}$ $V$ for reference; $^3$ ``Spectral'' is a photometric camera mounted
on the Faulkes Telescopes of the LCOGT network.
\end{table*}
\begin{table*}
\caption{Optical photometry of SN~2014ck in the Sloan $ugriz$ filters (AB mag), with associated errors in parentheses.} \label{sloan}
\begin{tabular}{cccccccc}
\hline
Date & MJD & $u$ & $g$ & $r$ & $i$ & $z$ & Instrument\\
& & [mag]&[mag]&[mag]&[mag]&[mag]&\\
\hline
20140624 & 56832.46 & $-$ &$-$ & 18.41 (2.00) & $-$ &$-$ & KAIT$^{1,2}$ \\
20140625 & 56833.49 & $-$ &$-$ & 18.15 (0.44) & $-$ &$-$ & KAIT$^{1,3}$ \\
20140628 & 56836.49 & $-$ &$-$ & 16.96 (0.36) & $-$ &$-$ & KAIT$^{1}$ \\
20140629 & 56837.47 & $-$ &$-$ & 16.89 (0.11) & $-$ &$-$ & KAIT$^{1}$ \\
20140630 & 56838.41 & $-$ &$-$ & 16.71 (0.27) & $-$ &$-$ & KAIT$^{1}$ \\
20140704 & 56842.01 & $-$ &$-$ & 16.37 (0.02) & 16.32 (0.02) &$-$ & AFOSC \\
20140705 & 56843.57 &$-$& $-$ & 16.32 (0.05) & 16.19 (0.06) & $-$ & Spectral$^5$ \\
20140706 & 56844.61 & $-$ & 16.73 (0.10) & 16.20 (0.08) & 16.09 (0.11) &$-$ & Spectral$^5$ \\
20140708 & 56846.32 &$-$ & 16.70 (0.06) & 16.19 (0.04) & 16.14 (0.05) & $-$ & SBIG \\
20140711 & 56849.18 & 18.35 (0.02) & 16.81 (0.09) & 16.28 (0.03) & 16.06 (0.01) & $-$ & LRS \\
20140713 & 56851.53 &$-$ & $-$ & 16.27 (0.03) & 16.18 (0.04) & $-$ & Spectral$^5$ \\
20140716 & 56854.44 & $-$ & 17.67 (0.04) & 16.48 (0.06) & 16.24 (0.05) &$-$& Spectral$^5$ \\
20140719 & 56857.07 & 20.20 (0.14) & 17.85 (0.03) & 16.60 (0.04) & 16.10 (0.03) & 17.70 (0.06) & AFOSC \\
20140724 & 56862.58 & $-$& 18.27 (0.05) & 16.83 (0.09) & 16.51 (0.11) & $-$ & Spectral$^5$ \\
20140725 & 56863.46 & $-$ & 18.49 (0.06) & 16.89 (0.04) & 16.59 (0.08) & $-$ & Spectral$^5$ \\
20140726 & 56864.32 & $-$ & 18.50 (0.07) & 16.95 (0.06) & 16.64 (0.07) & $-$ & SBIG \\
20140728 & 56866.50 & $-$ & 18.57 (0.06) & 17.08 (0.05) & 16.70 (0.08) & $-$ & Spectral$^5$ \\
20140731 & 56869.54 & $-$ & 18.78 (0.05) & 17.26 (0.06) & 16.86 (0.09) & $-$ & Spectral$^5$ \\
20140803 & 56872.97 & 21.08 (0.19) & 18.72 (0.04) & 17.34 (0.02) & 17.01 (0.02) &$-$& AFOSC \\
20140805 & 56874.13 & 21.03 (0.06) &$-$ & $-$ &$-$ & $-$ & ALFOSC$^{4}$ \\
20140812 & 56881.38 & $-$& 19.13 (0.08) & 17.67 (0.05) & 17.37 (0.07) & $-$ & Spectral$^5$ \\
20140813 & 56882.01 & 21.46 (0.21) & 19.10 (0.09) & 17.75 (0.11) & 17.34 (0.13) &$-$& LRS \\
20140816 & 56885.36 & $-$ & 19.19 (0.04) & 17.79 (0.06) & 17.39 (0.07) & $-$ & Spectral$^5$ \\
20140820 & 56889.47 & $-$ & 19.18 (0.05) & 17.93 (0.05) & 17.53 (0.08) &$-$& Spectral$^5$ \\
20140824 & 56893.43 & $-$ & $-$ &$-$ & 17.52 (0.34) & $-$ & Spectral$^5$ \\
20140824 & 56893.97 & 21.81 (0.07) & $-$ &$-$& $-$ & $-$ & ALFOSC$^{4}$ \\
20140825 & 56894.36 & $-$ & 19.27 (0.06) & 18.07 (0.05) & 17.75 (0.08) & $-$ & Spectral$^5$ \\
20140825 & 56894.46 & $-$& 19.26 (0.11) & 18.06 (0.07) & 17.82 (0.07) &$-$& SBIG \\
20140829 & 56898.38 & $-$ &$-$ & 18.08 (0.17) & 17.69 (0.08) & $-$ & Spectral$^5$ \\
20140829 & 56898.45 & $-$& 19.32 (0.07) & 18.21 (0.06) & 17.82 (0.09) & $-$ & SBIG \\
20140902 & 56902.45 & $-$& 19.49 (0.11) & 18.23 (0.06) & 17.85 (0.08) & $-$ & SBIG \\
20140902 & 56902.54 &$-$ & 19.34 (0.07) & 18.25 (0.07) & 17.91 (0.08) & $-$ & Spectral$^5$ \\
20140906 & 56906.33 & $-$& $-$ & 18.32 (0.06) & 17.96 (0.10) &$-$& Spectral$^5$ \\
20140909 & 56909.42 & $-$& 19.37 (0.15) & 18.36 (0.08) & 17.95 (0.20) & $-$ & SBIG \\
20140910 & 56910.32 & $-$& 19.42 (0.06) & 18.44 (0.07) & 18.06 (0.04) & $-$ & Spectral$^5$ \\
20140914 & 56914.46 & $-$& 19.52 (0.07) & 18.53 (0.05) & 18.18 (0.10) & $-$ & Spectral$^5$ \\
20140918 & 56918.51 & $-$& 19.54 (0.07) & 18.55 (0.06) & 18.14 (0.10) & $-$& Spectral$^5$ \\
20140922 & 56922.40 & $-$& 19.68 (0.07) & 18.70 (0.08) & $-$ &$-$& Spectral$^5$ \\
20140923 & 56923.40 & $-$& 19.70 (0.07) & 18.64 (0.16) & 18.31 (0.17) & $-$ & Spectral$^5$ \\
20140927 & 56927.04 & $-$& 19.67 (0.19) & 18.77 (0.17) & 18.30 (0.04) & 19.33 (0.13) & AFOSC \\
20141013 & 56943.83 & 22.79 (0.18) &$-$ & $-$ & $-$ & $-$ & ALFOSC$^{4}$ \\
20141024 & 56954.83 & $-$& 19.87 (0.17) & 19.32 (0.19) & 18.80 (0.18) & $-$ & AFOSC \\
20141028 & 56958.83 & $-$& 20.03 (0.10) & 19.42 (0.10) & 18.93 (0.08) & $-$ & AFOSC \\
20150110 & 57032.77 & $-$& 22.80 (0.36) & 22.07 (0.45) & 21.64 (0.47) & $-$ & AFOSC \\
\hline
\end{tabular}
Notes: $^{1}$ LOSS/KAIT unfiltered images calibrated to $r$-band; $^{2}$ upper limit; $^{3}$ marginal detection; $^{4}$ $U$-band magnitude converted into $u$-band following \cite{chonis:2008}; $^5$ ``Spectral'' is a photometric camera mounted
on the Faulkes Telescopes of the LCOGT network.
\end{table*}
\begin{table*}
\caption{Near-infrared photometry of SN~2014ck, with associated errors in parentheses.} \label{nir_data}
\begin{tabular}{cccccc}
\hline
Date & MJD & $J$ & $H$ & $K$ & Instrument\\
& &[mag]&[mag]&[mag]&\\
\hline
20140807 & 56876.08 & 16.94 (0.03) & 16.30 (0.01) & 16.39 (0.03) & NOTCam \\
20140831 & 56900.08 & 17.58 (0.29) & 16.59 (0.17) & 16.91 (0.26) & NICS \\
20140905 & 56905.03 & 17.70 (0.03) & 16.67 (0.05) & 17.08 (0.05) & NOTCam \\
20141007 & 56937.07 & 18.16 (0.04) & 17.39 (0.07) & 17.75 (0.05) & NOTCam \\
20150105 & 57027.83 & 18.89 (0.04) & 18.55 (0.09) & 19.34 (0.26) & NOTCam \\
\hline
\end{tabular}
\end{table*}
\subsection{Broad band photometry}\label{photo_evol}
\begin{figure}
\includegraphics[scale=.38,angle=0]{lc_conf.pdf}
\caption{Comparison of normalised (to maximum magnitude) $B$- and $V$-band light curves of SNe~2002cx and 2014ck. (A colour version of this figure is available in the online journal).}
\label{lc2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{col_BVR.pdf}
\includegraphics[scale=.5]{col_gri.pdf}
\caption{From top to bottom: $B - V$ and $V - R$ (Vega mag), $g - r$ and $r - i$ (AB mag) extinction-corrected colours of SN~2014ck compared with those of SN~2002cx (blue triangles) and SN~2005hk (green squares). (A colour version of this figure is available in the online journal). }
\label{col}
\end{figure}
The photometric evolution of SN~2014ck shown in Figure~\ref{lc} is well sampled in the optical bands (except the pre-maximum evolution in $u$- and $B$-bands), while only a handful of NIR measurements were obtained. The light curves are characterised by a rise to maximum and a subsequent decline that is slower at longer wavelengths (e.g. in the $r$- and $i$-band). Moreover, as already noted for other SNe~Iax \citep{li:2003,foley:2013,stritz:2015}, the NIR bands show no evidence of a secondary maximum, characteristic of normal SNe~Ia \citep{hamuya:1996,hamuyb:1996}.
Using a low-order polynomial fit to the optical light curves around maximum,
we estimated the magnitude and epoch of maximum light in each band. SN~2014ck reached peak absolute magnitudes of $M_B=-17.37 \pm 0.15$~mag and $M_V=-17.29 \pm 0.15$~mag. All the light curve parameters are listed in Table~\ref{fit}, with uncertainties on the peak apparent magnitudes estimated from the dispersion around the polynomial fits. Whereas the $gVri$ light curves are well sampled around maximum, the $B$-band light curve is already declining from the first $B$ point; consequently, the time of $B$-band maximum may be poorly constrained.
With the best-fit peak apparent magnitudes in hand, absolute magnitudes were also computed, with associated uncertainties obtained by adding in quadrature the fit errors on the peak apparent magnitudes and the errors in the adopted extinction and distance.
Finally, our polynomial fits also provide a measure of the decline-rate parameter $\Delta m_{15}$, the magnitude drop from the epoch of maximum brightness to 15 days later.
In the case of normal SNe~Ia, $\Delta m_{15}$ is known to correlate with luminosity \citep{phillips:1993}.
Examining the results of the polynomial fits, we find that maximum light is reached earlier in the blue bands compared to the red bands, with a delay of $\sim 4$ days between the epochs of $B$- and $i$-band maximum.
Furthermore, the blue bands have faster decline rates, with $\Delta m_{15} (B) = 1.76 \pm 0.15$~mag and $\Delta m_{15} (i) = 0.39 \pm 0.15$~mag. Both of these characteristics are common to all SNe~Iax \citep{foley:2013,stritz:2014,stritz:2015}.
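As a minimal illustration of these measurements, the following {\sc python} sketch fits a low-order polynomial to the $V$-band points near maximum from Table~\ref{joh} and reads off the peak epoch, peak apparent magnitude and $\Delta m_{15}$; the polynomial degree and the fitting window are arbitrary choices:
\begin{verbatim}
import numpy as np

# (MJD, V) pairs around maximum, from the Johnson/Cousins table
t = np.array([56839.38, 56842.01, 56843.57, 56844.42, 56846.30,
              56849.10, 56854.44, 56857.07, 56862.58])
m = np.array([17.34, 16.64, 16.49, 16.44, 16.43,
              16.53, 16.85, 17.20, 17.31])

coeffs = np.polyfit(t - t[0], m, deg=4)     # low-order polynomial fit
grid = np.linspace(0.0, t[-1] - t[0], 2001)
model = np.polyval(coeffs, grid)

i_peak = np.argmin(model)                   # brightest point of the fit
t_peak, m_peak = grid[i_peak] + t[0], model[i_peak]
dm15 = np.polyval(coeffs, grid[i_peak] + 15.0) - m_peak

print(f"t_max = MJD {t_peak:.2f}, m_peak = {m_peak:.2f}, "
      f"dm15 = {dm15:.2f} mag")
\end{verbatim}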
As revealed from the comparison in Figure~\ref{lc2} of the $B$- and $V$-band light curves of SN~2002cx and SN~2014ck, the two objects show nearly identical evolution.
Moreover, as shown in Figure~\ref{bol}, the two objects reached the same peak bolometric luminosity.
The decline rates of the two objects are also nearly identical: $\Delta m_{15}(B) = 1.76 \pm 0.15$~mag for SN~2014ck vs.\ $\Delta m_{15}(B) = 1.7 \pm 0.1$~mag for SN~2002cx. These declines are significantly slower than those of the faint and fast SNe~2008ha \citep[$\Delta m_{15}(B) = 2.03 \pm 0.20$~mag;][]{valenti:2009} and 2010ae \citep[$\Delta m_{15}(B) = 2.43 \pm 0.11$~mag;][]{stritz:2014}. In conclusion, SN~2014ck follows the general trend for SNe~Iax (and SNe~Ia in general): more luminous objects tend to have broader light curves \citep{foley:2013}.
Various optical-band colour curves of SN~2014ck, corrected for reddening, are plotted in Figure~\ref{col}. At early phases the colours are blue. As the SN evolves, the colours become redder, reaching their reddest values around three weeks past maximum light.
Subsequently, the colours slowly evolve back towards the blue.
Inspecting the colour curves of SN~2014ck compared to those of the Type~Iax SN~2002cx \citep{li:2003} and SN~2005hk \citep{phillips:2007,stritz:2015}, we note similar evolution, with SN~2014ck appearing marginally bluer over all epochs.
\subsection{Explosion date and rise time estimates}\label{rise}
\begin{figure}
\includegraphics[scale=.4, angle=0]{fit_rise.pdf}
\caption{Power-law fit to the 5 pre-maximum $r$- and $V$-band points using an ``expanding fireball'' model (index of the power law $n=2$). For comparison, a few post-maximum epochs are also shown, although they are not included in the fit. (A colour version of this figure is available in the online journal).} \label{fit_rise}
\end{figure}
The early detection of SN~2014ck and the analysis of LOSS/KAIT pre-discovery and discovery images give a unique opportunity to obtain an accurate estimate of the rise time of a SN~Iax.
In order to constrain the explosion date of SN~2014ck, we fit the pre-maximum portion of the $r$- and $V$-band light curves (5 epochs for each band) with an ``expanding fireball'' model, i.e. $f_{\rm model}(t) = \alpha(t - t_0)^n$ with $n=2$ \citep[][and references therein]{riess:1999}. The time of first light ($t_0$) obtained from the fit (see Figure~\ref{fit_rise}) is ${\rm MJD} = 56828.2^{+2.7}_{-4.5}$ (2014 June 20.2 UT). With regard to the index of the power law, \cite{firth:2015} presented an analysis of the early, rising light curves of a sample of 18 SNe~Ia: in many cases their data show a departure of $n$ from the simple fireball model ($n=2$), with significant diversity from event to event (cf.\ their Table~4 and Figure~14) and a mean value of $n = 2.44 \pm 0.13$. \cite{gane:2011}, using a sample of about sixty low-redshift LOSS SNe~Ia, found a best fit of $n = 2.20^{+0.27}_{-0.19}$, consistent ($1\sigma$) with the expanding fireball modelled by a parabola. In any case, these recent studies provide evidence for a range of $n$ among SNe~Ia, with the centre of the distribution slightly above 2 \citep[see also][]{piro:2014}. This deviation has implications for the distribution of $^{56}$Ni throughout the SN ejecta, so, in principle, the fit of the light curves should leave $n$ free. Unfortunately, the pre-maximum data for SN~2014ck are insufficient for a convergent solution with a free $n$ parameter. To account for possible deviations from the fireball model, we fit a range of power laws with $2 \leq n \leq 2.5$ to the pre-maximum $r$- and $V$-band light curves independently. The reported uncertainty on $t_0$ is the standard deviation of the $t_0$ values over these fits.
We note that the spectral phases obtained by comparing the early spectra of SN~2014ck with similar-phase spectra of SN~2005hk are consistent with this estimate.
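A minimal version of this fit, using the pre-maximum LOSS/KAIT and AFOSC $r$-band detections from Table~\ref{sloan} (the flux zero point is arbitrary and $n$ is held at 2), might look as follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# pre-maximum r-band epochs and magnitudes; flux in arbitrary units
t = np.array([56833.49, 56836.49, 56837.47, 56838.41, 56842.01])
mag = np.array([18.15, 16.96, 16.89, 16.71, 16.37])
flux = 10**(-0.4 * (mag - 25.0))

def fireball(t, alpha, t0, n=2.0):
    # expanding-fireball model f = alpha * (t - t0)^n, zero before t0
    dt = np.clip(t - t0, 0.0, None)
    return alpha * dt**n

# with p0 given, curve_fit varies only alpha and t0; n stays fixed
popt, pcov = curve_fit(fireball, t, flux, p0=[10.0, 56828.0])
print(f"first light: MJD {popt[1]:.1f} +/- {np.sqrt(pcov[1, 1]):.1f}")
\end{verbatim}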
The rise time to maximum is estimated to range from $\sim 17$~days in the $B$-band to $\sim 21$~days in the $i$-band.
The $BVgri$-band rise time estimates are listed in Table~\ref{fit}. The associated errors are largely dominated by the error on $t_0$.
From the bolometric luminosity (see Section~\ref{bolometric}) we infer a rise time of $t_{\rm rise} = 16.9^{+4.5}_{-2.7}$~days, in agreement with the $B$-band rise time.
This is not surprising, as the $B$-band roughly traces the bolometric behaviour \citep{gane:2011}.
Note that the rise times of SNe~Iax range from $\sim 10$~days for SN~2008ha
to possibly $> 20$~days for SN~2008ge \citep{foley:2013}.
\begin{table*}
\caption{Optical light curve parameters for SN 2014ck, with associated errors in parentheses.}\label{fit}
\begin{tabular}{cccccc}
\hline
Filter & Peak & $m_{Peak}$ & $M_{Peak}$ & $\Delta m_{15}$ & $t_{\rm rise}$ \\
& MJD & [mag] & [mag] & [mag] & [days] \\
\hline
$B$ & 56845.05 (0.50) & 16.87 (0.01) & $-$17.37 (0.15)&1.76 (0.15)&16.9$^{+4.5}_{-2.7}$\\
$g$ & 56846.31 (0.15) & 16.65 (0.04) & $-$17.42 (0.15) &1.59 (0.10)&18.1$^{+4.5}_{-2.7}$\\
$V$ & 56845.60 (0.10) & 16.41 (0.01) & $-$17.29 (0.15)&0.88 (0.05)&17.4$^{+4.5}_{-2.7}$\\
$r$ & 56846.62 (0.20) & 16.20 (0.03) & $-$17.29 (0.15)&0.58 (0.05)&18.4$^{+4.5}_{-2.7}$\\
$i$ & 56849.20 (0.60) & 16.08 (0.02) & $-$17.04 (0.15)& 0.39 (0.15)&21.0$^{+4.5}_{-2.7}$\\
\hline
\end{tabular}
\end{table*}
\subsection{Bolometric light curve and explosion parameter estimates}\label{bolometric}
\begin{figure}
\includegraphics[scale=.44,angle=0]{bolom.png}
\caption{OIR bolometric light curve of SN~2014ck, computed by integrating the fluxes from the $uBVgrizJHK$-bands. For comparison the OIR lightcurves are also shown for the Type~Iax SNe~2002cx \citep{li:2003,phillips:2007}, 2005hk \citep{phillips:2007,stritz:2015}, 2008ha \citep{foley:2009,valenti:2009,stritz:2014}, 2010ae \citep{stritz:2014} and 2012Z \citep{stritz:2015}. (A colour version of this figure is available in the online journal).}
\label{bol}
\end{figure}
Using the multi-band photometry of SN~2014ck extending from $u$- to $K$-band, we
constructed the pseudo-bolometric optical-infrared (OIR) light curve shown in Figure~\ref{bol}.
Unfortunately, no ultraviolet observations of SN~2014ck are available to compute a UVOIR bolometric light curve.\footnote{The abbreviation UVOIR is used with different meanings in the literature. In this paper we use it to mean the flux integrated from 1600~\AA\ ({\it Swift}/UVOT $uvw2$-band) to 2.5~\micron\ ($K$-band). If the integration starts from 3000~\AA\ (ground-based $U$/$u$-band) we use the label OIR.}
For each epoch and passband the observed magnitude was converted to flux at the effective wavelength.
If observations were not available for a given filter on a particular night, the magnitude was estimated through interpolation between adjacent epochs or, if necessary, extrapolated assuming a constant colour from the closest available epoch.
The fluxes were next corrected for reddening ($E(B-V)_{\rm tot} \approx 0.5 \pm 0.1$~mag), yielding a full spectral energy distribution (SED) at each epoch.
The SEDs were then integrated using a trapezoidal integration technique, assuming zero flux at the integration boundaries (the edges of $u$ and $K$ bands).
Finally, the fluxes at each epoch were converted to luminosities assuming our adopted distance to the host galaxy.
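The following {\sc python} sketch reproduces the integration for a single epoch (the $\sim +28$~d ALFOSC/NOTCam epoch). The flux zero points and extinction coefficients are approximate, illustrative values, and the distance is the one implied by the peak $V$-band apparent and absolute magnitudes and $A_V$ quoted above:
\begin{verbatim}
import numpy as np

# U,B,V,R,I,J,H,K: effective wavelengths (A), flux zero points
# (erg/s/cm^2/A) and A_lambda for E(B-V)=0.5, R_V=3.1 -- all values
# approximate and for illustration only
lam   = np.array([3600., 4380., 5450., 6410., 7980.,
                  12200., 16300., 21900.])
f0    = np.array([4.18e-9, 6.32e-9, 3.63e-9, 2.18e-9, 1.13e-9,
                  3.13e-10, 1.13e-10, 4.28e-11])
A_lam = np.array([2.43, 2.05, 1.55, 1.27, 0.93, 0.45, 0.28, 0.19])
mags  = np.array([20.18, 19.45, 17.91, 17.27, 16.64,
                  16.94, 16.30, 16.39])

flux = f0 * 10**(-0.4 * (mags - A_lam))   # dereddened F_lambda

# zero flux assumed at the integration boundaries
lam_g = np.concatenate(([3000.], lam, [25000.]))
f_g   = np.concatenate(([0.], flux, [0.]))
F_bol = np.sum(0.5 * (f_g[1:] + f_g[:-1]) * np.diff(lam_g))

d = 26.9 * 3.086e24   # cm; mu = m_V - A_V - M_V ~ 32.15 -> ~26.9 Mpc
print(f"L_OIR ~ {4 * np.pi * d**2 * F_bol:.2e} erg/s")
\end{verbatim}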
For comparison, the OIR pseudo-bolometric light curves of the Type~Iax SNe~2005hk \citep[][adopting $E(B-V) = 0.11$~mag, $\mu = 33.46 \pm 0.27$~mag]{phillips:2007,stritz:2015}, 2012Z \citep[][$E(B-V) = 0.11$~mag, $\mu = 32.59 \pm 0.09$~mag]{stritz:2015}, 2010ae \citep[][$E(B-V) = 0.62$~mag, $\mu = 30.58 \pm 0.58$~mag]{stritz:2014} and 2008ha \citep[][$E(B-V) = 0.30$~mag, $\mu = 31.64 \pm 0.15$~mag]{valenti:2009,foley:2009,stritz:2014} were computed with the same prescription, using the optical and NIR photometry found in the literature.
For SN~2002cx, only $BVRIr$ photometry is available \citep[][$E(B-V) = 0.034$~mag, $\mu = 35.09 \pm 0.32$~mag]{li:2003}, but given the striking photometric similarities between SNe~2002cx and 2014ck (see Figures~\ref{lc2} and \ref{col}), we assume the $u$- and NIR bands contribute the same fraction of the total flux (at least near maximum light) in both objects. This contribution was estimated from the flux ratio between the OIR and $BVri$-band bolometric light curves constructed for SN~2014ck (around 1.35 at maximum light, decreasing to 1.08 ten days after maximum) and applied to SN~2002cx.
\begin{figure*}
\includegraphics[scale=.6,angle=0]{sequence1.pdf}
\caption{Early spectral evolution and line identification. Phases relative to $V$-band maximum are reported. The insets on the top show regions of the $-4$~d spectrum centred on C~III $\lambda$4647 (right) and C~II $\lambda\lambda$6580, 7234 (left), with the synthetic spectra over-plotted, calculated with (solid red curve) and without (dotted blue curve) C~II and C~III ions (see text for details). Wavelength is in the rest frame, and the positions of major telluric absorption lines are marked with the $\oplus$ symbol (in particular the vertical grey band marks the strong O$_{2}$ $A$-band absorption at 7600 $\AA\/$). (A colour version of this figure is available in the online journal).}
\label{spec_early}
\end{figure*}
Assuming that the light curve is powered by energy deposition from the $^{56}{\rm Ni} \rightarrow {}^{56}{\rm Co} \rightarrow {}^{56}{\rm Fe}$ radioactive decay chain, the amount of $^{56}$Ni synthesised during the explosion can be estimated using Arnett's rule \citep{arnett:1982}. We applied the \cite{stritz:2005} analytic expression for deriving the energy input from the decay of $^{56}$Ni evaluated at the time of bolometric maximum (their Eq.~6). From the observed peak luminosity of SN~2014ck, $L_{\rm max}=1.91_{-0.26}^{+0.30} \times 10^{42}$~erg~s$^{-1}$, and rise time, $t_{\rm rise} = 16.9^{+4.5}_{-2.7}$ days (cf. Sections~\ref{photo_evol} and \ref{rise}), we obtain
$M_{\rm Ni} \simeq 0.09^{+0.04}_{-0.03} M_\odot$. The uncertainty includes the error both in the determination of the rise time and in the adopted distance, which contribute $\sim 20\%$ and $\sim 16\%$, respectively, to the total error budget of the bolometric flux.
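For reference, the commonly quoted form of this expression, with the two e-folding times corresponding to the $^{56}$Ni and $^{56}$Co decays, is
\begin{equation*}
L_{\rm max} = \left( 6.45\, e^{-t_{\rm rise}/8.8\,{\rm d}} + 1.45\, e^{-t_{\rm rise}/111.3\,{\rm d}} \right) \times 10^{43} \left( \frac{M_{\rm Ni}}{M_\odot} \right) {\rm erg~s^{-1}},
\end{equation*}
which, for $t_{\rm rise} = 16.9$~d, gives $L_{\rm max} \simeq 2.2 \times 10^{43}\,(M_{\rm Ni}/M_\odot)$~erg~s$^{-1}$ and hence $M_{\rm Ni} \simeq (1.91 \times 10^{42}) / (2.2 \times 10^{43})\,M_\odot \simeq 0.09\,M_\odot$.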
In principle, the contribution of UV light to the bolometric luminosity of SNe~Ia can be significant, particularly at the earliest epochs when the high temperature yields a large UV flux \citep{brown:2009,brown:2015}, affecting the calculated amount of $M_{\rm Ni}$. In the absence of UV data for SN~2014ck, it is interesting to note that SN~2005hk was already fading in the UV when {\it Swift} observations began, nearly 10 days before the optical maximum \citep{phillips:2007,brown:2009}. Similarly, the UV light curves of SN~2012Z reach maximum well before the optical light curves \citep[][]{stritz:2015}. For both of them, the bolometric flux is
dominated by the optical flux; the flux in the UV drops well below 10\% of the total flux before maximum. The same percentage was found for normal SNe~Ia \citep{suntzeff:1996,contardo:2000,brown:2009}.
Thus, considering a maximum additional correction of $\sim 10\%$ for the contribution of the UV flux at $L_{\rm max}$, the $M_{\rm Ni}$ estimate for SN~2014ck increases to
$\simeq 0.10^{+0.04}_{-0.03} M_{\odot}$, but remains significantly lower than the typical values for normal SNe~Ia \citep[$\sim 0.2$ to $0.8M_\odot$, see][] {stritz06,hayden:2010}.
The rise time inferred for SN~2014ck and the extremely low expansion velocity of the ejecta ($v_{\rm ph} \simeq 3.0 \times 10^3$~km~s$^{-1}$, see Section~\ref{spec_evol}) suggest a low ejecta mass ($M_{\rm ej}$) and kinetic energy ($E_{\rm k}$) compared to normal SNe~Ia and also to SN~2002cx \citep[for which $v_{\rm ph} \simeq 6.0 \times 10^3$~km~s$^{-1}$, see][]{li:2003}. Using Arnett's equations \citep{arnett:1982} as formulated by \cite{valenti:2008} -- with the typo in their Eq.~2 corrected by Wheeler et al. (2015) -- the OIR bolometric light curve is consistent with $M_{\rm ej} \sim 0.2$ to $0.5 M_\odot$. This places SN~2014ck close to the cluster formed by the Type~Iax SNe~2002cx, 2008A, 2005hk and 2009ku, just below the fast-declining, peculiar 1991bg-like SNe~Ia, in the $M_{\rm ej}$ vs.\ $M_{\rm Ni}$ plane plotted in Figure~15 of \cite{mccullyb:2014}.
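A rough version of this scaling, assuming the commonly used Arnett timescale $\tau_m = (2 \kappa M_{\rm ej} / \beta c v_{\rm ph})^{1/2}$ with $\beta = 13.8$ and $\tau_m$ approximated by the rise time, can be sketched as follows; the grey opacity $\kappa$ is the dominant assumption (see the discussion below), and the resulting values are only indicative:
\begin{verbatim}
import numpy as np

c, beta = 3.0e10, 13.8       # cm/s; Arnett constant (assumed)
tau_m = 16.9 * 86400.0       # light-curve timescale ~ rise time, s
v_ph  = 3.0e8                # cm/s, photospheric velocity (spectra)
for kappa in (0.1, 0.2):     # cm^2/g, plausible mean opacities
    M_ej = tau_m**2 * beta * c * v_ph / (2.0 * kappa)
    E_k  = 0.3 * M_ej * v_ph**2
    print(f"kappa={kappa}: M_ej ~ {M_ej / 1.989e33:.2f} Msun, "
          f"E_k ~ {E_k:.1e} erg")
\end{verbatim}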
With regard to the reliability of the $M_{\rm ej}$ estimate, it is well known that the opacities have a strong dependence on the temperature, and therefore that they vary with time \citep{hoeflich:1992}. Hotter, more luminous events should be characterised by higher opacities \citep{hoeflich:1996,nugent:1997,pinto:2001,maeda:2003,baron:2012}.
We stress that molecules, such as CO, that are predicted to form efficiently in SNe~Iax \citep[and, in general, in sub-luminous SNe~Ia, see][]{hoeflich:1995}, do not provide significant opacity in the OIR spectral range. Hence, the above value of $M_{\rm ej}$ should be considered a lower limit, as discussed by \cite{stritz:2015} for SN~2012Z.
\section{Spectral evolution}\label{spec_evol}
\subsection{Optical spectroscopy from $-6.0$ to $+110$ days}\label{spec_opt}
\begin{figure*}
\centering
\includegraphics[width=8.8cm]{sequence2.pdf}
\includegraphics[width=8.8cm]{sequence3.pdf}
\caption{Spectral evolution and line identifications. Phases are reported relative to $V$-band maximum. Left panel: spectra between $+2.9$~d and $+21$~d. Right panel: spectra between $+24.6$~d and $+110.4$~d. Wavelength is in the rest frame and the positions of major telluric absorption features are marked with the $\oplus$ symbol (in particular the vertical grey band marks the strong O$_{2}$ $A$-band absorption at 7600 $\AA\/$). }
\label{fig_spectra1}
\end{figure*}
\begin{figure*}
\includegraphics[scale=.6,angle=0]{spec_synow.pdf}
\caption{Optical spectrum of SN~2014ck at $+1.7$~d (black) and our best-fit {\tt SYNOW} synthetic spectrum (red). The contribution of each ion is also shown. Major telluric features are indicated with the $\oplus$ symbol (in particular the vertical grey band marks the strong O$_{2}$ $A$-band absorption at 7600 $\AA\/$). The insets on the right of the plot show the regions around Fe~III $\lambda$4404 (bottom) and Fe~III $\lambda$5129 (top) with an synthetic spectrum calculated without Fe~III over-plotted (dotted blue). (A colour version of this figure is available in the online journal).}
\label{synow}
\end{figure*}
\begin{figure}
\includegraphics[scale=.43,angle=0]{spec_conf.pdf}
\caption{Comparison of the rest-frame spectra of SN~2014ck at phases $-3.5$~d, $+25$~d and $+48$~d with those of SNe~2002cx \citep{li:2003,phillips:2007} and 2008ha \citep{valenti:2009,foley:2009} at similar phases.}
\label{spec_conf}
\end{figure}
The spectral evolution of SN~2014ck at optical wavelengths is shown in Figures~\ref{spec_early}, \ref{fig_spectra1} and \ref{fig_spectra4}. There is no sign of helium or hydrogen features. The pre-maximum spectra plotted in Figure~\ref{spec_early} exhibit a blue continuum with a weak, narrow Si~II~$\lambda$6355 absorption line -- the hallmark of SNe~Ia -- as well as Fe~III~$\lambda\lambda$4404, 5129 and a relatively strong feature at 4670~\AA, tentatively identified as C~III~$\lambda$4647.
\citep[There is some indication that the early spectra of SNe~2005hk and 2012Z contain C~III; see][]{chornock:2006,foley:2013}.
At the blue end of the spectra, Ca~II~H\&K and Fe~II~$\lambda$4555 are also identified, while redward of 5000~\AA, features associated with S~II~$\lambda\lambda$5454, 5640 and C~II~$\lambda\lambda$6580, 7234 are detected. C~II absorption lines have been reported in SN~2008ha \citep{foley:2010b} and possibly identified in SNe~2002cx \citep{parrent:2011}, 2005hk \citep{chornock:2006}, 2007qd \citep{mcclelland:2010}, 2012Z \citep{stritz:2015,yamanaka:2015} and several other SNe~Iax \citep[][their Figure~23]{foley:2013}.
To verify the consistency of line identification and photospheric parameters, we make use of the parametrised synthetic-spectrum code {\tt SYNOW} \citep{fisher:1997,branch:2002,branch:2003,branch:2004}. In short, {\tt SYNOW} generates a synthetic spectrum, starting from an adopted blackbody continuum temperature ($T_{\rm bb}$), photospheric expansion velocity ($v_{\rm ph}$) and, for each selected ion, a few specific parameters \citep[i.e. the optical depth of the reference line; the excitation temperature; the minimum, maximum and optical-depth e-folding velocities; see][]{branch:2002,branch:2004}.
With {\tt SYNOW}, synthetic spectra were computed to match the observations at different epochs using ions believed to be present in SN~2014ck, following \cite{branch:2004,jha:2006,chornock:2006,sahu:2008}. In particular we included iron-peak elements, intermediate-mass elements and unburned carbon.
Examples of {\tt SYNOW} spectra are shown in the insets of Figure~\ref{spec_early} (phase $-4$~d) and in Figure~\ref{synow} (phase $+1.7$~d).
For the pre-maximum spectra, $T_{\rm bb} \approx 7000$~K and $v_{\rm ph} \approx 3500$~km~s$^{-1}$ (see Section~\ref{bb}) were adopted.
The parameters for the fit to the $+1.7$~d spectrum are $T_{\rm bb} = 5600$~K and $v_{\rm ph} = 3000$~km~s$^{-1}$.
We included the set of ions and input parameters used by \cite{branch:2004} for the analysis of the early spectra of
SN~2002cx, i.e. Fe~II, Fe~III, Co~II, Cr~II, Ca~II, Na~I, Si~II, Si~III, S~II and Ti~II (see their Tables~1 and 2).
C~III and C~II ions were added in our {\tt SYNOW} spectral model to obtain reasonable fits to absorption features at $\sim 4650$~\AA\ and $\sim 6580$ and 7230~\AA, respectively. This is shown in the insets in the top panel of Figure~\ref{spec_early}, where the synthetic models obtained with and without C~III and C~II ions are compared to the observed $-4$~d spectrum.
Spectra obtained near maximum light contain Fe~III features (see the insets around Fe~III $\lambda\lambda$4404, 5129 in Figure~\ref{synow}), while
soon after maximum the Fe~III lines vanish and strong Fe~II lines develop \citep[as already noted for SN~2002cx by][]{branch:2004}.
We also tried to include Sc~II, Ni~I and Ni~II, instead of Fe~II and Co~II, as these ions might contribute features blueward of 4000~\AA\ (especially Sc~II).
However, lines of Fe~II produce most of the observed features, and line blanketing by Co~II lines is needed to get a reasonable fit in the blue.
Na~I, Ca~II, Mg~II and O~I produce just one feature each (see Figure~\ref{synow}).
Carbon is very likely overabundant in the outer layer of SN~2014ck, since in the very early spectra C~II and C~III are the strongest lines, along with Fe~III.
The detection of unburned (C+O) material in the ejecta and, specifically, the spectroscopic signatures and velocity structure of C, is of great importance for constraining our understanding of the explosion mechanisms. In particular, it might be related to the mechanism by which the explosive flame propagates throughout the WD star \citep{parrent:2011,folatelli:2012} and/or the type of progenitor system \citep[C/O WD or O/Ne/Mg WD, see ][]{nomoto:2013}. Indeed, the hallmark of pristine material from a C/O WD progenitor is the presence of carbon, as oxygen is also a product of carbon burning.
For SN~2014ck, the measured pseudo-EW \citep[see][]{folatelli:2012} of
C~II~$\lambda$6580 is $\sim 10$~\AA\ at phase $-6$~d, which decreases to $\sim 4$~\AA\ two days after $V$-band maximum.
For comparison, for normal SNe~Ia, \cite{folatelli:2012} find a C~II~$\lambda$6580 pseudo-EW of about 4~\AA\ five days before maximum and $\sim 1$~\AA\ at maximum.
This supports the findings from the analysis of SNe~2005cc and 2008ha, presented by \cite{foley:2013}, where the C~II~$\lambda$6580 signature is quite strong \citep[see also][]{parrent:2011}: a large fraction of unburned material is present in the ejecta of at least some SNe~Iax, and almost every SN~Iax with a spectrum before or around maximum light has some indication of carbon absorption.
Taking the Si~II $\lambda$6355 absorption line as an indicator of the photospheric velocity at early epochs \citep{patat:1996},
the ratio between the Doppler velocity of C~II~$\lambda$6580 and Si~II~$\lambda$6355 (see Section~\ref{bb}, Table~\ref{tbb_vel}) at maximum is $\sim 0.95$. It was $\sim 0.89$ for SN~2012Z \citep[][their Figure~9]{stritz:2015} and around 0.6 for SN~2008ha \citep{parrent:2011}. This ratio is generally slightly above unity among SNe~Ia \citep{parrent:2011,folatelli:2012}, indicating either a layered distribution of carbon-rich material or multiple clumps having similar filling factors. On the other hand, a Doppler velocity of C~II significantly below that of the photospheric velocity may indicate ejecta asymmetries, as might be the case for SN~2008ha.
The post-maximum spectra show the emergence of several Fe~II (and even Co~II) lines becoming dominant over a two-week period. Ni~I and Ni~II might also contribute blueward of 4000~\AA, likely blended with numerous Fe~II and Co~II lines.
The Si~II~$\lambda$6355 feature is clearly visible until 15~days after maximum brightness, as in SN~2002cx \citep{li:2003,branch:2004}.
On the other hand, in the case of SN~2008ha and other faint SN~Iax, this feature is only visible near maximum light \citep{valenti:2009,foley:2009,foley:2010b,stritz:2014}. Carbon features are clearly detected before maximum.
From $+24.6$~d to $+110.4$~d the spectra are dominated by Fe~II and Co~II lines, as well as by the progressive emergence of the Ca~II~NIR triplet.
In Figure~\ref{spec_conf} the spectra of SN~2014ck are compared to those of SNe~2002cx and 2008ha at similar phases.
Notably, the pre-maximum spectrum of SN~2014ck resembles SN~2008ha (rather than SN~2002cx), with the exception of the Si~II~$\lambda$6355 absorption line which is clearly stronger in SN~2008ha.
Twenty-five days after maximum brightness, the Ca~II NIR triplet in SN~2014ck is as strong as in SN~2008ha \citep{valenti:2009}, while this feature is much weaker in SN~2002cx. Around fifty days after maximum, [Ca~II]~$\lambda\lambda$7291, 7324 emission lines begin to emerge. At similar phases, these forbidden lines are stronger in SN~2008ha and extremely weak in SN~2002cx.
Overall, the spectra of SN~2014ck show a strong similarity to SN~2008ha and clear differences from SN~2002cx, particularly due to the smaller expansion velocities, but possibly also due to different ejecta composition and opacity.
The very low expansion velocity of SN~2014ck may enhance the visibility of Sc~II, tentatively identified in the narrow-line SNe~2007qd \citep{mcclelland:2010} and 2008ha \citep{valenti:2009,foley:2009}.
\subsubsection{Expansion velocities of the ejecta}\label{bb}
\begin{table*}
\caption {Blackbody temperatures (Kelvin) and expansion velocities of the ejecta (km~s$^{-1}$) at the absorption feature minimum for various ions in SN~2014ck. Estimated uncertainties are in parentheses. Phase is from the adopted epoch of the $V$-band maximum, ${\rm MJD} = 56845.6 \pm 0.1$.}
\tiny
\label{tbb_vel}
\begin{tabular}{ccccccccccccccc}
\hline
Phase &$T_{\rm bb}$& Si II & Ca II& Ca II & C II &C II &S II& O I &Fe III &Fe II & Fe II & Co II & Co II & Co II \\
(d) & (K) &$\lambda6355$& H\&K &$\lambda8498$& $ \lambda6580$&$\lambda7234$&$\lambda\lambda5454,5640$ &$\lambda7774$& $\lambda5129$ &$\lambda6149$& $ \lambda6247$ & $ \lambda15759$& $\lambda16064$ &
$\lambda16361$ \\
\hline
$-$6.0 & 8140(100)& 3445(50)& 4110(200)& 3970(200)& 3390(50) & 3460(100)&3180(100)& 3826(50)&3200(100) &5300(200) & 4740(200) & $-$ & $-$ &$-$\\
$-$5.0 & 7340(100)& 3339(50)& 3800(100)& 3760(200)& 3030(50) & $-$ &2797(100)& 3200(50)&3000(100) &5200(200) & 4700(200) & $-$ & $-$ &$-$ \\
$-$4.0 & 6720(100)& 2950(50)& 3540(100)& $-$ & 2935(100)& 3130(100)&2395(100)& 3020(50)&2820(100) &4690(100) & 4620(100) & $-$ & $-$ & $-$ \\
$-$3.6 & 6900(100)& 2890(50)& 3450(100)& 3400(200)& 2800(50) & 2960(100)& $-$ & 2962(50)&$-$ &4590(100) & 4500(100) & $-$ & $-$ & $-$ \\
$-$3.0 & 6300(200)& 2611(50)& 3305(200)& $-$ & 2800(50) & 2960(100)& $-$ & 2866(50)& $-$ &4490(100) & 4200(100) & $-$ &$-$ & $-$ \\
$-$1.0 & 5800(200)& $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ & $-$ &$-$\\
$-$1.0 & $-$ & $-$ &$-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$ & $-$ & 2700(200) &2360(200)&2450(200)\\
$-$0.1 & $-$ & 2610(100)& 3140(200)& 3100(200)& $-$ & $-$ & $-$ & 2750(50)& $-$ &4180(100) & 3800(100) & 2656(200) &2262(200)&2339(200)\\
1.7 & 5630(200)& 2560(100)& $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ &4008(100) & 3350(100) & $-$ & $-$ &$-$\\
2.9 & 5360(200)& 2430(100)&$-$ & $-$ & $-$ & $-$ & $-$ & 2600(50)& $-$ &3800(100) & 3160(100) & $-$ & $-$ & $-$ \\
3.6 & 5360(200)& $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$\\
3.9 & $-$ & 2250(100)& 2800(200)& 2700(200)& $-$ & $-$ & $-$ & $-$ & $-$ &3311(100) & 3201(100) & 2613(200) &2243(200)&2357(200)\\
5.0 & 5050(300)& $-$ &$-$ & $-$ & $-$ & $-$ &$-$ & 2300(50)& $-$ &2700(100) & 2707(100) & $-$ &$-$& $-$ \\
11.4 & 4800(300)&$-$ &$-$ & $-$ & $-$ & $-$ &$-$ & 1900(50)& $-$ &2404(100) & 2418(100) & $-$ & $-$ & $-$ \\
16.6 & 4500(300)& $-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &2224(100) & 2313(100) & $-$ & $-$ & $-$ \\
18.0 & 4100(300)& $-$ & $-$& $-$ & $-$ & $-$ &$-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$&$-$\\
19.0 &$-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &2137(100) & 2180(100) & 1955(50) &1965(50)&1938(50)\\
20.0 & 4110(300)&$-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &2107(100) & 2246(100) & $-$ & $-$ &$-$\\
21.0 & 4050(300)& $-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$&$-$\\
23.7 & $-$ &$-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ &$-$ &2079(100) & 2146(100) & 1776(50) &1835(50)&1865(50)\\
24.6 & 4070(300)& $-$ & $-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &2011(100) & 2132(100) & $-$ &$-$& $-$ \\
28.9 & 4130(300)&$-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ & $-$ & $-$ & $-$ &$-$& $-$ \\
30.8 &$-$ &$-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &1740(100) & 1769(100) & 1634(50) &1742(50)&1683(50)\\
36.4 & 4110(300)& $-$ & $-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &1919(100) & 1950(100) &$-$ & $-$ & $-$ \\
39.0 & $-$ &$-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
46.8 &$-$ & $-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &1613(100) & 1680(100) & $-$ &$-$& $-$ \\
48.4 & 4120(300)& $-$ &$-$ & $-$ & $-$ & $-$ &$-$ & $-$ & $-$ &1580(100) & 1511(100) & $-$ &$-$&$-$\\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[scale=.43,angle=0]{lines_vel.pdf}
\caption{Velocity evolution of the absorption minima of a selection of spectral lines with minimal line blending in the spectra of SN~2014ck. The typical formal error of the velocities is $\sim 200$~km~s$^{-1}$. The blending of lines can produce a systematic shift of features and increase the uncertainty at least to $\sim 1000$~km~s$^{-1}$.}
\label{vel_lines}
\end{figure}
One of the main properties of SNe~Iax is the low expansion velocity of the ejecta, which suggests low explosion energies compared to normal SNe~Ia (see Section~\ref{bolometric}).
The ejecta velocity of SN~2014ck was estimated from the location of the absorption minima of spectral lines with little line blending, based on line identifications from {\tt SYNOW}.
The results are listed in Table~\ref{tbb_vel} and plotted in Figure~\ref{vel_lines}. Early spectra can be used to probe the velocity distribution of various elements, i.e. unburned (C+O) material, intermediate mass elements (IMEs) and completely burned elements close to nuclear statistical equilibrium (NSE), namely iron, cobalt and nickel. In principle, the velocity evolution can provide solid constraints on the explosion physics, as a layered structure might be revealed \citep[a signature of detonations,][]{stritz:2014,stritz:2015} unless extensive mixing has destroyed the original stratification \citep[a signature of deflagrations,][]{phillips:2007}. However, for SNe~Iax there is severe blending of lines over the full optical and NIR spectral range that prevent secure line identifications and plague our velocity estimates \citep{szalai:2015}.
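For reference, the conversion from the wavelength of an absorption minimum to an expansion velocity is a one-line computation; the observed wavelength and host redshift below are illustrative values only:
\begin{verbatim}
c = 299792.458  # km/s

def doppler_velocity(lam_min_obs, lam_rest, z=0.005):
    # expansion velocity from the rest-frame absorption minimum
    lam_min = lam_min_obs / (1.0 + z)   # remove the host recession
    return c * (lam_rest - lam_min) / lam_rest

# e.g. a Si II 6355 minimum observed at 6313 A (hypothetical)
print(f"v ~ {doppler_velocity(6313.0, 6355.0):.0f} km/s")
\end{verbatim}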
Before maximum light, expansion velocities are measured from the minima of the Ca~II H\&K and $\lambda$8498; C~II $\lambda\lambda$6580, 7234; S~II $\lambda\lambda$5454, 5640; and Si~II $\lambda$6355 absorption features and are found to lie between 2800 and 4100~km~s$^{-1}$. Mg~II $\lambda$4481 soon becomes blended with Fe~III and is not easily distinguished. In contrast, the O~I line at $\lambda$7774 is clearly detected redward of the telluric $A$-band at $7590-7650$~\AA, and the derived velocities are similar to those inferred for Si~II. At these early phases, the Fe~II~$\lambda\lambda$4555, 6149, 6247 features exhibit consistent line velocities that are $\sim 1000$~km~s$^{-1}$ higher than those of the IMEs and of Fe~III~$\lambda$5129. However, we note that complex blending with emerging Co~II and Ti~II features could shift the position of the Fe~III~$\lambda$5129 (and also Fe~III~$\lambda$4404) absorption minima.
Indeed, around maximum light, the blending with emerging Fe~II, Ti~II and Co~II lines might broaden the observed profile and shift the middles of several lines. In particular, even during pre-maximum phases, the feature around 6300 \AA, mainly attributed to Si~II $\lambda$6355, could be a blend with Fe~II $\lambda6456$ \citep[or a more complex blending either with Fe~II plus Co~II or S~II, see][their Figures~11 and 12]{szalai:2015}.
After maximum, the ``iron curtain'' prevents the secure identification of unburned (C+O) material or IMEs, forming at similar or higher velocities \citep{branch:2004}, and the velocity measurements might be ill constrained.
Close to maximum, the absorption-minimum velocities of the Co~II NIR lines at 1.5759, 1.6064 and 1.6361~\micron\ are in good agreement with those of Si~II~$\lambda$6355 or S~II~$\lambda\lambda$5454, 5640, and are $\sim 2000$~km~s$^{-1}$ lower than those of Fe~II~$\lambda\lambda$6149, 6247.
About twenty days after maximum, the velocities of the Co~II NIR features are systematically
$\sim 300$~km~s$^{-1}$ lower than those inferred from optical Fe~II lines at the same phase. A similar trend was noted by \cite{stritz:2014} for SN~2010ae. Hereafter the line velocities evolve rather slowly.
Overall, the velocity structure of SN 2014ck indicates outer layers rich in iron-group ions, while C+O elements, Si~II and Ca~II, identified at lower velocities, seem to be well mixed. In principle, they should be present even at higher velocities (i.e. in the outer layers) if earlier spectra were available for an in-depth analysis. Consequently, we cannot exclude either a mixed or a layered structure for SN~2014ck and, in turn, it is not easy to discriminate between the different explosion mechanisms (see Section~\ref{discussion}).
In Table~\ref{tbb_vel} we also list our estimates of the photospheric temperature of SN~2014ck as derived from a blackbody fit to the spectral continuum (the spectra were corrected for the redshift and extinction).
At phases beyond $+50$~d, emerging emission lines and line blanketing drive a flux deficit at the shorter wavelengths, and the fit becomes difficult.
The errors are estimated from the dispersion of measurements obtained with different choices for the spectral fitting regions.
The early photospheric temperature of SN~2014ck is above 8000~K, but it decreases quickly to $\sim 5600$~K at maximum light and flattens to about 4000~K afterwards.
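A minimal sketch of such a fit, using a Planck function and {\sc scipy}, is given below; the continuum points are synthetic stand-ins for the line-free spectral regions actually used:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck_lambda(lam_A, T, scale):
    # blackbody F_lambda (arbitrary scale); lam_A in Angstrom
    lam = lam_A * 1e-8                     # cm
    B = 2*h*c**2 / lam**5 / (np.exp(h*c / (lam*k*T)) - 1.0)
    return scale * B

# synthetic continuum points for a ~5600 K photosphere
lam = np.array([4000., 4500., 5000., 5500., 6000., 7000., 8000., 9000.])
noise = 1.0 + 0.05 * np.random.default_rng(1).normal(size=lam.size)
flx = planck_lambda(lam, 5600.0, 1e-22) * noise

popt, pcov = curve_fit(planck_lambda, lam, flx, p0=[6000.0, 1e-22])
print(f"T_bb = {popt[0]:.0f} +/- {np.sqrt(pcov[0, 0]):.0f} K")
\end{verbatim}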
\subsection{The late-time spectrum at $+166.3$~d}
\begin{figure*}
\includegraphics[scale=.8,angle=0]{sequence4_bis.pdf}
\caption{Late-phase spectrum of SN~2014ck with line identification based on Li et al. (2003), Jha et al. (2006) and Sahu et al. (2008). The inset shows the region around $\sim 7300$~\AA, where forbidden [Ca II]~$\lambda\lambda$7291, 7324 and [Fe~II]~$\lambda\lambda$7155, 7453 features are clearly identified. [Ni~II]~$\lambda$7378 might be present in the red wing of [Ca II]~$\lambda$7324.}
\label{fig_spectra4}
\end{figure*}
A late-phase spectrum of SN~2014ck was obtained at $+166.3$~d and is plotted in Figure~\ref{fig_spectra4}.
As already remarked for other SNe~Iax \citep{foley:2013}, the late-phase spectrum does not appear to be truly nebular.
At these late epochs, SN~2014ck shows narrow permitted Fe~II lines superimposed on a pseudo-continuum, together with several forbidden lines of Fe, Co and Ca.
The dominant feature is the Ca~II~NIR triplet, but comparable in strength is the forbidden [Ca~II]~$\lambda\lambda$7291, 7324 doublet (see Figure~\ref{fig_spectra4}).
Both permitted and forbidden calcium lines are significantly more prominent in SN~2014ck than in SN~2002cx, and are comparable to those of the fainter SNe~2008ha and 2010ae \citep{valenti:2009,foley:2009,stritz:2014}.
The [Fe~II] $\lambda$7155 line is the strongest iron feature in the late-time spectrum. A relatively broad hump at 4700 \AA\ is identified as [Fe~III] and [Co~II].
A number of features blueward of 5800~\AA\ are likely a blend of permitted Fe~II lines.
The same lines are present at earlier epochs but at higher velocities. This identification was suggested by \cite{jha:2006} for SN~2002cx.
As a test, we attempted to fit the late-time spectrum with a {\tt SYNOW} model, including Fe~II, Ca~II, Na~I and O~I ions \citep[see][their Table~2 and their Figures~3 and 4]{jha:2006}.
Although the observed spectrum is not fully reproduced by the photospheric model, the synthetic spectrum provides a good match to many of the
absorption features blueward of 5800~\AA, and possibly the P-Cygni profiles of Na~I~D at $\lambda\lambda$5890, 5896. An alternative identification of this feature could be [Co~III]~$\lambda$5888 \citep{dessart:2014}, also suggested for SN~2012Z by \cite{stritz:2015}.
This last interpretation is supported by the unambiguous presence of other [Co~III] lines in the 6000~\AA\ region.
We conclude that the late-time spectrum of SN 2014ck is a combination of P-Cygni profiles of recombination lines and emission lines of forbidden Fe, Ca and Co features, but no [O~I]~$\lambda\lambda$6300, 6364 emission. This feature is typically not present in late spectra of SNe~Ia \citep{blondin:2012}.
In order to get [O~I] emission, we need a significant amount of O in a region where $\gamma$-rays are being absorbed, and the O-emitting region cannot be too contaminated by Ca. In fact, the [Ca~II]~$\lambda\lambda$7291, 7324 feature can limit the strength of [O~I]~$\lambda\lambda$6300, 6364 emission from a region in which both these ions co-exist \citep{fransson:1989,dessart:2015}. The emission of [O~I]~$\lambda\lambda$6300, 6364 is absent from relatively late spectra of SNe~2014ck and 2008ha \citep{foley:2009}, while in both cases O~I~$\lambda$7774 absorption is identified in photospheric phase spectra.
The FWHM values were estimated as 1900, 1200 and 1600~km~s$^{-1}$ for the [Ca II], [Fe II] and Ca~II~NIR lines, respectively.
The nebular lines show slightly different velocity shifts: about $+170$~km~s$^{-1}$ for [Fe~II]~$\lambda\lambda$7155, 7453, $-270$~km~s$^{-1}$ for [Ca~II]~$\lambda\lambda$7291, 7324 and $-180$~km~s$^{-1}$ for [Ni~II]~$\lambda$7378 (the latter is difficult to measure, as it lies in the wing of [Ca~II]~$\lambda$7324). As already noted by \cite{foley:2013} for SNe 2002cx, 2005hk, 2008A and 2008ge, the [Fe~II] and [Ca~II] features have shifts in opposite directions, highlighting the rather complex velocity structure of SNe~Iax.
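A sketch of how the FWHM and velocity shift of a single emission line can be measured with a Gaussian fit is given below; the spectral points are synthetic, built around the [Fe~II]~$\lambda$7155 parameters quoted above:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

c = 299792.458  # km/s

def gauss(lam, amp, mu, sigma, cont):
    return cont + amp * np.exp(-0.5 * ((lam - mu) / sigma)**2)

# synthetic continuum-normalised points around [Fe II] 7155
lam = np.linspace(7100.0, 7210.0, 45)
sig_true = 7155.0 * 1200.0 / (2.355 * c)   # FWHM = 1200 km/s
flx = gauss(lam, 1.0, 7159.0, sig_true, 0.2) \
      + 0.02 * np.random.default_rng(2).normal(size=lam.size)

popt, _ = curve_fit(gauss, lam, flx, p0=[1.0, 7155.0, 10.0, 0.2])
shift = c * (popt[1] - 7155.0) / 7155.0    # line-centre shift
fwhm  = 2.355 * popt[2] / popt[1] * c      # FWHM in km/s
print(f"shift ~ {shift:+.0f} km/s, FWHM ~ {fwhm:.0f} km/s")
\end{verbatim}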
\subsection{Near-infrared spectral sequence}\label{nir}
\begin{figure*}
\includegraphics[scale=.6,angle=0]{sn14ck_ir.pdf}
\caption{NIR spectra of SN~2014ck obtained with the Gemini North Telescope (+ GNIRS). The phase relative to $V$-band maximum is labeled for each spectrum. Prevalent features attributed to Fe~II, Mg~II, Ca~II, Co~II and Si~III are indicated with labels. Telluric regions are indicated with the $\oplus$ symbol and vertical grey bands.
The $-1$~d spectrum is compared to our best-fit {\tt SYNOW} synthetic spectrum (red).
The inset on the top shows the $+0.4$~d spectrum (black) in the range 0.95 to 1.15~\micron, showing the main features due to Fe~II (blue) and Mg~II (green). (A colour version of this figure is available in the online journal).}
\label{spec_nir}
\end{figure*}
\begin{figure}
\includegraphics[scale=.43,angle=0]{spec_synow_nir.pdf}
\caption{NIR spectrum of SN~2014ck at $+19$~d (black) and our best-fit {\tt SYNOW} synthetic spectrum (red). The contribution of prevalent ions is also shown. Telluric regions are indicated with the $\oplus$ symbol and vertical grey bands. Inset: close-up of the $H$-band spectral region, showing the ubiquitous signature of Co~II. (A colour version of this figure is available in the online journal). }
\label{synow_nir}
\end{figure}
The NIR spectral evolution of SN~2014ck is presented in Figure~\ref{spec_nir}. Before maximum light, large parts of the spectra resemble the infrared tail of a hot blackbody continuum, with the exception of a few spectral features around 1~\micron\ and humps between 1.5 and 1.8~\micron.
From $-0.1$~d to $+3.9$~d, the most prominent features are attributed to Fe~II; in particular, the strongest line, showing a P~Cygni profile at $\sim 1$~\micron, is Fe~II 0.9998~\micron.
Mg~II 0.9227 and 1.0927~\micron\ \citep[and possibly weaker lines around
2.4~\micron\ due to Mg~II transitions, i.e. 2.4041, 2.4044, 2.4125~\micron, see][]{hoeflich:2002} produce shallow notches partially blended with Fe.
Moreover, around 1~\micron\ there might also be traces of C~I lines at 0.9093, 0.9406, 1.0693 and 1.1754~\micron, which have been tentatively identified both in the sub-luminous Type~Ia SN~1999by \citep{hoeflich:2002} and in the Type~Iax SN~2012Z \citep{stritz:2015}, in addition to normal SNe~Ia \citep{hsiao:2013,hsiao:2015}. However, no confident carbon detections can be made in the NIR spectra of SN~2014ck. O~I 0.9264~\micron\ should be blended with Mg~II 0.9227~\micron. Indeed, in our optical spectra the O~I~$\lambda$7774 line -- which is expected to be 3--20 times stronger than the 0.9264~\micron\ line -- is already weak, so O~I 0.9264~\micron\ is not expected to be a strong feature.
The 0.8446~\micron\ O~I line may contribute to the absorption, dominated in later phases by the Ca~II NIR triplet. Signatures of Si~II might be present blueward of 1~\micron\ (0.9413~\micron) and the 1.6930~\micron\ Si~II line may be part of the hump at these wavelengths, together with Mg~II (1.6787~\micron) and emerging Co~II lines.
A {\tt SYNOW} fit (adopting $T_{\rm bb} = 5800$~K and $v_{\rm ph} = 2500$~km~s$^{-1}$, see Section~\ref{bb}) of the $-1$~d spectrum was used to assist with the above line identifications, including an extended set of ions (C~I, C~II, O~I, Mg~I, Mg~II, Si~II, Si~III, Ca~II, Fe~II, Fe~III and Co~II, see Figure~\ref{spec_nir}).
The inset of Figure~\ref{spec_nir} shows the most prominent feature in the earliest spectrum, attributed to Fe~II~0.9998~\micron.
Three weeks later, the NIR spectrum changes radically, becoming strongly dominated by Co~II lines, as previously documented in all other SNe~Iax with similar data \citep{kromer:2013,stritz:2014,stritz:2015}. Co~II clearly contributes with numerous lines between 1.6 and 1.8~\micron\ soon after maximum light, as it is already present in the spectrum taken at $+3.9$~d. Spectra obtained at $+19$~d or later show distinct absorption at the location of several Co~II lines, most prominently at 1.5759, 1.6064, 1.6361, 1.7462, 1.7772, 2.2205, 2.3613 and 2.4596~\micron.
The increasing strength of Co~II with time is attributed both to a lower opacity and a higher abundance of $^{56}$Co in the external ejecta compared with SNe~Ia \citep{hsiao:2013}.
The {\tt SYNOW} fit to the $+19$~d spectrum of SN~2014ck is plotted in Figure~\ref{synow_nir}. We adopted $T_{\rm bb} = 4000$~K and $v_{\rm ph} = 1900$~km~s$^{-1}$ (see Section~\ref{bb}) and included a smaller subset of the above IMEs and Fe-group ions (Co~II, Fe~II, Ca~II, Si~III, O~I and Mg~II). While it is confirmed that Co~II largely dominates the spectrum redward of 1.5~\micron\ ($H$- and $K$-bands), Fe~II prevails blueward of that wavelength, as in the spectra of SNe~Ia during the transition from the photospheric to the nebular phase \citep{friesen:2014}.
The Ca~II~NIR triplet is also a prevalent feature starting from the spectrum at $+19$~d, and
Si~III might help to improve the fit of the features between 1.1 and 1.4~\micron\ \citep[as in SN~2012Z, see][]{stritz:2015}.
The NIR spectral evolution of SN~2014ck is reminiscent of what is observed in SN~2010ae \citep[see][their Figure~4]{stritz:2014} and, overall, in normal SNe~Ia \citep{marion:2009,hsiao:2013,hsiao:2015}. For both SNe~Ia and SNe~Iax the characteristic $H$- and $K$-band iron-peak complex rapidly emerges soon after $B$-band maximum and becomes the dominant feature in the NIR.
\subsection{A summary of the overall spectral characteristics}
Similarly to SN~2008ha \citep{foley:2010b}, the pre-maximum spectra of SN~2014ck show the signatures of Si~II $\lambda\lambda$6347, 6371, S~II $\lambda\lambda$5454, 5640 (together with Ca, O, Na and Fe) and no sign of hydrogen or helium.
As in SN~2008ha, unburned (C+O) material, specifically C~II $\lambda\lambda$6580, 7234 is also detected,
as is C~III $\lambda$4647.
Such analogies in the early phases might suggest that the two objects share similar physical conditions and composition.
However, unlike SN~2008ha, the late spectrum of SN~2014ck is dominated by iron-peak elements, with the presence of both permitted Fe~II and forbidden [Fe~II], [Fe~III] and [Co~II] emission lines.
It also displays relatively strong features of both the Ca~II NIR triplet and [Ca~II] $\lambda\lambda$7291, 7324, which have a comparable strength in SN~2008ha but are weak or absent in SN~2002cx. In the NIR spectral region,
SN~2014ck shows features typical of SNe~Iax, with the ubiquitous presence of Co~II.
Fe~II lines are present in SN~2014ck at higher expansion velocities than the IMEs and the unburned C+O material. Before maximum, the Ca~II line velocities are bracketed by those measured for Fe~II, Fe~III and S~II.
\section{Concluding remarks}\label{discussion}
\begin{figure}
\includegraphics[scale=.58,angle=0]{fig_confronti.pdf}
\caption{Scatter plots of $\Delta m_{15} (V)$, $M_V$ and photospheric velocities $v_{\rm ph}$ derived around $+10$~days for several well studied SNe~Iax \citep[see][their Figure~3]{narayan:2011}.}
\label{conf}
\end{figure}
Mirroring the behaviour already observed for SN~2009ku by \cite{narayan:2011}, the high luminosity and low ejecta velocity of SN~2014ck are contrary to the trend generally seen in the heterogeneous group of peculiar SNe~Iax. In fact, from a photometric point of view SN~2014ck resembles SN~2002cx, the prototype of the class, showing similar peak luminosities ($M_B=-17.37 \pm 0.15$~mag for SN~2014ck vs.\ $M_B=-17.53 \pm 0.26$~mag for SN~2002cx) and decline rates ($\Delta m_{15}(B) = 1.76 \pm 0.15$~mag vs.\ $\Delta m_{15}(B) = 1.7 \pm 0.1$~mag) in all bands.
Given the almost identical bolometric luminosities at maximum, the synthesised mass of $^{56}$Ni is nearly the same for both objects, $\sim 0.1 M_\odot$.
Yet despite the relatively high peak luminosity and slow light curve decline, the spectra of SN~2014ck are more similar to those of the three-magnitudes-fainter SNe~2008ha \citep{foley:2009,valenti:2009}, 2007qd \citep{mcclelland:2010} and 2010ae \citep{stritz:2014}.
All these objects exhibit narrow spectral lines indicating low expansion velocities of the ejecta ($v_{\rm ph}$ from 2500 to 3000~km~s$^{-1}$ at maximum from the absorption minima of Si~II).
\cite{mcclelland:2010} and \cite{foley:2013} suggested the existence of a relation between peak luminosity, ejecta velocity and light-curve shape, with higher-velocity SNe~Iax being more luminous and more slowly declining.
On this basis, they argued that the full class of SNe~Iax might originate from a single explosion mechanism.
Both SNe~2009ku and 2014ck, with their low velocity and relatively high luminosity, are outliers in these relations, and the results of this work confirm and reinforce the results of \cite{narayan:2011}.
This is illustrated in Figure~\ref{conf}, which shows the positions of a sample of SNe~Iax in a three-variable phase space composed of $\Delta m_{15}(V)$, $M_V$ and $v_{\rm ph}$ measured 10 days after maximum \citep[see][their Figure~3]{narayan:2011}.
In fact, this sample of SNe~Iax shows:
(i) there is not a monotonic correlation between $v_{\rm ph}$ and $\Delta m_{15}(V)$, as at low velocities the spread in the decline rate is wide (top-left panel);
(ii) there is evidence of a linear correlation between $\Delta m_{15}(V)$ and $M_V$ (bottom-left); and
(iii) SNe~2014ck and 2009ku are at odds with the suggestion of \cite{mcclelland:2010} that higher velocity SNe~Iax also have higher peak luminosities (bottom-right panel).
We note that these are still small-number statistics, and therefore these findings need further confirmation. However, at present we cannot conclude SNe~Iax are a one-dimensional sequence from fainter to brighter events.
Comprehensive explosion models are required to describe both the diversity and homogeneity among SNe~Iax. On the whole, the observations seem to indicate that they may originate from a homogeneous population \citep{foley:2013}.
More specifically, however, there appears to be a diverse range of rise times, peak luminosities and expansion velocities and, as underlined above, no clear correlation among these physical parameters. Thus, there may be multiple progenitor paths and explosion mechanisms that create a SN~Iax.
To date, the progenitors of SNe~Iax are still a subject of debate \citep{valenti:2009,foley:2009,foley:2010b,moriya:2010,liua:2015}. The search for and detection of progenitor stars or systems in pre-explosion images is, in principle, a promising technique for testing different progenitor models \citep{mccully:2014,liua:2015,liub:2015}. For SN~2014ck, archival {\it HST} images were obtained and no progenitor was detected at the SN position. The available images are not very deep and provide a $3\sigma$ limit of $M_{\rm F625W} > -6.5$~mag. This limit allows us to rule out only the most luminous Wolf-Rayet stars as potential progenitors.
Overall, the optical and NIR observational features summarised above favour a thermonuclear explosion of a C/O WD for SN~2014ck. A failed deflagration model \citep{jordan:2012,kromer:2013} might explain most of its observed properties, from the low peak luminosity and energetics (see Section~\ref{bolometric}) to the spectral characteristics (see Section~\ref{spec_evol}). In particular, the moderate $M_{\rm Ni}$ production derived for this event and the low expansion velocities are well matched in this scenario. The failed deflagration is too weak to unbind the WD, leaving behind a gravitationally bound, compact remnant of $\sim 1 M_\odot$. The low ejecta mass of SN~2014ck is partially consistent with that prediction. However, the rise time predicted by these models \citep[see Table~4 in][]{fink:2014} is too short compared with the rise time derived for SN~2014ck ($t_{\rm rise} = 16.9^{+4.5}_{-2.7}$~days, see Section~6.2). Above all, the modelled burning of a C/O WD via a turbulent deflagration flame produces homogeneous mixing of elements in the ejecta, with unburned, partially burned and fully burned (to the iron peak) material throughout the ejecta. Significant mixing in the ejecta was suggested for SN~2005hk by \cite{phillips:2007}. The failed deflagration model was also considered the most probable scenario for SN~2012Z by \cite{yamanaka:2015}. In contrast, for the same event, \cite{stritz:2015} pointed out evidence of a layered structure for calcium, silicon and magnesium, which is instead the signature of a detonation. Thus, they suggested that these elements are produced in a detonation phase after the mixing has already occurred, the majority of the iron-peak elements having been produced in a previous deflagration phase, i.e. a pulsational delayed detonation \citep[PDD;][]{hoeflich:1995,hoeflich:1996}.
Looking at the velocity distribution of elements in SN~2014ck (Figure~12),
the presence of Fe~II features up to high velocities, as reported by \cite{stritz:2015} for SN~2012Z (and also for a few normal SNe~Ia), seems to suggest
a layered structure in the ejecta, arguing in favour of a detonation phase, which might have followed a deflagration. On the other hand, C+O elements and Si~II are moderately mixed, and one could argue that they would be identified at higher velocities if earlier spectra were available. Moreover, explaining iron-group material in the outer layers is still a challenge for explosion models, even if some results have been obtained within a delayed-detonation scenario \citep[see for example][and references therein]{hach:2013}. Several questions thus remain open, among them the mechanism of the transition from deflagration to detonation. In addition, the extremely low photospheric velocity of SN~2014ck at maximum is difficult to explain within PDD models. We also underline, once again, that the severe blending across the spectra of SNe~Iax might prevent secure line identifications, as was detailed by \cite{szalai:2015}. Consequently, the derived expansion velocities of the elements are ill constrained, as is the distinction between a layered and a mixed structure of the ejecta.
The analysis of our extended data set of SN~2014ck cannot exclude either the failed deflagration or the PDD models for this event. In addition, the comparison with other SNe~Iax highlights that the diversity within this class of transients cannot be reduced to a one-parameter description. This may also imply that distinct progenitors and/or explosion mechanisms are involved, despite the overall similarity of the main observables.
\section*{Acknowledgments}
This paper is based on observations made with: the LCOGT 2.0~m (Haleakala, Hawaii, USA) and 1.0~m (McDonald Observatory, Texas, USA) telescopes;
the 2.56~m Nordic Optical Telescope (La Palma, Spain); the INAF Osservatorio Astronomico di Padova Copernico 1.82~m Telescope (Mt.~Ekar, Asiago, Italy); the 3.58~m Telescopio Nazionale Galileo
(La Palma, Spain); the 8.1~m Gemini-N Telescope (Hilo, Hawaii, USA); and the 10.4~m Gran Telescopio Canarias (La Palma, Spain).
We acknowledge the staff at INAF OAPd in Asiago and at LCOGT for their support.
Based on observations obtained at the Gemini Observatory under Program
ID GN-2014A-Q-8 and GN-2014B-Q-41. Gemini is operated by the
Association of Universities for Research in Astronomy, Inc., under a
cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science
Foundation (United States), the National Research Council (Canada), CONICYT
(Chile), the Australian Research Council (Australia), Minist\'{e}rio da Ci\^{e}ncia,
Tecnologia e Inova\c{c}\~{a}o
(Brazil) and Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n
Productiva (Argentina).
Also based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
We thank WeiKang Zheng and Alex Filippenko for sending us pre-discovery and discovery Lick/KAIT images of UGC~12182 from which non-detection upper limits of SN~2014ck were determined and the rise time estimated. We also thank the anonymous referee for the thorough review of the paper.
AP, SB, NER, LTar, GT and MT are partially supported by the PRIN-INAF 2014 with the project ``Transient Universe: unveiling new types of stellar explosions with PESSTO".
NER acknowledges the support from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n.\ 267251 ``Astronomy Fellowships in Italy'' (AstroFIt). AMG acknowledges financial support by the Spanish {\it Ministerio de
Econom\'ia y Competitividad} (MINECO), grant ESP2013-41268-R.
MS and EYH acknowledge support provided by the Danish Agency for Science and Technology and Innovation through a Sapere Aude Level 2 grant.
This research has made use of: the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration;
{\sc iraf} packages, distributed by the National Optical Astronomy Observatory, operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
\section{Introduction and Statement of Results}
Given a finite, simple graph $G = (V(G),E(G))$, an independent set $I$ is a subset of $V(G)$ so that if $v,w \in I$, then $vw \notin E(G)$. The \emph{size} of an independent set is $|I|$. We will let $i(G)$ denote the number of independent sets in $G$ and $i_t(G)$ denote the number of independent sets of size $t$ in $G$. The quantity $i(G)$ has also been called the \emph{Fibonacci number} of $G$ \cite{ProdingerTichy} and the \emph{Merrifield-Simmons index} of $G$ \cite{MerrifieldSimmons}.
There has been a large number of papers devoted to finding the maximum and minimum values of $i(G)$ and $i_t(G)$ as $G$ ranges over some family of graphs. For a sampling of these results, we refer the reader to two surveys \cite{Cutler,Zhao} and the references found therein.
A \emph{proper vertex coloring} of a graph $G$ is an assignment of a color to each vertex so that no edge is monochromatic. A graph $G$ is \emph{$k$-chromatic} if there exists a proper coloring using $k$ colors but not one with $k-1$ colors. We call $G$ \emph{$k$-critical} if it is $k$-chromatic and every proper subgraph of $G$ is at most $(k-1)$-chromatic. Finally, a graph $G$ is \emph{$\ell$-connected} if $|V(G)|>\ell$ and any graph obtained by deleting fewer than $\ell$ vertices is connected. Recently Fox, He, and Manners \cite{FoxHeManners} proved an old conjecture of Tomescu by finding the $n$-vertex $k$-chromatic connected graph with the maximum number of proper vertex colorings that uses $k$ colors.
The focus of this note is on maximizing $i(G)$ and $i_t(G)$ within the family of $n$-vertex $k$-chromatic $\ell$-connected graphs. When $\ell=0$ and $\ell=1$, the maximum number of independent sets, and independent sets of each fixed size $t$, in these families was determined in \cite{EngbersErey}. Our first result generalizes this to maximizing $i(G)$ when $\ell>1$ for $n$ large. Before we state it, we first define the extremal graphs for the various values of $k$ and $\ell$. Recall that for graphs $G_1$ and $G_2$, the graph $G_1 \vee G_2$ has vertex set $V(G_1) \sqcup V(G_2)$ and edge set $E(G_1) \cup E(G_2) \cup \{xy:x \in V(G_1),y \in V(G_2)\}$. We denote the complete and empty graphs on $n$ vertices by $K_n$ and $E_n$, respectively.
\begin{definition}
Fix $k \geq 2$ and $\ell \geq 1$. For $k \leq \ell$, let $G^*:= (K_{k-1} \cup E_{\ell-k+1}) \vee E_{n-\ell}$, and for $k > \ell$ let $G^* := K_\ell \vee (K_{k-\ell} \cup E_{n-k})$. See Figure \ref{fig-G^*}.
\end{definition}
\begin{figure}[ht!]
\centering
\begin{tikzpicture}[scale=.7]
\node at (-1,3.75) {$G^*$ ($k \leq \ell$)};
\node (v1) at (1,1) [circle,draw] {\fontsize{7}{5.2}\selectfont {$k$-1}};
\node (v2) at (1,2) [circle,draw,scale=.4,fill] {};
\draw[densely dotted] (.35,.35) -- (.35,2.35);
\draw[densely dotted] (.35,2.35) -- (1.65,2.35);
\draw[densely dotted] (1.65,2.35) -- (1.65,.35);
\draw[densely dotted] (1.65,.35) -- (.35,.35);
\node (v3) at (3,0) [circle,draw,scale=.4,fill] {};
\node (v4) at (3,1) [circle,draw,scale=.4,fill] {};
\node (v5) at (3,2) [circle,draw,scale=.4,fill] {};
\node (v6) at (3,3) [circle,draw,scale=.4,fill] {};
\draw[densely dotted] (2.65,-.35) -- (3.35,-.35);
\draw[densely dotted] (3.35,-.35) -- (3.35,3.35);
\draw[densely dotted] (3.35,3.35) -- (2.65,3.35);
\draw[densely dotted] (2.65,3.35) -- (2.65,-.35);
\node at (3,3.75) {$n-\ell$};
\node at (1,2.75) {$\ell$};
\foreach \from/\to in {v1/v3,v1/v4,v1/v5,v1/v6,v2/v3,v2/v4,v2/v5,v2/v6}
\draw (\from) -- (\to);
\end{tikzpicture}
\qquad\qquad\qquad
\begin{tikzpicture}[scale=.8]
\node at (0,3.75) {$G^*$ ($k>\ell$)};
\node (v1) at (1,.8) [circle,draw,scale=.4,fill] {};
\node (v2) at (2,2) [circle,draw,scale=.4,fill] {};
\node at (2,2.6) {$\ell$};
\draw[densely dotted] (1.7,2.3) -- (2.3,2.3);
\draw[densely dotted] (2.3,2.3) -- (2.3,.7);
\draw[densely dotted] (2.3,.7) -- (1.7,.7);
\draw[densely dotted] (1.7,.7) -- (1.7,2.3);
\node (v3) at (1,2.2) [circle,draw,scale=.4,fill] {};
\draw[densely dotted] (-.1,2.5) -- (1.3,2.5);
\draw[densely dotted] (1.3,2.5) -- (1.3,.5);
\draw[densely dotted] (1.3,.5) -- (-.1,.5);
\draw[densely dotted] (-.1,.5) -- (-.1,2.5);
\node at (.6,2.8) {$k-\ell$};
\node (v4) at (2,1) [circle,draw,scale=.4,fill] {};
\node (v5) at (4,2) [circle,draw,scale=.4,fill] {};
\node (v6) at (4,.5) [circle,draw,scale=.4,fill] {};
\node (v7) at (4,1.25) [circle,draw,scale=.4,fill] {};
\node (v8) at (4,2.75) [circle,draw,scale=.4,fill] {};
\draw[densely dotted] (3.7,.2) -- (4.3,.2);
\draw[densely dotted] (4.3,.2) -- (4.3,3.05);
\draw[densely dotted] (4.3,3.05) -- (3.7,3.05);
\draw[densely dotted] (3.7,3.05) -- (3.7,.2);
\node at (4,3.35) {$n-k$};
\node (v9) at (0.2,1.5) [circle,draw,scale=.4,fill] {};
\foreach \from/\to in {v9/v1,v9/v2,v9/v3,v9/v4,v1/v3,v1/v2,v2/v3,v2/v4,v4/v3,v1/v4,v4/v5,v4/v6,v4/v7,v4/v5,v2/v5,v2/v6,v2/v7,v2/v8,v4/v8}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{The graph $G^*$ for the two possibilities for $k$ and $\ell$.}
\label{fig-G^*}
\end{figure}
So, for example, when $k=2\leq \ell$ we have $G^*=K_{\ell,n-\ell}$, and the fact that $i(G) \leq i(G^*)$ for all $n$-vertex $\ell$-connected bipartite graphs $G$, with equality if and only if $G=G^*$, appears as Corollary 2.2 in \cite{AlexanderCutlerMink} (recall that an $\ell$-connected graph has minimum degree at least $\ell$). When $\ell=1$ and $k > 1$ the graph $G^*$ is formed from a $k$-clique with $n-k$ pendants attached to one vertex in the clique, and the fact that $i(G) \leq i(G^*)$ for all connected $k$-chromatic graphs $G$, with equality if and only if $G=G^*$, appears as Corollary 3 in \cite{EngbersErey}. (Viewing $G^*$ naturally as $K_k \cup E_{n-k}$ in the case where $\ell=0$, the analogous result appears as Corollary 2 in \cite{EngbersErey}.)
We show that this result is in fact true for all $k>2$, $\ell \geq 1$, and $n$ large.
\begin{theorem}\label{thm-i(G)}
Let $k>2$ and $\ell \geq 1$ be fixed.
If $n > 2(k+\ell+2)\binom{6(k+\ell)}{\ell}$ and $G$ is an $n$-vertex $k$-chromatic $\ell$-connected graph, then
\[
i(G) \leq i(G^*)=\begin{cases}
2^{n-\ell}+k2^{\ell-k+1} -1, & \quad k \leq \ell \\
(k-\ell+1)2^{n-k}+\ell, & \quad k > \ell
\end{cases},
\]
with equality if and only if $G=G^*$.
\end{theorem}
In fact, we prove the following more general result, from which Theorem \ref{thm-i(G)} follows as $\ell$-connected graphs have minimum degree at least $\ell$.
\begin{theorem}\label{thm-asympresult}
Let $k>2$ and $\ell \geq 1$ be fixed.
If $n > 2(k+\ell+2)\binom{6(k+\ell)}{\ell}$ and $G$ is an $n$-vertex $k$-chromatic graph with minimum degree at least $\ell$, then
\[
i(G) \leq i(G^*)=\begin{cases}
2^{n-\ell}+k2^{\ell-k+1} -1, & \quad k \leq \ell \\
(k-\ell+1)2^{n-k}+\ell, & \quad k > \ell
\end{cases},
\]
with equality if and only if $G=G^*$.
\end{theorem}
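Before turning to the proof, we note that the closed forms above are easy to confirm mechanically for small parameters. The following brute-force Python sketch is purely illustrative and plays no role in the argument; it rebuilds $G^*$ from its definition and checks the stated values of $i(G^*)$.
\begin{verbatim}
from itertools import combinations

def i_total(n, edges):
    # Total number of independent sets (empty set included), brute force.
    E = set(map(frozenset, edges))
    return sum(all(frozenset(p) not in E for p in combinations(S, 2))
               for r in range(n + 1) for S in combinations(range(n), r))

def g_star(n, k, l):
    # Edge list of G^* on vertices 0..n-1:
    #   (K_{k-1} disjoint-union E_{l-k+1}) joined to E_{n-l}  when k <= l,
    #   K_l joined to (K_{k-l} disjoint-union E_{n-k})        when k >  l.
    if k <= l:
        clique = list(combinations(range(k - 1), 2))
    else:
        clique = (list(combinations(range(l), 2))
                  + list(combinations(range(l, k), 2)))
    join = [(u, w) for u in range(l) for w in range(l, n)]
    return clique + join

for n, k, l in [(8, 3, 2), (8, 3, 4), (9, 5, 3), (9, 4, 4)]:
    expect = (2**(n - l) + k * 2**(l - k + 1) - 1 if k <= l
              else (k - l + 1) * 2**(n - k) + l)
    assert i_total(n, g_star(n, k, l)) == expect
\end{verbatim}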
The proof of Theorem \ref{thm-asympresult} uses a stability technique, and proceeds by showing that any graph $G$ satisfying $i(G) \geq i(G^*)$ must have a similar structure to $G^*$ in that $G$ must have a large complete bipartite subgraph $K_{\ell,cn}$ for some constant $c$. It then breaks into two cases depending on the values of $k$ and $\ell$, where the count of $i(G)$ is driven by those independent sets that completely avoid the size $\ell$ part of $K_{\ell,cn}$.
\medskip
In \cite{EngbersErey}, the authors ask what can be said in the family of $n$-vertex $k$-chromatic $\ell$-connected graphs for independent sets of size $t$ when $\ell>1$. We also provide some results here for specific values of $k$, $\ell$, and $t$. The first new case is that of $2$-connected $3$-chromatic graphs; we remark that results on maximizing $i(G)$ over all $n$-vertex $2$-connected graphs appear in \cite{HuaZhang}.
\begin{definition}
A \emph{theta graph} joins vertices $v$ and $w$ with three internally disjoint paths of (edge) lengths $a$, $b$, and $c$. We denote this graph by $\theta_{a,b,c}$.
\end{definition}
We have, for example, that $\theta_{2,2,2} = K_{2,3}$, and it is apparent that $|V(\theta_{a,b,c})| = a + b + c-1$. Note that when $a$ is even, the corresponding $vw$ path has an odd number of internal vertices. We now state the theorem.
\begin{theorem}\label{thm-2con3chrom}
Let $n \geq 4$, and let $G$ be an $n$-vertex $3$-chromatic $2$-connected graph. Then we have the following:
\begin{itemize}
\item if $n$ is odd, then $i_2(G) \leq i_2(C_n)$ with equality if and only if $G=C_n$;
\item if $n$ is even, then $i_2(G) \leq i_2(\theta_{a,b,c})$, where at least one of $a$, $b$, or $c$ is even, with equality if and only if $G = \theta_{a,b,c}$ where at least one of $a$, $b$, or $c$ is even; and
\item for all $3 \leq t \leq n-2$, $i_t(G) \leq i_t(K_2 \vee E_{n-2})$, and for $n \geq 5$ we have equality if and only if $G=K_2 \vee E_{n-2}$.
\end{itemize}
\end{theorem}
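The three statements can likewise be checked by exhaustive enumeration for a fixed small $n$; the following Python sketch (illustrative only, with $n=8$ and the particular theta graph $\theta_{2,3,4}$) confirms the claimed comparisons.
\begin{verbatim}
from itertools import combinations

def i_t(n, edges, t):
    # Number of independent sets of size t, by brute force.
    E = set(map(frozenset, edges))
    return sum(all(frozenset(p) not in E for p in combinations(S, 2))
               for S in combinations(range(n), t))

def theta(a, b, c):
    # theta_{a,b,c}: vertices 0 and 1 joined by three internally
    # disjoint paths with a, b and c edges; returns (order, edge list).
    edges, nxt = [], 2
    for length in (a, b, c):
        prev = 0
        for _ in range(length - 1):
            edges.append((prev, nxt)); prev, nxt = nxt, nxt + 1
        edges.append((prev, 1))
    return nxt, edges

n = 8
n_th, th = theta(2, 3, 4)     # n even; one even path length, so 3-chromatic
assert n_th == n
fan = [(0, 1)] + [(u, w) for u in (0, 1) for w in range(2, n)]  # K_2 v E_{n-2}
assert i_t(n, th, 2) > i_t(n, fan, 2)       # a theta graph wins at t = 2
for t in range(3, n - 1):                   # 3 <= t <= n-2
    assert i_t(n, fan, t) >= i_t(n, th, t)  # K_2 v E_{n-2} wins for t >= 3
\end{verbatim}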
The results for maximizing $i(G)$ amongst $3$-chromatic $2$-connected graphs are consequences of \cite{HuaZhang}, and we briefly discuss this at the beginning of Section \ref{sec-2con3chrom}. Also, as a fairly routine consequence of results for independent sets of size $t$ in $n$-vertex $\ell$-connected graphs, we show the following.
\begin{theorem}\label{thm-bigt}
Let $k \geq 3$, $\ell \geq k$, and $n \geq 2\ell$ be fixed. If $G$ is an $n$-vertex $k$-chromatic $\ell$-connected graph and $t \geq \ell$, then
\[
i_t(G) \leq i_t(G^*).
\]
\end{theorem}
\medskip
Finally, we also consider the problem of maximizing the number of independent sets of size $t=2$ in $k$-chromatic $\ell$-connected graphs. Note that an independent set of size 2 induces an edge in the complement of the graph, so this problem is equivalent to minimizing the number of edges.
The problem of minimizing edges has been studied for several related families of graphs. The minimum number of edges in a $k$-chromatic graph is clearly ${k \choose 2}$. The minimum number of edges in $\ell$-connected graphs is $\lceil \frac{n\ell}{2} \rceil$ due to Harary \cite{Harary}. The minimum number of edges in $k$-critical graphs was first studied by Dirac \cite{Dirac} and Gallai \cite{Gallai-I, Gallai} and subsequently in \cite{Kostochka,Krivelevich1,Krivelevich2}. Minimizing edges in $k$-chromatic $\ell$-edge-connected graphs was considered in \cite{Westetal}, where it was briefly noted that some of the bounds also hold for $\ell$-connected graphs.
We present two sharp bounds for the minimum number of edges in $k$-chromatic $\ell$-connected graphs in the case that $k-1>\ell >1$. The first has the extra condition that $\ell \le n-k$, and our bound coincides exactly with the result in \cite{Westetal}; however, their proof relies on edge-connectivity. Furthermore, our techniques in the range $\ell \le n-k$ allow us to tackle the range $\ell > n-k$ as well; this appears as an unsolved case in \cite{Westetal}. All remaining cases for minimizing edges in $k$-chromatic $\ell$-connected graphs follow similarly to \cite{Westetal}, so we omit them here.
\begin{theorem}\label{thm-minedges1}
If $G$ is a $k$-chromatic $\ell$-connected graph with $k-1>\ell>1$ and $\ell\leq n-k$ then
\[|E(G)|\geq \binom{k}{2}+\frac{(n-k+1)\ell}{2}\]
and this bound is sharp.
\end{theorem}
\begin{theorem} \label{thm-minedges2}
If $G$ is a $k$-chromatic $\ell$-connected graph with $k-1>\ell>1$ and $\ell>n-k$ then
\[
|E(G)| \geq \binom{k}{2} + \binom{n-k}{2} + (n-k)(\ell-(n-k-1))
\]
and this bound is sharp.
\end{theorem}
In the rest of the paper we present the proofs of our results. We prove Theorem \ref{thm-asympresult} in Section \ref{sec-asympresult}, and we consider 2-connected, 3-chromatic graphs in Section \ref{sec-2con3chrom} where we also prove Theorem \ref{thm-bigt}. Then we consider maximizing independent sets of size $t=2$ in Section \ref{sec:t=2}. Finally, in Section \ref{sec-conclusion}, we highlight some open questions related to the results in this paper.
\section{Proof of Theorem \ref{thm-asympresult}} \label{sec-asympresult}
In this section we will prove Theorem \ref{thm-asympresult}, and to do so we will use the following results from \cite{EngbersErey}.
\begin{theorem}[\cite{EngbersErey}]\label{thm-EngbersEreyIndSets}
Let $G$ be an $n$-vertex $k$-chromatic graph. Then
\[
i(G) \leq i(K_{k} \cup E_{n-k}) = (k+1)2^{n-k}
\]
with equality if and only if $G = K_k \cup E_{n-k}$.
\end{theorem}
\begin{theorem}[\cite{EngbersErey}]\label{thm-EngbersEreyComponents}
Let $G$ be an $n$-vertex $k$-chromatic graph with $d$ components. Then
\[
i(G) \leq k2^{n-k} + 2^{d-1}
\]
with equality if and only if $G = (K_1 \vee (K_{k-1} \cup E_{n-k-d+1}))\cup E_{d-1}$.
\end{theorem}
We now move on to the proof.
\begin{proof}[Proof of Theorem \ref{thm-asympresult}]
Suppose that $G$ is an $n$-vertex $k$-chromatic graph with minimum degree at least $\ell$ which satisfies $i(G) \geq i(G^*)$. We investigate the structure of the graph $G$. First note that $i(G^*) > 2^{n-k-\ell}$ holds for each $k$ and $\ell$.
\medskip
\textbf{Step 1:} {\em Show that $G$ cannot contain a large matching.}
Consider a maximum matching $M$ in $G$, and let $|M|$ denote the size of this maximum matching. In any independent set, at most one endpoint of each edge in $M$ is in the independent set, giving three possibilities across each edge in $M$. Therefore, by only considering the restrictions on the edges in $M$, we have
\[
i(G) \leq 3^{|M|} 2^{n-2|M|} = \left( \frac{3}{4} \right)^{|M|} 2^n.
\]
If $|M| > 3k+3\ell$, then $\left( \frac{3}{4}\right)^{|M|} < \left(\frac{3}{4}\right)^{3k+3\ell} =\left( \frac{27}{64}\right)^{k+\ell}< \left(\frac{1}{2}\right)^{k+\ell}$ and so $i(G) < 2^{n-k-\ell}$, which contradicts the assumption that $i(G) \geq i(G^*)$. Therefore, we know that the maximum size of a matching $M$ in $G$ is at most $3k+3\ell$.
\medskip
\textbf{Step 2:} {\em Show that there is a constant $c = c(k,\ell)$ so that $G$ contains $K_{\ell,cn}$ as a subgraph.}
Let $M$ be a maximum matching. By Step 1, there are $2|M| \leq 6(k+\ell)$ vertices that are endpoints in $M$; call this set of vertices $J$. The set $I=V(G) \setminus J$ has size $|I| \geq n-6(k+\ell)$ and, by maximality of $M$, must form an independent set. Since $G$ has minimum degree at least $\ell$, each vertex in $I$ must have at least $\ell$ neighbors in $J$.
The pigeonhole principle then produces some set $L$ of size $\ell$, with $L \subseteq J$, having at least $(n-|J|) / \binom{|J|}{\ell} \geq cn$ common neighbors in $I$ for some constant $c = c(k,\ell)>0$. Using that $n-|J| \geq n/2$ when $n \geq 12(k+\ell)$ and $\binom{|J|}{\ell} \leq \binom{6(k+\ell)}{\ell}$, we see that we can take $c = 1/(2\binom{6(k+\ell)}{\ell})$.
This shows that $G$ contains a (not necessarily induced) subgraph $K_{\ell,cn}$.
\medskip
\textbf{Step 3:} {\em Estimate the number of independent sets in $G$ that include a vertex from $L$.}
There are at most $2^{\ell}$ ways to include at least one vertex from $L$.
Then none of the at least $cn$ common neighbors of $L$ can be in the independent set,
so this gives an upper bound of
\begin{equation}\label{eqn-vertexinL}
2^{\ell} \cdot 2^{n-cn-\ell} = \left( \frac{1}{2} \right)^{cn} 2^n
\end{equation}
independent sets that contain some vertex from $L$.
\medskip
\textbf{Step 4:} {\em Find an upper bound on the number of independent sets in $G$. }
We have a bound from those independent sets that contain a vertex from $L$ above in \eqref{eqn-vertexinL}. Those that do not contain a vertex from $L$ correspond to the independent sets in $G'$, the graph obtained by deleting $L$. Note that $|V(G')| = n-\ell$. The chromatic number of $G'$ must be at most $k$, and so if the chromatic number is $m \leq k$ then by Theorem \ref{thm-EngbersEreyIndSets} we have at most $(m+1)2^{n-\ell-m}$ independent sets of this type, with equality if and only if $G - L$ is a complete graph on $m$ vertices with $n-\ell-m$ isolated vertices. To compare these maximal values for various $m$, note that for $m>1$ we have
\begin{equation}\label{eqn-compare}
(m+1)2^{n-\ell-m} = \frac{m+1}{2}2^{n-\ell-m+1} < m2^{n-\ell-m+1} = ((m-1)+1) 2^{n-\ell-(m-1)}.
\end{equation}
We now look at the two cases depending on the values of $k$ and $\ell$.
\bigskip
\textbf{Case 1 ($k > \ell$):} Suppose that $k>\ell$. We first argue that the chromatic number of $G'$ must be $k-\ell$; note that it is at least $k-\ell$, as deleting the $\ell$ vertices of $L$ lowers the chromatic number by at most $\ell$. If it is not exactly $k-\ell$, then $G'$ has chromatic number at least $k-\ell+1$, and so by \eqref{eqn-compare} there are at most $(k-\ell+2)2^{n-k-1}$ independent sets of this type. Combining this with \eqref{eqn-vertexinL} gives
\[
i(G) \leq (k-\ell+2)2^{n-k-1} + \left( \frac{1}{2} \right)^{cn} 2^n = \left( \frac{k-\ell+2}{2} + \left( \frac{1}{2} \right)^{cn}2^k \right) 2^{n-k}.
\]
For $n>(k+1)/c$ (recalling also that $\ell \geq 1$), we have $i(G) < (k-\ell+1)2^{n-k}+\ell$, which is a contradiction.
We now know that the chromatic number of $G'$ is $k-\ell$, and we next aim to show that $G'$ has many components. By Theorem \ref{thm-EngbersEreyComponents}, an $(n-\ell)$-vertex $(k-\ell)$-chromatic graph with exactly $d$ components has at most $(k-\ell)2^{(n-\ell)-(k-\ell)}+2^{d-1}$ independent sets; note that this bound is increasing in $d$.
Therefore, if $G'$ has at most $n-k$ components, we have
\[
i(G) \leq (k-\ell) 2^{n-\ell - (k-\ell)} + 2^{n-k-1}+\left( \frac{1}{2} \right)^{cn} 2^n = (k-\ell)2^{n-k} + \left( \frac{1}{2} + \left( \frac{1}{2} \right)^{cn}2^k \right) 2^{n-k}.
\]
For $n > (k+1)/c$, we have $i(G) < (k-\ell+1)2^{n-k}+\ell$, which is a contradiction.
Therefore, we know:
\begin{itemize}
\item the chromatic number of $G'$ is $k-\ell$;
\item $G'$ has $n-\ell$ vertices; and
\item $G'$ has at least $n-k+1$ components.
\end{itemize}
This is only possible if $G'$ has exactly $n-\ell - (k-\ell)+1 = n-k+1$ components, one component is $K_{k-\ell}$, and the rest are isolated vertices. Therefore $G'$ must be the graph $K_{k-\ell} \cup E_{n-k}$.
Now, recall that $G$ has minimum degree at least $\ell$ and chromatic number $k$. The minimum degree condition forces each vertex in $L$ to be adjacent to each isolated vertex in $G'$.
If some vertex in $L$ is not adjacent to some vertex in the complete component of $G'$, then those two vertices can be assigned the same color in a proper coloring, and this can easily be extended to a $(k-1)$-coloring of the vertices of $G$, which contradicts the assumption that $G$ is $k$-chromatic. Therefore the vertices in $L$ must form a dominating set in $G$. Furthermore, they must be pairwise adjacent, or a similar argument shows that $G$ can be properly colored with at most $k-1$ colors. Therefore $G$ must be the graph $G^*$.
\medskip
\textbf{Case 2 ($k \leq \ell$):} Suppose now that $k \leq \ell$. Recall that $G'$ is the graph obtained from $G$ by deleting $L$, and that in this case $i(G^*)= 2^{n-\ell} + k2^{\ell-k+1} -1$. First suppose that $G'$ contains some edge $e$. At most one of the endpoints of $e$ can be in an independent set, and so this combined with \eqref{eqn-vertexinL} gives
\[
i(G) \leq 3\cdot 2^{n-\ell-2} + \left( \frac{1}{2} \right)^{cn} 2^n = \left(\frac{3}{4} + \left( \frac{1}{2} \right)^{cn} 2^{\ell} \right) 2^{n-\ell}<2^{n-\ell}
\]
where the strict inequality holds for $n>(\ell+2)/c$. This is a contradiction to the assumption on $G$.
Therefore $G'$ must be the empty graph. As $G$ has minimum degree $\ell$, this means that each vertex in $G'$ is adjacent to each vertex in $L$. Since $G$ is $k$-chromatic, the induced graph on $L$ must be $(k-1)$-chromatic.
Since all edges are present between $G'$ and $L$, we have
\[
i(G) = i(L) + 2^{n-\ell}-1.
\]
Now, $L$ has $\ell$ vertices and chromatic number $k-1$, and so we know from Theorem \ref{thm-EngbersEreyIndSets} that it has at most $i(K_{k-1} \cup E_{\ell-k+1}) = k2^{\ell-k+1}$ independent sets, with equality if and only if $L=K_{k-1}\cup E_{\ell-k+1}$. Therefore
\[
i(G) = i(L) + 2^{n-\ell}-1 \leq k2^{\ell-k+1} + 2^{n-\ell} -1,
\]
with equality if and only if $L=K_{k-1}\cup E_{\ell-k+1}$, which implies that $G=G^*$.
\end{proof}
\section{$2$-connected $3$-chromatic graphs and the Proof of Theorem \ref{thm-bigt}} \label{sec-2con3chrom}
In this section we first completely classify the $2$-connected $3$-chromatic graphs that maximize the total count of independent sets and the total count of independent sets of each (non-trivial) fixed size.
\subsection{Total count of independent sets}
We first show that the result for the total number of independent sets is essentially a corollary to a result from \cite{HuaZhang}. There it is proved that if $G$ is a $2$-connected graph with $n \geq 4$, then $i(G) \leq 2^{n-2}+3$ with equality if and only if $G$ is $K_{2,n-2}$ or $C_5$.
Since $C_5$ is $3$-chromatic and $K_{2,n-2}$ is not, it follows that for $n=5$ the graph $C_5$ is the $2$-connected $3$-chromatic graph with the maximum number of independent sets.
For $n \neq 5$, note that $K_2 \vee E_{n-2}$ is $3$-chromatic and
\[
i(K_{2,n-2}) = i(K_2 \vee E_{n-2})+1,
\]
so for $n\geq 4$ and $n \neq 5$ the characterization of equality implies that $K_2 \vee E_{n-2}$ attains (though possibly not uniquely) the maximum number of independent sets.
\subsection{Size $t$ independent sets}
Now we move to independent sets of size $t$.
For $t \geq 3$, we have the following results for graphs with minimum degree at least $2$ and, more generally, for graphs with fixed minimum degree $\delta$.
\begin{theorem}[\cite{EG}]\label{thm-EG}
Let $n \geq 4$. For every $t \geq 3$, every $n$-vertex graph $G$ with minimum degree at least $2$ satisfies
\[
i_t(G) \leq i_t(K_{2,n-2}).
\]
For $n \geq 5$ and $3 \leq t \leq n-2$ we have equality if and only if $G$ is $H \vee E_{n-2}$, where $H$ is any graph on two vertices.
\end{theorem}
\begin{theorem}[\cite{GLS}]\label{thm-GLS}
Let $n \geq 2\delta$. For every $t \geq 3$, every $n$-vertex graph $G$ with minimum degree at least $\delta$ satisfies
\[
i_t(G) \leq i_t(K_{\delta,n-\delta})
\]
and when $3 \leq t \leq \delta$, $K_{\delta,n-\delta}$ is the unique extremal graph.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm-2con3chrom}]
First, we analyze the case when $t=2$; here to maximize $i_2(G)$ we want to minimize $|E(G)|$.
A $2$-connected graph $G$ has minimum degree at least $2$, and so
\[
|E(G)| = \frac{1}{2}\sum_v d(v) \geq n,
\]
with equality if and only if $G$ is $2$-regular, which implies that
\[
i_2(G) \leq \binom{n}{2} - n =
i_2(C_n).
\]
The equality characterization for $n$ odd follows readily as $C_n$ is the only $2$-connected $2$-regular graph.
When $n$ is even and $G$ is 2-connected, we still have the bound $|E(G)| \geq n$, with equality if and only if $G$ is $2$-regular, which for a $2$-connected graph holds if and only if $G=C_n$. Since in this case $C_n$ is not $3$-chromatic, this shows that
\[
|E(G)| \geq n+1,
\]
which proves the inequality as for an $n$-vertex theta graph $\theta_{a,b,c}$ we have $|E(\theta_{a,b,c})|=n+1$. We now work on the cases for equality.
Using that $G$ is not $2$-regular, by the handshaking lemma there must be at least two vertices of degree at least $3$, or one vertex of degree at least $4$.
A single vertex of degree $4$ with the remaining vertices having degree $2$ is not possible in a $2$-connected $G$, as the deletion of the degree $4$ vertex disconnects the graph.
So now suppose $G$ has degree $3$ vertices $v$ and $w$, and the remaining vertices of $G$ have degree $2$. If two of the edges out of $v$ are on a cycle that misses $w$, then again the deletion of $v$ disconnects the graph, which contradicts that $G$ is $2$-connected. So each edge out of $v$ must be on some path that ends at $w$, which implies that $G$ is a theta graph.
Given that $G$ is a theta graph $\theta_{a,b,c}$, we need the conditions on $a$, $b$, and $c$ so that $G$ is $3$-chromatic. By coloring $v$ and $w$ with different colors and then the paths between them, we see that the chromatic number is $2$ when $a$, $b$, and $c$ are all odd. So at least one of $a$, $b$, and $c$ must be even, and by parity considerations not all of $a$, $b$, and $c$ are even, so one parameter is even and another is odd. These two paths form an odd cycle in the graph, which shows that $\theta_{a,b,c}$ is indeed $3$-chromatic. This implies the characterization of equality.
Now we consider $t \geq 3$. When $n \leq 4$ there are no $n$-vertex $3$-chromatic graphs that have an independent set of size $t\geq 3$.
Since $G$ has minimum degree at least $2$, Theorem \ref{thm-EG} implies that for every $n \geq 5$ and $t \geq 3$, we have
\[
i_t(G) \leq i_t(K_{2,n-2}),
\]
with equality if and only if $G$ is $K_{2,n-2}$ or $K_2 \vee E_{n-2}$. Recalling that $K_{2,n-2}$ is bipartite, this proves the result and the characterization of equality for $n \geq 5$.
\end{proof}
\subsection{Proof of Theorem \ref{thm-bigt}} Suppose that $n \geq 2\ell$. By Theorem \ref{thm-GLS}, $K_{\ell,n-\ell}$ is an $\ell$-connected graph that has the maximum number of independent sets of size $t \geq 3$. When $t\geq \ell+1$, these independent sets consist of $t$ vertices from the partition class of size $n-\ell$. So if $k \leq \ell$, then $i_t(G^*) = i_t(K_{\ell,n-\ell})$, and therefore $i_t(G) \leq i_t(G^*)$ for all $k$-chromatic $\ell$-connected graphs $G$, where $k \leq \ell$, $t \geq \ell+1$, and $n \geq 2\ell$.
When $t = \ell \geq 3$, Theorem \ref{thm-GLS} gives that $K_{\ell,n-\ell}$ is the \emph{unique} $\ell$-connected graph with the maximum number of independent sets of size $t$. Since for $t=\ell \geq 3$ we have $i_t(G^*) = i_t(K_{\ell,n-\ell})-1$, and since a $k$-chromatic $\ell$-connected $G$ (having $k \geq 3$) cannot equal the bipartite graph $K_{\ell,n-\ell}$ and so satisfies $i_t(G)<i_t(K_{\ell,n-\ell})$, this shows that $i_t(G) \leq i_t(G^*)$.
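The two facts used here, $i_t(G^*) = i_t(K_{\ell,n-\ell})-1$ at $t=\ell$ and equality for $t>\ell$, can be confirmed by brute force for small parameters; a purely illustrative Python sketch with $(n,k,\ell)=(7,3,3)$:
\begin{verbatim}
from itertools import combinations

def i_t(n, edges, t):
    # Number of independent sets of size t, by brute force.
    E = set(map(frozenset, edges))
    return sum(all(frozenset(p) not in E for p in combinations(S, 2))
               for S in combinations(range(n), t))

n, k, l = 7, 3, 3
join = [(u, w) for u in range(l) for w in range(l, n)]  # K_{l, n-l}
g_star = join + list(combinations(range(k - 1), 2))     # adds K_{k-1} on one side
assert i_t(n, g_star, l) == i_t(n, join, l) - 1         # t = l: off by one
for t in range(l + 1, n - l + 1):                       # t > l: equal counts
    assert i_t(n, g_star, t) == i_t(n, join, t)
\end{verbatim}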
\section{The Case of $t=2$: Minimizing Edges}\label{sec:t=2}
In this section, we consider maximizing the number of independent sets of size $t=2$ in $k$-chromatic $\ell$-connected graphs. As previously mentioned, this problem is equivalent to minimizing the number of edges in such graphs, so our results and proofs are stated as such.
\subsection{Proof of Theorem \ref{thm-minedges1}}
We start by constructing a graph that achieves the bound. We make use of the $n$-vertex, $\ell$-connected Harary graph, which we denote by $H_{n, \ell}$. Recall that to construct $H_{n, \ell}$, place $n$ vertices $w_1$, $w_2$, \ldots, $w_n$ in order around a circle and join each vertex to the $\lfloor \frac{\ell}{2}\rfloor$ vertices closest to it in either direction. In the event that $\ell$ is odd, then also join each vertex to the vertex directly opposite (or as opposite as possible when $n$ is odd). The Harary graph $H_{n, \ell}$ has $\lceil n\ell /2 \rceil$ edges, which is the minimum number of edges over all graphs with the same number of vertices and connectivity \cite{Harary}.
Consider the disjoint union of the complete graph $K_k$ and a Harary graph $H_{n-k, \ell}$. Since we are assuming $\ell < k-1$ and $\ell \le n-k$, we can choose $\ell$ vertices $v_1$, $v_2$, \ldots, $v_\ell$ from $K_k$ and $\ell$ consecutive vertices $w_1$, $w_2$, \ldots, $w_\ell$ along the circle in $H_{n-k, \ell}$ and connect these vertices via a matching $v_iw_i$. Now, starting with a terminal edge $w_1w_2$ of the $\ell$-vertex path in $H_{n-k, \ell}$, we remove every other edge of the path. In the case where $\ell$ and $n-k$ are both odd, we use the one vertex of higher degree from $H_{n-k, \ell}$ in our $\ell$-vertex path and delete the edges on both sides of it. Call this graph $G^*$ and let $H'_{n-k,\ell}$ denote the subgraph induced by the vertices of $H_{n-k,\ell}$. See Figure \ref{fig-}. The graph $G^*$ has \[{k \choose 2} + \left\lceil\frac{(n-k)\ell}{2}\right\rceil + \ell -\lfloor\ell /2\rfloor = \binom{k}{2} + \left\lceil \frac{(n-k+1)\ell}{2}\right\rceil \] edges and we claim this graph is $k$-chromatic and $\ell$-connected.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw (-2.25,0) ellipse (1.5cm and 3.5cm);
\draw (-2.25,-4) node {$K_k$};
\node (v1) at (-2,2.5) [circle,draw,fill,scale=.3] {};
\node at (-2.4,2.5) {$v_1$};
\node (v2) at (-2,2) [circle,draw,fill,scale=.3] {};
\node at (-2.4,2) {$v_2$};
\node (v3) at (-2,1.5) [circle,draw,fill,scale=.3] {};
\node at (-2.4,1.5) {$v_3$};
\node (v4) at (-2,1) [circle,draw,fill,scale=.3] {};
\node at (-2.4,1) {$v_4$};
\node (v5) at (-2,0.5) [circle,draw,fill,scale=.3] {};
\node at (-2.4,0.5) {$v_5$};
\node (vdots1) at (-2,0) {$\vdots$};
\node (vellminus1) at (-2,-0.5) [circle,draw,fill,scale=.3] {};
\node at (-2.4,-0.5) {$v_{\ell-1}$};
\node (vell) at (-2,-1) [circle,draw,fill,scale=.3] {};
\node at (-2.4,-1) {$v_{\ell}$};
\node (vellplus1) at (-2,-1.5) [circle,draw,fill,scale=.3] {};
\node at (-2.4,-1.5) {$v_{\ell+1}$};
\node (vdots2) at (-2,-2) {$\vdots$};
\node (vk) at (-2,-2.5) [circle,draw,fill,scale=.3] {};
\node at (-2.4,-2.5) {$v_{k}$};
\draw (2.25,0) ellipse (1.75cm and 3cm);
\draw (2.25,-4) node {$H'_{n-k,\ell}$};
\node (w1) at (2,2.5) [circle,draw,fill,scale=.3] {};
\node at (2.4,2.5) {$w_1$};
\node (w2) at (2,2) [circle,draw,fill,scale=.3] {};
\node at (2.4,2) {$w_2$};
\node (w3) at (2,1.5) [circle,draw,fill,scale=.3] {};
\node at (2.4,1.5) {$w_3$};
\node (w4) at (2,1) [circle,draw,fill,scale=.3] {};
\node at (2.4,1) {$w_4$};
\node (w5) at (2,0.5) [circle,draw,fill,scale=.3] {};
\node at (2.4,0.5) {$w_5$};
\node (wdots1) at (2,0) {$\vdots$};
\node (wellminus1) at (2,-0.5) [circle,draw,fill,scale=.3] {};
\node at (2.5,-0.5) {$w_{\ell-1}$};
\node (well) at (2,-1) [circle,draw,fill,scale=.3] {};
\node at (2.5,-1) {$w_{\ell}$};
\node (wellplus1) at (2,-1.5) [circle,draw,fill,scale=.3] {};
\node at (2.5,-1.5) {$w_{\ell+1}$};
\node (wdots2) at (2,-2) {$\vdots$};
\node (wnminusk) at (2,-2.5) [circle,draw,fill,scale=.3] {};
\node at (2.5,-2.5) {$w_{n-k}$};
\foreach \from/\to in {v1/w1,v2/w2,v3/w3,v4/w4,v5/w5,vellminus1/wellminus1, vell/well, w2/w3, w4/w5, well/wellplus1}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{The construction for $G^*$ when $\ell < k-1$ and $\ell \le n-k$.}
\label{fig-}
\end{figure}
Now $G^*$ is $k$-chromatic since the subgraph $K_k$ requires $k$ colors.
We next show the graph is $\ell$-connected by exhibiting $\ell$ disjoint paths between any two vertices $v$ and $w$. This is immediate if $v$ and $w$ both belong to the subgraph $K_k$, since $k>\ell$.
Suppose $v$ and $w$ both belong to $H'_{n-k,\ell}$. By construction, the Harary graph $H_{n-k,\ell}$ is $\ell$-connected, so there are $\ell$ disjoint paths between any two vertices. We extend these disjoint paths to $H'_{n-k,\ell}$ over deleted edges, $w_iw_{i+1}$, by instead using edges $w_iv_i$, $v_iv_{i+1}$, and $v_{i+1}w_{i+1}$ in $G^*$ (with the obvious modification if $\ell$ and $n-k$ are odd and the degree $\ell+1$ vertex is internal on one of the paths). This covers all cases except when $v$ itself has degree $\ell+1$, in which case $v=w_j$ for some $j$. But in this case since
$v$ has degree $\ell+1$, we can find $\ell$ disjoint paths between $v$ and any other vertex that exclude one of the edges $w_{j-1}v$ or $vw_{j+1}$; these $\ell$ disjoint paths can be extended to $H'_{n-k,\ell}$ as above.
Lastly, suppose $v$ belongs to the $K_k$ subgraph and $w$ belongs to the $H'_{n-k,\ell}$ subgraph. From $v$ there are $\ell$ disjoint paths to $H'_{n-k,\ell}$, each ending at one of $w_1$, $w_2$, \ldots, $w_\ell$. Since $H_{n-k,\ell}$ is $\ell$-connected, there exist $\ell$ disjoint paths from the subset of vertices $w_1$, $w_2$, \ldots, $w_\ell$ to the vertex $w$, and the same $\ell$ paths exist in $H'_{n-k,\ell}$. Therefore in all cases there are $\ell$ disjoint paths between the vertices $v$ and $w$, so $G^*$ is $\ell$-connected; since deleting the matching endpoints $v_1,\ldots,v_\ell$ in $K_k$ disconnects the graph, the connectivity of $G^*$ is in fact exactly $\ell$.
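For small parameters the claimed properties of this construction can also be verified mechanically. The following brute-force Python sketch (illustrative only, and no substitute for the argument above) checks the edge count, the connectivity and the chromatic number of $G^*$ for $(n,k,\ell)=(8,4,2)$, where $H_{4,2}=C_4$.
\begin{verbatim}
from itertools import combinations, product

def is_connected(verts, adj):
    verts = set(verts)
    stack, seen = [min(verts)], {min(verts)}
    while stack:                       # depth-first search within verts
        for v in adj[stack.pop()] & (verts - seen):
            seen.add(v); stack.append(v)
    return seen == verts

def chromatic_number(n, edges):
    # Smallest k admitting a proper k-coloring, by exhaustive search.
    for k in range(1, n + 1):
        if any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n)):
            return k

# G^* for (n, k, l) = (8, 4, 2): K_4 on {0,1,2,3}, C_4 on {4,5,6,7},
# matching edges 0-4 and 1-5, deleted path edge 4-5.
edges = (list(combinations(range(4), 2))
         + [(5, 6), (6, 7), (7, 4)] + [(0, 4), (1, 5)])
adj = {u: set() for u in range(8)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

assert len(edges) == 6 + (8 - 4 + 1) * 2 // 2  # binom(k,2) + (n-k+1)l/2
assert all(is_connected(set(range(8)) - set(C), adj)  # no cut of size < 2
           for s in range(2) for C in combinations(range(8), s))
assert chromatic_number(8, edges) == 4
\end{verbatim}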
\begin{proof}[Proof of Theorem \ref{thm-minedges1}]
Sharpness follows from the graph $G^*$. To prove the lower bound, let $G$ be a $k$-chromatic $\ell$-connected graph. We will consider two cases: if $G$ is $k$-critical and otherwise.
If $G$ is $k$-critical, then all vertices have degree at least $k-1$, so $G$ has at least $\frac{n(k-1)}{2}$ edges. Then the difference in the number of edges between $G$ and $G^*$ is at least
\begin{align*}
\frac{n(k-1)}{2}-\binom{k}{2} - \frac{(n-k+1)\ell}{2} &=\frac{1}{2}\left(n(k-1)-k(k-1)-(n-k+1)\ell\right)\\
&=\frac{1}{2}((k-1)(n-k)-(n-k)\ell-\ell)\\
&=\frac{1}{2}((n-k)(k-1-\ell) -\ell)
\end{align*}
Since $k-1>\ell$, we have $k-1-\ell\geq 1$, and combining this with $(n-k)\geq \ell$ gives $(n-k)(k-1-\ell)\geq \ell$. Thus,
\[\frac{1}{2}((n-k)(k-1-\ell) -\ell)\geq 0\]
and so the bound is correct in this case.
Suppose $G$ is not $k$-critical. Then $G$ has an (induced) $k$-critical subgraph. This subgraph, $H$, has at least $k$ vertices and minimum degree at least $k-1$. Say $H$ has $k\leq x \leq n-1$ vertices. Then $H$ has at least $\frac{x(k-1)}{2}$ edges. Consider the vertices in $V(G)\setminus V(H)$. Since $G$ is $\ell$-connected, there must be at least $\ell$ disjoint paths between $V(H)$ and $V(G)\setminus V(H)$. This requires at least $\ell$ edges; assume there are $p \geq \ell$ edges with one endpoint in $V(H)$ and the other endpoint in $V(G) \setminus V(H)$. Moreover, every vertex in $V(G)\setminus V(H)$ has to have minimum degree at least $\ell$, since $G$ is $\ell$-connected. This requires a minimum of an additional
\[ \dfrac{(n-x)\ell-p}{2}\]
edges with both endpoints in $V(G)\setminus V(H)$.
In total, $G$ must have at least
\[ \frac{x(k-1)}{2} + p + \dfrac{(n-x)\ell-p}{2} \geq \frac{x(k-1)}{2} + \ell + \dfrac{(n-x)\ell-\ell}{2}\]
edges. This bound is linear in $x$ with positive slope $\frac{k-1-\ell}{2}$, and so is minimized by the minimum value of $x$. Since $x\ge k$, we get
\[ \frac{x(k-1)}{2} + \ell + \dfrac{(n-x)\ell-\ell}{2} \ge \frac{k(k-1)}{2} + \ell + \dfrac{(n-k)\ell-\ell}{2}\]
proving the claimed bound in this case. This finishes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm-minedges2}}
We again start by constructing a graph that achieves the bound. Consider the disjoint union of the complete graph $K_k$ and the complete graph $K_{n-k}$. Fix $\ell$ vertices $v_{1},v_{2},\ldots,v_{\ell} \in V(K_k)$, and label the vertices in $K_{n-k}$ by $w_1,\ldots,w_{n-k}$. For a fixed $i$, add the $\ell-(n-k-1)$ edges joining $w_i$ and $v_j$ for each $j$ satisfying $i \leq j \leq i+(\ell-n+k)$.
Call this graph $G^*$, and note that $G^*$ has $\binom{k}{2} + \binom{n-k}{2} + (n-k)(\ell-(n-k-1))$ edges. See Figure \ref{fig-G*}.
We first claim that $G^*$ is $k$-chromatic: it requires $k$ colors on $K_k$, and each vertex $w_i$ can be colored with the color on $v_{i-1}$ (where we consider $v_0$ to be the vertex $v_{\ell}$). This yields a proper $k$-coloring of $G^*$.
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
\draw (-2,0) ellipse (1.5cm and 3.5cm);
\draw (2,0) ellipse (1.7cm and 2cm);
\coordinate(v1) at (-2,2.5);
\coordinate(v2) at (-2,2);
\coordinate(v3) at (-2,1.5);
\coordinate(v4) at (-2,1);
\coordinate(vdots) at (-2,.5);
\coordinate(vellminus1) at (-2,0);
\coordinate(vell) at (-2,-.5);
\coordinate(vellplus1) at (-2,-1);
\coordinate(velldots) at (-2,-1.5);
\coordinate(vkminus1) at (-2,-2.5);
\coordinate(vk) at (-2,-3);
\coordinate(w1) at (2,1.5);
\coordinate(w2) at (2,1);
\coordinate(w3) at (2,.5);
\coordinate(wdots) at (2,0);
\coordinate(wnminuskminus1) at (2,-.5);
\coordinate(wnminusk) at (2,-1);
\fill (v1) circle (2pt);
\draw (-2.25,2.5) node {$v_1$};
\fill (v2) circle (2pt);
\draw (-2.25,2) node {$v_2$};
\fill (v3) circle (2pt);
\draw (-2.25,1.5) node {$v_3$};
\fill (v4) circle (2pt);
\draw (-2.25,1) node {$v_4$};
\draw (vdots) node {$\vdots$};
\fill (vellminus1) circle (2pt);
\draw (-2.5,0) node {$v_{\ell-1}$};
\fill (vell) circle (2pt);
\draw (-2.25,-.5) node {$v_{\ell}$};
\fill (vellplus1) circle (2pt);
\draw (-2.5,-1) node {$v_{\ell+1}$};
\draw (velldots) node {$\vdots$};
\fill (vkminus1) circle (2pt);
\draw (-2.5,-2.5) node {$v_{k-1}$};
\fill (vk) circle (2pt);
\draw (-2.5,-3) node {$v_{k}$};
\fill (w1) circle (2pt);
\draw (2.5,1.5) node {$w_{1}$};
\fill (w2) circle (2pt);
\draw (2.5,1) node {$w_{2}$};
\fill (w3) circle (2pt);
\draw (2.5,.5) node {$w_{3}$};
\draw (wdots) node {$\vdots$};
\fill (wnminuskminus1) circle (2pt);
\draw (3,-.5) node {$w_{n-k-1}$};
\fill (wnminusk) circle (2pt);
\draw (3,-1) node {$w_{n-k}$};
\tikzstyle{EdgeStyle}=[-,ultra thin]
\Edge(v1)(w1);
\Edge(v2)(w1);
\Edge(v3)(w1);
\Edge(v2)(w2);
\Edge(v3)(w2);
\Edge(v4)(w2);
\Edge(v3)(w3);
\Edge(v4)(w3);
\Edge(wnminuskminus1)(vellminus1);
\Edge(wnminusk)(vellminus1);
\Edge(wnminusk)(vell);
\tikzstyle{EdgeStyle}=[loosely dotted,ultra thin]
\Edge(-1.5,.65)(w3);
\Edge(-1.5,.1)(wnminuskminus1);
\Edge(-1.5,.3)(wnminuskminus1);
\Edge(-1.5,.1)(wnminusk);
\draw (-2,-4) node {$K_k$};
\draw (2, -3) node {$K_{n-k}$};
\end{tikzpicture}
\end{center}
\caption{The graph $G^*$ when $\ell-(n-k-1)=3$; note that here $w_2$ is adjacent to $v_2$, $v_3$, and $v_4$.}
\label{fig-G*}
\end{figure}
Next, we claim that $G^*$ is $\ell$-connected; note that removing $v_{1},\ldots,v_{\ell}$ disconnects the graph (as $\ell<k$), so the connectivity is at most $\ell$. We claim that any two vertices $v$ and $w$ are connected by $\ell$ disjoint paths. This is clear if $v$ and $w$ are both in $K_k$, since $\ell<k-1$. It is also clear if $v$ and $w$ are both in $K_{n-k}$, as there are $n-k-1$ paths within $K_{n-k}$, and the $\ell-(n-k-1)$ edges to $K_k$ from each vertex lead to $\ell-(n-k-1)$ further disjoint paths through $K_k$. So assume $v$ is some vertex in $K_k$ and, without loss of generality, $w=w_1$. We know that $w$ has neighbors $v_i$ for $1\leq i \leq \ell-n+k+1$; those have disjoint paths (with $0$ or $1$ edges) to $v$. Furthermore, the neighbors $w_m$, $m >1$, can use their edge to $v_{m+\ell-n+k}$ and then the edge $v_{m+\ell-n+k}v$ to produce the remaining disjoint paths from $w$ to $v$.
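As with the previous construction, these claims can be checked by brute force for small parameters; an illustrative Python sketch for $(n,k,\ell)=(7,5,3)$, where $\ell-(n-k-1)=2$:
\begin{verbatim}
from itertools import combinations, product
from math import comb

def is_connected(verts, adj):
    verts = set(verts)
    stack, seen = [min(verts)], {min(verts)}
    while stack:                       # depth-first search within verts
        for v in adj[stack.pop()] & (verts - seen):
            seen.add(v); stack.append(v)
    return seen == verts

# G^* for (n, k, l) = (7, 5, 3): K_5 on {0,...,4}, K_2 on {5, 6};
# w_1 = 5 is joined to v_1, v_2 = 0, 1 and w_2 = 6 to v_2, v_3 = 1, 2.
n, k, l = 7, 5, 3
edges = (list(combinations(range(5), 2)) + [(5, 6)]
         + [(5, 0), (5, 1), (6, 1), (6, 2)])
adj = {u: set() for u in range(n)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

assert len(edges) == comb(k, 2) + comb(n - k, 2) + (n - k) * (l - (n - k - 1))
assert all(is_connected(set(range(n)) - set(C), adj)  # no cut of size < 3
           for s in range(l) for C in combinations(range(n), s))
# K_5 forces at least 5 colors, and indeed no proper 4-coloring exists ...
assert not any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(4), repeat=n))
# ... while coloring w_1 like v_5 and w_2 like v_1 gives a proper 5-coloring.
col = [0, 1, 2, 3, 4, 4, 0]
assert all(col[u] != col[v] for u, v in edges)
\end{verbatim}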
\begin{proof}[Proof of Theorem \ref{thm-minedges2}]
Note that we must have $k \geq 4$, since $k \leq 3$ implies $\ell<2$, which contradicts the assumption of $\ell>1$.
Also note that the inequalities together imply that $n<2k-1$. Furthermore, if $n=k$ then $G=K_k$ and the claimed bound reduces to $\binom{k}{2}=|E(K_k)|$, which holds with equality. Therefore we can assume $n>k$.
Sharpness comes from the graph $G^*$. Suppose $G$ is a $k$-chromatic $\ell$-connected graph with $\ell>n-k$ and $k-1>\ell>1$. We use a result from Gallai \cite{Gallai}, which says that if $k \geq 4$ and $k+2 \leq n \leq 2k-1$, a $k$-critical graph on $n$ vertices satisfies
\[
|E(G)| \geq \frac{1}{2}\left( n(k-1) + (n-k)(2k-n)-2\right);
\]
there is a characterization of equality given in that paper as well. It is noted in various places that no $k$-critical graph has $n=k+1$ vertices.
We consider two cases: $G$ is $k$-critical and otherwise.
If $G$ is $k$-critical, then since $k\geq 4$ and our ranges of $n$, $k$, and $\ell$ imply that $n<2k-1$, the result from Gallai \cite{Gallai} gives at least $\frac{1}{2}( n(k-1) + (n-k)(2k-n)-2)$ edges. Then the difference between $|E(G)|$ and $|E(G^*)|$ is at least
\begin{align*}
\frac{1}{2}&\left( n(k-1) + (n-k)(2k-n)-2\right) - \binom{k}{2} - \binom{n-k}{2} - (n-k)(\ell-(n-k-1))\\
&=\frac{n-k}{2}\left( (k-1) + (2k-n)-(n-k-1)-2\ell+2(n-k-1) \right) - 1\\
&= \frac{n-k}{2}\left( (k-1) + (2k-n)+(n-k-1)-2\ell \right) -1\\
&= \frac{n-k}{2}\left( (k-1) -\ell + (k-1)-\ell \right) -1\\
&= (n-k)(k-1-\ell) - 1.
\end{align*}
Now $k-1-\ell>0$ and $n-k\geq 2$ (as there is no $k$-critical graph on $n=k+1$ vertices), and so the above expression is positive, which implies that bound is correct in this case.
Suppose now that $G$ is not $k$-critical. Then $G$ again has an induced $k$-critical subgraph $H$ with $k \leq x \leq n-1$ vertices. Since $n<2k-1$, if $x>k+1$ then the Gallai bound applies, so $H$ has at least $\frac{1}{2}( x(k-1) + (x-k)(2k-x)-2)$ edges. If instead $x=k$, the graph $H$ has $\frac{1}{2}k(k-1)$ edges. We cannot have $x=k+1$ as there is no $k$-critical graph on $x=k+1$ vertices.
Consider the vertices in $V(G) \setminus V(H)$. Each vertex must have at least $\ell$ disjoint paths to $V(H)$, and so in particular must have minimum degree at least $\ell$. Therefore the degree sum of the vertices in $V(G) \setminus V(H)$ is at least $(n-x)\ell$. There can be at most $\binom{n-x}{2}$ edges that contribute two to this degree sum, coming from edges with both endpoints in $V(G) \setminus V(H)$. This means there must be at least $(n-x)\ell - (n-x)(n-x-1) = (n-x)(\ell-(n-x-1))$ edges between $H$ and $G-H$. And if there are $p$ edges missing inside the induced subgraph on $V(G) \setminus V(H)$, where $0 \leq p$, then we have at least $(n-x)(\ell-(n-x-1))+2p$ edges between $H$ and $G-H$.
Therefore the total number of edges in $G$ is at least
\[
\frac{( x(k-1) + (x-k)(2k-x)-2 \cdot \textbf{1}_{\{x\geq k+2\}})}{2} + \frac{(n-x)(n-x-1)}{2} - p + (n-x)(\ell-(n-x-1))+2p,
\]
where $\textbf{1}_{\{x\geq k+2\}}$ is the indicator function of the event $x \geq k+2$. Since the terms involving $p$ contribute $-p+2p=p$, this is minimized by taking $p=0$, giving at least
\[
\frac{( x(k-1) + (x-k)(2k-x)-2\cdot \textbf{1}_{\{x\geq k+2\}})}{2} + \frac{(n-x)(n-x-1)}{2} + (n-x)(\ell-(n-x-1))
\]
edges. The expression
\[
\frac{x(k-1) + (x-k)(2k-x)-2\cdot \textbf{1}_{\{x\geq k+2\}} + (n-x)(n-x-1) + 2(n-x)(\ell-(n-x-1))}{2}
\]
is a quadratic function of (real-valued) $x$ with leading coefficient $-1$. Since a concave function on a closed interval attains its minimum at an endpoint, the minimum over $k+2 \leq x \leq n-1$ occurs at $x=k+2$ or $x=n-1$. We also need to compare these values to $x=k$, the other possible value for $x$. (We remark that we separate out the case when $x=k$ because the indicator function is not continuous there, and so the interval $k \leq x \leq n-1$ cannot be treated all at once.) When $x=k$ we have the bound
\[
\frac{k(k-1) + (n-k)(n-k-1) + 2(n-k)(\ell-(n-k-1))}{2}.
\]
We need to show that when $x=k+2$ or $x=n-1$, we have a larger bound. For $x=k+2$, the bound is
\[
\frac{(k+2)(k-1) + 2(k-2)-2 + (n-k-2)(n-k-3) + 2(n-k-2)(\ell-(n-k-3))}{2}
\]
and when $x=n-1$ the bound is
\[
\frac{(n-1)(k-1) + (n-1-k)(2k-n+1)-2 + 2\ell}{2}.
\]
Computing the $x=k+2$ count minus the $x=k$ count gives
\[
2(n-\ell)-7.
\]
Now for this bound we have $n \geq k+2$, so $\ell<k-1$ implies $\ell < n-3$, that is, $n-\ell\geq 4$; hence the difference is positive in this case.
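For the reader's convenience, here is the arithmetic behind the difference $2(n-\ell)-7$. Writing $m=n-k$, twice the difference equals
\begin{align*}
&\left[(k+2)(k-1)-k(k-1)\right]+\left[2(k-2)-2\right]+\left[(m-2)(m-3)-m(m-1)\right]\\
&\qquad+2\left[(m-2)(\ell-(m-3))-m(\ell-(m-1))\right]\\
&\quad=2(k-1)+(2k-6)+(-4m+6)+(-4\ell+8m-12)=4(n-\ell)-14.
\end{align*}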
Computing the $x=n-1$ count minus the $x=k$ count gives
\[
(k-\ell)(n-1-k)-1;
\]
here $k-\ell\geq 2$ and $n-1>k$ (as $n \geq k+2$). This shows the difference is positive in this case. Therefore, the minimum value occurs for $x=k$, proving the claimed bound.
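For completeness, the arithmetic behind the second difference is similar: with $m=n-k$, twice the difference equals
\begin{align*}
(m-1)(k-1)&+(m-1)(k-m+1)-2+2\ell-m(m-1)-2m(\ell-(m-1))\\
&=(m-1)(2k-m)+m(m-1)-2(m-1)\ell-2=2(m-1)(k-\ell)-2.
\end{align*}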
\end{proof}
\section{Concluding Remarks}\label{sec-conclusion}
In this section we highlight a few open problems that are related to the contents of this paper. In Theorem \ref{thm-asympresult}, we characterized the $n$-vertex $k$-chromatic $\ell$-connected graph with the maximum number of independent sets for large $n$. We expect the result to hold for all $n$ for which the graph $G^*$ is $k$-chromatic and $\ell$-connected.
\begin{conjecture}\label{conj-1}
Let $3 \leq k \leq \ell$ and $n \geq 2\ell$. If $G$ is an $n$-vertex $k$-chromatic $\ell$-connected graph, then
\[
i(G) \leq i(G^*).
\]
\end{conjecture}
\begin{conjecture}\label{conj-2}
Let $2 \leq \ell < k$ and $n\neq 5$. If $G$ is an $n$-vertex $k$-chromatic $\ell$-connected graph, then
\[
i(G) \leq i(G^*).
\]
\end{conjecture}
Conjecture \ref{conj-2} is true for $k=3$ and $\ell=2$ (for $n \neq 5$), as we showed in Section \ref{sec-2con3chrom}.
There are also open questions related to the number of independent sets of size $t$. We expect the following to hold, which extends Theorem \ref{thm-bigt} down to $t \geq 3$.
\begin{conjecture}\label{conj-3}
Let $3 \leq k \leq \ell$ and $n \geq 2\ell$. If $G$ is an $n$-vertex $k$-chromatic $\ell$-connected graph and $t \geq 3$, then
\[
i_t(G) \leq i_t(G^*).
\]
\end{conjecture}
It is also natural to conjecture that this behavior holds for $k>\ell$ as well.
\begin{conjecture}\label{conj-4}
Let $k \geq 4$ and $k > \ell$. If $G$ is an $n$-vertex $k$-chromatic $\ell$-connected graph and $t \geq 3$, then
\[
i_t(G) \leq i_t(G^*).
\]
\end{conjecture}
These last two conjectures appeared as questions in \cite{EngbersErey}. We note that the corresponding results for $k=3$ and $\ell=2$ are shown in Theorem \ref{thm-2con3chrom}, and that the cases when $\ell=0$ and $\ell=1$ appear in \cite{EngbersErey}.
\section{Introduction and main results}
\subsection{Overview} Let $\mathbf{k}$ be a field of characteristic zero and $X$ be a smooth geometrically connected projective curve over~$\mathbf{k}$ (geometric connectedness means that $X$ remains connected after the base change to an algebraic closure of $\mathbf{k}$). In~\cite{FedorovSoibelmans} we calculated the motivic classes of moduli stacks of semistable Higgs bundles on $X$. These motivic classes are closely related to Donaldson--Thomas invariants, see~\cite{KontsevichSoibelman08,KontsevichSoibelman10}. In~\cite{FedorovSoibelmans} we also calculated the motivic classes of moduli stacks of vector bundles with connections on $X$ by relating them to the motivic classes of stacks of semistable Higgs bundles.
In this paper, we extend these results to the parabolic case. Some of our results are parallel to the results of A.~Mellit in the case of finite fields (see~\cite{MellitPunctures}). One difference is that we fix the eigenvalues of the residues. Another difference is that, by working over a field of characteristic zero, we are also able to treat bundles with connections. We also note that the calculation of the motivic class requires subtler techniques than the calculation of the volume of the corresponding stack over a finite field.
\subsection{Moduli stacks} Let us briefly describe the moduli stacks whose motivic classes we will be interested in. There will be three classes of stacks.
\subsubsection{Parabolic bundles with connections}\label{sect:IntroConn}
Let $D\subset X(\mathbf{k})$ be a non-empty set of rational points of $X$. A \emph{parabolic bundle} of type $(X,D)$ is a collection $\mathbf E=(E,E_{\bullet,\bullet})$, where $E$ is a vector bundle over $X$ and $E_{x,\bullet}$ is a flag in its fiber $E_x$ for $x\in D$:
\[
E_x=E_{x,0}\supseteq E_{x,1}\supseteq\dots\supseteq E_{x,l}\supseteq\cdots,\qquad E_{x,l}=0\quad \text{for} \ l\gg0.
\]
A \emph{connection} on $E$ with \emph{poles bounded by $D$} is a morphism of sheaves of abelian groups $\nabla\colon E\to E\otimes\Omega_X(D)$ satisfying Leibniz rule. (Here $\Omega_X$ is the canonical line bundle on $X$.) In this case for $x\in D$ one defines the residue of the connection $\res_x\nabla\in\End(E_x)$. Let $\zeta=\zeta_{\bullet,\bullet}=(\zeta_{x,j})$ be a sequence of elements of $\mathbf{k}$ indexed by $D\times\mathbb{Z}_{>0}$ such that $\zeta_{x,j}=0$ for $j\gg0$. Let $\mathcal{C}{onn}(X,D,\zeta)$ denote the moduli stack parameterizing collections $(E,E_{\bullet,\bullet},\nabla)$, where $(E,E_{\bullet,\bullet})$ is a parabolic bundle of type $(X,D)$, $\nabla$~is a connection on $E$ with poles bounded by $D$ such that $(\res_x\nabla-\zeta_{x,j}1)(E_{x,{j-1}})\subset E_{x,j}$ for all $x\in D$ and $j>0$. We usually skip $X$ and $D$ from the notation as they are fixed, denoting $\mathcal{C}{onn}(X,D,\zeta)$ simply by $\mathcal{C}{onn}(\zeta)$. We call the points of $\mathcal{C}{onn}(\zeta)$ \emph{parabolic bundles with connections of type $(X,D)$ with eigenvalues $\zeta$.}
For a parabolic bundle $\mathbf E=(E,E_{\bullet,\bullet})$ define the \emph{class} of $\mathbf E$ as the following collection of integers:
\begin{equation}\label{eq:class}
(\rk E,\dim{E_{x,j-1}}-\dim{E_{x,j}},\deg E)\in\mathbb{Z}_{\ge0}\times\mathbb{Z}_{\ge0}[D\times\mathbb{Z}_{>0}]\times\mathbb{Z}.
\end{equation}
We also set $\rk\mathbf E:=\rk E$.
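For instance (a hypothetical illustration), suppose that $D=\{x\}$ consists of a single point, $E$ has rank $2$ and degree $d$, and $E_{x,\bullet}$ is a full flag $E_x=E_{x,0}\supset E_{x,1}\supset E_{x,2}=0$ with $\dim E_{x,1}=1$. Then the class of $\mathbf E$ is
\[
(2,\,(r_{x,1},r_{x,2})=(1,1),\,d),
\]
while the trivial flag $E_{x,1}=0$ gives the class $(2,(2,0,0,\dots),d)$.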
The stack $\mathcal{C}{onn}(\zeta)$ decomposes according to the classes of parabolic bundles; denote the component corresponding to parabolic bundles of class $\gamma$ by $\mathcal{C}{onn}_\gamma(\zeta)$. We will see that this stack is an Artin stack of finite type over $\mathbf{k}$. One of our main results (see Section~\ref{sect:ExplAnswers} and Theorem~\ref{th:ExplAnsw2}) is the calculation of the motivic class of this stack.
\subsubsection{Parabolic Higgs bundles}\label{sect:IntroHiggs} Let $\zeta$ be as above. A \emph{parabolic Higgs bundle with eigenvalues $\zeta$} is a triple $(E,E_{\bullet,\bullet},\Phi)$, where $(E,E_{\bullet,\bullet})$ is a parabolic bundle of type $(X,D)$, $\Phi\colon E\to E\otimes\Omega_X(D)$ is a morphism of $\mathcal O_X$-modules (called a \emph{Higgs field on $(E,E_{\bullet,\bullet})$}) such that for all $x\in D$ and $j>0$ we have
\[ (\Phi-\zeta_{x,j}1)(E_{x,{j-1}})\subset E_{x,j}\otimes\Omega_X(D)_x.
\] Denote the category and the stack of such Higgs bundles by $\mathcal{H}{iggs}(\zeta)$. Unfortunately, this stack is not of finite type over $\mathbf{k}$ and, in fact, has an infinite motivic volume. To resolve the problem we endow the category with a stability structure. Let $\sigma=\sigma_{\bullet,\bullet}$ be a sequence of real numbers indexed by $D\times\mathbb{Z}_{>0}$. Let $\kappa\in\mathbb{R}_{\ge0}$. We define \emph{the $(\kappa,\sigma)$-degree} of a parabolic bundle $\mathbf E=(E,E_{\bullet,\bullet})$ by
\[
\deg_{\kappa,\sigma}\mathbf E:=\kappa\deg E+\sum_{x\in D}\sum_{j>0}\sigma_{x,j}(\dim E_{x,j-1}-\dim E_{x,j})\in\mathbb{R}.
\]
If $\mathbf E\ne0$, we define the \emph{$(\kappa,\sigma)$-slope} of $\mathbf E$ as $\deg_{\kappa,\sigma}\mathbf E/\rk\mathbf E$.
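For instance, for a rank $2$ parabolic bundle $\mathbf E$ whose flag at a single point $x\in D$ is a full flag, the definition gives
\[
\deg_{\kappa,\sigma}\mathbf E=\kappa\deg E+\sigma_{x,1}+\sigma_{x,2},
\]
so the $(\kappa,\sigma)$-slope of $\mathbf E$ is $\bigl(\kappa\deg E+\sigma_{x,1}+\sigma_{x,2}\bigr)/2$.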
We say that a sequence $\sigma=\sigma_{\bullet,\bullet}$ of real numbers indexed by $D\times\mathbb{Z}_{>0}$ is a \emph{sequence of parabolic weights} if for all $x\in D$ we have
\begin{equation}\label{eq:StabCond}
\sigma_{x,1}\le\sigma_{x,2}\le\cdots
\end{equation}
and for all $x$ and $j$ we have $\sigma_{x,j}\le\sigma_{x,1}+1$. Let $\sigma$ be a sequence of parabolic weights. Let $\mathbf E=(E,E_{\bullet,\bullet})$ be a parabolic bundle. Let $F\subset E$ be a saturated vector subbundle (that is, $E/F$ is torsion free). Set $F_{x,j}:=F_x\cap E_{x,j}$. Then $\mathbf F:=(F,F_{\bullet,\bullet})$ is a parabolic bundle. We say that a Higgs bundle~$(\mathbf E,\Phi)$ is \emph{$\sigma$-semistable}, if for all saturated subbundles $F$ of $E$ preserved by~$\Phi$ the $(1,\sigma)$-slope of the corresponding parabolic bundle $\mathbf F$ is less than or equal to that of $\mathbf E$. We have an open substack $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)$ of $\mathcal{H}{iggs}_\gamma(\zeta)$ classifying $\sigma$-semistable parabolic Higgs bundles. This stack is of finite type over $\mathbf{k}$; we will calculate its motivic class (see Section~\ref{sect:ExplAnswers} and Theorem~\ref{th:ExplAnsw}).
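As a simple example, every parabolic Higgs bundle of rank $1$ is $\sigma$-semistable for any sequence of parabolic weights: if $E$ is a line bundle, then its only saturated subbundles are $0$ and $E$ itself, so the semistability condition is vacuous.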
We note that condition~\eqref{eq:StabCond} is imposed on $\sigma$ to ensure that we have Harder--Narasimhan filtrations for parabolic Higgs bundles. We also note that, scaling $\kappa$ and $\sigma$ by the same positive real number scales all the slopes by the same number. This is why we restrict to the case $\kappa=1$ above (see Remark~\ref{rm:kappa} for more details).
\subsubsection{Semistable parabolic bundles with connections}\label{sect:IntroConnSS} We can also impose stability conditions on parabolic bundles with connections. Moreover, for non-resonant connections we can work with more general stability conditions than those for Higgs bundles defined in the previous paragraph. A sequence $\zeta$ as above is called \emph{non-resonant} if for all $x\in D$ and all $i,j>0$ we have $\zeta_{x,i}-\zeta_{x,j}\notin\mathbb{Z}_{\ne0}$. Take $\kappa\in\mathbb{R}_{\ge0}$ and a sequence $\sigma$ of real numbers indexed by $D\times\mathbb{Z}_{>0}$ and satisfying condition~\eqref{eq:StabCond}.
\looseness=1 Assume that $\zeta$ is non-resonant. We define $(\kappa,\sigma)$-semistability of parabolic bundles with connections similarly to semistability of Higgs bundles but using the $(\kappa,\sigma)$-slope. Denote the corresponding moduli stack by $\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)$; this is an open substack of $\mathcal{C}{onn}_\gamma(\zeta)$. If $\zeta$ is resonant, then $\sigma$ has to satisfy some additional conditions (see Proposition~\ref{pr:HN2} and Re\-mark~\ref{rm:resonant}).
\subsection{Motivic Donaldson--Thomas invariants}\label{sect:DT} Our formulas for motivic classes of the moduli stacks above are all given in terms of certain motivic classes $\overline B_\gamma$ called \emph{motivic Donaldson--Thomas invariants} (see Section~\ref{sect:Aftermath} for this terminology), which we are going to define. First of all, we recall that in~\cite[Section~2]{FedorovSoibelmans} we defined (following earlier works \cite[Section~1]{Ekedahl09}, \cite{Joyce07}, and~\cite{KontsevichSoibelman08}) the ring of motivic classes of Artin stacks denoted $\Mot(\mathbf{k})$. We also defined its dimensional completion $\cMot(\mathbf{k})$. For an Artin stack $\mathcal S$ of finite type over $\mathbf{k}$ we have its motivic class $[\mathcal S]\in\Mot(\mathbf{k})$. We denote its image in $\cMot(\mathbf{k})$ by the same symbol.
For a curve $X$ and a partition $\lambda$ we defined the series $J_\lambda^{\rm mot}(z),H_\lambda^{\rm mot}(z)\in\cMot(\mathbf{k})[[z]]$ in~\cite[Section~1.3.2]{FedorovSoibelmans}. The definitions (especially of $H_\lambda^{\rm mot}(z)$) are somewhat long, so we will not recall them here inviting the reader to look into~\cite{FedorovSoibelmans}. We only note that $J_\lambda^{\rm mot}(z)$ and $H_\lambda^{\rm mot}(z)$ are defined in terms of the motivic zeta-function of~$X$ (cf.~\eqref{eq:MotZeta} below). In particular, they only depend on~$X$ but not on~$D$. In this paper, we will denote them by $J_{\lambda,X}^{\rm mot}(z)$ and $H_{\lambda,X}^{\rm mot}(z)$ respectively to emphasize that they depend on the curve $X$ and to ensure that they are not confused with motivic modified Macdonald polynomials $\tilde H_\lambda^{\rm mot}(w_\bullet;z)$ and with motivic Hall--Littlewood polynomials $H_\lambda^{\rm mot}(w_\bullet)$ defined below.
The modified Macdonald polynomials $\tilde H_\lambda(w_\bullet;q,z)$ are symmetric functions in variables $w_\bullet=(w_1,w_2,\dots)$ with coefficients in $\mathbb{Z}[q,z]$. In~\cite[Definition~2.5]{MellitPunctures} the modified Macdonald polynomials are defined as symmetric functions with coefficients in $\mathbb{Q}[q,z]$ but it is well-known that the coefficients are integers (see, e.g., \cite{HaglundEtAlOnMacdonaldPoly} and references therein). Note that, formally speaking, symmetric functions are not polynomials (they become polynomials upon plugging in $w_{N+1}=w_{N+2}=\dots=0$). Let $\mathbb{L}=\big[\mathbb A_\mathbf{k}^1\big]$ be the motivic class of the affine line. We denote by $\tilde H_\lambda^{\rm mot}(w_\bullet;z)$ the symmetric function with coefficients in $\Mot(\mathbf{k})[z]$ obtained from $\tilde H_\lambda(w_\bullet;q,z)$ by substituting $\mathbb{L}$ for~$q$.
We denote their images in the ring of symmetric functions with coefficients in $\cMot(\mathbf{k})[z]$ by $\tilde H_\lambda^{\rm mot}(w_\bullet;z)$ as well.
Let $\Gamma_+$ denote the commutative monoid of sequences $(r,r_{\bullet,\bullet},d)$, where $r$ is a nonnegative integer, $r_{\bullet,\bullet}$ is a sequence of nonnegative integers indexed by $D\times\mathbb{Z}_{>0}$, $d$ is an integer, subject to the following conditions:
\begin{enumerate}\itemsep=0pt
\item[(i)] For all $x\in D$ we have $\sum\limits_{j=1}^{\infty}r_{x,j}=r$. In particular, $r_{x,j}=0$ for $j$ large enough.
\item[(ii)] If $r=0$, then $d=0$ (and so $r_{x,j}=0$ for all $x$ and $j$).
\end{enumerate}
The operation on $\Gamma_+$ is the componentwise addition. For $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$ we set $\rk\gamma=r$. The significance of the monoid $\Gamma_+$ is that the class of a parabolic bundle $\mathbf E$ defined by~\eqref{eq:class} is an element of $\Gamma_+$. We also need the submonoid $\Gamma_+'\subset\Gamma_+$ given by $d\le 0$. Consider the completed monoid ring $\Mot(\mathbf{k})[[\Gamma_+']]$; we write its elements as $\sum\limits_{\gamma\in\Gamma_+'}A_\gamma e_\gamma$, where $A_\gamma\in\Mot(\mathbf{k})$ and $e_\gamma$ are basis vectors. It is convenient to identify $e_\gamma$ with a monomial
\[
w^r\prod_{x\in D}\prod_{j=1}^{\infty}w_{x,j}^{r_{x,j}}z^d,
\]
where $w_{\bullet,\bullet}=(w_{x,j})$ is a sequence of variables indexed by $D\times\mathbb{Z}_{>0}$. Then we identify $\Mot(\mathbf{k})[[\Gamma_+']]$ with a subring of $\Mot(\mathbf{k})\big[\big[w,w_{\bullet,\bullet},z^{-1}\big]\big]$. Similarly, we consider the completed monoid ring $\cMot(\mathbf{k})[[\Gamma_+']]$. We note that these rings are closely related to completed quantum tori considered in~\cite{KontsevichSoibelman08,KontsevichSoibelman10}. In our case, they are commutative, essentially because we are working with 2-dimensional Calabi--Yau categories; see Sections~\ref{sect:Aftermath} and~\ref{sect:CatsOverPar} for more details.
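For instance, if $D=\{x\}$ and $\gamma=(2,(r_{x,1},r_{x,2})=(1,1),-3)\in\Gamma_+'$, then under this identification
\[
e_\gamma=w^2w_{x,1}w_{x,2}z^{-3}.
\]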
Finally, we need the notion of \emph{plethystic exponent and logarithm}. Let $\Mot(\mathbf{k})[[\Gamma_+']]^0$ denote the subset of $\Mot(\mathbf{k})[[\Gamma_+']]$ consisting of elements with zero constant terms. Then we have a~bijection
\[
\Exp:\Mot(\mathbf{k})[[\Gamma_+']]^0\to1+\Mot(\mathbf{k})[[\Gamma_+']]^0
\]
called the plethystic exponent. We refer the reader to Section~\ref{sect:Plethystic} for the definition. Let the plethystic logarithm $\Log$ be the inverse bijection. Let us write
\[
\mathbb{L}\cdot\Log\left(\sum_\lambda w^{|\lambda|} J_{\lambda,X}^{\rm mot}\big(z^{-1}\big)H_{\lambda,X}^{\rm mot}\big(z^{-1}\big)\prod_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)\right)=
\sum_{\gamma\in\Gamma'_+}\overline B_\gamma e_\gamma,
\]
where the sum in the LHS is over all partitions. We call the elements $\overline B_\gamma$ the \emph{motivic Donaldson--Thomas invariants}. Note that $\overline{B}_0=0$.
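Although the definition of $\Exp$ is postponed to Section~\ref{sect:Plethystic}, let us record, for orientation, two standard properties that hold with the conventions of that section: $\Exp$ converts sums into products, that is, $\Exp(A+B)=\Exp(A)\Exp(B)$, and on a monomial whose coefficient is a power of $\mathbb{L}$ it is given by a geometric series:
\[
\Exp\big(\mathbb{L}^i e_\gamma\big)=\sum_{n\ge0}\mathbb{L}^{in}e_{n\gamma}=\frac{1}{1-\mathbb{L}^i e_\gamma},
\]
since the $n$-th symmetric power of $\mathbb A^i_\mathbf{k}$ has motivic class $\mathbb{L}^{in}$. Accordingly, $\Log$ takes such products back to sums.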
When $X=\P^1_\mathbf{k}$, we can define motivic Donaldson--Thomas invariants $B_\gamma$ by a simpler formula valid in $\Mot(\mathbf{k})$:
\begin{equation}\label{eq:DT_P1intro}
\mathbb{L}\cdot\Log\left(\sum_\lambda\frac{w^{|\lambda|}\prod\limits_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)}
{\prod\limits_{h\in\Hook(\lambda)}
\big(\mathbb{L}^{a(h)}-z^{-l(h)-1}\big)\big(\mathbb{L}^{a(h)+1}-z^{-l(h)}\big)}\right)=
\sum_{\gamma\in\Gamma'_+}B_\gamma e_\gamma,
\end{equation}
where $\Hook(\lambda)$ stands for the set of hooks of $\lambda$, $a(h)$ and $l(h)$ stand for the armlength and the leglength of the hook $h$ respectively. We show that for $X=\P^1$ the images of $B_\gamma$ in $\cMot(\mathbf{k})$ are equal to $\overline B_\gamma$.
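For instance, for the one-box partition $\lambda=(1)$ there is a single hook $h$ with $a(h)=l(h)=0$, so the corresponding summand inside $\Log$ in~\eqref{eq:DT_P1intro} is
\[
\frac{w\prod\limits_{x\in D}\tilde H_{(1)}^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)}{\big(1-z^{-1}\big)(\mathbb{L}-1)};
\]
with the normalization of~\cite[Definition~2.5]{MellitPunctures}, $\tilde H_{(1)}^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)$ is simply $\sum\limits_{j\ge1}w_{x,j}$.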
\subsection{Explicit formulas}\label{sect:ExplAnswers} The following explicit formulas for the motivic classes are parts of Theorem~\ref{th:ExplAnsw2}, Theorem~\ref{th:ExplAnsw}, and Theorem~\ref{th:ExplAnsw3} respectively. Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, $\gamma\ne0$. Let $\zeta$ be as in Section~\ref{sect:IntroConn}. For $\kappa\in\mathbf{k}$ we define the $(\kappa,\zeta)$-degree and the $(\kappa,\zeta)$-slope of parabolic bundles similarly to $(\kappa,\sigma)$-degree and $(\kappa,\sigma)$-slope defined in Section~\ref{sect:IntroConnSS}; the only difference is that the $(\kappa,\zeta)$-degree and $(\kappa,\zeta)$-slope take values in~$\mathbf{k}$.
For each $\tau\in\mathbf{k}$, define the elements $C_\gamma(\zeta)\in\cMot(\mathbf{k})$, where $\gamma$ ranges over elements of $\Gamma_+'$ such that $\gamma=0$ or the $(1,\zeta)$-slope of $\gamma$ is $\tau$, by the following formula
\begin{equation*}
\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}C_\gamma(\zeta)e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma}}
\overline B_\gamma e_\gamma
\right),
\end{equation*}
where $\chi(\gamma):=(g-1)r^2+\sum\limits_{x\in D}\sum\limits_{j<j'}r_{x,j}r_{x,j'}$, $g$ is the genus of~$X$. Let $\gamma\in\Gamma_+$ be such that $\deg_{1,\zeta}\gamma=0$. Then, according to Theorem~\ref{th:ExplAnsw2}, the stack $\mathcal{C}{onn}_\gamma(\zeta)$ is of finite type over $\mathbf{k}$ and we have in $\cMot(\mathbf{k})$
\[
[\mathcal{C}{onn}_\gamma(\zeta)]=C_{(r,r_{\bullet,\bullet},d-Nr)}(\zeta),
\]
whenever $N$ is large enough. If $\deg_{1,\zeta}\gamma\ne0$, then the stack $\mathcal{C}{onn}_\gamma(\zeta)$ is empty.
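For instance, suppose that $D=\{x\}$ and $\gamma=(2,(r_{x,1},r_{x,2})=(1,1),d)$. Then $\chi(\gamma)=4(g-1)+1$, and the condition $\deg_{1,\zeta}\gamma=0$ reads
\[
d+\zeta_{x,1}+\zeta_{x,2}=0,
\]
which is consistent with the residue theorem: for a connection $\nabla$ on $E$ with poles bounded by $D$ we have $\sum\limits_{x\in D}\mathrm{tr}\res_x\nabla=-\deg E$.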
Next, assume that $\zeta$ and $\sigma$ are as in Section~\ref{sect:IntroHiggs}, for $\tau\in\mathbb{R}$ define the elements $H_\gamma(\zeta,\sigma)\in\cMot(\mathbf{k})$ by the following formula
\begin{equation*}
\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{0,\zeta}\gamma=0\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}H_\gamma(\zeta,\sigma)e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{0,\zeta}\gamma=0\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}
\overline B_\gamma e_\gamma
\right).
\end{equation*}
Assume that $\deg_{0,\zeta}\gamma\!=\!0$, where $\gamma\in\Gamma_+'$. Then, according to Theorem~\ref{th:ExplAnsw}, the stack $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)$ is of finite type over $\mathbf{k}$ and we have
\[
\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]=H_{(r,r_{\bullet,\bullet},d-Nr)}(\zeta,\sigma),
\]
whenever $N$ is large enough. If $\deg_{0,\zeta}\gamma\ne0$, then the stack $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)$ is empty.
Finally, assume that $\zeta$ and $(\kappa,\sigma)$ are as in Section~\ref{sect:IntroConnSS}. For $\tau\in\mathbf{k}$, $\tau'\in\mathbb{R}$ define the elements $C_\gamma(\zeta,\kappa,\sigma)\in\cMot(\mathbf{k})$ by the following formula
\begin{equation*}
\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma\\ \deg_{\kappa,\sigma}\gamma=\tau'\rk\gamma }}\mathbb{L}^{-\chi(\gamma)}C_\gamma(\zeta,\kappa,\sigma)e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma\\ \deg_{\kappa,\sigma}\gamma=\tau'\rk\gamma }}
\overline B_\gamma e_\gamma
\right).
\end{equation*}
Let $\gamma\in\Gamma_+$ be such that $\deg_{1,\zeta}\gamma=0$. Then, according to Theorem~\ref{th:ExplAnsw3}, we have
\[
\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]=C_{(r,r_{\bullet,\bullet},d-Nr)}(\zeta,\kappa,\sigma),
\]
whenever $N$ is large enough. If $\deg_{1,\zeta}\gamma\ne0$, then the stack $\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)$ is empty.
If $X=\P^1$, we get similar results valid in $\Mot(\mathbf{k})$, by replacing $\overline B_\gamma$ with $B_\gamma$ defined by a~simpler formula~\eqref{eq:DT_P1intro}.
\begin{Remark} We note that each of the above motivic classes depends only on finitely many DT-invariants. Indeed, $[\mathcal{C}{onn}_\gamma(\zeta)]$, $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]$, and $\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]$ depend only on $\overline B_{\gamma'}$ with $\rk\gamma'\le\rk\gamma$ and for a given $\gamma$ there are only finitely many such $\gamma'\in\Gamma'_+$.
\end{Remark}
\begin{Remark} We note also that all the stacks whose motivic classes we are calculating are of finite type over $\mathbf{k}$, so their motivic classes are defined in $\Mot(\mathbf{k})$. However, we can only calculate their motivic classes in $\cMot(\mathbf{k})$, except when $X=\P^1$. The reason is that our calculation is based on the calculation of motivic classes of stacks of vector bundles on $X$ (without parabolic structures) with nilpotent endomorphisms. This calculation is performed in~\cite{FedorovSoibelmans} and is, in turn, based on the motivic analogue of Harder's residue formula (see~\cite[Theorem~1.5.1 and Section~4]{FedorovSoibelmans} and~\cite[Theorem~2.2.3]{HarderAnnals}). This formula, which essentially says that ``all vector bundles have the same motivic number of Borel reductions'', involves a limiting process and is, therefore, only valid in the completed ring~$\cMot(\mathbf{k})$.
\end{Remark}
\subsection{Aftermath}\label{sect:Aftermath} In Section~\ref{sect:DT} we defined the classes $\overline B_\gamma$. These classes should be thought of as the Donaldson--Thomas invariants of the stack $\mathcal{H}{iggs}(0)$ of parabolic Higgs bundles with nilpotent residues. Note that this stack is the cotangent bundle of $\mathcal{B}{un}^{\rm par}(X,D)$, while the stacks $\mathcal{H}{iggs}(\zeta)$ and $\mathcal{C}{onn}(\zeta)$ are \emph{twisted cotangent bundles}. We emphasize that $\overline B_\gamma$ do not depend on~$\zeta$,~$\kappa$, and~$\sigma$. The meaning of the formulas in Section~\ref{sect:ExplAnswers} is that the Donaldson--Thomas invariants of these twisted cotangent bundles are obtained by restricting the range of $\gamma$ to the submonoid $\deg_{0,\zeta}\gamma=0$ in the case of $\mathcal{H}{iggs}(\zeta)$ and to the submonoid $\deg_{1,\zeta}\gamma=0$ in the case of $\mathcal{C}{onn}(\zeta)$.
Another feature of the formulas is that the motivic classes of the stacks depend on \emph{equations} satisfied by~$\kappa$ and~$\sigma$ rather than on inequalities. In other words, there is no \emph{wall-crossing} in our case. This is not very surprising, as the category $\mathcal{H}{iggs}(0)$, being a cotangent bundle of $\mathcal{B}{un}^{\rm par}(X,D)$, is a 2-dimensional Calabi--Yau category, cf.~\cite{RenSoibelman}.
One can speculate that similar results should be valid for the twisted cotangent stacks to the moduli stack of objects of any reasonable 1-dimensional category. Note that such cotangent stacks were studied by G.~Dobrovolska, V.~Ginzburg, and R.~Travkin in~\cite{DobrovolskaGinzburgTravkin}.
Another example of such a twisted cotangent stack is the category of vector bundles with irregular connections and appropriate level structures. This example is certainly more complicated as the corresponding abelian category has infinite homological dimension. We hope to return to this question in subsequent publications.
The formulas in Section~\ref{sect:ExplAnswers} are explicit but complicated. However, one can see that the motivic classes under consideration belong to the sub-$\lambda$-ring of $\cMot(\mathbf{k})$ generated by $\mathbb{L}$, $X$, and the inverses of $\mathbb{L}^i-1$ for $i\ge1$. We note that this ring is probably \emph{strictly larger} than the subring of $\cMot(\mathbf{k})$ generated by $\mathbb{L}$, the symmetric powers $X^{(i)}$, and the inverses of $\mathbb{L}^i-1$ for $i\ge1$. The reason is that $\cMot(\mathbf{k})$ is unlikely to be a special $\lambda$-ring, see Section~\ref{sect:OtherRel}. On the other hand, if $X=\P^1$, then all our motivic classes are rational functions in $\mathbb{L}$ with denominators being products of $\mathbb{L}^i-1$ for $i\ge1$.
\subsection{Other results} It is clear from the above formulas that we have a lot of equalities between different motivic classes of Higgs bundles and bundles with connections. In particular, we show in Propositions~\ref{pr:ConnUniversal} and~\ref{pr:ConnUniversal2} that every motivic class of the form $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]$ or $\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]$ is equal to some motivic class of the form $[\mathcal{C}{onn}_\gamma(\zeta)]$, provided that $\mathbf{k}$ is not a finite extension of $\mathbb{Q}$. As a~consequence, we derive from results of Crawley-Boevey~\cite{CrawleyBoeveyIndecompPar} a criterion of non-emptiness of our moduli stacks. It is not difficult to see that if $X\ne\P^1_\mathbf{k}$, then the stack $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]$ is non-empty if and only if $\deg_{0,\zeta}\gamma=0$, while the stack $\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]$ is non-empty if and only if $\deg_{1,\zeta}\gamma=0$. For $X=\P^1_\mathbf{k}$ the question is much more subtle and is related to the so-called Deligne--Simpson problem. This problem was originally stated for $\mathbf{k} =\mathbb C$ in~\cite{SimsponProducts}. It may be reformulated for an arbitrary algebraically closed field $\mathbf{k}$ of characteristic~$0$ as follows: given a~sequence of $\mathfrak{gl}_r$-conjugacy classes $C_\bullet$ indexed by $D$, does there exist a pair $(E, \nabla)$ consisting of a~rank~$r$ vector bundle and a connection $\nabla$ on $E$ with poles bounded by $D$ such that $\Res_x \nabla\in C_x$ for all $x\in D$? Bundles with connections $(E,\nabla)$ parameterized by $\mathcal{C}{onn}_{\gamma}\big(\P^1_\mathbf{k},D,\zeta\big)$ are exactly bundles with connections such that each residue $\Res_x\nabla$ lies in the closure of a conjugacy class determined by $\gamma$ and $\zeta$ (see~\cite{Crawley-Boevey:Indecomposable}). If this conjugacy class is semisimple for each $x\in D$, then the elements of $\mathcal{C}{onn}_{\gamma}\big(\P^1_\mathbf{k},D,\zeta\big)$ are the solutions of the corresponding Deligne--Simpson problem. For a comprehensive survey of the Deligne--Simpson problem, see \cite{Crawley-Boevey:Indecomposable,Kostov2004Deligne,Simpson:MiddleConv}.
We note that if $\mathbf{k}$ is not algebraically closed, one can ask a subtler question of whether there is a $\mathbf{k}$-rational point in $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]$ or $\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]$. We do not know the answer to this question. A somewhat similar question for moduli spaces of quiver representations is considered by V.~Hoskins and F.~Schaffhauser in~\cite{hoskins2017rational}.
At this point, we would like to emphasize that working with not necessarily algebraically closed fields is inevitable in the motivic setup: even if one is only interested in the case $\mathbf{k}=\mathbb C$, one still has to consider all fields of characteristic zero, see Remark~\ref{rm:ArbFields} below.
One of the motivations for this work is the non-abelian Hodge theory of C.~Simpson (see~\cite{SimpsonHarmonicNoncompact}). In that paper, Simpson constructs an equivalence between a category of parabolic bundles with connections and a category of Higgs bundles. In Proposition~\ref{pr:Simpson}, we show that the corresponding stacks have equal motivic classes. We note that neither statement can be derived from the other (cf.~\cite[Remark~1.2.2]{FedorovSoibelmans}). We note also that it is clear from our results that there are many more equalities of motivic classes than those that one can guess from the non-abelian Hodge theory. We remark that V.~Hoskins and S.~Pepin Lehalleur have shown in~\cite[Theorem~4.2]{HoskinsLehalleurOnVoevodskyMotive} that the Voevodsky motives of the coarse moduli spaces of bundles with connections and of Higgs bundles are equal in the case when the rank and the degree are coprime. One can ask whether this can be upgraded to the parabolic situation.
\subsection{Other relations with previous work}\label{sect:OtherRel}
We have already noted that our results are closely related to the results of Mellit~\cite{MellitPunctures}. One difference is that Mellit counts the weighted number of points over a finite field, while we work over a field of characteristic zero and calculate motivic classes. Mellit computes the volumes of moduli stacks of Higgs bundles but does not consider bundles with connections. Another difference is that Mellit does not fix the eigenvalues of the Higgs fields.
\looseness=-1 On the other hand, Mellit's answers are simpler as they do not involve Schiffmann's polynomials~$H_\lambda$. In fact, Mellit's simplification of Schiffmann's formula from~\cite{SchiffmannIndecomposable} has nothing to do with parabolic structures. This simplification is the content of Mellit's papers~\cite{MellitIntegrality,MellitPunctures,MR4105090}. We believe that this simplification \emph{does not go through} in the motivic case because Mellit is using the fact that the $\lambda$-ring structure on symmetric functions is special (which, roughly speaking, means that $\lambda_\bullet(xy)$ and $\lambda_\bullet(\lambda_\bullet(x))$ can be expressed in terms of $\lambda_\bullet(x)$ and $\lambda_\bullet(y)$). It is known (see~\cite{LarsenLuntsRational}) that the Grothendieck $\lambda$-ring of varieties is not a special $\lambda$-ring. We do not know whether the Grothendieck $\lambda$-ring of \emph{stacks} $\Mot(\mathbf{k})$ and its completion $\cMot(\mathbf{k})$ are special. In any case, if one replaces $\cMot(\mathbf{k})$ by some quotient of it that is a special $\lambda$-ring, then one expects that Mellit's simplifications are valid in this quotient. Examples of such quotients are the Grothendieck ring of the category of Chow motives and the maximal special quotient of $\cMot(\mathbf{k})$ (see~\cite{LarsenLuntsRational}).
The paper~\cite{ChuangDiaconescuDonagiPantev}, although conjectural, contains an alternative approach to the problem via upgrading the computation of the motivic class of Higgs bundles to the problem about motivic Pandharipande--Thomas invariants on the non-compact Calabi--Yau 3-fold associated with the spectral curve.
Note also that Mozgovoy and Schiffmann in~\cite{MozgovoySchiffmanOnHiggsBundles} consider Higgs bundles with a twist by an arbitrary line bundle of degree at least $2g-2$, where $g$ is the genus of $X$. However, they do not consider parabolic structures and do not fix eigenvalues.
Finally we note that the general philosophy of Donaldson--Thomas invariants and the approach via motivic and cohomological Hall algebras (see~\cite{KontsevichSoibelman08,KontsevichSoibelman10}) are applicable to our situation. For more details about the approach that uses motivic Hall algebras we refer the reader to~\cite[Section~1.6, Remark~3.6.3]{FedorovSoibelmans}.
\subsection{Organization of the article}
In Section~\ref{sect:ParBundles} we define the category $\mathcal{B}{un}^{\rm par}(X,D)$ of parabolic bundles and its graded stack of objects denoted by the same letter. Most of our stacks below will be stacks over $\mathcal{B}{un}^{\rm par}(X,D)$.
In Section~\ref{sect:ParPairs} we study the stack of bundles with endomorphisms. This stack is the main intermediate object in our calculations. First, we calculate the motivic classes of stacks of parabolic bundles with nilpotent endomorphisms with fixed generic type. The calculation is based on Theorem~\ref{th:Factorization}, saying that these motivic classes are products of motivic classes of similar stacks without parabolic structures and of ``local stacks'' independent of the curve. This is a motivic analogue of~\cite[Theorem~5.6]{MellitPunctures}. However, the proof in the motivic case is significantly more involved and, hopefully, more conceptual.
The motivic classes of stacks of vector bundles (without parabolic structures) with nilpotent endomorphisms are calculated in the proof of~\cite[Theorem~1.4.1]{FedorovSoibelmans}. Since the motivic classes of ``local stacks'' are independent of the curve, it is enough to calculate them for $X=\P^1$; we do this using the ideas and results of Mellit~\cite{MellitPunctures}.
Then we use the formalism of plethystic powers to calculate the motivic classes of stacks of parabolic bundles with arbitrary endomorphisms.
In Section~\ref{sect:HiggsnEigenval} we study parabolic Higgs bundles with fixed eigenvalues. If the eigenvalues are equal to zero, then the endomorphisms of a given parabolic bundle and the Higgs fields on this bundle are parameterized by vector spaces whose dimensions differ by some Euler characteristic. Thus, it is easy to relate the motivic classes of the two stacks. If the eigenvalues are not zero, then not every parabolic bundle admits a Higgs field with these eigenvalues. We give a criterion for existence of such a Higgs field in Lemma~\ref{lm:existence}. This allows us to express the motivic class of parabolic Higgs bundles with fixed eigenvalues in terms of the motivic class of the so-called isoslopy parabolic bundles with endomorphisms (see Proposition~\ref{pr:Sasha}).
The motivic class of isoslopy parabolic bundles with endomorphisms is derived from the results of Section~\ref{sect:ParPairs} with the help of a factorization formula (see Proposition~\ref{pr:IsoslProd}). This is analogous to~\cite[Proposition~3.5.1]{FedorovSoibelmans}.
In Section~\ref{sect:Stability} we use a version of Kontsevich--Soibelman factorization formula to calculate the motivic classes of stacks of semistable Higgs bundles. These depend on two sets of parameters: the eigenvalues and the stability condition. Somewhat surprisingly, these two sets come symmetrically in the answer.
Up to Section~\ref{sect:Stabilization} we work with nonpositive vector bundles, that is, vector bundles having no subbundle of positive degree. Without stability this restriction is inevitable as otherwise the moduli stacks would have infinite motivic volume. With a stability condition we can drop this technical restriction; the motivic classes of semistable parabolic Higgs bundles whose underlying vector bundles are not necessarily nonpositive are calculated in Section~\ref{sect:Stabilization}.
In Section~\ref{sect:Conn} we study the moduli stack of bundles with connections~-- with or without stability condition. The strategy for connections is similar to that for Higgs bundles except that the corresponding stacks are of finite type over $\mathbf{k}$ even without stability conditions. So we first calculate the motivic classes of bundles with connections with given eigenvalues without stability conditions and without nonpositivity assumptions and then use a version of Kontsevich--Soibelman factorization formula to calculate the motivic classes of stacks of semistable bundles with connections.
{\samepage In Section~\ref{sect:NonEmpty} we give a precise criterion for non-emptiness of moduli stacks of Higgs bundles, bundles with connections, or semistable bundles with connections. The idea is that such a~stack is non-empty if and only if its motivic class is non-zero. Using our explicit formulas we re-write each of our motivic classes as the motivic class of a stack of bundles with connections (without stability conditions). The non-emptiness of such a stack is decided using Lemma~\ref{lm:existence2} and Crawley-Boevey's result~\cite[p.~1334, Corollary]{CrawleyBoeveyIndecompPar}.
}
\section{Preliminaries}
\subsection{Conventions}
We denote by $\mathbf{k}$ a field of characteristic zero. We denote by $X$ a smooth projective geometrically connected curve over $\mathbf{k}$ (recall that geometric connectedness means that $X$ remains connected after the base change to an algebraic closure of $\mathbf{k}$). We denote by $D$ a set of $\mathbf{k}$-rational points of $X$ and by $\deg D$ the number of elements of $D$.
If $E$ is a vector space or a vector bundle, we denote by $E^\vee$ the dual vector space (resp.~vector bundle). We identify vector bundles with their sheaves of sections. If $F$ is a coherent sheaf, we denote by $\mathcal{E}{nd}(F)$ the sheaf of its endomorphisms, we have $\mathcal{E}{nd}(F)=F^\vee\otimes F$ if $F$ is a vector bundle.
\subsubsection{Partitions and nilpotent matrices} By a partition we mean a non-increasing sequence of integers $\lambda=\lambda_1\ge\lambda_2\ge\cdots$, where $\lambda_l=0$ for $l\gg0$. Set $|\lambda|:=\sum_i\lambda_i$. For partitions $\lambda$ and $\mu$ we set $\langle\lambda,\mu\rangle=\sum_i\lambda'_i\mu'_i$, where $\lambda'$ and $\mu'$ are the conjugate partitions.
We denote by $\mathfrak{gl}_{r,\mathbf{k}}$ or simply by $\mathfrak{gl}_r$ the set of $r\times r$ matrices with entries in $\mathbf{k}$. We say that a~nilpotent matrix $n\in\mathfrak{gl}_{|\lambda|}=\mathfrak{gl}_{|\lambda|,\mathbf{k}}$ is of type $\lambda$, if for all $i\ge1$ we have $\dim\Ker n^i-\dim\Ker n^{i-1}=\lambda_i$. For each partition $\lambda$ choose a nilpotent matrix $n_\lambda$ of type $\lambda$. For concreteness, we can take for $n_\lambda$ the direct sum of nilpotent Jordan blocks, where the number of blocks of size $i\times i$ is equal to $\lambda_i-\lambda_{i+1}$.
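For instance, for $\lambda=(2,1)$ we have $|\lambda|=3$, the conjugate partition is $\lambda'=(2,1)$, so $\langle\lambda,\lambda\rangle=2^2+1^2=5$, and we may take for $n_{(2,1)}$ the direct sum of one Jordan block of size $2$ and one of size $1$:
\[
n_{(2,1)}=\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix};
\]
indeed, $\dim\Ker n_{(2,1)}=2=\lambda_1$ and $\dim\Ker n_{(2,1)}^2-\dim\Ker n_{(2,1)}=3-2=1=\lambda_2$.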
We denote a sequence $(w_1,\dots,w_l,\dots)$ by $w_\bullet$.
\subsubsection{Stacks} We will be working with stacks. All our stacks will have affine stabilizers. Our stacks will be Artin stacks locally of finite type over a field except in Section~\ref{sect:Factorization}, where we will have to work with stacks whose points have stabilizers of infinite type. For a stack $\mathcal S$ we often abuse notation by writing $s\in\mathcal S$ to mean that $s$ is an object of the groupoid $\mathcal S(\mathbf{k})$, or an object of the groupoid $\mathcal S(K)$, where $K$ is an extension of $\mathbf{k}$. Following~\cite[Chapter~5]{LaumonMoretBailly} we say that a~$K'$-point~$\xi'$ of~$\mathcal S$ is \emph{equivalent} to a $K''$-point $\xi''$ of $\mathcal S$ if there is an extension $K\supset\mathbf{k}$ and $\mathbf{k}$-embeddings $K'\hookrightarrow K$, $K''\hookrightarrow K$ such that $\xi'_K$ is isomorphic to $\xi''_K$ (as an object of $\mathcal S(K)$). The corresponding equivalence classes are called \emph{points of $\mathcal S$}; the set of points is denoted by $|\mathcal S|$. See~\cite[Section~2]{FedorovSoibelmans} for more details.
We write ``morphism of stacks'' to mean ``1-morphism of stacks''. We write ``of finite type'' to mean ``of finite type over $\mathbf{k}$''.
\subsection{Motivic functions and motivic classes} Recall that in~\cite[Section~2]{FedorovSoibelmans} we defined (following~\cite[Section~1]{Ekedahl09},~\cite{Joyce07}, and~\cite{KontsevichSoibelman08}) the ring of motivic classes of Artin stacks denoted $\Mot(\mathbf{k})$.
More generally, for an Artin stack $\mathcal X$ locally of finite type over $\mathbf{k}$, we defined the $\Mot(\mathbf{k})$-module of motivic functions on $\mathcal X$ denoted $\Mot(\mathcal X)$. For a morphism $f\colon \mathcal X\to\mathcal Y$ we have the pullback homomorphism $f^*\colon \Mot(\mathcal Y)\to\Mot(\mathcal X)$. The pushforward homomorphism $f_!\colon \Mot(\mathcal X)\to\Mot(\mathcal Y)$ is defined when $f$ is of finite type. We also defined the ring of completed motivic classes, denoted $\cMot(\mathbf{k})$, and $\cMot(\mathbf{k})$-modules of completed motivic functions $\cMot(\mathcal X)$ with a~(probably non-injective) morphism $\Mot(\mathcal X)\to\cMot(\mathcal X)$. We also defined the pullbacks and the pushforwards of completed motivic functions.
We usually work with $\Mot(\mathbf{k})$ but our final results are formulated in $\cMot(\mathbf{k})$.
We defined the notion of a constructible subset of a stack. If $\mathcal X\to\mathcal Y$ is a morphism of finite type, and $\mathcal S\subset\mathcal X$ is a constructible subset, we defined the motivic function $[\mathcal S\to\mathcal Y]\in\Mot(\mathcal Y)$. Recall~\cite[Proposition~2.6.1]{FedorovSoibelmans}:
\begin{Proposition}\label{pr:MotFunEqual}
Assume that $A,B\in\Mot(\mathcal X)$ are such that for all field extensions $K\supset\mathbf{k}$ and for all $\mathbf{k}$-morphisms $\xi\colon \Spec K\to\mathcal X$ we have $\xi^*A=\xi^*B$. Then $A=B$.
\end{Proposition}
\begin{Remark}\label{rm:ArbFields}
The previous proposition is one of the reasons we have to work with arbitrary fields. Indeed, even if we start with $\mathbf{k}=\mathbb C$, to be able to apply the proposition we have to consider all finitely generated extensions of~$\mathbb C$; see, for example, Section~\ref{sect:ProofFact}.
\end{Remark}
In Section~\ref{sect:NonEmpty} we will need the following proposition.
\begin{Proposition}\label{pr:NonEmpty}
An Artin stack of finite type over $\mathbf{k}$ is non-empty if and only if its motivic class in $\cMot(\mathbf{k})$ is not equal to zero.
\end{Proposition}
\begin{proof}
The `if' direction is obvious. For the other direction assume for a contradiction that $\mathcal S$ is a non-empty Artin stack of finite type over $\mathbf{k}$ such that $[\mathcal S]=0\in\cMot(\mathbf{k})$. This means that for all $m\in\mathbb{Z}$ we have $[\mathcal S]\in F^m\Mot(\mathbf{k})$, where $F^\bullet$ is the dimensional filtration on $\Mot(\mathbf{k})$. According to~\cite[Propositions~3.5.6 and~3.5.9]{KreschStacks} every Artin stack of finite type with affine stabilizers has a~stratification by global quotients of the form $T/{\rm GL}_n$, where $T$ is a scheme. Thus, replacing $\mathcal S$ with a stratification and clearing the denominators, we may assume that~$\mathcal S$ is a disjoint union of integral affine schemes. Recall from~\cite[Section~2.5]{FedorovSoibelmans} that $\Mot(\mathbf{k})$ is the localization of the K-ring of varieties $\Mot_{\rm var}(\mathbf{k})$ with respect to the multiplicative set generated by $\mathbb{L}$ and $\mathbb{L}^i-1$, where $i>0$. Thus, multiplying $\mathcal S$ by a certain product of these elements, we may assume that the class of $\mathcal S$ in $\Mot_{\rm var}(\mathbf{k})$ belongs to the subgroup $F^{m-1}\Mot_{\rm var}(\mathbf{k})$ generated by the classes of the varieties of dimension at most $m-1$, where $m=\dim\mathcal S$. Compactifying each top-dimensional connected component of $\mathcal S$ and taking the resolution of singularities, we may assume that $\mathcal S$ is the disjoint union of smooth projective $\mathbf{k}$-varieties.
Recall that the Hodge--Deligne polynomial of a smooth projective variety $Y$ is
\[
\sum_{p,q=0}^{\dim Y}(-1)^{p+q}h^{p,q}(Y)u^pv^q,
\]
where $h^{p,q}=\dim H^q(Y,\wedge^p\Omega_Y)$. This extends uniquely to a~homomorphism $E\colon \Mot_{\rm var}(\mathbf{k})\to\mathbb{Z}[u,v]$. Clearly, $E([Y])$ has degree $2m$ if $Y$ is a smooth projective variety of dimension~$m$. On the other hand, $E([Y])$ has degree at most $2m-2$ if $Y$ is any variety of dimension at most $m-1$. We see that, on the one hand, $E([\mathcal S])$ has degree~$2m$, while on the other hand it has degree at most $2m-2$. We arrive at a contradiction.
\end{proof}
\subsection{Principal bundles and special groups} Let $H$ be an algebraic group of finite type over $\mathbf{k}$. Recall that a \emph{principal $H$-bundle} over a $\mathbf{k}$-stack $\mathcal B$ is a stack $E$ together with a schematic smooth surjective morphism of finite type $E\to\mathcal B$ and an action $a\colon H\times_\mathbf{k} E\to E$ such that $H$ acts simply transitively on the fibers of $E\to\mathcal B$. More precisely, the simple transitivity means that the morphism $(a,p_2)\colon H\times E\to E\times_\mathcal B E$ is an isomorphism, where $p_2\colon H\times E\to E$ is the projection. A principal bundle is \emph{trivial} if there is an isomorphism $E\approx H\times\mathcal B$ compatible with the action and the projection to~$\mathcal B$. If $E$ is a principal $H$-bundle over $\mathcal B$ and $\mathcal B'\to\mathcal B$ is a morphism, then one gets an induced principal $H$-bundle $E\times_\mathcal B\cB'$ over $\mathcal B'$. The group $H$ is called \emph{special} if every principal $H$-bundle~$E$ over a~scheme $B$ of finite type over~$\mathbf{k}$ is locally trivial over $B$ in the Zariski topology.
The following lemma is standard, see, e.g., \cite[Section~2.3]{BehrendDhillon}.
\begin{Lemma}\label{lm:SpecialMot}
Let $H$ be a special group and $E\to\mathcal B$ be a principal $H$-bundle, where $\mathcal B$ is an Artin stack of finite type over $\mathbf{k}$. Then in $\Mot(\mathbf{k})$ we have $[E]=[H][\mathcal B]$.
\end{Lemma}
\begin{proof}
The case when $\mathcal B$ is a scheme is easily proved by Noetherian induction. If $\mathcal B$ is a stack, then, using again~\cite[Propositions~3.5.6 and~3.5.9]{KreschStacks}, we may assume that $\mathcal B$ is a global quotient: $\mathcal B=S/{\rm GL}_n$, where $S$ is a scheme. Then we have the cartesian diagram
\begin{equation*}
\begin{CD}
E' @>>> E\\
@VVV @VVV\\
S @>>> \mathcal B,
\end{CD}
\end{equation*}
where $E'=E\times_\mathcal B S$ and $E=E'/{\rm GL}_n$. Applying~\cite[Corollary~2.2.2]{FedorovSoibelmans} with $\mathcal S=\Spec\mathbf{k}$, we get $[E']=[E][{\rm GL}_n]$ and $[S]=[\mathcal B][{\rm GL}_n]$. Next, $E'$ is a principal $H$-bundle over the scheme $S$, so $[E']=[H][S]$. Combining these equations, we easily get the required statement.
\end{proof}
Recall that a $\mathbf{k}$-group $U$ is \emph{unipotent} if it can be embedded into the group of upper triangular matrices with $1$'s on the diagonal. Every unipotent group is obtained from the additive group of a 1-dimensional vector space by iterated extensions.
\begin{Lemma}\label{lm:SpecialRad}
Let $H$ be an algebraic group of finite type over $\mathbf{k}$ and let $U$ be a unipotent subgroup. Assume that $H/U$ is special. Then $H$ is special. In particular, every unipotent group is special.
\end{Lemma}
\begin{proof}
Assume first that $H$ is a unipotent group. We claim that every principal $H$-bundle over an affine scheme is trivial. We prove this by induction on $\dim H$. If $\dim H=1$, then~$H$ is the additive group and the principal $H$-bundles over a scheme $B$ are classified by the coherent cohomology group $H^1(B,\mathcal O_B)$, which vanishes as soon as $B$ is affine. If $\dim H>1$, then there is a normal subgroup $H'\subset H$ such that $\dim H'<\dim H$ and $\dim(H/H')<\dim H$. The groups~$H'$ and~$H/H'$ are unipotent. Recall that the principal $H$-bundles over $B$ are classified by the first non-abelian \'etale cohomology group $H_{\text{\'et}}^1(B,H)$. Now the statement follows from the exact sequence $H_{\text{\'et}}^1(B,H')\to H_{\text{\'et}}^1(B,H)\to H_{\text{\'et}}^1(B,H/H')$. In particular, every unipotent group is special.
Now assume that $H/U$ is special, where $U$ is unipotent, and consider, for an affine $B$, the exact sequence $1=H_{\text{\'et}}^1(B,U)\to H_{\text{\'et}}^1(B,H)\to H_{\text{\'et}}^1(B,H/U)$. We see that a principal $H$-bundle is trivial over an affine scheme if the induced principal $H/U$-bundle is trivial. The lemma follows.
\end{proof}
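For example, the subgroup $B\subset{\rm GL}_n$ of invertible upper triangular matrices is special: its subgroup $U$ of upper triangular matrices with $1$'s on the diagonal is unipotent, and $B/U\cong{\rm GL}_1^n$ is a product of special groups and is therefore special, so Lemma~\ref{lm:SpecialRad} applies.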
\begin{Lemma}\label{lm:Zspecial}
Let $Z_\lambda$ be the centralizer of $n_\lambda$ in ${\rm GL}_{|\lambda|}$. Then $Z_\lambda$ is a special group.
\end{Lemma}
\begin{proof}
It is well-known that the quotient of $Z_\lambda$ by its unipotent radical is the product of ${\rm GL}_{r_i}$ for some $r_i\in\mathbb{Z}_{>0}$. It remains to note that ${\rm GL}_{r_i}$ are special groups and the product of special groups is special.
\end{proof}
\section{Parabolic bundles}\label{sect:ParBundles}
\subsection{Definitions and notations} Recall that $\mathbf{k}$ denotes a field of characteristic zero, $X$ stands for a smooth projective geometrically connected curve over $\mathbf{k}$, and $D$ is a set of rational points of $X$. We often assume that $D\ne\varnothing$; in this case $X$ has a divisor of degree one defined over $\mathbf{k}$. We will often have to consider sequences indexed by $D\times\mathbb{Z}_{>0}$ or by $D\times\mathbb{Z}_{\ge0}$. A typical notation will be $r_{\bullet,\bullet}$. If $x\in D$, then $r_{x,\bullet}$ stands for the sequence $r_{x,1}, r_{x,2}, \dots$ (or $r_{x,0}, r_{x,1}, \dots$).
The monoid of all sequences $r_{\bullet,\bullet}$ indexed by $D\times\mathbb{Z}_{>0}$ with terms $r_{x,j}$ in a commutative monoid~$S$ and such that $r_{x,j}=0$ for $j\gg0$ (that is, functions on $D\times\mathbb{Z}_{>0}$ with finite support) will be denoted by $S[D\times\mathbb{Z}_{>0}]$.
\begin{Definition}
A \emph{parabolic bundle} of type $(X,D)$ is a collection $(E,E_{\bullet,\bullet})$, where $E$ is a vector bundle over~$X$ and $E_{x,\bullet}$ is a flag in $E_x$ for $x\in D$:
\[
E_x=E_{x,0}\supseteq E_{x,1}\supseteq\dots\supseteq E_{x,l}\supseteq\cdots,\qquad E_{x,l}=0\quad \text{for} \ l\gg0.
\]
\end{Definition}
We have the category $\mathcal{B}{un}^{\rm par}(X,D)$ of parabolic bundles. We sometimes denote a parabolic bundle by a single boldface letter: $\mathbf E=(E,E_{\bullet,\bullet})$. A morphism from $\mathbf E$ to $\mathbf E'$ is a morphism $\phi\colon E\to E'$ such that for all $x\in D$ and $j\ge0$ we have $\phi(E_{x,j})\subset E'_{x,j}$. This category is an additive $\mathbf{k}$-linear category. The direct sum of $(E,E_{\bullet,\bullet})$ and $(E',E'_{\bullet,\bullet})$ is $(E\oplus E',E_{\bullet,\bullet}\oplus E'_{\bullet,\bullet})$. We note that the decomposition of a parabolic bundle into a direct sum of indecomposable parabolic bundles is unique up to isomorphism, while the isotypic summands are unique; the proof is similar to~\cite[Theorem~3]{Atiyah-KrullSchmidt} (see also~\cite[Proposition~3.1.2]{FedorovSoibelmans}).
We often skip $X$ and $D$ from the notation, writing $\mathcal{B}{un}^{\rm par}$ instead of $\mathcal{B}{un}^{\rm par}(X,D)$.
Abusing notation, we denote by $\mathcal{B}{un}^{\rm par}$ the stack of objects of the category $\mathcal{B}{un}^{\rm par}$. Precisely, if $S$ is a $\mathbf{k}$-scheme, then $\mathcal{B}{un}^{\rm par}(S)$ is the groupoid of collections $(E,E_{\bullet,\bullet})$, where $E$ is a vector bundle over $S\times_\mathbf{k} X$, $E_{x,\bullet}$ is a filtration by vector subbundles of the restriction of $E$ to $S\times_\mathbf{k} x$ for $x\in D$. Here by a subbundle of $E|_{S\times_\mathbf{k} x}$ we mean a subsheaf that splits off as a direct summand Zariski locally over $S$.
\subsection[Monoids $\Gamma_+$ and $\Gamma'_+$]{Monoids $\boldsymbol{\Gamma_+}$ and $\boldsymbol{\Gamma'_+}$}\label{sect:Gamma} Consider the free abelian group $\mathbb{Z}\times\mathbb{Z}[D\times\mathbb{Z}_{>0}]\times\mathbb{Z}$ and its subgroup $\Gamma$ consisting of $(r,r_{\bullet,\bullet},d)$ such that for all $x\in D$ we have $\sum_{j=1}^{\infty}r_{x,j}=r$.
Let $\Gamma_+\subset\Gamma$ be the monoid of sequences $(r,r_{\bullet,\bullet},d)$ such that
\begin{enumerate}\itemsep=0pt
\item[(i)] $r\ge0$ and for all $x\in D$ and $j>0$ we have $r_{x,j}\ge0$;
\item[(ii)] if $r=0$, then $d=0$.
\end{enumerate}
Note that it follows from these conditions that $r=0$ implies that $(r,r_{\bullet,\bullet},d)$ is the zero sequence; we denote it by 0. Define the \emph{class function}:
\[
\cl\colon \ \mathcal{B}{un}^{\rm par}\to\Gamma_+,\qquad(E,E_{\bullet,\bullet})\mapsto(\rk E,\dim E_{x,j-1}-\dim E_{x,j},\deg E).
\]
For $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma$ we set $\rk\gamma:=r$. For a parabolic bundle $\mathbf E$ we set $\rk\mathbf E:=\rk\cl(\mathbf E)$.
For $\gamma\in\Gamma_+$, we denote by $\mathcal{B}{un}^{\rm par}_\gamma$ the stack of objects of class $\gamma$; this is an open and closed substack of $\mathcal{B}{un}^{\rm par}$. It is often convenient to think of $\mathcal{B}{un}^{\rm par}=\bigsqcup\limits_{\gamma\in\Gamma_+}\mathcal{B}{un}_\gamma^{\rm par}$ as a $\Gamma_+$-graded stack. Note that $\mathcal{B}{un}^{\rm par}_0$ has a single object: the parabolic bundle of rank zero.
Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$. The projection $\mathcal{B}{un}_\gamma^{\rm par}(X,D)\to\mathcal{B}{un}_{r,d}(X)$ to the stack of rank~$r$ degree $d$ vector bundles on $X$ is schematic and of finite type (in fact, projective). Thus $\mathcal{B}{un}_\gamma^{\rm par}(X,D)$ and $\mathcal{B}{un}^{\rm par}(X,D)$ are Artin stacks locally of finite type.
Let us call a vector bundle on $X$ \emph{nonpositive} if it does not have a subbundle of positive degree. Recall that in~\cite[Section~3.2]{FedorovSoibelmans} we called a vector bundle on $X$ HN-nonnegative if its Harder--Narasimhan spectrum is nonnegative. By~\cite[Lemma~3.2.1(i)]{FedorovSoibelmans} there is an open substack $\mathcal{B}{un}^+(X)\subset\mathcal{B}{un}(X)$ classifying HN-nonnegative vector bundles. Moreover, by~\cite[Lemma~3.2.1(iii)]{FedorovSoibelmans} the substack of $\mathcal{B}{un}^+(X)$ corresponding to vector bundles of rank~$r$ and degree~$d$ is of finite type (this substack was denoted by $\mathcal{B}{un}_{r,d}^{\ge0}(X)$ in loc.~cit.).
By~\cite[Lemma~3.2.1(ii)]{FedorovSoibelmans} a bundle $E$ is HN-nonnegative if and only if it has no quotient bundles of negative degree. Thus a vector bundle $E$ is nonpositive if and only if its dual $E^\vee$ is HN-nonnegative. The following lemma is now clear.
\begin{Lemma}\label{lm:PosNeg}
There is an open substack $\mathcal{B}{un}^-(X)$ of $\mathcal{B}{un}(X)$ classifying nonpositive vector bundles. The assignment $E\mapsto E^\vee$ is an isomorphism between $\mathcal{B}{un}^-(X)$ and $\mathcal{B}{un}^+(X)$. The component of $\mathcal{B}{un}^-(X)$ corresponding to vector bundles of fixed degree and rank is of finite type.
\end{Lemma}
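For instance, if $X=\P^1_\mathbf{k}$, then $E=\mathcal O\oplus\mathcal O(-2)$ is nonpositive: any nonzero map from a line bundle $L$ to $E$ forces $\deg L\le0$, and $\deg E=-2\le0$. Dually, $E^\vee=\mathcal O\oplus\mathcal O(2)$ has Harder--Narasimhan spectrum $\{2,0\}$ and is thus HN-nonnegative. On the other hand, $\mathcal O(1)\oplus\mathcal O(-3)$ is not nonpositive, as it contains the subbundle $\mathcal O(1)$ of positive degree.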
Set
\[
\mathcal{B}{un}^{{\rm par},-}=\mathcal{B}{un}^{{\rm par},-}(X,D):=\mathcal{B}{un}^{\rm par}(X,D)\times_{\mathcal{B}{un}(X)}\mathcal{B}{un}^-(X).
\]
In other words, $\mathcal{B}{un}^{{\rm par},-}$ is the stack (and the category) of parabolic bundles on $X$ whose underlying vector bundle is nonpositive. Set also
\[
\mathcal{B}{un}^{{\rm par},-}_\gamma=\mathcal{B}{un}^{{\rm par},-}_\gamma(X,D):=\mathcal{B}{un}^{\rm par}_\gamma(X,D)\times_{\mathcal{B}{un}(X)}\mathcal{B}{un}^-(X).
\]
\begin{Lemma}\label{lm:ParGamma}
For all $\gamma\in\Gamma_+$ the stack $\mathcal{B}{un}^{{\rm par},-}_\gamma(X,D)$ is an Artin stack of finite type.
\end{Lemma}
\begin{proof}
Follows from Lemma~\ref{lm:PosNeg}.
\end{proof}
Let $\Gamma'_+$ be the submonoid of $\Gamma_+$ consisting of sequences with $d\le0$. Clearly, $\mathcal{B}{un}_\gamma^{{\rm par},-}\ne\varnothing$ only if $\gamma\in\Gamma'_+$.
\subsection[Categories over $\mathcal{B}{un}^{\rm par}$ and $\Gamma_+$-graded stacks]{Categories over $\boldsymbol{\mathcal{B}{un}^{\rm par}}$ and $\boldsymbol{\Gamma_+}$-graded stacks}\label{sect:CatsOverPar}
We will consider below many categories with a forgetful functor to $\mathcal{B}{un}^{\rm par}$ (e.g., the category of parabolic Higgs bundles). Let $\mathcal C$ be such a category and denote by $\mathcal C$ its stack of objects as well. Assume that the morphism $\mathcal C\to\mathcal{B}{un}^{\rm par}$ is of finite type. Define the stacks
\begin{gather}\label{eq:StacksOverPar}
\mathcal C_\gamma:=\mathcal C\times_{\mathcal{B}{un}^{\rm par}}\mathcal{B}{un}^{\rm par}_\gamma,\qquad\!\!
\mathcal C^-:=\mathcal C\times_{\mathcal{B}{un}^{\rm par}}\mathcal{B}{un}^{{\rm par},-},\qquad\!\!
\mathcal C_\gamma^-:=\mathcal C\times_{\mathcal{B}{un}^{\rm par}}\mathcal{B}{un}^{{\rm par},-}_\gamma.\!\!
\end{gather}
By Lemma~\ref{lm:ParGamma}, the stack $\mathcal C_\gamma^-$ is of finite type. The stack $\mathcal C=\bigsqcup\limits_{\gamma\in\Gamma_+}\mathcal C_\gamma$ is $\Gamma_+$-graded, while the stack $\mathcal C^-=\bigsqcup\limits_{\gamma\in\Gamma'_+}\mathcal C^-_\gamma$ is $\Gamma'_+$-graded. Moreover, the stack $\mathcal C^-$ is of finite type as a graded stack, that is, its graded components are stacks of finite type.
The group ring $\Mot(\mathbf{k})[\Gamma_+]$ is closely related to \emph{quantum tori} (cf.~\cite{KontsevichSoibelman08}). It has a natural basis $e_\gamma$, where~$\gamma$ ranges over $\Gamma_+$; the multiplication is given by $e_\gamma e_{\gamma'}=e_{\gamma+\gamma'}$. The reason for the multiplication in the quantum torus to be commutative is that we are actually working with 2-dimensional Calabi--Yau categories, and hence the skew-symmetrization of the Euler form vanishes (cf.~\cite{RenSoibelman}).
Let $\Mot(\mathbf{k})[[\Gamma_+]]$ be the completion of $\Mot(\mathbf{k})[\Gamma_+]$ (this can be viewed as the group of $\Mot(\mathbf{k})$-valued functions on $\Gamma_+$).
If $\mathcal D$ is a $\Gamma_+$-graded stack of finite type, we consider the generating series
\begin{equation}\label{eq:MotDT}
[\mathcal D]:=\sum_{\gamma\in\Gamma_+}[\mathcal D_\gamma]e_\gamma\in\Mot(\mathbf{k})[[\Gamma_+]].
\end{equation}
We call $[\mathcal D]$ the \emph{graded motivic class} of the stack $\mathcal D$. Recall that in~\cite{KontsevichSoibelman08} the motivic Donaldson--Thomas series was defined as an element of the completed motivic quantum torus. In our case, it associates to a $\Gamma_+$-graded stack $\mathcal D$ an infinite series in the {\it commutative} motivic quantum torus corresponding to the monoid $\Gamma_+$ endowed with the trivial bilinear form. Thus, it coincides with~$[\mathcal D]$.
Sometimes it is convenient to write $e_\gamma$ explicitly as
\[
e_\gamma=w^r\prod_{x\in D}\prod_{j=1}^{\infty}w_{x,j}^{r_{x,j}}z^d\in\mathbb{Z}\big[\big[w,w_{\bullet,\bullet},z,z^{-1}\big]\big]\subset\Mot(\mathbf{k}) \big[\big[w,w_{\bullet,\bullet},z,z^{-1}\big]\big],
\]
where $\gamma=(r,r_{\bullet,\bullet},d)$. Here $w_{\bullet,\bullet}$ stands for the collection of variables
\[
w_{x,\bullet}:=(w_{x,1},w_{x,2},\dots,w_{x,j},\dots),\qquad x\in D.
\]
The variables $w$, $z$, and $w_{x,j}$ for $x\in D$, $j\ge1$ are commuting variables.
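For example, if $D=\{x\}$ and $\gamma=(2,r_{\bullet,\bullet},-3)$ with $r_{x,1}=r_{x,2}=1$ and all other $r_{x,j}=0$, then
\[
e_\gamma=w^2w_{x,1}w_{x,2}z^{-3},\qquad e_\gamma^2=e_{2\gamma}=w^4w_{x,1}^2w_{x,2}^2z^{-6},
\]
so the multiplication in the quantum torus is just the multiplication of monomials.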
\begin{Remark}
Note that we do not fix the lengths of flags. Let us fix a function $l\colon D\to\mathbb{Z}_{>0}$ and consider only flags of length at most $l(x)$ at $x$. Let $\Gamma_{+,l}$ be the submonoid of $\Gamma_+$ consisting of sequences $(r,r_{\bullet,\bullet},d)$ such that $r_{x,j}=0$ whenever $j>l(x)$. We have the obvious projection $\Gamma_+\to\Gamma_{+,l}$, which induces a homomorphism $\Pi_l \colon \Mot(\mathbf{k})[[\Gamma_+]]\to\Mot(\mathbf{k})[[\Gamma_{+,l}]]$. Explicitly, this is just setting $w_{x,j}=0$ whenever $j>l(x)$. Let $\mathcal D$ be as in~\eqref{eq:MotDT}, then $\Pi_l[\mathcal D]=\sum\limits_{\gamma\in\Gamma_{+,l}}[\mathcal D_\gamma]e_\gamma\in\Mot(\mathbf{k})[[\Gamma_{+,l}]]$ is the graded motivic class of the substack of~$\mathcal D$ where the lengths of the flags are bounded by $l$. We see that the difference between fixing the length of flags and allowing flags of arbitrary lengths corresponds on the quantum torus side to the difference between polynomials in a finite number of variables and polynomials in an infinite number of variables.
In our applications, $[\mathcal D]$ will be symmetric in each sequence of variables $w_{x,\bullet}$ (cf.~Remark~\ref{rm:Weyl}). In this case, the difference between fixing and not fixing the lengths corresponds on the side of motivic classes to the difference between symmetric polynomials and symmetric functions, cf.~\cite[Chapter~1, Section~2]{macdonald1998symmetric}.
\end{Remark}
We emphasize that $\Mot(\mathbf{k})[[\Gamma_+]]$ is not a ring. However, $\Mot(\mathbf{k})[[\Gamma'_+]]\subset\Mot(\mathbf{k})[[w,w_{\bullet,\bullet},z^{-1}]]$ is a ring and $[\mathcal C^-]\in\Mot(\mathbf{k})[[\Gamma'_+]]$ whenever $\mathcal C$ is a stack of finite type over $\mathcal{B}{un}^{\rm par}$. This is in accordance with the general theory in~\cite{KontsevichSoibelman08}, where one fixes a strict sector in $\mathbb{R}^2$ in order to have well-defined Donaldson--Thomas invariants. We can also replace $\Mot(\mathbf{k})$ with $\cMot(\mathbf{k})$ in all the above constructions.
\subsection{Motivic zeta-functions and plethystic operations}\label{sect:Plethystic}
Following~\cite{KapranovMotivic}, for a variety $Y$ define its \emph{motivic zeta-function} by
\begin{equation}\label{eq:MotZeta}
\zeta_Y(z):=\sum_{n=0}^\infty\big[Y^{(n)}\big]z^n\in\Mot(\mathbf{k})[[z]],
\end{equation}
where $Y^{(n)}=Y^n/\Sigma_n$ is the $n$-th symmetric power of $Y$ ($\Sigma_n$ denotes the symmetric group on $n$ letters). Consider the group $(1+z\Mot(\mathbf{k})[[z]])^\times$, where the group operation is multiplication. According to~\cite[Theorem~2.3]{EkedahlLambdaStacks}, $\zeta$ can be uniquely extended to a homomorphism
\[
\zeta\colon \ \Mot(\mathbf{k})\to(1+z\Mot(\mathbf{k})[[z]])^\times\colon \ A\mapsto\zeta_A(z)
\]
such that we have $\zeta_{\mathbb{L}^n A}(z)=\zeta_A\big(\mathbb{L}^nz\big)$ for all $n\in\mathbb{Z}$ and $A\in\Mot(\mathbf{k})$. Clearly,
\[ \zeta_A(z)\equiv1+Az\pmod {z^2}.\] Thus we have equipped $\Mot(\mathbf{k})$ with a $\lambda$-ring structure. Note that $\Mot(\mathbf{k})$ is \emph{not} a~special $\lambda$-ring; in particular, $\zeta(AB)$ cannot be expressed in terms of $\zeta(A)$ and $\zeta(B)$ (so some authors would call this a pre-$\lambda$-ring structure).
According to loc.~cit., this homomorphism $\zeta$ is continuous with respect to the dimensional filtration on $\Mot(\mathbf{k})$, so it extends to a homomorphism
\begin{equation*}
\zeta\colon \ \cMot(\mathbf{k})\to\big(1+z\cMot(\mathbf{k})[[z]]\big)^\times,
\end{equation*}
which coincides with the one constructed in~\cite[Section~1.3.1]{FedorovSoibelmans}.
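For example, $\zeta_{\Spec\mathbf{k}}(z)=\sum_{n\ge0}z^n=(1-z)^{-1}$, while
\[
\zeta_{\mathbb{A}^1}(z)=\sum_{n\ge0}\mathbb{L}^nz^n=\frac1{1-\mathbb{L}z}
\]
because $\big(\mathbb{A}^1\big)^{(n)}\simeq\mathbb{A}^n$; the latter formula also illustrates the normalization $\zeta_{\mathbb{L}A}(z)=\zeta_A(\mathbb{L}z)$ for $A=1$.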
Let $\Mot(\mathbf{k})[[\Gamma_+']]^0\subset\Mot(\mathbf{k})[[\Gamma_+']]$ stand for the subgroup of series without constant term. We define the \emph{plethystic exponent}
$\Exp\colon \Mot(\mathbf{k})[[\Gamma_+']]^0\to(1+\Mot(\mathbf{k})[[\Gamma_+']]^0)^\times$ by
\[
\Exp\left(\sum_{\gamma\in\Gamma_+'} A_\gamma e_\gamma\right)=\prod_{\gamma\in\Gamma_+'}\Exp(A_\gamma e_\gamma)=
\prod_{\gamma\in\Gamma_+'}\zeta_{A_\gamma}(e_\gamma).
\]
One shows easily that this is an isomorphism of abelian groups. Denote the inverse isomorphism by $\Log$ (the \emph{plethystic logarithm}). Finally, we define the \emph{plethystic power} by
\[
\Pow\colon \ \big(1+\Mot(\mathbf{k})[[\Gamma_+']]^0\big)\times\Mot(\mathbf{k})\to1+\Mot(\mathbf{k})[[\Gamma_+']]^0\colon \
(f,A)\mapsto\Exp(A\Log(f)).
\]
We note that we can similarly define $\Exp$, $\Log$, and $\Pow$ for the completed ring $\cMot(\mathbf{k})$, which coincide with the operations defined in~\cite{FedorovSoibelmans} when $D=\varnothing$.
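For example, for $0\ne\gamma\in\Gamma'_+$ we have
\[
\Exp(e_\gamma)=\zeta_1(e_\gamma)=\frac1{1-e_\gamma},\qquad
\Exp(\mathbb{L}e_\gamma)=\zeta_{\mathbb{L}}(e_\gamma)=\frac1{1-\mathbb{L}e_\gamma},
\]
so that $\Log\big((1-e_\gamma)^{-1}\big)=e_\gamma$ and $\Pow\big((1-e_\gamma)^{-1},\mathbb{L}\big)=(1-\mathbb{L}e_\gamma)^{-1}$.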
\subsection{Parabolic subbundles and quotient bundles}\label{sect:Subobjects} Let $\mathbf E=(E,E_{\bullet,\bullet})$ be a parabolic bundle of type $(X,D)$. We say that $\mathbf E'=(E',E'_{\bullet,\bullet})$ is a \emph{strict} parabolic subbundle of $\mathbf E$ if $E'$ is a saturated subbundle of $E$ (that is, $E/E'$ is torsion free) and for all $x$ and $j$ we have $E'_{x,j}=E_{x,j}\cap E'_x$. Note that strict parabolic subbundles of $\mathbf E=(E,E_{\bullet,\bullet})$ are in bijective correspondence with saturated subbundles of $E$. Let $\mathbf E'=(E',E'_{\bullet,\bullet})$ be a strict parabolic subbundle of $\mathbf E=(E,E_{\bullet,\bullet})$; set $E'':=E/E'$. Then we have a parabolic structure on $E''$ given by $E''_{x,j}:=E_{x,j}/E'_{x,j}$. We call the parabolic bundle $\mathbf E/\mathbf E':=(E'',E''_{\bullet,\bullet})$ the \emph{quotient parabolic bundle} of $\mathbf E$. Thus, the quotient parabolic bundles of $\mathbf E$ are also in bijective correspondence with saturated subbundles of $E$. Finally, in the above situation, we say that
\begin{equation}\label{eq:ExactSeq}
0\to\mathbf E'\to\mathbf E\to\mathbf E/\mathbf E'\to0
\end{equation}
is a \emph{short exact sequence}. We also say that \emph{$\mathbf E$ is an extension of $\mathbf E/\mathbf E'$ by $\mathbf E'$.} It is clear that in this case we have $\cl(\mathbf E)=\cl(\mathbf E')+\cl(\mathbf E/\mathbf E')$. One can use the short exact sequences above to define the group $K_0(\mathcal{B}{un}^{\rm par})$; the class function $\cl$ extends to $\cl\colon K_0(\mathcal{B}{un}^{\rm par})\to\Gamma$.
\begin{Remark}The category $\mathcal{B}{un}^{\rm par}$ is not abelian. It can be extended to an abelian category by viewing vector bundles with flags as coherent sheaves on orbifold curves. Then the abelian category is the category of coherent sheaves on this orbifold. This extension will not be used in the current paper. On the other hand, if we define short exact sequences in $\mathcal{B}{un}^{\rm par}$ as sequences isomorphic to some sequence of the form~\eqref{eq:ExactSeq}, then $\mathcal{B}{un}^{\rm par}$ becomes an exact category in the sense of Quillen.
\end{Remark}
Let $\phi\colon E\to F$ be a morphism of vector bundles on $X$. We say that $\phi$ is \emph{generically an isomorphism} if it is an isomorphism at the generic point of $X$. Equivalently, $\phi$ is an isomorphism over a non-empty Zariski open subset of $X$. Another reformulation is that $\phi$ is injective and~$F/\phi(E)$ is a torsion sheaf. Sometimes one says in this situation that $E$ is a lower modification of~$F$.
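For example, for a point $p\in X$ the natural inclusion $\mathcal O_X(-p)\hookrightarrow\mathcal O_X$ is generically an isomorphism: its cokernel is the length one skyscraper sheaf at $p$, so $\mathcal O_X(-p)$ is a lower modification of $\mathcal O_X$.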
\begin{Definition}We say that a morphism of parabolic bundles $\phi\colon (E,E_{\bullet,\bullet})\to(F,F_{\bullet,\bullet})$ is \emph{generically an isomorphism} if the underlying morphism $E\to F$ is generically an isomorphism.
\end{Definition}
\begin{Lemma}\label{lm:MorPar} Let $\phi\colon \mathbf E\to\mathbf F$ be a morphism of parabolic bundles. Then there are strict parabolic subbundles $\mathbf E'\subset\mathbf E$ and $\mathbf F'\subset\mathbf F$ such that $\phi$ can be decomposed as
\[
\mathbf E\xrightarrow{\phi_1}\mathbf E/\mathbf E'\xrightarrow{\phi_2}\mathbf F'\xrightarrow{\phi_3}\mathbf F,
\]
where $\phi_1$ is the canonical projection, $\phi_2$ is generically an isomorphism, $\phi_3$ is the canonical embedding.
\end{Lemma}
\begin{proof}
Write $\mathbf E=(E,E_{\bullet,\bullet})$, $\mathbf F=(F,F_{\bullet,\bullet})$, let $\phi'\colon E\to F$ be the underlying morphism of vector bundles.
Note that $\Ker\phi'$ is a vector subbundle of $E$. Indeed, $E/\Ker\phi'$ is isomorphic to a subsheaf of $F$, so it is torsion free.
Let $\mathbf E'$ be the strict subbundle of $\mathbf E$ whose underlying vector bundle is $\Ker(\phi')$. Let $F'$ be the saturation of the image of $\phi'$ (that is, $F'$ is the unique saturated vector subbundle of $F$ containing $\phi'(E)$ such that the quotient $F'/\phi'(E)$ is a torsion sheaf). Let $\mathbf F'$ be the strict subbundle of $\mathbf F$ whose underlying vector bundle is $F'$. Now the existence of the decomposition is clear.
\end{proof}
\subsection{Generalized degrees and slopes}\label{sect:DegreeSlope} Let $A$ be a $\mathbb{Q}$-vector space (in applications it will be $\mathbf{k}$ or $\mathbb{R}$). Let $\kappa\in A$, $\zeta=\zeta_{\bullet,\bullet}\in A[D\times\mathbb{Z}_{>0}]$. Then we define the homomorphism $\deg_{\kappa,\zeta}\colon \Gamma\to A$ by
\[
\deg_{\kappa,\zeta}(r,r_{\bullet,\bullet},d)=\kappa d+\sum_{x\in D}\sum_{j>0}\zeta_{x,j}r_{x,j}.
\]
If $\rk\gamma\ne0$, we define the $(\kappa,\zeta)$-slope of $\gamma$ by $\deg_{\kappa,\zeta}\gamma/\rk\gamma$. We write $\deg_{\kappa,\zeta}\mathbf E$ for $\deg_{\kappa,\zeta}\cl(\mathbf E)$ and call $\deg_{\kappa,\zeta}\mathbf E/\rk\mathbf E$ \emph{the $(\kappa,\zeta)$-slope of $\mathbf E$}.
We say that a parabolic bundle $\mathbf E$ is \emph{$(\kappa,\zeta)$-isoslopy} if the $(\kappa,\zeta)$-slope of any direct summand of $\mathbf E$ is equal to the $(\kappa,\zeta)$-slope of $\mathbf E$.
We remark that it is common to write $\zeta\star\gamma$ for $\deg_{0,\zeta}\gamma$ and $\deg_\zeta\gamma$ for $\deg_{1,\zeta}\gamma$ but we prefer a uniform notation. We also remark that for an exact sequence~\eqref{eq:ExactSeq} we have $\deg_{\kappa,\zeta}\mathbf E=\deg_{\kappa,\zeta}\mathbf E'+\deg_{\kappa,\zeta}(\mathbf E/\mathbf E')$.
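For example, suppose that $D=\{x\}$ and that $\mathbf E$ is a parabolic bundle of rank $2$ and degree $d$ with a complete flag at $x$, so that $\cl(\mathbf E)=(2,r_{\bullet,\bullet},d)$ with $r_{x,1}=r_{x,2}=1$. Then
\[
\deg_{\kappa,\zeta}\mathbf E=\kappa d+\zeta_{x,1}+\zeta_{x,2},
\]
and the $(\kappa,\zeta)$-slope of $\mathbf E$ is $(\kappa d+\zeta_{x,1}+\zeta_{x,2})/2$.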
\subsection{Parabolic weights and stability conditions}\label{sect:ParWeights}
The following definition should be compared to~\cite[Definition~6.9]{MellitPunctures}.
\begin{Definition}\label{def:StabilityCond}
We say that a sequence $\sigma=\sigma_{\bullet,\bullet}$ of real numbers indexed by $D\times\mathbb{Z}_{>0}$ is a~\emph{sequence of parabolic weights} if for all $x\in D$ we have
\begin{equation}\label{eq:StabCond2}
\sigma_{x,1}\le\sigma_{x,2}\le\cdots
\end{equation}
and for all $x$ and $j$ we have $\sigma_{x,j}\le\sigma_{x,1}+1$.
\end{Definition}
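For example, for each $x\in D$ the constant sequence $\sigma_{x,j}=c_x$ and the sequence $\sigma_{x,j}=1-1/j$ are sequences of parabolic weights, whereas $\sigma_{x,j}=j$ is not, since $\sigma_{x,3}>\sigma_{x,1}+1$.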
To every sequence of parabolic weights we will associate a notion of stability on parabolic bundles in Definition~\ref{def:Semistable} below; accordingly, we denote the set of all sequences of parabolic weights by $\Stab=\Stab(X,D)$.
Fix $\sigma\in\Stab$.
\begin{Definition}\label{def:Semistable}
A parabolic bundle $\mathbf E$ is \emph{$\sigma$-semistable} if for all strict parabolic subbundles $\mathbf E'\subset\mathbf E$ we have
\begin{equation}\label{eq:ss}
\frac{\deg_{1,\sigma}\mathbf E'}{\rk\mathbf E'}\le\frac{\deg_{1,\sigma}\mathbf E}{\rk\mathbf E}.
\end{equation}
\end{Definition}
\begin{Remark}\label{rm:kappa}
We can similarly define semistability for any $\kappa>0$ replacing the condition $\sigma_{x,j}\le\sigma_{x,1}+1$ with $\sigma_{x,j}\le\sigma_{x,1}+\kappa$. However, scaling $\kappa$ and $\sigma$ by the same positive real number scales all the slopes by the same number, so we would get the same notion of semistability. This is why we restrict to the case $\kappa=1$ above. The case $\kappa=0$ would not yield stacks of finite type; however, the possibility of taking $\kappa=0$ will be useful below, when we work with connections (see Section~\ref{sect:StabConn}).
\end{Remark}
\begin{Proposition}\label{pr:HN}
Let $\mathbf E\in\mathcal{B}{un}^{\rm par}$ be a parabolic bundle. Then there is a unique filtration $0=\mathbf E_0\subset\mathbf E_1\subset\dots\subset\mathbf E_m=\mathbf E$ by strict parabolic subbundles such that all the quotients $\mathbf E_i/\mathbf E_{i-1}$ are $\sigma$-semistable and we have $\tau_1>\dots>\tau_m$, where $\tau_i$ is the $(1,\sigma)$-slope of $\mathbf E_i/\mathbf E_{i-1}$.
\end{Proposition}
\begin{proof}
We start with a Lemma.
\begin{Lemma}\label{lm:ModifDegree}
Let the morphism $\mathbf E\to\mathbf F$ be generically an isomorphism. Then $\deg_{1,\sigma}\mathbf E\le\deg_{1,\sigma}\mathbf F$.
\end{Lemma}
\begin{proof}
Write $\mathbf E=(E,E_{\bullet,\bullet})$ and $\mathbf F=(F,F_{\bullet,\bullet})$. Let $\phi\colon E\to F$ be the underlying morphism of vector bundles. For $x\in D$ let $d_x$ denote the dimension of the kernel of $\phi_x$. Then
\[
\deg E=\deg F-\length(F/\phi(E))\le\deg F-\sum_{x\in D}d_x.
\]
On the other hand, for all $x\in D$ and $i>0$ we have $\dim E_{x,i}\le\dim F_{x,i}+d_x$. Hence
\begin{align*}
\deg_{1,\sigma}\mathbf E& =\deg E+\sum_{x,j>0}\sigma_{x,j}(\dim E_{x,j-1}-\dim E_{x,j})\\
& =\deg E+\sum_{x\in D}\left(\sigma_{x,1}\rk E+\sum_{i>0}(\sigma_{x,i+1}-\sigma_{x,i})\dim E_{x,i}\right)\\
& \le \deg F-\sum_{x\in D} d_x+\sum_{x\in D}\left(\sigma_{x,1}\rk F+\sum_{i>0}(\sigma_{x,i+1}-\sigma_{x,i})(\dim F_{x,i}+d_x)\right)\\
& = \deg_{1,\sigma}\mathbf F+\sum_{x\in D}d_x\left(-1+\sum_{i>0}(\sigma_{x,i+1}-\sigma_{x,i})\right)\le\deg_{1,\sigma}\mathbf F.
\end{align*}
Here the last inequality holds because for every $x\in D$ the partial sums $\sum\limits_{i=1}^m(\sigma_{x,i+1}-\sigma_{x,i})=\sigma_{x,m+1}-\sigma_{x,1}$ are at most $1$ by Definition~\ref{def:StabilityCond}. Lemma~\ref{lm:ModifDegree} is proved.
\end{proof}
The rest of the proof of Proposition~\ref{pr:HN} is completely analogous to the argument in~\cite[Section~1.3]{HarderNarasimhan} in view of Lemma~\ref{lm:MorPar}.
\end{proof}
In the situation of Proposition~\ref{pr:HN} we say that the filtration is the \emph{Harder--Narasimhan filtration} of $\mathbf E$ (or \emph{HN-filtration} for short) and that $\tau_1>\dots>\tau_m$ is the \emph{$\sigma$-HN spectrum} of $\mathbf E$. We define $\mathcal{B}{un}^{{\rm par},\le\tau}$ and $\mathcal{B}{un}^{{\rm par},\ge\tau}$ as full subcategories of $\mathcal{B}{un}^{\rm par}$ whose objects are parabolic bundles with the $\sigma$-HN spectrum contained in $(-\infty,\tau]$ (resp.~$[\tau,\infty)$). We emphasize that the categories $\mathcal{B}{un}^{{\rm par},\le0}$ and $\mathcal{B}{un}^{{\rm par},-}$ should not be confused with each other: they coincide only if $\sigma=0$. The following lemma is standard.
\begin{Lemma}\label{lm:NoMorphismSS}
Let $\mathbf E$ be an object of $\mathcal{B}{un}^{{\rm par},\le\tau}$ and $\mathbf E'$ be an object of $\mathcal{B}{un}^{{\rm par},\ge\tau'}$, where $\tau<\tau'$. Then $\Hom_{\mathcal{B}{un}^{\rm par}}(\mathbf E',\mathbf E)=0$.
\end{Lemma}
\section{Parabolic pairs}\label{sect:ParPairs}
\subsection{Parabolic pairs and their generic Jordan types} The notion of parabolic pair, interesting by itself, will be used as a technical tool for studying parabolic Higgs bundles in Section~\ref{sect:HiggsnEigenval} and parabolic bundles with connections in Section~\ref{sect:Conn}. Our main results in this section are Theorem~\ref{th:MotMellitPunctures} and Corollary~\ref{cor:Pairs}. They give explicit answers for graded motivic classes of stacks of nilpotent parabolic pairs and parabolic pairs respectively. We will also give a simplified answer in the case $X=\P^1$ in Section~\ref{sect:P1ManyPts}.
\begin{Definition}
A \emph{parabolic pair} $(\mathbf E,\Psi)$ consists of a parabolic bundle
\[
\mathbf E=(E,E_{\bullet,\bullet})\in\mathcal{B}{un}^{\rm par}(X,D)
\]
and an endomorphism $\Psi$ of $\mathbf E$ (that is, an endomorphism of $E$ preserving each $E_{x,j}$). If $\Psi$ is nilpotent we will speak about \emph{nilpotent parabolic pairs}.
\end{Definition}
Parabolic pairs as well as nilpotent parabolic pairs form an additive $\mathbf{k}$-linear category denoted $\mathcal{P}{air}=\mathcal{P}{air}(X,D)$ (resp.~$\mathcal{P}{air}^{\rm nilp}=\mathcal{P}{air}^{\rm nilp}(X,D)$). Again, we abuse notation by denoting the stacks of objects by the same symbols. We define $\mathcal{P}{air}_\gamma$, $\mathcal{P}{air}_\gamma^{\rm nilp}$, $\mathcal{P}{air}_\gamma^-$, $\mathcal{P}{air}_\gamma^{{\rm nilp},-}$ etc.\ following the general construction~\eqref{eq:StacksOverPar} of Section~\ref{sect:CatsOverPar}.
The forgetful morphisms $\mathcal{P}{air}\to\mathcal{B}{un}^{\rm par}$ and $\mathcal{P}{air}^{\rm nilp}\to\mathcal{B}{un}^{\rm par}$ are schematic and of finite type. In particular, $\mathcal{P}{air}^-$ and $\mathcal{P}{air}^{{\rm nilp},-}$ are $\Gamma'_+$-graded Artin stacks of finite type (in the graded sense).
Let $K\supset\mathbf{k}$ be an extension and $(E,E_{\bullet,\bullet},\Psi)\in\mathcal{P}{air}^{\rm nilp}(K)$. If we trivialize $E$ at the generic point of $X_K=X\times_\mathbf{k}\Spec K$, $\Psi$ becomes a $\rk E\times\rk E$ nilpotent matrix. Its Jordan type is a partition $\lambda$ of $\rk E$. Thus we get a locally closed stratification of $\mathcal{P}{air}^{\rm nilp}$ according to the generic Jordan type of the nilpotent endomorphism
\[
\mathcal{P}{air}^{\rm nilp}(X,D)=\bigsqcup_\lambda\mathcal{P}{air}^{\rm nilp}(X,D,\lambda),
\]
where the disjoint union is over all partitions. In other words, $\mathcal{P}{air}^{\rm nilp}(X,D,\lambda)$ classifies nilpotent parabolic pairs such that the endomorphism is generically conjugate to $n_\lambda$ (that is, conjugate to $n_\lambda$ at the generic point of $X$, or, equivalently, at each point of a non-empty Zariski open subset of $X$). We remark that any endomorphism generically conjugate to $n_\lambda$ is necessarily nilpotent.
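For example, $n_{(1^r)}=0$, so $\mathcal{P}{air}^{\rm nilp}(X,D,(1^r))$ is the stratum where $\Psi$ vanishes generically; since $E$ is locally free, such $\Psi$ vanishes identically, and this stratum is identified with the stack of parabolic bundles of rank $r$. At the other extreme, $\mathcal{P}{air}^{\rm nilp}(X,D,(r))$ is the open stratum where $\Psi^{r-1}\ne0$, that is, where $\Psi$ is regular nilpotent at the generic point.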
Again, we define the $\Gamma'_+$-graded stacks $\mathcal{P}{air}^{{\rm nilp},-}(X,D,\lambda)$ using the general formalism~\eqref{eq:StacksOverPar} of Section~\ref{sect:CatsOverPar}.
\subsection{Motivic classes of parabolic bundles with nilpotent endomorphisms}\label{sect:NilpEnd} Our goal in this section is to calculate the graded motivic class (that is, the motivic Donaldson--Thomas series, cf.~\cite{KontsevichSoibelman08})
\begin{align}
\big[\mathcal{P}{air}^{{\rm nilp},-}(X,D,\lambda)\big]& =\sum_{\gamma\in\Gamma'_+}\big[\mathcal{P}{air}_\gamma^{{\rm nilp},-}(X,D,\lambda)\big]e_\gamma\nonumber\\
& = w^{|\lambda|}\sum_{\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma'_+}\big[\mathcal{P}{air}_\gamma^{{\rm nilp},-}(X,D,\lambda)\big]\prod_{x,j}w_{x,j}^{r_{x,j}}z^d.\label{eq:Omega}
\end{align}
The partition $\lambda$ is fixed until the end of Section~\ref{sect:Factorization}. This graded motivic class is calculated as follows. First, in Section~\ref{sect:Factorization} we write this graded motivic class as the product of a term that is independent of the parabolic structures, and the ``local'' terms independent of the curve. The first term has been calculated in~\cite{FedorovSoibelmans}. Since the local terms are independent of the curve, it is enough to calculate them when $X=\P^1$. More precisely, we will work with $\P^1$ and two points with parabolic structures (that is, $D=\{0,\infty\}$) but we will calculate the sum over all partitions (Section~\ref{sect:P1}). This part is very similar to~\cite[Section~5.4]{MellitPunctures}. In Section~\ref{sect:MotEnd} we give the explicit answer for the graded motivic classes under consideration. Using the formalism of plethystic powers we then easily calculate the class of parabolic bundles with not necessarily nilpotent endomorphisms.
\begin{Remark}\label{rm:Weyl}
We will see in Theorem~\ref{th:MotMellitPunctures} that~\eqref{eq:Omega} is a symmetric function in $w_{x,\bullet}$ for each $x\in D$. This can be explained as follows. Note that the ``Weyl group'' $W:=\prod\limits_{x\in D}\Sigma_{\infty}$ acts on~$\Gamma_+$ and~$\Gamma'_+$ in the obvious way (here $\Sigma_\infty$ is the inductive limit of the permutation groups $\Sigma_l$). Using the commutativity of the motivic Hall algebra of the category of representations of the Jordan quiver (the quiver with one vertex and one loop), one can easily show that the motivic classes $\big[\mathcal{P}{air}_\gamma^{{\rm nilp},-}(X,D,\lambda)\big]$ are $W$-invariant. Thus, we can re-write~\eqref{eq:Omega} as
\[
w^{|\lambda|}\mathop{\sum\nolimits'}\limits_{\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma'_+}
\big[\mathcal{P}{air}_\gamma^{{\rm nilp},-}(X,D,\lambda)\big]\prod_{x\in D} m_{r_{x,\bullet}}(w_{x,\bullet})z^d,
\]
where the summation is only over $r_{\bullet,\bullet}$ such that we have $r_{x,j}\ge r_{x,j+1}$ for all $x$ and $j$ (that is, $r_{x,\bullet}$ is a partition of~$|\lambda|$). Here, for a partition $\mu$, $m_\mu$ is the symmetric function equal to the sum of all monomials whose ordered list of exponents is~$\mu$.
\end{Remark}
\subsection[Factorization of graded motivic classes of stacks of nilpotent parabolic pairs]{Factorization of graded motivic classes of stacks\\ of nilpotent parabolic pairs}\label{sect:Factorization}
In this section we factorize~\eqref{eq:Omega} as the product of the global part (depending only on $X$ but not on $D$) and the local parts corresponding to points of $D$ (but independent of $X$). This is a~motivic version of~\cite[Theorem~5.6]{MellitPunctures}. We follow the same ideas, though some parts of Mellit's proof do not work in the motivic case and must be replaced by different arguments. On the other hand, we were able to simplify some parts of Mellit's proof, in particular, by working with stacks.
Note that $\mathcal{P}{air}^{{\rm nilp},-}(X,\varnothing,\lambda)$ classifies pairs $(E,\Psi)$, where~$E$ is a nonpositive vector bundle on~$X$ and $\Psi$ is an endomorphism of $E$ generically conjugate to $n_\lambda$; there are no parabolic structures.
\begin{Lemma}\label{lm:invertible}
The motivic class $\big[\mathcal{P}{air}_\gamma^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]\in w^{|\lambda|}\Mot(\mathbf{k})\big[\big[z^{-1}\big]\big]$ is invertible.
\end{Lemma}
\begin{proof}
It is enough to show that the degree 0 part $\big[\mathcal{P}{air}_{|\lambda|,0}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]$ is invertible in $\Mot(\mathbf{k})$. Note that a nonpositive vector bundle of degree 0 on $\P^1$ is necessarily trivial. Thus, this degree zero part classifies pairs $(E,\Psi)$, where $E$ is a trivial vector bundle and $\Psi$ is a constant endomorphism conjugate to $n_\lambda$. It is easy to see that this stack is isomorphic to the classifying stack of the centralizer $Z_\lambda$ of $n_\lambda$. By Lemma~\ref{lm:Zspecial} $Z_\lambda$ is special. Now it follows from Lemma~\ref{lm:SpecialMot} or~\cite[Lemma~2.2.3]{FedorovSoibelmans} that $\big[\mathcal{P}{air}_{|\lambda|,0}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]=1/[Z_\lambda]$.
\end{proof}
We need some notation. Note that $\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\infty,\lambda\big)\big]\in w^{|\lambda|}\Mot(\mathbf{k})\big[\big[w_{\infty,\bullet},z^{-1}\big]\big]$. For $x\in D$ let
\[
\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\infty,\lambda\big)\big]_x\in w^{|\lambda|}\Mot(\mathbf{k})\big[\big[w_{x,\bullet},z^{-1}\big]\big]
\]
denote the result of replacing $w_{\infty,\bullet}$ by $w_{x,\bullet}$ in this series.
\begin{Theorem}\label{th:Factorization}
We have in $\Mot(\mathbf{k})[[\Gamma'_+]]$:
\begin{equation*}
\big[\mathcal{P}{air}^{{\rm nilp},-}(X,D,\lambda)\big]=\big[\mathcal{P}{air}^{{\rm nilp},-}(X,\varnothing,\lambda)\big]
\prod_{x\in D}
\frac{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\infty,\lambda\big)\big]_x}
{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]}.
\end{equation*}
\end{Theorem}
The proof of the theorem occupies the rest of Section~\ref{sect:Factorization}; it is based on the local study of stacks in the formal neighborhood of~$D$.
The main idea of the proof is very simple. Let us assume that $D=\{x\}$ is a single rational point of $X$. In Section~\ref{sect:LocStacks} we will define the stack $\mathcal{P}{air}^{{\rm loc},{\rm fl}}$ classifying triples $(F,\Phi,F_\bullet)$, where $F$ is a rank $|\lambda|$ vector bundle over the formal completion of $X$ at $x$, $\Phi$ is a nilpotent endomorphism of $F$ generically conjugate to $n_\lambda$, $F_\bullet$ is a flag in $F_x$ preserved by $\Phi(x)$. We have an obvious restriction morphism $\mathcal{P}{air}^{{\rm nilp},-}(X,x,\lambda)\to\mathcal{P}{air}^{{\rm loc},{\rm fl}}$. We will see in Lemma~\ref{lm:fiber} that this restriction morphism has constant fiber. Thus, one is tempted to write the graded motivic class $\big[\mathcal{P}{air}^{{\rm nilp},-}(X,x,\lambda)\big]$ as the product of the graded motivic class of this fiber and of $\mathcal{P}{air}^{{\rm loc},{\rm fl}}$. This would quickly lead to the proof of the theorem. Unfortunately, $\mathcal{P}{air}^{{\rm loc},{\rm fl}}$ is not an Artin stack as its points have inertia groups of infinite type, so its motivic class does not make sense. The major part of the proof consists of going around this problem.
Let us give an overview of the proof. In Section~\ref{sect:jets} we define and study the schemes of jets into $\mathfrak{gl}_{|\lambda|}$. In Section~\ref{sect:LocStacks} we study the local stacks; they are essentially the quotients of the schemes of jets by the group of jets of ${\rm GL}_{|\lambda|}$. In Section~\ref{sect:StratThm} we re-write the theorem as a~statement about motivic classes of graded components. In Section~\ref{sect:Res} we study the fibers of the localization map; this is the main part of the proof. We complete the proof in Section~\ref{sect:ProofFact}.
\subsubsection{Jets}\label{sect:jets} We will denote the non-archimedean local field $\mathbf{k}((t))$ by $\mathbb K$ and its ring of integers $\mathbf{k}[[t]]$ by $\mathbb O$. The order of pole at $t=0$ gives rise to a valuation map $\val\colon \mathbb K\to\mathbb{Z}\cup\{-\infty\}$, where $\val(0)=-\infty$. Clearly, $\val$ extends to $\mathfrak{gl}_{r,\mathbb K}$ as the maximum of valuations of all matrix elements. Let $J(Y)$ denote the jet scheme of a scheme $Y$ (this is a scheme of infinite type), and let $J_N(Y)$ denote the scheme of order $N-1$ jets. In particular, $J_1(Y)=Y$.
For an algebraic group $G$ of finite type over $\mathbf{k}$ we have the jet group $G_\mathbb O:=J(G)$ and the jet group of finite type $J_N(G)$. The $N$-th congruence subgroup $G^{(N)}$ is the kernel of the projection $G_\mathbb O\to J_N(G)$. We also have the ind-group of loops $G_\mathbb K$ containing $G_\mathbb O$. Let $\Delta:=\Spec\mathbb O$ be the formal disc and $\mathring\Delta:=\Spec\mathbb K$ be its generic point (the punctured formal disc). Also set $\Delta_N:=\Spec\mathbf{k}[[t]]/\big(t^N\big)$. The groups ${\rm GL}_{r,\mathbb O}$, ${\rm GL}_{r,\mathbb K}$, and $J_N({\rm GL}_r)$ are the groups of automorphisms of the trivial vector bundles on $\Delta$, $\mathring\Delta$, and $\Delta_N$ respectively. For more details on the jet and loop groups we refer the reader to~\cite{SorgerLecturesBundles}.
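For example, $J_N\big(\mathbb{A}^1\big)\simeq\mathbb{A}^N$, an order $N-1$ jet in $\mathbb{A}^1$ being a truncated power series $a_0+a_1t+\dots+a_{N-1}t^{N-1}$; similarly, ${\rm GL}_r^{(N)}=\big\{g\in {\rm GL}_{r,\mathbb O}\colon g\equiv1\pmod{t^N}\big\}$.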
Set $r=|\lambda|$. Consider the stratification of the nilpotent cone of $\mathfrak{gl}_r$ by the orbits of the adjoint action of ${\rm GL}_r$: the nilpotent cone is the disjoint union $\bigsqcup\limits_{\mu\vdash r}\mathcal O_\mu$, where $\mathcal O_\mu$ is the adjoint orbit containing $n_\mu$. Let $\overline\mathcal O_\mu$ denote the Zariski closure of $\mathcal O_\mu$. Set
\[
J(\lambda):=J\big(\overline\mathcal O_\lambda\big)-\bigcup_{\mathcal O_\mu\subset\overline\mathcal O_\lambda-\mathcal O_\lambda}J\big(\overline\mathcal O_\mu\big).
\]
Note that $J(\lambda)$ parameterizes morphisms $\Delta\to\mathfrak{gl}_r$ such that the image of the generic point of~$\Delta$ is in $\mathcal O_\lambda$, that is, jets that are generically conjugate to~$n_\lambda$.
\begin{Definition}
We say that a loop $g\in {\rm GL}_{r,\mathbb K}(\mathbf{k})={\rm GL}_r(\mathbb K)$ is \emph{kernel-strict} if $g^{-1}n_\lambda g\in\mathfrak{gl}_{r,\mathbb O}$ and $g^{-1}$ induces an isomorphism between the $\mathbb O$-modules $\Ker n_\lambda\otimes_\mathbf{k}\mathbb O$ and $\Ker\big(g^{-1}n_\lambda g\big)$.
\end{Definition}
\begin{Remark} Note that our definition is a little different from~\cite[Definition~3.8]{MellitPunctures}. Mellit's definition of kernel-strictness depends also on a choice of a matrix $\theta$. In Mellit's terminology, our $g$ is kernel-strict for $\theta=g^{-1}n_\lambda g$. Note also that the results of Mellit we are using here and below are formulated over finite fields but are valid over any field, the proofs being the same.
\end{Remark}
Let $\Phi$ be a $\mathbf{k}$-point of $J(\lambda)$. Then there is a kernel-strict $g\in {\rm GL}_{r,\mathbb K}(\mathbf{k})={\rm GL}_r(\mathbb K)$ such that $\Phi=g^{-1}n_\lambda g$. Set $\deg\Phi:=\val(\det g)$. The existence of such a $g$ and the independence of the degree of the choice of $g$ are proved in~\cite[Lemma~3.7]{MellitPunctures}. It also follows from loc.~cit.\ that the degree is nonnegative.
If $K\supset\mathbf{k}$ is a field extension, we similarly define the degree of a $K$-point of $J(\lambda)$. The degree is compatible with field extensions. Thus we get a stratification of the set of points of $J(\lambda)$: $|J(\lambda)|=\bigsqcup\limits_{d\ge0}J_d(\lambda)$. Let $\pi_N\colon J(\lambda)\to J_N\big(\overline\mathcal O_\lambda\big)$ be the truncation map. Set $J_{d,N}(\lambda):=\pi_N(J_d(\lambda))$.
\begin{Proposition}\label{pr:jets} For a nonnegative integer $d$ and a partition $\lambda$ there is a positive integer $N(d,\lambda)$ such that for $N>N(d,\lambda)$ we have
\begin{enumerate}\itemsep=0pt
\item[$(i)$] For all $\Phi$ in $J_d(\lambda)$ there is a kernel-strict $g$ with $\val(g)<N/2$, $\val\big(g^{-1}\big)<N/2$ such that $g\Phi g^{-1}=n_\lambda$.
\item[$(ii)$] $J_d(\lambda)=\pi_N^{-1}(J_{d,N}(\lambda))$.
\item[$(iii)$] Any two points in the same fiber of the projection $J_d(\lambda)\to J_{d,N}(\lambda)$ are conjugate by an element of ${\rm GL}_{r,\mathbb O}$.
\end{enumerate}
\end{Proposition}
\begin{proof}
By~\cite[Lemma~3.7]{MellitPunctures} there is $N_0\ge1$ (depending on $\lambda$ and $d$) such that for all $\Phi\in J_d(\lambda)$ there is a kernel-strict $g$ with $\val(g)<N_0$ such that $g\Phi g^{-1}=n_\lambda$. Then $\val(\det g)=d$. Set $N_1:=rN_0+d$. Then by Cramer's rule $\val\big(g^{-1}\big)<N_1$. We also have $\val(g)<N_1$. Take $N(d,\lambda):=4N_1$. With this choice of $N(d,\lambda)$ part~(i) of the proposition is clear.
Note that (ii) says that the degree of an infinite jet depends only on its $N$-th truncation if $N>4N_1$. Thus to prove (ii) we need to show that if $\Phi\in J_d(\lambda)$ and $\Phi'\in J(\lambda)$ is such that $\Phi'\equiv\Phi\pmod{t^{4N_1}}$, then $\Phi'\in J_d(\lambda)$. Choose a kernel-strict $g$ with $\val(g)<N_1$, $\val\big(g^{-1}\big)<N_1$ such that $g\Phi g^{-1}=n_\lambda$. Then $g\Phi'g^{-1}\equiv n_\lambda\pmod{t^{2N_1}}$. We need a lemma.
\begin{Lemma}\label{lm:conjugate}
There is $g'\in {\rm GL}_r^{(2N_1)}$ such that $g'g\Phi'g^{-1}(g')^{-1}=n_\lambda$.
\end{Lemma}
\begin{proof}
Set $\Phi'':=g\Phi'g^{-1}$ and $N_2:=2N_1$. Write $\Phi''\equiv n_\lambda+\Phi_{N_2}t^{N_2}\pmod{t^{N_2+1}}$. Since $\Phi''\in J\big(\overline\mathcal O_\lambda\big)$, $\Phi_{N_2}$ belongs to the tangent space to $\overline\mathcal O_\lambda$ at $n_\lambda$, which is naturally identified with $[n_\lambda,\mathfrak{gl}_r]$. Thus there is $g_{_{N_2}}\in\mathfrak{gl}_r$ such that $[n_\lambda,g_{_{N_2}}]=\Phi_{N_2}$. Then $\big(1+g_{_{N_2}}t^{N_2}\big)\Phi''\big(1+g_{_{N_2}}t^{N_2}\big)^{-1}\equiv n_\lambda\pmod{t^{N_2+1}}$.
Repeating this process we find $g_{_{N_2+1}},g_{_{N_2+2}},\ldots\in\mathfrak{gl}_r$ such that for $j>0$ we have
\begin{gather*}
\big(1+g_{_{N_2+j}}t^{N_2+j}\big)\cdots\big(1+g_{_{N_2}}t^{N_2}\big)\Phi''
\big(1+g_{_{N_2}}t^{N_2}\big)^{-1}\cdots\big(1+g_{_{N_2+j}}t^{N_2+j}\big)^{-1}\\
\qquad{} \equiv n_\lambda\pmod{t^{N_2+j+1}}.
\end{gather*}
It remains to take $g'=\prod\limits_{j=0}^\infty\big(1+g_{_{N_2+j}}t^{N_2+j}\big)$. Lemma~\ref{lm:conjugate} is proved.
\end{proof}
We return to the proof of Proposition~\ref{pr:jets}. Note that $g^{-1}g'g\in {\rm GL}_{r,\mathbb O}$. It is easy to see that the set of kernel-strict loops is invariant under the multiplication by points of ${\rm GL}_{r,\mathbb O}$ on the right, so $g'g=g\big(g^{-1}g'g\big)$ is kernel-strict. Clearly, $\val(\det(g'g))=\val(\det g)=d$, so (ii) follows. Further, $g^{-1}g'g$ conjugates $\Phi'$ to $\Phi$, so (iii) follows as well. Proposition~\ref{pr:jets} is proved.
\end{proof}
For every $d\ge0$ we fix $N(d,\lambda)$ satisfying the conditions of the above proposition.
\begin{Definition}\label{def:stabilized}
For $N>N(d,\lambda)$ we call any jet in $J_{d,N}(\lambda)$ \emph{stabilized} and call $d$ its \emph{degree}.
\end{Definition}
According to the proposition, the degree of a stabilized jet is well-defined and any two lifts of a stabilized jet to an infinite jet are conjugate by an element of ${\rm GL}_{r,\mathbb O}$. Note that for every jet $\Phi\in J_d(\lambda)$ its truncation $\pi_N(\Phi)$ is stabilized for $N$ large enough.
\subsubsection{Local stacks}\label{sect:LocStacks} Consider the quotient stack $\mathcal{P}{air}^{\rm loc}=\mathcal{P}{air}^{\rm loc}(\lambda):=J(\lambda)/{\rm GL}_{r,\mathbb O}$, where ${\rm GL}_{r,\mathbb O}$ acts by conjugation. We drop $\lambda$ from the notation as it is fixed until the end of Section~\ref{sect:Factorization}. Since the degree function on $J(\lambda)$ is ${\rm GL}_{r,\mathbb O}$-invariant, we get the degree function on the points of $\mathcal{P}{air}^{\rm loc}$.
Note that $\mathcal{P}{air}^{\rm loc}$ classifies pairs $(F,\Phi)$, where $F$ is a rank $r=|\lambda|$ vector bundle over~$\Delta$, $\Phi$~is a nilpotent endomorphism of $F$ generically conjugate to $n_\lambda$. This follows from the fact that every vector bundle on $\Delta$ is trivial. It also follows that every $K$-point of $\mathcal{P}{air}^{\rm loc}$ is isomorphic to a point of the form $(\mathbb O^r,\Phi)$, where $\Phi\in\mathfrak{gl}_{r,\mathbb O}$.
We emphasize that $\mathcal{P}{air}^{\rm loc}$ is not an Artin stack (its isotropy groups are not of finite type).
Define $\mathcal{P}{air}^{\rm loc}_N:=J_N\big(\overline\mathcal O_\lambda\big)/J_N({\rm GL}_r)$; this is an Artin stack of finite type. The points of~$\mathcal{P}{air}^{\rm loc}_N$ are the pairs $(F,\Phi)$ where $F$ is a vector bundle on~$\Delta_N$, $\Phi$~is an endomorphism of $F$ such that if we trivialize $F$, $\Phi$ becomes a jet with values in $\overline\mathcal O_\lambda$. We say that $(F,\Phi)$ is \emph{stabilized} if $\Phi$ is stabilized in the sense of Definition~\ref{def:stabilized}. Note that this does not depend on the trivialization of~$F$. If $(F,\Phi)$ is stabilized, then we have a well-defined notion of the degree of~$(F,\Phi)$. Explicitly, we can lift $(F,\Phi)$ to a point $(\mathbb O^r,\overline\Phi)$ of $\mathcal{P}{air}^{\rm loc}$, and the degree of $(F,\Phi)$ is equal to the degree of $\overline\Phi\in\mathfrak{gl}_{r,\mathbb O}$.
We define the stack $\mathcal{P}{air}^{{\rm loc},{\rm fl}}$ as the stack classifying triples $(F,\Phi,F_\bullet)$, where $(F,\Phi)$ is a point of $\mathcal{P}{air}^{\rm loc}$, $F_\bullet$ is a flag in the fiber $F_0$ preserved by $\Phi(0)$. We define the stack $\mathcal{P}{air}^{{\rm loc},{\rm fl}}_N$ as the stack classifying triples $(F,\Phi,F_\bullet)$, where $(F,\Phi)$ is a point of $\mathcal{P}{air}^{\rm loc}_N$, $F_\bullet$ is a flag in $F_0$ preserved by $\Phi(0)$.
\subsubsection{Preparation for the proof of Theorem~\ref{th:Factorization}}\label{sect:StratThm} We will assume that $D=x$ is a single rational point of $X$. This will unburden the notation; the general case is proved similarly. Thus we want to prove that
\[
\big[\mathcal{P}{air}^{{\rm nilp},-}(X,x,\lambda)\big]\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]=\big[\mathcal{P}{air}^{{\rm nilp},-}(X,\varnothing,\lambda)\big]\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\infty,\lambda\big)\big]_x.
\]
Equating the graded components, we see that this reduces to the following proposition.
\begin{Proposition}\label{pr:factorization}
Let $d$ be a nonpositive integer, $r_\bullet=(r_1,r_2,\dots)$ be a sequence of nonnegative integers such that $\sum_ir_i=r=|\lambda|$. Then we have in $\Mot(\mathbf{k})$:
\begin{gather*}
\sum_{d'+d''=d}\big[\mathcal{P}{air}^{{\rm nilp},-}_{r,r_\bullet,d'}(X,x,\lambda)\times\mathcal{P}{air}^{{\rm nilp},-}_{r,d''}\big(\P^1,\varnothing,\lambda\big)\big]\\
\qquad{} =
\sum_{d'+d''=d}\big[\mathcal{P}{air}^{{\rm nilp},-}_{r,d'}(X,\varnothing,\lambda)\times\mathcal{P}{air}^{{\rm nilp},-}_{r,r_\bullet,d''}\big(\P^1,\infty,\lambda\big)\big].
\end{gather*}
\end{Proposition}
This proposition will be proved in Section~\ref{sect:ProofFact}. We emphasize that the sum is over all $d',d''\in\mathbb{Z}$ with $d'+d''=d$ but the terms are non-zero only if $d',d''\in[d,0]$. We note that the RHS is manifestly independent of $x$. Thus, the LHS is independent of $x$ as well.
\subsubsection[The restriction to the formal neighborhood of $x$]{The restriction to the formal neighborhood of $\boldsymbol{x}$}\label{sect:Res}
We keep the simplifying assumption that $D=\{x\}$ is a single point; we write $r_\bullet$ instead of~$r_{x,\bullet}$. Fix $\gamma=(r,r_{\bullet},d)\in\Gamma_+'$. For $x\in X$ let $\mathcal O_{X,x}$ be the local ring of $x$ and $\hat\mathcal O_{X,x}$ be its formal completion. Set $\Delta_x:=\Spec\hat\mathcal O_{X,x}$. Choose a formal coordinate at $x$, use it to identify $\Delta_x$ with $\Delta$ and the $N$-th infinitesimal neighborhood $\Delta_{x,N}$ of $x$ with $\Delta_N$. Consider the restriction morphism
\begin{equation}\label{eq:res_x_fl}
\loc_x^{\rm fl}\colon \ \mathcal{P}{air}_{r,r_\bullet,d}^{{\rm nilp},-}(X,x,\lambda)\to\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet},\qquad (E,\Psi,E_{x,\bullet})
\mapsto(E|_{\Delta_x},\Psi|_{\Delta_x},E_{x,\bullet}).
\end{equation}
Similarly we have a morphism
\begin{equation}\label{eq:res_x}
\loc_x \colon \ \mathcal{P}{air}_{r,d}^{{\rm nilp},-}(X,\varnothing,\lambda)\to\mathcal{P}{air}^{\rm loc},\qquad (E,\Psi)
\mapsto(E|_{\Delta_x},\Psi|_{\Delta_x}).
\end{equation}
Our nearest goal is to describe the fibers of these morphisms. For a nonpositive integer $e$, let ${\mathcal F}{ib}_e(X,x)$ denote the open substack of $\mathcal{P}{air}^{{\rm nilp},-}_{r,e}(X,\varnothing,\lambda)$ consisting of $(E,\Psi)$ such that $\Psi$ is conjugate to $n_\lambda$ at $x$. Let $\overline{{\mathcal F}{ib}}_e(X,x)$ denote the stack of triples $(E,\Psi,s)$, where $(E,\Psi)$ is a~point of ${\mathcal F}{ib}_e(X,x)$, $s$ is a trivialization of $E$ over $\Delta_x$ such that $\Psi=n_\lambda$ in this trivialization. Recall that in Section~\ref{sect:LocStacks} we defined the notion of degree for the points of $\mathcal{P}{air}^{\rm loc}$.
\begin{Lemma}\label{lm:InfFiber}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The fiber of $\loc_x^{\rm fl}$ over $(F,\Phi,F_\bullet)$ is isomorphic to $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$, where $e$ is the degree of~$(F,\Phi)$.
\item[$(ii)$]
Similarly, the fiber of $\loc_x$ over $(F,\Phi)$ is isomorphic to $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$, where $e$ is the degree of $(F,\Phi)$.
\end{enumerate}
\end{Lemma}
\begin{proof}
We prove (ii) first. Fix a trivialization of $F$ on the formal disc $\Delta$. Then $\Phi$ becomes an element of $\mathfrak{gl}_{r,\mathbb K}$ and we choose a kernel-strict $g$ such that $g\Phi g^{-1}=n_\lambda$. Then $\val(\det g)=e$.
Denote the fiber under consideration by $\overline{{\mathcal F}{ib}}$. The fiber can be described as the stack of triples $(E,\Psi,s)$, where $E$ is a nonpositive vector bundle, $\Psi$ is an endomorphism, $s$ is the trivialization of~$E$ over $\Delta_x$ such that in this trivialization we have $\Psi|_{\Delta_x}=\Phi$. Note that such $\Psi$ is automatically conjugate to $n_\lambda$ at the generic point of $X$.
If $(E,\Psi,s)$ is a point of $\overline{{\mathcal F}{ib}}$, then $E|_{X-x}$ is trivialized over the punctured disc $\mathring\Delta_x$, and we use the $g$ chosen above to glue $E|_{X-x}$ with the trivial bundle $\mathbf{k}^r\times\Delta_x$ on $\mathring\Delta_x$ (we recall that $g$ can be viewed as an automorphism of the trivial vector bundle on $\mathring\Delta_x$). We obtain a new vector bundle $E'$ on $X$ with an isomorphism $E'|_{X-x}\simeq E|_{X-x}$ and a trivialization over $\Delta_x$. Thus $\Psi$ gives rise to an endomorphism $\Psi'$ of $E'|_{X-x}$. It is easy to derive from the definition of $g$ that in the given trivialization we have $\Psi'|_{\mathring\Delta_x}=n_\lambda$. Thus $\Psi'$ extends to $x$ and, moreover, in the given trivialization of $E'$ over $\Delta_x$ we have $\Psi'|_{\Delta_x}=n_\lambda$.
Note that $E'$ is nonpositive. Indeed, $\Ker\Psi$ is nonpositive as a subbundle of $E$. Since $g$ is kernel-strict, the isomorphism between $\Ker\Psi$ and $\Ker\Psi'$ extends from $X-x$ to $X$. Thus $\Ker\Psi'$ is also nonpositive. But by~\cite[Proposition~5.3]{MellitPunctures} this implies that $E'$ is nonpositive as well.
Next, we have an isomorphism between $\wedge^rE$ and $\wedge^rE'$ over $X-x$, and it has a zero of order $\val(\det g)$ at $x$. Thus $\deg E'=\deg\wedge^rE'=\deg\wedge^rE+\val(\det g)=\deg E+e=d+e$.
We have constructed a morphism $\overline{{\mathcal F}{ib}}\to\overline{{\mathcal F}{ib}}_{d+e}(X,x)$. Conversely, given a point $(E',\Psi',s')$ of $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$, we use $g$ and $s'$ to construct a new bundle $E$ with an isomorphism to $E'$ over $X-x$ and a trivialization over $\Delta_x$. Then $\Psi'|_{X-x}$ gives rise to an endomorphism of $E|_{X-x}$ and we check that it extends to $x$ and, moreover, in the trivialization of $E$ over $\Delta_x$ we have $\Psi|_{\Delta_x}=\Phi$. Now it is easy to see that the two constructions are inverse to each other. This proves (ii).
Now (i) follows from the cartesian diagram
\[
\begin{CD}
\mathcal{P}{air}_{r,r_\bullet,d}^{{\rm nilp},-}(X,x,\lambda)@>\loc_x^{\rm fl}>>\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet}\\
@VVV @VVV\\
\mathcal{P}{air}_{r,d}^{{\rm nilp},-}(X,\varnothing,\lambda)@>\loc_x>>\mathcal{P}{air}^{\rm loc},
\end{CD}
\]
where the vertical arrows correspond to forgetting the flags.
\end{proof}
Consider now the compositions of~\eqref{eq:res_x_fl} and~\eqref{eq:res_x} with restrictions to the $N$-th infinitesimal neighborhood of $x$.
\begin{equation}\label{eq:res_x_flN}
\loc_{x,N}^{\rm fl}\colon \ \mathcal{P}{air}_{r,r_\bullet,d}^{{\rm nilp},-}(X,x,\lambda)\to\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet,N},\qquad (E,\Psi,E_{x,\bullet})
\mapsto(E|_{\Delta_{x,N}},\Psi|_{\Delta_{x,N}},E_{x,\bullet}).
\end{equation}
Similarly we have a morphism
\begin{equation}\label{eq:res_xN}
\loc_{x,N}\colon \ \mathcal{P}{air}_{r,d}^{{\rm nilp},-}(X,\varnothing,\lambda)\to\mathcal{P}{air}^{\rm loc}_N,\qquad (E,\Psi)
\mapsto(E|_{\Delta_{x,N}},\Psi|_{\Delta_{x,N}}).
\end{equation}
Let $(F,\Phi,F_\bullet)$ be a point of $\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet,N}$ and assume that $(F,\Phi)$ is stabilized. By Definition~\ref{def:stabilized} we can find $\Phi'\in\mathfrak{gl}_{r,\mathbb O}$ such that $(\mathbb O^r,\Phi')$ lifts $(F,\Phi)$ and $N>N(e,\lambda)$, where $e$ is the degree of $\Phi'$ and $N(e,\lambda)$ is the integer from Proposition~\ref{pr:jets}, which was fixed just before Definition~\ref{def:stabilized}. Choose a kernel-strict $g\in {\rm GL}_{r,\mathbb K}$ such that $g\Phi'g^{-1}=n_\lambda$. Denote by $Z^{(N)}_g$ the intersection $Z_\mathbb O\cap\big(g^{-1}{\rm GL}_r^{(N)}g\big)\subset {\rm GL}_{r,\mathbb K}$, where $Z=Z_\lambda$ is the centralizer of $n_\lambda$ in ${\rm GL}_{|\lambda|}$ (cf.~Lemma~\ref{lm:Zspecial}). Recall that by Proposition~\ref{pr:jets}(i) we may assume that the orders of the poles of $g$ and $g^{-1}$ are less than $N/2$, so we have $Z^{(N)}_g\subset Z^{(1)}$; the latter is a pro-unipotent group. Clearly, $Z_\mathbb O$ (and thus~$Z^{(N)}_g$ as well) acts on $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$ by changing the trivialization of $E$ on $\Delta_x$.
\begin{Lemma}\label{lm:fiber} \quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Let $(F,\Phi,F_\bullet)\in\mathcal{P}{air}_{r_\bullet,N}^{{\rm loc},{\rm fl}}$ be such that $(F,\Phi)$ is stabilized, choose $g\in {\rm GL}_{r,\mathbb K}$ as in the previous paragraph. Then the fiber of $\loc_{x,N}^{\rm fl}$ over $(F,\Phi,F_\bullet)$ is isomorphic to $\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(N)}_g$, where $e$ is the degree of $(F,\Phi)$.
\item[$(ii)$] Similarly, the fiber of $\loc_{x,N}$ over $(F,\Phi)$ is isomorphic to $\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(N)}_g$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Let us prove (ii); the proof of (i) is completely analogous. Denote the fiber under consideration by ${\mathcal F}{ib}$ and let $\overline{{\mathcal F}{ib}}$ be the fiber of $\loc_x$ over $(\mathbb O^r,\Phi')$. Then we have a restriction morphism $\overline{{\mathcal F}{ib}}\to{\mathcal F}{ib}$. It follows from the fact that $(F,\Phi)$ is stabilized and from Proposition~\ref{pr:jets}(iii) that this morphism is surjective. On the other hand, it is easy to see that two points of $\overline{{\mathcal F}{ib}}$ map to the same point of ${\mathcal F}{ib}$ if and only if they differ by the action of an element of $gZ_\mathbb O g^{-1}\cap {\rm GL}_r^{(N)}$. Moreover, according to Lemma~\ref{lm:InfFiber}, $\overline{{\mathcal F}{ib}}\simeq\overline{{\mathcal F}{ib}}_{d+e}(X,x)$. One checks that under this isomorphism, the action of $gZ_\mathbb O g^{-1}\cap {\rm GL}_r^{(N)}$ on $\overline{{\mathcal F}{ib}}$ corresponds to the action of $Z^{(N)}_g$ on $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$.
\end{proof}
We need to calculate the motivic class of this fiber. Set $Z_g:=Z_\mathbb O/Z^{(N)}_g$; this is a group of finite type.
\begin{Lemma}\label{lm:MotFiber} We have in $\Mot(\mathbf{k})$
\[
\big[\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(N)}_g\big]=[{\mathcal F}{ib}_{d+e}(X,x)]/[Z_g].
\]
\end{Lemma}
\begin{proof}For large $M$ we have $Z^{(M)}\subset Z_g^{(N)}$ and this subgroup is normal in $Z_\mathbb O$.
\emph{Claim.} The groups $Z^{(N)}_g/Z^{(M)}$, $Z_\mathbb O/Z^{(M)}$, and $Z_\mathbb O/Z^{(N)}_g$ are special.
\emph{Proof of the claim.} Recall that $Z_g^{(N)}\subset Z^{(1)}$. The group $Z^{(N)}_g/Z^{(M)}$ is special, since every unipotent group is special by Lemma~\ref{lm:SpecialRad}. Next, the quotient of $Z_\mathbb O/Z^{(M)}$ by the unipotent subgroup $Z^{(1)}/Z^{(M)}$ is equal to $Z$ and the statement follows from Lemmas~\ref{lm:SpecialRad} and~\ref{lm:Zspecial}. A~similar argument shows that $Z_\mathbb O/Z^{(N)}_g$ is special.\qed
We continue with the proof of Lemma~\ref{lm:MotFiber}. Next, $\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(M)}$ is a $Z_\mathbb O/Z^{(M)}$-principal bundle over ${\mathcal F}{ib}_{d+e}(X,x)$. Since $Z_\mathbb O/Z^{(M)}$ is a special group, we get by Lemma~\ref{lm:SpecialMot}
\[
\big[\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(M)}\big]=\big[Z_\mathbb O/Z^{(M)}\big][{\mathcal F}{ib}_{d+e}(X,x)].
\]
Similarly, $\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(M)}$ is a $Z^{(N)}_g/Z^{(M)}$-principal bundle over $\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(N)}_g$ and we get
\[
\big[\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(M)}\big]=\big[Z^{(N)}_g/Z^{(M)}\big]\big[\overline{{\mathcal F}{ib}}_{d+e}(X,x)/Z^{(N)}_g\big].
\]
Finally, $Z_\mathbb O/Z^{(M)}$ is a $Z^{(N)}_g/Z^{(M)}$-principal bundle over $Z_g$ and we have
\[
\big[Z_\mathbb O/Z^{(M)}\big]=\big[Z^{(N)}_g/Z^{(M)}\big][Z_g].
\]
The lemma follows from these three equations.
\end{proof}
\begin{Lemma}\label{lm:unique}
Let $N$ be an integer larger than $N(j,\lambda)$ for all $j=0,\dots,-d$. Assume that the fiber of~\eqref{eq:res_xN} over $(F,\Phi)$ is non-empty. Then $(F,\Phi)$ is stabilized. A similar statement holds for the fibers of~\eqref{eq:res_x_flN}.
\end{Lemma}
\begin{proof}
We prove the statement about the fibers of~\eqref{eq:res_xN}, the other statement being analogous. Let $(E,\Psi)$ be a point of the fiber, then the fiber of $\loc_x$ over $(E|_{\Delta_x},\Psi|_{\Delta_x})$ is non-empty. Then by Lemma~\ref{lm:InfFiber} this fiber is isomorphic to $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$, where $e$ is the degree of $(E|_{\Delta_x},\Psi|_{\Delta_x})$ (recall that $e\ge0$). Since $\overline{{\mathcal F}{ib}}_{d+e}(X,x)$ classifies nonpositive vector bundles, we get $d+e\le0$. Thus $N>N(e,\lambda)$ and we see that $(F,\Phi)$ is stabilized.
\end{proof}
\subsubsection{Proof of Proposition~\ref{pr:factorization}}\label{sect:ProofFact}
Let us take an integer $N$ larger than $N(j,\lambda)$ for all $j=0,\dots,-d$. It is enough to show that the motivic functions
\[
A:=\sum_{d'+d''=d}\big[\mathcal{P}{air}^{{\rm nilp},-}_{r,r_\bullet,d'}(X,x,\lambda)\times\mathcal{P}{air}^{{\rm nilp},-}_{r,d''}\big(\P^1,\varnothing,\lambda\big)
\to\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet,N}\times\mathcal{P}{air}^{\rm loc}_N\big]
\]
and
\[
B:=\sum_{d'+d''=d}\big[\mathcal{P}{air}^{{\rm nilp},-}_{r,r_\bullet,d'}\big(\P^1,\infty,\lambda\big)\times\mathcal{P}{air}^{{\rm nilp},-}_{r,d''}(X,\varnothing,\lambda)
\to\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet,N}\times\mathcal{P}{air}^{\rm loc}_N\big]
\]
are equal. The morphisms are $\loc_{x,N}^{\rm fl}\times\loc_{\infty,N}$ and $\loc_{\infty,N}^{\rm fl}\times\loc_{x,N}$ respectively.
Let $K\supset\mathbf{k}$ be an extension and let $\xi$ be a $K$-point of the stack $\mathcal{P}{air}^{{\rm loc},{\rm fl}}_{r_\bullet,N}\times\mathcal{P}{air}^{\rm loc}_N$ represented by $((F,\Phi,F_\bullet),(F',\Phi'))$. By Proposition~\ref{pr:MotFunEqual} it is enough to show that $\xi^*A=\xi^*B$. Using base change, we may assume that $K=\mathbf{k}$. According to Lemma~\ref{lm:unique}, these motivic functions are zero unless $(F,\Phi)$ and $(F',\Phi')$ are stabilized.
Let us lift $(F,\Phi)$ and $(F',\Phi')$ to $(\mathbb O^r,\overline\Phi)$ and $(\mathbb O^r,\overline\Phi')$, where $\overline\Phi,\overline\Phi'\in\mathfrak{gl}_{r,\mathbb O}$. Choose kernel-strict $g,g'\in {\rm GL}_{r,\mathbb K}$ such that $g$, $g^{-1}$, $g'$, and $(g')^{-1}$ have poles of order less than $N/2$ and such that $g\overline\Phi g^{-1}=g'\overline\Phi'(g')^{-1}=n_\lambda$.
Then, according to Lemmas~\ref{lm:fiber} and~\ref{lm:MotFiber} (applied to $X$ and $\P^1$) we get
\begin{align*}
\xi^*A& =\frac{\sum\limits_{d'+d''=d}[{\mathcal F}{ib}_{d'+e}(X,x)][{\mathcal F}{ib}_{d''+e'}(\P^1,\infty)]}{[Z_g][Z_{g'}]}\\
& = \frac{\sum\limits_{d'+d''=d+e+e'}[{\mathcal F}{ib}_{d'}(X,x)][{\mathcal F}{ib}_{d''}(\P^1,\infty)]}{[Z_g][Z_{g'}]}.
\end{align*}
Similarly,
\begin{align*}
\xi^*B& =\frac{\sum\limits_{d'+d''=d}[{\mathcal F}{ib}_{d'+e}(\P^1,\infty)][{\mathcal F}{ib}_{d''+e'}(X,x)]}{[Z_g][Z_{g'}]}\\
& = \frac{\sum\limits_{d'+d''=d+e+e'}[{\mathcal F}{ib}_{d'}(\P^1,\infty)][{\mathcal F}{ib}_{d''}(X,x)]}{[Z_g][Z_{g'}]}.
\end{align*}
We see that $\xi^*A=\xi^*B$. This completes the proof of Proposition~\ref{pr:factorization} and thus the proof of Theorem~\ref{th:Factorization}.\qed
\begin{Remark} We emphasize that we only worked with motivic classes of Artin stacks of finite type. It seems plausible that one can define motivic classes of stacks like $\big[\,\overline{{\mathcal F}{ib}}_d(X,x)\big]$ using some ideas of motivic integration. This would significantly simplify our argument. Unfortunately, we were not able to develop such a formalism.
\end{Remark}
\subsection[Case of $\P^1$ and two points]{Case of $\boldsymbol{\P^1}$ and two points}\label{sect:P1}
Consider the case when $X=\P^1$, $D=\{0,\infty\}$.
\begin{Proposition}\label{pr:P1}
We have in $\Mot(\mathbf{k})[[\Gamma'_+]]\subset\Mot(\mathbf{k})\big[\big[w,w_{0,\bullet},w_{\infty,\bullet},z^{-1}\big]\big]$
\begin{equation}\label{eq:P10infty}
\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\{0,\infty\}\big)\big]=\Exp\left(w\frac{\sum_{j=1}^\infty\sum_{j'=1}^\infty w_{0,j}w_{\infty,j'}}{(\mathbb{L}-1)(1-z^{-1})}\right),
\end{equation}
where $\Exp$ is the plethystic exponent defined in Section~{\rm \ref{sect:Plethystic}}.
\end{Proposition}
\begin{proof}
We note that the proof in~\cite[Section~5.4]{MellitPunctures} goes through in the motivic case as well. The only difference is that Mellit uses the Hall algebra of the Jordan quiver (that is, the Hall algebra of the category of vector spaces with nilpotent endomorphisms); this Hall algebra has to be replaced with the similar motivic Hall algebra in our case.
Let us give more details. In~\cite[Section~5]{FedorovSoibelmans}, to any smooth projective geometrically connected curve over $\mathbf{k}$ we associated the Hall algebra of the category of coherent sheaves on this curve, denoted by~$\mathcal H$. Let us take the curve to be $\P^1_\mathbf{k}$ and let $\mathcal H_0$ be the subalgebra of torsion sheaves supported at $0\in\P^1_\mathbf{k}$. Since such sheaves are identified with finite dimensional representations of the Jordan quiver, we can view $\mathcal H_0$ as the Hall algebra of the category of such representations. Of course, we could have taken any other curve and any rational point.
Following Mellit, we re-write the LHS of~\eqref{eq:P10infty} in terms of products of certain elements of this algebra. This part of Mellit's proof is geometric, so it is easily carried over to the motivic case. The rest of the proof is a calculation in this Hall algebra; the necessary identities in the Hall algebra are easily derived from results of \cite[Section~5]{FedorovSoibelmans}.
\end{proof}
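As a simple consistency check, let us extract the rank one part of~\eqref{eq:P10infty}. A nilpotent endomorphism of a line bundle is zero, a line bundle of degree $d$ on $\P^1$ is nonpositive exactly when $d\le0$, and $\big[\mathcal{B}{un}_{1,d}\big(\P^1\big)\big]=1/(\mathbb{L}-1)$. Hence the coefficient of $ww_{0,1}w_{\infty,1}$ on the left-hand side equals
\[
\sum_{d\le0}\frac{z^d}{\mathbb{L}-1}=\frac1{(\mathbb{L}-1)\big(1-z^{-1}\big)},
\]
which is exactly the corresponding coefficient of the linear term of the plethystic exponent on the right-hand side.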
\begin{Remark}
Note that the RHS of~\eqref{eq:P10infty} can be written without plethystic exponents as follows (cf.~\cite[Lemma~5.7.3]{FedorovSoibelmans})
\begin{equation*}
\prod_{d=0}^{-\infty}\prod_{j=1}^\infty\prod_{j'=1}^\infty\left(
1+\sum_{i\ge1}\frac{\mathbb{L}^{i(i-1)}}{\big(\mathbb{L}^i-1\big)\cdots\big(\mathbb{L}^i-\mathbb{L}^{i-1}\big)}z^{id}w^iw_{0,j}^iw_{\infty,j'}^i
\right).
\end{equation*}
\end{Remark}
\subsection{Motivic modified Macdonald polynomials} For a commutative unital ring $A$, let $\Sym_A[w_\bullet]$ be the ring of symmetric functions with coefficients in $A$ in variables $w_\bullet$. In this section we define axiomatically the images of modified Macdonald polynomials in $\Sym_{\Mot(\mathbf{k})[[z]]}[w_\bullet]$.
Consider the modified Macdonald polynomials $\tilde H_\lambda(w_\bullet;q,z)\in\Sym_{\mathbb{Z}[q,z]}[w_\bullet]$. For a definition see, for example,~\cite[Definition~2.5]{MellitPunctures}. It is not clear from this definition that the coefficients of $\tilde H_\lambda(w_\bullet;q,z)$ are integers, but this is well-known (see, e.g.,~\cite{HaglundEtAlOnMacdonaldPoly} and references therein). Note that~$\tilde H_\lambda$ is a symmetric function so, formally speaking, it is not a polynomial.
We denote by $\tilde H^{\rm mot}_\lambda(w_\bullet;z)\in\Sym_{\Mot(\mathbf{k})[z]}[w_\bullet]$ the image of the corresponding modified Macdonald polynomial under the homomorphism $\Sym_{\mathbb{Z}[q,z]}[w_\bullet]\to\Sym_{\Mot(\mathbf{k})[z]}[w_\bullet]$ sending $q$ to $\mathbb{L}$; we call these images \emph{motivic modified Macdonald polynomials}.
Define the motivic Hall--Littlewood polynomials as the specialization
\[ H_\lambda^{\rm mot}(w_\bullet):=\tilde H_\lambda^{\rm mot}(w_\bullet;0).\] Thus $H_\lambda^{\rm mot}$ is the image of the usual Hall--Littlewood polynomial under the homomorphism $\mathbb{Z}[q,w_\bullet]\to\Mot(\mathbf{k})[w_\bullet]$ sending $q$ to $\mathbb{L}$. The motivic Hall--Littlewood polynomials can be interpreted as follows: let $\Fl_\lambda$ stand for the scheme of all flags in $\mathbf{k}^{|\lambda|}$ preserved by $n_\lambda$. Then $\Fl_\lambda$ is graded by the type of the flag. It is not difficult to check that we have
\[
H^{\rm mot}_\lambda(w_\bullet)=[\Fl_\lambda]\in\Mot(\mathbf{k})[w_\bullet]
\]
(cf.~\cite[Theorem~2.12, Corollary~2.13]{MellitPunctures}).
It follows from~\cite[Chapter~3, equation~(2.7)]{macdonald1998symmetric} that $H_\lambda^{\rm mot}$ form a basis of the $\Mot(\mathbf{k})$-module $\Sym_{\Mot(\mathbf{k})}[w_\bullet]$. Thus $\tilde H_\lambda^{\rm mot}$ also form a basis of $\Sym_{\Mot(\mathbf{k})[[z]]}[w_\bullet]$.
\begin{Proposition}\label{pr:axiomMacdonald}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The motivic modified Macdonald polynomials $\tilde H_\lambda^{\rm mot}$ satisfy the following properties
\begin{enumerate}\itemsep=0pt
\item[$(i)$]
\begin{equation}\label{eq:scalar}
\Exp\left(\frac{\sum\limits_{j=1}^\infty\sum\limits_{j'=1}^\infty w_{0,j}w_{\infty,j'}}{(\mathbb{L}-1)(1-z)}\right)=
\sum_\lambda a_\lambda(z)\tilde H_\lambda^{\rm mot}(w_{0,\bullet};z)\tilde H_\lambda^{\rm mot}(w_{\infty,\bullet};z),
\end{equation}
where $a_\lambda$ are invertible elements of $\Mot(\mathbf{k})[[z]]$.
\item[$(ii)$] $\tilde H_\lambda^{\rm mot}=\sum\limits_{\mu\colon \mu'\prec\lambda'}b_{\lambda\mu}H_\mu^{\rm mot}$, where $b_{\lambda\mu}$ are some elements of $\Mot(\mathbf{k})[[z]]$ such that $b_{\lambda\lambda}$ are invertible. $($Here $\mu'$ and $\lambda'$ stand for the conjugate partitions, $\prec$ is the usual order on partitions.$)$
\item[$(iii)$] $\tilde H^{\rm mot}_\lambda(1_\bullet;z)=1$, where $1_\bullet$ stands for the sequence $(1,0,\dots,0,\dots)$.
\item[$(iv)$] $\tilde H^{\rm mot}_\lambda$ is homogeneous in $w_\bullet$ of degree $|\lambda|$.
\end{enumerate}
\item[$(b)$] The motivic modified Macdonald polynomials are uniquely determined by properties $(i)$--$(iv)$.
\item[$(c)$] Additionally we have
\begin{equation}\label{eq:scalarLengh}
a_\lambda(z)=\frac1{\prod\limits_{h\in\Hook(\lambda)}\big(\mathbb{L}^{a(h)}-z^{l(h)+1}\big)\big(\mathbb{L}^{a(h)+1}-z^{l(h)}\big)},
\end{equation}
where $\Hook(\lambda)$ stands for the set of hooks of $\lambda$, $a(h)$ and $l(h)$ stand for the armlength and the leglength of the hook $h$ respectively.
\end{enumerate}
\end{Proposition}
\begin{proof}
(a) It is enough to check the corresponding properties for the usual modified Macdonald polynomials $\tilde H_\lambda\in\Sym_{\mathbb{Z}[q,z]}[w_\bullet]$ and Hall--Littlewood polynomials $H_\lambda\in\Sym_{\mathbb{Z}[q]}[w_\bullet]$. To prove property~(ii) we note first that according to~\cite[Definition~2.5]{MellitPunctures} we have
\[
\tilde H_\lambda[(q-1)w_\bullet]=\sum_{\mu\colon \mu'\prec\lambda'}c_{\lambda\mu}(q,z)m_{\mu'}(w_\bullet),
\]
where $[(q-1)w_\bullet]$ stands for the plethystic action as in~\cite[Section~2.1]{MellitPunctures}. Recalling that the Hall--Littlewood polynomials are $z=0$ specializations of the modified Macdonald polynomials, we get
\[
H_\lambda[(q-1)w_\bullet]=\sum_{\mu\colon \mu'\prec\lambda'}c_{\lambda\mu}(q,0)m_{\mu'}(w_\bullet).
\]
Now it is easy to see that an analogue of property~(ii) holds in $\Sym_{\mathbb{Q}[q,z]}[w_\bullet]$:
we can write $\tilde H_\lambda=\sum\limits_{\mu\colon \mu'\prec\lambda'}b'_{\lambda\mu}H_\mu$, where $b'_{\lambda\mu}$ are some elements of $\mathbb{Q}[q,z]$.
Next, $H_\lambda$ form a basis in $\Sym_{\mathbb{Z}[q]}[w_\bullet]$ (by~\cite[Chapter~3, equation~(2.7)]{macdonald1998symmetric}), so $\tilde H_\lambda$ form a basis in $\Sym_{\mathbb{Z}[q][[z]]}[w_\bullet]$. It follows that $b'_{\lambda\mu}\in\mathbb{Z}[q][[z]]$ and $b'_{\lambda\lambda}$ are invertible in this ring. Now property~(ii) follows.
It is sufficient to prove properties~(iii) and~(iv) in $\Sym_{\mathbb{Q}(q,z)}[w_\bullet]$. Property~(iii) is clear from~\cite[Definition~2.5]{MellitPunctures}. Property~(iv) follows, for example, from the definition of $\tilde H$ given in~\cite{GarsiaHaiman1996remarkable}.
We first prove an analogue of property~(i) in $\Sym_{\mathbb{Q}(q,z)}[w_\bullet]$. Recall from loc.~cit.~that $\Sym_{\mathbb{Q}(q,z)}[w_\bullet]$ carries the $q,z$-scalar product $(\cdot,\cdot)_{q,z}$, for which
\[
\Exp\left(\frac{\sum\limits_{j=1}^\infty\sum\limits_{j'=1}^\infty w_{0,j}w_{\infty,j'}}{(q-1)(1-z)}\right)
\]
is the reproducing kernel. This means that if $f_\lambda(w_\bullet;q,z)$ is any graded $\mathbb{Q}(q,z)$-basis in \linebreak $\Sym_{\mathbb{Q}(q,z)}[w_\bullet]$ indexed by partitions and $f^\vee_\lambda(w_\bullet;q,z)$ is the dual basis with respect to $(\cdot,\cdot)_{q,z}$, then
\begin{equation}\label{eq:RepKer}
\Exp\left(\frac{\sum\limits_{j=1}^\infty\sum\limits_{j'=1}^\infty w_{0,j}w_{\infty,j'}}{(q-1)(1-z)}\right)=\sum_\lambda
f_\lambda(w_{0,\bullet};q,z)f^\vee_\lambda(w_{\infty,\bullet};q,z).
\end{equation}
Next, by property~(iv) the basis $\tilde H_\lambda(w_\bullet;q,z)$ is a graded basis. Thus, by~\cite[Proposition~2.7]{MellitPunctures} the dual of $\tilde H_\lambda(w_\bullet;q,z)$ is equal to $a_\lambda(q,z)\tilde H_\lambda(w_\bullet;q,z)$ for some $a_\lambda(q,z)\in\mathbb{Q}(q,z)$. Further, in~\cite[Section~2.4]{MellitPunctures} it is shown that
\begin{equation}\label{eq:alambda}
a_\lambda(q,z)=\frac1{\prod\limits_{h\in\Hook(\lambda)}\big(q^{a(h)}-z^{l(h)+1}\big)\big(q^{a(h)+1}-z^{l(h)}\big)}.
\end{equation}
It is clear from this formula that $a_\lambda(q,z)\in\mathbb{Z}(q)[[z]]$. Now property~(i) follows from~\eqref{eq:RepKer}.
The condition in part~(c) follows from~\eqref{eq:alambda}.
Now we prove part~(b). Since $\tilde H_\lambda^{\rm mot}$ form a basis of $\Sym_{\Mot(\mathbf{k})[[z]]}[w_\bullet]$, there is a unique $\Mot(\mathbf{k})[[z]]$-linear scalar product on $\Sym_{\Mot(\mathbf{k})[[z]]}[w_\bullet]$ such that $\langle\tilde H_\lambda^{\rm mot},\tilde H_\mu^{\rm mot}\rangle=\delta_{\lambda\mu}\frac1{a_\lambda}$. (This is the scalar product such that the LHS of~\eqref{eq:scalar} is the reproducing kernel for this product).
Let $H'_\lambda=H'_\lambda(w_\bullet;z)$ be symmetric functions satisfying the conditions of part~(a); we need to show that $H'_\lambda=\tilde H_\lambda^{\rm mot}$. Applying condition (ii), we see that we can write
\begin{equation}\label{eq:triangle}
H'_\lambda=\sum_{\mu\colon \mu'\prec\lambda'}c_{\lambda\mu}\tilde H_\mu^{\rm mot},
\end{equation}
where $c_{\lambda\lambda}$ is invertible. Condition~(i) shows that we have
\[
\sum_\lambda a_\lambda(z)\tilde H_\lambda^{\rm mot}(w_{0,\bullet};z)\tilde H_\lambda^{\rm mot}(w_{\infty,\bullet};z)=
\sum_\lambda a'_\lambda(z)H'_\lambda(w_{0,\bullet};z)H'_\lambda(w_{\infty,\bullet};z),
\]
with invertible $a'_\lambda(z)$. Recalling that $H'_\lambda$ form a graded basis in $\Sym_{\Mot(\mathbf{k})[[z]]}[w_\bullet]$, we see that $\langle H'_\lambda,H'_\mu\rangle=\delta_{\lambda\mu}\frac1{a'_\lambda}$. Indeed, $\tilde H_\lambda^{\rm mot}$ and $a_\lambda\tilde H_\lambda^{\rm mot}$ are dual bases for the scalar product, so the above equality shows that $H'_\lambda$ and $a'_\lambda H'_\lambda$ are dual bases as well.
We prove that $H'_\lambda=\tilde H_\lambda^{\rm mot}$ by induction on the conjugate partition $\lambda'$ with respect to $\prec$. Thus we assume that $H'_\mu=\tilde H_\mu^{\rm mot}$ whenever $\mu'\prec\lambda'$. Taking the scalar product of~\eqref{eq:triangle} with $\tilde H_\mu^{\rm mot}=H'_\mu$, we see that $c_{\lambda\mu}\langle\tilde H_\mu^{\rm mot},\tilde H_\mu^{\rm mot}\rangle=0$ whenever $\mu\ne\lambda$. We see that $c_{\lambda\mu}=0$ so that $H'_\lambda=c_{\lambda\lambda}\tilde H_\lambda^{\rm mot}$. Now condition (iii) implies that $c_{\lambda\lambda}=1$.
\end{proof}
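\begin{Remark}
For instance, for $\lambda=(1)$ the unique hook has $a(h)=l(h)=0$, so~\eqref{eq:scalarLengh} gives
\[
a_{(1)}(z)=\frac1{(1-z)(\mathbb{L}-1)},
\]
while for $\lambda=(2)$ the two hooks have $(a(h),l(h))=(1,0)$ and $(0,0)$, so that
\[
a_{(2)}(z)=\frac1{(\mathbb{L}-z)\big(\mathbb{L}^2-1\big)(1-z)(\mathbb{L}-1)}.
\]
\end{Remark}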
\subsection{Explicit formulas for the graded motivic classes of nilpotent pairs}\label{sect:MotEnd}
Now we are ready to give the precise formula for $\big[\mathcal{P}{air}^{{\rm nilp},-}(X,D,\lambda)\big]$. Recall that for a partition~$\lambda$ we defined $J_\lambda^{\rm mot}(z),H_\lambda^{\rm mot}(z)\in\cMot(\mathbf{k})[[z]]$ in~\cite[Section~1.3.2]{FedorovSoibelmans}. In this paper, we will denote them by $J_{\lambda,X}^{\rm mot}(z)$ and $H_{\lambda,X}^{\rm mot}(z)$ respectively to emphasize that they depend on the curve $X$ and to ensure that they are not confused with motivic modified Macdonald polynomials $\tilde H_\lambda^{\rm mot}(w_\bullet;z)$ and with motivic Hall--Littlewood polynomials $H_\lambda^{\rm mot}(w_\bullet)$. Denote by $g$ the genus of~$X$.
\begin{Theorem}\label{th:MotMellitPunctures} We have in $\cMot(\mathbf{k})[[\Gamma'_+]]$
\[
\big[\mathcal{P}{air}^{{\rm nilp},-}(X,D,\lambda)\big]=w^{|\lambda|}\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_{\lambda,X}^{\rm mot}\big(z^{-1}\big)
H_{\lambda,X}^{\rm mot}\big(z^{-1}\big)\prod_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big).
\]
\end{Theorem}
\begin{proof}
According to~\cite[Theorem~1.4.1]{FedorovSoibelmans}, we have
\[
\sum_\lambda\big[\mathcal{P}{air}^{{\rm nilp},+}(X,\varnothing,\lambda)\big]=
\sum_\lambda w^{|\lambda|}\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J^{\rm mot}_{\lambda,X}(z)H^{\rm mot}_{\lambda,X}(z),
\]
where the superscript ``$+$'' stands for HN-nonnegative vector bundles (see Section~\ref{sect:Gamma} and~\cite[Section~3.2]{FedorovSoibelmans} for the definition). Inspecting the proof, we see that for each $\lambda$ the summands are equal:
\begin{equation*}
\big[\mathcal{P}{air}^{{\rm nilp},+}(X,\varnothing,\lambda)\big]=w^{|\lambda|}\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_{\lambda,X}^{\rm mot}(z)H_{\lambda,X}^{\rm mot}(z).
\end{equation*}
By Lemma~\ref{lm:PosNeg} we get an isomorphism of stacks
\[
\mathcal{P}{air}^{{\rm nilp},+}_{r,d}(X,\varnothing,\lambda)\simeq\mathcal{P}{air}^{{\rm nilp},-}_{r,-d}(X,\varnothing,\lambda).
\]
Thus
\[
\big[\mathcal{P}{air}^{{\rm nilp},-}(X,\varnothing,\lambda)\big]=w^{|\lambda|}\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_{\lambda,X}^{\rm mot}\big(z^{-1}\big)H_{\lambda,X}^{\rm mot}\big(z^{-1}\big).
\]
To be able to apply Theorem~\ref{th:Factorization}, we need the following lemma.
\begin{Lemma}\label{lm:MacDonald} We have in $\Mot(\mathbf{k})[[w_{\infty,\bullet},z^{-1}]]$
\[
\frac{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\infty,\lambda\big)\big]}{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]}=
\tilde H_\lambda^{\rm mot}\big(w_{\infty,\bullet};z^{-1}\big).
\]
\end{Lemma}
\begin{proof}
Our proof is similar to that of~\cite[Theorem~5.5]{MellitPunctures}. Let $H'_\lambda\in\Mot(\mathbf{k})[[w_\bullet,z]]$ be the series such that
\[
\frac{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\infty,\lambda\big)\big]}{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]}=H'_\lambda\big(w_{\infty,\bullet};z^{-1}\big).
\] Denote by $\mathcal C_{\lambda\mu}$ the stack classifying pairs $(E,\Psi)$, where $E$ is a nonpositive vector bundle of rank~$|\lambda|$ on $\P^1$, $\Psi$ is an endomorphism of $E$ generically conjugate to $n_\lambda$ and conjugate to~$n_\mu$ at $x=\infty$. Then $\mathcal C_{\lambda\mu}$ is graded by the degree of $E$, and we have $[\mathcal C_{\lambda\mu}]\in\Mot(\mathbf{k})\big[\big[z^{-1}\big]\big]$. Clearly, we have
\begin{equation}\label{eq:H'}
H'_\lambda\big(w_\bullet;z^{-1}\big)=\sum_\mu\frac{[\mathcal C_{\lambda\mu}]}{\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]}H_\mu^{\rm mot}(w_\bullet),
\end{equation}
where $H_\mu^{\rm mot}$ are the motivic Hall--Littlewood polynomials. Note that $\mathcal C_{\lambda\mu}=\varnothing$ unless $\mu'\prec\lambda'$ because
for all $i$ the dimension of the fiber of $\Ker\Psi^i$ is upper semicontinuous on $\P^1$. Now it is easy to see that $H'_\lambda$ are symmetric functions with coefficients in $\Mot(\mathbf{k})[[z]]$. We will use Proposition~\ref{pr:axiomMacdonald}(b) to show that for all $\lambda$ we have $H'_\lambda=\tilde H_\lambda^{\rm mot}$.
To show that $H'_\lambda$ satisfy property~(ii) of Proposition~\ref{pr:axiomMacdonald}(a) it remains to show that $[\mathcal C_{\lambda\lambda}]$ is invertible. This is completely similar to the proof of Lemma~\ref{lm:invertible}.
To show that $H'_\lambda$ satisfy condition~(i) of Proposition~\ref{pr:axiomMacdonald}(a), we note that combining Theorem~\ref{th:Factorization} and Proposition~\ref{pr:P1}, we get
\[
\Exp\left(\frac{\sum\limits_{j=1}^\infty\sum\limits_{j'=1}^\infty w_{0,j}w_{\infty,j'}}{(\mathbb{L}-1)(1-z)}\right)=
\sum_\lambda a'_\lambda(z)H'_\lambda(w_{0,\bullet};z)H'_\lambda(w_{\infty,\bullet};z),
\]
where $a'_\lambda(z^{-1})=\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]$ and the statement follows.
Condition~(iii) of Proposition~\ref{pr:axiomMacdonald}(a) is obvious. Condition~(iv) follows from~\eqref{eq:H'}. We note also for future use that it is clear from the argument that we have
\begin{equation}\label{eq:P1emptyset}
\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,\varnothing,\lambda\big)\big]=a'_\lambda\big(z^{-1}\big)=a_\lambda\big(z^{-1}\big),
\end{equation}
where $a_\lambda(z)$ is given by~\eqref{eq:scalarLengh}. The proof of Lemma~\ref{lm:MacDonald} is complete.
\end{proof}
Now Theorem~\ref{th:Factorization} completes the proof of Theorem~\ref{th:MotMellitPunctures}.
\end{proof}
\begin{Corollary}\label{cor:Pairs} We have in $\cMot(\mathbf{k})[[\Gamma'_+]]$
\begin{gather*}
\big[\mathcal{P}{air}^-(X,D)\big]=
\Pow\left(
\sum_\lambda w^{|\lambda|}\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_{\lambda,X}^{\rm mot}\big(z^{-1}\big)
H_{\lambda,X}^{\rm mot}\big(z^{-1}\big)\prod_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)
,\mathbb{L}
\right).
\end{gather*}
\end{Corollary}
\begin{proof}
The argument is similar to the proof of~\cite[Proposition~3.8.1]{FedorovSoibelmans}. In more detail, let $(E,\Psi,E_{\bullet,\bullet})$ be a $K$-point of $\mathcal{P}{air}^-(X,D)$. According to~\cite[Lemma~3.8.3]{FedorovSoibelmans}, we can uniquely decompose
\[
(E,\Psi)\xrightarrow{\simeq}\bigoplus_i R_{\mathbf{k}(x_i)/K}(E_i,x_i\Id+\Psi_i),
\]
where $x_i$ are distinct closed points of $\mathbb A_K^1$ (the eigenvalues of $\Psi$), $(E_i,\Psi_i)$ are $\mathbf{k}(x_i)$-points of the stack $\mathcal{P}{air}^{{\rm nilp},-}(X,\varnothing)$, $\mathbf{k}(x_i)\supset K$ is the residue field of $x_i$, and $R_{\mathbf{k}(x_i)/K}$ is the pushforward functor. It follows easily from the proof of~\cite[Lemma~3.8.3]{FedorovSoibelmans} that we can write uniquely
\[
(E,\Psi,E_{\bullet,\bullet})\xrightarrow{\simeq}\bigoplus_i R_{\mathbf{k}(x_i)/K}(E_i,x_i\Id+\Psi_i,E_{i,\bullet,\bullet}),
\]
where $(E_i,\Psi_i,E_{i,\bullet,\bullet})$ are $\mathbf{k}(x_i)$-points of $\mathcal{P}{air}^{{\rm nilp},-}(X,D)$. It remains to use a version of~\cite[Lemma~3.8.2]{FedorovSoibelmans}.
\end{proof}
\subsection[Case of $\P^1$]{Case of $\boldsymbol{\P^1}$}\label{sect:P1ManyPts}
In the case of $X=\P^1$ we can give a more explicit answer. Moreover, we get an answer valid in $\Mot(\mathbf{k})[[\Gamma'_+]]$ rather than in its completion $\cMot(\mathbf{k})[[\Gamma'_+]]$, which is desirable, since we do not know whether the natural homomorphism $\Mot(\mathbf{k})\to\cMot(\mathbf{k})$ is injective. We argue as in~\cite[Corollary~5.9]{MellitPunctures}. Combining Theorem~\ref{th:Factorization},~\eqref{eq:P1emptyset}, and~\eqref{eq:scalarLengh}, we get the following formula valid in $\Mot(\mathbf{k})[[\Gamma'_+]]$.
\[
\big[\mathcal{P}{air}^{{\rm nilp},-}\big(\P^1,D,\lambda\big)\big]=w^{|\lambda|}\frac{\prod\limits_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)} {\prod\limits_{h\in\Hook(\lambda)}\big(\mathbb{L}^{a(h)}-z^{-l(h)-1}\big)\big(\mathbb{L}^{a(h)+1}-z^{-l(h)}\big)},
\]
where $\Hook(\lambda)$ stands for the set of hooks of $\lambda$, $a(h)$ and $l(h)$ stand for the armlength and the leglength of the hook $h$ respectively. Arguing as in Corollary~\ref{cor:Pairs}, we get in $\Mot(\mathbf{k})[[\Gamma'_+]]$
\begin{equation}\label{eq:P1}
\big[\mathcal{P}{air}^-\big(\P^1,D\big)\big]=
\Pow\left(
\sum_\lambda\frac{w^{|\lambda|}\prod\limits_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)}
{\prod\limits_{h\in\Hook(\lambda)} \big(\mathbb{L}^{a(h)}-z^{-l(h)-1}\big)\big(\mathbb{L}^{a(h)+1}-z^{-l(h)}\big)} ,\mathbb{L}
\right).
\end{equation}
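To illustrate~\eqref{eq:P1}, consider the $\lambda=(1)$ summand: $\tilde H^{\rm mot}_{(1)}$ does not depend on $z$ and equals $\sum\limits_{j=1}^\infty w_j$ (by properties (iii) and (iv) of Proposition~\ref{pr:axiomMacdonald}), and the unique hook of $(1)$ has $a(h)=l(h)=0$, so this summand equals
\[
\frac{w\prod\limits_{x\in D}\sum\limits_{j=1}^\infty w_{x,j}}{\big(1-z^{-1}\big)(\mathbb{L}-1)}.
\]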
\section{Parabolic Higgs bundles with fixed eigenvalues}\label{sect:HiggsnEigenval}
\subsection{Stacks of Higgs bundles} Let $X$ and $D$ be as above. From now on we denote by $g$ the genus of $X$. Our goal in this section is to calculate the motivic Donaldson--Thomas series of the category of parabolic Higgs bundles. More precisely, we calculate the motivic classes of the moduli stacks of Higgs bundles with fixed eigenvalues and with nonpositive underlying vector bundles. Our argument is similar to~\cite[Sections~3.4--3.5]{FedorovSoibelmans}. The main result is Corollary~\ref{cor:MAIN}. We denote by $\Omega_X$ the canonical line bundle on $X$.
\begin{Definition}
A \emph{parabolic Higgs bundle} of type $(X,D)$ is a triple $(E,E_{\bullet,\bullet},\Phi)$, where $(E,E_{\bullet,\bullet})$ is a point of $\mathcal{B}{un}^{\rm par}(X,D)$, $\Phi\colon E\to E\otimes\Omega_X(D)$ is an $\mathcal O_X$-linear morphism (called a \emph{Higgs field on $(E,E_{\bullet,\bullet})$}) such that for all $x\in D$ and $j\ge0$ we have $\Phi_x(E_{x,j})\subset E_{x,j}\otimes\Omega_X(D)_x$.
\end{Definition}
We denote the category (and the Artin stack) of parabolic Higgs bundles by $\mathcal{H}{iggs}=\mathcal{H}{iggs}(X,D)$. We define the $\Gamma'_+$-graded stack $\mathcal{H}{iggs}^-=\mathcal{H}{iggs}^-(X,D)$ following the general formalism of Section~\ref{sect:CatsOverPar}, that is, $\mathcal{H}{iggs}^-$ is the open substack of $\mathcal{H}{iggs}$ corresponding to Higgs bundles with nonpositive underlying vector bundle. Clearly, this stack is of finite type in the graded sense (that is, the graded components are of finite type). In this section $X$ and $D$ are fixed, so we omit them from the notation.
\subsection{Existence of Higgs fields with prescribed residues}\label{Sect:ExistResidue}
To determine a criterion for the existence of a Higgs field with prescribed residues, we use an approach similar to~\cite{AtiyahConnections,MihaiMonodromie,MihaiConnexions}. Let $E\to X$ be a vector bundle and let $\Phi\colon E\to E\otimes\Omega_X(D)$ be a morphism. In this case, for all $x\in D$ we have a residue $\Res_x\Phi\in\End(E_x)$.
\begin{Proposition}\label{pr:exist}
\label{existenceHiggs}
Let $E$ be a vector bundle on $X$ and, for each $x\in D$, let $\rho_x\in\End(E_x)$ be an endomorphism of the fiber of $E$ at $x$. There exists a Higgs field $\Phi\colon E\to E\otimes\Omega_X(D)$ with $\Res_x\Phi=\rho_x$ for all $x\in D$ if and only if
\[
\sum_{x\in D}\tr(\rho_x\phi_x)=0
\]
for all $\phi\in\End(E)$, where $\tr$ stands for the trace.
\end{Proposition}
\begin{proof}
Consider the short exact sequence of sheaves
\[
0\rightarrow\mathcal O_X\rightarrow\mathcal K_X\rightarrow\mathcal K_X/\mathcal O_X\rightarrow 0,
\]
where $\mathcal K_X$ is the constant sheaf corresponding to the function field of $X$. Let $\Omega_\mathcal K$ be the constant sheaf of meromorphic differential forms on $X$. We can obtain a new short exact sequence of sheaves:
\[
0\rightarrow\mathcal{E}{nd}(E)\otimes\Omega_X \rightarrow \mathcal{E}{nd}(E)\otimes \Omega_\mathcal K \rightarrow\mathcal{E}{nd}(E)\otimes(\Omega_\mathcal K/\Omega_X)\rightarrow0
\]
by taking the tensor product of the first sequence with $\mathcal{E}{nd}(E)\otimes\Omega_X$. Note that the middle term in this sequence is a constant sheaf, while the last term is an (infinite) direct sum of skyscraper sheaves. That is, $\mathcal{E}{nd}(E)\otimes(\Omega_\mathcal K/\Omega_X)\cong\bigoplus_{x\in X} (i_x)_*(\End(E_x)\otimes(\Omega_\mathcal K/\Omega_X)_x)$, where $(\Omega_\mathcal K/\Omega_X)_x$ is the vector space of polar parts at $x$ of meromorphic 1-forms, the summation is taken over all closed points of $X$, and $i_x\colon x\rightarrow X$ is the inclusion. Passing to the long exact sequence for cohomology we obtain the following exact sequence of vector spaces:
\[
\End(E)\otimes\Omega_\mathcal K\rightarrow \bigoplus_{x\in X}\End(E_x)\otimes(\Omega_\mathcal K/\Omega_X)_x\rightarrow H^1(X, \mathcal{E}{nd}(E)\otimes\Omega_X) \rightarrow 0.
\]
This implies that $H^1(X, \mathcal{E}{nd}(E)\otimes \Omega_X)$ may be presented as the quotient of
\[ \bigoplus_{x\in X}\End(E_x)\otimes(\Omega_\mathcal K/\Omega_X)_x\] by the image of $\End(E)\otimes\Omega_\mathcal K$ (compare with the adelic description of cohomology given in~\cite[Chapter~2, Section~5]{SerreAlgGrClFields}). Further note that the required Higgs field $\Phi$ always exists locally: it can be defined as $\Phi_x =\rho_x\frac{dz_x}{z_x}$ if $x\in D$, where $z_x$ is an \'etale coordinate near $x$, and as $\Phi_x=0$ if $x\notin D$. Under the above presentation of $H^1(X, \mathcal{E}{nd}(E)\otimes \Omega_X)$, the local solutions $\Phi_x$ define a cohomology class $a(E,D,\rho_\bullet)\in H^1(X, \mathcal{E}{nd}(E)\otimes\Omega_X)$. Moreover, it follows from the exact sequence that $a(E,D,\rho_\bullet)=0$ if and only if $\Phi$ can be defined globally.
Serre duality defines a bilinear pairing $H^1(X,\mathcal{E}{nd}(E)\otimes\Omega_X)\times\End(E)\rightarrow\mathbf{k}$. Using the above presentation for $H^1(X, \mathcal{E}{nd}(E)\otimes \Omega_X)$ this pairing may be evaluated on $\phi\in\End(E)$ as
\[
\langle a(E,D,\rho_\bullet),\phi\rangle = \sum_{x\in X}\Res_x\tr(\Phi_x\phi_x) = \sum_{x\in D}\tr(\rho_x\phi_x).
\]
Since the pairing is perfect, $\sum\limits_{x\in D}\tr(\rho_x \phi_x) = 0$ for all $\phi\in\End(E)$ if and only if $a(E,D,\rho_\bullet)=0$. The proof is complete.
\end{proof}
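\begin{Remark}
For instance, if $E$ is a line bundle, then $\End(E)=H^0(X,\mathcal O_X)=\mathbf{k}$, and the criterion of Proposition~\ref{pr:exist} becomes $\sum\limits_{x\in D}\tr(\rho_x)=0$. Since a Higgs field on a line bundle is just a global section of $\Omega_X(D)$ with residues $\rho_x$, this recovers the residue theorem: the residues of a 1-form with at most first order poles sum to zero, and, conversely, every collection of residues with zero sum arises this way.
\end{Remark}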
\subsection{Parabolic Higgs bundles with fixed eigenvalues}\label{sect:ExistEigen}
Recall that $\mathbf{k}[D\times\mathbb{Z}_{>0}]$ is the set of all $\mathbf{k}$-valued sequences $\zeta=\zeta_{\bullet,\bullet}=(\zeta_{x,j})$ indexed by $D\times\mathbb{Z}_{>0}$ such that $\zeta_{x,j}=0$ for $j\gg0$.\footnote{According to our convention we should denote $\zeta$ by $\zeta_{\bullet,\bullet}$ but it does not look nice in the formulas.} For $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ let $\mathcal{H}{iggs}(\zeta)=\mathcal{H}{iggs}(X,D,\zeta)$ denote the full subcategory of $\mathcal{H}{iggs}$ (and its stack of objects) corresponding to collections $(E,E_{\bullet,\bullet},\Phi)$ such that $(\Phi-\zeta_{x,j}1)(E_{x,{j-1}})\subset E_{x,j}\otimes\Omega_X(D)_x$ for all $x\in D$ and $j>0$. Again, the $\Gamma_+'$-graded stack $\mathcal{H}{iggs}^-(\zeta)$ is defined following the formalism of Section~\ref{sect:CatsOverPar}.
Let $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ and let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$. Recall from Section~\ref{sect:DegreeSlope} that we have set
\[
\deg_{0,\zeta}\gamma:=\sum_{x\in D}\sum_{j=1}^{\infty}\zeta_{x,j}r_{x,j}\in\mathbf{k}.
\]
\begin{Lemma}\label{lm:existence}
Let $\mathbf E\in\mathcal{B}{un}^{\rm par}(\mathbf{k})$ and $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$. There exists an object $(\mathbf E,\Phi)\in\mathcal{H}{iggs}(\zeta)(\mathbf{k})$ if and only if $\deg_{0,\zeta}\mathbf E'=0$ for any direct summand $\mathbf E'$ of $\mathbf E$.
\end{Lemma}
Note that, in particular, this condition implies that $\deg_{0,\zeta}\mathbf E=0$.
\begin{proof}
The proof is the same as the proof of~\cite[Theorem~7.1]{Crawley-Boevey:Indecomposable} after replacing $b(E)$ with 0 and replacing~\cite[Theorem~7.2]{Crawley-Boevey:Indecomposable} with Proposition~\ref{pr:exist}. (The Atiyah class $b(E)$ represents the obstruction to existence of a connection (without poles) on $E$. Thus, it is absent for Higgs fields essentially because every vector bundle possesses the zero Higgs field.)
\end{proof}
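\begin{Remark}
For instance, let $\mathbf E$ be a parabolic line bundle. Its only nonzero direct summand is $\mathbf E$ itself, and for each $x\in D$ there is a unique index $j(x)$ with $r_{x,j(x)}=1$. Thus a Higgs field on $\mathbf E$ with eigenvalues $\zeta$ exists if and only if $\sum\limits_{x\in D}\zeta_{x,j(x)}=0$.
\end{Remark}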
\subsection{Parabolic pairs with isoslopy underlying parabolic bundles}\label{sect:Isoslopy} Recall that for $\kappa\in\mathbf{k}$ and $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$, a parabolic bundle $\mathbf E\in\mathcal{B}{un}^{\rm par}$ is $(\kappa,\zeta)$-isoslopy if
\[
\frac{\deg_{\kappa,\zeta}\mathbf E'}{\rk\mathbf E'}=\frac{\deg_{\kappa,\zeta}\mathbf E}{\rk\mathbf E}
\]
whenever $\mathbf E'$ is a direct summand of $\mathbf E$. Similarly to~\cite[Lemma~3.2.2]{FedorovSoibelmans}, one checks that the notion of $(\kappa,\zeta)$-isoslopy parabolic bundle is invariant with respect to field extensions. Thus for each $\gamma\in\Gamma_+'$ we have a well-defined subset $\mathcal{B}{un}^{{\rm par},(\kappa,\zeta)-{\rm iso},-}_\gamma\subset|\mathcal{B}{un}^{{\rm par},-}_\gamma|$. As in~\cite[Lemma~3.2.3]{FedorovSoibelmans} we show that this subset is constructible.
Let $\mathcal{P}{air}^{(\kappa,\zeta)-{\rm iso},-}_\gamma\!$ be the preimage of $\mathcal{B}{un}^{{\rm par},(\kappa,\zeta)-{\rm iso},-}_\gamma\!$ under the projection $|\mathcal{P}{air}|\!\to\!|\mathcal{B}{un}^{\rm par}|$.
\begin{Proposition}\label{pr:Sasha}
For $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, set $\chi(\gamma):=(g-1)r^2+\sum\limits_{x\in D}\sum\limits_{j<j'}r_{x,j}r_{x,j'}$. Then
\[
[\mathcal{H}{iggs}_\gamma^-(\zeta)]=
\begin{cases}
\mathbb{L}^{\chi(\gamma)}\big[\mathcal{P}{air}^{(0,\zeta)-{\rm iso},-}_\gamma\big] & \text{if }\deg_{0,\zeta}\gamma=0,\\
0 &\text{otherwise.}
\end{cases}
\]
\end{Proposition}
\begin{proof}
The case of $\deg_{0,\zeta}\gamma\ne0$ is obvious in view of Lemma~\ref{lm:existence}. Assume that $\deg_{0,\zeta}\gamma=0$. It is enough to show the equality of motivic functions in $\Mot\big(\mathcal{B}{un}^{{\rm par},-}_\gamma\big)$:
\begin{equation}\label{eq:IsoslopyHiggs}
\big[\mathcal{H}{iggs}_\gamma^-(\zeta)\to\mathcal{B}{un}^{{\rm par},-}_\gamma\big]=
\mathbb{L}^{\chi(\gamma)}\big[\mathcal{P}{air}^{(0,\zeta)-{\rm iso},-}_\gamma\to\mathcal{B}{un}^{{\rm par},-}_\gamma\big].
\end{equation}
Let $K\supset\mathbf{k}$ be a field extension. Let $\xi\colon \Spec K\to\mathcal{B}{un}^{{\rm par},-}_\gamma$ be a point represented by a parabolic bundle $\mathbf E=(E,E_{\bullet,\bullet})$. In view of Proposition~\ref{pr:MotFunEqual}, we only need to check that the $\xi$-pullbacks of~\eqref{eq:IsoslopyHiggs} are equal. If $\mathbf E$ is not $(0,\zeta)$-isoslopy, then, by Lemma~\ref{lm:existence}, the pullbacks are equal to zero, so we assume that $\mathbf E$ is $(0,\zeta)$-isoslopy.
Let $\higgs(\mathbf E,\zeta)$ denote the space of Higgs fields on $\mathbf E$ with eigenvalues $\zeta$ (that is, the $\mathbf E$-fiber of the projection $\mathcal{H}{iggs}(\zeta)\to\mathcal{B}{un}^{\rm par}$). By Lemma~\ref{lm:existence}, $\higgs(\mathbf E,\zeta)$ is non-empty, so it is a torsor over the vector space $\higgs(\mathbf E,0)$. Thus,
\begin{equation*}
\xi^*\big[\mathcal{H}{iggs}_\gamma^-(\zeta)\to\mathcal{B}{un}^{{\rm par},-}_\gamma\big]=\mathbb{L}^{\dim \higgs(\mathbf E,0)}.
\end{equation*}
On the other hand, we have
\[
\xi^*\big[\mathcal{P}{air}^{(0,\zeta)-{\rm iso},-}_\gamma\to\mathcal{B}{un}^{{\rm par},-}_\gamma\big]=\mathbb{L}^{\dim\End(\mathbf E)}.
\]
It remains to prove the following lemma.
\begin{Lemma}\label{lm:PairHiggs}
Let $\mathbf E\in\mathcal{B}{un}^{\rm par}_\gamma$ be a parabolic bundle. Then
\[
\dim\End(\mathbf E)-\dim\higgs(\mathbf E,0)=-\chi(\gamma).
\]
\end{Lemma}
\begin{proof}
Write $\mathbf E=(E,E_{\bullet,\bullet})$. Let $\mathcal{E}{nd}(\mathbf E)\subset\mathcal{E}{nd}(E)$ be the subsheaf of endomorphisms preserving flags. One checks easily that the trace pairing gives an isomorphism between the dual sheaf $\mathcal{E}{nd}(\mathbf E)^\vee\otimes\Omega_X$ and $\mathcal{H}{iggs}(\mathbf E,0)$, where $\mathcal{H}{iggs}(\mathbf E,0)$ stands for the sheaf of Higgs fields on~$\mathbf E$ with zero eigenvalues. Thus by the Riemann--Roch theorem we have
\begin{align*}
\dim\End(\mathbf E)-\dim\higgs(\mathbf E,0)& =h^0(X,\mathcal{E}{nd}(\mathbf E))-h^0\big(X,\mathcal{E}{nd}(\mathbf E)^\vee\otimes\Omega_X\big)\\
& =(1-g)\rk\mathcal{E}{nd}(\mathbf E)+\deg\mathcal{E}{nd}(\mathbf E).
\end{align*}
It remains to calculate $\deg\mathcal{E}{nd}(\mathbf E)$. For $x\in D$ consider the fiber $E_x$, its ring of endomorphisms $\End(E_x)$, its subspace $V_x$ of endomorphisms preserving the flag $E_{x,\bullet}$, and the quotient of vector spaces $W_x:=\End(E_x)/V_x$. Further, consider the torsion sheaf $W:=\oplus_{x\in D}(i_x)_*W_x$, where, as before, $i_x\colon x\to X$ is the inclusion. We have an exact sequence
\[
0\to\mathcal{E}{nd}(\mathbf E)\to\mathcal{E}{nd}(E)\to W\to0,
\]
so
\[
\deg\mathcal{E}{nd}(\mathbf E)=\deg\mathcal{E}{nd}(E)-\length(W)=0-\sum_{x\in D}\sum_{j<j'}r_{x,j}r_{x,j'}
\]
and the lemma follows.
\end{proof}
The lemma completes the proof of Proposition~\ref{pr:Sasha}.
\end{proof}
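\begin{Remark}
For instance, if the parabolic structure is a full flag at every $x\in D$, that is, $r_{x,j}=1$ for $1\le j\le r$, then for each $x\in D$ we have $\sum\limits_{j<j'}r_{x,j}r_{x,j'}=\binom r2$, so that $\chi(\gamma)=(g-1)r^2+\binom r2|D|$, where $|D|$ is the number of points of $D$.
\end{Remark>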
\begin{Proposition}\label{pr:IsoslProd} We have in $\Mot(\mathbf{k})[[\Gamma'_+]]$
\[
[\mathcal{P}{air}^-]=
\prod_{\tau\in\mathbf{k}}\left(
\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{\kappa,\zeta}\gamma=\tau\rk\gamma}}\big[\mathcal{P}{air}^{(\kappa,\zeta)-{\rm iso},-}_\gamma\big]e_\gamma
\right).
\]
\end{Proposition}
We note that the product makes sense because for a $\gamma\in\Gamma_+'$ there are only finitely many ways to write $\gamma$ as the sum of elements of $\Gamma_+'$. Also, the order of the factors is irrelevant, since we are working with a commutative quantum torus.
\begin{proof}
The proof is almost the same as the proof of~\cite[Lemma~3.5.3]{FedorovSoibelmans} (see also~\cite[Proposition~3.5.1]{FedorovSoibelmans}).
\end{proof}
We need some notation. Let us write
\[
\mathbb{L}\cdot\Log\left(\sum_\lambda w^{|\lambda|}\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle} J_{\lambda,X}^{\rm mot}\big(z^{-1}\big)H_{\lambda,X}^{\rm mot}\big(z^{-1}\big)\prod_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)\right)=
\sum_{\gamma\in\Gamma'_+}\overline B_\gamma e_\gamma,
\]
where $\Log$ is the plethystic logarithm defined in Section~\ref{sect:Plethystic}, and the summation on the left is over all partitions $\lambda$. We note that $\overline B_\gamma$ are $W$-invariant, where $W=\prod\limits_{x\in D}\Sigma_\infty$ (cf.~Remark~\ref{rm:Weyl}). Note also that $\overline B_0=0$ by the definition of plethystic logarithm.
\begin{Definition}\label{def:DT}
The motivic classes $\overline B_\gamma\in\cMot(\mathbf{k})$ are called \emph{motivic Donaldson--Thomas invariants} of the pair $(X,D)$.
\end{Definition}
\begin{Corollary}\label{cor:isoslopy} For each $\tau\in\mathbf{k}$ we have in $\cMot(\mathbf{k})[[\Gamma'_+]]$
\[
\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{\kappa,\zeta}\gamma=\tau\rk\gamma}}\big[\mathcal{P}{air}^{(\kappa,\zeta)-{\rm iso},-}_\gamma\big]e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{\kappa,\zeta}\gamma=\tau\rk\gamma}}\overline B_\gamma e_\gamma\right),
\]
where $\Exp$ is the plethystic exponent defined in Section~{\rm \ref{sect:Plethystic}}.
\end{Corollary}
\begin{proof}
First of all, using Corollary~\ref{cor:Pairs} and properties of plethystic operations, we get
\[
[\mathcal{P}{air}^-]=\Exp\left(\sum_{\gamma\in\Gamma'_+}\overline B_\gamma e_\gamma\right)=
\prod_{\tau\in\mathbf{k}}\Exp\left(\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{\kappa,\zeta}\gamma=\tau\rk\gamma}}\overline B_\gamma e_\gamma\right).
\]
Now, it remains to use Proposition~\ref{pr:IsoslProd} and equate the slopes (cf.~\cite[Lemma~3.7.1]{FedorovSoibelmans}).
\end{proof}
\begin{Corollary}\label{cor:MAIN} We have in $\cMot(\mathbf{k})[[\Gamma'_+]]$
\[
\sum_{\gamma\in\Gamma'_+}\mathbb{L}^{-\chi(\gamma)}[\mathcal{H}{iggs}^-_\gamma(\zeta)]e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{0,\zeta}\gamma=0}}\overline B_\gamma e_\gamma\right),
\]
where $\Exp$ is the plethystic exponent defined in Section~{\rm \ref{sect:Plethystic}}.
\end{Corollary}
\begin{proof}
Use Proposition~\ref{pr:Sasha} and Corollary~\ref{cor:isoslopy}.
\end{proof}
\subsection[Case of $\P^1$]{Case of $\boldsymbol{\P^1}$} Assume now that $X=\P^1$. Then we have a simpler result. Moreover, it is more precise in the sense that we get an answer in $\Mot(\mathbf{k})$ rather than in $\cMot(\mathbf{k})$. Define the Donaldson--Thomas invariants $B_\gamma\in\Mot(\mathbf{k})$ by
\begin{equation}\label{eq:DT_P1}
\mathbb{L}\cdot\Log\left(\sum_\lambda\frac{w^{|\lambda|}\prod\limits_{x\in D}\tilde H_\lambda^{\rm mot}\big(w_{x,\bullet};z^{-1}\big)}
{\prod\limits_{h\in\Hook(\lambda)}
\big(\mathbb{L}^{a(h)}-z^{-l(h)-1}\big)\big(\mathbb{L}^{a(h)+1}-z^{-l(h)}\big)}\right)=
\sum_{\gamma\in\Gamma'_+}B_\gamma e_\gamma.
\end{equation}
We have precisely the same formula as in Corollary~\ref{cor:MAIN}, where $\overline B_\gamma$ is replaced with $B_\gamma$ (thus, the formula is valid in $\Mot(\mathbf{k})$). The proof is the same as that of Corollary~\ref{cor:MAIN} except that one uses~\eqref{eq:P1} instead of Corollary~\ref{cor:Pairs}. Comparing Corollary~\ref{cor:Pairs} with~\eqref{eq:P1}, we see that the images of $B_\gamma$ in $\cMot(\mathbf{k})$ are equal to $\overline B_\gamma$.
\section{Stability conditions for Higgs bundles}\label{sect:Stability}
\subsection{Harder--Narasimhan filtration} Recall that in Section~\ref{sect:ParWeights} we defined the set $\Stab$ of sequences of parabolic weights. To every sequence of parabolic weights we associated a stability condition on parabolic bundles in Definition~\ref{def:StabilityCond}. We want to extend this to Higgs bundles and to calculate the motivic classes of stacks of semistable parabolic Higgs bundles with nonpositive underlying vector bundles. Let~$X$ and~$D$ be as before and let $\sigma\in\Stab$.
\begin{Definition}\quad
\begin{enumerate}\itemsep=0pt
\item[(i)] A parabolic Higgs bundle $(\mathbf E,\Phi)$ is \emph{$\sigma$-semistable} if~\eqref{eq:ss} is satisfied for all strict subbundles preserved by $\Phi$.
\item[(ii)]
A parabolic Higgs bundle $(\mathbf E,\Phi)=(E,E_{\bullet,\bullet},\Phi)$ is \emph{$\sigma$-nonpositive-semistable}, if the underlying vector bundle of $\mathbf E$ is nonpositive and~\eqref{eq:ss} is satisfied for all strict subbundles $\mathbf E'=(E',E'_{\bullet,\bullet})$ such that $\Phi$ preserves $\mathbf E'$ and $E/E'$ is a nonpositive vector bundle.
\end{enumerate}
\end{Definition}
The notion of $\sigma$-nonpositive-semistable Higgs bundle is similar to that of nonnegative-semi\-stable Higgs bundle (see~\cite[Section~3.3]{FedorovSoibelmans} and~\cite{MozgovoySchiffmanOnHiggsBundles}). We emphasize that a $\sigma$-nonpositive-semistable parabolic Higgs bundle is not necessarily $\sigma$-semistable; cf.~\cite[Remark~3.3.1]{FedorovSoibelmans}.
Denote the substack of $\mathcal{H}{iggs}(\zeta)$ corresponding to $\sigma$-semistable (resp.~$\sigma$-nonpositive-semi\-stable) parabolic Higgs bundles by $\mathcal{H}{iggs}^{\sigma-{\rm ss}}(\zeta)$ (resp.~$\mathcal{H}{iggs}^{\sigma-{\rm ss},-}(\zeta)$). An argument similar to~\cite[Lemma~3.7]{Simpson1} shows that these are open substacks of $\mathcal{H}{iggs}(\zeta)$ and $\mathcal{H}{iggs}^-(\zeta)$ respectively. Note that if $(\mathbf E,\Phi)$ is a parabolic Higgs bundle and $\mathbf E'\subset\mathbf E$ is a strict parabolic subbundle preserved by $\Phi$, then we get an induced Higgs field on $\mathbf E/\mathbf E'$; denote it $\Phi'$. If, moreover, $(\mathbf E,\Phi)\in\mathcal{H}{iggs}^-(\zeta)$ and the underlying vector bundle of $\mathbf E/\mathbf E'$ is nonpositive, then $(\mathbf E/\mathbf E',\Phi')\in\mathcal{H}{iggs}^-(\zeta)$. One can use this construction to give $\mathcal{H}{iggs}^-(\zeta)$ the structure of an exact category. The proof of the following proposition is completely similar to the proof of Proposition~\ref{pr:HN}.
\begin{Proposition}\label{pr:HN3}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$]
If $(\mathbf E,\Phi)\in\mathcal{H}{iggs}(\zeta)$ is a parabolic Higgs bundle with eigenvalues $\zeta$, then there is a unique filtration $0=\mathbf E_0\subset\mathbf E_1\subset\dots\subset\mathbf E_m=\mathbf E$ by strict parabolic subbundles preserved by $\Phi$ such that all the quotients $\mathbf E_i/\mathbf E_{i-1}$ with induced Higgs fields are $\sigma$-semistable parabolic Higgs bundles and we have $\tau_1>\dots>\tau_m$, where $\tau_i$ is the $(1,\sigma)$-slope of $\mathbf E_i/\mathbf E_{i-1}$.
\item[$(ii)$] If $(\mathbf E,\Phi)\in\mathcal{H}{iggs}^-(\zeta)$, then there is a unique filtration of $\mathbf E$ as in~(i) by strict parabolic subbundles preserved by $\Phi$ with quotients being $\sigma$-nonpositive-semistable parabolic Higgs bundles.
\end{enumerate}
\end{Proposition}
\subsection{Kontsevich--Soibelman factorization formula}\label{sect:KS}
The general formalism of~\cite{KontsevichSoibelman08} implies the following factorization formula valid in $\Mot(\mathbf{k})[[\Gamma'_+]]$. One can also give a direct proof along the lines of the proof of~\cite[Proposition~3.6.1]{FedorovSoibelmans}\footnote{Note that all but countably many multiples are equal to one. We can understand the countable product as a~clockwise product as in~\cite{KontsevichSoibelman08,KontsevichSoibelman10}. Note, however, that this is a product in a commutative ring.}
\begin{equation}\label{eq:KS}
\sum_{\gamma\in\Gamma'_+}\mathbb{L}^{-\chi(\gamma)}[\mathcal{H}{iggs}^-_\gamma(\zeta)]e_\gamma=
\prod_{\tau\in\mathbb{R}}\left(
\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}\big[\mathcal{H}{iggs}^{\sigma-{\rm ss},-}_\gamma(\zeta)\big]e_\gamma
\right).
\end{equation}
Now, taking the plethystic logarithms of both sides and using Corollary~\ref{cor:MAIN}, we get the following statement.
\begin{Proposition}\label{pr:expl-ss>=0} For each $\tau\in\mathbb{R}$ we have in $\cMot(\mathbf{k})[[\Gamma'_+]]$
\[
\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}\big[\mathcal{H}{iggs}^{\sigma-{\rm ss},-}_\gamma(\zeta)\big]e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma'_+\\ \deg_{0,\zeta}\gamma=0\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}\overline B_\gamma e_\gamma\right).
\]
If $X=\P^1$, then the same formula holds in $\Mot(\mathbf{k})[[\Gamma'_+]]$ with $\overline B_\gamma$ replaced by $B_\gamma$, where $B_\gamma$ are defined by~\eqref{eq:DT_P1}.
\end{Proposition}
\section{Stabilization}\label{sect:Stabilization}
\subsection{Stabilization of semistable Higgs bundles} Let $X$ and $D$ be as before. We will be assuming that $D\ne\varnothing$. Note that this implies that $X$ has a $\mathbf{k}$-rational divisor of degree one. Set $\delta:=\max(2g-2+\deg D,0)$. Fix a stability condition $\sigma\in\Stab$. Our goal in this section is to calculate the motivic class of the moduli stack of $\sigma$-semistable parabolic Higgs bundles without the nonnegativity assumption. The main result in this section is Theorem~\ref{th:ExplAnsw}. Recall that at the end of Section~\ref{sect:ParWeights} we defined the categories $\mathcal{B}{un}^{{\rm par},\le\tau}$ and $\mathcal{B}{un}^{{\rm par},\ge\tau}$. These are the full subcategories of $\mathcal{B}{un}^{\rm par}$ whose objects are parabolic bundles with the $\sigma$-HN spectrum contained in $(-\infty,\tau]$ and~$[\tau,\infty)$ respectively.
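For instance, for $X=\P^1$ we have $\delta=\max(\deg D-2,0)$; in particular, $\delta=0$ whenever $\deg D\le2$.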
We start with the following analogue of~\cite[Lemma~3.1]{MozgovoySchiffmanOnHiggsBundles}.
\begin{Lemma}\label{lm:gap}
Let $\mathbf E\in\mathcal{B}{un}^{\rm par}$ be a parabolic bundle with the $\sigma$-HN-spectrum $\tau_1>\tau_2>\dots>\tau_m$. Assume that for some $i\in\{1,\dots,m-1\}$ we have $\tau_i-\tau_{i+1}>\delta$. Then there are no $\sigma$-semistable Higgs bundles of the form $(\mathbf E,\Phi)$.
\end{Lemma}
\begin{proof}
Assume the contrary. We have an exact sequence
\[
0\to\mathbf E^{\ge}\to\mathbf E\to\mathbf E^{\le}\to0,
\]
where $\mathbf E^{\ge}\in\mathcal{B}{un}^{{\rm par},\ge\tau_i}$, $\mathbf E^{\le}\in\mathcal{B}{un}^{{\rm par},\le\tau_{i+1}}$. Then $\Phi$ induces a morphism $\Phi':\mathbf E^{\ge}\to\mathbf E^{\le}\otimes\Omega_X(D)$. Note that $\mathbf E^{\le}\otimes\Omega_X(D)\in\mathcal{B}{un}^{{\rm par},\le\tau_{i+1}+\delta}$. By Lemma~\ref{lm:NoMorphismSS}, $\Phi'=0$ and we see that $\mathbf E^{\ge}$ is preserved by $\Phi$ contradicting $\sigma$-semistability of $(\mathbf E,\Phi)$.
\end{proof}
Next, we have an analogue of~\cite[Lemma~3.2]{MozgovoySchiffmanOnHiggsBundles}.
\begin{Lemma}\label{lm:MozSchif}
Let $(\mathbf E,\Phi)$ be a $\sigma$-semistable Higgs bundle. Assume that $\deg_{1,\sigma}\mathbf E<-\frac{r(r-1)}2\delta$, where $r=\rk\mathbf E$. Then $\mathbf E\in\mathcal{B}{un}^{{\rm par},\le0}$.
\end{Lemma}
\begin{proof}
Let $\tau_1>\tau_2>\dots>\tau_m$ be the $\sigma$-HN-spectrum of $\mathbf E$. Denote by $r_i$ the jumps of the ranks of the $\sigma$-HN-filtration. We may assume that $\tau_1\ge0$, since otherwise there is nothing to prove. By Lemma~\ref{lm:gap} we have $\tau_i\ge\tau_1-(i-1)\delta$. We have
\[
-\frac{r-1}2\delta>\frac{\deg_{1,\sigma}\mathbf E}r=\frac{\sum\limits_{i=1}^m\tau_ir_i}r\ge\frac{\sum\limits_{i=1}^m(\tau_1-(i-1)\delta)r_i}r\ge
\frac mr\tau_1-\frac{r-1}2\delta
\]
and the statement follows.
\end{proof}
Set $|\sigma|:=\sum\limits_{x\in D}(\sup_i\sigma_{x,i}-\sigma_{x,1})$. We remark that we always have $|\sigma|\le\deg D$. We have an analogue of~\cite[Corollary~3.3]{MozgovoySchiffmanOnHiggsBundles}.
\begin{Lemma}\label{lm:ss=ss>=0}
Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$ be such that $d<-r|\sigma|-\frac{r(r-1)}2\delta$. Then
\[
\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)=\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss},-}(\zeta).
\]
\end{Lemma}
\begin{proof}
For all $x$ and $i$ replace $\sigma_{x,i}$ by $\sigma_{x,i}-\sigma_{x,1}$. This does not change $|\sigma|$ and the notion of semistability but we now have $\sigma_{x,i}\ge0$ for all $x$ and $i$. Next
\begin{equation}\label{eq:|sigma|}
\deg_{1,\sigma}\gamma=d+\sum_x\sum_{i=1}^\infty\sigma_{x,i}r_{x,i}\le d+
\sum_x\Big(\sup_i\sigma_{x,i}\Big)\left(\sum_{i=1}^\infty r_{x,i}\right)=d+r|\sigma|.
\end{equation}
Let $(\mathbf E,\Phi)\in\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)$. By~\eqref{eq:|sigma|} we have{\samepage
\[
\deg_{1,\sigma}\gamma<-\frac{r(r-1)}2\delta.
\]
By Lemma~\ref{lm:MozSchif} we have $\mathbf E\in\mathcal{B}{un}^{{\rm par},\le0}\subset\mathcal{B}{un}^{{\rm par},-}$ (the last inclusion follows from $\sigma_{x,i}\ge0$).}
Conversely, assume that $(\mathbf E,\Phi)\in\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss},-}(\zeta)$. Assume for contradiction that $(\mathbf E,\Phi)$ is not $\sigma$-semistable. Then by Proposition~\ref{pr:HN3}(i) we have an exact sequence $0\to\mathbf E'\to\mathbf E\to\mathbf E''\to0$ in~$\mathcal{B}{un}^{\rm par}$ such that~$\Phi$ preserves $\mathbf E'$, and $(\mathbf E'',\Phi'')$ is $\sigma$-semistable, where $\Phi''$ is the induced Higgs field. Using~\eqref{eq:|sigma|} we get
\[
\frac{\deg_{1,\sigma}\mathbf E''}{\rk\mathbf E''}<\frac{\deg_{1,\sigma}\mathbf E}{\rk\mathbf E}\le\frac{d+r|\sigma|}r<-\frac{r-1}2\delta\le
-\frac{\rk\mathbf E''-1}2\delta.
\]
Now it follows from Lemma~\ref{lm:MozSchif} that $\mathbf E''\in\mathcal{B}{un}^{{\rm par},\le0}$.
\looseness=1 Write $\mathbf E''=(E'',E''_{\bullet,\bullet})$. Since $(\mathbf E,\Phi)$ is $\sigma$-nonpositive-semistable, $E''$ cannot be nonpositive. Thus there is $E'''\subset E''$ with $\deg E'''>0$. Let $\mathbf E'''=(E''',E'''_{\bullet,\bullet})$ be the corresponding parabolic subbundle of $\mathbf E''$. Then $\deg_{1,\sigma}\mathbf E'''\ge\deg E'''>0$, which gives a contradiction with $\mathbf E''\in\mathcal{B}{un}^{{\rm par},\le0}$.
\end{proof}
Set $\mathbf1=(0,0_{\bullet,\bullet},1)\in\Gamma$ (here $0_{\bullet,\bullet}$ is the sequence of zeroes indexed by $D\times\mathbb{Z}_{>0}$). If $\gamma\in\Gamma_+$ and $\gamma\ne0$, then $\gamma+N\mathbf1\in\Gamma_+$ for all $N\in\mathbb{Z}$.
\begin{Corollary}\label{cor:Stabilization}
Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, $\gamma\ne0$, and $N>|\sigma|+\frac{r-1}2\delta+d/r$. Then
\[
\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\simeq\mathcal{H}{iggs}_{\gamma-Nr\mathbf1}^{\sigma-{\rm ss},-}(\zeta).
\]
\end{Corollary}
\begin{proof}
Since $X$ has a divisor of degree one, it has a line bundle of degree $N$. Tensorisation with this line bundle gives $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\simeq\mathcal{H}{iggs}_{\gamma-Nr\mathbf1}^{\sigma-{\rm ss}}(\zeta)$.
Now Lemma~\ref{lm:ss=ss>=0} completes the proof.
\end{proof}
Recall from Definition~\ref{def:DT} the Donaldson--Thomas invariants $\overline B_\gamma\in\cMot(\mathbf{k})$. For each $\tau\in\mathbb{R}$ define the elements $H_\gamma(\zeta,\sigma)\in\cMot(\mathbf{k})$, where $\gamma\in\Gamma_+$ is such that the $(1,\sigma)$-slope of $\gamma$ is $\tau$ (or $\gamma=0$), by the following formula.
\begin{equation}\label{eq:ExplAnswer}
\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{0,\zeta}\gamma=0\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}H_\gamma(\zeta,\sigma)e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{0,\zeta}\gamma=0\\ \deg_{1,\sigma}\gamma=\tau\rk\gamma}}
\overline B_\gamma e_\gamma
\right).
\end{equation}
Thus $H_\gamma(\zeta,\sigma)$ is defined for all $\gamma$ such that $\deg_{0,\zeta}\gamma=0$. Note that $H_0(\zeta,\sigma)=1$. Now we can formulate our first main result.
\begin{Theorem}\label{th:ExplAnsw} Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, $\gamma\ne0$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The elements $H_\gamma(\zeta,\sigma)$ are periodic in the following sense: for $d<-|\sigma|-\frac{r-1}2\delta$ we have $H_\gamma(\zeta,\sigma)=H_{\gamma-r\mathbf1}(\zeta,\sigma)$.
\item[$(ii)$] The stack $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)$ is of finite type and we have in $\cMot(\mathbf{k})$
\begin{equation}\label{eq:ThExpl1}
\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]=H_{\gamma-Nr\mathbf1}(\zeta,\sigma)
\end{equation}
whenever $N$ is large enough, provided that $\deg_{0,\zeta}\gamma=0$ {\rm(}it suffices to take $N>|\sigma|+\frac{r-1}2\delta+d/r${\rm)}. If $\deg_{0,\zeta}\gamma\ne0$, then the stack is empty.
\end{enumerate}
\end{Theorem}
\begin{proof}
For part~(ii) combine Corollary~\ref{cor:Stabilization} with Proposition~\ref{pr:expl-ss>=0}. Part~(i) is clear from \linebreak part~(ii).
\end{proof}
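\begin{Remark}
For instance, let $r=1$. A parabolic line bundle has no strict parabolic subbundles of intermediate rank, so condition~\eqref{eq:ss} is vacuous and every rank one parabolic Higgs bundle is $\sigma$-semistable. Part~(ii) of the theorem then reads
\[
\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]=H_{\gamma-N\mathbf1}(\zeta,\sigma)
\qquad\text{for any }N>|\sigma|+d,
\]
provided that $\deg_{0,\zeta}\gamma=0$.
\end{Remark}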
An immediate corollary of the above theorem and formula~\eqref{eq:ExplAnswer} is the following curious observation.
\begin{Corollary}\label{cor:EqualMot}
Assume that we are given $\gamma\in\Gamma_+$, sets of eigenvalues $\zeta$ and $\zeta'$, and sequences of parabolic weights $\sigma$, $\sigma'$. Let $\tau$ and $\tau'$ be the $(1,\sigma)$- and $(1,\sigma')$-slopes of $\gamma$ respectively. Assume also that
\begin{gather*}
\{\gamma'\in\Gamma_+\colon \deg_{0,\zeta}\gamma'=0, \deg_{1,\sigma}\gamma'=\tau\rk\gamma\}=
\{\gamma'\in\Gamma_+\colon \deg_{0,\zeta'}\gamma'=0, \deg_{1,\sigma'}\gamma'=\tau'\rk\gamma\}.\!
\end{gather*}
Then we have an equality of motivic classes
\[
\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]=\big[\mathcal{H}{iggs}_\gamma^{\sigma'-{\rm ss}}(\zeta')\big].
\]
\end{Corollary}
\subsection[Case of $\P^1$]{Case of $\boldsymbol{\P^1}$}\label{sect:StabP1} If $X=\P^1$, we obtain simpler and more precise results. Namely, if we define elements $H_\gamma(\zeta,\sigma)$ by the same formula~\eqref{eq:ExplAnswer} but with $B_\gamma$ instead of $\overline B_\gamma$, then~\eqref{eq:ThExpl1} holds in $\Mot(\mathbf{k})$.
\section{Motivic classes of parabolic connections}\label{sect:Conn}
\subsection{Stacks of parabolic connections} Let $X$ and $D$ be as above. Our goal in this section is to calculate the motivic classes of the moduli stacks of parabolic bundles with connections with prescribed eigenvalues of residues. In Section~\ref{sect:StabConn} we put stability conditions on these moduli stacks and calculate the motivic classes of substacks of semistable parabolic bundles with connections. Our argument is similar to the argument for Higgs bundles.
Let $E$ be a vector bundle on $X$. A \emph{connection} on $E$ with \emph{poles bounded by $D$} is a morphism of sheaves of abelian groups $\nabla\colon E\to E\otimes\Omega_X(D)$ satisfying the Leibniz rule. In this case for $x\in D$ one defines the residue of the connection $\res_x\nabla\in\End(E_x)$.
\begin{Definition}
A \emph{parabolic connection} of type $(X,D)$ is a triple $(E,E_{\bullet,\bullet},\nabla)$, where $(E,E_{\bullet,\bullet})$ is a point of $\mathcal{B}{un}^{\rm par}(X,D)$, $\nabla\colon E\to E\otimes\Omega_X(D)$ is a connection on $E$ such that for all $x\in D$ and $j\ge0$ we have $(\res_x\nabla)(E_{x,j})\subset E_{x,j}$.
\end{Definition}
We denote the category (and the Artin stack) of parabolic connections by $\mathcal{C}{onn}\!=\!\mathcal{C}{onn}(X,D)$. In this section $X$ and $D$ are fixed, so we omit them from the notation.
Recall that $\mathbf{k}[D\times\mathbb{Z}_{>0}]$ is the set of all $\mathbf{k}$-valued sequences $\zeta=\zeta_{\bullet,\bullet}=(\zeta_{x,j})$ indexed by $D\times\mathbb{Z}_{>0}$ such that $\zeta_{x,j}=0$ for $j\gg0$. For $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ let $\mathcal{C}{onn}(\zeta)=\mathcal{C}{onn}(X,D,\zeta)$ denote the full subcategory of $\mathcal{C}{onn}$ (and its stack of objects) corresponding to collections $(E,E_{\bullet,\bullet},\nabla)$ such that $(\res_x\nabla-\zeta_{x,j}1)(E_{x,{j-1}})\subset E_{x,j}$ for all $x\in D$ and $j>0$. We call the points of $\mathcal{C}{onn}(\zeta)$ the parabolic bundles with connections \emph{with eigenvalues $\zeta$.}
Let $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ and let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$. Recall from Section~\ref{sect:DegreeSlope} that
\[
\deg_{1,\zeta}\gamma=d+\sum_{x\in D}\sum_{j=1}^{\infty}\zeta_{x,j}r_{x,j}\in\mathbf{k}.
\]
The following lemma is~\cite[Theorem~7.1]{Crawley-Boevey:Indecomposable} if $\mathbf{k}=\mathbb C$. The proof in the general case is completely similar.
\begin{Lemma}\label{lm:existence2}
Let $\mathbf E\in\mathcal{B}{un}^{\rm par}(\mathbf{k})$ and $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$. There exists an object $(\mathbf E,\nabla)\in\mathcal{C}{onn}(\zeta)(\mathbf{k})$ if and only if $\deg_{1,\zeta}\mathbf E'=0$ for any direct summand $\mathbf E'$ of $\mathbf E$.
\end{Lemma}
Note that, in particular, for every $(\mathbf E,\nabla)\in\mathcal{C}{onn}(\zeta)(\mathbf{k})$ we have $\deg_{1,\zeta}\mathbf E=0$.
Recall from Section~\ref{sect:Isoslopy} the notion of $(\kappa,\zeta)$-isoslopy parabolic bundle and the stacks $\mathcal{P}{air}^{(\kappa,\zeta)-{\rm iso}}_\gamma$. Recall also that for $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, we have set $\chi(\gamma):=(g-1)r^2+\sum\limits_{x\in D}\sum\limits_{j<j'}r_{x,j}r_{x,j'}$.
\begin{Proposition}\label{pr:Sasha2}
We have
\[
[\mathcal{C}{onn}_\gamma(\zeta)]=
\begin{cases}
\mathbb{L}^{\chi(\gamma)}\big[\mathcal{P}{air}^{(1,\zeta)-{\rm iso}}_\gamma\big]& \text{if }\deg_{1,\zeta}\gamma=0,\\
0 & \text{otherwise}.
\end{cases}
\]
\end{Proposition}
\begin{proof}
The proof is completely analogous to the proof of Proposition~\ref{pr:Sasha} with Lemma~\ref{lm:existence} replaced by Lemma~\ref{lm:existence2}.
\end{proof}
\subsection{Stabilization of isoslopy parabolic bundles}\label{sect:Stabilization2}
As in Section~\ref{sect:Stabilization} we will be assuming that $D\ne\varnothing$. Recall that this implies that $X$ has a $\mathbf{k}$-rational divisor of degree one. As before, set $\delta:=\max(2g-2+\deg D,0)$.
Recall that every vector bundle $E$ on $X$ has a unique HN-filtration and the slopes of the quotients form a sequence called the HN-spectrum of $E$. We start with the following analogue of~\cite[Lemma~4.1]{MozgovoySchiffmanOnHiggsBundles}.
\begin{Lemma}\label{lm:gap2}
Let $\mathbf E=(E,E_{\bullet,\bullet})\in\mathcal{B}{un}^{\rm par}$ be a parabolic bundle such that $E$ has HN-spectrum $\tau_1>\tau_2>\dots>\tau_m$. Assume that for some $i\in\{1,\dots,m-1\}$ we have $\tau_i-\tau_{i+1}>\delta$. Then $\mathbf E$ is decomposable.
\end{Lemma}
\begin{proof}One shows that the extensions of a parabolic bundle $\mathbf E''$ by a parabolic bundle $\mathbf E'$ (in the sense of Section~\ref{sect:Subobjects}) are classified by a vector space $\Ext^1(\mathbf E'',\mathbf E')$ dual to $\Hom\bigl(\mathbf E',\mathbf E''(\Omega_X(D))\bigr)$. Let $0=E_0\subset E_1\subset\dots\subset E_m=E$ be the Harder--Narasimhan filtration of $E$. Let $\mathbf E_i$ be the strict parabolic subbundle with the underlying vector bundle $E_i$. We have an exact sequence $0\to\mathbf E_i\to\mathbf E\to\mathbf E/\mathbf E_i\to0$. Note that by the assumption the Harder--Narasimhan spectrum of $E_i$ is contained in $[\tau_i,\infty)$, while the Harder--Narasimhan spectrum of $(E/E_i)(\Omega_X(D))$ is contained in $(-\infty,\tau_i)$. It follows that $\Hom\bigl(E_i,(E/E_i)(\Omega_X(D))\bigr)=0$. Thus
\[
\Ext^1(\mathbf E/\mathbf E_i,\mathbf E_i)=\Hom\bigl(\mathbf E_i,(\mathbf E/\mathbf E_i)(\Omega_X(D))\bigr)^\vee=0.
\]
Thus $\mathbf E\simeq\mathbf E_i\oplus(\mathbf E/\mathbf E_i)$ is decomposable.
\end{proof}
Next, we have an analogue of~\cite[Corollary~4.2]{MozgovoySchiffmanOnHiggsBundles} whose proof is similar to loc.~cit.~and to that of Lemma~\ref{lm:MozSchif}.
\begin{Lemma}\label{lm:MozSchif2}
Let $\mathbf E\in\mathcal{B}{un}^{\rm par}$ be indecomposable and $\cl(\mathbf E)=(r,r_{\bullet,\bullet},d)$. Assume that $d<-\frac{r(r-1)}2\delta$. Then $\mathbf E\in\mathcal{B}{un}^{{\rm par},-}$.
\end{Lemma}
\begin{proof}
Write $\mathbf E=(E,E_{\bullet,\bullet})$. Let $\tau_1>\tau_2>\dots>\tau_m$ be the HN-spectrum of $E$. Denote by $r_i$ the jumps of the ranks of the HN-filtration. We may assume that $\tau_1\ge0$, since otherwise there is nothing to prove. By Lemma~\ref{lm:gap2} we have $\tau_i\ge\tau_1-(i-1)\delta$. We have
\[
-\frac{r-1}2\delta>\frac dr=\frac{\sum\limits_{i=1}^m\tau_ir_i}r\ge\frac{\sum\limits_{i=1}^m(\tau_1-(i-1)\delta)r_i}r\ge
\frac mr\tau_1-\frac{r-1}2\delta
\]
and the statement follows.
\end{proof}
Fix $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$. Let $|\bullet|$ be any norm on the $\mathbb{Q}$-vector subspace of $\mathbf{k}$ generated by the components of~$\zeta$. If~$\mathbf{k}$ is embedded into $\mathbb C$, we can take the usual absolute value for $|\bullet|$. We set $|\zeta|:=\sum\limits_{x\in D}(\max_i|\zeta_{x,i}|)$. We have an analogue of~\cite[Lemma~3.2.3(i)]{FedorovSoibelmans} (cf.~also Lemma~\ref{lm:ss=ss>=0}).
\begin{Lemma}\label{lm:ss=ss>=02}
Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$ be such that $d<-2r|\zeta|-\frac{r(r-1)}2\delta$. Then
\[
\mathcal{B}{un}_\gamma^{{\rm par},(1,\zeta)-{\rm iso}}=\mathcal{B}{un}_\gamma^{{\rm par},(1,\zeta)-{\rm iso},-}.
\]
\end{Lemma}
\begin{proof}
Take $\mathbf E=(E,E_{\bullet,\bullet})\in\mathcal{B}{un}_\gamma^{{\rm par},(1,\zeta)-{\rm iso}}$. Assume for a contradiction that $\mathbf E\notin\mathcal{B}{un}^{{\rm par},-}$. By Lemma~\ref{lm:MozSchif2}, $\mathbf E$ is decomposable. Let $\mathbf E'$ be an indecomposable summand of $\mathbf E$ such that $\mathbf E'\notin\mathcal{B}{un}^{{\rm par},-}$. By the definition of isoslopy bundles, we have $\frac{\deg_{1,\zeta}\mathbf E'}{\rk\mathbf E'}=\frac{\deg_{1,\zeta}\mathbf E}r$. Write $\cl(\mathbf E')=(r',r'_{\bullet,\bullet},d')$. We have
\[
\frac{d'}{r'}\le\frac{\deg_{1,\zeta}\mathbf E'}{\rk\mathbf E'}+|\zeta|=\frac{\deg_{1,\zeta}\mathbf E}r+|\zeta|\le\frac dr+2|\zeta|<
-\frac{(r-1)}2\delta\le-\frac{(r'-1)}2\delta
\]
and Lemma~\ref{lm:MozSchif2} gives a contradiction.
\end{proof}
Recall that we have $\mathbf1=(0,0_{\bullet,\bullet},1)\in\Gamma$.
\begin{Corollary}\label{cor:ExplAnswer2}
Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, $\gamma\ne0$, and $N>2|\zeta|+\frac{r-1}2\delta+d/r$. Then $\mathcal{P}{air}_\gamma^{(1,\zeta)-{\rm iso}}\simeq\mathcal{P}{air}_{\gamma-Nr\mathbf1}^{(1,\zeta)-{\rm iso},-}$. In particular, $\mathcal{P}{air}_\gamma^{(1,\zeta)-{\rm iso}}$ is a constructible subset of $\mathcal{P}{air}_\gamma$ of finite type.
\end{Corollary}
\begin{proof}
Since $X$ has a divisor of degree one, it has a line bundle of degree $N$. Tensorisation with this line bundle gives
$\mathcal{P}{air}_\gamma^{(1,\zeta)-{\rm iso}}\simeq\mathcal{P}{air}_{\gamma-Nr\mathbf1}^{(1,\zeta)-{\rm iso}}$. Now Lemma~\ref{lm:ss=ss>=02} completes the proof.
\end{proof}
For each $\tau\in\mathbf{k}$, define the elements $C_\gamma(\zeta)\in\cMot(\mathbf{k})$, where $\gamma\in\Gamma_+$ is such that $\deg_{1,\zeta}\gamma=\tau\rk\gamma$ (as $\tau$ ranges over $\mathbf{k}$, this defines $C_\gamma(\zeta)$ for all $\gamma\in\Gamma_+$), by the following formula (cf.~\eqref{eq:ExplAnswer})
\begin{equation}\label{eq:ExplAnswer2}
\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}C_\gamma(\zeta)e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma}}
\overline B_\gamma e_\gamma
\right).
\end{equation}
Now we can formulate our second main result. Recall that $\mathcal{C}{onn}_\gamma(\zeta)=\varnothing$ unless $\deg_{1,\zeta}\gamma=0$.
\begin{Theorem}\label{th:ExplAnsw2} Let $\zeta$ be an element of $\mathbf{k}[D\times\mathbb{Z}_{>0}]$ and let $|\bullet|$ be a norm on the $\mathbb{Q}$-vector subspace of $\mathbf{k}$ generated by the components of~$\zeta$. Let $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma_+$, $\gamma\ne0$, and $\deg_{1,\zeta}\gamma=0$. Then
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The elements $C_\gamma(\zeta)$ are periodic in the following sense: for $d<-2|\zeta|-\frac{r-1}2\delta$ we have $C_\gamma(\zeta)=C_{\gamma-r\mathbf1}(\zeta)$.
\item[$(ii)$]
The stack $\mathcal{C}{onn}_\gamma(\zeta)$ is of finite type and we have
\[
[\mathcal{C}{onn}_\gamma(\zeta)]=C_{\gamma-Nr\mathbf1}(\zeta),
\]
whenever $N$ is large enough {\rm(}it suffices to take $N>2|\zeta|+\frac{r-1}2\delta+d/r${\rm)}.
\end{enumerate}
\end{Theorem}
\begin{proof}
Combine Proposition~\ref{pr:Sasha2}, Corollary~\ref{cor:ExplAnswer2}, and Corollary~\ref{cor:isoslopy}.
\end{proof}
\subsubsection[Case of $\P^1$]{Case of $\boldsymbol{\P^1}$}\label{sect:StabP12} If $X=\P^1$, we obtain simpler and more precise results. Define $B_\gamma\in\Mot(\mathbf{k})$ by~\eqref{eq:DT_P1}. Define $C_\gamma(\zeta)\in\Mot(\mathbf{k})$ by the same formula~\eqref{eq:ExplAnswer2} but with $\overline B_\gamma$ replaced by $B_\gamma$. Then Theorem~\ref{th:ExplAnsw2} holds in $\Mot(\mathbf{k})$.
\subsection{Stability conditions for bundles with connections}\label{sect:StabConn} Recall that in Definition~\ref{def:StabilityCond} we defined the notion of a sequence of parabolic weights. For non-resonant connections one can work with more general sequences of parabolic weights. Let us give the definitions.
\begin{Definition}
We say that $\zeta=\zeta_{\bullet,\bullet}\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ is \emph{non-resonant} if for all $x\in X$ and all $i,j>0$ we have $\zeta_{x,i}-\zeta_{x,j}\notin\mathbb{Z}_{\ne0}$.
\end{Definition}
The importance of this definition is in the following lemma.
\begin{Lemma}\label{lm:non-resonant}
Let $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ be non-resonant and let $\phi$ be a morphism in $\mathcal{C}{onn}(\zeta)$ such that the underlying morphism of vector bundles is generically an isomorphism. Then the underlying morphism of vector bundles is an isomorphism.
\end{Lemma}
\begin{proof}
One easily reduces to the case $\mathbf{k}=\mathbb C$. Take $x\in D$. Since $\zeta$ is non-resonant, one can find a subset $\Omega$ of $\mathbb C$ containing $\{\zeta_{x,j}|j>0\}$ and such that the exponential function induces a bijection between $\Omega$ and $\mathbb C-0$. Then it is well-known that every regular connection on the punctured formal disc has a unique extension to the puncture such that the eigenvalues of the residues are in $\Omega$. The statement follows.
\end{proof}
Define the space of extended sequences of parabolic weights $\Stab'$ as the set of pairs $(\kappa,\sigma)$, where $\kappa\in\mathbb{R}_{\ge0}$, $\sigma=\sigma_{\bullet,\bullet}$ is a sequence of real numbers, indexed by $D\times\mathbb{Z}_{>0}$, such that for all $x\in D$ we have~\eqref{eq:StabCond2}.
\begin{Definition}
Let $(\kappa,\sigma)\in\Stab'$. A parabolic connection $(\mathbf E,\nabla)$ is \emph{$(\kappa,\sigma)$-semistable} if for all strict parabolic subbundles $\mathbf E'\subset\mathbf E$ preserved by $\nabla$ we have
\begin{equation*
\frac{\deg_{\kappa,\sigma}\mathbf E'}{\rk\mathbf E'}\le\frac{\deg_{\kappa,\sigma}\mathbf E}{\rk\mathbf E}.
\end{equation*}
\end{Definition}
We denote by $\mathcal{C}{onn}^{(\kappa,\sigma)-{\rm ss}}(\zeta)$ the substack of $\mathcal{C}{onn}(\zeta)$ corresponding to $(\kappa,\sigma)$-semistable connections. An argument similar to~\cite[Lemma~3.7]{Simpson1} shows that this is an open substack of $\mathcal{C}{onn}(\zeta)$.
\begin{Proposition}\label{pr:HN2} Assume that $(\kappa,\sigma)\in\Stab'$. Assume also that either $\zeta$ is non-resonant, or $\kappa=1$, $\sigma\in\Stab$. If $(\mathbf E,\nabla)\in\mathcal{C}{onn}(\zeta)$, then there is a unique filtration $0=\mathbf E_0\subset\mathbf E_1\subset\dots\subset\mathbf E_m=\mathbf E$ by strict parabolic subbundles preserved by $\nabla$ such that all the quotients $\mathbf E_i/\mathbf E_{i-1}$ with induced connections are $(\kappa,\sigma)$-semistable parabolic bundles with connections and we have $\tau_1>\dots>\tau_m$, where $\tau_i$ is the $(\kappa,\sigma)$-slope of $\mathbf E_i/\mathbf E_{i-1}$.
\end{Proposition}
\begin{proof}
In the non-resonant case the proof is completely analogous to the proof of~\cite[Section~1.3]{HarderNarasimhan} in view of Lemma~\ref{lm:MorPar} and the following lemma.
\begin{Lemma}
Let $\zeta$ be non-resonant and let $\mathbf E\to\mathbf F$ be a morphism in $\mathcal{C}{onn}(\zeta)$, which is generically an isomorphism. Then $\deg_{\kappa,\sigma}\mathbf E\le\deg_{\kappa,\sigma}\mathbf F$.
\end{Lemma}
\begin{proof}
Write $\mathbf E=(E,E_{\bullet,\bullet})$ and $\mathbf F=(F,F_{\bullet,\bullet})$. Let $\phi\colon E\to F$ be the underlying morphism of vector bundles. By Lemma~\ref{lm:non-resonant}, $\phi$ is an isomorphism. Thus $\dim E_{x,j}\le\dim F_{x,j}$ for all~$x$ and~$j$. Therefore
\begin{align*}
\deg_{\kappa,\sigma}\mathbf E& =\kappa\deg E+\sum_{x,j>0}\sigma_{x,j}(\dim E_{x,j-1}-\dim E_{x,j}) \\
& = \kappa\deg E+\sum_{x\in D}\left(\sigma_{x,1}\rk E+\sum_{i>0}(\sigma_{x,i+1}-\sigma_{x,i})\dim E_{x,i}\right)\\
& \le \kappa\deg F+\sum_{x\in D}\left(\sigma_{x,1}\rk F+\sum_{i>0}(\sigma_{x,i+1}-\sigma_{x,i})\dim F_{x,i}\right)=\deg_{\kappa,\sigma}\mathbf F.\tag*{\qed}
\end{align*}\renewcommand{\qed}{}
\end{proof}
In the resonant case, the proof is completely analogous to the proof of~\cite[Section~1.3]{HarderNarasimhan} in view of Lemma~\ref{lm:ModifDegree} (cf.\ Propositions~\ref{pr:HN} and~\ref{pr:HN3}).
\end{proof}
\begin{Remark}\label{rm:resonant}
More generally, if $\zeta$ is resonant, one can work with any $(\kappa,\sigma)\in\Stab'$ such that $\sigma_{x,j}-\sigma_{x,1}\le\kappa$ for all $x$ and $j$. However, the notion of stability does not change if we scale~$(\kappa,\sigma)$. Thus, we can always assume that $\kappa=1$, in which case $\sigma\in\Stab$, or $\kappa=0$, in which case $\sigma=0$. The latter case corresponds to the trivial stability condition; the corresponding motivic class has been calculated in Theorem~\ref{th:ExplAnsw2}.
\end{Remark}
Similarly to Proposition~\ref{pr:expl-ss>=0} the Kontsevich--Soibelman factorization formula implies the following proposition.
\begin{Proposition}\label{pr:expl-ss>=02} Let $(\kappa,\sigma)\in\Stab'$. Assume that either $\zeta$ is non-resonant or $\kappa=1$ and $\sigma\in\Stab$. Then we have in $\Mot(\mathbf{k})[[\Gamma_+]]$
\begin{equation}\label{eq:KS2}
\sum_{\gamma\in\Gamma_+}\mathbb{L}^{-\chi(\gamma)}[\mathcal{C}{onn}_\gamma(\zeta)]e_\gamma=
\prod_{\tau\in\mathbb{R}}\left(
\sum_{\substack{\gamma\in\Gamma_+\\ \deg_{\kappa,\sigma}\gamma=\tau\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}\big[\mathcal{C}{onn}^{(\kappa,\sigma)-{\rm ss}}_\gamma(\zeta)\big]e_\gamma
\right).
\end{equation}
\end{Proposition}
Define the elements $C_\gamma(\zeta,\kappa,\sigma)\in\cMot(\mathbf{k})$, where $\gamma$ ranges over $\Gamma_+$ by the following formula (cf.~\eqref{eq:ExplAnswer}) valid for any $\tau\in\mathbf{k}$, $\tau'\in\mathbb{R}$
\begin{equation}\label{eq:ExplAnswer3}
\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma\\ \deg_{\kappa,\sigma}\gamma=\tau'\rk\gamma }}\mathbb{L}^{-\chi(\gamma)}C_\gamma(\zeta,\kappa,\sigma)e_\gamma=
\Exp\left(\sum_{\substack{\gamma\in\Gamma_+'\\ \deg_{1,\zeta}\gamma=\tau\rk\gamma\\ \deg_{\kappa,\sigma}\gamma=\tau'\rk\gamma }}
\overline B_\gamma e_\gamma
\right).
\end{equation}
Note that we have $C_\gamma(\zeta,0,0)=C_\gamma(\zeta)$, where $C_\gamma(\zeta)$ are defined in~\eqref{eq:ExplAnswer2}. Now we can formulate our third main result.
\begin{Theorem}\label{th:ExplAnsw3} Assume that $\gamma\in\Gamma_+$, $\gamma\ne0$, and $\deg_{1,\zeta}\gamma=0$. Then for $N>3|\zeta|+(\rk\gamma-1)\delta/2$ we have
\[
\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]=C_{\gamma-Nr\mathbf1}(\zeta,\kappa,\sigma).
\]
\end{Theorem}
We note that if $\kappa=0$ and $\sigma=0$, then this theorem is essentially Theorem~\ref{th:ExplAnsw2}.
\begin{proof}
We defined $C_\gamma(\zeta,\kappa,\sigma)$ when $\gamma\in\Gamma_+'$. It will be convenient for us to extend this notation to the case when
$\gamma$ is a multiple of $\mathbf1$ by setting $C_{n\mathbf1}(\zeta,\kappa,\sigma)=1$. We will prove that $\big[\mathcal{C}{onn}_\beta^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]=C_{\beta-Nr\mathbf1}(\zeta,\kappa,\sigma)$ whenever $\beta\in\Gamma_+$, $\deg_{1,\zeta}\beta=0$, and $N>3|\zeta|+(\rk\beta-1)\delta/2$ by induction on $\rk\beta$. The base case of $\rk\beta=0$ is obvious. Note that replacing $\gamma$ with $\gamma-N(\rk\gamma)\mathbf1$ shifts the $(\kappa,\sigma)$-slopes by $\kappa N$. Consider the product in the RHS of~\eqref{eq:KS2} and
\begin{equation}\label{eq:2}
\prod_{\tau\in\mathbb{R}}\left(
\sum_{\substack{\gamma\in\Gamma_+\\ \deg_{\kappa,\sigma}\gamma=\tau\rk\gamma\\ \deg_{1,\zeta}\gamma=N\rk\gamma}}\mathbb{L}^{-\chi(\gamma)}C_{\gamma-N\rk\gamma\mathbf1}(\zeta,\kappa,\sigma)e_\gamma
\right).
\end{equation}
We note that this product makes sense because we set $C_{-N\rk\gamma\mathbf1}(\zeta,\kappa,\sigma)=1$ at the beginning of the proof. Note also that $\deg_{1,\zeta}\beta=0$ implies that $d/r\le|\zeta|$, where $\beta=(r,r_{\bullet,\bullet},d)$, so $N>2|\zeta|+(r-1)\delta/2+d/r$. Using~\eqref{eq:KS2} and Theorem~\ref{th:ExplAnsw2} we easily see that the coefficients of these products at $e_\beta$ are both equal to $\mathbb{L}^{-\chi(\beta)}[\mathcal{C}{onn}_\beta(\zeta)]$. Expanding the product in the RHS of~\eqref{eq:KS2}, we see that the coefficient at $e_\beta$ is equal to
\[
\mathbb{L}^{-\chi(\beta)}\big[\mathcal{C}{onn}_\beta^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]+
\sum_{\beta_1,\dots,\beta_n}\prod_{i=1}^n\mathbb{L}^{-\chi(\beta_i)}\big[\mathcal{C}{onn}_{\beta_i}^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big],
\]
where the summation is over all decompositions of $\beta$ into the sum of $n\ge2$ non-zero elements of~$\Gamma_+$ such that $\deg_{1,\zeta}\beta_i=0$ and $\deg_{\kappa,\sigma}\beta_i=\tau\rk\beta_i$. Similarly, expanding the product~\eqref{eq:2}, we see that the coefficient at $e_\beta$ is equal to
\[
\mathbb{L}^{-\chi(\beta)}C_{\beta-N\rk\beta\mathbf1}(\zeta,\kappa,\sigma)+
\sum_{\beta_1,\dots,\beta_n}\prod_{i=1}^n\mathbb{L}^{-\chi(\beta_i)}C_{\beta_i-N\rk\beta_i\mathbf1}(\zeta,\kappa,\sigma).
\]
It remains to show that the respective terms in the sums are equal. Clearly, we have $N>3|\zeta|+(\rk\beta_i-1)\delta/2$ and the statement follows from the induction hypothesis.
\end{proof}
As usual, if $X=\P^1$ we obtain similar formulas valid in $\Mot(\mathbf{k})$ by replacing $\overline B_\gamma$ with $B_\gamma$ in~\eqref{eq:ExplAnswer3}.
\section[Equalities of motivic classes and non-emptiness of moduli stacks]{Equalities of motivic classes and non-emptiness\\ of moduli stacks}\label{sect:NonEmpty}
\subsection{Equalities between motivic classes of stacks}
In this section we will give a criterion for non-emptiness of the stacks $\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)$ or $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)$. For the stacks $\mathcal{C}{onn}_\gamma(\zeta)$ such a criterion follows easily from~\cite{Crawley-Boevey:Indecomposable,CrawleyBoeveyIndecompPar}. We reduce the case when a stability condition is present to the case of $\mathcal{C}{onn}_\gamma(\zeta)$ using some equalities between motivic classes and Proposition~\ref{pr:NonEmpty}. We will also discuss a relation with Simpson's non-abelian Hodge theory.
It follows from Theorem~\ref{th:ExplAnsw3} that the motivic class of $\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)$ depends only on the submonoid of $\Gamma_+$ given by the equations $\deg_{1,\zeta}\gamma'=0$, $\deg_{\kappa,\sigma}\gamma'=\tau\rk\gamma'$, where $\tau=\deg_{\kappa,\sigma}\gamma/\rk\gamma$. Using this fact, one can give a lot of examples of seemingly unrelated moduli stacks $\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)$ having the same motivic class. An analogous statement for moduli spaces of parabolic Higgs bundles is the content of Corollary~\ref{cor:EqualMot} above. Finally, we can get a lot of equalities between motivic classes of parabolic Higgs bundles and motivic classes of connections. In the following proposition, we show that motivic classes of the form $[\mathcal{C}{onn}_\gamma(\zeta)]$ are universal.
This proposition should be compared with Corollary~\ref{cor:EqualMot}.
\begin{Proposition}\label{pr:ConnUniversal}
Assume that $\mathbf{k}$ is not a finite extension of $\mathbb{Q}$. Let $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$. Let $\gamma\in\Gamma_+$. Let $(\kappa,\sigma)\in\Stab'$. Assume that $\zeta$ is non-resonant or $\kappa=1$ and $\sigma\in\Stab$. Set $\tau:=\deg_{\kappa,\sigma}\gamma/\rk\gamma$. Then there is $\zeta'\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ such that
\begin{equation}\label{eq:EqualMot2}
\{\gamma'\in\Gamma_+\colon \deg_{1,\zeta}\gamma'=0, \deg_{\kappa,\sigma}\gamma'=\tau\rk\gamma'\}=
\{\gamma'\in\Gamma_+\colon \deg_{1,\zeta'}\gamma'=0\}.
\end{equation}
Moreover, we have $\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]=[\mathcal{C}{onn}_\gamma(\zeta')]$.
\end{Proposition}
\begin{proof}
Choose $x\in D$ and set $\sigma'_{y,j}=\sigma_{y,j}$ if $y\ne x$, $\sigma'_{x,j}=\sigma_{x,j}-\tau$. Since for all $(r',r'_{\bullet,\bullet},d')\in\Gamma'$ we have $\sum_jr'_{x,j}=r'$, we see that $\deg_{\kappa,\sigma}\gamma'=\tau\rk\gamma'$ if and only if $\deg_{\kappa,\sigma'}\gamma'=0$.
Now, let $U$ be the $\mathbb{Q}$-vector subspace of $\mathbb{R}$ generated by $1$ and all the numbers $\sigma'_{y,j}$, where~$y$ ranges over~$D$ and $j$ ranges over positive integers. Similarly, let $V$ be the $\mathbb{Q}$-vector subspace of~$\mathbf{k}$ generated by all the components $\zeta_{y,j}$ of $\zeta$. Since $\mathbf{k}$ is not a~finite extension of $\mathbb{Q}$, there is a~$\mathbb{Q}$-linear embedding $U\oplus V\to\mathbf{k}$. Moreover, we may assume that $(\kappa,1)$ maps to $1$. This follows from a general fact: if $L_1$ is a finite dimensional $\mathbb{Q}$-vector space, $L_2$ is an infinite dimensional $\mathbb{Q}$-vector space, $v_1$ and $v_2$ are non-zero vectors of $L_1$ and $L_2$ respectively, then there is a $\mathbb{Q}$-linear embedding $L_1\hookrightarrow L_2$ such that $v_1$ maps to $v_2$.
Next, $(\sigma'_{\bullet,\bullet},\zeta)$ is an element of $(U\oplus V)[D\times\mathbb{Z}_{>0}]$ and we define $\zeta'$ to be its image in $\mathbf{k}[D\times\mathbb{Z}_{>0}]$ under the embedding. It is clear that we have~\eqref{eq:EqualMot2}. Indeed, the equations
$\deg_{1,\zeta}\gamma'=0$, $\deg_{\kappa,\sigma'}\gamma'=0 $ can be written as the following equation in $U\oplus V$:
\[
d'(1,\kappa)+\sum_{y\in D}\sum_{j>0}(r'_{y,j-1}-r'_{y,j})(\zeta_{y,j},\sigma'_{y,j})=0,
\]
where $\gamma'=(r',r'_{\bullet,\bullet},d')$. But the image of this equation in $\mathbf{k}$ is exactly $\deg_{1,\zeta'}\gamma'=0$.
Now the equality of motivic classes follows from Theorems~\ref{th:ExplAnsw2} and~\ref{th:ExplAnsw3} (see formulas~\eqref{eq:ExplAnswer2} and~\eqref{eq:ExplAnswer3}).
\end{proof}
Similarly, we have the following proposition.
\begin{Proposition}\label{pr:ConnUniversal2}
Assume that $\mathbf{k}$ is not a finite extension of $\mathbb{Q}$. Let $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$. Let $\sigma\in\Stab$. Let $\gamma\in\Gamma_+$. Set $\tau:=\deg_{1,\sigma}\gamma/\rk\gamma$. Then there is $\zeta'\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ such that
\begin{equation}\label{eq:EqualMot3}
\{\gamma'\in\Gamma_+\colon \deg_{0,\zeta}\gamma'=0, \, \deg_{1,\sigma}\gamma'=\tau\rk\gamma'\}=
\{\gamma'\in\Gamma_+\colon \deg_{1,\zeta'}\gamma'=0\}.
\end{equation}
Moreover, we have $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]=[\mathcal{C}{onn}_\gamma(\zeta')]$.
\end{Proposition}
\begin{proof}
The proof is analogous to that of Proposition~\ref{pr:ConnUniversal} except that one finds an embedding $U\oplus V\hookrightarrow\mathbf{k}$ taking $(0,1)$ to $1$ and uses Theorem~\ref{th:ExplAnsw} and~\eqref{eq:ExplAnswer} instead of Theorem~\ref{th:ExplAnsw3} and~\eqref{eq:ExplAnswer3}.
\end{proof}
\begin{Remark}
Many motivic classes of parabolic Higgs bundles and parabolic connections are equal to classes of the form $\big[\mathcal{H}{iggs}^{\sigma-{\rm ss}}(0)\big]$, cf.~\cite[Theorem~1.2.1]{FedorovSoibelmans}. However, whether these classes are universal (that is, whether one can write every motivic class $\big[\mathcal{C}{onn}_\gamma^{(\kappa,\sigma)-{\rm ss}}(\zeta)\big]$ and $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]$ in this form) is not clear; the reason is that there are restrictions on $\sigma\in\Stab$.
\end{Remark}
\subsection{Simpson's non-abelian Hodge theory} Assume now that $\mathbf{k}=\mathbb C$. Let $\sigma\in\Stab$ and $\zeta\in\mathbb C[D\times\mathbb{Z}_{>0}]$. Let $\Re\zeta$ and $\Im\zeta$ denote the real and imaginary parts of $\zeta$. Assume that either $\sigma-2\Re\zeta\in\Stab$ or $\sigma+2\sqrt{-1}\Im\zeta$ is non-resonant. In this case, for $\gamma\in\Gamma_+$ we have moduli stacks
\[
\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\qquad \text{and}\qquad \mathcal{C}{onn}_\gamma^{(1,\sigma-2\Re\zeta)-{\rm ss}}\big(\sigma+2\sqrt{-1}\Im\zeta\big).
\]
Assume that $\deg_{1,\sigma}\gamma=0$. Then, according to the results of~\cite{SimpsonHarmonicNoncompact}, the corresponding categories are equivalent (see especially the table on p.~720 in loc.~cit.). Note also that this equivalence of categories can be upgraded to a diffeomorphism of coarse moduli spaces (cf.~\cite{Biquard1997Higgs,BiquardBoalch2004, Nakajima1996Hyperkaehler}).
\begin{Proposition}\label{pr:Simpson}
We have in $\cMot(\mathbb C)$: $\big[\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}}(\zeta)\big]=\big[\mathcal{C}{onn}_\gamma^{(1,\sigma-2\Re\zeta)-{\rm ss}}\big(\sigma+2\sqrt{-1}\Im\zeta\big)\big]$.
\end{Proposition}
\begin{proof}
Note that the system of equations $\deg_{0,\zeta}\gamma'=\deg_{1,\sigma}\gamma'=0$ is equivalent to the system of three real equations
$\deg_{0,\Re\zeta}\gamma'=\deg_{0,\Im\zeta}\gamma'=\deg_{1,\sigma}\gamma'=0$. This system is, in turn, equivalent to the system $\deg_{1,\sigma-2\Re\zeta}\gamma'=\deg_{1,\sigma+2\sqrt{-1}\Im\zeta}\gamma'=0$. It remains to use Theorems~\ref{th:ExplAnsw} and~\ref{th:ExplAnsw3}.
\end{proof}
We emphasize that this result does not follow from the diffeomorphism of coarse moduli spaces or from the equivalence of categories. Neither can the diffeomorphism of coarse moduli spaces nor the equivalence of categories be derived from our result. We would also like to mention~\cite[Theorem~4.2]{HoskinsLehalleurOnVoevodskyMotive}, where the equality of Voevodsky motives is proved in the case when parabolic structures are absent and the rank and degree are coprime.
\subsection{Indecomposable parabolic bundles and non-emptiness of moduli stacks}
\subsubsection{Indecomposable parabolic bundles}\label{sect:Indecomp} Here we recall some results of~\cite{CrawleyBoeveyIndecompPar}. Recall that $X$ is a smooth projective curve of genus $g$, $D\subset X(\mathbf{k})$ is a non-empty set. Let $\gamma\in\Gamma_+$. We would like to know whether there exists an indecomposable parabolic bundle of class $\gamma$. The following simple statement is noted in~\cite[Introduction]{Crawley-Boevey:Indecomposable}.
\begin{Lemma}\label{lm:g>0}
Assume that $\mathbf{k}$ is algebraically closed. If $g>0$, then for all $\gamma\in\Gamma_+$ there is an indecomposable parabolic bundle of class $\gamma$.
\end{Lemma}
\begin{proof}
It is well-known that there is an indecomposable vector bundle on $X$ of rank $\rk\gamma$. Now one extends it arbitrarily to a parabolic bundle of class $\gamma$.
\end{proof}
Next, let $X=\P^1$. Fix $\gamma=(r,r_{\bullet,\bullet},d)\in\Gamma'$ and choose a sequence of positive integers $w_\bullet$ indexed by $D$ such that $r_{x,j}=0$ for $j\ge w_x$. Consider the star-shaped graph $G_{w_\bullet}$ with vertices~$v_*$ and~$v_{x,j}$ where $x\in D$, $j$ is between~1 and~$w_x-1$. The vertex $v_*$ is connected to all the vertices of the form $v_{x,1}$, the vertex $v_{x,i}$ is connected to $v_{x,i\pm1}$ (see picture).
$$
\begin{tikzpicture}
\draw[fill](0,5) circle [radius=0.1];
\node [below] (02) at (0,5) {};
\node [above] (01) at (0,5) {};
\node [right] (0) at (0,5) {};
\node [left] (*) at (-0.1,5) {$v_*$};
\draw[fill](2,8) circle [radius=0.1];
\node [left] (11l) at (2,8) {};
\node [right] (11r) at (2,8) {};
\draw [thick, -] (01) -- (11l);
\draw[fill](4,8) circle [radius=0.1];
\node [left] (12l) at (4,8) {};
\node [right] (12r) at (4,8) {};
\draw [thick, -] (12l) -- (11r);
\draw[opacity=0, fill](6.25,8) circle [radius=0.1];
\node [left] (13l) at (6.25,8) {};
\draw [thick, -] (13l) -- (12r);
\draw[fill](6.5,8) circle [radius=0.05];
\draw[fill](7,8) circle [radius=0.05];
\draw[fill](7.5,8) circle [radius=0.05];
\draw[opacity=0, fill](7.75,8) circle [radius=0.1];
\node [right] (1w12r) at (7.75,8) {};
\draw[fill](10,8) circle [radius=0.1];
\node [left] (1w11l) at (10,8) {};
\node [right] (1w11r) at (10,8) {};
\draw [thick, -] (1w11l) -- (1w12r);
\draw[fill](2,6) circle [radius=0.1];
\node [above] (vx1) at (2,6.1) {$v_{x,1}$};
\node[left] (21l) at (2,6) {};
\node[right] (21r) at (2,6) {};
\draw [thick, -] (1.8,6) -- (0.3,5.1);
\draw[fill](4,6) circle [radius=0.1];
\node [above] (vx2) at (4,6.1) {$v_{x,2}$};
\node [left] (22l) at (4,6) {};
\node [right] (22r) at (4,6) {};
\draw [thick, -] (22l) -- (21r);
\draw[opacity=0, fill](6.25,6) circle [radius=0.1];
\node [left] (23l) at (6.25,6) {};
\draw [thick, -] (23l) -- (22r);
\draw[fill](6.5,6) circle [radius=0.05];
\draw[fill](7,6) circle [radius=0.05];
\draw[fill](7.5,6) circle [radius=0.05];
\draw[opacity=0, fill](7.75, 6) circle [radius=0.1];
\node [right] (2w22r) at (7.75, 6) {};
\draw[fill](10,6) circle [radius=0.1];
\node [above] (vxw) at (10,6.1) {$v_{x,w_x-1}$};
\node [left] (2w21l) at (10,6) {};
\node [right] (2w21r) at (10,6) {};
\draw [thick, -] (2w21l) -- (2w22r);
\draw[fill](2,5) circle [radius=0.05];
\draw[fill](2,4) circle [radius=0.05];
\draw[fill](2,3) circle [radius=0.05];
\draw[fill](4,5) circle [radius=0.05];
\draw[fill](4,4) circle [radius=0.05];
\draw[fill](4,3) circle [radius=0.05];
\draw[fill](10,5) circle [radius=0.05];
\draw[fill](10,4) circle [radius=0.05];
\draw[fill](10,3) circle [radius=0.05];
\draw[fill](2,2) circle [radius=0.1];
\node[left] (k1l) at (2,2) {};
\node[right] (k1r) at (2,2) {};
\draw [thick, -] (0.15,4.7) -- (k1l);
\draw[fill](4,2) circle [radius=0.1];
\node [left] (k2l) at (4,2) {};
\node [right] (k2r) at (4,2) {};
\draw [thick, -] (k2l) -- (k1r);
\draw[opacity=0, fill](6.25,2) circle [radius=0.1];
\node [left] (k3l) at (6.25,2) {};
\draw [thick, -] (k3l) -- (k2r);
\draw[fill](6.5,2) circle [radius=0.05];
\draw[fill](7,2) circle [radius=0.05];
\draw[fill](7.5,2) circle [radius=0.05];
\draw[opacity=0, fill](7.75,2) circle [radius=0.1];
\node [right] (kwk2r) at (7.75,2) {};
\draw[fill](10,2) circle [radius=0.1];
\node [left] (kwk1l) at (10,2) {};
\node [right] (kwk1r) at (10,2) {};
\draw [thick, -] (kwk1l) -- (kwk2r);
\node[] at (6,0.9) {\emph{Star-shaped graph}};
\end{tikzpicture}
$$
Consider the Kac--Moody Lie algebra $\mathfrak g_{w_\bullet}$ associated to the generalized Cartan matrix defined by this graph (see, e.g.,~\cite[Section~1]{Kac82}). Let $\Lambda_{w_\bullet}$ be the root lattice of $\mathfrak g_{w_\bullet}$; we identify it with the free abelian group generated by the set of vertices. Then $\gamma$ gives rise to an element of $\Lambda_{w_\bullet}$ given by
\[
\rho_{\gamma,w_\bullet}:=rv_*+\sum_{x\in D}\sum_{j=1}^{w_x-1}\left(\sum_{i=1}^jr_{x,i}\right)v_{x,j}.
\]
Now~\cite[p.~1334, Corollary]{CrawleyBoeveyIndecompPar} can be re-formulated as follows.
\begin{Proposition}\label{pr:ExistIndecomp}
In the above notation, there is a non-zero indecomposable parabolic bundle $\mathbf E\in\mathcal{B}{un}^{\rm par}_\gamma$ if and only if $\rho_{\gamma,w_\bullet}$ is a root of $\mathfrak g_{w_\bullet}$.
\end{Proposition}
First, we see that $\rho_{\gamma,w_\bullet}$ does not depend on $d$. Thus, if there is an indecomposable parabolic bundle $\mathbf E$ with $\cl(\mathbf E)=(r,r_{\bullet,\bullet},d)$, then for any $d'$ there is an indecomposable parabolic bundle of class $(r,r_{\bullet,\bullet},d')$. Second, we see that the property of $\rho_{\gamma,w_\bullet}$ being a root does not depend on the choice of $w_\bullet$ as long as the components of $w_\bullet$ are large enough. By a slight abuse of terminology, we say that $\gamma$ is a root in this case.
\begin{Remark}
In fact, one can consider an infinite star-shaped graph $G_D$ with $\deg D$ infinite rays, and the corresponding Kac--Moody Lie algebra $\mathfrak g_D$, which is the inductive limit of $\mathfrak g_{w_\bullet}$. Then we have a homomorphism $\rho$ from $\Gamma_+$ to the root lattice of $\mathfrak g_D$ and the classes of indecomposable parabolic bundles are exactly the $\rho$-preimages of roots.
\end{Remark}
\subsubsection{Non-emptiness of moduli stacks}
Now we can give a full answer to the question of when $\mathcal{H}{iggs}_\gamma^{\sigma-{\rm ss}} (\zeta)$, $\mathcal{C}{onn}_\gamma(\zeta)$ and $\mathcal{C}{onn}^{(\kappa,\sigma)-{\rm ss}}_\gamma (\zeta)$ are non-empty. The first statement follows immediately from results of Crawley--Boevey.
\begin{Theorem}\label{th:NonEmpty}
Assume that $\gamma\in\Gamma_+$ and $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] If $g=g(X)>0$, then $\mathcal{C}{onn}_\gamma(\zeta)$ is non-empty if and only if $\deg_{1,\zeta}\gamma=0$.
\item[$(ii)$] If $g=0$, then $\mathcal{C}{onn}_\gamma(\zeta)$ is non-empty if and only if $\gamma$ can be written as $\sum\limits_{i=1}^n\gamma_i$, where $\gamma_i\in\Gamma'$ are roots and $\deg_{1,\zeta}\gamma_i=0$.
\end{enumerate}
\end{Theorem}
\begin{proof}
Note that a $\mathbf{k}$-stack $\mathcal X$ is non-empty if and only if $\mathcal X\times_\mathbf{k}\overline\mathbf{k}$ is non-empty, where $\overline\mathbf{k}$ is the algebraic closure of $\mathbf{k}$. Thus, we can assume that $\mathbf{k}$ is algebraically closed from the very beginning. By Lemma~\ref{lm:existence2}, $\mathcal{C}{onn}_\gamma(\zeta)$ is non-empty if and only if there is a $(1,\zeta)$-isoslopy parabolic bundle of class $\gamma$ such that $\deg_{1,\zeta}\gamma=0$. Now (i) follows from Lemma~\ref{lm:g>0}, while (ii) follows from Proposition~\ref{pr:ExistIndecomp}.
\end{proof}
\begin{Theorem}\label{th:NonEmpty2}
Assume that $\gamma\in\Gamma_+$, $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$, and $\sigma\in\Stab$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] If $g=g(X)>0$, then $\mathcal{H}{iggs}^{\sigma-{\rm ss}}_\gamma(\zeta)$ is non-empty if and only if $\deg_{0,\zeta}\gamma=0$.
\item[$(ii)$]
If $g=0$, then $\mathcal{H}{iggs}^{\sigma-{\rm ss}}_\gamma(\zeta)$ is non-empty if and only if $\gamma$ can be written as $\sum\limits_{i=1}^n\gamma_i$, where $\gamma_i\in\Gamma'$ are roots, $\deg_{0,\zeta}\gamma_i=0$, and the $(1,\sigma)$-slope of each $\gamma_i$ is equal to the $(1,\sigma)$-slope of $\gamma$.
\end{enumerate}
\end{Theorem}
\begin{proof}
By Proposition~\ref{pr:NonEmpty}, $\mathcal{H}{iggs}^{\sigma-{\rm ss}}_\gamma(\zeta)$ is non-empty if and only if $\big[\mathcal{H}{iggs}^{\sigma-{\rm ss}}_\gamma(\zeta)\big]\ne0$. Let $\zeta'$ be as in Proposition~\ref{pr:ConnUniversal2}. Applying Proposition~\ref{pr:NonEmpty} again, we see that $\mathcal{H}{iggs}^{\sigma-{\rm ss}}_\gamma(\zeta)$ is non-empty if and only if $\mathcal{C}{onn}_\gamma(\zeta')$ is non-empty. It remains to use Theorem~\ref{th:NonEmpty} and~\eqref{eq:EqualMot3}.
\end{proof}
\begin{Theorem}\label{th:NonEmpty3}
Assume that $\gamma\in\Gamma_+$, $\zeta\in\mathbf{k}[D\times\mathbb{Z}_{>0}]$ and $(\kappa,\sigma)\in\Stab'$. Assume that either~$\zeta$ is non-resonant, or $\kappa=1$ and $\sigma\in\Stab$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] If $g=g(X)>0$, then $\mathcal{C}{onn}^{(\kappa,\sigma)-{\rm ss}}_\gamma(\zeta)$ is non-empty if and only if $\deg_{1,\zeta}\gamma=0$.
\item[$(ii)$] If $g=0$, then $\mathcal{C}{onn}^{(\kappa,\sigma)-{\rm ss}}_\gamma(\zeta)$ is non-empty if and only if $\gamma$ can be written as $\sum\limits_{i=1}^n\gamma_i$, where $\gamma_i\in\Gamma'$ are roots, $\deg_{1,\zeta}\gamma_i=0$, and the $(\kappa,\sigma)$-slope of each $\gamma_i$ is equal to the $(\kappa,\sigma)$-slope of $\gamma$.
\end{enumerate}
\end{Theorem}
\begin{proof}
Same as of Theorem~\ref{th:NonEmpty2} except that one uses Proposition~\ref{pr:ConnUniversal} instead of Proposition~\ref{pr:ConnUniversal2} and~\eqref{eq:EqualMot2} instead of~\eqref{eq:EqualMot3}.
\end{proof}
\subsection*{Acknowledgements}
We thank E.~Diaconescu, J.~Heinloth, O.~Schiffmann, and especially A.~Mellit for useful discussions and correspondence. We thank P.~Boalch for a useful comment on an earlier version. A part of this work was done while R.F.~was visiting Max Planck Institute of Mathematics in Bonn, and a part when he was visiting A.~Mellit at the University of Vienna. The work of R.F.~was partially supported by NSF grant DMS--1406532. A.S.~and Y.S.~thank IHES for excellent research conditions and hospitality. The work of Y.S.~was partially supported by NSF grants and Munson--Simu Faculty Award at Kansas State University. The authors would like to thank the anonymous referees for carefully reading the paper and for useful comments.
Dong-Sheng Ding and Ming-Xin Dong contributed equally to this paper.
We thank Professor Guo-Yong Xiang for lending us an SLM. This work was
supported by the National Key R\&D Program of China (2017YFA0304800),
the National Natural Science Foundation of China (Grant Nos. 61525504,
61722510, 61435011, 11174271, 61275115, 11604322), and the Innovation
Fund from the Chinese Academy of Sciences.
\section*{Supplementary}
\subsection*{Experimental time sequence. }
The repetition rate of our experiment is $100\,\mathrm{Hz}$, and
the MOT trapping time is 8.7 ms. The operation window of 1.3 ms
consists of 2600 cycles with a cycle time of 500 ns. The write and
read lasers are pulsed by acousto-optic modulators with pulse widths
of 50 ns and 200 ns, respectively, in each cycle. The optical depth
of the MOT is about 40. The storage time is controlled by changing the
delay between the write and read pulses with an arbitrary function
generator. The magnetic field for trapping is switched off during the
experimental window.
\subsection*{4-F image system for four SLMs. }
SLM 1 acts as the mask plane, and the center of the atomic ensemble
in the MOT is the image plane. Two lenses L1 and L2 with focal lengths
of 300 mm and 500 mm are utilized to map the phase pattern of SLM 1
onto the atomic ensemble. Due to the phase matching condition $k_{W}-k_{S1}=k_{R}-k_{S2}$,
the imaging system can easily be aligned optically. The Signal 1 and
Signal 2 fields are collinear, and the Signal 1 beam is completely
overlapped with the write beam, as verified by demonstrating the
electromagnetically induced transparency effect. A write laser carrying
high OAM quanta diffracts strongly, which would make the beam waist at
the center of the atomic ensemble too large and hence the interaction
between the write laser and the atomic ensemble too weak. Through the
4-f imaging system with unequal arms, we can not only map the OAM phase
pattern to the center of the atomic ensemble accurately but also decrease
the waist of the write laser carrying high OAM quanta. Similarly, the
single photon carrying the OAM phase pattern from the center of the
atomic ensemble is retrieved and projected onto SLM 1 via the other
4-f imaging system, and finally the photons are collected by
single-mode fibers.
\subsection*{Theoretical analysis. }
In the interaction picture, neglecting the decay of the spin wave, the
effective Hamiltonian for the delayed four-wave mixing process is
written as \citep{wen2006transverse}
\begin{align}
{\hat{H}_{I}} & =\frac{{\varepsilon_{0}}}{4}\int_{-L/2}^{L/2}{dz{\chi^{(3)}}{{\vec{E}}_{W}}{{\vec{E}}_{R}}{{\vec{E}}^{*}}_{S1}{{\vec{E}}^{*}}_{S2}}+H.c
\end{align}
where $H.c.$ denotes the Hermitian conjugate, and ${\chi^{(3)}}$ is the
third-order nonlinear susceptibility for the resonant Signal 2 photon,
given by \citep{braje2004frequency}:
\begin{align}
{\chi^{(3)}} & =\frac{{N{\mu_{13}}{\mu_{32}}{\mu_{24}}{\mu_{41}}/({\varepsilon_{0}}{\hbar^{3}})}}{{({\Delta_{W}}+i{\gamma_{23}})[{{\left|{\Omega_{R}}\right|}^{2}}-4(\omega+i{\gamma_{24}})(\omega+i{\gamma_{21}})]}}
\end{align}
here, ${\mu_{ij}}$ are the electric dipole matrix elements, ${\gamma_{ij}}$
are the dephasing rates, and ${\Omega_{R}}$ is the Rabi frequency of the
read laser. The probability to generate Signal 1 and Signal 2
in the modes $|l_{S1}\rangle$, $|l_{S2}\rangle$ is given by the overlap
with the write- and read-laser beam profiles:
\begin{equation}
\begin{split}c_{l_{W}l_{R}l_{S1}l_{S2}} & \sim\int_{-L/2}^{L/2}\int_{0}^{r}\int_{0}^{2\pi}\varepsilon_{0}\chi^{(3)}rLG_{0}^{l_{W}}(r,\phi)LG_{0}^{l_{R}}(r,\phi)\\
 & \times[LG_{0}^{l_{S1}}(r,\phi)]^{*}[LG_{0}^{l_{S2}}(r,\phi)]^{*}d\phi drdz
\end{split}
\end{equation}
The integral over the azimuthal coordinate is
\begin{align}
\int_{0}^{2\pi}{d\phi}exp[i({l_{W}}+{l_{R}}-{l_{S1}}-{l_{S2}})\phi] & =2\pi{\delta_{{l_{W}}+{l_{R}},{l_{S1}}+{l_{S2}}}}
\end{align}
From this we obtain the topological charge conservation law
in OAM space: ${l_{W}}+{l_{R}}={l_{S1}}+{l_{S2}}$. According to
Eqs.~(3) and (4), the probability of generating an ${l_{S1}}$-Signal 1 and an ${l_{S2}}$-Signal
2 photon with the ${l_{W}}$-write and ${l_{R}}$-read lasers
strongly depends on the mode overlap between the four fields.
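As a quick numerical sanity check of this selection rule (our illustration, not part of the experiment), the following Python snippet evaluates the azimuthal integral of Eq.~(4) for a conserved and a non-conserved mode combination; the chosen values of $l_{W}$, $l_{R}$, $l_{S1}$, $l_{S2}$ are purely illustrative.
\begin{verbatim}
import numpy as np

# Riemann sum of exp(i*(lW + lR - lS1 - lS2)*phi) over [0, 2*pi);
# it vanishes unless lW + lR = lS1 + lS2.
phi = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
dphi = phi[1] - phi[0]

def azimuthal_integral(lW, lR, lS1, lS2):
    return np.sum(np.exp(1j * (lW + lR - lS1 - lS2) * phi)) * dphi

print(abs(azimuthal_integral(2, 0, 1, 1)))  # ~2*pi: OAM conserved
print(abs(azimuthal_integral(2, 0, 1, 2)))  # ~0:    OAM not conserved
\end{verbatim}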
In order to illustrate the topological charge conservation law of our
OAM quantum interface in the DLCZ memory, we input the write laser
with OAM quanta of $l_{W}$. Because the SRS process conserves
angular momentum, we thereby create OAM entanglement between Signal
1 and the atomic spin wave, which can be specified by the formula $\left|\psi\right\rangle _{_{photon-atom}}^{{l_{W}}}{\rm {=}}\sum\nolimits _{l=-\infty}^{l=\infty}{c_{l}}{\left|l\right\rangle _{{\rm {S1}}}}\otimes{\left|{{l_{W}}-l}\right\rangle _{a}}$,
where ${\left|{c_{l}}\right|^{2}}$ represents the excitation probability,
${\left|l\right\rangle _{{\rm {S1}}}}$ is the OAM eigenmode of Signal
1 with quanta of $l$, and ${\left|{{l_{W}}-l}\right\rangle _{a}}$ is
the OAM eigenmode of the atomic spin wave with quanta of ${l_{W}}-l$.
In this way, the atomic spin wave can carry an arbitrary OAM
topological charge of the form ${l_{W}}-l$, resulting in a
redistributed quantum interface.
\begin{figure}[H]
\includegraphics[width=1\columnwidth]{Fig\lyxdot 5}\caption{Reconstructed density matrices for modulated OAM entanglement. The
real (a,c) and imaginary (b,d) parts of the density matrices for the photonic
OAM entangled states $\left|\psi\right\rangle _{photon-photon}^{2,0}$
and $\left|\psi\right\rangle _{photon-photon}^{1,2}$. The data for
reconstructing each density matrix are recorded over 1000 s.}
\label{result 1}
\end{figure}
After a period of storage, we check the photon--atom entanglement by inputting
the read laser with OAM quanta of $l_{R}$ and checking the entanglement
between Signal 1 and Signal 2. The entanglement between Signal 1 and
Signal 2 can be written as $\left|\psi\right\rangle _{photon-photon}^{{l_{W}},{l_{R}}}{\rm {=}}\sum\nolimits _{l=-\infty}^{l=\infty}{c_{l}}{\left|l\right\rangle _{{\rm {S1}}}}\otimes{\left|{{l_{W}}+{l_{R}}-l}\right\rangle _{{\rm {S2}}}}$.
First, we set $l_{W}=2$ and $l_{R}=0$, i.e., we use OAM quanta
of 2 and 0 to write and read, respectively. Thus, the photonic entangled
state is a sum of ${\left|l\right\rangle _{{\rm {S1}}}}\otimes{\left|{2-l}\right\rangle _{{\rm {S2}}}}$
over different $l$; this is a modulated asymmetric OAM entangled
state. Here, we post-select the OAM modes of the entangled state onto the
two-dimensional subspace spanned by ${\left|0\right\rangle _{{\rm {S1}}}}{\left|2\right\rangle _{S2}}$
and ${\left|2\right\rangle _{{\rm {S1}}}}{\left|0\right\rangle _{S2}}$,
that is $\left|\psi\right\rangle _{photon-photon}^{2,0}{\rm {=}}{\raise0.5ex\hbox{\ensuremath{{\scriptstyle 1}}}\kern-0.1em /\kern-0.15em \lower0.25ex\hbox{\ensuremath{{\scriptstyle {\sqrt{2}}}}}}\left({{{\left|0\right\rangle }_{{\rm {S1}}}}{{\left|2\right\rangle }_{{\rm {S2}}}}+{{\left|2\right\rangle }_{{\rm {S1}}}}{{\left|0\right\rangle }_{{\rm {S2}}}}}\right)$.
To characterize the OAM entanglement between Signal 1 and Signal 2,
we reconstruct the density matrices by projecting Signal 1 and Signal
2 onto the OAM bases $\left|0\right\rangle $, $\left|2\right\rangle $,
${{\left({\left|0\right\rangle -i\left|2\right\rangle }\right)}\mathord{\left/{\vphantom{{\left({\left|0\right\rangle -i\left|2\right\rangle }\right)}{2^{1/2}}}}\right.\kern-\nulldelimiterspace}{2^{1/2}}}$,
${{\left({\left|0\right\rangle +\left|2\right\rangle }\right)}\mathord{\left/{\vphantom{{\left({\left|0\right\rangle +\left|2\right\rangle }\right)}{2^{1/2}}}}\right.\kern-\nulldelimiterspace}{2^{1/2}}}$
for quantum state tomography. We then use the obtained 16
coincidence rates to reconstruct the density matrix of the state, as shown
in Fig.~\ref{result 1}(a) and (b). According to the formula $F{\rm {=Tr(}}\sqrt{\sqrt{\rho}{\rho_{{\rm {ideal}}}}\sqrt{\rho}}{{\rm {)}}^{2}}$,
which compares the reconstructed density matrix $\rho$ with the ideal
density matrix ${\rho_{{\rm {ideal}}}}$, we obtain a fidelity of
$83.3\pm3.5$\%. We also try another data set with $l_{W}=1$ and $l_{R}=2$,
and obtain the photonic entangled state $\left|\psi\right\rangle _{photon-photon}^{1,2}{\rm {=}}{\raise0.5ex\hbox{\ensuremath{{\scriptstyle 1}}}\kern-0.1em /\kern-0.15em \lower0.25ex\hbox{\ensuremath{{\scriptstyle {\sqrt{2}}}}}}\left({{{\left|0\right\rangle }_{{\rm {S1}}}}{{\left|3\right\rangle }_{{\rm {S2}}}}+{{\left|3\right\rangle }_{{\rm {S1}}}}{{\left|0\right\rangle }_{{\rm {S2}}}}}\right)$.
Similarly, we reconstruct the density matrix of this state; the real
and imaginary parts are shown in Fig.~\ref{result 1}(c) and (d),
with a fidelity of $81.1\pm4.2$\%. Although the fidelity is not very
high, this reveals that the OAM modes are conserved in the whole
writing and reading process of the DLCZ quantum memory.
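For reference, the fidelity formula above can be evaluated numerically as in the following minimal sketch; the state and the mixing level are illustrative assumptions, not our measured data.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, rho_ideal):
    # F = (Tr sqrt( sqrt(rho) * rho_ideal * sqrt(rho) ))^2
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ rho_ideal @ s))) ** 2

# ideal state (|0>|2> + |2>|0>)/sqrt(2) in the basis
# {|0,0>, |0,2>, |2,0>, |2,2>}
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)
rho_ideal = np.outer(psi, psi.conj())

# an illustrative noisy reconstruction: 85% ideal + 15% white noise
rho = 0.85 * rho_ideal + 0.15 * np.eye(4) / 4.0
print(fidelity(rho, rho_ideal))  # ~0.89
\end{verbatim}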
\subsection*{The entanglement dimensionality witness. }
In order to demonstrate the high-D entanglement between Signal 1 and
the atomic memory, we avoid crosstalk between neighboring OAM modes.
We select the modes $l=0,4,8,12,16$, in which three modes between
adjacent terms are skipped for better isolation. We read the photon--atom
entanglement out to photon--photon entanglement for verification. The
entangled state is then $\left|\psi\right\rangle _{photon-photon}^{{\rm {10}},-{\rm {1}}0}{\rm {=}}{c_{1}}{\left|0\right\rangle _{{\rm {S1}}}}{\left|0\right\rangle _{{\rm {S2}}}}+{c_{2}}{\left|{-4}\right\rangle _{{\rm {S1}}}}{\left|4\right\rangle _{{\rm {S2}}}}+{c_{3}}{\left|{-8}\right\rangle _{{\rm {S1}}}}{\left|8\right\rangle _{{\rm {S2}}}}+{c_{4}}{\left|{-12}\right\rangle _{{\rm {S1}}}}{\left|{12}\right\rangle _{{\rm {S2}}}}+{c_{5}}{\left|{-16}\right\rangle _{{\rm {S1}}}}{\left|{16}\right\rangle _{{\rm {S2}}}}$.
Here, ${c_{1}}\sim{c_{5}}$ are the corresponding amplitudes of the
terms ${\left|0\right\rangle _{{\rm {S1}}}}{\left|0\right\rangle _{{\rm {S2}}}}\sim{\left|{\rm {-16}}\right\rangle _{{\rm {S1}}}}{\left|{\rm {16}}\right\rangle _{{\rm {S2}}}}$.
For verifying the high-D state, it is very promising to use a high-D
entanglement dimensionality witness \citep{agnew2012observation,krenn2014generation}
to characterize the entanglement existing in our system. The entanglement
dimensionality witness is expressed as ${W_{d}}=3\frac{{D(D-1)}}{2}-D(D-d)$,
where $D$ is the number of measured OAM modes and $d$ is associated
with the dimensionality of entanglement. If $W>{W_{d}}$, the two photons
are entangled in at least $d+1$ dimensions, where $W$ is obtained by
calculating the sum of visibilities $N=V_{x}+V_{y}+V_{z}$ in every
two-dimensional subspace and summing over all subspaces. Here $V_{x}$,
$V_{y}$ and $V_{z}$ represent the visibilities of two-photon interference
in the diagonal/anti-diagonal, left-circular/right-circular and
horizontal/vertical bases, respectively, for each pair of OAM modes $a$
and $b$, where $a$ and $b$ are selected from $l=0,4,8,12,16$. A
disadvantage of quantum tomography for high-D entanglement is that the
required measurement data scale as $d^{4}$, which is a large challenge
in practice and is impractical for $d=5$ as in our experiment. Therefore,
we adopt the dimensionality witness to certify the existence of high-D
entanglement and characterize its dimensionality. We calculate the value
$W=21.93\pm0.55$, which violates the bound $W_{d}=20$ (for $D=5$ measured
OAM modes and $d=3$) by 3 standard deviations; thus there is at least
4-D OAM entanglement between the Signal 1 and Signal 2 photons. In these
measurements, the atom--photon entangled states are detected in the
photonic regime, and we assume the fidelity of reading out from the
ensemble is near unity. There is certainly some noise and inefficiency
in the reading process, which makes the degree of the measured
entanglement lower than that existing in the ensemble.
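The witness bound and its violation can be checked with a few lines of Python; $W$ below is the measured value quoted above, and the loop reports every certified dimensionality.
\begin{verbatim}
def witness_bound(D, d):
    # W_d = 3*D*(D-1)/2 - D*(D-d)
    return 3 * D * (D - 1) / 2 - D * (D - d)

D = 5        # measured OAM modes: l = 0, 4, 8, 12, 16
W = 21.93    # measured sum of visibilities over all 2-D subspaces
for d in range(1, D + 1):
    if W > witness_bound(D, d):
        print("entangled in at least", d + 1, "dimensions;",
              "W =", W, "> W_d =", witness_bound(D, d))
# the last line printed corresponds to d = 3, i.e., at least 4-D
\end{verbatim}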
\begin{figure}[H]
\includegraphics[width=1\columnwidth]{Fig\lyxdot 6}\caption{(a) The post-selected correlated OAM matrix between Signal 1 and Signal
2 photons with OAM mode difference $|\Delta l|$ up to 16. (b) The
sum of visibilities for each 2-D subspace used in the high-D
entanglement dimensionality witness.}
\label{high-D}
\end{figure}
\subsection*{2-D high-$l$ Entanglement and state tomography}
If we consider the OAM modes $a$ and $b$ with $l=32$ and $28$,
Signal 1 and Signal 2 are entangled in OAM space and the entangled
state is expressed as
\begin{equation}
\left|{\Psi_{3}}\right\rangle {\rm {=}}{\raise0.5ex\hbox{\ensuremath{{\scriptstyle {\rm {1}}}}}\kern-0.1em /\kern-0.15em \lower0.25ex\hbox{\ensuremath{{\scriptstyle {\sqrt{{\rm {2}}}}}}}}\left({{{\left|{-28}\right\rangle }_{{\rm {S1}}}}{{\left|{28}\right\rangle }_{{\rm {S2}}}}+{{\left|{-32}\right\rangle }_{{\rm {S1}}}}{{\left|{32}\right\rangle }_{{\rm {S2}}}}}\right)
\end{equation}
Here, ${\left|{-28}\right\rangle _{{\rm {S1}}}}$ represents the Signal
1 photon carrying OAM quanta of $-28$. By using two computers, we project
the two photons onto two SLMs respectively: the four states $\left|{\phi_{1\sim4}}\right\rangle $
($\left|{-28}\right\rangle $, $\left|{-32}\right\rangle $, ${{\left({\left|{-28}\right\rangle -i\left|{-32}\right\rangle }\right)}\mathord{\left/{\vphantom{{\left({\left|{-28}\right\rangle -i\left|{-32}\right\rangle }\right)}{2^{1/2}}}}\right.\kern-\nulldelimiterspace}{2^{1/2}}}$,
${{\left({\left|{-28}\right\rangle +\left|{-32}\right\rangle }\right)}\mathord{\left/{\vphantom{{\left({\left|{-28}\right\rangle +\left|{-32}\right\rangle }\right)}{2^{1/2}}}}\right.\kern-\nulldelimiterspace}{2^{1/2}}}$)
are programmed onto SLM 2, and the four states $\left|{\varphi_{1\sim4}}\right\rangle $
($\left|{28}\right\rangle $, $\left|{32}\right\rangle $, ${{\left({\left|{28}\right\rangle -i\left|{32}\right\rangle }\right)}\mathord{\left/{\vphantom{{\left({\left|{28}\right\rangle -i\left|{32}\right\rangle }\right)}{2^{1/2}}}}\right.\kern-\nulldelimiterspace}{2^{1/2}}}$,
${{\left({\left|{28}\right\rangle +\left|{32}\right\rangle }\right)}\mathord{\left/{\vphantom{{\left({\left|{28}\right\rangle +\left|{32}\right\rangle }\right)}{2^{1/2}}}}\right.\kern-\nulldelimiterspace}{2^{1/2}}}$)
are programmed onto SLM 4. We then obtain a set of 16 coincidence
measurements for reconstructing the density matrix given in the main text.
The error bars in our experiment are estimated from Poissonian statistics
using Monte Carlo simulations with the aid of the Mathematica software.
Traditionally, a classification task is to assign items (instances) in a data set to target categories (classes) based on classifier(s) learnt from training instances. In binary classification there are only two classes, and every instance in the data set is assigned to one of them. The goal of a classification problem is to design classifiers that make error-free assignments.
The ROC graph is a technique for visualizing, organizing and selecting classifiers based on their performance~\cite{fawcett2006introduction}. A salient topic in ROC analysis is to generate ROC curves by varying the discriminative threshold over the output of a classifier~\cite{fawcett2006introduction}. Over the course of the past 40 years, the ROC technique has been widely applied in many research and application areas, such as signal detection~\cite{egan1975signal}, medical decision making~\cite{sox1988medical}, and diagnostic systems~\cite{swets1988measuring}.
Though the ROC curve works well in many cases, research attention has recently been drawn towards another perspective of ROC analysis, namely the ROC convex hull (ROCCH). ROCCH analysis pays more attention to the convex hull of a set of points (hard classifiers) obtained either from several curves (i.e., soft classifiers) or from hard classifiers themselves. A classifier is potentially optimal if and only if it is a component of the ROCCH; in other words, the ROCCH can provide better choices than a single ROC curve for specific environments. The significance of the ROCCH in ROC analysis is that, for test data sets with different skewed class distributions or misclassification costs, it is always possible to choose suitable classifiers by iso-performance lines\footnote{All classifiers corresponding to the points on one line have the same expected cost.}, which are determined by the operating conditions of classifiers and used to identify a portion of the ROCCH~\cite{provost2001robust}. Consequently, the ROCCH is emphasized in this paper, and we focus on searching for a group of independent hard classifiers to maximize the ROCCH performance rather than trying to maximize the area under the ROC curve (AUC) of a single soft classifier.
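To make the role of iso-performance lines concrete, the following minimal Python sketch (our illustration, not code from~\cite{provost2001robust}) selects the potentially optimal classifier on the ROCCH for a given class prior and misclassification costs; points of equal expected cost lie on a line of slope $(1-p_{pos})c_{fp}/(p_{pos}c_{fn})$ in ROC space, and the hull vertices used below are illustrative.
\begin{verbatim}
def expected_cost(fpr, tpr, p_pos, c_fp, c_fn):
    # expected misclassification cost of an ROC point (fpr, tpr)
    return (1 - p_pos) * c_fp * fpr + p_pos * c_fn * (1 - tpr)

def pick_from_rocch(hull, p_pos, c_fp, c_fn):
    # hull: list of (fpr, tpr) vertices of the ROCCH
    return min(hull, key=lambda pt: expected_cost(*pt, p_pos, c_fp, c_fn))

hull = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.85), (0.6, 0.95), (1.0, 1.0)]
print(pick_from_rocch(hull, p_pos=0.5, c_fp=1.0, c_fn=1.0))   # balanced
print(pick_from_rocch(hull, p_pos=0.05, c_fp=1.0, c_fn=1.0))  # rare
# positives: with a skewed prior, the leftmost classifiers win
\end{verbatim}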
Essentially, the ROCCH is the collection of all potentially optimal classifiers in a given set of classifiers, so ROCCH maximization is to find a group of classifiers whose performance approaches the top and the left axes of ROC space as closely as possible. However, ROCCH maximization is not an easy task, and there are not many works focusing on it, though it is a really important topic in classification problems. Generally, the existing works can be reviewed in two categories: ROC geometric analysis based machine learning methods, and multi-objective optimization based evolutionary computation methods for ROCCH maximization.
Fawcett et al.~\cite{fawcett2001using} employed C4.5 and Rule Learning (RL) systems to induce decision rules in ROC space, and an advanced version, PRIE, was introduced in~\cite{fawcett2008prie}. It is a straightforward approach that analyzes geometric properties to generate decision rules maximizing ROC performance. However, the procedure easily gets trapped in local optima.
The concavity problem in ROC analysis was researched by Flach et al.~\cite{flach2003repairing}, who demonstrated how to detect and repair concavities in ROC curves. The basic idea of that work is that a point in a concavity can be mirrored to a better point that performs beyond the original ROC curve. However, it is not a general method for maximizing ROC performance.
ROCCER was introduced by Prati et al. in~\cite{prati2005roccer}. It was argued that ROCCER is less dependent on previously induced rules, compared with set covering algorithms, when constructing rule sets that have a convex hull in ROC space. However, it adopts an association rule learner to generate new rules to cover the instance space as fully as possible. It easily falls into overfitting, because it needs many rules to cover the space, similar to a decision tree of very large height.
The Neyman--Pearson lemma as the theoretical basis for finding the optimal combination of classifiers to maximize the ROCCH is given in~\cite{Barreno_Cardenas_Tygar_2008}. In contrast to the similar technique in~\cite{flach2003repairing}, it not only focuses on repairing but also pays attention to improving the hull even when there is no concavity. For a given rule set, the method proposed in~\cite{Barreno_Cardenas_Tygar_2008} can efficiently combine these rules using \emph{AND} and \emph{OR} to obtain the optimal rule subset. However, as mentioned above, it lacks schemes for generating new rules in the global rule set search.
Maximizing the ROCCH means searching for a group of classifiers that ideally minimize the \emph{fpr} and maximize the \emph{tpr} simultaneously, i.e., that are located as far to the left and to the top of ROC space as possible. However, it is very hard to optimize \emph{fpr} and \emph{tpr} simultaneously because they are conflicting objectives. From this perspective, the ROCCH maximization problem is similar to a multi-objective optimization problem.
Zhao~\cite{zhao2007multi} proposed a specific non-dominance relationship within a multi-objective optimization framework to optimize \emph{tpr} and $1-$\emph{fpr}. However, that work paid more attention to cost-sensitive classification and used misclassification cost information to construct rules for ranking the individuals in its multi-objective genetic programming. First, it is not a general method for ROCCH maximization because it only focused on the cost-sensitive problem. Second, the two data sets involved in the experiments are too few to evaluate the proposed method.
Bhowan et al. searched the Pareto front to maximize the accuracy of each minority class on unbalanced data sets~\cite{bhowan2009multi}, and they also employed multi-objective optimization techniques to evolve diverse ensembles using genetic programming to maximize the classification performance in~\cite{bhowan2012evolving}.
Wang et al. investigated several EMOAs such as NSGA-II~\cite{deb2002fast}, MOEA/D~\cite{zhang2007moea}, SMS-EMOA~\cite{Beume20071653} and the Approximation-Guided Evolutionary Multi-objective Algorithm (AG-EMOA)~\cite{Bringmann}. These different evolutionary multi-objective optimization frameworks were combined with genetic programming to maximize ROC performance~\cite{wang2012multiobjective}.
However, the ROCCH is different from the Pareto front, though it has been reported that they are similar to each other~\cite{fawcett2004roc}. The ROCCH is the collection of points which constitute the convex hull of the existing classifiers in ROC space, while the Pareto front is the collection of points in the first level when sorted by the dominance relationship. Though evolutionary multi-objective algorithms (EMOAs) have been successfully applied to ROCCH maximization, these EMO techniques do not take into account a special characteristic of the ROCCH: by mixing two classifiers, we can take any two real classifiers and construct a virtual classifier whose performance lies at any point along the line connecting the two corresponding points~\cite{fawcett2004roc}. Consequently, hard classifiers in concave parts of the Pareto front can always be replaced by classifier combinations that yield dominating points. The computational resources for the approximation of concave parts are thus better spent on the accurate approximation of only those parts of the Pareto front that are part of the convex hull.
In~\cite{shan2009multi,DavoodiMonfared20111435,ZapotecasMartinez:2010}, the convex hull concept was employed in EMOAs to speed up sorting or to maintain a well-distributed set of non-dominated solutions. These works supply some useful ideas for convex hull-based sorting. In~\cite{CococcioniCHEA} and~\cite{ducange2010multi}, convex hull-based ranking was combined with evolutionary multi-objective optimization and fuzzy rule-based binary classifiers to maximize the ROCCH in ROC space. However, in the first work the number of levels was pre-defined as three without explanation, and the second one was formulated as a bi-objective optimization of classification accuracy and classifier rule complexity.
Moreover, instead of designing algorithms based on Pareto dominance compliant performance indicators, such as the hypervolume indicator as done in \cite{Beume20071653} and in \cite{igel2007covariance}, it seems more promising to directly target the algorithm towards the maximization of the area under the convex hull (AUCH).
In this paper, we utilize Genetic Programming (GP) combined with multi-objective techniques to obtain the optimal ROCCH. Two strategies are presented: the first is a convex hull-based sorting without redundancy that partitions the GP population into several levels, analogous to the non-dominated sorting in NSGA-II; the second uses an area-based contribution to select survivors within the same level; in fact, we use a ($\mu$ + $\mu$) selection strategy as in~\cite{igel2007covariance}. We show that convex hull-based sorting without redundancy plays a key role in multi-objective genetic programming (MOGP) for maximizing ROCCH performance, and that the area-based contribution selection scheme can also improve the performance.
This paper is organized as follows: Section~\ref{section:rochhmo} discusses the relationship between ROCCH optimization and traditional multi-objective optimization in detail. Convex hull-based multi-objective genetic programming (CH-MOGP) is described in Section~\ref{section:chmogp}. Experimental studies are presented in Section~\ref{section:experiment} and show the advantages of our new algorithm. Section~\ref{section:confusion} gives the conclusions and a discussion of the important aspects and future perspectives of this work.
\section{ROC Convex Hull and Multi-objective Optimization}
\label{section:rochhmo}
\subsection{What is ROCCH?}
Basically, ROC analysis concerns the confusion matrix for the outputs of a classifier, from which we can analyze the performance by measuring different metrics such as accuracy, precision, specificity and sensitivity. The ROC graph (left side of Fig.~\ref{fig:rocspace}) plots \emph{tpr} on the Y axis against \emph{fpr} on the X axis, both of which are defined from the confusion matrix. Each classifier can be mapped into the ROC graph by its performance. Essentially, the ROCCH is the collection of all potentially optimal classifiers in a given set of classifiers (right side of Fig.~\ref{fig:rocspace}). Furthermore, a classifier is potentially optimal if and only if it lies on the convex hull of the set of points in ROC space~\cite{fawcett2006introduction}.
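For concreteness, the following Python sketch (our illustration) maps a hard classifier to its ROC point via the confusion matrix, and sweeps the threshold of a soft classifier to obtain a set of ROC points.
\begin{verbatim}
def roc_point(y_true, y_pred):
    # (fpr, tpr) of a hard classifier from its confusion matrix
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / (fp + tn), tp / (tp + fn)

def roc_curve_points(y_true, scores):
    # ROC points of a soft classifier obtained by sweeping the
    # discriminative threshold over its output scores
    return [roc_point(y_true, [int(s >= t) for s in scores])
            for t in sorted(set(scores), reverse=True)]
\end{verbatim}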
\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{./graph/ROCCH_AUC}\\
\caption{ROC graph and ROC Convex hull in ROC Space}%
\label{fig:rocspace}%
\end{figure}
\subsection{ROCCH maximization problem and multi-objective optimization problem}
The ROCCH maximization problem essentially aims at searching for a group of solutions (classifiers) that approximate the uppermost and leftmost borders of ROC space as closely as possible. However, minimizing $fpr$ and maximizing $tpr$ simultaneously are conflicting goals, because if a classifier labels more instances as positive, it will produce fewer negatives, and vice versa. Generally speaking, ROCCH maximization is therefore considered as a multi-objective optimization problem, which can be described as follows:
\begin{eqnarray}%
\label{mop}%
\textnormal{maximize~}F(x) &=& (f_{tpr}(x),f_{1-fpr}(x))\nonumber\\%
\textnormal{subject to~}&&x \in \Omega%
\end{eqnarray}%
In Eq.~\ref{mop}, $x$ is a classifier and $F(x)$ is a vector function of the $tpr$ and $1-fpr$ of the classifier. An important term in MOP is \emph{dominance}, which can be defined as follows for the maximization problem above: Let $u = (u_{1},\dots,u_{m})$, $v = (v_{1},\dots,v_{m})$ be two vectors; $u$ is said to \emph{dominate} $v$ if $u_{i} \geq v_{i}$ for all $i = 1{\dots}m$ and $u \neq v$; this is noted as $u \prec v$. If $u$ and $v$ do not dominate each other, we say that $u$ and $v$ are \emph{nondominated}. A nondominated set is a set in which no item dominates any other. A point $x^{\star}$ is called \emph{Pareto optimal} if there is no $x \in \Omega$ such that $F(x)$ dominates $F(x^{\star})$~\cite{zhang2007moea,WGOEB}. The Pareto set (PS) is the collection of all Pareto optimal points. The Pareto front is the set of all Pareto objective vectors $PF = \{F(x)| x \in PS \}$.
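In code, the dominance check for the maximization problem of Eq.~(\ref{mop}) is a one-liner; the sketch below assumes objective vectors of the form $(tpr, 1-fpr)$ and the numbers are illustrative.
\begin{verbatim}
def dominates(u, v):
    # u dominates v: no worse in every objective and u != v
    return all(ui >= vi for ui, vi in zip(u, v)) and tuple(u) != tuple(v)

print(dominates((0.8, 0.7), (0.6, 0.7)))  # True
print(dominates((0.8, 0.5), (0.6, 0.7)))  # False: a nondominated pair
\end{verbatim}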
Most evolutionary multi-objective algorithms involve pairwise dominance to describe the relationship between two solutions. However, ROCCH maximization in ROC space has a special characteristic. Fig.~\ref{fig:ROCCHParetoFront} shows the convex hull and the Pareto front for a set of points. Obviously, the convex hull is different from the Pareto front, though it has been argued that they are similar to each other~\cite{flach2010roc}. For example, points $a,b,c$ in Fig.~\ref{fig:ROCCHParetoFront} form a non-dominated set in a traditional multi-objective optimization problem; however, a classifier along the line connecting $a$ and $c$ would dominate $b$. This special characteristic takes ROCCH maximization beyond traditional multi-objective optimization; hence, we need to design new techniques for searching for a group of classifiers with maximum ROCCH.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.25\textwidth]{./graph/ROCCHParetoFront}\\
\caption{Pareto front and convex hull}%
\label{fig:ROCCHParetoFront}%
\end{figure}
\subsection{Nondominated sorting does harm to EMOAs in ROCCH maximization}
The fundamental reason for seeking the convex hull rather than the Pareto front is that any two classifiers can be combined to produce classifiers whose ROC performance lies anywhere along the line connecting the two points representing the original classifiers in ROC space~\cite{fawcett2004roc}. As shown on the left side of Fig.~\ref{fig:doharm}, classifiers with performance at points $d$ and $b$ can be used to construct a virtual classifier with performance at any point $e$ along the line connecting $d$ and $b$. This is a special and important property of the ROCCH maximization problem.
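The construction of such a virtual classifier can be sketched as a randomized combination (the names \texttt{clf\_a}, \texttt{clf\_b} and the helper below are ours, not from the paper's code):
\begin{verbatim}
import numpy as np

# A "virtual" classifier that delegates each instance to classifier A
# with probability w and to classifier B otherwise. If A and B sit at
# ROC points (fpr_a, tpr_a) and (fpr_b, tpr_b), the expected ROC point
# of the mixture is w*(fpr_a, tpr_a) + (1-w)*(fpr_b, tpr_b), i.e. any
# point on the segment joining A and B is attainable.
def interpolated_classifier(clf_a, clf_b, w, seed=0):
    rng = np.random.default_rng(seed)
    def predict(X):
        use_a = rng.random(len(X)) < w
        return np.where(use_a, clf_a(X), clf_b(X))
    return predict
\end{verbatim}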
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{./graph/do_harm}\\
\caption{Nondominated sorting keeps individuals that contribute nothing to the ROCCH}%
\label{fig:doharm}%
\end{figure}
On the right side of Fig.~\ref{fig:doharm}, all the points are mutually nondominated and belong to the convex hull except for point $a$. However, if crowding-distance selection or hypervolume-contribution-based selection is used to choose one individual to discard from the population, point $a$ will survive rather than point $b$, even though point $a$ is not on the convex hull. Hence, two aspects require attention: the sorting strategy and the selection scheme. Suitable choices for both should be made in EMOAs for ROCCH maximization, no matter which classifier representation is involved.
\subsection{The motivation and ideas for new multi-objective algorithms for ROCCH maximization}
We need to consider how to use the special property of the ROCCH to make multi-objective optimization techniques more efficient at solving the ROCCH maximization problem. The main technique in MOP is ranking the population in order to select the solutions that survive into the next generation. The most common ranking approach has two steps: first, the population is sorted into several levels indicating priority; then, a selection scheme is used to choose winners among the solutions at the same level. For the ROCCH maximization problem, we first bring the convex hull into the sorting strategy; however, because the convex hull is such a strict criterion, diversity would decrease quickly during the evolutionary process, so we design convex hull-based sorting without redundancy to sort the population. The second idea is to use an area-based selection scheme, because the target is to maximize the area under the convex hull instead of hypervolume or crowding distance. Convex hull-based sorting without redundancy and the area-based selection scheme are described in detail in Section~\ref{section:chmogp}.
\begin{figure*}[!thbp]
\centering
\includegraphics[width=0.8\textwidth]{./graph/without_red}\\
\caption{Convex hull-based sorting with and without redundancy, and area-based contribution to ROCCH}%
\label{fig:chullrankingwithout}%
\end{figure*}
\section{Convex Hull-based Multi-objective Genetic Programming (CH-MOGP)}
\label{section:chmogp}
In this section, we describe the proposed convex hull-based multi-objective genetic programming for maximizing the ROCCH. First, the convex hull-based sorting without redundancy approach ranks the individuals of the union population into several levels that represent different survival priorities, as in NSGA-II. Second, since the target is to maximize the area under the convex hull (AUCH) rather than the hypervolume used in SMS-EMOA, an area-based indicator is designed to calculate each individual's contribution to AUCH maximization. One major disadvantage of the ($\mu$ + 1) selection strategy employed in SMS-EMOA and AG-EMOA is that it must call fast nondominated sorting $\mu$ times to select $\mu$ offspring. In~\cite{igel2007covariance}, an approximate ($\mu$ + $\mu$) scheme is proposed to speed up selection, and this idea is adopted in CH-MOGP.
\subsection{Convex hull-based sorting without redundancy}
\begin{footnotesize}
\begin{algorithm}[!htbp]
\caption{\emph{Convex hull-based-sorting-without-redundancy} ($Q$,$r$)}
\label{algchsort}
\begin{algorithmic}[1]
\REQUIRE $Q \neq \emptyset$
\STATE $Q$ is a solution set
\STATE $r$ is the reference point
\ENSURE \small{ch-based-sorting-without-redundancy}
\STATE $i$ = 0
\WHILE{$Q \neq \emptyset$}
\STATE $T$ = $Q \cup \{r\}$
\STATE $\textbf{F}_i$ = Jarvis-Algorithm($T$)~\cite{jarvis1973identification}
\STATE $\textbf{F}_i$ = Elimination($\textbf{F}_i$) // Points in $\textbf{F}_i$ that are of no interest are removed
\STATE $Q = Q - \textbf{F}_i$
\STATE $i = i+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\end{footnotesize}
First of all, we introduce convex hull-based sorting without redundancy. The main idea is to preserve the diversity of the population by force: every redundant solution is put into an archive, from which solutions are selected at random to survive into the next generation only if there are not enough non-redundant solutions to fill the population. Non-redundant solutions with poor performance thus have a chance to be kept while redundant solutions with good performance are discarded, which maintains high diversity and prevents the solutions on the convex hull from being copied many times during the selection phase of evolutionary multi-objective optimization. As described in Alg.~\ref{algchsort}, the population is split into a redundant part and a non-redundant part; the latter is sorted by convex hull-based sorting into several levels, while the redundant part is taken as the last level, whose members are candidates for random selection.
The first and second graphs of Fig.~\ref{fig:chullrankingwithout} illustrate convex hull-based sorting with and without redundancy. All redundant individuals are placed in the last level and selected at random for the next generation only if necessary.
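A compact sketch of the procedure for 2-D ROC points is given below. It is ours and simplified: the upper hull is computed with a monotone-chain scan instead of the Jarvis march of Alg.~\ref{algchsort}, and the reference point and elimination step are omitted.
\begin{verbatim}
# Peel successive upper convex hulls (the ROCCH side) of (fpr, tpr)
# points; exact duplicates go to a "redundant" archive used as the
# last level. Simplified sketch, not the paper's implementation.
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def upper_hull(pts):
    pts = sorted(set(pts))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()          # drop points below the upper hull
        hull.append(p)
    return hull

def ch_sort_without_redundancy(points):
    seen, unique, redundant = set(), [], []
    for p in map(tuple, points):
        (redundant if p in seen else unique).append(p)
        seen.add(p)
    levels, remaining = [], unique
    while remaining:
        front = upper_hull(remaining)
        levels.append(front)
        remaining = [p for p in remaining if p not in front]
    levels.append(redundant)    # redundant archive is the last level
    return levels
\end{verbatim}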
\begin{footnotesize}
\begin{algorithm}[!htbp]
\caption{\emph{DeltaArea} ($Q$)}
\label{deltahypervolume}
\begin{algorithmic}[1]
\REQUIRE $Q \neq \emptyset$
\STATE $Q$ is a solution set
\ENSURE \small{DeltaArea}
\STATE $m = sizeof(Q)$
\STATE $\textbf{E}$ is the list of performance vectors of $Q$
\STATE $\textbf{DeltaH}_{1},...,\textbf{DeltaH}_{m} \leftarrow 0$
\IF {$m < 3$}
\STATE Set $\textbf{DeltaH}_{1},...,\textbf{DeltaH}_{m} \leftarrow \infty$
\ELSE
\STATE Set $\textbf{DeltaH}_{1},\textbf{DeltaH}_{m} \leftarrow \infty$
\FOR {$2 \leq i \leq sizeof(Q)-1$}
\STATE $\textbf{DeltaH}_{i}$ = $\frac{1}{2}\left|\det\begin{pmatrix}\textbf{E}_{i}-\textbf{E}_{i-1}\\ \textbf{E}_{i+1}-\textbf{E}_{i-1}\end{pmatrix}\right|$
\ENDFOR
\WHILE {$sizeof(Q) > 2$}
\STATE $r \leftarrow argmin \{\textbf{DeltaH}\}$
\STATE $Q \leftarrow Q \texttt{\char92} \{Q_r \}$
\STATE Update($\textbf{DeltaH}_{r-1}$,$\textbf{DeltaH}_{r+1}$)
\ENDWHILE
\ENDIF
\STATE Return ($\textbf{DeltaH}$)
\end{algorithmic}
\end{algorithm}
\end{footnotesize}
\begin{footnotesize}
\begin{algorithm}[!htbp]
\caption{\emph{Reduce} ($Q$,$N$)}
\label{algreduce}
\begin{algorithmic}[1]
\REQUIRE $Q \neq \emptyset$
\STATE $Q$ is a solution set
\STATE $N$ is the number of solutions that will be discarded
\ENSURE \small{Reduce}
\STATE $F = \emptyset$
\STATE Split $Q$ into two subpopulations $U$ and $R$ // $R$ is the collection of redundant individuals
\IF {$sizeof(R) >= N$}
\STATE $F \leftarrow $ Random select $N$ solutions from $R$
\STATE $Q \leftarrow U \cup R \texttt{\char92} F$
\ELSE
\STATE $F \leftarrow R$
\STATE $\{\Re_1,\ldots,\Re_v\} \leftarrow$ \emph{Convex hull-based-sorting-without-redundancy}$(Q)$
\FOR {$i = v ... 1$}
\IF {$sizeof(F)$ + $sizeof(\Re_i) < N$}
\STATE $F \leftarrow F \cup \Re_i$
\STATE $U = U\texttt{\char92} \Re_i$
\ELSE
\STATE break
\ENDIF
\ENDFOR
\STATE $T \leftarrow$ Select $(N-sizeof(F))$ solutions from $\Re_i$ with minimal $DeltaArea(\Re_i)$
\STATE $F \leftarrow F \cup T$
\STATE $U \leftarrow U \texttt{\char92} T$
\STATE $Q \leftarrow U$
\ENDIF
\STATE Return ($Q$)
\end{algorithmic}
\end{algorithm}
\end{footnotesize}
\subsection{Area-based Selection Scheme}
In this subsection, we describe the area-based indicator used in the selection scheme of the new EMOA. The reason for adopting an area-based rather than a hypervolume-based contribution is that the target is to maximize the area under the convex hull, for which an area-based indicator is more direct and efficient. The third graph of Fig.~\ref{fig:chullrankingwithout} illustrates the area calculation in two dimensions. The contribution of a point $x$ with performance vector \textbf{X} is the area of the triangle formed by the point together with its predecessor $l$ and successor $u$ on the hull, whose performance vectors are \textbf{L} and \textbf{U}. Alg.~\ref{deltahypervolume} gives the procedure for calculating this area contribution, and Eq.~\ref{equ:area} gives the formula for the contribution of each point to its convex hull front:
\begin{equation}%
\Delta area = \frac{1}{2}\left|\det\begin{pmatrix}\textbf{X}-\textbf{L}\\ \textbf{U}-\textbf{X}\end{pmatrix}\right|
\label{equ:area}%
\end{equation}%
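In code, the contribution of Eq.~\ref{equ:area} amounts to half the absolute value of a $2\times 2$ determinant (a sketch with our naming):
\begin{verbatim}
# Triangle-area contribution of X between its hull neighbours L and U,
# i.e. half the absolute 2x2 determinant of (X-L, U-X).
def delta_area(L, X, U):
    ax, ay = X[0] - L[0], X[1] - L[1]
    bx, by = U[0] - X[0], U[1] - X[1]
    return 0.5 * abs(ax * by - ay * bx)

# A point exactly on the segment L-U contributes zero:
# delta_area((0, 0), (0.5, 0.5), (1, 1)) == 0.0
\end{verbatim}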
\begin{footnotesize}
\begin{algorithm}[!htbp]
\caption{\emph{CH-MOGP} ($Max,N$)}
\label{algchmoea}
\begin{algorithmic}[1]
\REQUIRE $Max > 0, N > 0$
\STATE $Max$ is the maximum number of evaluations
\STATE $N$ is the population size
\ENSURE \small{CH-MOGP}
\STATE $P_{0} = init()$
\STATE $t = 0$
\STATE $m = 0$
\WHILE {$m < Max$}
\STATE $Q_{t} = \emptyset$
\FOR {$i = 1:N$}
\STATE $q_{i} \leftarrow$ Operators on $P_t$
\STATE $Q_{t} \leftarrow Q_{t} \cup \{q_{i}\}$
\ENDFOR
\STATE $P_{t+1} \leftarrow Reduce(P_t \cup Q_{t}, N)$
\STATE $t \leftarrow t + 1$
\STATE $m \leftarrow m + N$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\end{footnotesize}
\subsection{CH-MOGP}
Alg.~\ref{algchmoea} describes the CH-MOGP algorithm. The framework is very similar to those of SMS-EMOA and NSGA-II. However, we employ the convex hull-based sorting without redundancy approach to rank the individuals into different levels, and the ($\mu$ + $\mu$) scheme is adopted in CH-MOGP. Because the target is to maximize the area under the convex hull, area-based selection is used instead of the hypervolume contribution, so that survivors with high area-based contributions are kept.
In Alg.~\ref{algchmoea}, the population size and the maximum number of evaluations are given first. The initial population is a group of solutions represented by genetic decision trees~\cite{jin2000fgp}, constructed with the ramped-half-and-half method~\cite{poli2008field}. Offspring are generated by two operators described in detail in~\cite{wang2011memetic}. As in other EMOAs, the selection part of CH-MOGP consists of two schemes: one sorts the population into different levels, and the other ranks the solutions within the same level. Convex hull-based sorting without redundancy and the area-based selection scheme play the main roles here. To reduce the number of calls to the sorting procedure, we adopt the ($\mu$ + $\mu$) scheme rather than the ($\mu$ + 1) scheme of SMS-EMOA.
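The overall loop can be summarized as follows (a sketch with our naming; \texttt{init\_population}, \texttt{make\_offspring} and \texttt{reduce\_population} stand for the initialization, the variation operators and Alg.~\ref{algreduce}, respectively):
\begin{verbatim}
# (mu + mu) main loop of CH-MOGP: parents and offspring compete, and
# the union is cut back to the population size by convex hull-based
# sorting without redundancy plus area-based selection (Reduce).
def ch_mogp(max_evals, pop_size, init_population, make_offspring,
            reduce_population):
    population = init_population(pop_size)
    evals = 0
    while evals < max_evals:
        offspring = [make_offspring(population) for _ in range(pop_size)]
        population = reduce_population(population + offspring, pop_size)
        evals += pop_size
    return population
\end{verbatim}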
\section{Experimental Studies}
\label{section:experiment}
\subsection{Data Set}
Nineteen data sets selected from the UCI repository~\cite{WP27} are described in Table~\ref{DataSets}. In addition, we choose three large-scale data sets, described in Table~\ref{LDataSets}, to make the results more solid. In this paper, we focus on binary classification, so all the data sets are two-class problems. Both balanced and imbalanced benchmark data sets are carefully selected; the sizes of the 19 UCI data sets range from hundreds to thousands of instances.
\begin{table}[htbp]
\caption{Algorithms Involved}
\label{algorithms}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{rccc}
\toprule
Name & Sorting & Selection & Scheme\\
\midrule
\emph{CH-MOGP} &CH-No-Redundancy& Area & $\mu + \mu$\\
\emph{RCHH-EMOA} &CH-No-Redundancy& Area & $\mu + 1$\\
\emph{CH-EMOA} &Convex Hull & Hypervolume & $\mu + 1$\\
\emph{CHCrowding}&CH-No-Redundancy& Crowding-distance& $\mu + \mu$ \\
\emph{CHH-MOGP} &Convex Hull & Area & $\mu + 1$ \\
\emph{NSGA-II} &Non-dominated & Crowding-distance& $\mu + \mu$ \\
\emph{SMS-EMOA} &Non-dominated & Hypervolume & $\mu + 1$ \\
\emph{MOEA/D} &Fitness & Fitness & -\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\begin{table*}
\caption{Nineteen UCI Data Sets}
\label{DataSets}
\begin{center}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{rllrllrll}
\toprule
\multirow{2}{*}{Data Set} & No. of & Class &\multirow{2}{*}{Data Set} & No.of & Class &\multirow{2}{*}{Data Set} & No.of & Class \\ & features & Distribution & & features & Distribution & & features & Distribution\\
\midrule
\emph{australian} &14 &383:307 &\emph{house-votes} &16 &168:267 &\emph{pima} &8 &268:500 \\
\emph{bcw }&9 &458:241 &\emph{ionosphere} &34 &225:126 &\emph{sonar} &60 &97:111 \\
\emph{crx} &15 &307:383 &\emph{kr-vs-kp} &36 &1669:1527 & \emph{monks-3} &6 &228:204 \\
\emph{transfusion} &4 &178:570 &\emph{mammographic} &5 &445:516 &\emph{spect} &22 &212:55 \\
\emph{german} &24 &700:300 &\emph{monks-1} &6 &216:216 & \emph{parkinsons} &22 &147:48 \\
\emph{wdbc} &30 &212:357 &\emph{monks-2} &6 &290:142 &\emph{tic-tac-toe} &9 &626:332 \\
\emph{bands} &36 &228:312 & &\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\begin{table*}[htbp]
\caption{Number of evaluations for each algorithm on the 22 data sets}
\label{EtimesDataSets}
\begin{center}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{rlrlrlrl}
\toprule
\multirow{2}{*}{Data Set} & No. of &\multirow{2}{*}{Data Set} & No.of &\multirow{2}{*}{Data Set} & No.of &\multirow{2}{*}{Data Set} & No.of \\
& Evaluations & & Evaluations & & Evaluations& & Evaluations\\
\midrule
\emph{australian} & 100000 & \emph{bands} & 150000 & \emph{bcw} & 50000 & \emph{crx} & 50000 \\
\emph{german} & 200000 & \emph{house-votes} & 30000 & \emph{ionosphere} & 80000 & \emph{kr-vs-kp} & 200000 \\
\emph{mammographic} & 60000 & \emph{monks-1} & 200000 & \emph{monks-2} & 1000000 & \emph{monks-3} & 40000 \\
\emph{parkinsons} & 30000 & \emph{pima} & 80000 & \emph{sonar} & 30000 & \emph{spect} & 40000 \\
\emph{tic-tac-toe} & 300000 & \emph{transfusion} & 22000 & \emph{wdbc} & 30000 & \emph{adult} & 10000 \\
\emph{magic04} & 10000 & \emph{skin} & 10000 & &&&\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\begin{table*}[!htbp]
\caption{Parameters for 8 algorithms}
\begin{center}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{rlrl}
\toprule
&Objective& Maximize Convex hull in ROC & \\
\midrule
Terminals of GP & \{0,1\}, with 1 representing ``Positive'' & Function set of GP & if-then-else, and,\\
& and 0 representing ``Negative'' && or, not, $>$, $<$, =. \\
\midrule
Data sets & 22 UCI data sets &
Algorithms & 8 algorithms in Table~\ref{algorithms}\\
\midrule
Crossover rate & 0.9 &
Mutation rate & 0.1 \\
\midrule
Shifting rate & 0.1 &
Splitting rate & 0.1 \\
\midrule
Parameters for GP & P (population size) = 20; & Termination criterion & maximum number G of\\
& G (maximum number of evaluations) = M & & evaluations has been reached\\
& Number of runs: &&\\
& 5-fold cross-validation, 20 times& &\\
\midrule
Selection strategy & Tournament selection, Size = 4 &
Max depth of & 3/17\\
&& initial/in-process individual program & \\
\bottomrule
\end{tabular}%
}
\end{center}%
\label{parametertbl}%
\end{table*}%
\subsection{Algorithms Involved}
To evaluate the performance of the two strategies proposed in this paper, Table~\ref{algorithms} describes the algorithms involved in our rigorous and sufficient experimental comparisons. Generally speaking, the experiment is designed around three components of an EMOA: the first is the sorting strategy, including convex hull-based sorting with and without redundancy and nondominated sorting (MOEA/D, however, is a decomposition-based MOEA with a different framework); the second is the indicator used in the selection scheme, including area-based, hypervolume-based and crowding-distance-based indicators; the last concerns the ($\mu$ + $\mu$) versus ($\mu$ + 1) schemes of the different EMOAs.
\subsection{Evaluation and Configuration}
\textbf{Evaluation}: To evaluate the generalization performance of the classifiers produced by the different algorithms, cross-validation is employed: we apply each algorithm to each of the 22 data sets with five-fold cross-validation repeated 20 times. Because we want to emphasize that CH-MOGP can do better with fewer evaluations, each compared algorithm is run with enough evaluations to make it converge. Table~\ref{EtimesDataSets} gives the number of evaluations used on each data set.
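The protocol can be sketched as follows (\texttt{run\_algorithm} and \texttt{auch\_on} are placeholders for an EMOA run and the AUCH computation on held-out data; \texttt{X} and \texttt{y} are numpy arrays):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold

# 5-fold cross-validation repeated 20 times; returns the mean and the
# standard deviation of the test AUCH over the 100 runs.
def evaluate(X, y, run_algorithm, auch_on, repeats=20, folds=5):
    scores = []
    for r in range(repeats):
        kf = KFold(n_splits=folds, shuffle=True, random_state=r)
        for tr, te in kf.split(X):
            pop = run_algorithm(X[tr], y[tr])
            scores.append(auch_on(pop, X[te], y[te]))
    return np.mean(scores), np.std(scores)
\end{verbatim}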
\textbf{Configuration}: We take the representation called GDT~\cite{jin2000fgp} as the individual in all multi-objective evolutionary algorithms. For binary classification problems, 0 and 1 (standing for negative and positive) are selected as the terminals of GP. Every classifier (individual) is constructed as an $if$-$then$-$else$ tree which involves $and$, $or$, $not$, $>$, $<$ and $=$ as operator symbols. Most offspring individuals are obtained by the crossover operator, applied with probability 0.9. We also employ the shifting and splitting operators described in~\cite{wang8using} with probability 0.1. Tournament selection is adopted as the selection strategy, with tournament size 4. To avoid overfitting, the maximum depth of each individual tree is limited to 17.
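For illustration, a GDT individual corresponds to a classifier of the following shape (a hand-written example, not an evolved one; attribute indices and thresholds are arbitrary):
\begin{verbatim}
# An if-then-else tree over attribute tests whose leaves are the GP
# terminals 1 ("Positive") and 0 ("Negative").
def example_gdt(x):
    if x[0] > 0.5 and x[2] < 1.0:
        return 1
    return 0 if x[1] == 0 else 1
\end{verbatim}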
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{./graph/diversity}\\
\caption{The diversity in convex hull-based sorting with and without redundancy effects the performance of the results}%
\label{fig:diversity}%
\end{figure*}
\subsection{Results and Analysis}
Fig.~\ref{fig:test}, Fig.~\ref{fig:test2}, Table~\ref{averagestd} and Table~\ref{Wilcoxon} show the performance of CH-MOGP compared with the other EMOAs on the 22 data sets. Generally speaking, CH-MOGP performs better both in AUCH and in the number of evaluations it requires.
In this subsection, we want to answer the questions as follows:
\begin{enumerate}
\item Why is convex hull-based sorting without redundancy better than traditional convex hull-based sorting?
\item Is convex hull-based sorting without redundancy better than the nondominated sorting approach in the ROCCH maximization problem?
\item Is the area-based selection scheme comparable with, or better than, crowding-distance or hypervolume-based selection?
\item Is CH-MOGP better than NSGA-II, SMS-EMOA and MOEA/D for ROCCH maximization?
\item Does CH-MOGP show advantages over traditional machine learning algorithms?
\end{enumerate}
To evaluate the proposed ideas, we use the 19 data sets in Table~\ref{DataSets} with the algorithms described in Table~\ref{algorithms}.
\begin{table*}[htbp]
\caption{Performance of four different MOGP frameworks on the UCI data sets; means and standard
deviations, multiplied by 100, are given in this table}
\label{averagestd}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{rcccc|rcccc}
\toprule
&CH-MOGP&SMS-EMOA&NSGA-II&MOEA/D& &CH-MOGP&SMS-EMOA&NSGA-II&MOEA/D\\
\midrule
\emph{australian} & 91.49 $\pm$ 2.72 & 91.67 $\pm$ 2.48 & 91.16 $\pm$ 2.41 & 90.29 $\pm$ 2.75&
\emph{bands} & 77.00 $\pm$ 4.05 & 76.38 $\pm$ 4.09 & 75.54 $\pm$ 3.56 & 71.85 $\pm$ 3.82\\
\emph{bcw} & 97.94 $\pm$ 1.20 & 97.73 $\pm$ 1.56 & 97.84 $\pm$ 1.41 & 97.48 $\pm$ 1.48&
\emph{crx} & 91.30 $\pm$ 2.45 & 91.16 $\pm$ 2.33 & 91.14 $\pm$ 2.36 & 89.88 $\pm$ 2.51\\
\emph{german} & 73.10 $\pm$ 3.24 & 73.32 $\pm$ 3.33 & 72.39 $\pm$ 3.07 & 71.45 $\pm$ 2.85&
\emph{house-votes} & 97.94 $\pm$ 1.56 & 97.69 $\pm$ 1.59 & 97.74 $\pm$ 1.71 & 97.15 $\pm$ 1.75\\
\emph{ionosphere} & 91.07 $\pm$ 4.95 & 90.51 $\pm$ 4.52 & 90.45 $\pm$ 4.53 & 89.89 $\pm$ 4.83&
\emph{kr-vs-kp} & 98.40 $\pm$ 0.89 & 98.63 $\pm$ 0.75 & 98.39 $\pm$ 0.79 & 96.67 $\pm$ 1.43\\
\emph{mammographic} & 89.75 $\pm$ 2.01 & 89.48 $\pm$ 1.94 & 89.41 $\pm$ 1.87 & 87.50 $\pm$ 2.23&
\emph{monks-1} & 99.70 $\pm$ 1.68 & 97.62 $\pm$ 3.71 & 99.62 $\pm$ 1.35 & 96.51 $\pm$ 5.69\\
\emph{monks-2} & 91.05 $\pm$ 8.00 & 89.28 $\pm$ 5.58 & 90.53 $\pm$ 5.19 & 73.26 $\pm$ 9.14&
\emph{monks-3} & 99.81 $\pm$ 0.43 & 99.74 $\pm$ 0.45 & 99.45 $\pm$ 2.87 & 99.07 $\pm$ 0.88\\
\emph{parkinsons} & 86.79 $\pm$ 6.86 & 85.11 $\pm$ 6.68 & 84.90 $\pm$ 7.54 & 83.94 $\pm$ 6.72&
\emph{pima} & 80.08 $\pm$ 3.38 & 79.85 $\pm$ 3.38 & 79.29 $\pm$ 3.70 & 76.93 $\pm$ 3.10\\
\emph{sonar} & 79.42 $\pm$ 5.87 & 78.04 $\pm$ 5.91 & 77.79 $\pm$ 7.34 & 75.75 $\pm$ 5.66&
\emph{spect} & 77.38 $\pm$ 7.36 & 76.27 $\pm$ 7.14 & 76.91 $\pm$ 8.46 & 74.88 $\pm$ 6.43\\
\emph{tic-tac-toe} & 83.40 $\pm$ 10.4 & 79.56 $\pm$ 11.1 & 79.07 $\pm$ 13.4 & 70.85 $\pm$ 10.4&
\emph{transfusion} & 71.62 $\pm$ 4.62 & 71.48 $\pm$ 4.47 & 71.49 $\pm$ 4.84 & 68.77 $\pm$ 4.63\\
\emph{wdbc} & 96.78 $\pm$ 1.92 & 96.49 $\pm$ 2.25 & 96.70 $\pm$ 2.11 & 95.90 $\pm$ 2.19& \\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\begin{table*}[htbp]
\caption{Performance of four different MOGP frameworks on the three large data sets; means and standard
deviations, multiplied by 100, are given in this table}
\label{Laveragestd}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{rcccc|rcccc}
\toprule
&CH-MOGP&SMS-EMOA&NSGA-II&MOEA/D& &CH-MOGP&SMS-EMOA&NSGA-II&MOEA/D\\
\midrule
\emph{adult} & 84.58 $\pm$ 1.40 & 82.53 $\pm$ 2.15 & 84.01 $\pm$ 1.38 & 77.04 $\pm$ 2.54&
\emph{magic04} & 83.02 $\pm$ 1.04 & 81.76 $\pm$ 1.57 & 82.01 $\pm$ 1.19 & 76.39 $\pm$ 3.07\\
\emph{skin} & 97.10 $\pm$ 1.11 & 95.46 $\pm$ 1.85 & 96.57 $\pm$ 1.25 & 93.20 $\pm$ 2.37\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\subsubsection{Question 1}
As argued above, because of the greedy nature of convex hull-based sorting, diversity decreases quickly with the number of generations or evaluations. Fig.~\ref{fig:diversity} shows the performance of CHH-MOGP and RCHH-EMOA, both described in Table~\ref{algorithms}. The only difference between these two algorithms is the sorting scheme: CHH-MOGP adopts traditional convex hull-based sorting, while RCHH-EMOA employs the convex hull-based sorting without redundancy approach. The third and fourth graphs in Fig.~\ref{fig:diversity} give the numbers of distinct individuals on the convex hull and in the whole population, which serve as simple measures of diversity. Clearly, RCHH-EMOA, with its larger diversity, performs better than CHH-MOGP in the first and second graphs of Fig.~\ref{fig:diversity}, which describe the AUCH performance on the training and test sets (here the data set ``sonar'' is taken as an example). We also give the Wilcoxon rank-sum test results (at a confidence level of 0.95) of RCHH-EMOA versus CHH-MOGP on the 19 data sets in Table~\ref{table:Wilcoxon_C1}. Generally speaking, RCHH-EMOA, with convex hull-based sorting without redundancy, is better than CHH-MOGP.
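Each pairwise entry reported below can be reproduced with a test of the following form (a sketch; \texttt{auch\_a} and \texttt{auch\_b} hold the 100 cross-validation AUCH values of the two algorithms on one data set):
\begin{verbatim}
from scipy.stats import ranksums

# Wilcoxon rank-sum test at confidence level 0.95 (alpha = 0.05):
# "draw" if the difference is not significant, otherwise the
# algorithm with the larger rank sum wins.
def compare(auch_a, auch_b, alpha=0.05):
    stat, p = ranksums(auch_a, auch_b)
    if p >= alpha:
        return "draw"
    return "A wins" if stat > 0 else "B wins"
\end{verbatim}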
\begin{table}[htbp]
\caption{Wilcoxon rank-sum test on 19 UCI data sets: the table shows the test results between RCHH-EMOA and CHH-MOGP at different numbers of evaluations. Each $x$-$y$-$z$ entry means that RCHH-EMOA wins $x$ times, draws $y$ times and loses $z$ times. Ratio means the fraction of the total evaluation budget}
\label{table:Wilcoxon_C1}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{rlllllll}
\toprule
Ratio & $\frac{1}{15}$ & $\frac{1}{10}$&$\frac{1}{4}$ & $\frac{1}{3}$&$\frac{1}{2}$ & $\frac{2}{3}$&$1$ \\
\midrule
\emph{CHH-MOGP}& 4-15-0 & 5-14-0 & 5-14-0 & 6-13-0 & 6-13-0 & 6-13-0 & 4-15-0\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\subsubsection{Question 2}
The algorithms CHCrowding and NSGA-II are used to answer question 2. As described in Table~\ref{algorithms}, CHCrowding and NSGA-II both employ crowding distance in their selection scheme but adopt different sorting approaches: convex hull-based sorting without redundancy is employed in CHCrowding while NSGA-II uses fast nondominated sorting, which is the only difference between them. Table~\ref{table:Wilcoxon_C2} shows the Wilcoxon rank-sum test results (at a confidence level of 0.95) for this comparison. Clearly, CHCrowding never loses to NSGA-II and sometimes wins.
\begin{table}[htbp]
\caption{Wilcoxon rank-sum test on 19 UCI data sets: the table shows the test results between CHCrowding and NSGA-II at different numbers of evaluations. Each $x$-$y$-$z$ entry means that CHCrowding wins $x$ times, draws $y$ times and loses $z$ times. Ratio means the fraction of the total evaluation budget}
\label{table:Wilcoxon_C2}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{rlllllll}
\toprule
Ratio & $\frac{1}{15}$ & $\frac{1}{10}$&$\frac{1}{4}$ & $\frac{1}{3}$&$\frac{1}{2}$ & $\frac{2}{3}$&$1$ \\
\midrule
\emph{NSGA-II}& 3-16-0 & 2-17-0 & 2-17-0 & 3-16-0 & 3-16-0 & 3-16-0 & 1-18-0\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\subsubsection{Question 3}
For question 3, we make two comparisons. The first involves CHH-MOGP and CH-EMOA, which are identical except for the selection scheme: CHH-MOGP uses area-based selection, whereas the hypervolume contribution is used in the selection scheme of CH-EMOA. The Wilcoxon rank-sum test results (at a confidence level of 0.95) are given in Table~\ref{Wilcoxon_C31}. Clearly, area-based selection works better than the hypervolume contribution when combined with convex hull-based sorting in a multi-objective optimization algorithm design. Second, we employ CHCrowding and CH-MOGP to measure the difference between area-based and crowding-distance selection. Table~\ref{Wilcoxon_C3} shows that there is no difference between them on the 19 data sets. One reason is that convex hull-based sorting without redundancy plays a more important role in these multi-objective algorithms than the selection scheme, although a selection scheme is still needed. Even though the area-based and crowding-distance-based selection schemes show no difference between these two algorithms, we still choose area-based selection because it is more intuitive for maximizing ROC performance.
\begin{table}[htbp]
\caption{Wilcoxon rank-sum test on 19 UCI data sets: the table shows the test results between CHH-MOGP and CH-EMOA at different numbers of evaluations. Each $x$-$y$-$z$ entry means that CHH-MOGP wins $x$ times, draws $y$ times and loses $z$ times. Ratio means the fraction of the total evaluation budget}
\label{Wilcoxon_C31}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{rlllllll}
\toprule
Ratio of total evaluations & $\frac{1}{15}$ & $\frac{1}{10}$&$\frac{1}{4}$ & $\frac{1}{3}$&$\frac{1}{2}$ & $\frac{2}{3}$&$1$ \\
\midrule
\emph{CH-EMOA}& 3-16-0 & 4-15-0 & 4-15-0 & 4-15-0 & 4-15-0 & 6-13-0 & 5-14-0\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\begin{table}[htbp]
\caption{Wilcoxon rank-sum test on 19 UCI data sets: the table shows the test results between CH-MOGP and CHCrowding at different numbers of evaluations. Each $x$-$y$-$z$ entry means that CH-MOGP wins $x$ times, draws $y$ times and loses $z$ times. Ratio means the fraction of the total evaluation budget}
\label{Wilcoxon_C3}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{rlllllll}
\toprule
Ratio & $\frac{1}{15}$ & $\frac{1}{10}$&$\frac{1}{4}$ & $\frac{1}{3}$&$\frac{1}{2}$ & $\frac{2}{3}$&$1$ \\
\midrule
\emph{CHCrowding}& 0-19-0 & 0-19-0 & 0-19-0 & 0-19-0 & 0-19-0 & 0-19-0 & 0-19-0\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\subsubsection{Question 4}
\textbf{AUCH analysis:} To answer question 4, we employ additional data sets, especially large ones, because we emphasize that our algorithm performs better with fewer evaluations, which saves considerable time on problems with expensive evaluations. Table~\ref{LDataSets} describes the three large data sets. Tables~\ref{averagestd} and~\ref{Laveragestd} give the results of the four evolutionary multi-objective algorithms, all using GDT, for maximizing the area under the convex hull in ROC space. Furthermore, Table~\ref{Wilcoxon} gives the Wilcoxon rank-sum test results (at a confidence level of 0.95) for them. To compare the performance of all algorithms at each stage of the evolutionary process, we show the results at 1/15, 1/10, 1/4, 1/3, 1/2, 2/3 and 1 of the whole process. It is very clear that CH-MOGP outperforms the other EMOAs.
\begin{table}[htbp]
\caption{Wilcoxon rank-sum test on the 22 data sets: the table shows the test results between CH-MOGP and the other three EMOAs (NSGA-II, SMS-EMOA and MOEA/D) on the 19 UCI data sets (top) and the three large data sets (bottom) at different numbers of evaluations. Each $x$-$y$-$z$ entry means that CH-MOGP wins $x$ times, draws $y$ times and loses $z$ times. Ratio means the fraction of the total evaluation budget}
\label{Wilcoxon}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{rlllllll}
\toprule
Ratio & $\frac{1}{15}$ & $\frac{1}{10}$&$\frac{1}{4}$ & $\frac{1}{3}$&$\frac{1}{2}$ & $\frac{2}{3}$&$1$ \\
\midrule
\emph{NSGA-II }& 4-15-0 & 4-15-0 & 2-17-0 & 4-15-0 & 5-14-0 & 5-14-0 & 4-15-0\\
\emph{SMS-EMOA} & 11-8-0 & 11-8-0 & 6-13-0 & 5-14-0 & 4-15-0 & 4-15-0 & 5-14-0\\
\emph{MOEA/D} & 19-0-0 & 19-0-0 & 19-0-0 & 19-0-0 & 19-0-0 & 19-0-0 & 19-0-0\\ \hline\hline
\emph{NSGA-II }& 0-3-0 & 1-2-0 & 1-2-0 & 1-2-0 & 2-1-0 & 2-1-0 & 2-1-0\\
\emph{SMS-EMOA} & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0\\
\emph{MOEA/D} & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0 & 3-0-0\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\begin{table*}
\caption{Three Large-scale Data Sets}
\label{LDataSets}
\begin{center}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{rllrllrll}
\toprule
\multirow{2}{*}{Data Set} & No. of & Class &\multirow{2}{*}{Data Set} & No.of & Class &\multirow{2}{*}{Data Set} & No.of & Class \\ & features & Distribution & & features & Distribution & & features & Distribution\\
\midrule
\emph{skin} &4 &50859:194198& \emph{magic04} &10 &12332 :6688 & \emph{adult} &14 &11687 : 37155 \\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\begin{table*}[htbp]
\caption{Number of evaluations for CH-MOGP (population size 100) on the 22 data sets}
\label{LEtimesDataSets}
\begin{center}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{rlrlrlrl}
\toprule
\multirow{2}{*}{Data Set} & No. of &\multirow{2}{*}{Data Set} & No.of &\multirow{2}{*}{Data Set} & No.of &\multirow{2}{*}{Data Set} & No.of \\
& Evaluations & & Evaluations & & Evaluations& & Evaluations\\
\midrule
\emph{australian} & 100000 & \emph{bands} & 1500000 & \emph{bcw} & 18500 & \emph{crx} & 450000 \\
\emph{german} & 120000 & \emph{house-votes} & 24000 & \emph{ionosphere} & 80000 & \emph{kr-vs-kp} & 2000000 \\
\emph{mammographic} & 80000 & \emph{monks-1} & 230000 & \emph{monks-2} & 10000000 & \emph{monks-3} & 190000 \\
\emph{parkinsons} & 42000 & \emph{pima} & 180000 & \emph{sonar} & 12000 & \emph{spect} & 10000 \\
\emph{tic-tac-toe} & 3000000 & \emph{transfusion} & 35000 & \emph{wdbc} & 21000 & \emph{adult} & 300000 \\
\emph{magic04} & 40000 & \emph{skin} & 30000 & &&&\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\begin{table*}[htbp]
\caption{Performance of CH-MOGP and traditional classifiers on the UCI data sets; means and standard
deviations, multiplied by 100, are given in this table}
\label{LCCaveragestd}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{rcccc|rcccc}
\toprule
& CH-MOGP & C4.5 & NB & PRIE & & CH-MOGP & C4.5 & NB & PRIE\\
\midrule
\emph{australian} & 91.97 $\pm$ 2.53 & 85.52 $\pm$ 4.05 & 89.47 $\pm$ 2.78 & 91.75 $\pm$ 2.36 & \emph{monks-3} & 100.0 $\pm$ 0.00 & 100.0 $\pm$ 0.00 & 95.94 $\pm$ 2.17 & 99.60 $\pm$ 0.27\\
\emph{bands} & 78.50 $\pm$ 3.56 & 74.56 $\pm$ 4.59 & 73.91 $\pm$ 4.68 & 76.07 $\pm$ 4.81 & \emph{parkinsons} & 86.10 $\pm$ 6.66 & 78.91 $\pm$ 9.76 & 85.91 $\pm$ 6.11 & 88.24 $\pm$ 5.83\\
\emph{bcw} & 98.17 $\pm$ 1.06 & 95.05 $\pm$ 2.55 & 98.92 $\pm$ 0.62 & 98.16 $\pm$ 1.09 & \emph{pima} & 80.74 $\pm$ 3.12 & 75.23 $\pm$ 4.93 & 81.40 $\pm$ 3.01 & 79.58 $\pm$ 2.92\\
\emph{crx} & 91.82 $\pm$ 2.27 & 85.51 $\pm$ 3.94 & 87.88 $\pm$ 3.16 & 90.65 $\pm$ 2.77 & \emph{sonar} & 81.44 $\pm$ 5.15 & 73.85 $\pm$ 7.84 & 80.12 $\pm$ 7.03 & 69.92 $\pm$ 8.64\\
\emph{german} & 74.27 $\pm$ 2.79 & 65.36 $\pm$ 4.74 & 78.42 $\pm$ 2.94 & 75.95 $\pm$ 3.25 & \emph{spect} & 78.56 $\pm$ 7.44 & 76.88 $\pm$ 8.91 & 84.09 $\pm$ 6.03 & 83.51 $\pm$ 7.01\\
\emph{house-votes} & 98.23 $\pm$ 1.26 & 96.35 $\pm$ 2.04 & 98.05 $\pm$ 1.04 & 97.80 $\pm$ 1.49 & \emph{tic-tac-toe} & 90.07 $\pm$ 8.88 & 84.91 $\pm$ 13.9 & 61.50 $\pm$ 14.7 & 70.41 $\pm$ 12.5\\
\emph{ionosphere} & 92.42 $\pm$ 3.66 & 88.20 $\pm$ 5.65 & 93.57 $\pm$ 3.18 & 93.68 $\pm$ 4.23 & \emph{transfusion} & 72.19 $\pm$ 4.89 & 71.08 $\pm$ 5.08 & 70.93 $\pm$ 4.94 & 70.87 $\pm$ 5.39\\
\emph{kr-vs-kp} & 99.40 $\pm$ 0.26 & 99.71 $\pm$ 0.23 & 93.21 $\pm$ 1.00 & 98.26 $\pm$ 0.44 & \emph{wdbc} & 97.32 $\pm$ 1.40 & 92.74 $\pm$ 3.16 & 98.14 $\pm$ 1.33 & 96.58 $\pm$ 1.94\\
\emph{mammographic} & 90.20 $\pm$ 1.76 & 87.66 $\pm$ 2.21 & 89.77 $\pm$ 1.96 & 89.70 $\pm$ 2.02 & \emph{adult} & 88.97 $\pm$ 0.37 & 88.89 $\pm$ 0.53 & 85.27 $\pm$ 0.37 & 90.37 $\pm$ 0.25\\
\emph{monks-1} & 100.0 $\pm$ 0.00 & 77.13 $\pm$ 6.90 & 73.18 $\pm$ 4.58 & 70.93 $\pm$ 5.59 & \emph{magic04} & 87.16 $\pm$ 0.74 & 86.76 $\pm$ 0.83 & 75.70 $\pm$ 0.74 & 85.37 $\pm$ 0.76\\
\emph{monks-2} & 95.68 $\pm$ 4.61 & 94.17 $\pm$ 5.93 & 52.38 $\pm$ 7.04 & 51.25 $\pm$ 6.16 & \emph{skin} & 99.49 $\pm$ 0.11 & 99.93 $\pm$ 0.02 & 94.17 $\pm$ 0.07 & 98.15 $\pm$ 0.08\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table*}%
\begin{table}[htbp]
\caption{Wilcoxon rank-sum test on the 22 data sets: the table shows the pairwise test results among CH-MOGP, C4.5, NB and PRIE. Each $x$-$y$-$z$ entry means that the row algorithm wins $x$ times, draws $y$ times and loses $z$ times against the column algorithm.}
\label{LWilcoxonT}
\begin{center}
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{rcccc}
\toprule
& CH-MOGP & C4.5 & NB & PRIE\\
\midrule
\emph{CH-MOGP} & & 15-5-2 & 11-6-5 & 13-4-5\\
\emph{C4.5} & & & 8-2-12 & 8-1-13\\
\emph{NB} & & & & 6-6-10\\
\emph{PRIE} & & & & \\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
\textbf{The Performance and Evaluation Times}:
Fig.~\ref{fig:test} and Fig.~\ref{fig:test2} show the performance of CH-MOGP, SMS-EMOA, NSGA-II and MOEA/D on the 22 data sets. Specifically, we show the convergence of these EMOAs on the training and test sets under 5-fold cross-validation repeated 20 times. Generally speaking, the curves of CH-MOGP lie above the others on most data sets. In other words, for a given and very limited number of evaluations, CH-MOGP performs better than the other EMOAs on the classification task.
\subsubsection{Question 5}
\textbf{AUCH comparison:} In this subsection, we compare CH-MOGP with C4.5~\cite{quinlan1993c4}, Naive Bayes (NB)~\cite{lewis1998naive} and PRIE~\cite{fawcett2008prie}, which are traditional machine learning algorithms for constructing classifiers. To make the comparison fair, we set the population size of CH-MOGP to 100. The reason is that soft classifiers usually output scores/probabilities on their test sets, and the number of distinct scores or probabilities determines the number of performance points in ROC space; that number is not small. We therefore choose a generic value, 100, as the population size of CH-MOGP. A larger population size requires more evaluations, so Table~\ref{LEtimesDataSets} gives the number of evaluations used by CH-MOGP on the 22 data sets. Table~\ref{LCCaveragestd} shows the results for CH-MOGP, C4.5, NB and PRIE on all data sets; furthermore, the Wilcoxon rank-sum test results (at a confidence level of 0.95) are given in Table~\ref{LWilcoxonT}.
\textbf{Evaluation Times:}
\begin{table}[htbp]
\caption{Time for CH-MOGP, C4.5, NB and PRIE to construct classifiers that maximize the ROCCH}
\label{Ltime}
\begin{center}
\resizebox{0.55\textwidth}{!}{
\begin{tabular}{rccccrcccc}
\toprule
Time(s) & CHMOGP & C4.5 & NB & PRIE & Time(s) & CHMOGP & C4.5 & NB & PRIE\\
\midrule
australian & 116.91 & 0.06 & 0.02 & 4.18&
bands & 2242.5 & 0.04 & 0.03 & 15.85\\
bcw & 28.63 & 0.01 & 0.02 & 0.53&
crx & 653.45 & 0.02 & 0.02 & 2.92\\
german & 234.27 & 0.16 & 0.04 & 4.79&
house-votes & 13.2 & 0.01 & 0.02 & 0.48\\
ionosphere & 59.51 & 0.04 & 0.02 & 5.77&
kr-vs-kp & 12389.37 & 0.27 & 0.22 & 1.58\\
mammographic & 95.75 & 0.01 & 0.02 & 0.87&
monks-1 & 174.67 & 0.01 & 0.02 & 0.29\\
monks-2 & 8558.14 & 0.01 & 0.02 & 0.3&
monks-3 & 83.49 & 0.01 & 0.02 & 0.31\\
parkinsons & 17.48 & 0.01 & 0.02 & 1.62&
pima & 206.04 & 0.02 & 0.02 & 16.46\\
sonar & 129.28 & 0.03 & 0.02 & 31.45&
spect & 89.05 & 0.02 & 0.02 & 0.39\\
tic-tac-toe & 5396.3 & 0.03 & 0.02 & 0.48&
transfusion & 28.98 & 0.01 & 0.02 & 4.34\\
wdbc & 27.39 & 0.04 & 0.03 & 20.86&
adult & 15655.92 & 0.42 & 2.08 & 1771.73\\
magic04 & 7601.82 & 0.28 & 0.57 & 1103.05&
skin & 91856.38 & 15.01 & 3.7 & 70.15\\
\bottomrule
\end{tabular}
}
\end{center}%
\end{table}%
Table~\ref{Ltime} gives the time needed by CH-MOGP, C4.5, NB and PRIE to construct classifiers that maximize the ROCCH. The experimental environment is an 8-core 2.13\,GHz CPU with 24\,GB RAM. Obviously, CH-MOGP consumes much more time than the others: because of the metaheuristic character of evolutionary algorithms, GP needs to evaluate many classifiers until it converges. On the other hand, the NB method calculates a posteriori probabilities and C4.5 adopts a greedy method to increase information gain. PRIE employs a greedy strategy to construct classifiers (more than one, usually dozens) to maximize the ROCCH, so it costs a little more time than NB and C4.5, but still much less than CH-MOGP. How to reduce the running time of CH-MOGP is therefore an important topic.
\section{Conclusions and Future Work}
\label{section:confusion}
In this paper, we propose a convex hull-based sorting approach and an area-based selection scheme embedded in multi-objective genetic programming for maximizing ROC performance in classification tasks. First, we emphasized that convex hull maximization is similar to, yet goes beyond, multi-objective optimization: traditional techniques are helpful but must be improved to solve this kind of problem. Instead of the fast nondominated sorting used in NSGA-II and SMS-EMOA, convex hull-based sorting was investigated in the new algorithm design, and we found that convex hull-based sorting without redundancy is effective at avoiding the loss of diversity during the search process. An area-based selection scheme with a ($\mu$ + $\mu$) generation scheme was also designed to help rank the population. The new algorithm, CH-MOGP, was evaluated on benchmarks and works better than traditional EMOAs and several traditional machine learning algorithms. Three topics are left for future work. The first is how to reduce the running time of CH-MOGP while keeping comparable performance for ROCCH maximization. The second is that the GP-based classifier could be replaced by other tree-based classifiers or traditional machine learning classifiers such as SVM or NB; different classifiers might yield better performance for ROCCH maximization. The third concerns convex hull-based sorting without redundancy and the area-based selection scheme themselves: these two strategies could be applied not only to classification but also to other areas such as numerical optimization.
\section*{Acknowledgment}
The authors would like to thank...
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{./ZAverage/c2}\\
\caption{Performance of four different EMOAs on 10 data sets}%
\label{fig:test2}%
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{./ZAverage/c1}\\
\caption{Performance of four different EMOAs on 12 data sets}%
\label{fig:test}%
\end{figure*}
\bibliographystyle{elsarticle-num}
\section{Introduction}
The theory of mean field games (MFGs for short) has been more and more investigated since the pioneering works \cite{ll06-1,ll06-2,ll07} of Lasry and Lions: it aims at studying the asymptotic behaviour of differential games (Nash equilibria) as the number of agents tends to infinity.
In the present work, we study deterministic mean field games with finite time horizon in which the dynamics of a generic agent is controlled by the acceleration. They are described by a system of PDEs
coupling a continuity equation for the density of the distribution of states (forward in time) and a Hamilton-Jacobi (HJ) equation for the optimal value of a representative agent (backward in time). The state variable is the pair $(x,v)\in {\mathbb R} ^N\times {\mathbb R}^N$ where $x$ stands for the position and $v$ stands for the velocity.
The systems of PDEs are of the form
\begin{equation}
\label{eq:MFGA}
\left\{
\begin{array}{rll}
(i)&-\partial_t u-v\cdot D_xu+H(x,v,D_vu)-F[m(t)](x,v)=0&\qquad \textrm{in }{\mathbb R}^{2N}\times (0,T)\\
(ii)& \partial_t m +v\cdot D_xm-{\rm div}_v(D_{p_v}H(x,v,D_vu)m)=0&\qquad \textrm{in }{\mathbb R}^{2N}\times (0,T)\\
(iii)& m(x,v,0)=m_0(x,v), u(x,v,T)=G[m(T)](x,v)\,,&\qquad \textrm{on }{\mathbb R}^{2N}
\end{array}\right.
\end{equation}
where $T$ is a positive real number, $u=u(x,v,t)$, $m=m(x,v,t)$, $(x,v)\in{\mathbb R}^{2N}$, $t\in(0,T)$ and
$H$ is defined by
\begin{equation}
\label{HamA}
H(x,v,p_v)=\max_{\alpha\in {\mathbb R}^N}(-\alpha p_v-l(x,v,\alpha)).
\end{equation}
We take $F$ and $G$ strongly regularizing and we assume that the running cost has the form $l(x,v,\alpha)=l(x,v)+\frac1 2 \vert \alpha\vert^2+\frac1 2 \vert v \vert^2$, where $(x,v)\mapsto l(x,v)$ is a bounded and $C^2$-bounded function.
{\it Formally}, systems of this form arise when the dynamics of the generic player is described by a {\sl double integrator}:
\begin{equation}\label{eq:HJA}
\left\{
\begin{array}{rcll}
\xi'(s)&=&\eta(s),\quad &s\in (t,T),\\
\eta'(s)&=&\alpha(s),\quad &s\in (t,T),\\
\xi(t)&=&x, &\\\eta(t)&=&v,&
\end{array}\right.
\end{equation}
and when the control law belongs to the space of the measurable functions with values in ${\mathbb R}^N$ and is chosen in
order to minimize the cost
\begin{equation}\label{cost}
J_t:=J_t(\xi,\eta, \alpha)
=\displaystyle\int_t^T
l(\xi(s), \eta(s), \alpha(s))+F[m(s)](\xi(s), \eta(s))ds+G[m(T)](\xi(T), \eta(T)).
\end{equation}
To summarize, the main features of this model are:
\begin{enumerate}
\item The control $\alpha$ is only involved in the dynamics of the second component of the state variable, see \eqref{eq:HJA}.
\item The running cost has the form
\begin{equation}
\label{eq:10}
l(\xi,\eta,\alpha)=l(\xi,\eta)+\frac1 2 \vert \eta \vert^2+\frac1 2 \vert \alpha \vert^2,
\end{equation}
where $(\xi,\eta)\mapsto l(\xi,\eta)$ is a bounded $C^2$ function, thus the former is unbounded w.r.t. the variable $\eta$.
Note that $\vert \eta \vert^2$ stands for a kinetic energy, whereas the term $\vert \alpha \vert^2$ is a penalty for large accelerations.
Note also that the results of the present paper hold for a fairly large class of generalizations of (\ref{eq:10}).
\item Setting $f(\xi, \eta, \alpha)= (\eta, \alpha)$, the Hamiltonian associated to the control problem of a generic player is
\begin{displaymath}
\mathcal{H}( \xi,\eta, p)= \max_{\alpha\in {\mathbb R}^N}\{-p\cdot f(\xi, \eta,\alpha) -l(\xi,\eta,\alpha)\}=-p_x\cdot \eta +H(\xi,\eta, p_v),
\end{displaymath}
where $p=(p_x,p_v)$ and $H$ is defined in (\ref{HamA}); the maximum is computed explicitly right after this list. The Hamiltonian $\mathcal{H}$ is neither strictly convex nor coercive
with respect to $p=(p_x, p_v)$. Hence the available results
on the regularity of the value function $u$ of the associated optimal control problem (\cite{CS}, \cite{Cla}, \cite{C}) and on the
existence of a solution of the MFG system (\cite{C}) cannot be applied.
\end{enumerate}
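Since the running cost (\ref{eq:10}) is quadratic in $\alpha$, the maximum in the definition of $\mathcal{H}$ can be computed explicitly: the supremum of $-p_v\cdot\alpha-\frac12\vert\alpha\vert^2$ over $\alpha\in{\mathbb R}^N$ is attained at $\alpha^*=-p_v$, hence
\begin{displaymath}
H(\xi,\eta,p_v)=\frac12\vert p_v\vert^2-\frac12\vert\eta\vert^2-l(\xi,\eta),
\end{displaymath}
and the optimal feedback of the generic player is $\alpha^*=-D_vu$, which is the drift appearing in the continuity equation of the MFG system below.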
We recently learnt that a similar type of mean field game has been studied in \cite{CM}, independently, at the same time, and with different techniques. To the best of our knowledge, these systems have not been investigated elsewhere.
The main results of the present work are the existence of a solution of~\eqref{eq:MFGAs} and a characterization of the distribution of states~$m$. In order to establish the representation formula for~$m$, we use some ideas introduced by P-L Lions in his lectures at Coll\`ege de France (2012) (see \cite{L-coll,C}), some results proved in~\cite{CH,C13}, and the superposition principle \cite [Theorem 8.2.1]{AGS}. These methods rely on optimal control theory, in particular on optimal synthesis results.
In our setting, the lack of coercivity of $\mathcal{H}$ makes it impossible to directly apply the arguments of~\cite[Sect. 4.1]{C}, in particular a contraction property of the flow associated to the dynamics (see~\cite[Lemma 4.13]{C}).
However, the superposition principle and suitable optimal synthesis results will be used to characterize $m$ as the image of the initial distribution by the optimal flow associated with the Hamilton-Jacobi equation. By standard techniques for monotone operators (see Lasry and Lions~\cite{ll07}), we also obtain the uniqueness of the solution under classical assumptions.
The superposition principle has already been used in a different approach, for instance in the articles of Cardaliaguet~\cite{C15}, Cardaliaguet, M\'esz\'aros and Santambrogio~\cite{CMS16} and Orrieri, Porretta and Savar\'e~\cite{OPS19}. In these works, the authors tackle the MFG systems of the first order using a variational approach based on two optimization problems in duality, under suitable assumptions. Then, using the superposition principle, they are able to describe the solution to the continuity equation arising in the optimality conditions of the latter optimization problem by means of a measure on the space of continuous paths. This measure is concentrated on the set of minimizing curves for the optimal control problem underlying the Hamilton-Jacobi equation.
A similar approach to the one of the present paper was recently proposed for a class of non-coercive MFG when the generic player has some ``forbidden direction'' (see \cite{MMMT}), more precisely when, in the two dimensional case, the dynamics is of the form: $x_1'= \alpha_1$, $x_2'=h(x_1)\alpha_2$ and $h(x_1)$ may vanish.
In a near future, we plan to tackle mean field games with control on the acceleration and with constraints (for MFGs with state constraints we refer to \cite{ABLLM, CC,CCC, CCC2}).
The paper is organized as follows. In Section \ref{Ass}, we list our assumptions, give the definition of (weak) solution to system~\eqref{eq:MFGAs} and state the existence and uniqueness results for the latter.
In Section \ref{OC},
we obtain some regularity properties for the solution $u$ of the Hamilton-Jacobi equation~\eqref{eq:MFGAs}-(i) with $m$ fixed.
These properties, combined with the uniqueness of the optimal trajectories of the associated control problem, will be crucial for proving the main theorem.
In Section \ref{sect:c_eq}, we study the continuity equation~\eqref{eq:MFGAs}-(ii). An important ingredient is the vanishing viscosity method that is used to characterize its solution.
Finally, Section~\ref{sect:MFG} is devoted to the proofs of the main Theorem~\ref{thm:main} on the existence of a solution and of Proposition~\ref{prp:!} on its uniqueness.
In the Appendix, following a suggestion of the referee, we establish the existence and the uniqueness of the solution to the corresponding second order MFG system as a byproduct of the estimates needed for the vanishing viscosity limit.
\section{Assumptions and main results}\label{Ass}
We consider the running cost $l(x,v,\alpha)$ of the form $
l(x,v,\alpha)=l(x,v)+\frac1 2 \vert \alpha\vert^2+\frac1 2 \vert v \vert^2$.\\
Then system~\eqref{eq:MFGA} can be written
\begin{equation}
\label{eq:MFGAs}
\left\{
\begin{array}{rll}
(i)&-\partial_t u-v\cdot D_xu+\frac1 2 \vert D_vu\vert^2-\frac1 2 \vert v \vert^2 -l(x,v)- F[m](x,v)=0,&\quad \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
(ii)& \partial_t m +v\cdot D_xm-{\rm div}_v(D_vu\, m)=0,&\quad \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
(iii)& m(x,v,0)=m_0(x,v),\ u(x,v,T)=G[m(T)](x,v),&\quad \textrm{on }{\mathbb R}^{2N},
\end{array}\right.
\end{equation}
which corresponds to $H(x,v,p_v)=\frac1 2 \vert p_v\vert^2-\frac1 2 \vert v \vert^2-l(x,v)$.
Let $\mathcal P_1$ and $\mathcal P_2$ denote the spaces of Borel probability measures on~${\mathbb R}^{2N}$
with respectively finite first and second order moments, endowed with the Monge-Kantorovich distances~{${\bf d}_1$}, respectively {${\bf d}_2$}.
Let $C^2({\mathbb R}^{2N})$ denote the space of twice differentiable functions with continuous and bounded derivatives up to order two. It is endowed with the norm \\
$\|f\|_{C^{2}}:=\sup_{(x,v)\in{\mathbb R}^{2N}}[|f(x,v)|+|Df(x,v)|+|D^2f(x,v)|]$.
Hereafter, we shall make the following hypotheses:
\paragraph{\bf Assumptions (H)}
\begin{itemize}
\item[(H1)]\label{H1} The functions~$F$ and $G$ are real-valued continuous functions defined on $\mathcal P_1\times{\mathbb R}^{2N}$;
\item[(H2)]\label{H2} The function~$l$ is a real-valued $C^2$ function defined on ${\mathbb R}^{2N}$;
\item[(H3)]\label{H3} The map $m\mapsto F[m](\cdot, \cdot)$ is Lipschitz continuous from $\mathcal P_1$ to $C^{2}({\mathbb R}^{2N})$; moreover, there exists~$C>0$ such that $C \ge \|l\|_{C^2}$ and
$$\|F[m](\cdot,\cdot)\|_{C^2} + \|G[m](\cdot,\cdot)\|_{C^2}\leq C,\qquad \forall m\in \mathcal P_1
$$
\item[(H4)]\label{H4} the initial distribution~$m_0$, defined on ${\mathbb R}^{2N}$,
has a compactly supported density (still named $m_0$, with a slight abuse of notation) $m_0\in C^{0,\delta}({\mathbb R}^{2N})$ for some $\delta\in (0,1)$.
\end{itemize}
\begin{definition}\label{defsolmfg}
The pair $(u,m)$ is a solution of system~\eqref{eq:MFGAs} if:
\begin{itemize}
\item[1)] $u\in W_{\rm loc}^{1,\infty}({\mathbb R}^{2N}\times[0,T])$, $m\in C([0,T];\mathcal P_1({\mathbb R}^{2N}))$
and for all $ t\in [0,T]$, $m(t)$ is absolutely continuous with respect to Lebesgue measure on ${\mathbb R}^{2N}$.
Let $m(\cdot, \cdot, t)$ denote the density of $m(t)$. The function $(x,v,t)\mapsto m(x,v,t)$ is bounded.
\item[2)] equation~\eqref{eq:MFGAs}-(i) is satisfied by $u$ in the viscosity sense
\item[3)] equation~\eqref{eq:MFGAs}-(ii) is satisfied by $m$ in the sense of distributions.
\end{itemize}
\end{definition}
We can now state the main result of this paper:
\begin{theorem}\label{thm:main}
Under the assumptions $\rm{(H)}$:
\begin{enumerate}
\item System \eqref{eq:MFGAs} has a solution $(u,m)$ in the sense of Definition~\ref{defsolmfg},
\item $m$ is the image of $m_0$ by the flow
\begin{equation}\label{dyn}
\left\{
\begin{array}{ll}
x'(s)=v(s),& \quad x(0)=x, \\
v'(s)=-D_vu(x(s),v(s),s),&\quad v(0)=v.
\end{array}
\right.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proposition}\label{prp:!}
Under the additional assumptions
\begin{equation}\label{monot}
\int_{{\mathbb R}^{2N}}(F[m_1]-F[m_2]) d(m_1-m_2)>0\quad \textrm{and }
\int_{{\mathbb R}^{2N}}(G[m_1]-G[m_2]) d(m_1-m_2)\geq0
\end{equation}
for every $m_1,m_2\in\mathcal{P}_1({\mathbb R}^{2N})$, $m_1\ne m_2$, the solution found in Theorem~\ref{thm:main} is unique.
\end{proposition}
\section{The optimal control problem}\label{OC}
In this section, we tackle the optimal control problem related to equation~\eqref{eq:MFGAs}-(i) with a fixed $\overline m\in C([0,T];{\mathcal P}_1({\mathbb R}^{2N}))$. To simplify the notation, we introduce the functions
\begin{equation}\label{ell}
\ell(x,v,t):=l(x,v)+F[\overline m(t)](x,v)\quad\textrm{and}\qquad g(x,v):=G[\overline m(T)](x,v),
\end{equation}
which, by assumptions $\rm{(H)}$, satisfy
\begin{equation}\label{HOC}
\|\ell(\cdot,\cdot,t)\|_{C^2},\, \|\ell(x,v,\cdot)\|_{C}, \, \|g\|_{C^2}\leq C\qquad \forall t\in[0,T], (x,v)\in{\mathbb R}^{2N}.
\end{equation}
With the new notation, the optimal control problem to be solved by a representative agent whose state at time $t$ is $(x,v)$ is to find the control law $\alpha$ in order to minimize
\begin{equation}\label{def:OC}
J_t(\xi,\eta, \alpha) =\displaystyle\int_t^T
\left[\frac{|\alpha|^2}{2}+\frac{|\eta|^2}{2}+\ell(\xi(s), \eta(s), s)\right]ds+g(\xi(T),\eta(T)),
\end{equation}
by following the trajectory~\eqref{eq:HJA}. Then the Cauchy problem given by ~\eqref{eq:MFGAs}-(i) and its terminal condition becomes
\begin{equation}\label{HJ}
\left\{\begin{array}{ll}
-\partial_t u-v\cdot D_xu+\frac1 2 \vert D_vu\vert^2-\frac1 2 \vert v \vert^2 -\ell(x,v,t)=0& \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
u(x,v,T)=g(x,v)& \textrm{on }{\mathbb R}^{2N}.
\end{array}\right.
\end{equation}
From~\eqref{def:OC}, it is obvious that the control~$\alpha$ must be chosen in~$L^2(t,T;{\mathbb R}^{N})$. Therefore, we can introduce the value function as follows:
\begin{definition} The value function for the cost $J_t$ defined in \eqref{def:OC} with dynamics~\eqref{eq:HJA} is
\begin{equation}\label{repr}u(x,v,t):=\inf\left\{ J_t(\xi,\eta, \alpha):\, (\xi,\eta, \alpha)\in \mathcal A(x,v,t)\right\}
\end{equation}
where
\begin{equation}
\label{eq:constraint}
{\mathcal A}(x,v,t)\!=\!\left\{ ( \xi, \eta, \alpha):\;\left|
\begin{array}[c]{l} (\xi,\eta)\in AC([t,T]; {\mathbb R}^{2N}),\;\alpha\in L^2(t,T; {\mathbb R}^N),\\
(\xi,\eta,\alpha) \textrm{ satisfy } \eqref{eq:HJA} \textrm{ and }\xi(t)= x, \eta(t)=v
\end{array}\right.\right\}.
\end{equation}
\end{definition}
\begin{lemma}\label{DPP}
\begin{itemize}
\item[i)] {\it (Existence of an optimal control.)} For every $(x,v,t)\in{\mathbb R}^N\times{\mathbb R}^N\times (0,T)$, there exists an optimal control $\alpha^*$ for $u(x,v,t)$.
\item[ii)] {\it (Concatenation.)} Let $(\xi^*,\eta^*)$ be an optimal trajectory for $u(x,v,t)$ corresponding to the control law $\alpha^*$. For $r\in(t,T)$, let $(\tilde\xi^*,\tilde\eta^*)$ be an optimal trajectory for $u(\xi^*(r),\eta^*(r),r)$ with control $\tilde \alpha^*$. Then the concatenation of $\alpha^*$ and $\tilde\alpha^*$ at time $r$ is optimal for $u(x,v,t)$ and, moreover,
\[
u(x,v,t)=u(\xi^*(r),\eta^*(r),r)+\int_t^r
\left[\frac{|\alpha^*|^2}{2}+\frac{|\eta^*|^2}{2}+\ell(\xi^*(s), \eta^*(s), s)\right]ds.
\]
\item[iii)] Under the same assumption as in point (ii), the control $\alpha^*_{\mid [r,T]}$ is optimal for \\ $u(\xi^*(r),\eta^*(r),r)$.
\item[iv)] {\it(Dynamic Programming Principle.)} The Dynamic Programming Principle holds, namely
\begin{equation*}
u(x,v,t)=\min_{(\xi,\eta,\alpha)\in{\mathcal A}(x,v,t)}\left\{
u(\xi(r),\eta(r),r)+\int_t^r\frac{|\alpha(s)|^2}{2}+\frac{|\eta(s)|^2}{2} + \ell(\xi(s),\eta(s),s)\, ds
\right\}.
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
{\sl(i)}: let $\{\alpha_n\}_n$ be a sequence of minimizing control laws and $(\xi_n, \eta_n)$ be the solution of (\ref{eq:HJA}) corresponding to $\alpha_n$. Then, the boundedness of $\ell$ and the definition of $J_t$ ensure that $\|\alpha_n\|_{L^2(t,T;{\mathbb R}^N)}$ is uniformly bounded. Then, possibly after extracting a subsequence, $\alpha_n\rightharpoonup \alpha^*$ in $L^2(t,T;{\mathbb R}^N)$, $\eta_n\to \eta^*$ in $C([t,T];{\mathbb R}^N)$ and $\xi_n\to \xi^*$ in $C^1([t,T];{\mathbb R}^N)$. The lower semi-continuity of $J_t$ yields that $\alpha^*$ is optimal.
Points {\sl(ii)}, {\sl(iii)} and {\sl(iv)} are obtained by arguing exactly as in \cite[Proposition 5.1]{MMMT} (points (1), (2) and (4) respectively), see also \cite{C}.
\end{proof}
\begin{lemma}\label{L1}
The value function~$u$ has the following properties:
\begin{enumerate}
\item (Lipschitz continuity in $x$ and local Lipschitz continuity in $v$) there exists a positive constant $C$, depending only on the constants in assumptions $\rm{(H)}$, such that
\begin{eqnarray*}
|u(x,v,t)-u(x',v,t)|&\leq& C|x-x'|\\
|u(x,v,t)-u(x,v',t)|&\leq& C(1+|v|+|v'|)|v-v'|
\end{eqnarray*}
for every $x,x',v,v'\in{\mathbb R}^N$, $t\in[0,T]$.
\item (Local Lipschitz continuity in $t$) there exists a positive constant $C$, depending only on the constants in assumptions $\rm{(H)}$, such that
\begin{eqnarray*}
|u(x,v,t)-u(x,v,t')|&\leq& C(1+|v|^2)|t-t'| \qquad \forall x,v\in{\mathbb R}^N,\,t,t'\in[0,T].
\end{eqnarray*}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Fix $t\in [0,T)$.
Let $\alpha$ be an optimal control law for $u(x,v, t)$ i.e.,
\begin{equation}
\label{eq:HJ31}
u(x, v, t) = \int_t^T\frac12 |\alpha(s)|^2+\frac12 |v(s)|^2+\ell(x(s),v(s),s)\,ds+g(x(T),v(T)),
\end{equation}
where~$(x(\cdot),v(\cdot))$ obeys to the dynamics \eqref{eq:HJA}.
We consider the path $(y(\cdot), w(\cdot))$ starting from $(y, w)$, with control $\alpha(\cdot)$.
Hence, we obtain
\begin{eqnarray*}
y(s)&=&y+w(s-t)+\int_t^s\int_t^{\theta}\alpha(\tau) \,d\tau d\theta=y-x+x(s)+(w-v)(s-t),\\
w(s)&=&w+\int_t^s \alpha(\tau)\,d\tau=w-v+v(s).
\end{eqnarray*}
Note that
\begin{equation}\label{diff}
v(s)-w(s)=v-w,\ x(s)-y(s)=x-y+(v-w)(s-t).
\end{equation}
The definition of the value function \eqref{repr} and relation \eqref{eq:HJ31} imply
\begin{eqnarray*}
u(y, w, t)&\leq& \int_t^T\frac12 |\alpha(s)|^2+\frac12 |w(s)|^2+
\ell(y(s),w(s), s)\,ds+g(y(T), w(T))\\
&\leq& u(x, v, t) -\int_t^T\frac12 |v(s)|^2-\ell(x(s),v(s), s)\,ds-g(x(T),v(T))\\
&&\quad +\int_t^T\frac12 |w(s)|^2+\ell(y(s),w(s), s)\,ds+g(y(T), w(T))\\
&\leq&
u(x, v, t)+ \int_t^T L_{\ell}(|x(s)-y(s)|+|v(s)-w(s)|)\, ds \\
&&\quad+L_g(|x(T)-y(T)|+|v(T)-w(T)|)+ \int_t^T\frac12 (|w(s)|^2-|v(s)|^2)ds,
\end{eqnarray*}
where $L_\ell$ and $L_g$ denote respectively the Lipschitz constants of~$\ell$ and $g$ w.r.t. $(x,v)$.
Hence, by \eqref{diff},
\begin{eqnarray*}
&&\int_t^T\frac12 (|w(s)|^2-|v(s)|^2)ds= \int_t^T\frac12 (w-v)\cdot(w(s)+v(s))ds\\
&&\leq \frac12|w-v|\int_t^T\left|w+v+2\int_t^s\alpha(\tau)d\tau\right|ds\leq
C|w-v|(|w|+|v|+ 1),
\end{eqnarray*}
where the last inequality comes from \eqref{stimaL8} of Corollary \ref{coro:regularity} below.
Hence we obtain
\begin{eqnarray}\label{eq:9}
&&u(y, w, t)\leq
u(x, v, t)+ C|x-y|+K(v,w)|v-w|,
\end{eqnarray}
where $K(v,w)=C(|w|+|v|+ 1)$.
Exchanging the roles of $(x,v)$ and $(y,w)$, we get the first result.
\medskip
\item
We fix $(x,v)$.
From the concatenation property of optimal trajectories established in Lemma \ref{DPP}, if $\alpha$ is optimal for $u(x,v,t)$ and $(x(s), v(s))$
is the associated optimal trajectory, then
$$u(x,v,t)=u(x(s), v(s), s)+\int_t^s\frac12 |\alpha(r)|^2+\frac12 |v(r)|^2+\ell(x(r),v(r),r)\,dr$$
for any $s\in[t,T]$.
Then
\begin{eqnarray*}
&&|u(x,v,t)-u(x,v,s)|\leq |u(x,v,t)-u(x(s),v(s), s)|+
|u(x(s), v(s), s)-u(x,v,s)|\\
&&\leq
\int_t^s\frac12 |\alpha(r)|^2+\frac12 |v(r)|^2+|\ell(x(r),v(r),r)|\,dr+ L|x(s)-x|+L(v)|v(s)-v|,
\end{eqnarray*}
where the last two terms come from the Lipschitz continuity of $u$ w.r.t. $(x,v)$: $L$ is the Lipschitz constant of $u$ with respect to $x$ and $L(v)$ is
a local Lipschitz constant of $u$ with respect to $v$.
From \eqref{eq:HJA} and the bound~\eqref{stimaL8} in Corollary \ref{coro:regularity} below,
we get the bounds for $x(s)$, $v(s)$ and $\alpha$, i.e.
$|v(s)-v|+|x(s)-x|\leq C(1+|v|)|s-t|$, hence
\begin{equation}\label{L2}
|u(x,v,t)-u(x,v,s)|\leq C(1+|v|^2)|s-t|,
\end{equation}
which ends the proof.
\end{enumerate}
\end{proof}
\begin{proposition}\label{exuniq}
The value function defined in \eqref{repr} is the unique viscosity solution to~\eqref{HJ} with an at most quadratic growth in $(x,v)$. Moreover, there exists a positive constant~$C$ such that
\begin{equation}\label{eq:stimau}
-C \leq u(x,v,t)\leq C(1+\vert v\vert^2) \qquad \forall(x,v,t)\in{\mathbb R}^N\times {\mathbb R}^N\times[0,T].
\end{equation}
\end{proposition}
\begin{proof}
Let us first establish that the value function fulfills \eqref{eq:stimau} and solves~\eqref{HJ} in the viscosity sense. Actually, taking $\alpha \equiv 0$ in \eqref{eq:HJA}, we get $\eta(s)=v$ and $\xi(s)=x+v(s-t)$; then, thanks to the boundedness of $\ell$ in~\eqref{HOC}, the value function verifies~\eqref{eq:stimau}.
Moreover, by Lemma~\ref{L1}, it is continuous; hence, using the DPP in Lemma~\ref{DPP}-(iv), it is a viscosity solution to \eqref{HJ}.\\
The uniqueness part of the statement is an immediate consequence of the comparison principle stated in \cite[Theorem 2.1]{DLL}.
\end{proof}
The following lemma deals with the semi-concavity of $u(x,v, t)$ w.r.t. $(x,v)$:
\begin{lemma}\label{semi-concav}
Under Hypothesis $\rm{(H)}$, $u(x,v,t)$ is semi-concave w.r.t. $(x,v)$ with a linear modulus of semi-concavity, which depends only
on the constants in assumptions $\rm{(H)}$.
\end{lemma}
\begin{proof}
For any $(x,v)$, $(y, w)$ and $\lambda\in[0,1]$,
consider $x_{\lambda}:=\lambda x+(1-\lambda)y$, $v_{\lambda}:=\lambda v+(1-\lambda)w$.
Let $\alpha$ be an optimal control for~$u(x_{\lambda}, v_{\lambda}, t)$; hence, the associated trajectory is
\begin{equation}\label{icslanda}
x_{\lambda}(s)=x_{\lambda}+ v_{\lambda}(s-t) +\int_t^s\int_t^{\theta}\alpha(\tau) \,d\tau d\theta,\
v_{\lambda}(s)= v_{\lambda}+ \int_t^s \alpha(\tau)\,d\tau
\end{equation}
and
$$u(x_{\lambda}, v_{\lambda}, t)=\int_t^T\frac12 |\alpha(s)|^2+\frac12 |v_{\lambda}(s)|^2+
\ell(x_{\lambda}(s),v_{\lambda}(s),s)ds+ g(x_{\lambda}(T), v_{\lambda}(T)).$$
Let $(x(s), v(s))$ be the trajectory starting at $(x,v)$ at time $t$ with control $\alpha$ and $(y(s), w(s))$ the trajectory starting at $(y,w)$ at time $t$ still with control $\alpha$.
We have to estimate
\begin{displaymath}
\begin{split}
&\lambda u(x,v, t) +(1-\lambda)u(y,w, t)-u(x_{\lambda}, v_{\lambda}, t)\\
\le &\int_t^T\frac12\lambda |v(s)|^2+ (1-\lambda)\frac12 |w(s)|^2-\frac12 |v_{\lambda}(s)|^2 ds\\
&+\int_t^T\lambda \ell(x(s),v(s),s)+(1-\lambda) \ell(y(s),w(s),s)- \ell(x_{\lambda}(s),v_{\lambda}(s),s)ds\\
&+\lambda g(x(T), v(T))+(1-\lambda)g(y(T), w(T))-g(x_{\lambda}(T), v_{\lambda}(T)).
\end{split}
\end{displaymath}
Since
\begin{equation}\label{dintutte}
v(s)=v+ \int_t^s \alpha(\tau)\,d\tau,\ w(s)=w+ \int_t^s \alpha(\tau)\,d\tau,\ v_{\lambda}(s)=\lambda v+(1-\lambda)w +\int_t^s \alpha(\tau)\,d\tau,
\end{equation}
we get
\begin{equation}
\begin{split}
&\lambda \frac12|v(s)|^2+ (1-\lambda)\frac12 |w(s)|^2-\frac12 |v_{\lambda}(s)|^2\label{eqsemi} \\
=&(\lambda v+(1-\lambda)w-\lambda v-(1-\lambda)w)\int_t^s \alpha(\tau)\,d\tau+ \lambda \frac{|v|^2}{2}+ (1-\lambda)\frac{|w|^2}{2}-\frac12|\lambda v+ (1-\lambda)w|^2\\
=&
\frac12\lambda(1-\lambda)|v|^2+ \frac12\lambda(1-\lambda)|w|^2-\lambda(1-\lambda)v\cdot w=
\frac12\lambda(1-\lambda)|v-w|^2.
\end{split}
\end{equation}
Hence
\begin{equation}\label{unouno}
\int_t^T\frac12\lambda |v(s)|^2+ (1-\lambda)\frac12 |w(s)|^2-\frac12 |v_{\lambda}(s)|^2 ds=\frac12\lambda(1-\lambda)|v-w|^2(T-t).
\end{equation}
Now, we have to estimate the terms $\lambda \ell(x(s), v(s), s) +(1-\lambda)\ell(y(s), w(s), s)- \ell(x_{\lambda}(s),v_{\lambda}(s),s)$ and
$\lambda g(x(T), v(T))+(1-\lambda)g(y(T), w(T))-g(x_{\lambda}(T), v_{\lambda}(T))$.
We write the algebra for the second term, since the treatment of the first term is similar.
The Taylor expansion of $g$ centered at $(x_{\lambda}(T), v_{\lambda}(T))$ gives
\begin{equation}
\label{taypro}
g(x(T),v(T))= g(x_{\lambda}(T), v_{\lambda}(T))+ Dg(x_{\lambda}(T), v_{\lambda}(T))(x(T)-x_{\lambda}(T), v(T)-v_{\lambda}(T))+ R_1,
\end{equation}
where $R_1$ is the remainder term in the expansion, namely
\begin{equation}
\label{R1}
R_1=\frac 1 2 (x(T)-x_{\lambda}(T), v(T)-v_{\lambda}(T))D^2g(\xi_1,\eta_1)(x(T)-x_{\lambda}(T), v(T)-v_{\lambda}(T))^T,
\end{equation}
for suitable $\xi_1, \eta_1$.\\
From \eqref{icslanda} and \eqref{dintutte}, we get
\begin{equation}
\label{reltutte}
\begin{array}[c]{rcl}
x(s)-x_{\lambda}(s)&=& (1-\lambda)((x-y)+(v-w)(s-t)),\\
v(s)-v_{\lambda}(s) &=& (1-\lambda)(v-w),\\
y(s)-x_{\lambda}(s)&=& \lambda((y-x)+(w-v)(s-t)),\\
w(s)-v_{\lambda}(s) &=& \lambda(w-v),
\end{array}
\end{equation}
hence the error term can be written as
\begin{equation}\label{R12}
R_1= \frac 1 2(1-\lambda)^2(x-y+(v-w)(T-t), v-w)D^2g(\xi_1,\eta_1)(x-y+(v-w)(T-t), v-w)^T.
\end{equation}
Similarly
\begin{displaymath}
g(y(T), w(T))=
g(x_{\lambda}(T), v_{\lambda}(T))+ Dg(x_{\lambda}(T), v_{\lambda}(T))(y(T)-x_{\lambda}(T), w(T)-v_{\lambda}(T))+ R_2,
\end{displaymath}
where
\begin{displaymath}
\begin{split}
R_2=
& \frac 1 2 (y(T)-x_{\lambda}(T), w(T)-v_{\lambda}(T))D^2g(\xi_2,\eta_2)(y(T)-x_{\lambda}(T), w(T)-v_{\lambda}(T))^T\\
=&\frac 1 2 \lambda^2 (y-x+(w-v)(T-t), w-v)D^2g(\xi_2,\eta_2)(y-x+(w-v)(T-t), w-v)^T.
\end{split}
\end{displaymath}
At this point, taking into account that
from \eqref{reltutte},
\begin{equation}
\label{zeroDg}
\begin{split}
&\lambda Dg(x_{\lambda}(T), v_{\lambda}(T))(x(T)-x_{\lambda}(T), v(T)-v_{\lambda}(T))\\
&+(1-\lambda) Dg(x_{\lambda}(T), v_{\lambda}(T))(y(T)-x_{\lambda}(T), w(T)-v_{\lambda}(T))\\
=& Dg(x_{\lambda}(T), v_{\lambda}(T))(\lambda(x(T)-x_{\lambda}(T))\\ &\displaystyle +(1-\lambda)(y(T)-x_{\lambda}(T)),
\lambda(v(T)-v_{\lambda}(T))+(1-\lambda)(w(T)-v_{\lambda}(T)))\\
=&0,
\end{split}
\end{equation}
we obtain that
\begin{equation}\label{gg}
\begin{array}[c]{ll}
&\displaystyle \lambda g(x(T), v(T))+(1-\lambda)g(y(T), w(T))-g(x_{\lambda}(T), v_{\lambda}(T))\\ =& \lambda R_1+
(1-\lambda) R_2\\
\le &\displaystyle (1-\lambda)\lambda C_T\|D^2g\|_{\infty}(|x-y|^2+|v-w|^2).
\end{array}
\end{equation}
Hence from \eqref{unouno}, \eqref{zeroDg}, \eqref{gg} we get
\begin{displaymath}
\begin{split}
&\lambda u(x,v, t) +(1-\lambda)u(y,w, t)-u(x_{\lambda}, v_{\lambda}, t)\\
\le &\frac{\lambda(1-\lambda)}2 |v-w|^2(T-t)+
C_T (1-\lambda)\lambda \left(\|D^2g\|_{\infty} +\|D^2\ell\|_{\infty}\right) \left(|x-y|^2+|v-w|^2\right).
\end{split}
\end{displaymath}
We obtain that $u$ is semi-concave in $(x,v)$ with a linear modulus of semi-concavity.
\end{proof}
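For the reader who wishes to double-check the algebraic identity \eqref{eqsemi}, the following snippet (a sketch of ours, not part of the argument) verifies it symbolically in one dimension; the $N$-dimensional case follows componentwise, since only squares and inner products are involved.
\begin{verbatim}
import sympy as sp

lam, v, w = sp.symbols('lam v w', real=True)
# lam/2 |v|^2 + (1-lam)/2 |w|^2 - 1/2 |lam v + (1-lam) w|^2
lhs = sp.Rational(1, 2) * (lam * v**2 + (1 - lam) * w**2
                           - (lam * v + (1 - lam) * w)**2)
rhs = sp.Rational(1, 2) * lam * (1 - lam) * (v - w)**2
assert sp.simplify(lhs - rhs) == 0  # identity holds for all lam, v, w
\end{verbatim}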
Pontryagin's maximum principle yields the following necessary optimality conditions:
\begin{proposition}[Necessary conditions for optimality]\label{prop:pontriagin}
\label{MPP}
Let $(x^*, v^*, \alpha^*)$ be optimal for~$u(x,v,t)$ in~\eqref{repr}. There exists an arc $p=(p_x,p_v)\in AC([t,T];{\mathbb R}^N\times{\mathbb R}^N)$, hereafter called the costate, such that
\begin{enumerate}
\item $(\alpha^*, x^*, v^*, p)$ satisfies
the {\it adjoint equations}: for a.e. $s\in[t,T]$,
\begin{eqnarray}
&&p_x' =D_x\ell( x^*, v^*, s),\label{tag:adjoint1}\\
&&p_v'=-p_x+v^*+D_v\ell( x^*, v^*, s),\label{tag:adjoint2}
\end{eqnarray}
the {\it transversality condition}
\begin{equation}\label{tag:transversality}
p(T)=-D g(x^*(T), v^*(T)),
\end{equation}
together with the {\it maximum condition}: for almost all $s\in [t,T]$,
\begin{multline}\label{tag:max}
\max_{\alpha\in{\mathbb R}^N}\left\{p_x\cdot v^*+p_v\cdot\alpha-\dfrac{|\alpha|^2}2-\dfrac{|v^*|^2}2\right\}=
p_x\cdot v^*+p_v\cdot\alpha^*-\dfrac{|\alpha^*|^2}2-\dfrac{|v^*|^2}2.\end{multline}
\item The optimal control $\alpha^*$ is given by
\begin{equation}
\alpha^*=p_v, \text{ a.e in }[t,T].\label{tag:alpha*}
\end{equation}
\item The triple~$(x^*, v^*, p)$ satisfies the system of differential equations: for a.e. $s\in[t,T]$
\begin{eqnarray}
&&x'= v,\label{tag:1} \\
&&v'= p_v, \label{tag:2}\\
&&p_x'= D_x\ell(x,v, s),\label{tag:3}\\
&&p_v'=-p_x+v+D_v\ell(x,v, s),\label{tag:4}
\end{eqnarray}
with the mixed boundary conditions $x^*(t)= x$, $v^*(t)= v$, $p(T)=-D g(x^*(T), v^*(T))$.
\end{enumerate}
\end{proposition}
\begin{proof} 1. Hypothesis~\eqref{HOC} ensures that our control problem satisfies the assumptions of \cite[Hypothesis 22.16]{Cla},
so we can invoke~\cite[Theorem 22.17]{Cla} on the maximum principle for problems with unbounded controls.
Moreover, since there is no constraint on the state variable at $T$, the same arguments as in~\cite[Corollary 22.3]{Cla}
ensure that the necessary conditions hold in normal form.
2. The maximum condition \eqref{tag:max} implies that
\[D_{\alpha}\left(p_x\cdot v^*+p_v\cdot \alpha-\dfrac{|\alpha|^2}2-\dfrac{|v^*|^2}2-\ell(x^*, v^*, s)\right)_{\alpha=\alpha^*}=0\quad \text{for a.e. }s\in [t, T]\]
from which we get \eqref{tag:alpha*}.
3. Conditions \eqref{tag:1} -- \eqref{tag:2} follow directly from \eqref{eq:HJA} and \eqref{tag:alpha*}. Conditions \eqref{tag:3} and \eqref{tag:4} coincide with \eqref{tag:adjoint1}, \eqref{tag:adjoint2}.
\end{proof}
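To illustrate how the system \eqref{tag:1}--\eqref{tag:4} can be used in practice, here is a minimal shooting-method sketch in Python. The data are hypothetical choices of ours ($N=1$, $\ell\equiv 0$, $g(x,v)=x^2/2$, so $Dg=(x,0)$); the unknown initial costate $p(t)$ is adjusted until the transversality condition \eqref{tag:transversality} holds.
\begin{verbatim}
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

t0, T = 0.0, 1.0          # time horizon
x0, v0 = 1.0, 0.0         # initial state (x, v)

def rhs(s, y):
    x, v, px, pv = y
    # (tag:1)-(tag:4) with ell = 0: x' = v, v' = p_v, p_x' = 0,
    # p_v' = -p_x + v
    return [v, pv, 0.0, -px + v]

def shooting_residual(p_init):
    sol = solve_ivp(rhs, (t0, T), [x0, v0, p_init[0], p_init[1]],
                    rtol=1e-10, atol=1e-12)
    xT, vT, pxT, pvT = sol.y[:, -1]
    # transversality (tag:transversality): p(T) = -Dg = (-x(T), 0)
    return [pxT + xT, pvT]

p_star = fsolve(shooting_residual, [0.0, 0.0])
print("initial costate p(t0) =", p_star)  # alpha*(t0) = p_v(t0)
\end{verbatim}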
\begin{corollary}[Feedback control and regularity]\label{coro:regularity}
Let $(x^*, v^*, \alpha^*)$ be optimal for $u(x, v, t)$ and $p=(p_x,p_v)$ be the related costate as in Proposition~\ref{prop:pontriagin}. Then:
\begin{enumerate}
\item The costate $p$ is uniquely expressed in terms of $x^*, v^*$ for every $s\in [t, T]$ by
\begin{equation}
\!\begin{cases}\label{tag:p}
p_x(s)\!\!&\!\!\!=-D_xg(x^*(T), v^*(T))-\!\!\displaystyle\int_s^T \!\!D_x\ell(x^*(\tau), v^*(\tau),\tau)\,d\tau,\\
p_v(s)\!\!&\!\!\!=-D_{v}g(x^*(T), v^*(T))-\displaystyle\int_s^T D_{v}\ell(x^*(\tau),v^*(\tau), \tau)+v^*(\tau)-p_x(\tau)\,d\tau.\\
\end{cases}
\end{equation}
\item The optimal control
$\alpha^*$ is a feedback control {\rm (}i.e., a function of $x^*, v^*${\rm )}, uniquely expressed in terms of $x^*, v^*$ for a.e. $s\in [t, T]$ by
\begin{equation}\label{tag:alpha}
\alpha^*(s)=p_v(s).
\end{equation}
\item The optimal trajectory $(x^*,v^*)$ and the optimal control $\alpha^*$ are of class $C^1$.
In particular the equalities \eqref{tag:alpha*} -- \eqref{tag:alpha} do hold for every $s\in [t, T]$.
Moreover
\begin{equation}\label{stimaL8}
\begin{array}[c]{rcl}
\|v^*\|_{C^1}+\|\alpha^*\|_{C^1}&\leq& C(1+|v|),\\
\|x^*\|_{C^1}&\leq& |x|+C(1+|v|).
\end{array}
\end{equation}
\item Assume that, for some $k\in\mathbb N$, $D_x\ell(x,v, s)$, $D_v\ell(x,v, s)$ are of class $C^k$.
Then $(x^*, v^*)$, $p$ and $\alpha^*$ are of class $C^{k+1}$.
\end{enumerate}
\end{corollary}
\begin{proof}Point~$1$ is obtained by integrating \eqref{tag:3}--\eqref{tag:4} and taking into account the final time condition $p(T)=-D g(x^*(T),v^*(T))$.
Point~$2$ follows from \eqref{tag:alpha*}.
Proof of point~$3$. Since $x^*, v^*$ are continuous by the definition of admissible trajectories in \eqref{eq:constraint}, the continuity of $\alpha^*$ follows from \eqref{tag:p} and~\eqref{tag:alpha}. Then (\ref{eq:HJA}) implies $v^*\in C^1$ and also $x^*\in C^1$.
Relations~\eqref{tag:p},~\eqref{tag:alpha} (and the regularity of $\ell$) imply, respectively, that $p$ and $\alpha^*$ are of class $C^1$. By \eqref{tag:2}, we get that $v^*$ is $C^2$. Let us now prove the bounds \eqref{stimaL8}. To this end, we observe that equations \eqref{tag:2} and \eqref{tag:4} entail
\begin{equation*}
(v^*)''(\tau)-v^*(\tau)=-p_x(\tau)+D_v\ell(x^*(\tau),v^*(\tau),\tau)
\end{equation*}
where, by \eqref{tag:p} and ($H2$), the right hand side is bounded uniformly in $x$, $v$ and $\tau\in [t,T]$. Moreover,
\begin{itemize}
\item $(v^*)(t)=v$
\item by \eqref{tag:2}, \eqref{tag:transversality} and the regularity of $g$, $(v^*)'(T)$ is bounded uniformly in $x$ and $v$.
\end{itemize}
Hence, solving the above ordinary differential equation by the method of variation of constants, with the values of $v^*(t)$ and of $(v^*)'(T)$ prescribed, we get the estimate for $v^*$ and for $\alpha^*$. Integrating $v^*$, we get the estimate for $x^*$.
Proof of point~$4$. The relations~\eqref{tag:p} and the $C^1$-regularity of $x^*, v^*$ and $p$ imply that, actually, $p\in C^2$. Therefore, \eqref{tag:alpha} gives the $C^2$-regularity of $\alpha^*$ and, finally, \eqref{dyn} yields the $C^2$-regularity of $x^*, v^*$.
Further regularity of $x^*, v^*$, $\alpha^*$ and $p$ follows by a standard bootstrap inductive argument.
\end{proof}
\begin{remark}\label{contrcont}
Taking advantage of Corollary~\ref{coro:regularity}-(3), we will always consider the representation of the optimal control~$\alpha^*$ which belongs to~$C^1$.
\end{remark}
Corollary~\ref{th:nobifurc} that follows implies that the optimal trajectories for $u(x,v, t)$ do not bifurcate at any time $r>t$.
\begin{corollary}\label{th:nobifurc}
Under Hypothesis~\eqref{HOC}, let $(x^*, v^*)$ be an optimal trajectory for $u(x,v, t)$.
For every $t< r< T$, there is no optimal trajectory for $u(x^*(r), v^*(r), r)$ other than the restriction of $(x^*, v^*)$ to $[r,T]$.
\end{corollary}
\begin{proof}
Let $r\in (t, T)$ and $(y^*, w^*)$ be an optimal trajectory for $u(x^*(r), v^*(r), r)$.
Lemma~\ref{DPP} ensures that $(z^*, \nu^*)$, the concatenation of $(x^*, v^*)$ with $(y^*, w^*)$ at $r$ is an optimal trajectory
for $u(x,v, t)$. Let $p:=(p_x, p_v), q:=(q_x, q_v)$ be the costates corresponding respectively to $(x^*, v^*)$ and to $(z^*, \nu^*)$.
Both $(x^*, v^*, p)$ and $(z^*, \nu^*, q)$ satisfy \eqref{tag:1} -- \eqref{tag:4} on $[t, T]$.
Now, Corollary~\ref{coro:regularity} shows that $(x^*, v^*)$ and $(z^*, \nu^*)$ are of class $C^1$.
Since $x^*=z^*$, $v^*=\nu^*$ on $[t, r]$, we choose $\tau$ such that $t<\tau<r$.
From \eqref{tag:2}, we get \[p_v(\tau)=q_v(\tau).\]
Moreover, from \eqref{tag:2} and \eqref{tag:4}, we also get that
\[p_x(\tau)=q_x(\tau).\]
Therefore, both $(x^*, v^*, p)$ and $(z^*, \nu^*, q)$ are solutions to the same Cauchy problem on $[t, T]$
with the first order differential system \eqref{tag:1}-\eqref{tag:4} and Cauchy data at $\tau$.
The regularity assumptions on $\ell, g$ and the Cauchy-Lipschitz theorem guarantee the uniqueness of the solution.
Thus $x^*=z^*$, $v^*=\nu^*$ on $[\tau,T]$, from which we obtain the desired identities
$x^*=y^*$ and $v^*=w^*$ on $[r, T]$.
\end{proof}
\begin{definition}\label{def:cal_U}
For any~$(x,v,t)\in{\mathbb R}^{2N}\times[0,T]$, let ${\mathcal U}(x,v,t)$ denote the set of optimal controls for the value function~$u(x,v,t)$ defined in~\eqref{repr}.
\end{definition}
\begin{remark}\label{contrcont2}
Lemma~\ref{DPP}-(i) and Remark~\ref{contrcont} ensure that $\emptyset\ne {\mathcal U}(x,v,t)\subset C^1([t,T];{\mathbb R}^N)$.
\end{remark}
\begin{lemma}\label{4.9} The following properties hold:
\begin{enumerate}
\item The function $u(x,\cdot,t)$ is differentiable at $v$
if and only if the set $\{\alpha(t):\, \alpha\in {\mathcal{U}}(x,v, t)\}$ is a singleton.
In this case, $D_vu(x,v, t)=-\alpha(t)$ for every $\alpha\in {\mathcal{U}}(x,v, t)$.
\item In particular, if $\mathcal U(x,v, t)$ is a singleton, then, calling $(x(s),v(s))$
the optimal trajectory associated to the unique optimal control,
$D_vu(x(s),v(s), s)$ exists for any $s\in [t,T]$.
\item If $ u(\cdot,\cdot,t) $ is differentiable at $(x,v)$, then $\mathcal U(x,v, t)$ is a singleton.
\end{enumerate}\end{lemma}
\begin{proof}
1. We prove that if $D_vu(x,v,t)$ exists, then all $\alpha(\cdot)\in {\mathcal{U}}(x,v, t)$ take the same value $\alpha(t)$ at $t$
and $D_vu(x,v,t)=-\alpha(t)$.
If $\alpha(\cdot)\in {\mathcal{U}}(x,v, t)$, calling $(x(\cdot), v(\cdot))$ the corresponding optimal trajectory, then
$$u(x,v,t)=
\int_t^T\frac12 |\alpha(s)|^2+\frac12 |v(s)|^2+
\ell(x(s),v(s), s)\,ds+g(x(T), v(T)),$$
and
$(x(\cdot), v(\cdot))$ and $\alpha(\cdot)$ satisfy the necessary conditions for optimality proved in Proposition
\ref{prop:pontriagin}.
Take $h=(h_1,h_2)\in{\mathbb R}^{2N}$ and consider the solution $(y(\cdot), w(\cdot))$ of (\ref{eq:HJA}) with initial condition $(y(t),w(t))=(x+h_1, v+h_2)$ and control~$\alpha$, namely
\begin{eqnarray*}
y(s)&=&x+h_1+(v+h_2)(s-t)+\int_t^s\,\int_t^{\theta}\alpha(\tau)d\tau d\theta=x(s)+h_1+h_2(s-t),\\
w(s)&=&v+h_2+\int_t^s\,\alpha(\tau)d\tau=v(s)+h_2.
\end{eqnarray*}
Hence,
\begin{equation}
\label{eq:1}
\begin{array}[c]{ll}
& u(x+h_1,v+h_2,t)-u(x,v,t)\\
\le & \displaystyle \int_t^T\frac12 |w(s)|^2-\frac12 |v(s)|^2+
\ell(y(s),w(s), s)-\ell(x(s),v(s), s)\,ds\\ &+g(y(T), w(T))-g(x(T), v(T))\\
=& \displaystyle \int_t^T\frac12 |v(s)+h_2|^2-\frac12 |v(s)|^2+
\ell(x(s)+h_1+h_2(s-t),v(s)+h_2, s)-\ell(x(s),v(s), s)\,ds \\
&\displaystyle+
g(x(T)+h_1+h_2(T-t), v(T)+h_2)-g(x(T), v(T))\\
=& \displaystyle \int_t^T\frac12 |h_2|^2+h_2\cdot v(s)+
\ell(x(s)+h_1+h_2(s-t),v(s)+h_2, s)-\ell(x(s),v(s), s)\,ds\\
&\displaystyle+
g(x(T)+h_1+h_2(T-t), v(T)+h_2)-g(x(T), v(T)).
\end{array}
\end{equation}
The arbitrariness of the sign of the components of $(h_1,h_2)$ and the differentiability of $u$ w.r.t. $v$ yield
\begin{displaymath}
\begin{array}[c]{ll}
D_vu(x, v, t)=&\displaystyle \int_t^Tv(s)ds+\int_t^T
D_x\ell(x(s),v(s), s)(s-t)+D_v\ell(x(s),v(s),s)\,ds\\
&+D_xg(x(T), v(T))(T-t)+D_vg(x(T), v(T)).
\end{array}
\end{displaymath}
By \eqref{tag:3} and~\eqref{tag:transversality}, we obtain
\begin{displaymath}
\begin{array}[c]{rcl}
\displaystyle \int_t^T D_x\ell(x(s),v(s),s)(s-t)ds&=& \displaystyle \int_t^Tp_x'(s)(s-t)ds= p_x(T)(T-t)- \int_t^Tp_x(s)ds\\ &= & \displaystyle
-D_xg(x(T), v(T))(T-t)-\int_t^Tp_x(s)ds.
\end{array}
\end{displaymath}
Hence
\begin{displaymath}
\begin{array}[c]{rcl}
D_vu(x,v,t)&=&\displaystyle \int_t^T(v(s)-p_x(s)+D_v\ell(x(s),v(s),s))ds+ D_vg(x(T), v(T))\\ &=& \displaystyle
\int_t^Tp_v'(s)ds+D_vg(x(T), v(T))\\ &=&-p_v(t) =-\alpha(t),
\end{array}
\end{displaymath}
where the last two equalities are due to~\eqref{tag:4}, \eqref{tag:alpha*} and the terminal condition for $p$.
This uniquely determines the value of $\alpha(\cdot)$ at time~$t$.\\
Conversely we prove that, if all $\alpha(\cdot)\in {\mathcal{U}}(x,v,t)$ take the same value
$\alpha(t)$ at $t$, then $D_vu(x,v, t)$ exists. Fix $x$ and $t$.
From the semi-concavity of $u(x,\cdot,t)$, the differentiability of $u(x, \cdot, t)$ at $v$ will follow from the fact that $D_v^*u(x,v,t)$ is a singleton (see \cite[Proposition 3.3.4]{CS}).
Recall that the set of reachable gradients of $u(x,\cdot,t)$ is defined by
\begin{displaymath}
D_v^*u(x,v,t)=\left\{
\chi\in {\mathbb R}^{N}: \exists (v_n)_{n\in {\mathbb N}} \hbox{ with} \left| \begin{array}[c]{ll} & \displaystyle \lim_{n\to\infty} v_n= v, \\
&u(x,\cdot, t) \hbox{ is differentiable at } v_n ,
\\ & \displaystyle \lim_{n\to \infty} D_vu(x,v_n,t) =\chi.
\end{array}\right.
\right \} .
\end{displaymath}
Take $\chi\in D_v^*u(x,v,t)$. By definition of $D_v^*u(x,v,t)$ there exist sequences
$\{v_n\}$, $\{\chi_n=D_vu(x, v_n, t)\}$ such that
\begin{equation}\label{1DGv}
v_n\to v\quad\hbox{and}\quad \chi_n\to \chi.
\end{equation}
Consider $\alpha_n\in\mathcal U(x, v_n, t)$; by the first part of the proof, we know that
\begin{equation}\label{2DG}
-\alpha_n(t)=D_vu(x,v_n, t)=\chi_n.
\end{equation}
From estimate \eqref{stimaL8} in Corollary \ref{coro:regularity}, we see that
\begin{equation}\label{7DG}
\|\alpha_{n}^{\prime}\|_{\infty} \leq C(1+|v_n|)\leq C,\ \text {for any } n.
\end{equation}
Hence, from the
Ascoli-Arzel{\`a} theorem, we deduce that, after extracting a subsequence, $\alpha_n$ converges uniformly to some $\alpha\in C([t,T];{\mathbb R}^{N})$.
In particular,
calling $(x_n(\cdot), v_n(\cdot))$ the trajectory associated to $\alpha_n$ starting from $(x, v_n)$, namely
\begin{equation*}
x_{n}(s)=x+v_{n}(s-t)+\int_t^s \int_t^{\theta}\,\alpha_{n}(\tau)d\tau d\theta,\quad\hbox{and}\quad
v_{n}(s)=v_{n}+\int_t^s\,\alpha_{n}(\tau)d\tau,
\end{equation*}
we get:
\begin{eqnarray*}
&&x_{n}(s)\to x(s)=x+v(s-t)+ \int_t^s\int_t^{\theta}\,\alpha(\tau)d\tau d\theta,\ \text{uniformly in } [t, T],\\
&&v_{n}(s)\to v(s)= v+\int_t^s\,\alpha(\tau)d\tau\ \text{uniformly in }[t, T].
\end{eqnarray*}
Moreover, by classical stability arguments, $\alpha$ is optimal, i.e. $\alpha\in\mathcal U(x,v, t)$.
The uniform convergence of the $\alpha_n$ yields in particular that
$\alpha_n(t)\to \alpha(t)$, where $\alpha(t)$ is uniquely determined by assumption. By~(\ref{1DGv}) and \eqref{2DG}, we get that
$\chi_n\to \chi=-\alpha(t)$. This implies that $D_v^*u(x, v, t)$ is a singleton, hence $D_vu(x,v, t)$ exists. Going back to the first part of the proof, we see that $D_vu(x,v, t)=-\alpha(t)$.
\medskip
2. If $\mathcal U(x,v, t)=\{\alpha(\cdot)\}$, then, for any $s\in[t,T]$, the set $\{\beta(s):\,\beta\in \mathcal U(x(s),v(s), s)\}$ reduces to the singleton $\{\alpha(s)\}$.
Indeed, if $\beta\in \mathcal U(x(s),v(s), s)$, then the concatenation $\gamma$ of $\alpha$ and $\beta$ at time $s$ (see Lemma \ref{DPP}) is also optimal, i.e. $\gamma\in\mathcal U(x,v, t)=\{\alpha(\cdot)\}$, whence $\beta(s)=\gamma(s)=\alpha(s)$.\\
Then from point 1 applied at $(x(s),v(s))$ with $t=s$, we deduce that $D_vu(x(s),v(s), s)$ exists.
\medskip
3. From point 1, we know that for any $\alpha(\cdot)\in {\mathcal{U}}(x,v, t)$, $\alpha(t)$ is unique and coincides with~$-D_vu(x,v,t)$. Hence, relation~\eqref{tag:alpha*} ensures $p_v(t)=-D_vu(x,v,t)$.
On the other hand, note that, since $D_xu(x, v, t)$ exists, we get from (\ref{eq:1}) that
\begin{eqnarray*}
D_xu(x, v, t)&=&\int_t^T
D_x\ell(x(s),v(s),s)ds + D_xg(x(T), v(T))\\
&=&\int_t^Tp_x'(s)ds+ D_xg(x(T), v(T))=-p_x(t);
\end{eqnarray*}
thus, $p_x(t)$ and $p_v(t)$ are both uniquely determined.
Hence \eqref{tag:1}-\eqref{tag:4} is a system of differential equations with initial conditions $x(t), v(t)$, $p_x(t)$ and $p_v(t)$,
which admits a unique solution $(x(\cdot), v(\cdot), p_x(\cdot),p_v(\cdot))$ by the Cauchy-Lipschitz theorem,
and $(x(\cdot), v(\cdot))$ is the unique optimal trajectory starting from $(x,v)$, associated to the unique optimal control law $\alpha(\cdot)=p_v(\cdot)$.
\end{proof}
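As a sanity check of the identity $D_vu(x,v,t)=-\alpha(t)=-p_v(t)$, one can combine the shooting sketch given after Proposition~\ref{prop:pontriagin} with a finite-difference quotient of the value function. The snippet below (same illustrative data of ours: $N=1$, $\ell\equiv0$, $g(x,v)=x^2/2$) is again only a sketch under those assumptions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.optimize import fsolve

T = 1.0
g = lambda x, v: 0.5 * x**2      # illustrative terminal cost

def rhs(s, y):                   # y = (x, v, p_x, p_v)
    return [y[1], y[3], 0.0, -y[2] + y[1]]

def value_and_costate(x0, v0, t=0.0):
    """Shooting for the Pontryagin BVP; returns (u(x0,v0,t), p_v(t))."""
    def resid(p):
        sol = solve_ivp(rhs, (t, T), [x0, v0, p[0], p[1]], rtol=1e-10)
        xT, vT, pxT, pvT = sol.y[:, -1]
        return [pxT + xT, pvT]   # p(T) = -Dg = (-x(T), 0)
    p = fsolve(resid, [0.0, 0.0])
    sol = solve_ivp(rhs, (t, T), [x0, v0, p[0], p[1]],
                    dense_output=True, rtol=1e-10)
    s = np.linspace(t, T, 2001)
    x, v, px, pv = sol.sol(s)
    # J = int (alpha^2 + v^2)/2 ds + g(x(T), v(T)), with alpha = p_v
    J = trapezoid(0.5 * pv**2 + 0.5 * v**2, s) + g(x[-1], v[-1])
    return J, p[1]

h = 1e-5
u_plus, _ = value_and_costate(1.0, 0.5 + h)
u_minus, _ = value_and_costate(1.0, 0.5 - h)
_, pv0 = value_and_costate(1.0, 0.5)
print((u_plus - u_minus) / (2 * h), -pv0)  # should agree numerically
\end{verbatim}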
\begin{lemma}[Optimal synthesis]\label{B}
Consider $\xi\in {\mathbb R}^N$ and $\eta\in {\mathbb R}^N$.
\begin{enumerate}
\item Let $x\in C^1([t,T]; {\mathbb R}^N)$, $v\in {\rm{AC}}([t,T];{\mathbb R}^N)$ be such that
\begin{equation}
\label{eq:2}
x(t)=\xi, \quad \hbox{and}\quad v(t)=\eta,
\end{equation}
and for almost every $s\in (t,T)$,
\begin{equation}
\label{eq:3}
u(x(s),\cdot,s) \hbox{ is differentiable at } v(s),
\end{equation}
and
\begin{equation}\label{OS}
\begin{array}[c]{rcl}
x'(s)&=&v(s),\\
v'(s)&=& -D_vu(x(s),v(s), s),
\end{array}
\end{equation}
where $u$ is the solution of \eqref{HJ}. Under these assumptions, the control law $\alpha(s)=v'(s)=-D_vu(x(s),v(s), s)$ is optimal for $u(\xi,\eta, t)$.
\item If $u(\cdot, \cdot, t)$ is differentiable at $(\xi,\eta)$,
then problem (\ref{eq:2}), (\ref{OS}) has a unique solution corresponding to the optimal trajectory.
\end{enumerate}
\end{lemma}
\begin{proof}
We adapt the arguments of \cite[Lemma 4.11]{C}.
Fix $(t,\xi,\eta)\in(0,T)\times {\mathbb R}^{2N}$. Let $x\in C^1([t,T]; {\mathbb R}^N)$, $v\in {\rm{AC}}([t,T];{\mathbb R}^N)$ be as in the statement.
Note that, from \eqref{OS}, since $|D_vu|$ grows at most linearly in $v$ (see Lemma~\ref{L1}-(1)), Gronwall's Lemma ensures that $v(\cdot)$ is bounded in $(t,T)$; consequently, again by \eqref{OS} and Lemma~\ref{L1}-(1), $v(\cdot)$ is Lipschitz continuous.
Therefore, from Lemma \ref{L1}, the function $s\mapsto u(x(s),v(s),s)$ is Lipschitz continuous as well. Hence, for almost every $s\in [t,T]$,
\begin{itemize}
\item (\ref{eq:3}) and (\ref{OS}) hold,
\item the function $u(x(\cdot), v(\cdot),\cdot)$ admits a derivative at $s$.
\end{itemize}
Fix such an $s$. Lebourg's Theorem for Lipschitz functions (see \cite[Thm 2.3.7]{Cla90} and \cite[Thm 2.5.1]{Cla90}) ensures that, for any sufficiently small number $h$, there exists $(y_h, w_h, s_h)$ in the open line segment $( (x(s), v(s),s), (x(s+h), v(s+h), s+h))$ and $(\chi^h_x, \chi^h_v,\chi^h_t) \in {\rm{conv}} \left(D_{x,v,t}^*u(y_h, w_h, s_h)\right)$ such that
\begin{equation}\label{31}
u(x(s+h),v(s+h), s+h)-u(x(s),v(s), s)= \chi^h_x\cdot (x(s+h)-x(s))+ \chi^h_v\cdot (v(s+h)-v(s)) +\chi^h_t h.
\end{equation}
Here, $\rm{conv}(A)$ stands for the convex hull of a set $A$ while $D_{x,v,t}^*u(y_h, w_h, s_h)$ stands for the reachable gradient at $(y_h, w_h, s_h)$ with respect to the variables $x$, $v$ and $t$ (see~\cite[eq.~(4.4)]{BCD}). \\
By Carath{\'e}odory's theorem (see \cite[Thm A.1.6]{CS}), there exist $(\lambda^{h,i}, \chi^{h,i}_x, \chi^{h,i}_v, \chi^{h,i}_t)_{i=1,\dots,2N+2}$ such that $\lambda^{h,i}\geq0$, $\sum_{i=1}^{2N+2}\lambda^{h,i}=1$,
$(\chi^{h,i}_x, \chi^{h,i}_v, \chi^{h,i}_t)\in D_{x,v, t}^*u(y_h,w_h, s_h)$ and $(\chi^h_x, \chi^h_v, \chi^h_t) = \sum_{i=1}^{2N+2}\lambda^{h,i}(\chi^{h,i}_x, \chi^{h,i}_v,\chi^{h,i}_t)$.
We claim that, for any $i=1,\dots, 2N+2$, there holds
\[\lim_{h\to 0}\chi^{h,i}_v=D_vu(x(s),v(s),s).
\]
Indeed, let $\chi^{i}_v$ be any cluster point of $\{\chi^{h,i}_v\}_h$. After a diagonal extraction, there exist $(x_n,v_n,t_n)$ such that $u$ is differentiable at $(x_n,v_n,t_n)$, $(x_n,v_n,t_n)\to (x(s),v(s),s)$ and $D_vu(x_n,v_n,t_n) \to \chi^{i}_v$ as $n\to\infty$.
By \cite[Lemma 4.6]{C} (applied to $z_n(\cdot):=u(x_n,\cdot,t_n)$), we have
\begin{equation*}
\chi^{i}_v=\lim_{n}D_vu(x_n,v_n,t_n)\in D^+ z(v(s))
\end{equation*}
where $z(\cdot):=u(x(s),\cdot,s)$. On the other hand, assumption~\eqref{eq:3} ensures that~$z$ is differentiable at $v(s)$; hence, by \cite[Proposition 3.1.5-(c)]{CS}, we get $\chi^{i}_v=D_v u(x(s),v(s),s)$ and our claim is proved. In particular, we deduce that $\chi^{h}_v$ converge to $D_vu (x(s),v(s), s)$ as $h\to 0$.\\
On the other hand, since $u$ is a viscosity solution to equation~\eqref{HJ} and $(\chi^{h,i}_x, \chi^{h,i}_v, \chi^{h,i}_t)\in D_{x,v, t}^*u(y_h,w_h, s_h)$, we obtain that for all $i=1,\dots, 2N+2$,
\[
- \chi^{h,i}_t +\frac12\left|\chi^{h,i}_{v}\right|^2-\frac12\left|w_h\right|^2-w_h\cdot\chi^{h,i}_{x}=\ell(y_h, w_h, s_h).
\]
Therefore,
$\chi^{h}_t + w_h\cdot \chi^{h}_x = \frac12 \sum_{i=1}^{2N+2}\lambda^{h,i}\left|\chi^{h,i}_{v}\right|^2
-\frac12\left|w_h\right|^2- \ell(y_h,w_h,s_h)$ converges to\\
$\frac12 |D_v u(x(s),v(s), s)|^2
- \frac12|v(s)|^2 -\ell(x(s),v(s), s)$ as $h\to 0$.
\\
Then dividing (\ref{31}) by $h$ and letting $h$ tend to $0$, we get that
\begin{displaymath}
\begin{split}
& \frac {d}{ds} \left(u(x(s),v(s),s)\right)\\ =
& D_vu (x(s),v(s), s)\cdot v'(s)
+ \frac 1 2 \left|D_vu (x(s),v(s), s)\right| ^2 - \frac 1 2 |v(s)|^2 -\ell(x(s),v(s),s).
\end{split}
\end{displaymath}
Recalling (\ref{OS}), we get
\begin{displaymath}
\frac {d}{ds} \left(u(x(s),v(s),s)\right)= - \frac 1 2 \left|D_vu (x(s),v(s), s)\right| ^2 - \frac 1 2 |v(s)|^2 -\ell(x(s),v(s),s),
\end{displaymath}
or, equivalently,
\begin{displaymath}
\frac {d}{ds} \left(u(x(s),v(s),s)\right)= - \frac 1 2 \left|v'(s)\right| ^2 - \frac 1 2 |v(s)|^2 -\ell(x(s),v(s),s),
\end{displaymath}
which holds for almost every $s$.
Integrating this equality on $[t,T]$ and taking into account the terminal condition in~\eqref{HJ}, we obtain
\begin{displaymath}
u(\xi,\eta, t)=\int_t^T\frac12|v'(s)|^2+ \frac12|v(s)|^2+ \ell(x(s),v(s), s)\, ds +g(x(T),v(T)).
\end{displaymath}
Therefore, the control law $\alpha(s)= v'(s)= -D_vu (x(s),v(s),s)$ is optimal. This completes the proof of the first statement.
\medskip
The second statement is a direct consequence of Lemma \ref{4.9}.
\end{proof}
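Before moving on, let us illustrate the optimal synthesis \eqref{OS} with a schematic forward-Euler integration. The gradient $D_vu$ below is a made-up placeholder (in practice it comes from solving \eqref{HJ}), so the snippet only shows the structure of the feedback loop and is not a definitive implementation.
\begin{verbatim}
import numpy as np

def integrate_feedback(Dvu, x0, v0, t, T, n_steps=1000):
    """Forward Euler for x' = v, v' = -D_v u(x, v, s),
    starting from (x0, v0) at time t."""
    h = (T - t) / n_steps
    x, v, s = np.asarray(x0, float), np.asarray(v0, float), t
    traj = [(x.copy(), v.copy())]
    for _ in range(n_steps):
        alpha = -Dvu(x, v, s)      # feedback control alpha(s) = v'(s)
        x, v, s = x + h * v, v + h * alpha, s + h
        traj.append((x.copy(), v.copy()))
    return traj

# Placeholder gradient Dvu(x, v, s) = v, only to exercise the code:
traj = integrate_feedback(lambda x, v, s: v, [1.0], [0.5],
                          t=0.0, T=1.0)
\end{verbatim}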
\section{The continuity equation}\label{sect:c_eq}
In this section, our aim is to study equation \eqref{eq:MFGAs}-(ii), and more precisely the well-posedness of
\begin{equation}\label{continuity}
\left\{
\begin{array}{ll} \partial_t m+v\cdot D_xm-
\diver_v (m\, D_v u)= 0,&\qquad \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
m(x,v, 0)=m_0(x,v), &\qquad \textrm{on }{\mathbb R}^{2N},
\end{array}\right.
\end{equation}
where $u$ is the value function associated to the cost $J_t$ in~\eqref{def:OC}; for the sake of clarity, let us recall from Proposition~\ref{exuniq} that $u$ is the unique viscosity solution fulfilling~\eqref{eq:stimau} to the problem
\begin{equation*}
\left\{\begin{array}{ll}
-\partial_t u-v\cdot D_xu+\frac1 2 \vert D_vu\vert^2-\frac1 2 \vert v \vert^2-l(x,v) =F[\overline m(t)](x,v),&\quad \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
u(x,v, T)=G[\overline m(T)](x, v),&\quad \textrm{on }{\mathbb R}^{2N},
\end{array}\right.
\end{equation*}
and~$\overline m$ is fixed and belongs to $ C([0,T];\mathcal P_1({\mathbb R}^{2N}))$.
It is worth observing that the differential equation in~\eqref{continuity} can also be written
\[
\partial_t m- \diver_{x,v} (m\, b)=0,
\]
with $ b:=(-v, D_v u)$.
In the present framework, the properties of $u$ (semi-concavity and local Lipschitz continuity) are not enough to ensure that the flow $\Phi(x,t,s)$ given by Lemma~\ref{B} has a Lipschitz continuous inverse, by contrast with \cite[Lemma 4.13]{C}.
Moreover, the drift $ b$ is only locally bounded; this lack of regularity makes it impossible to apply the standard results for drifts which are Lipschitz continuous (uniqueness, existence and representation formula of $m$ as the push-forward of $m_0$ through the characteristic flow; e.g., see \cite[Proposition 8.1.8]{AGS}). We shall overcome this difficulty by applying the superposition principle \cite[Theorem 8.2.1]{AGS}. The latter yields a representation formula of $m$ as the push-forward of some measure on~$C([0,T];{\mathbb R}^{2N})$ through the evaluation map~$e_t$ defined by $e_t(\gamma)=\gamma(t)$ for every continuous function $\gamma$ with values in ${\mathbb R}^{2N}$.
In the following theorem, we state existence, uniqueness, and some regularity results for~\eqref{continuity}:
\begin{theorem}\label{prp:m}
Under assumptions {\rm (H)}, for any $\overline m\in C([0,T]; \mathcal P_1({\mathbb R}^{2N}))$,
there is a unique $m\in C^{\frac 1 2} ([0,T]; \mathcal P_1({\mathbb R}^{2N})) \cap L^\infty((0,T);\mathcal P_2({\mathbb R}^{2N}))$
which solves problem \eqref{continuity} in the sense of Definition~\ref{defsolmfg}.
Moreover $m$ satisfies: for any $\phi\in C^0_b({\mathbb R}^{2N})$ and any $t\in [0,T]$,
\begin{equation}
\label{ambrosio}
\int_{{\mathbb R}^{2N}} \phi(x,v)\, m(x,v,t)dxdv=\int_{{\mathbb R}^{2N}}\phi\left(\overline {\gamma}_{x,v}(t)\right)\,m_0(x,v)\, dx dv,
\end{equation}
where, for a.e. $(x,v)\in{\mathbb R}^{2N}$, $\overline{\gamma}_{x,v}$ is the solution to \eqref{dyn}.
\end{theorem}
The proof of Theorem~\ref{prp:m} is given in the next two subsections which are devoted respectively to existence (see Proposition~\ref{VV}) and to uniqueness and the representation formula (see Proposition~\ref{!FP}).
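The representation formula \eqref{ambrosio} lends itself to a simple particle interpretation: sample initial states from $m_0$, transport each sample along its characteristic, and average. The following Monte Carlo sketch makes this concrete; it is entirely illustrative (the Gaussian $m_0$, the test function and the placeholder gradient are choices of ours, and the true $D_vu$ would come from \eqref{HJ}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
xv0 = rng.normal(size=(10_000, 2))   # (x, v) drawn from m_0 = N(0, I)
phi = lambda x, v: np.cos(x) * np.exp(-v**2)

def flow(x, v, t, Dvu=lambda x, v, s: v, n=200):
    """Euler discretization of the characteristics x' = v,
    v' = -D_v u; Dvu is a placeholder gradient."""
    h = t / n
    for k in range(n):
        x, v = x + h * v, v - h * Dvu(x, v, k * h)
    return x, v

xt, vt = flow(xv0[:, 0], xv0[:, 1], t=1.0)
# Monte Carlo estimate of int phi dm(t) = int phi(gamma_{x,v}(t)) dm_0:
print("estimate:", phi(xt, vt).mean())
\end{verbatim}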
\subsection{Existence of the solution}\label{subsect:ex}
We wish to establish the existence of a solution to the continuity equation
via a vanishing viscosity method applied to the {\it whole} MFG system in which the viscous terms involve
Laplace operators with respect to {\it both} $x$ and $v$. This is reminiscent of \cite[Appendix]{C13} (see also \cite[Section 4.4]{C}).
In this way, $D_vu$ is replaced by $D_vu^\sigma$, which is regular by standard regularity theory for parabolic equations;
this implies the regularity of the solution of the Fokker-Planck equation (see \cite{CH}).
Note also that $D_vu$ may be unbounded; we shall overcome this issue by taking advantage of estimates similar to those in Lemma \ref{L1}.
Indeed, these estimates will allow us to apply classical results for the existence and uniqueness of the solution.
\begin{proposition}\label{VV}
Under assumptions $\rm{(H)}$, for any~$\overline m\in C([0,T];\mathcal P_1({\mathbb R}^{2N}))$, problem \eqref{continuity} has a solution $m$ in the sense of Definition \ref{defsolmfg}. Moreover $m\in C^{\frac 1 2} ([0,T]; \mathcal P_1({\mathbb R}^{2N})) \cap L^\infty(0,T;\mathcal P_2({\mathbb R}^{2N}))$.
\end{proposition}
We consider the solution $(u^\sigma, m^\sigma)$ to the following problem
\begin{equation}
\label{eq:MFGv}
\left\{
\begin{array}{lll}
(i)\ -\partial_t u-\sigma \Delta_{x,v} u-v\cdot D_xu+\frac1 2 \vert D_vu\vert^2-\frac1 2 \vert v \vert^2-l(x,v) =F[\overline m(t)](x, v),\ & \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
(ii)\ \partial_t m-\sigma \Delta_{x,v} m-\diver _v (m D_v u)-v\cdot D_xm=0, & \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
(iii)\ m(x,v, 0)=m_0(x,v),\quad u(x,v,T)=G[\overline m(T)](x, v),& \textrm{on }{\mathbb R}^{2N}.
\end{array}\right.
\end{equation}
Recall that equation \eqref{eq:MFGv}-(ii) has a standard probabilistic interpretation (see relation~\eqref{mstoch} below).
Our aim is to find a solution to problem~\eqref{continuity} by letting $\sigma$ tend to $0^+$. To this end, some estimates are needed.
\\
Note that equation~\eqref{eq:MFGv}-(ii) can be written in the compact form
\begin{equation}\label{eq:4}
\partial_t m-\sigma \Delta_{x,v} m-\diver _{x,v} (m { b}^\sigma)=0,\qquad \textrm{with }{ b}^\sigma:=(-v, D_v u^\sigma).
\end{equation}
We start by establishing the well-posedness of system~\eqref{eq:MFGv} and that the functions $u^\sigma$ are Lipschitz continuous and semi-concave uniformly in~$\sigma$.
\begin{lemma}\label{visco:lemma5.2}
Under the same assumptions as in Proposition~\ref{VV}, there exists a unique classical solution~$u^\sigma$ to problem~\eqref{eq:MFGv}-(i), -(iii) with at most quadratic growth in $(x,v)$. Moreover, there exists a constant $C>0$
which depends only on the constants in assumptions $\rm{(H)}$, in particular it is independent of $\sigma\le 1$,
such that
\begin{eqnarray*}
&(a)&|u^\sigma(x,v,t)|\leq C(1+|v|^2),\\
&(b)&\|D_x u^\sigma\|_\infty\leq C, \quad |D_v u^\sigma(x,v,t)|\leq C(1+|v|), \quad |\partial_t u^\sigma(x,v,t)|\leq C(1+|v|^2),\\
&(c)& D^2_{x,v} u^\sigma\leq C,
\end{eqnarray*}
where $D_{x,v}^2 u$ is the Hessian of~$u$ with respect to both~$x$ and~$v$.
\end{lemma}
\begin{proof}
Following the same arguments as in Proposition~\ref{exuniq} (based on the comparison principle by Da Lio and Ley~\cite[Theorem 2.1]{DLL}),
one can easily prove the existence of a viscosity solution to equation~\eqref{eq:MFGv}-(i) with terminal condition
as in~\eqref{eq:MFGv}-(iii) and satisfying inequality~$(a)$. Furthermore, still by the results in~\cite{DLL}, this solution is unique among the functions with this growth at infinity. Hence, estimate~$(a)$ is proved.
Let us now prove that this viscosity solution~$u^\sigma$ is a classical solution. To this end, let us assume for a moment that $u^\sigma$ satisfies estimates~$(b)$ and $(c)$. We see that $u^\sigma$ is a viscosity subsolution of
\begin{displaymath}
- \partial_t u-\sigma \Delta_{x,v} u-v\cdot D_xu\le C(1+|v|^2).
\end{displaymath}
Moreover, from estimate~$(c)$, we see that at any point $(x,v,t)$, either $u^\sigma$ is twice differentiable with respect to $x$ and $v$, or
there exists a smooth function that touches $u^\sigma$ from below. This and estimate~$(b)$ imply that $u^\sigma$ is a viscosity supersolution of
\begin{displaymath}
- \partial_t u-\sigma \Delta_{x,v} u-v\cdot D_xu\ge -C(1+|v|^2)
\end{displaymath}
for some positive constant $C$. From \cite{MR1341739}, $u^\sigma$ is also a distributional subsolution (respectively supersolution) of the same linear inequalities.
Therefore, both $-\partial_t u^\sigma-\sigma \Delta_{x,v} u^\sigma-v\cdot D_xu^\sigma$ and
$-\frac1 2 \vert D_vu^\sigma\vert^2+\frac1 2 \vert v \vert^2 +l(x,v)+F[\overline m](x, v)$ are in $L_{\rm loc}^\infty$.
On the other hand, from~$(b)$ and $(c)$, Alexandrov's theorem implies that $u^\sigma$ is twice differentiable with respect to $x$ and $v$ almost everywhere, so the equation
$$-\partial_t u^\sigma-\sigma \Delta_{x,v} u^\sigma-v\cdot D_xu^\sigma
=-\frac1 2 \vert D_vu^\sigma\vert^2+\frac1 2 \vert v \vert^2 +\ell(x,v,t),$$
(where $\ell$ and $g$ are defined in (\ref{ell})),
holds almost everywhere, and in the sense of distributions since both the left and right hand sides are in $L^\infty_{\rm loc}$.
\\
Hence classical results on the regularity of weak solutions (including bootstrap) can be applied and yield that $u^\sigma$ is a classical solution.
Let us now prove the estimates $(b)$ and $(c)$, by using arguments similar to those contained in the proof
of Lemma \ref{L1}. They rely on a representation formula for $u^\sigma$ arising from a stochastic optimal control problem (see, for example, \cite{DLL,BCQ,C}).
\\
Let $(\Omega, \mathcal F, (\mathcal F_t), \mathbb{P})$ be a complete filtered probability space, the filtration $(\mathcal F_t)$ supporting a standard $2N$-dimensional Brownian motion $B_s=(B_{x,s}, B_{v,s})$.
Let ${\mathcal A}_t$ be the set of ${\mathbb R}^{N}$-valued $(\mathcal F_t)$-progressively measurable processes and let $\mathbb{E}$ be the expectation with respect to the probability measure $\mathbb{P}$. The unique solution of \eqref{eq:MFGv}-(i) which satisfies point (a) can be written as:
\begin{displaymath}
u^\sigma(x,v, t) =\inf_{\alpha\in \mathcal A_t} \mathbb{E}\left(
\begin{array}[c]{l}
\displaystyle \int_t^T\left[\frac12 |\alpha(s)|^2+\frac12 |V(s)|^2+\ell(X(s), V(s), s) \right] \,ds +g(X(T), V(T))
\end{array}
\right)
\end{displaymath}
where the controlled process $(X(\cdot), V(\cdot))$ satisfies
\begin{displaymath}
X(t)=x, \quad V(t)=v,
\end{displaymath}
almost surely and
is governed by the stochastic differential equations
\begin{equation}
\label{1-stoc}
\left\{
\begin{array}{l}
dX=V(s) ds +\sqrt{2\sigma} dB_{x,s}, \\
dV= \alpha(s) ds +\sqrt{2\sigma} dB_{v,s}.
\end{array}
\right.
\end{equation}
Thus, almost surely,
\begin{equation}
\label{pro}
\left\{
\begin{array}{rcl}
\displaystyle X(s)&=& \displaystyle x+v(s-t)+\int_t^s \int_t^{\theta}\alpha(\tau) \,d\tau d\theta+\sqrt{2\sigma} \int_t^s\left( \int_t^{\theta}
dB_{v,\tau}\right)\, d\theta+\sqrt{2\sigma} \int_t^s dB_{x,\tau},\\
V(s)&=& \displaystyle v+\int_t^s \alpha(\tau)\,d\tau+\sqrt{2\sigma} \int_t^s dB_{v,\tau}.
\end{array}
\right.
\end{equation}
To prove $(b)$, we can use exactly the same arguments as for Lemma \ref{L1}, replacing the paths
$(x(s), v(s))$ and $(y(s), w(s))$ by the processes $(X(s), V(s))$ and $(Y(s), W(s))$,
and noting that, from \eqref{pro}, we get similar equalities as in \eqref{diff}.\\
Note that, for any $\sigma$, we get from $(a)$ that any $\epsilon$-optimal control $\alpha^{\sigma}$ for $u^\sigma(x,v, t)$ satisfies
\begin{displaymath}
\mathbb{E}\left(\int_t^T|\alpha^{\sigma}(s)|^2ds\right)\leq C(1+|v|^2),
\end{displaymath}
hence, we get the same estimates as (\ref{eq:9})
and \eqref{L2}, namely estimate $(b)$. An analytic proof of $(b)$ is also possible, see \cite[Chapter XI]{Lie}.
\\
In order to prove $(c)$, we can follow the same procedure as in the proof of Lemma \ref{semi-concav},
noting that:\\
i) equalities \eqref{eqsemi} and \eqref{reltutte} are still true for the stochastic processes,\\
ii) if we fix $s\in [t,T]$, using a Taylor expansion of $g$ as in \eqref{taypro}, we get
\begin{multline*}
g(X(s),V(s))=
g(X_{\lambda}(s), V_{\lambda}(s))+ Dg(X_{\lambda}(s), V_{\lambda}(s))(X(s)-X_{\lambda}(s), V(s)-V_{\lambda}(s))\\
+\frac{1}{2}(X(s)-X_{\lambda}(s), V(s)-V_{\lambda}(s))D^2g(\xi,\eta)(X(s)-X_{\lambda}(s), V(s)-V_{\lambda}(s))^T,
\end{multline*}
where
$\xi=X_{\lambda}(s)+\theta_1 (X(s)-X_{\lambda}(s))$, $\eta=V_{\lambda}(s)+\theta_2 (V(s)-V_{\lambda}(s))$
for suitable $\theta_1$ and $\theta_2$ in $[0,1]$. For a similar proof, see \cite{BCQ}.
\end{proof}
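As an aside, the controlled dynamics \eqref{1-stoc} are straightforward to simulate. The following Euler-Maruyama sketch (with a placeholder control $\alpha\equiv 0$, chosen by us purely for illustration) produces one sample path $(X(T), V(T))$ of the process appearing in the stochastic representation of $u^\sigma$.
\begin{verbatim}
import numpy as np

def euler_maruyama(x0, v0, t, T, sigma, alpha, n=1000, seed=0):
    """One sample path of dX = V ds + sqrt(2 sigma) dB_x,
    dV = alpha ds + sqrt(2 sigma) dB_v (N = 1 for simplicity)."""
    rng = np.random.default_rng(seed)
    h = (T - t) / n
    x, v, s = float(x0), float(v0), t
    for _ in range(n):
        dBx, dBv = rng.normal(scale=np.sqrt(h), size=2)
        x_new = x + v * h + np.sqrt(2 * sigma) * dBx
        v_new = v + alpha(x, v, s) * h + np.sqrt(2 * sigma) * dBv
        x, v, s = x_new, v_new, s + h
    return x, v

XT, VT = euler_maruyama(0.0, 1.0, 0.0, 1.0, sigma=0.1,
                        alpha=lambda x, v, s: 0.0)  # placeholder control
\end{verbatim}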
\begin{lemma}\label{visco:buonapos}
Under the same assumptions as in Proposition~\ref{VV}, there exists a unique classical solution $m^\sigma$
to problem~\eqref{eq:MFGv}-(ii), -(iii) with a sub-exponential growth in $(x,v)$. Moreover, $m^\sigma>0$.
\end{lemma}
\begin{proof}
By Lemma~\ref{visco:lemma5.2}, the problem for $m^\sigma$ can be written
\[
\quad \partial_t m-\sigma \Delta_{x,v} m -b^{\sigma}\cdot D_{x,v} m-\left(\Delta_{v}u^{\sigma}\right)m=0,
\quad m(0)=m_0,
\]
where $b^\sigma$ has been introduced in (\ref{eq:4}); from the estimates contained in Lemma \ref{visco:lemma5.2},
$|b^{\sigma}|\leq C(1+|v|)$ and $\Delta_{v}u^{\sigma}\leq C$.
Using this and the results contained in \cite{IK}, we get the existence and uniqueness of a classical solution $m^\sigma$ of \eqref{eq:MFGv}-(ii) with initial condition as in~\eqref{eq:MFGv}-(iii).
From the assumptions on $m_0$ and the Harnack inequality (see for example \cite[Theorem 2.1, p.13]{LSU}), we get that $m^\sigma(\cdot,t)>0$ for $t>0$.
\end{proof}
Let us now prove some properties of the functions~$m^\sigma$ which will play a crucial role in the proofs of Proposition~\ref{VV} and of Theorem~\ref{thm:main}.
\begin{lemma}\label{visco:lemma4}
Under the same assumptions as in Proposition~\ref{VV}, there exists a constant $K>0$
which depends only on the constants in assumptions $\rm{(H)}$ and on $m_0$, in particular it is independent of $\sigma\le 1$, such that:
\begin{equation*}
\begin{array}{ll}
1.\quad &\|m^\sigma\|_\infty\leq K, \\
2.\quad &{\bf d}_1(m^\sigma(t_1),m^\sigma(t_2))\leq K(t_2-t_1)^{1/2}, \qquad \forall t_1\le t_2\in[0,T],\\
3.\quad & \displaystyle\int_{{\mathbb R}^{2N}}(|x|^2+|v|^2)\,dm^\sigma(t)(x,v)\leq K \left(\displaystyle\int_{{\mathbb R}^{2N}}(|x|^2+|v|^2)\,dm_0(x,v)+1 \right),\qquad \forall t\in[0,T].
\end{array}
\end{equation*}
\end{lemma}
\begin{proof}
Point 1. In order to prove this $L^\infty$ estimate, we argue as in \cite[Theorem 5.1]{C13}. We note that
\begin{displaymath}
\diver _v (m^\sigma D_v u^\sigma)=D_vm^\sigma\cdot D_vu^\sigma +m^\sigma(\Delta_{v}u^\sigma)\leq D_vm^\sigma\cdot D_vu^\sigma +Cm^\sigma,
\end{displaymath}
because of the semi-concavity of~$u^\sigma$ established in Lemma~\ref{visco:lemma5.2} and the positivity of $m^\sigma$. Therefore, from assumption ${\rm (H2)}$, the function~$m^\sigma$ satisfies
\begin{displaymath}
\partial_t m^\sigma-\sigma \Delta_{x,v} m^\sigma-v \cdot D_xm^\sigma-D_vu^\sigma\cdot D_vm^\sigma -Cm^\sigma\leq 0,\qquad m^\sigma(x,v, 0)\leq C.
\end{displaymath}
Then, using $w=Ce^{ C t}$ as a supersolution (recall that $C$ is independent of $\sigma$) and the comparison principle proved in~\cite[Theorem 2.1]{DLL}, we obtain that
$\|m^\sigma\|_\infty\leq Ce^{ C T}$.
To prove points 2 and 3, as in the proofs of \cite[Lemmas 3.4 and 3.5]{C}, it is convenient to introduce the stochastic differential equation
\begin{equation}\label{11C}
dY_t= b^\sigma(Y_t,t) dt +\sqrt{2\sigma} dB_t,\qquad Y_0=Z_0,
\end{equation}
where $Y_t=(X_{t}, V_{t})$, $ b^\sigma (x,v,t)= (-v, D_vu^\sigma(x,v,t))$,
$B_t$ is a standard $2N$-dimensional Brownian motion, and ${\mathcal L}(Z_0)=m_0$. By standard arguments, setting
\begin{equation}
\label{mstoch}
m^\sigma(t):={\mathcal L}(Y_t),
\end{equation}
we know that $m^\sigma(t)$ is absolutely continuous with respect to the Lebesgue measure, and that if $m^\sigma(\cdot, \cdot,t)$ is the density of $m^\sigma(t)$, then $m^\sigma$ is the weak solution to~\eqref{eq:MFGv}-(ii) with $m^\sigma|_{t=0}=m_0$ (by It{\^o}'s theorem, since $ b^\sigma$ has at most linear growth with respect to $(x,v)$; see \cite[Chapter 5, Proposition 3.6, p.~303]{KS} and \cite{Kr}). Here again, we have used the estimate on $|D_vu^\sigma|$ given in Lemma \ref{visco:lemma5.2}.
\begin{description}
\item{Point 3:}
Noting that
$$\int_{{\mathbb R}^{2N}}(|x|^2+|v|^2) dm^\sigma(t)(x,v)= {\mathbb E}(|Y_t|^2),$$ the desired estimate
can be obtained by applying estimate (3.17) of \cite[Problem 3.15, p.~306]{KS} (the solutions are at p.~389) with $m=1$.
\item{Point 2:}
For $t_2\geq t_1$, it is well known that
$${\bf d}_1(m^\sigma(t_1),m^\sigma(t_2))\leq {\mathbb E}(|Y_{t_1}-Y_{t_2}|).$$
Recall also that for a suitable constant $C$,
$$\vert b^\sigma(Y_\tau, \tau)\vert \leq C(\vert V_\tau\vert+1).$$
The latter two observations imply that
\begin{displaymath}
\begin{array}[c]{rcl}
\mathbb{E}(|Y_{t_1}-Y_{t_2}|)&\leq& \displaystyle \mathbb{E}\left(\int_{t_1}^{t_2} \vert b^\sigma(Y_\tau, \tau)\vert d\tau+ \sqrt {2\sigma}|B_{t_2}-B_{t_1}|\right)\\
&\leq& \displaystyle \mathbb{E}\left(C \int_{t_1}^{t_2} (\vert V_\tau \vert +1) d\tau+ \sqrt {2\sigma} |B_{t_2}-B_{t_1}|\right)\\
&\leq& \displaystyle C\left(\mathbb{E}\left(\int_{t_1}^{t_2} (\vert V_\tau\vert ^2 +1) d\tau\right)\right) ^{\frac 1 2 } \sqrt{t_2-t_1} + \sqrt {2\sigma}\sqrt{t_2-t_1} \\
&\leq & \displaystyle C\left(\mathbb{E}\left(\max_{[{t_1},{t_2}]}\vert Y_\tau\vert^2\right) +1\right) ^{\frac 1 2 } (t_2-t_1)+ \sqrt {2\sigma}\sqrt{t_2-t_1},
\end{array}
\end{displaymath}
where we have used \cite[estimate (3.17), p.~306]{KS}.
\end{description}
\end{proof}
\begin{proof}[Proof of Proposition \ref{VV}]
The arguments are similar to those in the proof of \cite[Theorem 5.1]{C13} (see also \cite[Theorem 4.20]{C}).
Lemma~\ref{visco:lemma5.2} implies that, possibly after the extraction of a subsequence,
$u^\sigma$ converges locally uniformly to some function~$u$,
which is Lipschitz continuous with respect to $x$, locally Lipschitz continuous with respect to $v$, and
$Du^\sigma\to Du$ a.e. (because of the semi-concavity estimate of Lemma~\ref{visco:lemma5.2} and \cite[Theorem 3.3.3]{CS}).
By standard stability results for viscosity solutions, the function~$u$ is a viscosity solution of \eqref{HJ}.
On the other hand, the function $m^\sigma$ satisfies the estimates stated in Lemma~\ref{visco:lemma4}:
\begin{enumerate}
\item from point 3, $m^\sigma(t)$ is bounded in $\mathcal P_2({\mathbb R}^{2N})$ uniformly in $\sigma\in[0,1]$ and $t\in [0,T]$;
\item from points 2 and 3, $m^\sigma$ is bounded in $C^{1/2}([0,T];\mathcal P_1({\mathbb R}^{2N}))$ uniformly with respect to $\sigma\in[0,1]$.
\end{enumerate}
Recalling that the subsets of $\mathcal P_1({\mathbb R}^{2N})$ whose elements have uniformly bounded second moment
are relatively compact in $\mathcal P_1({\mathbb R}^{2N})$ (see for example \cite[Lemma 5.7]{C}), we can apply the Ascoli-Arzel{\`a} theorem:
we may extract a sequence (still indexed by~$\sigma$ for simplicity) such that $\sigma\to 0^+$ and $m^\sigma$ converges to some $m\in C^{1/2}([0,T];\mathcal P_1({\mathbb R}^{2N}))$ in the $C([0,T];\mathcal P_1({\mathbb R}^{2N}))$ topology. Moreover, from point 1 in Lemma~\ref{visco:lemma4} and Banach-Alaoglu theorem, $m$ belongs to
$L^\infty_{\rm loc}((0,T)\times{\mathbb R}^{2N})$ and the sequence $m^\sigma$ converges to $m$ in $L^\infty_{\rm loc}((0,T)\times{\mathbb R}^{2N})$-weak-$*$.
\\
Therefore, by passing to the limit, we immediately obtain that $m|_{t=0}=m_0$, $\|m\|_\infty \le K$ and that
${\bf d}_1(m(t_1),m(t_2))\leq K(t_2-t_1)^{1/2}$, $\forall t_1\le t_2\in[0,T]$.
\\
Let us prove that for all $t\in [0,T]$,
\begin{equation}
\label{eq:7}
\displaystyle \int_{{\mathbb R}^{2N}}(|x|^2+|v|^2)\,dm(t)(x,v)\leq K \left(\int_{{\mathbb R}^{2N}} (|x|^2+|v|^2)\,dm_0(x,v)+1 \right).
\end{equation}
For that, let us consider the increasing sequence of functions defined on ${\mathbb R}_+$: $\phi_n(\rho)= 1\wedge ((n+1-\rho)\vee 0)$.
We know from point 3 in Lemma~\ref{visco:lemma4}, that for all $t\in [0,T]$,
\begin{equation}
\label{eq:5}
\displaystyle \int_{{\mathbb R}^{2N}}(|x|^2+|v|^2) \phi_n(|x|^2+|v|^2) m^\sigma(x,v,t) dxdv
\leq K \left(\int_{{\mathbb R}^{2N}} (|x|^2+|v|^2)\,dm_0(x,v)+1 \right).
\end{equation}
For a fixed $n$, we can pass to the limit in (\ref{eq:5}) thanks to the $L^\infty_{\rm loc}((0,T)\times{\mathbb R}^{2N})$-weak-$*$ convergence established above. We obtain:
\begin{equation}
\label{eq:6}
\displaystyle \int_{{\mathbb R}^{2N}}(|x|^2+|v|^2) \phi_n(|x|^2+|v|^2) m(x,v,t) dxdv
\leq K \left(\int_{{\mathbb R}^{2N}}(|x|^2+|v|^2)\,dm_0(x,v)+1 \right).
\end{equation}
We then pass to the limit as $n\to +\infty$ thanks to the Beppo Levi monotone convergence theorem, and obtain (\ref{eq:7}).
\\
Finally, since $m^\sigma$ is a solution to \eqref{eq:MFGv}-(ii), we have
\begin{displaymath}
\int_0^T\int_{{\mathbb R}^{2N}}m^\sigma\left(-\partial_t \psi -\sigma \Delta_{x,v} \psi+D_v\psi\cdot D_vu^\sigma - v \cdot D_x\psi \right)\,dxdv\, dt=0
\end{displaymath}
for any $\psi\in C^\infty_c((0,T)\times{\mathbb R}^{2N})$.
Applying the dominated convergence theorem, we infer that $D_v\psi\cdot D_vu^\sigma\to D_v\psi\cdot D_vu$ in $L^1$ as $\sigma \to 0^+$, because the $D_v u^\sigma$ are locally bounded uniformly in $\sigma$ (see Lemma~\ref{visco:lemma5.2}), $D_vu^\sigma\to D_vu$ a.e. and $\psi$ has compact support.
Letting $\sigma \to 0^+$, we conclude from the $L^\infty_{\rm loc}$-weak-$*$ convergence of~$m^\sigma$ and the convergence $Du^\sigma\to Du$ a.e. that the function~$m$ solves \eqref{continuity} in the sense of Definition~\ref{defsolmfg}.
\end{proof}
\begin{remark}
\label{sec:existence-solution}
Note that we have just proven that all the estimates on $u^\sigma$ contained in Lemma \ref{visco:lemma5.2} hold for $u$.
These estimates have also been obtained directly in the proof of Lemma~\ref{L1}. Similarly, all the estimates on $m^\sigma$ contained in Lemma \ref{visco:lemma4} hold for $m$.
\end{remark}
\subsection{Uniqueness of the solution}\label{uniq}
We now deal with uniqueness for~\eqref{continuity}.
\begin{proposition}\label{!FP}
Under assumptions $\rm{(H)}$, the function $m$ found in Proposition \ref{VV} is the unique
solution to problem \eqref{continuity} in the sense of Definition~\ref{defsolmfg} such that \\
$m\in C^{\frac 1 2} ([0,T]; \mathcal P_1({\mathbb R}^{2N})) \cap L^\infty((0,T);\mathcal P_2({\mathbb R}^{2N}))$.
\\
Moreover, $m$ satisfies:
\begin{equation}\label{ambrosio2}
\int_{{\mathbb R}^{2N}} \phi(x,v) \, m(x,v,t) dxdv
=\int_{{\mathbb R}^{2N}}\phi(\overline{ \gamma}_{x,v}(t))\,m_0(x,v)\, dxdv, \qquad \forall \phi\in C^0_b({\mathbb R}^{2N}), \, \forall t\in[0,T],
\end{equation}
where, for a.e. $(x,v)\in{\mathbb R}^{2N}$, $\overline{\gamma}_{x,v}$ is the solution to \eqref{dyn}.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{!FP}]
The proof is similar to that of \cite[Proposition A.1]{CH}, which relies on the superposition principle~\cite[Theorem 8.2.1]{AGS}.
Let $\Gamma_T$ denote the set of continuous curves in ${\mathbb R}^{2N}$, namely $\Gamma_T=C([0,T];{\mathbb R}^{2N})$.
For any $t\in[0,T]$, we introduce the evaluation map: $e_t: {\mathbb R}^{2N}\times\Gamma_T\to {\mathbb R}^{2N}$, $e_t(x,v,\gamma):=\gamma(t)$.
Hereafter, when we write ``for a.e.'' without specifying the measure, we mean ``with respect to the Lebesgue measure''.
Let $m\in C^{1/2}([0,T];{\mathcal P}_1({\mathbb R}^{2N}))\cap L^\infty((0,T);{\mathcal P}_2({\mathbb R}^{2N}))$
be a solution of problem~\eqref{continuity} in the sense of Definition~\ref{defsolmfg}. Recall the notation $b(x,v,t)= (-v, D_vu(x,v,t))$.
The estimate (8.1.20) in Chapter 8 of \cite{AGS} is fulfilled: indeed,
\begin{eqnarray*}
\int_0^T\int_{{\mathbb R}^{2N}}|b(x,v,t)|^2dm(t)(x,v)&\leq& C\int_0^T\int_{{\mathbb R}^{2N}}|v|^2dm(t)(x,v)\\
&&+C\int_0^T\int_{{\mathbb R}^{2N}}|D_vu(x,v,t)|^2dm(t)(x,v)\leq C,
\end{eqnarray*}
where the last inequality comes from the estimates on $D_vu$ and $m$ in Remark~\ref{sec:existence-solution} (recall that $m(t)$ is a probability measure).
Therefore, the assumptions of the superposition principle are fulfilled (see \cite[Theorem 8.2.1]{AGS} and also \cite[pag. 182]{AGS}).
The latter and the disintegration theorem (see \cite[Theorem 5.3.1]{AGS}) entail that there exist a probability measure
$\eta$ on ${\mathbb R}^{2N}\times \Gamma_T$ and, for $m_0$-almost every $(x,v)\in{\mathbb R}^{2N}$, a probability measure $\eta_{x,v}$ on~$\Gamma_T$, such that
\begin{description}
\item{i)} $e_t\#\eta =m_t $, i.e., for every bounded and continuous real valued function $\psi$ defined on ${\mathbb R}^{2N}$, for every $t\in [0,T]$,
\begin{displaymath}
\int_{{\mathbb R}^{2N}} \psi(x,v)dm_t(x,v)= \int_{{\mathbb R}^{2N}\times \Gamma_T} \psi(\zeta(t)) d\eta (x,v,\zeta).
\end{displaymath}
In particular, $ e_0\#\eta =m_0$.
\item{ii)}
\begin{displaymath}
\eta =\displaystyle\int_{{\mathbb R}^{2N}}\eta_{x,v}\, dm_0(x,v),
\end{displaymath}
i.e. for every bounded Borel function $f: {\mathbb R}^{2N}\times \Gamma_T \to {\mathbb R}$,
\begin{displaymath}
\int_{{\mathbb R}^{2N}\times \Gamma_T} f(x,v, \zeta) d\eta (x,v,\zeta)= \int_{{\mathbb R}^{2N}} \left(\int_{\Gamma_T} f(x,v,\zeta) d\eta_{x,v}(\zeta)
\right) dm_0(x,v).
\end{displaymath}
\item{iii)} For $m_0$-almost every $(x,v)\in {\mathbb R}^{2N}$, the support of $\eta_{x,v} $ is contained in the set
\begin{equation}
\label{eq:8}
\left \{ \zeta\in {\rm AC} \left([0,T]; {\mathbb R}^{2N} \right) : \zeta(t) = (\xi(t), \eta(t)): \left|
\begin{array}[c]{l}
\xi(0)=x,\;\eta(0)=v,\\
\xi'(t)=\eta(t),\\
\eta'(t)= -D_vu(\xi(t),\eta(t),t).
\end{array}
\right. \right\}.
\end{equation}
\end{description}
Recall that in the present case, $m_0$ is absolutely continuous (from assumption ${\rm (H4)}$); hence, since for all $t\in [0,T]$, $u(\cdot,\cdot, t)$ is Lipschitz continuous, the optimal synthesis in Lemma~\ref{B} ensures that for a.e. $(x,v)\in{\mathbb R}^{2N}$, (\ref{eq:2})-\eqref{OS} (with $t=0$ in the present context) has a unique solution $\overline{\gamma}_{x,v}$, because it is the optimal trajectory for the cost $J_t$.
Therefore, for a.e. $(x,v)\in{\mathbb R}^{2N}$, the set in (\ref{eq:8}) is a singleton, or, equivalently,
$\eta_{x,v}$ coincides with $\delta_{\overline{\gamma}_{x,v}}$.
In conclusion, for any function $\psi\in C^0_b({\mathbb R}^{2N})$,
\begin{displaymath}
\begin{array}[c]{rcl}
\displaystyle \int_{{\mathbb R}^{2N}} \psi (x,v)\, m(x,v,t) dxdv&=&\displaystyle \int_{{\mathbb R}^{2N}\times\Gamma_T} \psi( e_t(\zeta)) d\eta(x,v,\zeta)
\\
&=&\displaystyle \int_{{\mathbb R}^{2N}} \left(\int_{\Gamma_T} \psi( e_t(\zeta)) d\eta_{x,v}(\zeta)
\right) dm_0(x,v)\\
&=& \displaystyle
\int_{{\mathbb R}^{2N}} \psi( e_t(\overline{\gamma}_{x,v})) dm_0(x,v)\\
&=& \displaystyle \int_{{\mathbb R}^{2N}} \psi(\overline{\gamma}_{x,v} (t)) m_0(x,v) dxdv.
\end{array}
\end{displaymath}
This shows that $m$ is uniquely defined as the image of $m_0$ by the flow of (\ref{dyn}).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{prp:m}]{\empty}
Existence of $m$ comes from Proposition~\ref{VV}, uniqueness and the representation formula come from Proposition~\ref{!FP}.
\end{proof}
\section{Proof of the main results}\label{sect:MFG}
\begin{proof}[Proof of Theorem~\ref{thm:main}]{\empty}
For point 1, we argue as in the proof of \cite[Theorem 4.1]{C}. Consider the set ${\mathcal C} :=\{m\in C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))\mid m(0)=m_0\}$ endowed with the norm of~$C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$ and observe that it is a closed and convex subset of~$C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$. We also introduce a map ${\mathcal T}$ as follows: to any $m\in {\mathcal C}$, we associate the solution~$u$ to problem~\eqref{HJ} with $\overline m=m$ and to this $u$ we associate the solution~$\mu=:{\mathcal T}(m)$ to problem \eqref{continuity} which, by Proposition~\ref{VV}, belongs to~${\mathcal C}$. Hence,~${\mathcal T}$ maps~${\mathcal C}$ into itself.
We claim that the map~${\mathcal T}$ has the following properties:
\begin{itemize}
\item[(a)] ${\mathcal T}$ is a continuous map with respect to the norm of~$C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$;
\item[(b)] ${\mathcal T}$ is a compact map.
\end{itemize}
Assume for the moment that these properties are true. In this case, the Schauder fixed point theorem ensures the existence of a fixed point for~${\mathcal T}$, namely a solution to system~\eqref{eq:MFGAs}. Therefore, it remains to prove properties $(a)$ and $(b)$.
Let us now prove~$(a)$. Let~$(m_n)_n$ be a sequence in~ ${\mathcal C}$ such that $m_n\to m$ in the $C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$ topology. We want to prove that ${\mathcal T}(m_n)\to {\mathcal T}(m)$ in $C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$.
We observe that hypothesis ${\rm (H3)}$ ensures that the functions~$(x,v,t)\mapsto F[m_n(t)](x,v)$ and~$(x,v)\mapsto G[m_n(T)](x,v)$ converge locally uniformly to the maps~$(x,v,t)\mapsto F[m(t)](x,v)$ and~$(x,v)\mapsto G[m(T)](x,v)$, respectively. Moreover, Lemma~\ref{L1} entails that the solutions~$u_n$ to problem~\eqref{HJ} with $\overline m=m_n$ are locally uniformly bounded and locally uniformly Lipschitz continuous. Therefore, by standard stability results for viscosity solutions, the sequence~$(u_n)_n$ converges locally uniformly to the viscosity solution~$u$ to problem~\eqref{HJ} with $\overline m=m$. Moreover, from Lemma~\ref{semi-concav}, the functions~$u_n$ are uniformly semi-concave; hence, by~\cite[Theorem 3.3.3]{CS}, $D u_n$ converge a.e. to $Du$.
By Proposition~\ref{VV} and Remark~\ref{sec:existence-solution}, the function~${\mathcal T}(m_n)$ verifies the bounds in Lemma~\ref{visco:lemma4} with a constant~$K$ independent of~$n$. Hence, the sequence~$({\mathcal T}(m_n))_n$ is uniformly bounded in~$C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$ (by Lemma~\ref{visco:lemma4}-(3) and Remark \ref{sec:existence-solution}, and because the subsets of $\mathcal P_1({\mathbb R}^{2N})$ whose elements have uniformly bounded second moment are relatively compact in $\mathcal P_1({\mathbb R}^{2N})$), and uniformly H{\"o}lder continuous in time with values in ${\mathcal P}_1({\mathbb R}^{2N})$ (by Lemma~\ref{visco:lemma4}-(2) and Remark \ref{sec:existence-solution}). Therefore, by Ascoli-Arzel{\`a} and Banach-Alaoglu theorems, there exists a subsequence~$({\mathcal T}(m_{n_k}))_k$ which converges to some $\mu\in C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$ in the $C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$-topology and in the $L^\infty_{\rm loc}((0,T)\times{\mathbb R}^{2N})$-weak-$*$ topology. As in Remark~\ref{sec:existence-solution}, $\mu$ verifies the bounds in~Lemma~\ref{visco:lemma4} and $\mu(0)=m_0$.
Observe that ${\mathcal T}(m_{n_k})$ solves problem~\eqref{continuity} with $u$ replaced by~$u_{n_k}$,
\begin{displaymath}
\int_0^T\int_{{\mathbb R}^{2N}}{\mathcal T}(m_{n_k})\left(-\partial_t \psi +D_v\psi\cdot D_vu_{n_k}-v\cdot D_x\psi \right)\,dxdv\, dt=0,
\end{displaymath}
for any $\psi\in C^\infty_c((0,T)\times{\mathbb R}^{2N})$. Passing to the limit as $k\to\infty$, we get that $\mu$ is a solution to~\eqref{continuity}. By the uniqueness result established in Proposition~\ref{!FP}, we deduce that $\mu={\mathcal T}(m)$, and that the whole sequence~$({\mathcal T}(m_n))_n$ converges to~${\mathcal T}(m)$.
Let us now prove $(b)$; since~${\mathcal C}$ is closed, it is enough to prove that ${\mathcal T}({\mathcal C})$ is a precompact subset of $C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$.
Let~$(\mu_n)_n$ be a sequence in~${\mathcal T}({\mathcal C})$ with $\mu_n={\mathcal T}(m_n)$ for some~$m_n\in{\mathcal C}$;
we wish to prove that, possibly for a subsequence, $\mu_n$ converges to some $\mu$ in the $C([0,T]; {\mathcal P}_1({\mathbb R}^{2N}))$-topology as $n\to\infty$.
By Remark~\ref{sec:existence-solution}, the functions~${\mathcal T}(m_n)$ satisfy the estimates in Lemma~\ref{visco:lemma4} with the same constant~$K$. Since the subsets of $\mathcal P_1({\mathbb R}^{2N})$ whose elements have uniformly bounded second moment are relatively compact in $\mathcal P_1({\mathbb R}^{2N})$, Lemma~\ref{visco:lemma4}-(3) ensures that the sequence $({\mathcal T}(m_n))_n$ is uniformly bounded. Moreover, Lemma~\ref{visco:lemma4}-(2) yields that the sequence $({\mathcal T}(m_n))_n$ is uniformly bounded in $C^{1/2}([0,T];{\mathcal P}_1({\mathbb R}^{2N}))$ and $L^\infty(0,T;{\mathcal P}_2({\mathbb R}^{2N}))$.
By arguing as in the proof of Proposition~\ref{VV}, we obtain that, possibly for a subsequence (still denoted by~${\mathcal T}(m_n)$), ${\mathcal T}(m_n)$ converges to some~$\mu$ in the $C([0,T];{\mathcal P}_1({\mathbb R}^{2N}))$-topology.
For point 2, Theorem \ref{prp:m} ensures that, if $(u,m)$ is a solution of \eqref{eq:MFGAs}, then,
for any function $\psi\in C^0_b({\mathbb R}^{2N})$,
\begin{equation}\label{reprfor}
\int_{{\mathbb R}^{2N}} \psi(x,v)\, m(x,v,t)dxdv=\int_{{\mathbb R}^{2N}}\psi(\overline{ \gamma}_{x,v}(t))m_0(x,v)\, dxdv
\end{equation}
where $\overline{\gamma}_{x,v}$ is the solution of (\ref{dyn}) (uniquely defined for a.e. $(x,v)\in{\mathbb R}^{2N}$).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prp:!}]
Let $(u_1,m_1)$ and $(u_2,m_2)$ be two solutions to system~\eqref{eq:MFGAs} in the sense of Definition~\ref{defsolmfg}. By Theorem~\ref{prp:m}, for $i=1,2$, the function~$m_i$ satisfies~\eqref{ambrosio} with $\overline {\gamma}_{x,v}$ replaced by $\overline {\gamma}_{x,v}^i$, which for a.e. $(x,v)$ is the solution to~\eqref{dyn} with~$u$ replaced by~$u_i$.
Moreover, let us recall from Lemma~\ref{L1}-(1) that the Lipschitz constant of $u_i$ has an at most linear growth in~$v$. By Gronwall's Lemma we obtain that $\overline{\gamma}_{x,v}^i$ is bounded. Since $m_0$ has compact support, we deduce that the function~$m_i$ has compact support. In particular, we obtain that $\overline u:=u_1-u_2$ is an admissible test-function for the continuity equation satisfied by~$m_i$.
Taking advantage of the convexity of our Hamiltonian~$\mathcal{H}$ and of the monotonicity of the couplings~$F$ and~$G$, we can conclude the proof following the same arguments as in~\cite[Theorem 2.5]{ll07}.
\end{proof}
\section{Appendix}
Let us now consider the second order MFG system: for a positive number $\sigma$,
\begin{equation}\label{MFG2order}
\left\{
\begin{array}{lll}
(i)\ -\partial_t u-\sigma \Delta_{x,v} u-v\cdot D_xu+\frac1 2 \vert D_vu\vert^2-\frac1 2 \vert v \vert^2-l(x,v) =F[m](x, v),\ & \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
(ii)\ \partial_t m-\sigma \Delta_{x,v} m-\diver _v (m D_v u)-v\cdot D_xm=0, & \textrm{in }{\mathbb R}^{2N}\times (0,T),\\
(iii)\ m(x,v, 0)=m_0(x,v),\quad u(x,v,T)=G[m(T)](x, v),& \textrm{on }{\mathbb R}^{2N}.
\end{array}\right.
\end{equation}
We aim at proving the existence and uniqueness of a classical solution to system~\eqref{MFG2order}.
We shall see that these results are byproducts of the estimates that we have already used above in the vanishing viscosity limit. More
precisely, the properties obtained in Section~\ref{sect:c_eq} will play a crucial role in what follows.
\begin{theorem}\label{thm:2order}
Under our standing assumptions, there exists a classical solution to problem \eqref{MFG2order}. Moreover, if the coupling costs~$F$ and~$G$ satisfy~\eqref{monot}, the solution is unique.
\end{theorem}
\begin{proof}
Our arguments are reminiscent of those used in the proof of~\cite[Theorem 3.1]{C}. We introduce
\begin{equation*}
\mathcal{C}:=\left\{m\in C^0([0,T]; \mathcal{P}_1({\mathbb R}^{2N})):\quad m(0)=m_0\right\}
\end{equation*}
which is a non-empty closed and convex subset of $C^0([0,T]; \mathcal{P}_1({\mathbb R}^{2N}))$. We define a map $\mathcal{T}$ as follows: for any $m\in \mathcal{C}$, let $u$ be the unique solution to \eqref{MFG2order}-$(i)$ and $u(x,v,T)=G[m(T)](x, v)$ found in Lemma~\ref{visco:lemma5.2}; we set $\mathcal{T}(m)=\mu$ where $\mu$ is the unique solution to \eqref{MFG2order}-$(ii)$ and $m(x,v, 0)=m_0(x,v)$ found in Lemma~\ref{visco:buonapos}. Lemma \ref{visco:lemma4} ensures that $\mathcal{T}$ maps $\mathcal{C}$ into itself.\\
By the same arguments as in the proof of Theorem \ref{thm:main}, the map $\mathcal{T}$ is continuous with respect to the norm of $C([0,T]; \mathcal{P}_1({\mathbb R}^{2N}))$ and it is compact. Hence, the Schauder fixed point theorem ensures the existence of a fixed point $m$ for $\mathcal{T}$. Let $u$ denote the corresponding solution to \eqref{MFG2order}-$(i)$ and -$(iii)$. By Lemma~\ref{visco:lemma5.2} and Lemma~\ref{visco:buonapos} again, $u$ and $m$ are regular. In conclusion, $(u,m)$ is the desired solution to \eqref{MFG2order}.\\
Let us now prove the uniqueness part of the statement. Let $(u_1,m_1)$ and $(u_2,m_2)$ be two solutions; set $\overline u=u_1-u_2$. Our aim is to follow the arguments in the proof of Proposition~\ref{prp:!}. To this end, it is enough to prove that $\overline u$ is an admissible test-function for $m_1$ and $m_2$.
Indeed, for any $R>1$, let $\phi_R$ be a cut-off function in ${\mathbb R}^{2N}$ defined by $\phi_R(x,v):=\phi_1(x/R,v/R)$ where $\phi_1$ is a $C^2$ function such that $\phi_1=1$ in $B_1$, $\phi_1=0$ outside $B_{2}$. Clearly,
\begin{equation}\label{uniq0}
D_{x,v}\phi_R=0 \quad\textrm{outside }\overline{B_{2R}\setminus B_R},\quad \|D_{x,v}\phi_R\|_\infty\leq C/R\quad \textrm{and }\|\Delta_{x,v}\phi_R\|_\infty\leq C/R^2.
\end{equation}
Using $\phi_R \overline u$ as test-function in~\eqref{MFG2order}-$(ii)$ with $(u,m)=(u_i,m_i)$ for $i=1$ or $i=2$, we get
\begin{eqnarray}\notag
0&=&\iint_{{\mathbb R}^{2N}\times[0,T]} m\left[
-\phi_R \partial_t \overline u-\sigma \Delta_{x,v} (\phi_R \overline u)+ D_v u\cdot D_v(\phi_R \overline u)+v\cdot D_x(\phi_R \overline u)\right]\, dxdvdt\\\notag
&&+\int_{{\mathbb R}^{2N}}m(x,v,T)\phi_R \left(G[m_1(T)](x,v)-G[m_2(T)](x,v)\right)dxdv-\int_{{\mathbb R}^{2N}}m_0\phi_R \overline udxdv\\ \notag
&=&\iint_{{\mathbb R}^{2N}\times[0,T]} m\phi_R\left(2 v\cdot D_x \overline u+\frac{|D_v u_2|^2-|D_v u_1|^2}2 +F[m_1] -F[m_2]+D_vu\cdot D_v\overline u \right)\, dxdvdt\\\notag
&&+\int_{{\mathbb R}^{2N}}m(x,v,T)\phi_R \left(G[m_1(T)](x,v)-G[m_2(T)](x,v)\right)dxdv-\int_{{\mathbb R}^{2N}}m_0\phi_R \overline udxdv\\\notag
&&+ \iint_{{\mathbb R}^{2N}\times[0,T]} m\left(-\sigma \overline u \Delta_{x,v} \phi_R -2\sigma D_{x,v}\phi_R\cdot D_{x,v}\overline u\right)\, dxdvdt\\\label{uniq2}
&& +\iint_{{\mathbb R}^{2N}\times[0,T]} m\left(\overline u D_v u\cdot D_v\phi_R +\overline u v\cdot D_x\phi_R\right)\, dxdvdt
\end{eqnarray}
where the second equality is due to equation \eqref{MFG2order}-$(i)$.
Since $m>0$ and $m\in L^\infty((0,T);\mathcal{P}_2({\mathbb R}^{2N}))$ (see Lemma~\ref{visco:buonapos} and Lemma~\ref{visco:lemma4}), by the estimates on $u_i$ in Lemma \ref{visco:lemma5.2}, the dominated convergence theorem ensures that, as $R\to\infty$, the first two lines in the right hand side of \eqref{uniq2} converge to
\begin{multline*}
\iint_{{\mathbb R}^{2N}\times[0,T]} m\left[2 v\cdot D_x \overline u-\frac{|D_v u_1|^2}2 +\frac{|D_v u_2|^2}2 +F[m_1] -F[m_2] +D_vu\cdot D_v\overline u\right]\, dxdvdt\\
+\int_{{\mathbb R}^{2N}}m(x,v,T) \left(G[m_1(T)](x,v)-G[m_2(T)](x,v)\right)dxdv-\int_{{\mathbb R}^{2N}}m_0 \overline udxdv;
\end{multline*}
hence, it remains to prove that the last two lines in the right hand side of~\eqref{uniq2} converge to $0$. Indeed, again by Lemmas~\ref{visco:lemma5.2},~\ref{visco:buonapos} and~\ref{visco:lemma4}, and by our estimates~\eqref{uniq0}, the dominated convergence theorem yields
\[
\iint_{{\mathbb R}^{2N}\times[0,T]} m\left(-\sigma \overline u \Delta_{x,v} \phi_R -2\sigma D_{x,v}\phi_R\cdot D_{x,v}\overline u\right)\, dxdvdt\rightarrow 0.
\]
Let us now address the last integral in the right hand side of~\eqref{uniq2}: the properties in~\eqref{uniq0} entail
\begin{equation*}
\left| m\left(\overline u D_v u\cdot D_v\phi_R +\overline u v\cdot D_x\phi_R\right)\right| \leq C m(1+|v|^2) \chi_R
\end{equation*}
where $\chi_R$ is the characteristic function of $B_{2R}\setminus B_R$. Moreover, since $m\in L^\infty((0,T);\mathcal{P}_2({\mathbb R}^{2N}))$, the right hand side in the last inequality belongs to $L^1$ independently of $R$. Therefore, again by the dominated convergence theorem, we get that as $R\to\infty$ the last integral in \eqref{uniq2} converges to $0$.
\end{proof}
\paragraph{\bf Acknowledgment.}
The authors are grateful to the anonymous referees for their fruitful comments and suggestions.
The work of Y. Achdou and N. Tchou was partially supported by the ANR (Agence Nationale de la Recherche) through MFG project ANR-16-CE40-0015-01.
Y. Achdou acknowledges support from the Chair Finance \& Sustainable Development and the FiME Lab (Institut Europlace de Finance).
The work of N. Tchou is partially supported by the Centre Henri Lebesgue ANR-11-LABX-0020-01.
P. Mannucci and C. Marchi are members of GNAMPA-INdAM and were partially supported also by the research project of the University of Padova ``Mean-Field Games and Nonlinear PDEs'' and by the Fondazione CaRiPaRo Project ``Nonlinear Partial Differential Equations: Asymptotic Problems and Mean-Field Games''.
\bibliographystyle{plain}
We construct the RS transmission block for the $(4{,}2{,}3)$ BC as follows.
\begin{itemize}
\item $1$ private symbol, denoted by $u_1$, is sent to Rx1 along a ZF-precoder $\mathbf{v}_1{=}\mathbf{H}_2^{\bot}{\in}\mathbb{C}^{4{\times}1}$ with power exponent $A_1$;
\item $2$ private symbols, denoted by $\mathbf{u}_2^{(1)}{\in}\mathbb{C}^{2{\times}1}$, are sent to Rx2 along a ZF-precoder $\mathbf{V}_2^{(1)}{=}\mathbf{H}_1^{\bot}{\in}\mathbb{C}^{4{\times}2}$ with power exponent $A_2$;
\item $1$ private symbol, denoted by $u_2^{(2)}$, is sent to Rx2 along a precoder $\mathbf{v}_2^{(2)}{\in}\mathbb{C}^{4{\times}1}$ in the subspace spanned by $\hat{\mathbf{H}}_2$. Its power exponent is $(A_2{-}\alpha_1)^+$.
\item A common message, denoted by $\mathbf{c}{\in}\mathbb{C}^{4{\times}1}$, is multicast using the remaining power.
\end{itemize}
Moreover, the power exponents $A_1$ and $A_2$ are defined as $A_1{\in}[0{,}\alpha_2]$ and $A_2{\in}[0{,}1]$. Mathematically, the transmitted and received signals write as
\begin{IEEEeqnarray}{rcl}\label{eq:RS423}
\!\!\!\!\!\!\mathbf{s}&{=}&\underbrace{\mathbf{c}}_{P}{+}\underbrace{\mathbf{v}_1u_1}_{P^{A_1}}{+}
\underbrace{\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}}_{P^{A_2}}{+}\underbrace{\mathbf{v}_2^{(2)}u_2^{(2)}}_{P^{(A_2{-}\alpha_1)^+}}
\IEEEyesnumber\IEEEyessubnumber\label{eq:s423}\\
\!\!\!\!\!\!\mathbf{y}_1&{=}&\underbrace{\mathbf{H}_1^H\mathbf{c}}_{P}{+}
\underbrace{\mathbf{H}_1^H\mathbf{v}_1u_1}_{P^{A_1}}{+}
\underbrace{\mathbf{H}_1^H\left(\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}{+}\mathbf{v}_2^{(2)}u_2^{(2)}\right)}_{P^{(A_2{-}\alpha_1)^+}}
,\IEEEyessubnumber\label{eq:y1_423}\\
\!\!\!\!\!\!\mathbf{y}_2&{=}&\underbrace{\mathbf{H}_2^H\mathbf{c}}_{P}{+}
\underbrace{\mathbf{H}_2^H\mathbf{v}_1u_1}_{P^{A_1{-}\alpha_2}}{+}
\underbrace{\mathbf{H}_2^H\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}}_{P^{A_2}}{+}
\underbrace{\mathbf{H}_2^H\mathbf{v}_2^{(2)}u_2^{(2)}}_{P^{(A_2{-}\alpha_1)^+}}.\IEEEyessubnumber\label{eq:y2_423}
\end{IEEEeqnarray}
As we can see from the received signals, if $A_2{\leq}\alpha_1$, the undesired private symbols are drowned in the noise. If $A_2{>}\alpha_1$, the power allocation policy ensures that all the three private symbols intended for Rx2 are received by Rx1 with the same power level. Considering that each receiver decodes the common message and the desired private symbols successively, the following DoF tuple is achievable
\begin{IEEEeqnarray}{rcl}\label{eq:dof423}
\!\!\!\!\!\!\text{\rm At Rx1:}\,\, d_c&{\leq}&d_c^{(1)}{\triangleq}2{-}\max\{A_1{,}A_2{-}\alpha_1\}{-}(A_2{-}\alpha_1)^+\!\!,\IEEEyesnumber\IEEEyessubnumber\label{eq:dc1_423}\\
\!\!\!\!\!\!d_{p1}&{=}&(A_1{-}(A_2{-}\alpha_1)^+)^+,\IEEEyessubnumber\label{eq:dp1_423}\\
\!\!\!\!\!\!\text{\rm At Rx2:}\,\, d_c&{\leq}&d_c^{(2)}{\triangleq}3{-}2A_2{-}(A_2{-}\alpha_1)^+,\IEEEyessubnumber\label{eq:dc2_423}\\
\!\!\!\!\!\!d_{p2}&{=}&2A_2{+}(A_2{-}\alpha_1)^+.\IEEEyessubnumber\label{eq:dp2_423}
\end{IEEEeqnarray}
With the above achievable DoF tuple, we can see that when $\alpha_1{=}\alpha_2{=}0$, the sum DoF $3$ is achieved with $d_{p2}{=}3$, $d_c{=}0$ and $d_{p1}{=}0$. This result is consistent with the optimal sum DoF when there is no CSIT. Besides, when $\alpha_1{=}\alpha_2{=}1$, the sum DoF $4$ is achieved with $d_{p1}{=}1$, $d_{p2}{=}2$ and $d_c{=}1$. This result is consistent with the optimal sum DoF of the perfect CSIT case.
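These two sanity checks can be reproduced mechanically. The snippet below (an illustration only) evaluates \eqref{eq:dof423} at the corresponding power levels:
\begin{verbatim}
# Sanity check of the DoF tuple (dc, dp1, dp2) in the (4,2,3) BC.
def dof_423(A1, A2, a1):
    pos = lambda z: max(z, 0.0)
    dc = min(2 - max(A1, A2 - a1) - pos(A2 - a1),   # dc^(1)
             3 - 2 * A2 - pos(A2 - a1))             # dc^(2)
    dp1 = pos(A1 - pos(A2 - a1))
    dp2 = 2 * A2 + pos(A2 - a1)
    return dc, dp1, dp2

print(dof_423(0.0, 1.0, 0.0))  # no CSIT:      (0, 0, 3), sum 3
print(dof_423(1.0, 1.0, 1.0))  # perfect CSIT: (1, 1, 2), sum 4
\end{verbatim}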
Next, we characterize the achievable DoF region of the $(4{,}2{,}3)$ MIMO BC by finding the maximum achievable sum DoF. We first show the achievability of corner points $\mathcal{P}_{10}$ and $\mathcal{P}_{10^\prime}$ in the case $\Phi_{BC}{\leq}0$, and then show the achievability of corner points $\mathcal{P}_{12}$, $\mathcal{P}_{10^\prime}$ and $\mathcal{P}_{20}$ in the case $\Phi_{BC}{\geq}0$ by performing a Space-Time transmission.
\subsubsection{When $\alpha_1{\geq}\frac{1{+}\alpha_2}{2}$, i.e., $\Phi_{BC}{\leq}0$}
\begin{figure}[t]
\renewcommand{\captionfont}{\small}
\captionstyle{center}
\centering
\subfigure[$\alpha_1{\geq}\frac{1{+}\alpha_2}{2}$]
\centering
\includegraphics[width=0.24\textwidth,height=3cm]{BC423caseI}
\label{fig:BC423caseI}
\subfigure[$1{-}\alpha_2{\leq}\alpha_1{<}\frac{1{+}\alpha_2}{2}$]
\centering
\includegraphics[width=0.24\textwidth,height=3cm]{BC423caseIIa}
\label{fig:BC423caseIIa}
\\
\subfigure[$\frac{1{-}\alpha_2}{2}{\leq}\alpha_1{\leq}\min\{1{-}\alpha_2{,}\frac{1{+}\alpha_2}{2}\}$]
\centering
\includegraphics[width=0.24\textwidth,height=3cm]{BC423caseIIb}
\label{fig:BC423caseIIb}
\subfigure[$\alpha_1{\leq}\frac{1{-}\alpha_2}{2}$]
\centering
\includegraphics[width=0.24\textwidth,height=3cm]{BC423caseIIc}
\label{fig:BC423caseIIc}
\caption{Sum DoF of a $(4{,}2{,}3)$ MIMO BC}\label{fig:ds423}
\end{figure}
Let us define the achievable sum DoF as a function of the power levels, i.e., $d_s(A_1{,}A_2){\triangleq}\min\{d_s^{(1)}(A_2){,}d_s^{(2)}(A_1{,}A_2)\}$, where
\begin{IEEEeqnarray}{rcl}
d_s^{(1)}(A_2)&{=}&2{+}2A_2{-}(A_2{-}\alpha_1)^+,\IEEEyesnumber\IEEEyessubnumber\label{eq:ds1_423}\\
d_s^{(2)}(A_1{,}A_2)&{=}&3{+}(A_1{-}(A_2{-}\alpha_1)^+)^+,\IEEEyessubnumber\label{eq:ds2_423}
\end{IEEEeqnarray}
are obtained by summing \eqref{eq:dc1_423}, \eqref{eq:dp1_423}, \eqref{eq:dp2_423} and \eqref{eq:dp1_423}, \eqref{eq:dc2_423}, \eqref{eq:dp2_423}, respectively. Then, it can be shown that the power levels $(A_1^*{,}A_2^*){\triangleq}{\arg\max}d_s(A_1{,}A_2)$ that maximize the sum DoF are given by
\begin{equation}
A_1^*{=}\alpha_2{,}\quad A_2^*{=}\max\left\{\frac{1{+}\alpha_2}{2}{,}1{-}\alpha_1\right\},\label{eq:A1A2_423}
\end{equation}
because $d_s(A_1{,}A_2)$ increases with $A_1$, while $A_2$ is chosen such that the common-message-decodabilities at the two users are equalized, i.e., $d_s^{(1)}{=}d_s^{(2)}$ (or $d_c^{(1)}{=}d_c^{(2)}$). Figure \ref{fig:ds423} illustrates the maximum sum DoF for different values of $\alpha_1$ and $\alpha_2$ (the highest point of the red solid curve).
Here, as we are considering $\alpha_1{\geq}\frac{1{+}\alpha_2}{2}$, i.e., $\Phi_{BC}{\leq}0$, it can be verified that the sum DoF is maximized with $A_2^*{=}\frac{1{+}\alpha_2}{2}$ (as shown in Figure \ref{fig:BC423caseI}), which is smaller than $\alpha_1$. Plugging $A_2^*{=}\frac{1{+}\alpha_2}{2}$ and $A_1^*{=}\alpha_2$ into \eqref{eq:dp1_423}, \eqref{eq:dp2_423} and \eqref{eq:dc1_423} yields $d_{p1}{=}\alpha_2$, $d_{p2}{=}1{+}\alpha_2$ and $d_{c}{=}2{-}\alpha_2$. If the common message only carries information intended for user $1$ (resp. user $2$), we obtain the corner point $\mathcal{P}_{10}{=}(2{,}1{+}\alpha_2)$ (resp. $\mathcal{P}_{10^\prime}{=}(\alpha_2{,}3)$). Note that in this case, $L_{2{,}BC}$ in \eqref{eq:BCwsum} is inactive and the DoF region is formed by corner points $\mathcal{P}_{10}$ and $\mathcal{P}_{10^\prime}$.
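The optimality of \eqref{eq:A1A2_423} can be verified by brute force. The following sketch (an illustration only, for one assumed pair $(\alpha_1{,}\alpha_2)$ with $\Phi_{BC}{\leq}0$) compares a grid search over $(A_1{,}A_2)$ with the closed form:
\begin{verbatim}
# Brute-force check of the maximizer (eq:A1A2_423) for an assumed
# (a1, a2) with Phi_BC <= 0, i.e., a1 >= (1 + a2)/2.
import numpy as np

def ds(A1, A2, a1):
    pos = lambda z: max(z, 0.0)
    return min(2 + 2 * A2 - pos(A2 - a1),    # ds^(1)
               3 + pos(A1 - pos(A2 - a1)))   # ds^(2)

a1, a2 = 0.8, 0.4
grid = np.linspace(0, 1, 401)
best = max(((ds(A1, A2, a1), A1, A2)
            for A1 in grid if A1 <= a2 for A2 in grid),
           key=lambda t: t[0])
A2s = max((1 + a2) / 2, 1 - a1)
print("grid search:", best)                  # ~ (3.4, 0.4, 0.7)
print("closed form:", ds(a2, A2s, a1), (a2, A2s))
\end{verbatim}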
\subsubsection{When $\alpha_1{\leq}\frac{1{+}\alpha_2}{2}$, i.e., $\Phi_{BC}{\geq}0$}
In this case, as shown by the highest point of the red solid curves in Figure \ref{fig:BC423caseIIa}, \ref{fig:BC423caseIIb} and \ref{fig:BC423caseIIc}, the optimal $A_2^*$ in \eqref{eq:A1A2_423} is greater than or equal to $\alpha_1$. Notably, this fact contrasts with the power allocation in the MISO case, where choosing the power level $A_1{=}A_2{=}\min\{\alpha_1{,}\alpha_2\}$ suffices to achieve the maximal sum DoF. The reason for this observation is that, with $A_2^*{\geq}\alpha_1$, the transmitter exploits the larger spatial dimension at user $2$ by delivering $3$ private messages to user $2$, while the interference overheard by user $1$ spans only $2$ dimensions.
However, the sum DoF can be further improved by a Space-Time transmission when $\Phi_{BC}{\geq}0$, which leads to the corner point $\mathcal{P}_{10^\prime}$ and $\mathcal{P}_{12}$. The transmission lasts for $T$ time slots. Letting $A_{k{,}l}$ denote the power level chosen for user $k$ in slot $l$, we choose $(A_{1{,}l}{,}A_{2{,}l}){=}(\alpha_2{,}1)$ for $l{=}1{,}\cdots{,}{\rho}T$ and $(A_{1{,}l}{,}A_{2{,}l}){=}(\alpha_2{,}\alpha_1)$ for $l{=}{\rho}T{+}1{,}\cdots{,}T$, where $0{\leq}\rho{\leq}1$. Note that we consider that $T$ is a sufficiently large integer such that $\rho T$ is an integer as well. The decoding is performed focusing on the aggregate received signals, namely $\left[\mathbf{y}_{k}(1){,}\cdots{,}\mathbf{y}_{k}(T)\right]^T$. Then, by plugging these power levels into \eqref{eq:ds1_423} and \eqref{eq:ds2_423} and computing the average sum DoF over the total $T$ channel uses, we have $d_{s{,}ST}(\rho){\triangleq}\min\{d_{s{,}ST}^{(1)}(\rho){,}d_{s{,}ST}^{(2)}(\rho)\}$, where
\begin{IEEEeqnarray}{rcl}
d_{s{,}ST}^{(1)}(\rho)&{=}&\rho d_s^{(1)}(1){+}(1{-}\rho)d_s^{(1)}(\alpha_1),\IEEEyesnumber\IEEEyessubnumber\label{eq:ds1_423rho}\\
d_{s{,}ST}^{(2)}(\rho)&{=}&\rho d_s^{(2)}(\alpha_2{,}1){+}(1{-}\rho)d_s^{(2)}(\alpha_2{,}\alpha_1).\IEEEyessubnumber\label{eq:ds2_423rho}
\end{IEEEeqnarray}
In Figure \ref{fig:BC423caseIIb} and \ref{fig:BC423caseIIc}, $d_{s{,}ST}^{(2)}(\rho)$ is illustrated by the green dotted line with $d_{s{,}ST}^{(2)}(0){=}d_s^{(2)}(\alpha_2{,}\alpha_1)$ and $d_{s{,}ST}^{(2)}(1){=}d_s^{(2)}(\alpha_2{,}1)$. However, in Figure \ref{fig:BC423caseIIa}, $d_{s{,}ST}^{(2)}(\rho)$ coincides with $d_s^{(2)}(\alpha_2{,}A_2)$ because $d_s^{(2)}(\alpha_2{,}A_2)$ is linear within the range $A_2{\in}[\alpha_1{,}1]$. Besides, $d_{s{,}ST}^{(1)}(\rho)$ coincides with $d_s^{(1)}(A_2)$ in Figure \ref{fig:BC423caseIIa}, \ref{fig:BC423caseIIb} and \ref{fig:BC423caseIIc}. In all the three figures, the maximum sum DoF achieved with Space-Time transmission is obtained with $\rho^*$ such that $d_{s{,}ST}^{(1)}(\rho^*){=}d_{s{,}ST}^{(2)}(\rho^*)$ holds (see the diamond points). Compared to the sum DoF achieved without Space-Time transmission (i.e., the highest point on the red solid curve), we can read from Figure \ref{fig:BC423caseIIb} and \ref{fig:BC423caseIIc} that $d_{s{,}ST}(\rho^*){>}d_s(A_1^*{,}A_2^*)$. However, in Figure \ref{fig:BC423caseIIa}, we have $d_{s{,}ST}(\rho^*){=}d_s(A_1^*{,}A_2^*)$. Through some calculation, we present the choices of $\rho^*$ and the sum DoF achieved with and without Space-Time transmission in Table \ref{tab:ST}.
\begin{table*}[t]
\renewcommand{\arraystretch}{1.3}
\vspace{.6em}
\centering
\begin{tabular}{|c|c|c|}
\hline
Conditions & Without Space-Time Transmission & With Space-Time Transmission\\
\hline
a) & $A_2^*{=}\frac{1{+}\alpha_2}{2}$, $d_s{=}3{+}\alpha_2$ & N/A\\
\hline
b) & $A_2^*{=}\frac{1{+}\alpha_2}{2}$, $d_s{=}\frac{5{+}\alpha_2{+}2\alpha_1}{2}$ & $\rho^*{=}\frac{1{-}2\alpha_1{+}\alpha_2}{2{-}2\alpha_1}$ $d_s{=}\frac{5{+}\alpha_2{+}2\alpha_1}{2}$\\
\hline
c) & $A_2^*{=}\frac{1{+}\alpha_2}{2}$, $d_s{=}\frac{5{+}\alpha_2{+}2\alpha_1}{2}$ & $\rho^*{=}\frac{1{-}2\alpha_1{+}\alpha_2}{1{-}\alpha_1{+}\alpha_2}$ $d_s{=}3{+}\frac{\alpha_1\alpha_2}{1{-}\alpha_1{+}\alpha_2}$\\
\hline
d) & $A_2^*{=}1{-}\alpha_1$, $d_s{=}3$ & $\rho^*{=}\frac{1{-}2\alpha_1{+}\alpha_2}{1{-}\alpha_1{+}\alpha_2}$ $d_s{=}3{+}\frac{\alpha_1\alpha_2}{1{-}\alpha_1{+}\alpha_2}$\\
\hline
\end{tabular}
\caption{Sum DoF achieved with different schemes in a $(4{,}2{,}3)$ MIMO BC, where conditions a), b), c) and d) refer to the conditions of Figures \ref{fig:BC423caseI}, \ref{fig:BC423caseIIa}, \ref{fig:BC423caseIIb} and \ref{fig:BC423caseIIc}, respectively.}\label{tab:ST}
\vspace{-0mm}
\end{table*}
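The entries of Table \ref{tab:ST} can be double-checked numerically; e.g., for condition c), the sketch below (an illustration only, with assumed $(\alpha_1{,}\alpha_2)$) confirms that $\rho^*$ balances $d_{s{,}ST}^{(1)}$ and $d_{s{,}ST}^{(2)}$ and attains the claimed sum DoF:
\begin{verbatim}
# Check of row c) in Table (tab:ST) for an assumed (a1, a2) under
# condition c): rho* balances the aggregate decodabilities.
def pos(z): return max(z, 0.0)
def ds1(A2, a1): return 2 + 2 * A2 - pos(A2 - a1)
def ds2(A1, A2, a1): return 3 + pos(A1 - pos(A2 - a1))

a1, a2 = 0.5, 0.2                         # satisfies condition c)
rho = (1 - 2 * a1 + a2) / (1 - a1 + a2)   # claimed rho*
print(rho * ds1(1, a1) + (1 - rho) * ds1(a1, a1))          # d^(1)
print(rho * ds2(a2, 1, a1) + (1 - rho) * ds2(a2, a1, a1))  # d^(2)
print(3 + a1 * a2 / (1 - a1 + a2))        # claimed sum DoF
\end{verbatim}
All three printed values coincide ($\approx 3.1429$ for the assumed pair), as expected.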
With the power allocation across the $T$ slots, we can compute
\begin{IEEEeqnarray}{rcl}
d_{p1}&{=} & \rho^*(\alpha_1{+}\alpha_2{-}1)^+{+}(1{-}\rho^*)\alpha_2{,}\IEEEyesnumber\IEEEyessubnumber\\
d_{p2}&{=} & \rho^*(3{-}\alpha_1){+}(1{-}\rho^*)\cdot2\alpha_1{,}\IEEEyessubnumber\\
d_c&{=} & \rho^*\alpha_1{+}(1{-}\rho^*)(3{-}2\alpha_1),\IEEEyessubnumber
\end{IEEEeqnarray}
where $\rho^*$ is given in Table \ref{tab:ST}. Considering that the common message only carries information intended for Rx$1$ and Rx$2$, we obtain the corner points $\mathcal{P}_{12}$ and $\mathcal{P}_{10^\prime}$ in Figure \ref{fig:BCcase2}, respectively.
For completeness, the corner point $\mathcal{P}_{20}{=}(2{,}2\alpha_1)$ is achievable by substituting $A_1{=}\alpha_2$ and $A_2{=}\alpha_1$ into \eqref{eq:dc1_423} through to \eqref{eq:dp2_423}, and assuming that the common message only carries information for Rx$1$.
\subsection{RS scheme for the asymmetric case: Unified Framework}\label{sec:BCuni}
In this part, we consider the asymmetric MIMO case with $M{\leq}N_1{+}N_2$ and $M{\geq}N_2{\geq}N_1$, as the achievability for other cases can be shown by switching off the redundant transmit/receive antennas. Motivated by the $(4{,}2{,}3)$ MIMO BC example in the last subsection, the transmission block is constructed as follows.
\begin{itemize}
\item $M{-}N_2$ private symbols, denoted by $\mathbf{u}_1{\in}\mathbb{C}^{(M{-}N_2){\times}1}$, are sent to Rx1 with power exponent $A_1$ along a ZF-precoder $\mathbf{V}_1{=}\mathbf{H}_2^{\bot}{\in}\mathbb{C}^{M{\times}(M{-}N_2)}$ ;
\item $M{-}N_1$ private symbols, denoted by $\mathbf{u}_2^{(1)}{\in}\mathbb{C}^{(M{-}N_1){\times}1}$, are sent to Rx2 with power exponent $A_2$ along a ZF-precoder $\mathbf{V}_2^{(1)}{=}\mathbf{H}_1^{\bot}{\in}\mathbb{C}^{M{\times}(M{-}N_1)}$;
\item $N_1{+}N_2{-}M$ private symbols, denoted by $\mathbf{u}_2^{(2)}{\in}\mathbb{C}^{(N_1{+}N_2{-}M){\times}1}$, are sent to Rx2 along a precoder $\mathbf{V}_2^{(2)}{\in}\mathbb{C}^{M{\times}(N_1{+}N_2{-}M)}$ in the subspace spanned by $\hat{\mathbf{H}}_2$. Its power exponent is $(A_2{-}\alpha_1)^+$.
\item A common message, denoted by $\mathbf{c}{\in}\mathbb{C}^{M{\times}1}$, is multicast using the remaining power.
\end{itemize}
The power exponents are defined as $0{\leq}A_1{\leq}\alpha_2$ and $0{\leq}A_2{\leq}1$. Mathematically, the transmitted and received signals write as
\begin{IEEEeqnarray}{rcl}
\!\!\!\!\!\!\!\!\mathbf{s}&{=}&\underbrace{\mathbf{c}}_{P}{+}
\underbrace{\mathbf{V}_1\mathbf{u}_1}_{P^{A_1}}{+}
\underbrace{\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}}_{P^{A_2}}{+}
\underbrace{\mathbf{V}_2^{(2)}\mathbf{u}_2^{(2)}}_{P^{(A_2{-}\alpha_1)^+}}{,}\IEEEyesnumber\IEEEyessubnumber\label{eq:JMBBC}\\
\!\!\!\!\!\!\!\!\mathbf{y}_1&{=}&\underbrace{\mathbf{H}_1^H\mathbf{c}}_{P}{+}
\underbrace{\mathbf{H}_1^H\mathbf{V}_1\mathbf{u}_1}_{P^{A_1}}{+}
\underbrace{\mathbf{H}_1^H\left(\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}{+}
\mathbf{V}_2^{(2)}\mathbf{u}_2^{(2)}\right)}_{P^{(A_2{-}\alpha_1)^+}}{,}\IEEEyessubnumber\label{eq:y1BC}\\
\!\!\!\!\!\!\!\!\mathbf{y}_2&{=}&\underbrace{\mathbf{H}_2^H\mathbf{c}}_{P}{+}
\underbrace{\mathbf{H}_2^H\mathbf{V}_1\mathbf{u}_1}_{P^{A_1{-}\alpha_2}}{+}
\underbrace{\mathbf{H}_2^H\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}}_{P^{A_2}}{+}
\underbrace{\mathbf{H}_2^H\mathbf{V}_2^{(2)}\mathbf{u}_2^{(2)}}_{P^{(A_2{-}\alpha_1)^+}}{.}\IEEEyessubnumber\label{eq:y2BC}
\end{IEEEeqnarray}
For the MACs given in \eqref{eq:y1BC} and \eqref{eq:y2BC}, using the proof presented in Appendix A, the common message and private messages are successfully decoded if the DoF tuple lies in
\begin{IEEEeqnarray}{rcl}\label{eq:dofnew}
\!\!\!\!\text{\rm At Rx1:}\,\, d_{p1}&{=}&(M{-}N_2)(A_1{-}(A_2{-}\alpha_1)^+)^+,\IEEEyesnumber\IEEEyessubnumber\label{eq:dp1BC}\\
\!\!\!\!d_c&{\leq}&d_c^{(1)}{\triangleq}N_1{-}(M{-}N_2)\max\{A_1{,}A_2{-}\alpha_1\}{-}\nonumber\\
\!\!\!\!&&(N_1{+}N_2{-}M)(A_2{-}\alpha_1)^+,\IEEEyessubnumber\label{eq:dc1BC}\\
\!\!\!\!\text{\rm At Rx2:}\,\, d_{p2}&{=}&(M{-}N_1)A_2{+}(N_1{+}N_2{-}M)(A_2{-}\alpha_1)^+{,}\IEEEyessubnumber\label{eq:dp2BC}\\
\!\!\!\!d_c&{\leq}&d_c^{(2)}{\triangleq}N_2{-}(M{-}N_1)A_2{-}\nonumber\\
\!\!\!\!&&(N_1{+}N_2{-}M)(A_2{-}\alpha_1)^+.\IEEEyessubnumber\label{eq:dc2BC}
\end{IEEEeqnarray}
Following the footsteps of the $(4{,}2{,}3)$ example, we find that the sum DoF without Space-Time transmission is maximized with the power exponents $A_1^*{=}\alpha_2$ and
\begin{equation}
A_2^*{=}\max\left\{\frac{N_2{-}N_1{+}(M{-}N_2)\alpha_2}{M{-}N_1}{,}1{-}\frac{M{-}N_2}{N_2{-}N_1}\alpha_1\right\}.\label{eq:A2tmp}
\end{equation}
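As a sanity check, specializing \eqref{eq:A2tmp} to $(M{,}N_1{,}N_2){=}(4{,}2{,}3)$ recovers \eqref{eq:A1A2_423}:
\begin{equation*}
A_2^*{=}\max\left\{\frac{3{-}2{+}(4{-}3)\alpha_2}{4{-}2}{,}1{-}\frac{4{-}3}{3{-}2}\alpha_1\right\}{=}\max\left\{\frac{1{+}\alpha_2}{2}{,}1{-}\alpha_1\right\}.
\end{equation*}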
\subsubsection{When $\alpha_1{\geq}\frac{N_2{-}N_1{+}(M{-}N_2)\alpha_2}{M{-}N_1}$, i.e., $\Phi_{BC}{\leq}0$} In this case, choosing $A_2^*{=}\alpha_1^\prime{=}\frac{N_2{-}N_1{+}(M{-}N_2)\alpha_2}{M{-}N_1}{\leq}\alpha_1$ and $A_1^*{=}\alpha_2$ allows us to achieve the maximum sum DoF $N_2{+}(M{-}N_2)\alpha_2$. If $\mathbf{c}$ only carries information intended for Rx1 (resp. Rx2), the corner point $\mathcal{P}_{10}{=}(N_1{,}(M{-}N_1)\alpha_1^\prime)$ (resp. $\mathcal{P}_{10^\prime}{=}((M{-}N_2)\alpha_2{,}N_2)$) in Figure \ref{fig:BCcase1} is achieved.
\subsubsection{When $\alpha_1{\leq}\frac{N_2{-}N_1{+}(M{-}N_2)\alpha_2}{M{-}N_1}$, i.e., $\Phi_{BC}{\geq}0$} In this case, similar to the $(4{,}2{,}3)$ example, we further enhance the sum DoF by performing a Space-Time transmission, where the power exponents are $(A_1{,}A_2){=}(\alpha_2{,}1)$ for a fraction $\rho$ of the total time, while the power exponents are $(A_1{,}A_2){=}(\alpha_2{,}\alpha_1)$ for the rest of the time. The sum DoF is maximized by choosing the optimal $\rho{=}\rho_{BC}^*$ such that the common message decodabilities at the two receivers are balanced (focusing on the aggregate received signals). We present the value of $\rho_{BC}^*$ as
\begin{equation}
\!\!\!\rho_{BC}^*{=} \frac{(M{-}N_1)(1{-}\alpha_1){-}(M{-}N_2)(1{-}\alpha_2)}{(N_2{-}N_1)(1{-}\alpha_1){+}(M{-}N_2)(\alpha_2{-}(\alpha_2{+}\alpha_1{-}1)^+)},
\end{equation}
while the derivation is omitted as it follows the same footsteps as the $(4{,}2{,}3)$ example. Then, the achievable DoF tuple writes as
\begin{IEEEeqnarray}{rcl}
d_{p1{,}ST}(\rho_{BC}^*)&{=}&(M{-}N_2)\left[\rho_{BC}^*(\alpha_1{+}\alpha_2{-}1)^+{+}\right.\nonumber\\
&&\left.(1{-}\rho_{BC}^*)\alpha_2\right], \IEEEyesnumber\IEEEyessubnumber\label{eq:dp1TBC}\\
d_{p2{,}ST}(\rho_{BC}^*)&{=}&\rho_{BC}^*\left(N_2{-}(N_1{+}N_2{-}M)\alpha_1\right){+}\nonumber\\
&&(1{-}\rho_{BC}^*)(M{-}N_1)\alpha_1{,} \IEEEyessubnumber\label{eq:dp2TBC}\\
d_{c{,}ST}^{(2)}(\rho_{BC}^*)&{=}&\rho_{BC}^*(N_2{+}N_1{-}M)\alpha_1{+}\nonumber\\
&&(1{-}\rho_{BC}^*)\left(N_2{-}(M{-}N_1)\alpha_1\right). \IEEEyessubnumber\label{eq:dc2TBC}
\end{IEEEeqnarray}
If $\mathbf{c}$ only carries information intended for Rx1 and Rx2, the corner points $\mathcal{P}_{12}$ and $\mathcal{P}_{10^\prime}$ in Figure \ref{fig:BCcase2} are obtained, respectively.
For completeness, it remains to achieve the corner point $\mathcal{P}_{20}$ in Figure \ref{fig:BCcase2}. Using the new RS scheme, substituting $A_k{=}\alpha_j$ into \eqref{eq:dofnew} yields $d_c{=}N_1{-}(M{-}N_2)\alpha_2$ and $d_{pk}{=}(M{-}N_j)\alpha_j$ for $k{,}j{=}1{,}2{,}k{\neq}j$. Then, corner point $\mathcal{P}_{20}{=}(N_1{,}(M{-}N_1)\alpha_1)$ is immediate if $\mathbf{c}$ is intended for Rx1. Linking $\mathcal{P}_{20}$ and $\mathcal{P}_{12}$ yields $L_2$ in Proposition \ref{prop:BC}.
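Consistently, $\rho_{BC}^*$ reduces to the $\rho^*$ of Table \ref{tab:ST} when $(M{,}N_1{,}N_2){=}(4{,}2{,}3)$; the snippet below (an illustration only, with assumed CSIT qualities) verifies this reduction:
\begin{verbatim}
# rho*_BC reduces to rho* of Table (tab:ST) for (M,N1,N2)=(4,2,3)
# and a1 + a2 <= 1 (rows c and d); CSIT values are assumed.
def pos(z): return max(z, 0.0)

def rho_bc(M, N1, N2, a1, a2):
    num = (M - N1) * (1 - a1) - (M - N2) * (1 - a2)
    den = (N2 - N1) * (1 - a1) + (M - N2) * (a2 - pos(a2 + a1 - 1))
    return num / den

a1, a2 = 0.5, 0.2
print(rho_bc(4, 2, 3, a1, a2))            # general formula
print((1 - 2 * a1 + a2) / (1 - a1 + a2))  # (4,2,3) expression
\end{verbatim}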
\subsection{Case I: $M_1{\geq}N_2$}\label{sec:icI}
In this part, for convenience, we employ the notation $N_2^\prime{\triangleq}\min\{M_2{,}N_2\}$. Since $M_2{\geq}N_1$ and $M_1{\geq}N_2$, we point out that this antenna configuration yields a scenario similar to the BC based on the following facts: 1) the desired signal of each receiver is completely mixed with the interference signal, and 2) both transmitters are able to deliver ZF-precoded private messages in the null space of the cross-link. Accordingly, we build the RS scheme similar to that in the asymmetric MIMO BC but in a distributed manner. Specifically, the transmitted signals write as
\begin{IEEEeqnarray}{rcl}\label{eq:ICs}
\mathbf{s}_1&{=}&\underbrace{\mathbf{c}_1}_{P}{+}\underbrace{\mathbf{V}_1\mathbf{u}_1}_{P^{A_1}}{,} \IEEEyesnumber\IEEEyessubnumber\label{eq:s1caseI}\\
\mathbf{s}_2&{=}&\underbrace{\mathbf{c}_2}_{P}{+}\underbrace{\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}}_{P^{A_2}}{+}
\underbrace{\mathbf{V}_2^{(2)}\mathbf{u}_2^{(2)}}_{P^{(A_2{-}\alpha_1)^+}}{,}\IEEEyessubnumber\label{eq:s2caseI}
\end{IEEEeqnarray}
where $\mathbf{V}_1{=}\hat{\mathbf{H}}_{21}^\bot$ and $\mathbf{V}_2^{(1)}{=}\hat{\mathbf{H}}_{12}^\bot$ are ZF-precoders, $\mathbf{u}_1{\in}\mathbb{C}^{(M_1{-}N_2){\times}1}$ and $\mathbf{u}_2^{(1)}{\in}\mathbb{C}^{(M_2{-}N_1){\times}1}$ are the ZF-precoded private symbols intended for Rx1 and Rx2, respectively, while $\mathbf{u}_2^{(2)}{\in}\mathbb{C}^{(N_1{+}N_2^\prime{-}M_2){\times}1}$ is precoded with the full rank matrix $\mathbf{V}_2^{(2)}{\in}\mathbb{C}^{M_2{\times}(N_1{+}N_2^\prime{-}M_2)}$ in the subspace of $\hat{\mathbf{H}}_{22}$. The power exponents are defined as $A_1{\in}[0{,}\alpha_2]$ and $A_2{\in}[0{,}1]$. Unlike the BC case, where the common messages are generally denoted by $\mathbf{c}$, we introduce $\mathbf{c}_k$ to denote the common message that carries information intended for Rx$k$, $k{=}1{,}2$, as $\mathbf{c}_1$ and $\mathbf{c}_2$ are transmitted from different transmitters. The resultant received signals are expressed as
\begin{IEEEeqnarray}{rcl}\label{eq:ICy}
\!\!\!\!\!\!\mathbf{y}_1&{=}&\underbrace{\mathbf{H}_{11}^H\mathbf{c}_1}_{P}{+}\underbrace{\mathbf{H}_{12}^H\mathbf{c}_2}_{P}{+}
\underbrace{\mathbf{H}_{11}^H\mathbf{V}_1\mathbf{u}_1}_{P^{A_1}}{+}
\underbrace{\boldsymbol\eta_1}_{P^{(A_2{-}\alpha_1)^+}}
,\IEEEyesnumber\IEEEyessubnumber\label{eq:y1caseI}\\
\!\!\!\!\!\!\mathbf{y}_2&{=}&\underbrace{\mathbf{H}_{21}^H\mathbf{c}_1}_{P}\!\!{+}\underbrace{\mathbf{H}_{22}^H\mathbf{c}_2}_{P}{+}\!\!\!\!\!\!
\underbrace{\boldsymbol\eta_2}_{P^{A_1{-}\alpha_2}}\!\!\!\!\!\!{+}
\underbrace{\mathbf{H}_{22}^H\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}}_{P^{A_2}}\!\!\!{+}
\underbrace{\mathbf{H}_{22}^H\mathbf{V}_2^{(2)}\mathbf{u}_2^{(2)}}_{P^{(A_2{-}\alpha_1)^+}}{,}\IEEEyessubnumber\label{eq:y2caseI}
\end{IEEEeqnarray}
where $\boldsymbol\eta_1{\triangleq}\mathbf{H}_{12}^H\left(\mathbf{V}_2^{(1)}\mathbf{u}_2^{(1)}{+}\mathbf{V}_2^{(2)}\mathbf{u}_2^{(2)}\right)$ and $\boldsymbol\eta_2{\triangleq}\mathbf{H}_{21}^H\mathbf{V}_1\mathbf{u}_1$.
Following the derivations in Appendix A, the MACs in \eqref{eq:y1caseI} and \eqref{eq:y2caseI} yield the following achievable DoF tuple
\begin{IEEEeqnarray}{rcl}\label{eq:dofcaseI}
\text{\rm At Rx1:} \quad d_{c1}&{\leq}&N_1{-}(M_1{-}N_2)\max\{A_1{,}A_2{-}\alpha_1\}{-}\nonumber\\
&&(N_1{+}N_2{-}M_1)(A_2{-}\alpha_1)^+,\IEEEyesnumber\IEEEyessubnumber\label{eq:dc1y1caseI}\\
d_{c2}&{\leq}&\text{\rm r.h.s. of \eqref{eq:dc1y1caseI}},\IEEEyessubnumber\label{eq:dc2y1caseI}\\
d_{c1}{+}d_{c2}&{\leq}&\text{\rm r.h.s. of \eqref{eq:dc1y1caseI}},\IEEEyessubnumber\label{eq:dcsy1caseI}\\
d_{p1}&{=}&(M_1{-}N_2)(A_1{-}(A_2{-}\alpha_1)^+)^+.\IEEEyessubnumber\label{eq:dp1caseI}\\
\text{\rm At Rx2:}\quad d_{c1}&{\leq}&N_2{-}(M_2{-}N_1)A_2{-}\nonumber\\
&&(N_1{+}N_2^\prime{-}M_2)(A_2{-}\alpha_1)^+\!\!,\IEEEyessubnumber\label{eq:dc1y2caseI}\\
d_{c2}&{\leq}&N_2^\prime{-}(M_2{-}N_1)A_2{-}\nonumber\\
&&(N_1{+}N_2^\prime{-}M_2)(A_2{-}\alpha_1)^+\!\!,
\IEEEyessubnumber\label{eq:dc2y2caseI}\\
d_{c1}{+}d_{c2}&{\leq}&\text{\rm r.h.s. of \eqref{eq:dc1y2caseI}},\IEEEyessubnumber\label{eq:dcsy2caseI}\\
d_{p2}&{=}&(M_2{-}N_1)A_2{+}\nonumber\\
&&(N_1{+}N_2^\prime{-}M_2)(A_2{-}\alpha_1)^+.\IEEEyessubnumber\label{eq:dp2caseI}
\end{IEEEeqnarray}
Next, let us proceed to discuss the achievability of the corner points in Figure \ref{fig:M2leqN2} when $M_2{\leq}N_2$ and the corner points in Figure \ref{fig:I2b} and \ref{fig:I2a} when $M_2{>}N_2$, because some of the constraints in \eqref{eq:dofcaseI} become inactive in each particular case, which improves the tractability of the analysis.
\subsubsection{Case I.1: $M_1{\geq}N_2$ and $M_2{\leq}N_2$}
In this case, we have $N_2^\prime{=}\min\{M_2{,}N_2\}{=}M_2$. It can be shown that the r.h.s. of \eqref{eq:dc2y2caseI} is greater than or equal to the r.h.s. of \eqref{eq:dc2y1caseI} for any values of $A_1$ and $A_2$. Therefore, \eqref{eq:dc1y2caseI}, \eqref{eq:dc2y2caseI} and \eqref{eq:dcsy2caseI} become inactive. In this way, from \eqref{eq:dc1y1caseI}, \eqref{eq:dc2y1caseI} and \eqref{eq:dcsy1caseI}, we can see that if only $\mathbf{c}_1$ is transmitted (i.e., $d_{c2}{=}0$), we achieve
\begin{multline}\label{eq:pair1caseI1}
(d_{c1}{+}d_{p1}{,}d_{p2}){=}\left(N_1(1{-}(A_2{-}\alpha_1)^+){,}\right. \\
\left.(M_2{-}N_1)A_2{+}N_1(A_2{-}\alpha_1)^+\right).
\end{multline}
If only $\mathbf{c}_2$ is transmitted (i.e., $d_{c1}{=}0$), we achieve
\begin{multline}\label{eq:pair2caseI1}
(d_{p1}{,}d_{c2}{+}d_{p2}){=}\left((M_1{-}N_2)(A_1{-}(A_2{-}\alpha_1)^+)^+{,}\right. \\
\left.N_1{+}(M_2{-}N_1)A_2{-}(M_1{-}N_2)(A_1{-}(A_2{-}\alpha_1)^+)^+\right).
\end{multline}
Clearly, the DoF pairs in \eqref{eq:pair1caseI1} and \eqref{eq:pair2caseI1} yield the sum DoF $N_1{+}(M_2{-}N_1)A_2$. Choosing $A_2{=}1$ yields the maximum sum DoF $d_1{+}d_2{=}M_2$. With the power levels $A_2{=}1$ and $A_1{\leq}1{-}\alpha_1$, the corner points $\mathcal{P}_{12}{=}(N_1\alpha_1{,}M_2{-}N_1\alpha_1)$ and $\mathcal{P}_{10^\prime}{=}(0{,}M_2)$ in Figure \ref{fig:M2leqN2} are obtained using \eqref{eq:pair1caseI1} and \eqref{eq:pair2caseI1}, respectively. Besides, substituting $A_2{=}\alpha_1$ into \eqref{eq:pair1caseI1} yields the corner point $\mathcal{P}_{20}{=}(N_1{,}(M_2{-}N_1)\alpha_1)$ illustrated in Figure \ref{fig:M2leqN2}. Linking $\mathcal{P}_{20}$ with $\mathcal{P}_{12}$ yields $L_{2{,}IC_1}$ in Proposition \ref{prop:IC1} (see Figure \ref{fig:M2leqN2}).
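The corner points above follow directly from \eqref{eq:pair1caseI1} and \eqref{eq:pair2caseI1}; the sketch below (with assumed antenna and CSIT values satisfying the conditions of Case I.1) evaluates them:
\begin{verbatim}
# Corner points of Case I.1 from (eq:pair1caseI1)-(eq:pair2caseI1);
# the antenna and CSIT values below are assumed.
def pos(z): return max(z, 0.0)

M1, M2, N1, N2 = 4, 3, 2, 3      # M1 >= N2 >= M2 >= N1
a1, a2 = 0.5, 0.4

def pair1(A2):                    # only c1 is transmitted
    return (N1 * (1 - pos(A2 - a1)),
            (M2 - N1) * A2 + N1 * pos(A2 - a1))

def pair2(A1, A2):                # only c2 is transmitted
    dp1 = (M1 - N2) * pos(A1 - pos(A2 - a1))
    return (dp1, N1 + (M2 - N1) * A2 - dp1)

print(pair1(1.0))       # P12  = (N1*a1, M2 - N1*a1) = (1.0, 2.0)
print(pair2(0.0, 1.0))  # P10' = (0, M2)             = (0.0, 3.0)
print(pair1(a1))        # P20  = (N1, (M2-N1)*a1)    = (2.0, 0.5)
\end{verbatim}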
\subsubsection{Case I.2: $M_1{\geq}N_2$ and $M_2{\geq}N_2$}
In this case, we have $N_2^\prime{=}\min\{M_2{,}N_2\}{=}N_2$ and the r.h.s. of \eqref{eq:dc2y2caseI} becomes equal to the r.h.s. of \eqref{eq:dc1y2caseI} and the r.h.s. of \eqref{eq:dcsy2caseI}. Therefore, it is similar to the BC case (see \eqref{eq:dofnew}), namely that the DoF of the common messages, i.e., $d_{c1}$ and $d_{c2}$, are subject to the sum DoF constraints \eqref{eq:dcsy1caseI} and \eqref{eq:dcsy2caseI}. Then, we derive the achievable DoF region following the footsteps in the BC case.
\begin{enumerate}
\item When $\Phi_{IC}{\leq}0$, we find that the maximum sum DoF without space-time transmission is achieved with $A_1{=}\alpha_2$ and $A_2{=}\alpha_1^\prime{\triangleq}\frac{N_2{-}N_1{+}(M_1{-}N_2)\alpha_2}{M_2{-}N_1}{\leq}\alpha_1$. Plugging $A_1^*{=}\alpha_2$ and $A_2^*{=}\alpha_1^\prime$ into \eqref{eq:dofcaseI}, we can see that if only $\mathbf{c}_1$ is transmitted (i.e., setting $d_{c2}{=}0$), $\mathcal{P}_{10}{=}(N_1{,}(M_2{-}N_1)\alpha_1^\prime)$ in Figure \ref{fig:I2b} is achievable; if only $\mathbf{c}_2$ is transmitted (i.e., setting $d_{c1}{=}0$), $\mathcal{P}_{10^\prime}{=}((M_1{-}N_2)\alpha_2{,}N_2)$ in Figure \ref{fig:I2b} is achievable.
\item When $\Phi_{IC}{>}0$ and $\frac{M_1{-}M_2}{M_1{-}N_2}\alpha_1{\leq}1{-}\alpha_2$, we perform a Space-Time transmission, where the power exponents are $(A_1{,}A_2){=}(\alpha_2{,}1)$ for a fraction $\rho$ of the total time, while the power exponents are $(A_1{,}A_2){=}(\alpha_2{,}\alpha_1)$ for the rest of the time. The sum DoF is maximized by choosing the optimal $\rho{=}\rho_{IC}^*$ such that the common message decodabilities at the two receivers are balanced (focusing on the aggregate received signals). We present the value of $\rho_{IC}^*$ as
{\small
\begin{equation}
\rho_{IC}^*{=}\frac{(M_2{-}N_1)(1{-}\alpha_1){-}(M_1{-}N_2)(1{-}\alpha_2){+}M_1{-}M_2}
{(N_2{-}N_1)(1{-}\alpha_1){+}(M_1{-}N_2)(\alpha_2{-}(\alpha_2{+}\alpha_1{-}1)^+)}.\label{eq:TIC}
\end{equation}}
Then, the achievable DoF tuple can be obtained by
\begin{IEEEeqnarray}{rcl}
d_{p1{,}ST}&{=}&(M_1{-}N_2)\left[\rho_{IC}^*(\alpha_1{+}\alpha_2{-}1)^+{+}\right.\nonumber\\
&&\left.(1{-}\rho_{IC}^*)\alpha_2\right],
\IEEEyesnumber\IEEEyessubnumber\label{eq:dp1_T}\\ d_{p2{,}ST}&{=}&\rho_{IC}^*\left(N_2{-}(N_1{+}N_2{-}M_2)\alpha_1\right){+}\nonumber\\
&&(1{-}\rho_{IC}^*)(M_2{-}N_1)\alpha_1,
\IEEEyessubnumber\label{eq:dp2_T}\\
d_{c1{,}ST}{+}d_{c2{,}ST}&{=}&\rho_{IC}^*(N_2{+}N_1{-}M_2)\alpha_1{+}\nonumber\\
&&(1{-}\rho_{IC}^*)\left(N_2{-}(M_2{-}N_1)\alpha_1\right).\IEEEyessubnumber\label{eq:dcs2_T}
\end{IEEEeqnarray}
For completeness, the corner point $\mathcal{P}_{20}{=}(N_1{,}(M_2{-}N_1)\alpha_1)$ in Figure \ref{fig:I2a} is achieved by plugging $A_k{=}\alpha_j$ into \eqref{eq:dc1y1caseI}, \eqref{eq:dc1y2caseI}, \eqref{eq:dp1caseI} and \eqref{eq:dp2caseI}, and considering that only $\mathbf{c}_1$ is transmitted. Linking $\mathcal{P}_{20}$ and $\mathcal{P}_{12}$ yields $L_2$ in Proposition \ref{prop:IC1}.
\item When $\Phi_{IC}{\geq}0$ and $\frac{M_1{-}M_2}{M_1{-}N_2}\alpha_1{>}1{-}\alpha_2$, Rx2 has a greater common-message-decodability than Rx1 with both of the power exponents $(A_1{,}A_2){=}(\alpha_2{,}\alpha_1)$ and $(A_1{,}A_2){=}(\alpha_2{,}1)$. This fact prevents the Space-Time transmission from benefiting the sum DoF performance. In this case, using \eqref{eq:dofcaseI}, we learn that the maximum sum DoF can be achieved by choosing $A_1^*{=}1{-}\frac{M_1{-}M_2}{M_1{-}N_2}\alpha_1{<}\alpha_2$ and $A_2^*{=}1$. With that power allocation policy, $\alpha_{0{,}IC}$ (the third line in \eqref{eq:ICalpha0}) is immediate and the corner point $\mathcal{P}_{12}$ (resp. $\mathcal{P}_{10^\prime}$) is obtained if only $\mathbf{c}_1$ (resp. $\mathbf{c}_2$) is transmitted. For completeness, the achievability of the corner point $\mathcal{P}_{20}$ follows that of the previous case.
\end{enumerate}
\subsection{Case II: $M_1{\leq}N_2$}\label{sec:icII}
\input{ICcaseII_2col}
\subsection{Achievable DoF region of the related MAC}
We aim to show the achievable DoF tuples specified in \eqref{eq:dofnew}, \eqref{eq:dofcaseI} and \eqref{eq:dofcaseII} following the proof in \cite{xinping_mimo}. Without loss of generality, let us write the received signal at Rx$k$ as
\begin{IEEEeqnarray}{rcl}
\text{\rm BC:}\,\mathbf{y}_k&{=}&\mathbf{H}_k^H\mathbf{c}{+}\mathbf{H}_k^H\mathbf{x}_k{+}\boldsymbol\eta_{k{,}BC},\IEEEyesnumber\IEEEyessubnumber\\
\text{\rm IC:}\,\mathbf{y}_k&{=}&\mathbf{H}_{kk}^H\mathbf{c}_k{+}\mathbf{H}_{kj}^H\mathbf{c}_j{+}\mathbf{H}_{kk}^H\mathbf{x}_k{+}\boldsymbol\eta_{k{,}IC},
\IEEEyessubnumber\label{eq:ygeneral}
\end{IEEEeqnarray}
where $\mathbf{x}_k$ refers to the precoded private messages intended for Rx$k$, transmitted by the Tx in the BC and by Tx$k$ in the IC, while $\boldsymbol\eta_{k{,}BC}{\triangleq}\mathbf{H}_k^H\mathbf{x}_j{+}\mathbf{n}_k{,}k{\neq}j$ and $\boldsymbol\eta_{k{,}IC}{\triangleq}\mathbf{H}_{kj}^H\mathbf{x}_j{+}\mathbf{n}_k{,}k{\neq}j$ represent the interference plus noise in BC and IC, respectively. In the following, let us only focus on \eqref{eq:ygeneral}, as the derivation for the BC case follows similarly by simply taking $\mathbf{H}_{kk}{=}\mathbf{H}_{kj}$. For convenience, let us use $\boldsymbol\eta_k$ instead of $\boldsymbol\eta_{k{,}IC}$.
\begin{figure*}
\begin{IEEEeqnarray}{rcl} \label{eq:rate_conditions}
\!\!\!\!\!\!\!\!\!R_{ck}&{\leq}&I(\mathbf{c}_k;\mathbf{y}_k|\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k){=}
h(\mathbf{y}_k|\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k),
\IEEEyesnumber\IEEEyessubnumber\label{eq:Rck}\\
\!\!\!\!\!\!\!\!\!R_{cj}&{\leq}&I(\mathbf{c}_j;\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{x}_k{,}\mathcal{H}_k){=}
h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{x}_k{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k),
\IEEEyessubnumber\label{eq:Rcj}\\
\!\!\!\!\!\!\!\!\!R_{pk}&{\leq}&I(\mathbf{x}_k;\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathcal{H}_k){=}
h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k),
\IEEEyessubnumber\label{eq:Rpk}\\
\!\!\!\!\!\!\!\!\!R_{pk}{+}R_{ck}&{\leq}&I(\mathbf{x}_k{,}\mathbf{c}_k;\mathbf{y}_k|\mathbf{c}_j{,}\mathcal{H}_k){=}
h(\mathbf{y}_k|\mathbf{c}_j{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k),
\IEEEyessubnumber\label{eq:Rckpk}\\
\!\!\!\!\!\!\!\!\!R_{pk}{+}R_{cj}&{\leq}&I(\mathbf{x}_k{,}\mathbf{c}_j;\mathbf{y}_k|\mathbf{c}_k{,}\mathcal{H}_k){=}
h(\mathbf{y}_k|\mathbf{c}_k{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k),
\IEEEyessubnumber\label{eq:Rcjpk}\\
\!\!\!\!\!\!\!\!\!R_{ck}{+}R_{cj}{+}R_{pk}&{\leq}&I(\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k;\mathbf{y}_k|\mathcal{H}_k){=}
h(\mathbf{y}_k|\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathbf{x}_k{,}\mathcal{H}_k),
\IEEEyessubnumber\label{eq:Rckcjpk}
\end{IEEEeqnarray}
\hrulefill
\end{figure*}
As pointed out in \cite{xinping_mimo}, the MIMO system in \eqref{eq:ygeneral} is a MAC as Rx$k$ aims to decode $\mathbf{c}_k$, $\mathbf{c}_j$ and $\mathbf{x}_k$. Then, according to \cite{network_info}, a rate tuple $(R_{c1}{,}R_{c2}{,}R_{pk})$ is achievable if \eqref{eq:rate_conditions} hold for any input distribution $p_{\mathbf{x}_k{,}\mathbf{c}_k{,}\mathbf{c}_j}{=}p_{\mathbf{x}_k}p_{\mathbf{c}_k}p_{\mathbf{c}_j}$, where $\mathcal{H}_k{\triangleq}\{\mathbf{H}_{kk}{,}\mathbf{H}_{kj}\}$ denotes the channel state. By setting $R_{pk}$ equal to the r.h.s. of \eqref{eq:Rpk} and plugging it into \eqref{eq:Rckpk}, \eqref{eq:Rcjpk} and \eqref{eq:Rckcjpk}, we have
\begin{IEEEeqnarray}{rcl}
R_{ck}&{\leq}&h(\mathbf{y}_k|\mathbf{c}_j{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathcal{H}_k),
\IEEEyesnumber\IEEEyessubnumber\label{eq:Rckactive}\\
R_{cj}&{\leq}&h(\mathbf{y}_k|\mathbf{c}_k{,}\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathcal{H}_k),
\IEEEyessubnumber\label{eq:Rcjactive}\\
R_{ck}{+}R_{cj}&{\leq}&h(\mathbf{y}_k|\mathcal{H}_k){-}h(\mathbf{y}_k|\mathbf{c}_k{,}\mathbf{c}_j{,}\mathcal{H}_k).
\IEEEyessubnumber\label{eq:Rckcjactive}
\end{IEEEeqnarray}
Note that the r.h.s. of \eqref{eq:Rckactive} can be interpreted as $I(\mathbf{c}_k;\mathbf{y}_k|\mathbf{c}_j{,}\mathcal{H}_k)$, which is equal to $I(\mathbf{c}_k;\mathbf{y}_k^\prime|\mathcal{H}_k)$ with $\mathbf{y}_k^\prime{\triangleq}\mathbf{H}_{kk}^H\mathbf{c}_k{+}\mathbf{H}_{kk}^H\mathbf{x}_k{+}\boldsymbol\eta_k$. Similarly, the r.h.s. of \eqref{eq:Rck} can be expressed as $I(\mathbf{c}_k;\mathbf{y}_k^{\prime\prime}|\mathcal{H}_k)$ with $\mathbf{y}_k^{\prime\prime}{\triangleq}\mathbf{H}_{kk}^H\mathbf{c}_k{+}\boldsymbol\eta_k$. Since $\mathbf{c}_k{\to}\mathbf{y}_k^{\prime\prime}{\to}\mathbf{y}_k^{\prime}$ forms a Markov chain, we have $I(\mathbf{c}_k;\mathbf{y}_k^\prime|\mathcal{H}_k){\leq}I(\mathbf{c}_k;\mathbf{y}_k^{\prime\prime}|\mathcal{H}_k)$ due to the data processing inequality \cite{Inf_Theo}. Therefore, the inequalities in \eqref{eq:Rck} and \eqref{eq:Rcj} are inactive and the achievable rate of the common messages is specified by \eqref{eq:Rckactive}, \eqref{eq:Rcjactive} and \eqref{eq:Rckcjactive}.
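The inactivity of \eqref{eq:Rck} and \eqref{eq:Rcj} can also be observed numerically with Gaussian inputs. The snippet below (toy dimensions and power levels assumed, an illustration only) computes both mutual informations via log-det formulas and confirms the ordering implied by the data processing inequality:
\begin{verbatim}
# Toy Gaussian check (dimensions and powers assumed): the rate of
# c_k decoded from y'_k never exceeds that from y''_k.
import numpy as np

rng = np.random.default_rng(1)
N, M, P = 2, 3, 1e4
H = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))

logdet = lambda A: np.log2(np.linalg.det(A).real)
Qc = P * (H @ H.conj().T)             # common part at power P
Qx = np.sqrt(P) * (H @ H.conj().T)    # private part, lower power
Qn = np.eye(N)

I_y2 = logdet(Qc + Qn) - logdet(Qn)            # I(c_k; y_k'')
I_y1 = logdet(Qc + Qx + Qn) - logdet(Qx + Qn)  # I(c_k; y_k')
print(I_y1 <= I_y2 + 1e-9, round(I_y1, 2), round(I_y2, 2))
\end{verbatim}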
Let the input be $\mathbf{c}_k\stackrel{d}{\sim}\mathcal{CN}(0{,}P\mathbf{I}_{M_k})$, $\mathbf{c}_j\stackrel{d}{\sim}\mathcal{CN}(0{,}P\mathbf{I}_{M_j})$ and $\mathbf{x}_k\stackrel{d}{\sim}\mathcal{CN}(0{,}\mathbf{B}_k)$. The achievable rate constraints \eqref{eq:Rckactive}, \eqref{eq:Rcjactive}, \eqref{eq:Rckcjactive} and \eqref{eq:Rpk} can be further expressed by
\begin{IEEEeqnarray}{rcl}\label{eq:Rckcjpkcompute}
R_{ck}&{\leq}&{\log}_2\det(\mathbf{Q}_{ck}{+}\mathbf{Q}_k{+}\mathbf{Q}_{\boldsymbol\eta_k}){-}\nonumber\\
&&{\log}_2\det(\mathbf{Q}_{k}{+}\mathbf{Q}_{\boldsymbol\eta_k}),\IEEEyesnumber\IEEEyessubnumber\label{eq:Rckcompute}\\
R_{cj}&{\leq}&{\log}_2\det(\mathbf{Q}_{cj}{+}\mathbf{Q}_k{+}\mathbf{Q}_{\boldsymbol\eta_k}){-}\nonumber\\
&&{\log}_2\det(\mathbf{Q}_{k}{+}\mathbf{Q}_{\boldsymbol\eta_k}),\IEEEyessubnumber\label{eq:Rcjcompute}\\
R_{ck}{+}R_{cj}&{\leq}&{\log}_2\det(\mathbf{Q}_{ck}{+}\mathbf{Q}_{cj}{+}\mathbf{Q}_k{+}\mathbf{Q}_{\boldsymbol\eta_k}){-}\nonumber\\
&&{\log}_2\det(\mathbf{Q}_{k}{+}\mathbf{Q}_{\boldsymbol\eta_k}),\IEEEyessubnumber\label{eq:Rckcjcompute}\\
R_{pk}&{=}&{\log}_2\det(\mathbf{Q}_k{+}\mathbf{Q}_{\boldsymbol\eta_k}){-}
{\log}_2\det(\mathbf{Q}_{\boldsymbol\eta_k}),\IEEEyessubnumber\label{eq:Rpkcompute}
\end{IEEEeqnarray}
where $\mathbf{Q}_{ck}{=}P\mathbf{H}_{kk}^H\mathbf{H}_{kk}$, $\mathbf{Q}_{cj}{=}P\mathbf{H}_{kj}^H\mathbf{H}_{kj}$, $\mathbf{Q}_k{=}\mathbf{H}_{kk}^H\mathbf{B}_k\mathbf{H}_{kk}$ and $\mathbf{Q}_{\boldsymbol\eta_k}{=}\mathbf{H}_{kj}^H\mathbf{B}_j\mathbf{H}_{kj}{+}\mathbf{I}_{N_k}$ denote the covariance matrices of $\mathbf{H}_{kk}^H\mathbf{c}_k$, $\mathbf{H}_{kj}^H\mathbf{c}_j$, $\mathbf{H}_{kk}^H\mathbf{x}_k$ and $\boldsymbol\eta_k$ in \eqref{eq:ygeneral}, respectively.
Next, let us identify the related covariance matrices in the MIMO IC when $M_1{\geq}N_2$ and in the MIMO IC when $M_1{\leq}N_2$. The derivation of the covariance matrices in the MIMO BC follows similarly to the MIMO IC with $M_1{\geq}N_2$ by setting $M_1{=}M_2$ and $N_2^\prime{=}N_2$.
\subsubsection{Case I: $M_1{\geq}N_2$ and $M_2{\geq}N_1$}
In this case, we have $\mathbf{B}_1{=}P^{A_1}\mathbf{S}_{\hat{\mathbf{H}}_{21}^\bot}$ and $\mathbf{B}_2{=}P^{A_2}\mathbf{S}_{\hat{\mathbf{H}}_{12}^\bot}{+}P^{(A_2{-}\alpha_1)^+}\mathbf{S}_{\hat{\mathbf{H}}_{22}}$, where $\mathbf{S}_{\hat{\mathbf{H}}_{21}^\bot}{\triangleq}\mathbf{V}_1\mathbf{V}_1^H$, $\mathbf{S}_{\hat{\mathbf{H}}_{12}^\bot}{\triangleq}\mathbf{V}_2^{(1)}\mathbf{V}_2^{(1)H}$ and $\mathbf{S}_{\hat{\mathbf{H}}_{22}}{\triangleq}\mathbf{V}_2^{(2)}\mathbf{V}_2^{(2)H}$.
At Rx1, as the covariance matrices $\mathbf{Q}_{c1}$ and $\mathbf{Q}_{c2}$ have rank $N_1$ (since $M_2{\geq}N_1$ and $M_1{\geq}N_2{\geq}N_1$), it readily follows that ${\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$, ${\log}_2\det(\mathbf{Q}_{c2}{+}\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$ and ${\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_{c2}{+}\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$ are equal to $N_1{\log}_2P{+}o({\log}_2P)$, as $\mathbf{Q}_{c1}$ and $\mathbf{Q}_{c2}$ are dominating compared to $\mathbf{Q}_1$ and $\mathbf{Q}_{\boldsymbol\eta_1}$. Moreover, let us write the eigenvalue decomposition of $\mathbf{Q}_1$ and $\mathbf{Q}_{\boldsymbol\eta_1}$ as $\mathbf{U}_1\mathbf{D}_1\mathbf{U}_1^H$ and $\mathbf{U}_{\boldsymbol\eta_1}\mathbf{D}_{\boldsymbol\eta_1}\mathbf{U}_{\boldsymbol\eta_1}^H$, respectively, where $\mathbf{D}_1{\sim}{\rm diag}(P^{A_1}\mathbf{I}_{M_1{-}N_2}{,}\mathbf{0}_{N_1{+}N_2{-}M_1})$ and $\mathbf{D}_{\boldsymbol\eta_1}{\sim}P^{(A_2{-}\alpha_1)^+}\mathbf{I}_{N_1}$. Then, it follows that
\begin{IEEEeqnarray}{rcl}
{\log}_2\det(\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})&{=}&(M_1{-}N_2)\max\{A_2{-}\alpha_1{,}A_1\}{\log}_2P{+}\nonumber\\
&&(N_1{+}N_2{-}M_1)(A_2{-}\alpha_1)^+{\log}_2P{+}\nonumber\\
&&o({\log}_2P),\nonumber\\
{\log}_2\det(\mathbf{Q}_{\boldsymbol\eta_1})&{=}&N_1(A_2{-}\alpha_1)^+{\log}_2P{+}o({\log}_2P).\nonumber
\end{IEEEeqnarray}
Plugging the corresponding values into \eqref{eq:Rckcjpkcompute} leads to \eqref{eq:dc1y1caseI}, \eqref{eq:dc2y1caseI}, \eqref{eq:dp1caseI} and \eqref{eq:dcsy1caseI}.
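The scaling of ${\log}_2\det(\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$ can also be observed numerically. In the sketch below (an illustration only; the channels and the ZF directions are randomized stand-ins rather than the precoders of the scheme), the pre-log is estimated between two SNR points so that the $o({\log}_2P)$ offset cancels:
\begin{verbatim}
# Pre-log of logdet(Q1 + Qeta1) at Rx1 versus the claim
# (M1-N2)max{A1, A2-a1} + (N1+N2-M1)(A2-a1)^+.  Channels and ZF
# directions are randomized toy stand-ins; the slope is estimated
# between two SNRs so the o(log P) offset cancels.
import numpy as np

rng = np.random.default_rng(2)
M1, M2, N1, N2 = 5, 4, 3, 4
a1, A1, A2 = 0.6, 0.5, 0.9
pos = lambda z: max(z, 0.0)

H11 = rng.normal(size=(M1, N1))
H12 = rng.normal(size=(M2, N1))
V1 = np.linalg.qr(rng.normal(size=(M1, M1 - N2)))[0]

def f(P):
    Q1 = P**A1 * H11.T @ (V1 @ V1.T) @ H11            # ZF streams
    Qe = P**pos(A2 - a1) * H12.T @ H12 + np.eye(N1)   # residual
    return np.log2(np.linalg.det(Q1 + Qe))

print((f(1e9) - f(1e5)) / np.log2(1e9 / 1e5))           # ~ 1.1
print((M1-N2)*max(A1, A2-a1) + (N1+N2-M1)*pos(A2-a1))   # claim
\end{verbatim}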
At Rx2, let us write the eigenvalue decomposition of $\mathbf{Q}_2$ and $\mathbf{Q}_{\boldsymbol\eta_2}$ as $\mathbf{U}_2\mathbf{D}_2\mathbf{U}_2^H$ and $\mathbf{U}_{\boldsymbol\eta_2}\mathbf{D}_{\boldsymbol\eta_2}\mathbf{U}_{\boldsymbol\eta_2}^H$, respectively, where $\mathbf{D}_2{\sim}{\rm diag}(P^{A_2}\mathbf{I}_{M_2{-}N_1}{,}P^{(A_2{-}\alpha_1)^+}\mathbf{I}_{N_1{+}N_2^\prime{-}M_2})$ and $\mathbf{D}_{\boldsymbol\eta_2}{\sim}P^{0}\mathbf{I}_{N_2}$ since $A_1{\leq}\alpha_2$. Besides, the covariance matrix $\mathbf{Q}_{c1}$ has rank $N_2$ and $\mathbf{Q}_{c2}$ has rank $N_2^\prime$. Then it can be readily shown that ${\log}_2\det(\mathbf{Q}_{c2}{+}\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2}){=}N_2^\prime{\log}_2P{+}o({\log}_2P)$, while ${\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2}){=}{\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_{c2}{+}\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2}){=}N_2{\log}_2P{+}o({\log}_2P)$, and
\begin{IEEEeqnarray}{rcl}
{\log}_2\det(\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2})&{=}&
(N_1{+}N_2^\prime{-}M_2)(A_2{-}\alpha_1)^+{\log}_2P{+}\nonumber\\
&&(M_2{-}N_1)A_2{\log}_2P{+}o({\log}_2P),\nonumber\\
{\log}_2\det(\mathbf{Q}_{\boldsymbol\eta_2})&{=}&o({\log}_2P).\nonumber
\end{IEEEeqnarray}
Plugging the corresponding values into \eqref{eq:Rckcjpkcompute} leads to \eqref{eq:dc1y2caseI}, \eqref{eq:dc2y2caseI}, \eqref{eq:dp2caseI} and \eqref{eq:dcsy2caseI}.
\subsubsection{Case II: $M_1{\leq}N_2$ and $M_2{\geq}N_1$}
In this case, $\mathbf{B}_1{=}0$ as there are no private messages intended for Rx1, while
\begin{IEEEeqnarray}{rcl}
\mathbf{B}_2&{=}&P\mathbf{S}_{\hat{\mathbf{H}}_{22}}^{(1)}{+}P^{A_2^\prime}\mathbf{S}_{\hat{\mathbf{H}}_{12}^\bot}^{(2)}{+} P^{A_2}\mathbf{S}_{\hat{\mathbf{H}}_{12}^\bot}^{(3)}{+}\nonumber\\
&&P^{A_2^\prime{-}\alpha_1}\mathbf{S}_{\hat{\mathbf{H}}_{22}}^{(4)}{+} P^{(A_2{-}\alpha_1)^+}\mathbf{S}_{\hat{\mathbf{H}}_{22}}^{(5)}\nonumber
\end{IEEEeqnarray}
where $\mathbf{S}_{\hat{\mathbf{H}}_{22}}^{(m)}{\triangleq}\mathbf{V}_2^{(m)}\mathbf{V}_2^{(m)H}{,}m{=}1{,}4{,}5$ and $\mathbf{S}_{\hat{\mathbf{H}}_{12}^\bot}^{(m)}{\triangleq}\mathbf{V}_2^{(m)}\mathbf{V}_2^{(m)H}{,}m{=}2{,}3$. Let us focus on the received signal $\tilde{\mathbf{y}}_k{=}\mathbf{T}_k\mathbf{y}_k{,}k{=}1{,}2$, as a linear transformation does not change the mutual information.
At Rx1, the covariance matrices $\mathbf{Q}_{c1}$ and $\mathbf{Q}_{c2}$ rewrite as $P\mathbf{T}_1\mathbf{H}_{11}^H\mathbf{H}_{11}\mathbf{T}_1^H$ and $P\bar{\mathbf{H}}_{12}^H\bar{\mathbf{H}}_{12}$, where $\mathbf{T}_1\mathbf{H}_{11}^H$ and $\bar{\mathbf{H}}_{12}^H$ are given by \eqref{eq:T1}. Besides, the eigenvalue decomposition of the covariance matrix $\mathbf{Q}_{\boldsymbol\eta_1}{=}\bar{\mathbf{H}}_{12}^H\mathbf{B}_2\bar{\mathbf{H}}_{12}$ can be expressed as $\mathbf{U}_{\boldsymbol\eta_1}\mathbf{D}_{\boldsymbol\eta_1}\mathbf{U}_{\boldsymbol\eta_1}^H$, where
\begin{IEEEeqnarray}{rcl}
\mathbf{D}_{\boldsymbol\eta_1}&{\sim}&{\rm diag}(P\mathbf{I}_{\tau}{,}P^{A_2^\prime{-}\alpha_1}\mathbf{I}_{\min\{N_1^\prime{,}\mu_1{+}\delta_1\}}{,}\nonumber\\
&&P^{(A_2{-}\alpha_1)^+}\mathbf{I}_{N_1^\prime{-}\min\{N_1^\prime{,}\mu_1{+}\delta_1\}}).
\end{IEEEeqnarray}
This is due to the following reasons: 1) according to $\tilde{\mathbf{y}}_1$, $\tau$ private messages are received with power $P$, $\mu_1{+}\delta_1$ private messages are received with the power level $A_2^\prime{-}\alpha_1$ and $\mu_2{+}\delta_2$ private messages are received with the power level $(A_2{-}\alpha_1)^+$ because of ZFBF with imperfect CSIT, and 2) as $A_2{\leq}A_2^\prime$, the $\mu_2{+}\delta_2$ private messages with power level $(A_2{-}\alpha_1)^+$ are drowned by the other $\tau{+}\mu_1{+}\delta_1$ private messages. Note that if $\mu_1{+}\delta_1{\geq}N_1^\prime$, i.e., $\mu_1{+}\delta_1{+}\tau{\geq}N_1$, the $\mu_2{+}\delta_2$ private messages with power level $(A_2{-}\alpha_1)^+$ do not impact $\mathbf{D}_{\boldsymbol\eta_1}$; otherwise, there are $N_1^\prime{-}\mu_1{-}\delta_1$ eigenvalues with power level $(A_2{-}\alpha_1)^+$. In this way, it can be readily shown that ${\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_{c2}{+}\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$, ${\log}_2\det(\mathbf{Q}_{c2}{+}\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$ and ${\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})$ are equal to $N_1{\log}_2P{+}o({\log}_2P)$, while
\begin{IEEEeqnarray}{rcl}
{\log}_2\det(\mathbf{Q}_{\boldsymbol\eta_1})&{=}&{\log}_2\det(\mathbf{Q}_1{+}\mathbf{Q}_{\boldsymbol\eta_1})\nonumber\\
&{=}&\tau{\log}_2P{+}\min\{N_1^\prime{,}\mu_1{+}\delta_1\}(A_2^\prime{-}\alpha_1){\log}_2P{+}\nonumber\\&&
(N_1^\prime{-}\min\{N_1^\prime{,}\mu_1{+}\delta_1\})(A_2{-}\alpha_1)^+{\log}_2P{+}\nonumber\\
&&o({\log}_2P).\nonumber
\end{IEEEeqnarray}
Plugging the corresponding values into \eqref{eq:Rckcjpkcompute} leads to \eqref{eq:dc1y1caseII}, \eqref{eq:dc2y1caseII} and \eqref{eq:dcsy1caseII}.
At Rx2, after the linear transformation, it can be shown that the covariance matrices $\mathbf{Q}_{c1}$ and $\mathbf{Q}_{c2}$ rewrite as $P\mathbf{T}_2\mathbf{H}_{21}^H\mathbf{H}_{21}\mathbf{T}_2^H$ and $P\mathbf{T}_2\mathbf{H}_{22}^H\mathbf{H}_{22}\mathbf{T}_2^H$, where $\mathbf{T}_2\mathbf{H}_{21}^H$ and $\mathbf{T}_2\mathbf{H}_{22}^H$ are given by \eqref{eq:T2}. Besides, the covariance matrix $\mathbf{Q}_2{=}\mathbf{T}_2\mathbf{H}_{22}^H\mathbf{B}_2\mathbf{H}_{22}\mathbf{T}_2^H$ can be expressed as $\mathbf{U}_2\mathbf{D}_2\mathbf{U}_2^H$, where
\begin{IEEEeqnarray}{rcl}
\mathbf{D}_2&{\sim}&{\rm diag}(\mathbf{0}_{N_2{-}N_2^\prime}{,}P^{(A_2{-}\alpha_1)^+}\mathbf{I}_{\delta_2}{,}P^{A_2}\mathbf{I}_{\mu_2}{,}\nonumber\\ &&P^{A_2^\prime{-}\alpha_1}\mathbf{I}_{\delta_1}{,}P^{A_2^\prime}\mathbf{I}_{\mu_1}{,}P\mathbf{I}_{\tau}).\nonumber
\end{IEEEeqnarray}
As there are no private messages sent to Rx1, $\boldsymbol\eta_2$ only consists of noise, so that $\mathbf{Q}_{\boldsymbol\eta_2}{=}P^0\mathbf{I}_{N_2}$. Accordingly, it is clear that ${\log}_2\det(\mathbf{Q}_{c2}{+}\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2}){=}N_2^\prime{\log}_2P{+}o({\log}_2P)$ and ${\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_{c2}{+}\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2}){=}N_2{\log}_2P{+}o({\log}_2P)$. Moreover, since the last $\tau{+}\mu_1{+}\delta_1{=}N_2{-}M_1$ columns of $\mathbf{U}_2$ do not overlap with the column space of $\bar{\mathbf{H}}_{21}^H$, it can be readily shown that
\begin{IEEEeqnarray}{rcl}
{\log}_2\det(\mathbf{Q}_{c1}{+}\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2})&{=}&
M_1{\log}_2P{+}\tau{\log}_2P{+}\mu_1A_2^\prime{\log}_2P{+}\nonumber\\
&&\delta_1(A_2^\prime{-}\alpha_1){\log}_2P{+}o({\log}_2P),\nonumber\\%\IEEEyessubnumber\\
{\log}_2\det(\mathbf{Q}_2{+}\mathbf{Q}_{\boldsymbol\eta_2})&{=}&
\tau{\log}_2P{+}\mu_1A_2^\prime{\log}_2P{+}\nonumber\\
&&\delta_1(A_2^\prime{-}\alpha_1){\log}_2P{+}\mu_2A_2{\log}_2P{+}\nonumber\\
&&\delta_2(A_2{-}\alpha_1)^+{\log}_2P{+}o({\log}_2P).\nonumber
\end{IEEEeqnarray}
Plugging the corresponding values into \eqref{eq:Rckcjpkcompute} leads to \eqref{eq:dc1y2caseII}, \eqref{eq:dc2y2caseII}, \eqref{eq:dp2caseII} and \eqref{eq:dcsy1caseII}.
\subsection{Solving the optimization problem in \eqref{eq:optd2_2}}
We first split the problem into two sub-problems by considering $A_2{\leq}\alpha_1$ and $A_2{\geq}\alpha_1$, whose closed-form solutions are straightforward to calculate. Then, we obtain the closed-form solution to \eqref{eq:optd2_2} by comparing these two closed-form solutions.
\subsubsection{$A_2{\leq}\alpha_1$} In this case, the optimization problem in \eqref{eq:optd2_2} rewrites as
\begin{IEEEeqnarray}{rcl}\label{eq:optd2_2_1}
\max_{A_2{,}A_2^\prime}&\quad& d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)\IEEEyesnumber\IEEEyessubnumber\\
\text{s.t.}&\quad&
\lambda{\leq}N_1^\prime{-}\xi(A_2^\prime{-}\alpha_1),\IEEEyessubnumber\label{eq:lambdacons1_2_1}\\
&&\lambda{\leq}M_1{-}\mu_2A_2,\IEEEyessubnumber\label{eq:lambdacons2_2_1}\\
&&0{\leq}A_2{\leq}\alpha_1,\IEEEyessubnumber\\
&&\alpha_1{\leq}A_2^\prime{\leq}1.\IEEEyessubnumber
\end{IEEEeqnarray}
As $d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)$ given in \eqref{eq:d2caseII2} is increasing with $A_2$ and $A_2^\prime$, it is straightforward to obtain the optimal solution to \eqref{eq:optd2_2_1}, which writes as
\begin{IEEEeqnarray}{rcl}
A_2{=}\min\left\{\alpha_1{,}\frac{M_1{-}\lambda}{\mu_2}\right\}&{,}\,&
A_2^\prime{=}\min\left\{1{,}\alpha_1{+}\frac{N_1^\prime{-}\lambda}{\xi}\right\}.\label{eq:solution1}
\end{IEEEeqnarray}
\subsubsection{$A_2{\geq}\alpha_1$} In this case, the optimization problem in \eqref{eq:optd2_2} rewrites as
\begin{IEEEeqnarray}{rcl}\label{eq:optd2_2_2}
\max_{A_2{,}A_2^\prime}&\quad& d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)\IEEEyesnumber\IEEEyessubnumber\\
\text{s.t.}&\quad&
\lambda{\leq}N_1^\prime{-}\xi A_2^\prime{+}N_1^\prime\alpha_1{-}(N_1^\prime{-}\xi)A_2,\IEEEyessubnumber\label{eq:lambdacons1_2_2}\\
&&\lambda{\leq}M_1{-}M_1A_2{+}\delta_2\alpha_1,\IEEEyessubnumber\label{eq:lambdacons2_2_2}\\
&&\alpha_1{\leq}A_2{\leq}A_2^\prime,\IEEEyessubnumber\label{eq:equal}\\
&&\alpha_1{\leq}A_2^\prime{\leq}1,\IEEEyessubnumber\label{eq:A2pleq1}
\end{IEEEeqnarray}
where we have used the fact that $\mu_2{+}\delta_2{=}M_1$ given the condition $M_2{\geq}N_2$. As $d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)$ given in \eqref{eq:d2caseII2} is increasing with $A_2$ and $A_2^\prime$, we learn that the optimal solution is obtained when (at least) two of the constraints \eqref{eq:lambdacons1_2_2}, $\eqref{eq:lambdacons2_2_2}$, $A_2{\leq}A_2^\prime$ and $A_2^\prime{\leq}1$ are active. Notably, from \eqref{eq:lambdacons2_2_2} and $A_2{\geq}\alpha_1$, we see that $\lambda$ should be smaller than or equal to $M_1{-}\mu_2\alpha_1$; otherwise, there is no solution to \eqref{eq:optd2_2_2}. Therefore, the discussion proceeds along the following four cases.
\underline{If $A_2{=}A_2^\prime{=}1$:} In this case, $\lambda$ is such that $\lambda{\leq}\min\{\delta_2\alpha_1{,}N_1^\prime\alpha_1\}{=}\delta_2\alpha_1$ according to \eqref{eq:lambdacons1_2_2} and \eqref{eq:lambdacons2_2_2}.
\underline{If $A_2{<}A_2^\prime{=}1$:} In this case, plugging $A_2^\prime{=}1$ into \eqref{eq:lambdacons1_2_2} and \eqref{eq:lambdacons2_2_2} yields
\begin{IEEEeqnarray}{rcl}
A_2&{=}&\min\left\{1{-}\frac{\lambda{-}\delta_2\alpha_1}{M_1}{,}1{-}\frac{\lambda{-}N_1^\prime\alpha_1}{N_1^\prime{-}\xi}\right\}.\label{eq:case2}
\end{IEEEeqnarray}
\underline{If $A_2{=}A_2^\prime{<}1$:} In this case, plugging $A_2{=}A_2^\prime$ into \eqref{eq:lambdacons1_2_2} and \eqref{eq:lambdacons2_2_2} yields
\begin{IEEEeqnarray}{rcl}
A_2{=}A_2^\prime&{=}&\min\left\{1{-}\frac{\lambda{-}\delta_2\alpha_1}{M_1}
{,}1{-}\frac{\lambda{-}N_1^\prime\alpha_1}{N_1^\prime}\right\}.\label{eq:case3}
\end{IEEEeqnarray}
\underline{If $A_2{<}A_2^\prime{<}1$:} In this case, using \eqref{eq:lambdacons1_2_2} and \eqref{eq:lambdacons2_2_2}, we have
\begin{IEEEeqnarray}{rcl}
A_2&{=}&1{-}\frac{\lambda{-}\delta_2\alpha_1}{M_1},\IEEEyesnumber\IEEEyessubnumber\label{eq:case41}\\
A_2^\prime&{=}&1{-}\frac{\left(M_1{-}N_1^\prime{+}\xi\right)}{M_1\xi}\lambda{+}
\frac{\left(\mu_2N_1^\prime{+}\delta_2\xi\right)}{M_1\xi}\alpha_1,\IEEEyessubnumber\label{eq:case42}
\end{IEEEeqnarray}
while $\lambda$ is such that $\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{\leq}\lambda{\leq}\frac{\mu_2N_1^\prime}{M_1{-}N_1^\prime}\alpha_1$ according to constraints \eqref{eq:equal} and \eqref{eq:A2pleq1}.
From these four cases, we conclude that the closed-form solution to the optimization problem \eqref{eq:optd2_2_2} is given by \eqref{eq:solution2} at the top of the next page.
\begin{figure*}
\begin{IEEEeqnarray}{rll} \label{eq:solution2}
&\text{\rm For }\lambda{\in}\left[0{,}\delta_2\alpha_1\right],& A_2{=}A_2^\prime{=}1,\IEEEyesnumber\IEEEyessubnumber\\
&\text{\rm For }\lambda{\in}\left[\delta_2\alpha_1{,} \min\left\{\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{,}M_1{-}\mu_2\alpha_1\right\}\right],
& A_2{=}\text{\rm Eq.}\eqref{eq:case2}{=}1{-}\frac{\lambda{-}\delta_2\alpha_1}{M_1},A_2^\prime{=}1,\IEEEyessubnumber\\
&\text{\rm For }\lambda{\in}\left[\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{,} \min\left\{M_1{-}\mu_2\alpha_1{,}\frac{N_1^\prime\mu_2\alpha_1}{M_1{-}N_1^\prime}\right\}\right],& A_2{=}\text{\rm Eq.}\eqref{eq:case41},A_2^\prime{=}\text{\rm Eq.}\eqref{eq:case42},\IEEEyessubnumber\\
&\text{\rm For }\lambda{\in}\left[\frac{N_1^\prime\mu_2\alpha_1}{M_1{-}N_1^\prime}{,}M_1{-}\mu_2\alpha_1\right],& A_2{=}A_2^\prime{=}\text{\rm Eq.}\eqref{eq:case3}{=}1{-}\frac{\lambda{-}N_1^\prime\alpha_1}{N_1^\prime}.\IEEEyessubnumber
\end{IEEEeqnarray}
\hrulefill
\end{figure*}
\subsubsection{Obtain the solution to \eqref{eq:optd2_2}}
The remaining task is to compare the solutions to \eqref{eq:optd2_2_1} and \eqref{eq:optd2_2_2} in order to obtain the solution to \eqref{eq:optd2_2}. As mentioned in the above derivation, the closed-form solution to \eqref{eq:optd2_2_2}, namely \eqref{eq:solution2}, is valid when $\lambda{\leq}M_1{-}\mu_2\alpha_1$. In this case, \eqref{eq:solution1} writes as $A_2{=}\alpha_1$ and $A_2^\prime{=}\min\{1{,}\alpha_1{+}\frac{N_1^\prime{-}\lambda}{\xi}\}$. By plugging \eqref{eq:solution1} and \eqref{eq:solution2} into $d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)$, it can be shown that \eqref{eq:solution2} leads to a greater value of $d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)$. Therefore, when $\lambda{\leq}M_1{-}\mu_2\alpha_1$, the closed-form solution to \eqref{eq:optd2_2} is given by \eqref{eq:solution2}, which leads to Conditions C, D, E and F shown in Section \ref{sec:II2}. When $\lambda{\geq}M_1{-}\mu_2\alpha_1$, the closed-form solution to \eqref{eq:optd2_2} is given by \eqref{eq:solution1}, namely $A_2{=}\frac{M_1{-}\lambda}{\mu_2}$ and $A_2^\prime{=}\min\{1{,}\alpha_1{+}\frac{N_1^\prime{-}\lambda}{\xi}\}$, which leads to Conditions A and B shown in Section \ref{sec:II2}.
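The case analysis above is easy to double-check numerically. The following minimal sketch (Python with NumPy; the parameter values are hypothetical and only chosen to satisfy $\mu_2{+}\delta_2{=}M_1$, whereas in the paper they are fixed by the antenna configuration) compares a brute-force grid search over \eqref{eq:optd2_2} with the corresponding branch of the closed-form solution \eqref{eq:solution2}:
\begin{verbatim}
import numpy as np

# Hypothetical parameters; only mu2 + delta2 = M1 is enforced here.
N2, M1, N1p = 5, 3, 2
mu1, mu2, delta1, delta2 = 1, 2, 1, 1
tau, xi = 1, 1
alpha1, lam = 0.4, 1.0
pos = lambda x: max(x, 0.0)

def d2(A2, A2p):                # objective, eq. (d2caseII2)
    f = (N1p - lam + mu2*A2 + mu1*A2p
         + (delta2 - N1p + xi)*pos(A2 - alpha1)
         + (delta1 - xi)*(A2p - alpha1) + tau)
    return min(N2 - lam, f)

def feasible(A2, A2p):          # constraints of eq. (optd2_2)
    return (lam <= N1p - xi*(A2p - alpha1) - (N1p - xi)*pos(A2 - alpha1)
            and lam <= M1 - mu2*A2 - delta2*pos(A2 - alpha1)
            and 0 <= A2 <= A2p and alpha1 <= A2p <= 1)

grid = np.linspace(0, 1, 401)   # brute-force search over (A2, A2')
best = max(d2(a, b) for a in grid for b in grid if feasible(a, b))

# Closed-form candidate: second branch of eq. (solution2) for this lam.
A2s, A2ps = 1 - (lam - delta2*alpha1)/M1, 1.0
print(best, d2(A2s, A2ps))      # expect both to print 4.0 here
\end{verbatim}
For these values, $\lambda$ falls in the second branch of \eqref{eq:solution2}, and at the closed-form point both \eqref{eq:lambdacons1_2} and \eqref{eq:lambdacons2_2} are active, in agreement with the active-constraint argument above.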
\subsection{Proof of Proposition \ref{prop:BC_outer}}
\input{proof_outer_2col}
\subsubsection{Case II.1, $M_2{\leq}N_2$ and $M_1{\leq}N_2$}
In this case, using the fact that $A_2{\leq}A_2^\prime{\leq}1$, it can be verified that constraints \eqref{eq:dc2cons2} and \eqref{eq:lambdacons2} are redundant compared to \eqref{eq:dc2cons1} and \eqref{eq:lambdacons1}, respectively. Moreover, as the objective function is monotonically increasing with $d_{c2}$, we can see that the optimal solution is attained when \eqref{eq:dc2cons1} is active. Hence, the optimization problem stated in \eqref{eq:optd2} becomes
\begin{IEEEeqnarray}{rcl}\label{eq:optd2_1}
\max_{A_2{,}A_2^\prime}&\quad& d_{2{,}(1)}(A_2{,}A_2^\prime{,}\lambda)\IEEEyesnumber\IEEEyessubnumber\\
\text{s.t.}&\quad&\lambda{\leq}N_1^\prime{-}\xi(A_2^\prime{-}\alpha_1){-}(N_1^\prime{-}\xi)(A_2{-}\alpha_1)^+,
\IEEEyessubnumber\label{eq:lambdacons1_1}\\
&&0{\leq}A_2{\leq}A_2^\prime,\IEEEyessubnumber\\
&&\alpha_1{\leq}A_2^\prime{\leq}1,\IEEEyessubnumber
\end{IEEEeqnarray}
where
\begin{multline}\label{eq:d2caseII1}
d_{2{,}(1)}(A_2{,}A_2^\prime{,}\lambda){=}N_1^\prime{-}\lambda{+}(\delta_2{-}N_1^\prime{+}\xi)(A_2{-}\alpha_1)^+{+} \\
\mu_2A_2{+}\mu_1A_2^\prime{+}(\delta_1{-}\xi)(A_2^\prime{-}\alpha_1){+}\tau,
\end{multline}
is obtained by summing \eqref{eq:dp2caseII} and \eqref{eq:dc2cons1}.
Since the objective function \eqref{eq:d2caseII1} is linearly increasing with $A_2$ and $A_2^\prime$, the optimal solution is obtained when (at least) two of the constraints \eqref{eq:lambdacons1_1}, $A_2{\leq}A_2^\prime$ and $A_2^\prime{\leq}1$ are active. Through some simple calculation, the closed-form solution, i.e., $(A_2^*{,}A_2^{\prime*})$, and the resultant maximum DoF of Rx2, i.e., $d_{2{,}(1)}(A_2^*{,}A_2^{\prime*}{,}\lambda)$ write as
\underline{\emph{For $\lambda{\in}\left[0{,}N_1^\prime\alpha_1\right]$,}}
\begin{IEEEeqnarray}{rcl}
(A_2^*{,}A_2^{\prime*})&{=}&(1{,}1),\IEEEyesnumber\IEEEyessubnumber\\
d_{2{,}(1)}(1{,}1{,}\lambda)&{=}&N_2^\prime{-}\lambda.\IEEEyessubnumber\label{eq:dp2_11}
\end{IEEEeqnarray}
\underline{\emph{For $\lambda{\in}\left[N_1^\prime\alpha_1{,}N_1^\prime\right]$,}}
\begin{IEEEeqnarray}{rcl}
\!\!\!\!(A_2^*{,}A_2^{\prime*})&{=}&\left(\frac{N_1^\prime{-}\lambda{+}N_1^\prime\alpha_1}{N_1^\prime}{,}
\frac{N_1^\prime{-}\lambda{+}N_1^\prime\alpha_1}{N_1^\prime}\right),\IEEEyesnumber\IEEEyessubnumber\\
\!\!\!\!d_{2{,}(1)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&
M_2{+}(M_2{-}N_1)\alpha_1{-}\nonumber\\
&&\frac{M_2{-}N_1{+}N_1^\prime}{N_1^\prime}\lambda.\IEEEyessubnumber\label{eq:dp2_12}
\end{IEEEeqnarray}
It can be shown that the DoF pairs $(\lambda{,}d_{2{,}(1)}(A_2^*{,}A_2^{\prime*}{,}\lambda))$ with $d_{2{,}(1)}(A_2^*{,}A_2^{\prime*}{,}\lambda)$ in \eqref{eq:dp2_11} and \eqref{eq:dp2_12} lie on $L_1$ and $L_2$ in Proposition \ref{prop:IC2}, respectively. When $\lambda{=}N_1^\prime$ and $\lambda{=}N_1^\prime\alpha_1$, we have the corner points $\mathcal{P}_{10}{=}(N_1^\prime{,}(M_2{-}N_1)\alpha_1)$ and $\mathcal{P}_{12}{=}(N_1^\prime\alpha_1{,}N_2^\prime{-}N_1^\prime\alpha_1)$ in Figure \ref{fig:M2leqN2}, respectively.
\subsubsection{Case II.2, $M_2{\geq}N_2$ and $M_1{\leq}N_2$}\label{sec:II2}
\begin{table*}[t]
\renewcommand{\captionfont}{\small}
\footnotesize
\captionstyle{center} \centering
\begin{tabular}{c|l|c|c|c}
& Conditions & $d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)$ & $N_2{\geq}M_1{+}N_1$ & $N_2{\leq}M_1{+}N_1$\\
\hline
$\alpha_1{\leq}\frac{M_1{-}N_1^\prime}{\mu_2}$ & A: does not hold & & &\\
(Not applicable for $M_1{\leq}N_1$) & B: does not hold & & &\\
& C: for $\lambda{\in}\left[\frac{N_1^\prime\mu_2\alpha_1}{M_1{-}N_1^\prime}{,}N_1^\prime\right]$ & eq.\eqref{eq:dp2_C} & $L_2$ & $L_2$\\
& D: for $\lambda{\in}\left[\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{,}
\frac{N_1^\prime\mu_2\alpha_1}{M_1{-}N_1^\prime}\right]$ & eq.\eqref{eq:dp2_D} & $L_3$ & $L_4$\\
& E: for $\lambda{\in}\left[\delta_2\alpha_1{,}\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1\right]$ & eq.\eqref{eq:dp2_E} & $L_1$ & $L_1$\\
& F: for $\lambda{\in}\left[0{,}\delta_2\alpha_1\right]$ & eq.\eqref{eq:dp2_F} & $L_1$ & $L_1$\\
\hline
$\frac{M_1{-}N_1^\prime}{\mu_2}{\leq}\alpha_1{\leq}\frac{M_1{-}N_1^\prime{+}\xi}{\mu_2{+}\xi}$ & A: for $\lambda{\in}\left[M_1{-}\mu_2\alpha_1{,}N_1^\prime\right]$ & eq.\eqref{eq:dp2_A} & $L_2$ & $L_5$\\
& B: does not hold & & &\\
& C: does not hold & & &\\ & D: for $\lambda{\in}\left[\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{,}
M_1{-}\mu_2\alpha_1\right]$ & eq.\eqref{eq:dp2_D} & $L_3$ & $L_4$\\
& E: for $\lambda{\in}\left[\delta_2\alpha_1{,}\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1\right]$ & eq.\eqref{eq:dp2_E} & $L_1$ & $L_1$\\
& F: for $\lambda{\in}\left[0{,}\delta_2\alpha_1\right]$ & eq.\eqref{eq:dp2_F} & $L_1$ & $L_1$\\
\hline
$\alpha_1{\geq}\frac{M_1{-}N_1^\prime{+}\xi}{\mu_2{+}\xi}$ & A: for $\lambda{\in}\left[N_1^\prime{-}\xi(1{-}\alpha_1){,}N_1^\prime\right]$ & eq.\eqref{eq:dp2_A} & $L_2$ & $L_5$\\
& B: for $\lambda{\in}\left[M_1{-}\mu_2\alpha_1{,}N_1^\prime{-}\xi(1{-}\alpha_1)\right]$ & eq.\eqref{eq:A2A2pstarB} & $L_1$ & $L_1$\\
& C: does not hold & & & \\
& D: does not hold & & & \\
& E: for $\lambda{\in}\left[\delta_2\alpha_1{,}M_1{-}\mu_2\alpha_1\right]$ & eq.\eqref{eq:dp2_E} & $L_1$ & $L_1$\\
& F: for $\lambda{\in}\left[0{,}\delta_2\alpha_1\right]$ & eq.\eqref{eq:dp2_F} & $L_1$ & $L_1$\\
\end{tabular}
\caption{Achievability of the weighted-sum constraints in Cases II.2.a and II.2.b.}\label{tab:dofpairs}
\end{table*}
In this case, we perform the same derivation as in Case II.1 by taking $d_{c2}$ equal to the minimum of the r.h.s. of \eqref{eq:dc2cons1} and \eqref{eq:dc2cons2}, because the objective function in \eqref{eq:optd2} is monotonically increasing with $d_{c2}$. Then, the optimization problem can be reformulated as
\begin{IEEEeqnarray}{rcl}\label{eq:optd2_2}
\max_{A_2{,}A_2^\prime}&\quad& d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda)\IEEEyesnumber\IEEEyessubnumber\\
\text{s.t.}&\quad&
\lambda{\leq}N_1^\prime{-}\xi(A_2^\prime{-}\alpha_1){-}(N_1^\prime{-}\xi)(A_2{-}\alpha_1)^+,\IEEEyessubnumber\label{eq:lambdacons1_2}\\
&&\lambda{\leq}M_1{-}\mu_2A_2{-}\delta_2(A_2{-}\alpha_1)^+,\IEEEyessubnumber\label{eq:lambdacons2_2}\\
&&0{\leq}A_2{\leq}A_2^\prime,\IEEEyessubnumber\\
&&\alpha_1{\leq}A_2^\prime{\leq}1,\IEEEyessubnumber
\end{IEEEeqnarray}
where
\begin{multline}\label{eq:d2caseII2}
d_{2{,}(2)}(A_2{,}A_2^\prime{,}\lambda){=}
\min\left\{N_2{-}\lambda{,}N_1^\prime{-}\lambda{+}\mu_2A_2{+}\mu_1A_2^\prime{+}\right.\\
\left.(\delta_2{-}N_1^\prime{+}\xi)(A_2{-}\alpha_1)^+{+}(\delta_1{-}\xi)(A_2^\prime{-}\alpha_1){+}\tau\right\},
\end{multline}
is obtained by summing \eqref{eq:dp2caseII} and the minimum of \eqref{eq:dc2cons1} and \eqref{eq:dc2cons2}.
Following the derivations in Appendix B, the closed-form solution, i.e., $(A_2^*{,}A_2^{\prime*})$, and the resultant maximum DoF of Rx2, i.e., $d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)$, are given under the following six conditions:
\begin{enumerate}
\item[A)] For $\lambda{\in}\left[\max\{M_1{-}\mu_2\alpha_1{,}N_1^\prime{-}\xi(1{-}\alpha_1)\}{,}N_1^\prime\right]$,
\begin{IEEEeqnarray}{rcl}
(A_2^*{,}A_2^{\prime*})&{=}&\left(\frac{M_1{-}\lambda}{\mu_2}{,}\alpha_1{+}\frac{N_1^\prime{-}\lambda}{\xi}\right), \IEEEyesnumber\IEEEyessubnumber\label{eq:A2A2pstarA}\\
d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&\max\{N_2{,}M_1{+}N_1\}{+}\nonumber\\&&\mu_1\alpha_1{-}
\left(1{+}\frac{\mu_1}{\xi}\right)\lambda;\IEEEyessubnumber
\label{eq:dp2_A}
\end{IEEEeqnarray}
\item[B)] For $\lambda{\in}\left[M_1{-}\mu_2\alpha_1{,}N_1^\prime{-}\xi(1{-}\alpha_1)\right]$,
\begin{IEEEeqnarray}{rcl}
(A_2^*{,}A_2^{\prime*})&{=}&\left(\frac{M_1{-}\lambda}{\mu_2}{,}1\right), \IEEEyesnumber\IEEEyessubnumber\label{eq:A2A2pstarB}\\
d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&N_2{-}\lambda;\IEEEyessubnumber\label{eq:dp2_B}
\end{IEEEeqnarray}
\item[C)] For $\lambda{\in}\left[\frac{N_1^\prime\mu_2\alpha_1}{M_1{-}N_1^\prime}{,}\min\{M_1{-}\mu_2\alpha_1{,}N_1^\prime\}\right]$,
\begin{IEEEeqnarray}{rcl}
\!\!\!\!\!\!\!\!(A_2^*{,}A_2^{\prime*})&{=}&\left(\frac{N_1^\prime{-}\lambda{+}N_1^\prime\alpha_1}{N_1^\prime}{,}\right.\nonumber\\
&&\left.\frac{N_1^\prime{-}\lambda{+}N_1^\prime\alpha_1}{N_1^\prime}\right),\IEEEyesnumber\IEEEyessubnumber\label{eq:A2A2pstarC}\\
\!\!\!\!\!\!\!\!d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&
N_2{+}(\mu_1{+}\mu_2)\alpha_1{-}\nonumber\\
&&\frac{N_2{-}N_1{+}N_1^\prime}{N_1^\prime}\lambda;\IEEEyessubnumber\label{eq:dp2_C}
\end{IEEEeqnarray}
\item[D)] For $\lambda{\in}\left[\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{,} \min\{M_1{-}\mu_2\alpha_1{,}\frac{N_1^\prime\mu_2\alpha_1}{M_1{-}N_1^\prime}{,}N_1^\prime\}\right]$,
\begin{IEEEeqnarray}{rcl}
(A_2^*{,}A_2^{\prime*})&{=}&\left(\frac{M_1{-}\lambda{+}\delta_2\alpha_1}{M_1}{,}
1{-}\frac{\left(M_1{-}N_1^\prime{+}\xi\right)}{M_1\xi}\lambda{+}\right.\nonumber\\
&&\left.\frac{\left(\mu_2N_1^\prime{+}\delta_2\xi\right)}{M_1\xi}\alpha_1\right),\IEEEyesnumber\IEEEyessubnumber\label{eq:A2A2pstarD}\\
d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&
N_2{+}\left[1{+}\frac{\mu_2\left(N_1{-}\xi\right)}{M_1\xi}\right]\mu_1\alpha_1{-}\nonumber\\
&&\left[1{+}\frac{M_1{-}N_1^\prime{+}\xi}{M_1\xi}\mu_1\right]\lambda;\IEEEyessubnumber\label{eq:dp2_D}
\end{IEEEeqnarray}
\item[E)] For $\lambda{\in}\left[\delta_2\alpha_1{,} \min\{\frac{\mu_2N_1^\prime{+}\delta_2\xi}{M_1{-}N_1^\prime{+}\xi}\alpha_1{,}M_1{-}\mu_2\alpha_1{,}N_1^\prime\}\right]$,
\begin{IEEEeqnarray}{rcl}
(A_2^*{,}A_2^{\prime*})&{=}&\left(\frac{M_1{-}\lambda{+}\delta_2\alpha_1}{M_1}{,}1\right), \IEEEyesnumber\IEEEyessubnumber\label{eq:A2A2pstarE}\\
d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&N_2{-}\lambda;\IEEEyessubnumber\label{eq:dp2_E}
\end{IEEEeqnarray}
\item[F)] For $\lambda{\in}\left[0{,}\delta_2\alpha_1\right]$,
\begin{IEEEeqnarray}{rcl}
(A_2^*{,}A_2^{\prime*})&{=}&(1{,}1), \IEEEyesnumber\IEEEyessubnumber\label{eq:A2A2pstarF}\\
d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda)&{=}&N_2{-}\lambda.\IEEEyessubnumber\label{eq:dp2_F}
\end{IEEEeqnarray}
\end{enumerate}
To be complete, Table \ref{tab:dofpairs} summarizes the validity of these six conditions for different values of $\alpha_1$, and also presents the resultant weighted-sum constraints on which the corresponding DoF pair $(\lambda{,}d_{2{,}(2)}(A_2^*{,}A_2^{\prime*}{,}\lambda))$ lies, for Case II.2.a $N_2{\geq}M_1{+}N_1$ (i.e., $N_1^\prime{\leq}\mu_1$) and Case II.2.b $N_2{\leq}M_1{+}N_1$ (i.e., $N_1^\prime{\geq}\mu_1$).
\subsection{Key ingredients of the proposed RS scheme}
In this part, we highlight the key ingredients that constitute the novel RS transmission blocks designed for asymmetric MIMO BC and IC.
\subsubsection{Additional non-ZF-precoded private symbols}
To address the issues mentioned in the previous section, we propose an RS transmission block by allocating unequal power to the private messages, and by transmitting additional non-ZF-precoded private symbols to Rx2. These features provide more flexibility in balancing the capability of decoding common messages at the two receivers, and exploit the larger antenna array at Rx2. Specifically, for a $(4{,}2{,}3)$ MIMO BC, the RS scheme consists of $1$ ZF-precoded private symbol to Rx1 allocated with power $P^{A_1}$, $2$ ZF-precoded private symbols to Rx2 allocated with power $P^{A_2}$, and $1$ non-ZF-precoded private symbol to Rx2 allocated with power $P^{(A_2{-}\alpha_1)^+}$, while the common messages are multicast with the remaining power. As we will see later on, when the CSIT quality of Rx1 is not sufficiently good, choosing $A_2{>}\alpha_1$ is beneficial to the sum DoF, though it results in some level of interference at Rx$1$. This is because the private message spans $3$ dimensions at Rx2, while the interference at Rx1 spans only $2$ dimensions. In the extreme case of $\alpha_1{=}\alpha_2{=}0$, by choosing the power exponents $(A_1{,}A_2){=}(0{,}1)$, the transmitted signal consists of three private symbols intended for Rx2. This yields the sum DoF $3$, which is consistent with the maximum sum DoF with no CSIT.
In contrast, the RS scheme designed for the symmetric case \cite{Ges12} has equal power allocation, and no additional private symbols are transmitted. This is because unequal power allocation and delivering non-ZF-precoded private symbols are useless in enhancing the sum DoF when the two receivers have the same number of antennas. In the scheme proposed in \cite{xinping_mimo}, the transmitter delivers non-ZF-precoded private symbols to both receivers. This feature is useful because the overheard interference at each receiver can be exploited as side information when there is perfect delayed CSIT.
\subsubsection{Space-time transmission}
When the CSIT quality of Rx1 is not sufficiently good, we perform a space-time transmission using the proposed transmission block. Specifically, we employ the power exponents $(A_1{,}A_2){=}(\alpha_2{,}\alpha_1)$ for a fraction of the total time slots, while employing the power exponents $(A_1{,}A_2){=}(\alpha_2{,}1)$ for the rest of the time. Since choosing $A_2{>}\alpha_1$ is beneficial to the sum DoF when the CSIT quality of Rx1 is not sufficiently good, the proposed space-time transmission is carried out to fully exploit the spatial dimension at Rx2.
In contrast, when the CSIT qualities are fixed across the time line, the RS scheme designed for the symmetric case \cite{Ges12} does not employ space-time transmission. This is because choosing power exponents greater than the CSIT quality does not provide sum DoF gain. Besides, the Block-Markov implementation proposed in \cite{xinping_mimo} also spans the time-domain, but it requires perfect delayed CSIT to perform a sequential backward decoding. However, in our space-time implementation, a joint decoding is performed focusing on the aggregate received signals, and only current imperfect CSIT is used.
\subsubsection{Interference space identification}\label{sec:int_space}
We characterize the asymmetric MIMO IC into two cases. Case I has the antenna configuration $M_1{\geq}N_2$ (as a reminder, we consider $M_2{\geq}N_1$ and $N_2{\geq}N_1$). This setting yields a similar scenario to BC because the subspace spanned by the desired signal completely overlaps with the subspace spanned by the interference signal. Accordingly, we propose an RS scheme by inheriting the key features, i.e., transmitting additional non-ZF-precoded private symbols and the space-time implementation, of the RS scheme designed for the asymmetric MIMO BC.
Case II has the antenna configuration $M_1{\leq}N_2$. In this case, no ZF-precoded private symbols are delivered to Rx1. Moreover, by performing a row transformation to the channel matrices, we learn that at Rx2, the subspace spanned by the desired signal \emph{partially} overlaps with the signal sent by Tx1. Then, since the private symbols lying in the non-overlapping part do not impact the common-message-decodability at Rx2, we modify the RS scheme designed for Case I by allocating different power exponents to the private symbols that overlap with the signal sent by Tx1 and those that do not.
\subsection{Main Results on Achievable DoF Regions}
We state the achievable DoF regions as follows.
\begin{myprop}\label{prop:BC}
For a $(M{,}N_1{,}N_2)$ MIMO BC, supposing $N_1{\leq}N_2$, an achievable DoF region with imperfect CSIT is characterized by \eqref{eq:BCRegion} at the top of the next page, where $\alpha_{0{,}BC}$ and $\Phi_{BC}$ are defined in \eqref{eq:BCalpha0} and \eqref{eq:BCphi}, respectively.
\begin{figure*}
{\small\begin{IEEEeqnarray}{rrl}\label{eq:BCRegion}
L_0:&d_1{\leq}&\min\{M{,}N_1\},\IEEEyesnumber\IEEEyessubnumber\\
L_0^\prime: &d_2{\leq}&\min\{M{,}N_2\},\IEEEyessubnumber\\
L_1:&d_1{+}d_2{\leq}&\min\{M{,}N_2\}{+}\left[\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_2\}\right]\alpha_{0{,}BC},\IEEEyessubnumber\label{eq:BCsum}\\
L_2:&\frac{d_1}{\min\{M{,}N_1\}}{+}\frac{d_2}{\min\{M{,}N_2\}}{\leq}&1{+}\frac{\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_1\}}{\min\{M{,}N_2\}}\alpha_1,
\IEEEyessubnumber\label{eq:BCwsum}
\end{IEEEeqnarray}}
{\small\begin{IEEEeqnarray}{rcl}
\alpha_{0{,}BC}&{=}&\left\{\begin{array}{ll}\alpha_2&\text{\rm If }\Phi_{BC}{\leq}0;\\
\alpha_2{-}\frac{\Phi_{BC}}{\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_1\}}&\text{\rm Else if }\alpha_1{\geq}1{-}\alpha_2;\\
\frac{\alpha_1\alpha_2[\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_2\}]}{\left[\min\{M{,}N_2\}{-}\min\{M{,}N_1\}\right](1{-}\alpha_1){+}
\left[\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_2\}\right]\alpha_2}&\text{\rm Else if }\alpha_1{\leq}1{-}\alpha_2.\end{array}\right.\label{eq:BCalpha0}\\
\Phi_{BC}&{\triangleq}&\min\{M{,}N_2\}{-}\min\{M{,}N_1\}{+}[\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_2\}]\alpha_2{-}\nonumber\\&&
[\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_1\}]\alpha_1.\label{eq:BCphi}
\end{IEEEeqnarray}}
\hrulefill
\end{figure*}
\end{myprop}
\begin{figure}[!t]
\renewcommand{\captionfont}{\small}
\captionstyle{center}
\centering
\subfigure[$\Phi_{BC}{\leq}0$]{
\centering
\includegraphics[width=0.24\textwidth,height=3cm]{BCcase1}
\label{fig:BCcase1}
}
\subfigure[$\Phi_{BC}{\geq}0$]{
\centering
\includegraphics[width=0.24\textwidth,height=3cm]{BCcase2}
\label{fig:BCcase2}
}
\caption{Achievable DoF region of $(M{,}N_1{,}N_2)$ MIMO BC.}\label{fig:BCregion}
\end{figure}
Figure \ref{fig:BCregion} illustrates the DoF region stated in Proposition \ref{prop:BC}, where $\mathcal{P}_{ij}$ denotes the intersection of line $L_i$ and $L_j$. When $\alpha_1$ is large enough such that $\Phi_{BC}{\leq}0$, the weighted-sum constraint, i.e., \eqref{eq:BCwsum}, becomes inactive and the DoF region is formed by $\mathcal{P}_{10}$ and $\mathcal{P}_{10^\prime}$. Moreover, the DoF region with perfect CSIT and no CSIT can be reached with $\alpha_1{=}\alpha_2{=}1$ and $\alpha_1{=}0$ (${\forall}\alpha_2{\in}[0{,}1]$), respectively. When $N_1{=}N_2$, $L_1$ and $L_2$ in Proposition \ref{prop:BC} boil down to the sum DoF constraint in the symmetric antenna case \cite{JinyuanMIMO}.
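To make the statement concrete, the following short sketch (Python; the configuration $(M{,}N_1{,}N_2){=}(4{,}2{,}3)$ and the CSIT qualities $\alpha_1{=}\alpha_2{=}0.5$ are hypothetical) evaluates $\Phi_{BC}$, $\alpha_{0{,}BC}$ and the right-hand sides of $L_1$ and $L_2$ directly from \eqref{eq:BCphi}, \eqref{eq:BCalpha0} and \eqref{eq:BCRegion}:
\begin{verbatim}
# Hypothetical example: (M, N1, N2) = (4, 2, 3), alpha1 = alpha2 = 0.5.
M, N1, N2 = 4, 2, 3
alpha1, alpha2 = 0.5, 0.5

m12 = min(M, N1 + N2)
m1, m2 = min(M, N1), min(M, N2)

Phi = m2 - m1 + (m12 - m2)*alpha2 - (m12 - m1)*alpha1   # eq. (BCphi)
if Phi <= 0:                                            # eq. (BCalpha0)
    a0 = alpha2
elif alpha1 >= 1 - alpha2:
    a0 = alpha2 - Phi/(m12 - m1)
else:
    a0 = alpha1*alpha2*(m12 - m2)/((m2 - m1)*(1 - alpha1)
                                   + (m12 - m2)*alpha2)

print(Phi, a0)                   # 0.5, 0.25
print(m2 + (m12 - m2)*a0)        # L1: d1 + d2 <= 3.25
print(1 + (m12 - m1)/m2*alpha1)  # L2: d1/2 + d2/3 <= 1.33...
\end{verbatim}
Here $\Phi_{BC}{=}0.5{>}0$ and $\alpha_1{\geq}1{-}\alpha_2$, so the second branch of \eqref{eq:BCalpha0} applies and both $L_1$ and $L_2$ are active, as in Figure \ref{fig:BCcase2}.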
For a general $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC, as explained in Section \ref{sec:int_space}, we categorize the antenna configurations as Case I with $M_1{\geq}N_2$ and Case II with $M_1{\leq}N_2$. The antenna configuration in Case I yields a similar scenario to BC, while the antenna configuration in Case II implies a different scenario where Tx1 is not able to perform ZFBF, and in the received signals, some messages of Rx2 do not align with the messages intended for Rx1. Due to these facts, the transmission schemes are designed differently in these two cases and lead to different achievable DoF regions.
\begin{myprop}\label{prop:IC1}
For a $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC of Case I, an achievable DoF region with imperfect CSIT is characterized by \eqref{eq:ICRegion1} at the top of the next page, where $\alpha_{0{,}IC}$ and $\Phi_{IC}$ are defined in \eqref{eq:ICalpha0} and \eqref{eq:ICphi}, respectively.
\begin{figure*}
{\small\begin{IEEEeqnarray}{rrl}\label{eq:ICRegion1}
L_0:&d_1{\leq}&N_1,\IEEEyesnumber\IEEEyessubnumber\\
L_0^\prime:&d_2{\leq}&\min\{M_2{,}N_2\},\IEEEyessubnumber\\
L_1:&d_1{+}d_2{\leq}&\min\{M_2{,}N_2\}{+}\left[\min\{M_1{,}N_1{+}N_2\}{-}N_2\right]\alpha_{0{,}IC},\IEEEyessubnumber\label{eq:L1}\\
L_2:&\frac{d_1}{N_1}{+}\frac{d_2}{\min\{M_2{,}N_2\}}{\leq}&
1{+}\frac{\left[\min\{M_2{,}N_1{+}N_2\}{-}N_1\right]\alpha_1}{\min\{M_2{,}N_2\}},
\IEEEyessubnumber\label{eq:ICL2}
\end{IEEEeqnarray}}
{\small \begin{IEEEeqnarray}{rcl}
\alpha_{0{,}IC}&{=}&\left\{\begin{array}{ll}0&\text{\rm If }M_2{\leq}N_2\\
\alpha_2&\text{\rm Else if }\Phi_{IC}{\leq}0\\
\frac{\min\{M_2{,}N_1{+}N_2\}{-}N_2}{\min\{M_1{,}N_1{+}N_2\}{-}N_2}\alpha_1&\text{\rm Else if }\frac{\min\{M_1{,}N_1{+}N_2\}{-}\min\{M_2{,}N_1{+}N_2\}}{\min\{M_1{,}N_1{+}N_2\}{-}N_2}\alpha_1{\geq}1{-}\alpha_2\\
\alpha_2{-}\frac{\Phi_{IC}}{\min\{M_1{,}N_1{+}N_2\}{-}N_1}&\text{\rm Else if }\alpha_1{\geq}1{-}\alpha_2\\
\frac{\alpha_1\alpha_2\left[\min\{M_2{,}N_1{+}N_2\}{-}N_2\right]}{(N_2{-}N_1)(1{-}\alpha_1){+}(\min\{M_1{,}N_1{+}N_2\}{-}N_2)\alpha_2}&\text{\rm Else if }\alpha_1{\leq}1{-}\alpha_2
\end{array}\right.\label{eq:ICalpha0}\\
\Phi_{IC}&{=}&N_2{-}N_1{+}[\min\{M_1{,}N_1{+}N_2\}{-}N_2]\alpha_2{-}[\min\{M_2{,}N_1{+}N_2\}{-}N_1]\alpha_1.\label{eq:ICphi}
\end{IEEEeqnarray}}
\hrulefill
\end{figure*}
\end{myprop}
\begin{myprop}\label{prop:IC2}
For a $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC of Case II, an achievable DoF region with imperfect CSIT is characterized by \eqref{eq:ICRegion2} at the top of the next page, where $\mu_2{\triangleq}\min\{M_1{,}\min\{M_2{,}N_1{+}N_2\}{-}N_2{+}M_1{-}N_1^\prime\}$, $N_1^{\prime\prime}{\triangleq}\max\{M_1{,}N_1\}$ and $N_k^\prime{\triangleq}\min\{M_k{,}N_k\}{,}k{=}1{,}2$.
\begin{figure*}
{\small \begin{IEEEeqnarray}{rrl}\label{eq:ICRegion2}
L_0:&d_1{\leq}&N_1^\prime,\IEEEyesnumber\IEEEyessubnumber\\
L_0^\prime:&d_2{\leq}&N_2^\prime,\IEEEyessubnumber\\
L_1:&d_1{+}d_2{\leq}&N_2^\prime,\IEEEyessubnumber\label{eq:L1_2}\\
L_2:&\frac{d_1}{N_1^\prime}{+}\frac{d_2}{N_2^\prime{-}N_1{+}N_1^\prime}{\leq}&
\frac{N_2^\prime{+}\left[\min\{M_2{,}N_1{+}N_2\}{-}N_1\right]\alpha_1}{N_2^\prime{-}N_1{+}N_1^\prime},
\IEEEyessubnumber\label{eq:L2_2}\\
&(d_1{,}d_2)\,\text{\rm subject to }L_3,&\quad\text{\rm If }M_2{\geq}N_2{,}N_1{+}M_1{\leq}N_2\nonumber\\
&(d_1{,}d_2)\,\text{\rm subject to }L_4{,}L_5,&\quad\text{\rm If }M_2{\geq}N_2{,}N_1{+}M_1{\geq}N_2\nonumber
\end{IEEEeqnarray}
\begin{IEEEeqnarray}{rrl}
L_3:&\frac{d_1}{N_1^\prime}{+}\frac{d_2}{N_2{-}N_1^{\prime\prime}{+}N_1^\prime}{\leq}&
\frac{N_2{+}\left[N_2{-}N_1^{\prime\prime}\right]\alpha_1}{N_2{-}N_1^{\prime\prime}{+}N_1^\prime},
\IEEEyessubnumber\label{eq:L3}\\
L_4:&\frac{d_1}{M_1}{+}\frac{d_2}{N_2{-}N_1{+}M_1}{\leq}&\frac{N_2}{N_2{-}N_1{+}M_1}{+}
\left[\frac{N_2{-}N_1^{\prime\prime}}{N_2{-}N_1{+}M_1}{+}\frac{\mu_2(M_1{+}N_1{-}N_2)}{M_1(N_2{-}N_1{+}M_1)}\right]\alpha_1,
\IEEEyessubnumber\label{eq:L4}\\
L_5:&d_1{+}\frac{d_2}{2}{\leq}&\frac{1}{2}(M_1{+}N_1{+}(N_2{-}N_1^{\prime\prime})\alpha_1),
\IEEEyessubnumber\label{eq:L5}
\end{IEEEeqnarray}}
\hrulefill
\end{figure*}
\end{myprop}
\begin{table*}[t]
\renewcommand{\arraystretch}{1.3}
\vspace{.6em}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Conditions & Active Constraints & Corner Points & Optimality\\
\hline
Case I.1: $M_1{\geq}N_2${,}$M_2{\leq}N_2$ & $L_1$, $L_2$ & $\mathcal{P}_{20}$, $\mathcal{P}_{12}$, $\mathcal{P}_{10^\prime}$ & Yes\\
\hline
Case I.2: $M_1{\geq}N_2${,}$M_2{\geq}N_2$ & & &\\
$\quad$ If $\Phi_{IC}{\leq}0$: & $L_1$ & $\mathcal{P}_{10}$, $\mathcal{P}_{10^\prime}$ & Yes\\
$\quad$ If $\Phi_{IC}{\geq}0$: & $L_1$, $L_2$ & $\mathcal{P}_{20}$, $\mathcal{P}_{12}$, $\mathcal{P}_{10^\prime}$ & Unknown\\
\hline
Case II.1: $M_1{\leq}N_2${,}$M_2{\leq}N_2$ & $L_1$, $L_2$ & $\mathcal{P}_{20}$, $\mathcal{P}_{12}$, $\mathcal{P}_{10^\prime}$ & Yes, if $N_1{\leq}M_1{\leq}N_2$\\
\hline
Case II.2.a: $M_1{\leq}N_2${,}$M_2{\geq}N_2$ and $M_1{+}N_1{\leq}N_2$ & & & Unknown\\
$\quad$ If $\alpha_1{\leq}\frac{M_1{-}N_1^\prime}{\mu_2}$ & $L_1$, $L_2$, $L_3$ & $\mathcal{P}_{20}$, $\mathcal{P}_{23}$, $\mathcal{P}_{13}$, $\mathcal{P}_{10^\prime}$ & \\
$\quad$ If $\alpha_1{\geq}\frac{M_1{-}N_1^\prime}{\mu_2}$ & $L_1$, $L_3$ & $\mathcal{P}_{30}$, $\mathcal{P}_{13}$, $\mathcal{P}_{10^\prime}$ & \\
\hline
Case II.2.b: $M_1{\leq}N_2${,}$M_2{\geq}N_2$ and $M_1{+}N_1{\geq}N_2$ & & & Unknown\\
$\quad$ If $\alpha_1{\leq}\frac{M_1{-}N_1^\prime}{\mu_2}$ & $L_1$, $L_2$, $L_4$ & $\mathcal{P}_{20}$, $\mathcal{P}_{24}$, $\mathcal{P}_{14}$, $\mathcal{P}_{10^\prime}$ &\\
$\quad$ If $\frac{M_1{-}N_1^\prime}{\mu_2}{\leq}\alpha_1{\leq}\frac{N_2{-}N_1}{\mu_2{+}N_2{-}N_1^{\prime\prime}}$ & $L_1$, $L_4$, $L_5$ & $\mathcal{P}_{50}$, $\mathcal{P}_{54}$, $\mathcal{P}_{14}$, $\mathcal{P}_{10^\prime}$ &\\
$\quad$ If $\alpha_1{\geq}\frac{N_2{-}N_1}{\mu_2{+}N_2{-}N_1^{\prime\prime}}$ & $L_1$, $L_5$ & $\mathcal{P}_{50}$, $\mathcal{P}_{15}$, $\mathcal{P}_{10^\prime}$ &\\
\hline
\end{tabular}
\caption{The DoF regions of $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC: active constraints, corner points and optimality.}\label{tab:ICregions}
\vspace{-0mm}
\end{table*}
\begin{figure}[!t]
\renewcommand{\captionfont}{\small}
\captionstyle{center}
\centering
\subfigure[Case I.1 and II.1: $M_2{\leq}N_2$]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_M2leqN2}
\label{fig:M2leqN2}
}
\subfigure[Case I.2 and $\Phi_{IC}{\leq}0$, where $L_2$ inactive]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_I2b}
\label{fig:I2b}
}
\subfigure[Case I.2 and $\Phi_{IC}{\geq}0$]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_I2a}
\label{fig:I2a}
}
\subfigure[Case II.2.a: If $\alpha_1{\leq}\frac{M_1{-}N_1^\prime}{\mu_2}$ (only valid when $N_1{\leq}M_1$)]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_II2a1}
\label{fig:II2a1}
}\\
\subfigure[Case II.2.a: If $\alpha_1{\geq}\frac{M_1{-}N_1^\prime}{\mu_2}$, $L_2$ inactive]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_II2a2_v2}
\label{fig:II2a2}
}
\subfigure[Case II.2.b: If $\alpha_1{\leq}\frac{M_1{-}N_1^\prime}{\mu_2}$ (only valid when $N_1{\leq}M_1$), $L_5$ inactive]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_II2b1_v2}
\label{fig:II2b1}
}
\subfigure[Case II.2.b: If $\frac{M_1{-}N_1^\prime}{\mu_2}{\leq}\alpha_1{\leq}\frac{N_2{-}N_1}{\mu_2{+}N_2{-}N_1^{\prime\prime}}$, $L_2$ inactive]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_II2b2_v2}
\label{fig:II2b2}
}
\subfigure[Case II.2.b: If $\alpha_1{\geq}\frac{N_2{-}N_1}{\mu_2{+}N_2{-}N_1^{\prime\prime}}$, $L_2$ and $L_4$ inactive]{
\centering
\includegraphics[width=0.2\textwidth,height=2.5cm]{IC_II2b3_v2}
\label{fig:II2b3}
}
\caption{Achievable DoF regions in $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC.}\label{fig:ICregions}
\end{figure}
For clarity, we summarize the active constraints and the resulting corner points in Table \ref{tab:ICregions} for different antenna configurations. Figure \ref{fig:ICregions} illustrates the DoF regions in Propositions \ref{prop:IC1} and \ref{prop:IC2}, where $\mathcal{D}^P$ and $\mathcal{D}^N$ stand for the optimal DoF region when there is perfect CSIT \cite{JarfarMIMOIA2pair} and no CSIT \cite{zhunoCSIT}, respectively.
When $M_1{\geq}N_2$, the DoF region is a function of $\alpha_1$ and $\alpha_2$ according to Proposition \ref{prop:IC1}. When $\alpha_1{=}0$ (${\forall}\alpha_2{\in}[0{,}1]$), the DoF region reduces to the DoF region with no CSIT. The values of $\alpha_1$ and $\alpha_2$ that lead to the DoF region with perfect CSIT differ according to the antenna configurations, namely $\alpha_1{=}\alpha_2{=}1$ if $N_2{\leq}M_1{\leq}M_2$; $\alpha_1{=}1$, $\alpha_2{\geq}\frac{\min\{M_2{,}N_1{+}N_2\}{-}N_2}{\min\{M_1{,}N_1{+}N_2\}{-}N_2}$ if $N_2{\leq}M_2{\leq}M_1$; and $\alpha_1{=}1$, ${\forall}\alpha_2{\in}[0{,}1]$ if $M_2{\leq}N_2$.
When $M_1{\leq}N_2$, we can see that the DoF region is only a function of $\alpha_1$ according to Proposition \ref{prop:IC2}; the DoF regions with perfect and no CSIT are reached when $\alpha_1{=}1$ and $\alpha_1{=}0$ (${\forall}\alpha_2{\in}[0{,}1]$), respectively.
Moreover, if $N_1{=}N_2{=}N$, $M_1{=}M_2{=}M$ and $M{\geq}2N$, $L_1$ and $L_2$ in Proposition \ref{prop:IC1} boil down to the sum DoF constraint in the symmetric case \cite{Tasos}.
\subsection{Discussion on outer-bound}
\subsubsection{MIMO BC}
An outer-bound of the DoF region of MIMO BC with imperfect CSIT is stated in the following proposition.
\begin{myprop}\label{prop:BC_outer}
For a $(M{,}N_1{,}N_2)$ MIMO BC, supposing $N_1{\leq}N_2$, the DoF region with imperfect CSIT lies in \eqref{eq:BC_outer} at the top of the next page.
\begin{figure*}
{\small\begin{IEEEeqnarray}{rrl}\label{eq:BC_outer}
d_1&{\leq}&\min\{M{,}N_1\},\IEEEyesnumber\IEEEyessubnumber\\
d_2&{\leq}&\min\{M{,}N_2\},\IEEEyessubnumber\\
d_1{+}d_2&{\leq}&\min\{M{,}N_2\}{+}\left[\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_2\}\right]\alpha_2,\IEEEyessubnumber\label{eq:outer_sum}\\
\frac{d_1}{\min\{M{,}N_1\}}{+}\frac{d_2}{\min\{M{,}N_2\}}&{\leq}&1{+}\frac{\min\{M{,}N_1{+}N_2\}{-}\min\{M{,}N_1\}}{\min\{M{,}N_2\}}\alpha_1,
\IEEEyessubnumber\label{eq:outer_wsum}
\end{IEEEeqnarray}}
\hrulefill
\end{figure*}
\end{myprop}
\begin{proof}
See Appendix C.
\end{proof}
The achievable DoF region stated in Proposition \ref{prop:BC} and the outer-bound stated in Proposition \ref{prop:BC_outer} only differ in the sum DoF inequality. It can be verified that the optimality of the achievable DoF region stated in Proposition \ref{prop:BC} holds in two cases, i.e., $\Phi_{BC}{\leq}0$ and $M{\leq}N_2$. In the first case, the optimal sum DoF is\footnote{Note that when $\Phi_{BC}{\leq}0$, one has $M{\geq}N_2$.} $N_2{+}\min\{N_1{,}M{-}N_2\}\alpha_2$. In the second case, the optimal sum DoF is $N_2$. Moreover, when $M{\leq}N_2$, the optimal DoF region with imperfect CSIT coincides with the optimal DoF region with a mixture of perfect delayed CSIT and imperfect current CSIT \cite{xinping_mimo}, which implies the uselessness of the delayed CSIT under the antenna configuration $M{\leq}N_2$.
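Indeed, when $\Phi_{BC}{\leq}0$, \eqref{eq:BCalpha0} gives $\alpha_{0{,}BC}{=}\alpha_2$, so that the achievable sum DoF constraint \eqref{eq:BCsum} coincides with the outer-bound \eqref{eq:outer_sum}; since $M{\geq}N_2$ in this case, both read
\begin{IEEEeqnarray}{rcl}
d_1{+}d_2&{\leq}&N_2{+}\left[\min\{M{,}N_1{+}N_2\}{-}N_2\right]\alpha_2\nonumber\\
&{=}&N_2{+}\min\{N_1{,}M{-}N_2\}\alpha_2.\nonumber
\end{IEEEeqnarray}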
\subsubsection{MIMO IC}
Allowing the transmitters to cooperate produces a $(M_1{+}M_2{,}N_1{,}N_2)$ MIMO BC. Then, by replacing $M$ with $M_1{+}M_2$ in \eqref{eq:BC_outer}, we obtain an outer-bound of the DoF region of the $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC with imperfect CSIT. We discuss the tightness of this outer-bound following the cases presented in Table \ref{tab:ICregions}.
\begin{itemize}
\item Case I.1, $M_1{\geq}N_2$ and $M_2{\leq}N_2$: In this case, the obtained outer-bound is loose. However, the optimality of the achievable DoF region stated in Proposition \ref{prop:IC1} holds as it is consistent with the optimal DoF region of a mixture of perfect delayed CSIT and imperfect current CSIT \cite{xinping_mimo}.
\item Case I.2, $M_1{\geq}N_2$ and $M_2{\geq}N_2$: In this case, when $\Phi_{IC}{\leq}0$, the obtained outer-bound is tight and the achievable DoF region stated in Proposition \ref{prop:IC1} is optimal; otherwise, the outer-bound is loose and the optimal DoF region is unknown.
\item Case II.1, $M_1{\leq}N_2$ and $M_2{\leq}N_2$: In this case, the obtained outer-bound is loose. However, when $N_1{\leq}M_1{\leq}N_2$, the optimality of the achievable DoF region stated in Proposition \ref{prop:IC2} holds as it is consistent with the optimal DoF region of a mixture of perfect delayed CSIT and imperfect current CSIT \cite{xinping_mimo}.
\item Case II.2, $M_1{\leq}N_2$ and $M_2{\geq}N_2$: In this case, the obtained outer-bound is loose, and the optimal DoF region is unknown.
\end{itemize}
Next, we will show the achievability proof of Proposition \ref{prop:BC}, \ref{prop:IC1} and \ref{prop:IC2} in Section \ref{sec:BC}, \ref{sec:icI} and \ref{sec:icII}, respectively, by proposing suitable RS schemes with proper power allocation.
\section{Introduction}\label{sec:Intro}
\input{IntroMIMOnetwork}
\section{System Model}\label{sec:SM}
\input{SystemModel}
\section{Prior Art}\label{sec:pa}
\input{PriorArt}
\section{Main Contributions and Results}\label{sec:MR}
\input{MainResults_2col}
\section{Achievability Proof: Broadcast Channel}\label{sec:BC}
\input{achBC_2col}
\section{Achievability Proof: Interference Channel}\label{sec:IC}
\input{achIC_2col}
\section{Conclusion}\label{sec:conclusion}
In this paper, for the first time in the literature, we characterize achievable DoF regions of a general two-receiver $(M{,}N_1{,}N_2)$ MIMO BC and $(M_1{,}M_2{,}N_1{,}N_2)$ MIMO IC with imperfect CSIT, whose error decays with the SNR. Without loss of generality, we consider $N_1{\leq}N_2$. We propose Rate-Splitting schemes suitable for the asymmetric antenna deployment. In BC, compared to the RS scheme designed for the symmetric case, the new ingredients of the scheme lie in 1) delivering additional non-ZF-precoded private symbols to Rx2, and 2) a Space-Time implementation. In IC, the scheme proposed for BC is modified according to a row transformation of the channel matrices. Such an operation allows us to identify the signal space where the transmitted signals interfere with each other and to derive a proper power allocation policy to achieve a satisfactory DoF region.
We also derive an outer-bound for the DoF region of MIMO BC and IC using the aligned image set and the sliding window lemma. Using this outer-bound and the optimal DoF region when there is a mixture of the imperfect current CSIT and perfect delayed CSIT, we show that our proposed achievable DoF region is optimal under some antenna configurations and CSIT qualities. Remarkably, the maximal sum DoF is achievable in the case $\Phi_{BC}{\leq}0$ and $\Phi_{IC}{\leq}0$. This implies that Rx$1$ (i.e., the user with the smaller number of antennas) needs a greater CSIT quality than Rx$2$ (i.e., the user with the greater number of antennas). This fact contrasts with the symmetric case where the maximal sum DoF is achieved with equal CSIT qualities. On the other hand, if the Rx$1$ does not have a good enough CSIT quality, sending more streams of private messages to Rx$2$ (greater than the dimension of the null space) with the power higher than the CSIT quality is beneficial to the sum DoF performance. This contrasts with the symmetric case where unequal power allocation does not provide sum DoF gain.
Finally, it is noted that studying the DoF of MIMO networks with imperfect CSIT has attracted research attention. While this paper was under review, another work was posted on arXiv on 3rd April, 2016 by Yuan and Jafar \cite{elevated_mux}. The authors investigated the same problem, but focused on the two-receiver MIMO IC only, and no outer-bound was provided. Compared to their scheme, so-called elevated multiplexing, our RS approach has a DoF gain in the case $N_1{\leq}N_2{\leq}\min\{M_1{,}M_2\}$, especially with the space-time transmission, while it suffers from a DoF loss in the case $N_1{<}M_1{\leq}N_2{<}M_2$. The advantage of our scheme lies in the unified framework, where the precoders, the number of private symbols and the power allocation policy are dynamically determined by the antenna configuration and the CSIT qualities. Besides, by assuming that the common message only carries information intended for Rx1 or Rx2, we obtain two DoF pairs, which makes it convenient to find a DoF region. One interesting direction for future work would be to study how to harmonize both approaches to further tighten the achievability and outer bounds.
\section*{Appendices}
\input{AppMIMOnetwork_2col}
\bibliographystyle{IEEEtran}
\subsubsection{$M{=}N_1{+}N_2$}
The derivation follows in the footsteps of \cite{Davoodi14}. There are three main steps. The first step is to obtain a canonical form of the MIMO system, the second step is to define the functional dependence and the aligned image set, while the last step is to bound the probability that two realizations of one user's observation provide the same image in the other user's observation.
\textbf{Step 1:} Let us write the received signals of the MIMO BC as
\begin{equation}
\left[\begin{array}{c}
\mathbf{y}_1 \\
\mathbf{y}_2
\end{array}\right]{=}\left[\begin{array}{cc}
\mathbf{H}_{11}^H & \mathbf{H}_{12}^H \\
\mathbf{H}_{21}^H & \mathbf{H}_{22}^H
\end{array}\right]\left[\begin{array}{c}
\mathbf{s}_1 \\
\mathbf{s}_2
\end{array}\right]{+}\left[\begin{array}{c}
\mathbf{n}_1 \\
\mathbf{n}_2
\end{array}\right],
\end{equation}
where $\mathbf{s}_1$ is the transmitted signal of the first $N_1$ antennas, while $\mathbf{s}_2$ is the transmitted signal of the last $N_2$ antennas. Note that in this section, $\mathbf{H}_{k1}$ denotes the channel matrix from the first $N_1$ transmit antennas to user $k$, while $\mathbf{H}_{k2}$ denotes the channel matrix from the last $N_2$ transmit antennas to user $k$. Assuming there is perfect CSIT for user $1$, the canonical form writes as
\begin{equation}
\left[\begin{array}{c}
\tilde{\mathbf{y}}_1 \\
\tilde{\mathbf{y}}_2
\end{array}\right]{=}\left[\begin{array}{cc}
\mathbf{I}_{N_1} & \mathbf{0}_{N_1{\times}N_2} \\
\mathbf{G}_2 & \mathbf{I}_{N_2}
\end{array}\right]\left[\begin{array}{c}
\tilde{\mathbf{s}}_1 \\
\tilde{\mathbf{s}}_2
\end{array}\right]{+}\left[\begin{array}{c}
\mathbf{n}_1 \\
\mathbf{n}_2
\end{array}\right]\label{eq:canonical_user1},
\end{equation}
where $\mathbf{G}_2{=}\mathbf{H}_{21}^H\mathbf{H}_{11}^{-H}$, $\tilde{\mathbf{s}}_1{=}\mathbf{H}_{11}^H\mathbf{s}_1{+}\mathbf{H}_{12}^H\mathbf{s}_2$ and $\tilde{\mathbf{s}}_2{=}\left(\mathbf{H}_{22}^H{-}\mathbf{G}_2\mathbf{H}_{12}^H\right)\mathbf{s}_2$.
Then, denoting $\bar{\mathbf{s}}_1{\triangleq}\tilde{\mathbf{s}}_1$ and $\bar{\mathbf{s}}_2{\triangleq}\tilde{\mathbf{s}}_2$ as the discretized versions of the transmitted signals, and $\bar{\mathbf{y}}_1$ and $\bar{\mathbf{y}}_2$ as the discretized versions of the received signals that capture the effect of noise, we have
\begin{IEEEeqnarray}{rcl}
\bar{\mathbf{y}}_1{=}\bar{\mathbf{s}}_1{,}&\quad&
\bar{\mathbf{y}}_2{=}\lfloor\mathbf{G}_2\bar{\mathbf{s}}_1\rfloor{+}\bar{\mathbf{s}}_2{.}
\end{IEEEeqnarray}
Then, enhancing user $1$ with the message of user $2$, we have
\begin{IEEEeqnarray}{rcl}
nR_1&{\leq}&I(W_1{;}\bar{\mathbf{y}}_1^n{|}W_2{,}\mathbf{G}_2)\nonumber\\
&{=}&H(\bar{\mathbf{y}}_1^n{|}W_2{,}\mathbf{G}_2){+}o(\log P){,}\\
nR_2&{\leq}&I(W_2{;}\bar{\mathbf{y}}_2^n{|}\mathbf{G}_2)\nonumber\\
&{\leq}&nN_2\log P{-}H(\bar{\mathbf{y}}_2^n{|}W_2{,}\mathbf{G}_2){,}\label{eq:nR2_sum}\\
nR_1{+}nR_2&{\leq}&nN_2\log P{+}H(\bar{\mathbf{y}}_1^n{|}W_2{,}\mathbf{G}_2){-}\nonumber\\
&&H(\bar{\mathbf{y}}_2^n{|}W_2{,}\mathbf{G}_2)\\
&{\leq}&nN_2\log P{+}H(\bar{\mathbf{y}}_1^n{,}\bar{\mathbf{y}}_2^n{|}W_2{,}\mathbf{G}_2){-}\nonumber\\
&&H(\bar{\mathbf{y}}_2^n{|}W_2{,}\mathbf{G}_2)\\
&{=}&nN_2\log P{+}H(\bar{\mathbf{y}}_1^n{|}\bar{\mathbf{y}}_2^n{,}W_2{,}\mathbf{G}_2)\\
&{\leq}&nN_2\log P{+}\sum_{i{=}1}^{N_1}H(\bar{s}_{1{,}i}^n{|}\bar{y}_{2{,}i}^n{,}\mathbf{G}_2).\label{eq:setcardi}
\end{IEEEeqnarray}
\textbf{Step 2: } Functional dependence and aligned image set.\footnote{The code block length $n$ is omitted in steps 2 and 3 for convenience.}
For a given channel realization, there are multiple vectors $[\bar{s}_{1{,}1}{,}\cdots{,}\bar{s}_{1{,}N_1}{,}\bar{s}_{2{,}i}]$ that cast the same image in $\bar{y}_{2{,}i}$. Thus, the mapping from $\bar{s}_{1{,}i}$ to $[\bar{s}_{1{,}1}{,}\cdots{,}\bar{s}_{1{,}i{-}1}{,}\bar{s}_{1{,}i{+}1}{,}\cdots{,}\bar{s}_{1{,}N_1}{,}\bar{s}_{2{,}i}]$ is random. In the following discussion, we fix the minimal mapping, i.e., the one that leads to the smallest number of images.
Consequently, the observation $\bar{y}_{2{,}i}$ can be expressed as a function of $\bar{s}_{1{,}i}$, i.e., $\bar{y}_{2{,}i}(\bar{s}_{1{,}i}{,}\mathbf{G}_2)$. With this notation, let us define the aligned image set as the set of all $s_{1{,}i}$ that have the same image in $\bar{y}_{2{,}i}$, i.e.,
\begin{equation}
\mathcal{S}_v(\mathbf{G}_2){\triangleq}\left\{x{\in}\{s_{1{,}i}\}{:} \bar{y}_{2{,}i}(x{,}\mathbf{G}_2){=}\bar{y}_{2{,}i}(v{,}\mathbf{G}_2)\right\}{.}
\end{equation}
Then, following the derivation in \cite{Davoodi14}, \eqref{eq:setcardi} is bounded by
\begin{equation}
H(\bar{s}_{1{,}i}{|}\bar{y}_{2{,}i}{,}\mathbf{G}_2){\leq}\log \mathbb{E}\left[|\mathcal{S}_{\bar{s}_{1{,}i}}(\mathbf{G}_2)|\right],
\end{equation}
where $|\mathcal{S}_{\bar{s}_{1{,}i}}(\mathbf{G}_2)|$ is the cardinality of $\mathcal{S}_{\bar{s}_{1{,}i}}(\mathbf{G}_2)$.
\textbf{Step 3:} Bounding the probability that two realizations of $\bar{s}_{1{,}i}$ provide the same image in $\bar{y}_{2{,}i}$.
Let us consider two realizations of $\bar{s}_{1{,}i}$, e.g., $x$ and $x^\prime$, which map to $\left\{v_{1{,}j{:}{\forall}j{\neq}i}{,}u\right\}$ and $\left\{v_{1{,}j{:}{\forall}j{\neq}i}^\prime{,}u^\prime\right\}$, respectively. Then, if they produce the same image in $\bar{y}_{2{,}i}$, we have \eqref{eq:image} at the top of the next page.
\begin{figure*}
\begin{IEEEeqnarray}{rcl} \label{eq:image}
\lfloor g_ix\rfloor{+}\sum_{j{=}1{,}j{\neq}i}^{N_1}\lfloor g_jv_{1{,}j}\rfloor{+}u&{=}& \lfloor g_ix^\prime\rfloor{+}\sum_{j{=}1{,}j{\neq}i}^{N_1}\lfloor g_jv_{1{,}j}^\prime\rfloor{+}u^\prime\\
\Rightarrow g_i(x{-}x^\prime)&{\in}&\sum_{j{=}1{,}j{\neq}i}^{N_1}\lfloor g_jv_{1{,}j}^\prime\rfloor{-}\lfloor g_jv_{1{,}j}\rfloor{+}u^\prime{-}u{+}(-1{,}1).\\
\text{\rm and }\Rightarrow g_l(v_l{-}v_l^\prime)&{\in}&\sum_{j{=}1{,}j{\neq}i{,}j{\neq}l}^{N_1}\lfloor g_jv_{1{,}j}^\prime\rfloor{-}\lfloor g_jv_{1{,}j}\rfloor{+}\lfloor g_ix^\prime\rfloor{-}\lfloor g_ix\rfloor {+} u^\prime{-}u{+}(-1{,}1){,}{\forall}l{\neq}i.
\end{IEEEeqnarray}
\hrulefill
\end{figure*}
Next, let us define
\begin{equation}
L{\triangleq}\max_{{\forall}l{\neq}i}\{|v_l{-}v_l^\prime|{,}|x{-}x^\prime|\}{.}
\end{equation}
Hence, the value of $g_j$, $j{=}1{,}\cdots{,}N_1$, must lie within an interval of length no more than $\frac{2}{L}$. Therefore, the probability that the images due to $x$ and $x^\prime$ align at $\bar{y}_{2{,}i}$ is bounded as follows
\begin{equation}
\mathbb{P}\left(x{\in}\mathcal{S}_{x^\prime}(\mathbf{G}_2)\right){\leq}f_{\max{,}2}^n\prod_{t{=}1}^{n}\frac{2}{L(t)}{,}
\end{equation}
where $L$ is a time-varying parameter, and the time index $t$ is omitted in the above derivations for simplicity. Moreover, $f_{\max{,}2}{=}O(P^{\alpha_2})$ is a function of the CSIT quality defined in Section \ref{sec:SM}. Consequently, $H(\bar{s}_{1{,}i}^n{|}\bar{y}_{2{,}i}^n{,}\mathbf{G}_2)$ is bounded by $n\alpha_2\log P$. This leads to the sum DoF constraint $d_1{+}d_2{\leq}N_2{+}N_1\alpha_2$.
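As a sanity check of the interval argument, the following minimal sketch (Python with NumPy; the density peak $f_{\max}$, the support of $g$, the realizations $x{,}x^\prime$ and the fixed offset $c$ are all hypothetical) estimates the probability that $g(x{-}x^\prime)$ falls in a fixed interval of length $2$, as in \eqref{eq:image}, and compares it with the bound $2f_{\max}/L$:
\begin{verbatim}
import numpy as np

# Hypothetical scalar illustration of P <= 2 f_max / L for n = 1.
rng = np.random.default_rng(0)
f_max = 4.0                                  # peak of the density of g
g = rng.uniform(1.0, 1.0 + 1/f_max, 10**6)   # uniform density equal to f_max
x, xp = 425, 25                              # L = |x - x'| = 400
c = 444.0                                    # offset set by the other symbols
p_hat = np.mean(np.abs(g*(x - xp) - c) < 1)  # g(x-x') in an interval of length 2
print(p_hat, 2*f_max/abs(x - xp))            # ~0.02 vs the bound 0.02
\end{verbatim}
For a uniform density the bound holds with equality, which is why the empirical estimate should match $2f_{\max}/L{=}0.02$ here.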
For the weighted-sum inequality, the derivation only differs in the first step. Specifically, let us write a canonical form by switching the roles of user $1$ and user $2$ as
\begin{equation}
\left[\begin{array}{c}
\tilde{\mathbf{y}}_2 \\
\tilde{\mathbf{y}}_1
\end{array}\right]{=}\left[\begin{array}{cc}
\mathbf{I}_{N_2} & \mathbf{0}_{N_2{\times}N_1} \\
\mathbf{G}_1 & \mathbf{I}_{N_1}
\end{array}\right]\left[\begin{array}{c}
\tilde{\mathbf{s}}_2 \\
\tilde{\mathbf{s}}_1
\end{array}\right]{+}\left[\begin{array}{c}
\mathbf{n}_2 \\
\mathbf{n}_1
\end{array}\right],
\end{equation}
where $\mathbf{G}_1{=}\mathbf{H}_{12}^H\mathbf{H}_{22}^{-H}$, $\tilde{\mathbf{s}}_2{=}\mathbf{H}_{22}^H\mathbf{s}_2{+}\mathbf{H}_{21}^H\mathbf{s}_1$ and $\tilde{\mathbf{s}}_1{=}\left(\mathbf{H}_{11}^H{-}\mathbf{G}_1\mathbf{H}_{21}^H\right)\mathbf{s}_1$.
Then, denoting $\bar{\mathbf{y}}_1$ and $\bar{\mathbf{y}}_2$ as the discretized versions of the received signals that capture the effect of noise, and $\bar{\mathbf{s}}_1$ and $\bar{\mathbf{s}}_2$ as the discretized versions of the transmitted signals, we have
\begin{IEEEeqnarray}{rcl}
\bar{\mathbf{y}}_2{=}\bar{\mathbf{s}}_2{,}&\quad&
\bar{\mathbf{y}}_1{=}\lfloor\mathbf{G}_1\bar{\mathbf{s}}_2\rfloor{+}\bar{\mathbf{s}}_1{.}
\end{IEEEeqnarray}
Then, enhancing user $2$ with the message of user $1$, we have
\begin{IEEEeqnarray}{rcl}
nR_1&{\leq}&I(W_1{;}\bar{\mathbf{y}}_1^n{|}\mathbf{G}_1)\nonumber\\
&{\leq}&nN_1\log P{-}H(\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1){,}\\
nR_2&{\leq}&I(W_2{;}\bar{\mathbf{y}}_2^n{|}W_1{,}\mathbf{G}_1)\nonumber\\
&{=}&H(\bar{\mathbf{y}}_2^n{|}W_1{,}\mathbf{G}_1){-}o(\log P){,}\\
n(&N_2&R_1{+}N_1R_2)\nonumber\\
&{\leq}&nN_1N_2\log P{+}N_1H(\bar{\mathbf{y}}_2^n{|}W_1{,}\mathbf{G}_1){-}\nonumber\\
&&N_2H(\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1)\\
&{\leq}&nN_1N_2\log P{+} \sum_{j{=}1}^{N_2}\left(H(\bar{\mathbf{y}}_{2{,}j{:}j{+}N_1{-}1}^n{|}W_1{,}\mathbf{G}_1){-}\right.\nonumber\\
&&\left.H(\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1)\right) \label{eq:sliding}\\
&{\leq}&nN_1N_2\log P{+} \sum_{j{=}1}^{N_2}\left(H(\bar{\mathbf{y}}_{2{,}j{:}j{+}N_1{-}1}^n{,}\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1){-}\right.\nonumber\\
&& \left. H(\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1)\right)\\
&{=}&nN_1N_2\log P{+}\sum_{j{=}1}^{N_2}H(\bar{\mathbf{y}}_{2{,}j{:}j{+}N_1{-}1}^n{|}\bar{\mathbf{y}}_1^n{,}W_1{,}\mathbf{G}_1)\\
&{\leq}&nN_1N_2\log P{+}\sum_{j{=}1}^{N_2}\sum_{i{=}1}^{N_1}H(\bar{s}_{2{,}j{+}i{-}1}^n{|}\bar{y}_{1{,}i}^n{,}\mathbf{G}_1).\label{eq:setcardi_ws}
\end{IEEEeqnarray}
Inequality \eqref{eq:sliding} follows from the sliding window lemma introduced in \cite[Lemma 1]{Borzoo_Kuser}. The notation $\bar{\mathbf{y}}_{2{,}j{:}j{+}N_1{-}1}$ stands for the $j$th through $(j{+}N_1{-}1)$th entries of $\bar{\mathbf{y}}_2$, where the index $j{+}N_1{-}1$ is taken modulo $N_2$.
Following steps 2 and 3, one can show that $H(\bar{s}_{2{,}j{+}i{-}1}^n{|}\bar{y}_{1{,}i}^n{,}\mathbf{G}_1){\leq}n\alpha_1\log P$, which leads to the weighted-sum DoF constraint $\frac{d_1}{N_1}{+}\frac{d_2}{N_2}{\leq}1{+}\alpha_1$.
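As a consistency check, plugging $M{=}N_1{+}N_2$ into the weighted-sum constraint \eqref{eq:outer_wsum} of Proposition \ref{prop:BC_outer} gives
\begin{IEEEeqnarray}{rcl}
\frac{d_1}{N_1}{+}\frac{d_2}{N_2}&{\leq}&1{+}\frac{(N_1{+}N_2){-}N_1}{N_2}\alpha_1{=}1{+}\alpha_1{,}\nonumber
\end{IEEEeqnarray}
which coincides with the bound just derived.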
\subsubsection{$N_2{\leq}M{\leq}N_1{+}N_2$}
In this case, the linear spaces spanned by the channel matrices of the two users overlap with each other, and the dimension of the overlapping part is $N_1{+}N_2{-}M$. Hence, we perform a linear transformation to the received signals as follows
\begin{IEEEeqnarray}{rcl}
&\mathbf{F}_1\mathbf{y}_1{=}\hat{\mathbf{y}}_1{=}\left[\begin{array}{c}
\hat{\mathbf{y}}_{1a} \\
\hat{\mathbf{y}}_{1b}
\end{array}\right]{=}\left[\begin{array}{c}
\mathbf{F}_{1a}\mathbf{H}_1^H\mathbf{s}{+} \mathbf{F}_{1a}\mathbf{n}_1 \\
\mathbf{F}_{1b}\mathbf{H}_1^H\mathbf{s}{+} \mathbf{F}_{1b}\mathbf{n}_1
\end{array}\right]{,}&\IEEEyessubnumber\\
&\mathbf{F}_2\mathbf{y}_2{=}\hat{\mathbf{y}}_2{=}\left[\begin{array}{c}
\hat{\mathbf{y}}_{2a} \\
\hat{\mathbf{y}}_{2b}
\end{array}\right]{=}\left[\begin{array}{c}
\mathbf{F}_{2a}\mathbf{H}_2^H\mathbf{s}{+} \mathbf{F}_{2a}\mathbf{n}_2 \\
\mathbf{F}_{2b}\mathbf{H}_2^H\mathbf{s}{+} \mathbf{F}_{2b}\mathbf{n}_2
\end{array}\right]{,}&\IEEEyessubnumber
\end{IEEEeqnarray}
where $\mathbf{F}_k$ is an $N_k{\times}N_k$ full-rank matrix, $\mathbf{F}_{1a}$ and $\mathbf{F}_{2a}$ are the first $M{-}N_2$ rows of $\mathbf{F}_1$ and the first $M{-}N_1$ rows of $\mathbf{F}_2$, respectively, while $\mathbf{F}_{1b}$ and $\mathbf{F}_{2b}$ are the remaining $N_1{+}N_2{-}M$ rows of $\mathbf{F}_1$ and $\mathbf{F}_2$, respectively. $\mathbf{F}_{1b}$ and $\mathbf{F}_{2b}$ are such that $\mathbf{F}_{1b}\mathbf{H}_1^H{=}\mathbf{F}_{2b}\mathbf{H}_2^H$. This means that $\hat{\mathbf{y}}_{1b}$ can be obtained using $\hat{\mathbf{y}}_{2b}$ within noise error.
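One concrete way to see that such $\mathbf{F}_{1b}$ and $\mathbf{F}_{2b}$ exist, and to construct them, is through the left null space of the stacked channel matrix: any row vector $[\mathbf{a}{,}{-}\mathbf{b}]$ in that (generically $(N_1{+}N_2{-}M)$-dimensional) space satisfies $\mathbf{a}\mathbf{H}_1^H{=}\mathbf{b}\mathbf{H}_2^H$. The following sketch illustrates this (Python, assuming NumPy and SciPy; the dimensions and the random channels are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

# Sketch: rows [a, -b] of the left null space of the stacked channel
# satisfy a H1^H = b H2^H, yielding the overlapping observations.
rng = np.random.default_rng(1)
M, N1, N2 = 4, 2, 3                       # N2 <= M <= N1 + N2
H1H = rng.standard_normal((N1, M))        # H_1^H : N1 x M
H2H = rng.standard_normal((N2, M))        # H_2^H : N2 x M
W = null_space(np.vstack([H1H, H2H]).T).T # (N1+N2-M) x (N1+N2)
F1b, F2b = W[:, :N1], -W[:, N1:]
print(np.allclose(F1b @ H1H, F2b @ H2H))  # True
\end{verbatim}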
Consequently, one can obtain a canonical form using $\hat{\mathbf{y}}_1$ and $\hat{\mathbf{y}}_2$ as
\begin{equation}
\left[\begin{array}{c}
\tilde{\mathbf{y}}_1 \\
\tilde{\mathbf{y}}_2
\end{array}\right]{=}\left[\begin{array}{cc}
\mathbf{I}_{N_1} & \mathbf{0}_{N_1{\times}(M{-}N_1)} \\
\mathbf{G}_2 & \mathbf{Z}_2
\end{array}\right]\left[\begin{array}{c}
\tilde{\mathbf{s}}_1 \\
\tilde{\mathbf{s}}_2
\end{array}\right]{+}\left[\begin{array}{c}
\mathbf{n}_1 \\
\mathbf{n}_2
\end{array}\right]\label{eq:canonical_user1_case2},
\end{equation}
where $\mathbf{Z}_2{\triangleq}\left[\begin{array}{c}\mathbf{I}_{M{-}N_1}\\ \mathbf{0}_{(N_2{+}N_1{-}M){\times}(M{-}N_1)}\end{array}\right]$, $\mathbf{G}_2{=}\text{Bdiag}\{\mathbf{G}_{2a}{,} \mathbf{I}_{N_1{+}N_2{-}M}\}$ with a $(M{-}N_1){\times}(M{-}N_2)$ matrix $\mathbf{G}_{2a}{=}\mathbf{F}_{2a}\mathbf{H}_{21}^H\mathbf{H}_{11}\mathbf{F}_{1a}^H\cdot \left(\mathbf{F}_{1a}\mathbf{H}_{11}^H\mathbf{H}_{11}\mathbf{F}_{1a}^H\right)^{-1}$, $\tilde{\mathbf{s}}_1{=}\hat{\mathbf{y}}_1$ and $\tilde{\mathbf{s}}_2{=}(\mathbf{F}_{2a}\mathbf{H}_{22}^H{-}\mathbf{G}_{2a}\mathbf{F}_{1a}\mathbf{H}_{12}^H)\mathbf{s}_2$. Note that here $\mathbf{H}_{11}$ and $\mathbf{H}_{21}$ refer to the channel matrices from the first $N_1$ transmit antennas to user $1$ and user $2$, respectively, while $\mathbf{H}_{12}$ and $\mathbf{H}_{22}$ refer to the channel matrices from the remaining $M{-}N_1$ transmit antennas to user $1$ and user $2$, respectively. $\mathbf{s}_1$ and $\mathbf{s}_2$ are the signals transmitted from the first $N_1$ transmit antennas and the remaining $M{-}N_1$ transmit antennas, respectively.
Then, following the footsteps in the case $M{=}N_1{+}N_2$, we bound the sum rate by the summation of $N_1$ conditional entropies as in \eqref{eq:setcardi}. According to the above analysis, since the last $N_1{+}N_2{-}M$ observations of $\tilde{\mathbf{y}}_1$ can be constructed using the $N_1{+}N_2{-}M$ observations of $\tilde{\mathbf{y}}_2$, the last $N_1{+}N_2{-}M$ entropies are equal to $o(\log P)$. This leads to the sum DoF constraint $d_1{+}d_2{\leq}N_2{+}(M{-}N_2)\alpha_2$.
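This is consistent with \eqref{eq:outer_sum}: for $N_2{\leq}M{\leq}N_1{+}N_2$, one has $\min\{M{,}N_2\}{=}N_2$ and $\min\{M{,}N_1{+}N_2\}{=}M$, so that \eqref{eq:outer_sum} reads $d_1{+}d_2{\leq}N_2{+}(M{-}N_2)\alpha_2$.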
Similarly, for the weighted sum entropy, we switch the role of the two users and write a canonical form as
\begin{equation}
\left[\begin{array}{c}
\tilde{\mathbf{y}}_2 \\
\tilde{\mathbf{y}}_1
\end{array}\right]{=}\left[\begin{array}{cc}
\mathbf{I}_{N_2} & \mathbf{0}_{N_2{\times}(M{-}N_2)} \\
\mathbf{G}_1 & \mathbf{Z}_1
\end{array}\right]\left[\begin{array}{c}
\tilde{\mathbf{s}}_2 \\
\tilde{\mathbf{s}}_1
\end{array}\right]{+}\left[\begin{array}{c}
\mathbf{n}_2 \\
\mathbf{n}_1
\end{array}\right]\label{eq:canonical_user2_case2},
\end{equation}
where $\mathbf{Z}_1{\triangleq}\left[\begin{array}{c}\mathbf{I}_{M{-}N_2}\\ \mathbf{0}_{(N_2{+}N_1{-}M){\times}(M{-}N_2)}\end{array}\right]$, $\mathbf{G}_1{=}\text{Bdiag}\{\mathbf{G}_{1a}{,} \mathbf{I}_{N_1{+}N_2{-}M}\}$ with an $(M{-}N_2){\times}(M{-}N_1)$ matrix $\mathbf{G}_{1a}{=}\mathbf{F}_{1a}\mathbf{H}_{12}^H\mathbf{H}_{22}\mathbf{F}_{2a}^H\cdot \left(\mathbf{F}_{2a}\mathbf{H}_{22}^H\mathbf{H}_{22}\mathbf{F}_{2a}^H\right)^{-1}$, $\tilde{\mathbf{s}}_2{=}\hat{\mathbf{y}}_2$ and $\tilde{\mathbf{s}}_1{=}(\mathbf{F}_{1a}\mathbf{H}_{11}^H{-}\mathbf{G}_{1a}\mathbf{F}_{2a}\mathbf{H}_{21}^H)\mathbf{s}_1$. Note that here $\mathbf{H}_{11}$ and $\mathbf{H}_{21}$ refer to the channel matrices from the first $M{-}N_2$ transmit antennas to user $1$ and user $2$, respectively, while $\mathbf{H}_{12}$ and $\mathbf{H}_{22}$ refer to the channel matrices from the remaining $N_2$ transmit antennas to user $1$ and user $2$, respectively. $\mathbf{s}_1$ and $\mathbf{s}_2$ are the signals transmitted from the first $M{-}N_2$ antennas and the remaining $N_2$ antennas, respectively.
Then, following the footsteps in the case $M{=}N_1{+}N_2$, we bound the weighted sum rate as
\begin{IEEEeqnarray}{rcl}
n(&N_2&R_1{+}N_1R_2)\nonumber\\
&{\leq}&nN_1N_2\log P{+}\sum_{j{=}1}^{N_2}H(\bar{\mathbf{y}}_{2{,}j{:}j{+}N_1{-}1}^n{|}\bar{\mathbf{y}}_1^n{,}W_2{,}\mathbf{G}_1)\\
&{\leq}&nN_1N_2\log P{+}\sum_{j{=}1}^{N_2}\sum_{i{=}1}^{N_1}H(\bar{s}_{2{,}j{+}i{-}1}^n{|}\bar{\mathbf{y}}_1^n{,}\mathbf{G}_1)\\
&{=}&nN_1N_2\log P{+}N_1\sum_{j{=}1}^{N_2}H(\bar{s}_{2{,}j}^n{|}\bar{\mathbf{y}}_1^n{,}\mathbf{G}_1){,}\label{eq:conditional_ws_case2}
\end{IEEEeqnarray}
where the last equality is because every observation of $\bar{\mathbf{s}}_2$ is counted $N_1$ times due to the sliding window. According to the above analysis, since the last $N_1{+}N_2{-}M$ observations of $\tilde{\mathbf{y}}_2$ (i.e., $\bar{\mathbf{s}}_2$) can be constructed using the last $N_1{+}N_2{-}M$ observations of $\tilde{\mathbf{y}}_1$, the last $N_1{+}N_2{-}M$ entropies are equal to $o(\log P)$. This upper-bounds \eqref{eq:conditional_ws_case2} by $nN_1N_2\log P{+}nN_1(M{-}N_1)\alpha_1\log P$, which leads to the weighted sum DoF constraint \eqref{eq:outer_wsum}.
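In DoF terms, dividing the resulting bound $nN_1N_2\log P{+}nN_1(M{-}N_1)\alpha_1\log P$ by $n\log P$ and letting $P{\rightarrow}\infty$ gives
\begin{IEEEeqnarray*}{c}
N_2d_1{+}N_1d_2{\leq}N_1N_2{+}N_1(M{-}N_1)\alpha_1{,}
\end{IEEEeqnarray*}
which is, up to the normalization used there, the constraint \eqref{eq:outer_wsum}.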
\subsubsection{$N_1{\leq}M{\leq}N_2$}
In this case, the derivation follows the footsteps of the case $M{<}N_1{+}N_2$. Specifically, since $M{\leq}N_2$, \eqref{eq:nR2_sum} rewrites as $nR_2{\leq}nM{\log}P{-}H(\bar{\mathbf{y}}_2^n{|}W_1{,}\mathbf{G}_2)$. Besides, since $M{\leq}N_2$, the dimension of the overlapping part between the received signals at the two users is $N_1$. This implies that the $N_1$ observations at user $1$ can be constructed using user $2$'s received signal within noise error. Hence, the $N_1$ conditional entropies in \eqref{eq:setcardi} are equal to $o(\log P)$, leading to the sum DoF constraint $d_1{+}d_2{\leq}M$.
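As a quick consistency check, this agrees with the previous case at the boundary: setting $M{=}N_2$ in the bound $d_1{+}d_2{\leq}N_2{+}(M{-}N_2)\alpha_2$ derived there likewise yields $d_1{+}d_2{\leq}M$.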
For the weighted sum inequality, one has
\begin{IEEEeqnarray}{rcl}
n(MR_1&{+}&N_1R_2)\nonumber\\
&{\leq}&nN_1M\log P{+}N_1H(\bar{\mathbf{y}}_2^n{|}W_1{,}\mathbf{G}_1){-}\nonumber\\
&&MH(\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1)\\
&{=}&nN_1M\log P{+}N_1H(\bar{\mathbf{y}}_{2{,}1{:}M}^n{|}W_1{,}\mathbf{G}_1){+} \nonumber\\ &&\underbrace{N_1H(\bar{\mathbf{y}}_{2{,}M{+}1{:}N_2}^n{|}\bar{\mathbf{y}}_{2{,}1{:}M}^n{,}W_1{,}\mathbf{G}_1)}_{nN_1o(\log P)}{-}\nonumber\\ &&MH(\bar{\mathbf{y}}_1^n{|}W_1{,}\mathbf{G}_1).\label{eq:difference_entropy_case3}
\end{IEEEeqnarray}
Then, since $N_1$ observations of $\bar{\mathbf{y}}_{2{,}1{:}M}$ can be constructed using $\bar{\mathbf{y}}_1$, the difference between the entropies in \eqref{eq:difference_entropy_case3} is bounded by $nN_1(M{-}N_1)\alpha_1{\log}P$, which completes the proof.
\section{Introduction}
In \cite{HL2} and \cite{HL1}, Hutchings and
Lee investigate circle-valued Morse theory for Riemannian manifolds
$X$ with first Betti number $b_1 \geq 1$. Given a generic Morse
function $\phi\co X\to S^1$ representing an element of infinite order in
$H^1(X;\zee)$ and having no extrema, they determine a relationship
between the Reidemeister torsion $\tau(X,\phi)$ associated to
$\phi$, which is in general an element of the field $\cue(t)$, and the
torsion of a ``Morse complex'' $M^*$ defined over the ring
$L_\zee$ of integer-coefficient Laurent series in a single
variable $t$. If $S$ is the inverse image of a regular value of
$\phi$ then upward gradient flow of $\phi$ induces a return map
$F\co S\to S$ that is defined away from the descending manifolds of
the critical points of $\phi$. The two torsions $\tau(X,\phi)$
and $\tau(M^*)$ then differ by multiplication by the zeta
function $\zeta(F)$. In the case that $X$ has dimension three,
which will be our exclusive concern in this paper, the statement
reads
\begin{equation}
\tau(M^*)\zeta(F) = \tau(X,\phi),
\label{HLeqn}
\end{equation}
up to multiplication by $\pm t^k$.
One should think of the left-hand side as ``counting'' gradient flows
of $\phi$; $\tau(M^*)$ is concerned with gradient flows between
critical points of $\phi$, while $\zeta(F)$, defined in terms of
fixed points of the return map, describes the closed orbits of
$\phi$. It should be remarked that $\tau(X,\phi)\in \cue(t)$ is in
fact a polynomial if $b_1(X)>1$, and ``nearly'' so if $b_1(X) =
1$; see \cite{MT} or \cite{Turaev1} for details.
If the three--manifold $X$ is zero-surgery on a knot
$K\subset S^3$ and $\phi$ represents a generator in $H^1(X;\zee)$,
the
Reidemeister torsion $\tau(X,\phi)$
is essentially (up to a standard factor) the Alexander polynomial
$\Delta_K$ of the knot. It has been proved by Fintushel and Stern
\cite{FS} that
the Seiberg--Witten invariant of $X\times S^1$, which can be
identified with the Seiberg--Witten invariant of $X$, is also given by
the Alexander
polynomial (up to the same standard factor). More generally, Meng and
Taubes \cite{MT} show that the Seiberg--Witten invariant of any closed
three--manifold with $b_1(X)\geq 1$ can be identified with the Milnor
torsion
$\tau(X)$ (after summing over the action of the torsion subgroup of
$H^2(X;\zee)$), from which it follows that if $\mathcal S$ denotes
the
collection of spin${}^c$ structures on $X$,
\begin{equation}
\sum_{\alpha\in {\mathcal S}} SW(\alpha) t^{c_1(\alpha)\cdot {S} /2}
=
\tau(X,\phi),
\label{MTthm}
\end{equation}
up to multiplication by $\pm t^k$ (in \cite{MT} the sign is specified).
Here $c_1(\alpha)$ denotes the first
Chern class of the complex line bundle $\det \alpha$ associated to
$\alpha$.
These results point to the natural conjecture, made in \cite{HL1},
that the left-hand side of (\ref{HLeqn}) is equal to the
Seiberg--Witten invariant of $X$---or more precisely to a combination
of invariants as in (\ref{MTthm})---independently of the results of
Meng and Taubes. We remark that the theorem of Meng and Taubes
announced in \cite{MT} depends on surgery formulae for
Seiberg--Witten invariants, and a complete proof of these results
has not yet appeared in the literature. The conjecture of
Hutchings and Lee gives a direct interpretation of the
Seiberg--Witten invariants in terms of geometric information,
reminiscent of Taubes's work relating Seiberg--Witten invariants
and holomorphic curves on symplectic 4--manifolds. The proof of
this conjecture is the aim of this paper; combined with the work
in \cite{HL1} and \cite{HL2} it establishes an alternate proof of
the Meng--Taubes result (for closed manifolds) that does not
depend on the surgery formulae for Seiberg--Witten invariants used
in \cite{MT} and \cite{FS}.
\begin{rem} In fact, the conjecture in \cite{HL1} is more general,
as follows:
Hutchings and Lee define an invariant $I\co {\mathcal S} \to \zee$ of
spin${}^c$ structures based on the counting of gradient flows, which
is conjectured to agree with the Seiberg--Witten invariant. The proof
presented in this paper gives only an ``averaged'' version of this
statement, ie, that the left hand side of (\ref{HLeqn}) is equal
to the left hand side of (\ref{MTthm}). It can be seen from the
results
of \cite{HL1} that this averaged statement is in fact enough to
recover the full Meng--Taubes theorem: see in particular \cite{HL1},
Lemma
4.5. It may also be possible to extend the methods of this paper to
distinguish the Seiberg--Witten invariants of spin${}^c$ structures
whose determinant lines differ by a non-torsion element $a\in
H^2(X;\zee)$ with $a\cdot {S} = 0$.
\end{rem}
We also show that the ``averaged'' Seiberg--Witten invariant is equal
to the intersection number of a pair of totally real submanifolds in a
product of symmetric powers of a slice for $\phi$. This is a
situation strongly analogous to that considered by Ozsv\'ath and
Szab\'o in \cite{OS1} and \cite{OS2}, and one might hope
to define a Floer-type homology theory along the lines of that work.
Such a construction would suggest a generalization of a conjecture of
Salamon, namely that the Seiberg--Witten--Floer homology of $X$ agrees
with this new homology (which is a ``classical'' Floer homology in the
case that $X$ is a mapping torus---see \cite{S}).
\section{Statement of results}
Before stating our main theorems, we need to recall a few definitions
and introduce some notation. First is the notion of the torsion of
an acyclic
chain complex; basic references for this material include \cite{Milnor}
and \cite{Turaev1}.
\subsection{Torsion}
By a {\it volume}
$\omega$ for a vector space $W$ of dimension $n$ we mean a
choice of nonzero element $\omega\in\Lambda^n W$. Let $0\to V'\to
V\to V''\to 0$ be an exact sequence of finite-dimensional vector
spaces over a field $k$. For volumes $\omega'$ on $V'$ and
$\omega''$ on $V''$, the induced volume on $V$ will be written
$\omega'\omega''$; if $\omega_1$, $\omega_2$ are two volume
elements for $V$, then we can write $\omega_1 = c\omega_2$ for
some nonzero element $c\in k$ and by way of shorthand, write $c =
\omega_1/\omega_2$. More generally, let $\{C_i\}_{i=0}^n$ be a
complex of vector spaces with differential $\partial\co C_i\to
C_{i-1}$, and let us assume that $C_*$ is acyclic, ie,
$H_*(C_*)=0$. Suppose that each $C_i$ comes equipped with a
volume element $\omega_i$, and choose volumes $\nu_i$ arbitrarily
on each image $\partial C_i$, $i=2,\ldots,n-1$. From the exact sequence
\[
0\to C_n \to C_{n-1} \to \partial C_{n-1}\to 0
\]
define $\tau_{n-1} = \omega_n\nu_{n-1}/\omega_{n-1}$. For $i=
2,\ldots, n-2$ use
the exact sequence
\[
0\to \partial C_{i+1}\to C_i\to \partial C_i\to 0
\]
to define $\tau_i = \nu_{i+1}\nu_i/\omega_i$. Finally, from
\[
0\to \partial C_2\to C_1\to C_0\to 0
\]
define $\tau_1 = \nu_2\omega_0/\omega_1$. We then define
the {\it torsion} $\tau(C_*, \{\omega_i\}) \in k\setminus\{0\}$
of the (volumed) complex $C_*$ to be:
\begin{equation}
\tau(C_*) = \prod_{i=1}^{n-1} \tau_i^{(-1)^{i+1}}
\label{torsiondef}
\end{equation}
It can be seen that this definition does not depend on the choice of
$\nu_i$.
Note that in the case that our complex consists of just two vector
spaces,
\[
C_* = 0\to C_i\stackrel{\partial}{\longrightarrow} C_{i-1}\to 0,
\]
we have that $\tau(C)= \det(\partial)^{(-1)^i}$. We extend the
definition of $\tau(C_*)$ to non-acyclic complexes by setting
$\tau(C_*) = 0$ in this case.
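For example, take the acyclic complex
\[
C_* = 0\to k \stackrel{\partial_2}{\longrightarrow} k^2
\stackrel{\partial_1}{\longrightarrow} k\to 0
\]
with standard bases and volumes, $\partial_2(1) = (1,1)$ and $\partial_1(a,b) =
a-b$. Here $n=2$, so the two exact sequences above merge into the
single sequence $0\to C_2\to C_1\to C_0\to 0$, and $\tau(C_*) = \tau_1$
is the ratio of the volume induced on $C_1$ by $\omega_2$ and
$\omega_0$ to $\omega_1$. Lifting the basis of $C_0$ to $(1,0)\in C_1$
gives the induced volume $(1,1)\wedge(1,0) = -\,e_1\wedge e_2$, whence
$\tau(C_*) = -1$; replacing the lift by $(1+s,s)$ for any $s\in k$
leaves this wedge unchanged, illustrating the independence of choices
noted above.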
As a slight generalization, we can allow the chain groups $C_i$ to be
finitely generated free
modules over an integral domain $K$ with fixed ordered bases rather
than vector
spaces with fixed volume elements, as
follows. Write $Q(K)$ for the field of fractions of $K$, then form
the complex of vector spaces $Q(K)\otimes_K C_i$. The bases for
the $C_i$ naturally give rise to bases, and hence volumes, for
$Q(K)\otimes_K C_i$. We understand the torsion of the complex of
$K$--modules $C_i$ to be the torsion of this latter complex, and
it is therefore a nonzero element of the field $Q(K)$.
Let $X$ be a connected, compact, oriented smooth manifold with a
given CW decomposition. Following \cite{Turaev1}, suppose
$\varphi\co {\mathbb Z}[H_1(X;{\mathbb Z})]\to K$ is a ring homomorphism into an
integral domain $K$. The universal abelian cover $\tilde{X}$ has
a natural CW decomposition lifting the given one on $X$, and the
action of the deck transformation group $H_1(X;{\mathbb Z})$ naturally
gives the cell chain complex $C_*(\tilde{X})$ the structure of a
${\mathbb Z}[H_1(X;{\mathbb Z})]$--module. As such, $C_i(\tilde{X})$ is free of
rank equal to the number of $i$--cells of $X$. We can then form
the twisted complex $C_*^\varphi(\tilde{X}) = K\otimes_\varphi
C_*(\tilde{X})$ of $K$--modules. We choose a sequence $e$ of cells
of $\tilde{X}$ such that over each cell of $X$ there is exactly
one element of $e$, called a {\it base} {\it sequence}; this gives
a basis of $C_*^\varphi(\tilde{X})$ over $K$ and allows us to
form the torsion $\tau_\varphi(X,e)\in Q(K)$ relative to this
basis. Note that the torsion $\tau_\varphi(X,e')$ arising from a
different choice $e'$ of base sequence stands in the relationship
$\tau_\varphi(X,e) = \pm\varphi(h)\tau_\varphi(X,e')$ for some
$h\in H_1(X;{\mathbb Z})$ (here, as is standard practice, we write the
group operation in $H_1(X;{\mathbb Z})$ multiplicatively when dealing
with elements of ${\mathbb Z}[H_1(X;{\mathbb Z})]$). The set of all torsions
arising from all such choices of $e$ is ``the'' torsion of $X$
associated to $\varphi$ and is denoted $\tau_\varphi(X)$.
We are now in a position to define the torsions we will need.
\begin{defn}(1)\qua For $X$ a smooth manifold as above with $b_1(X)\geq
1$,
let $\phi\co X\to S^1$ be a map
representing an element $[\phi]$ of infinite order in $H^1(X;{\mathbb Z})$.
Let $C$
be the infinite cyclic group generated by the formal variable $t$,
and let
$\varphi_1\co {\mathbb Z}[H_1(X;{\mathbb Z})]\to {\mathbb Z}[C]$ be the map induced
by the
homomorphism $H_1(X;{\mathbb Z})\to C$, $a \mapsto t^{\langle
[\phi],a\rangle}$.
Then the {\em Reidemeister torsion} $\tau(X,\phi)$ of $X$ associated
to $\phi$ is defined to be the torsion $\tau_{\varphi_1}(X)$.
(2)\qua Write $H$ for the quotient of $H_1(X;{\mathbb Z})$ by its torsion
subgroup, and let $\varphi_2\co {\mathbb Z}[H_1(X;{\mathbb Z})]\to{\mathbb Z}[H]$ be the map
induced by the projection $H_1(X;{\mathbb Z})\to H$. The {\em Milnor
torsion} $\tau(X)$ is defined to be $\tau_{\varphi_2}(X)$.
\label{torsiondefn}
\end{defn}
\begin{rem}(1)\qua Some authors use the term {\it Reidemeister torsion}
to
refer to the torsion $\tau_\varphi(X)$ for arbitrary $\varphi$; and other terms,
eg,
Reidemeister--Franz--DeRham torsion, are also in use.
(2)\qua The torsions in Definition \ref{torsiondefn} are defined for
manifolds $X$ of arbitrary dimension, with or without boundary. We
will be concerned only with the case that $X$ is a closed manifold of
dimension 3 with $b_1(X)\geq 1$. In the case $b_1(X)>1$, work of
Turaev
\cite{Turaev1} shows that $\tau(X)$ and $\tau(X,\phi)$, naturally
subsets of $\cue(H)$ and $\cue(t)$, are actually subsets of ${\mathbb Z}[H]$
and ${\mathbb Z}[t, t^{-1}]$. Furthermore, if $b_1(X)=1$ and $[\phi]\in
H^1(X;{\mathbb Z})$ is a generator, then $\tau(X) = \tau(X,\phi)$
and $(t-1)^2\tau(X)\in{\mathbb Z}[t,t^{-1}]$. Rather than thinking of
torsion as a set of elements in a field we normally identify it
with a representative ``defined up to multiplication by $\pm
t^k$'' or similar, since by the description above any two
representatives of the torsion differ by some element of the
group ($C$ or $H$) under consideration.
\end{rem}
\subsection{$S^1$--Valued Morse Theory}
\label{morsesec}
We review the results of Hutchings and Lee that motivate our theorems.
As in the introduction, let $X$ be a smooth closed oriented 3--manifold
having $b_1(X)\geq 1$ and let $\phi\co X\to S^1$ be a smooth Morse
function. We assume (1) $\phi$ represents an indivisible element of
infinite
order in $H^1(X,{\mathbb Z})$; (2) $\phi$ has no critical points of index
0 or 3; and (3) the gradient flow of $\phi$ with respect to a
Riemannian metric on $X$ is Morse--Smale. Such functions always exist
given our assumptions on $X$.
Given such a Morse function $\phi$, fix a smooth level set $S$ for
$\phi$. Upward gradient flow defines a return map $F\co S\to S$
away from the descending manifolds of the critical points of
$\phi$. The {\it zeta function} of $F$ is defined by the series
\[
\zeta(F) = \exp \left(\sum_{k\geq 1} {\mbox{\rm Fix}}(F^k)\frac{t^k}{k}\right)
\]
where ${\mbox{\rm Fix}}(F^k)$ denotes the number of fixed points (counted with
sign in the usual way) of the $k$-th iterate of $F$. One should think
of $\zeta(F)$ as keeping track of the number of closed orbits of
$\phi$ as well as the ``degree'' of those orbits. For future
reference we note that if $h\co S\to S$ is a diffeomorphism of a surface
$S$ then
\begin{equation}
\zeta(h) = \sum_k L(h^{(k)})t^k
\label{zetasym}
\end{equation}
where $L(h^{(k)})$ is the Lefschetz number of the induced map on the
$k$-th symmetric power of $S$ (see \cite{S}, \cite{IP}).
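For example, if $S$ is a torus and $h$ is induced by a hyperbolic
matrix $A\in SL_2({\mathbb Z})$, then ${\mbox{\rm Fix}}(h^k) = \det(I-A^k)$ by the
Lefschetz fixed point theorem, and the series for $\zeta$
exponentiates to
\[
\zeta(h) = \det(I-tA)/(1-t)^2;
\]
eg, $\zeta(h) = (1-3t+t^2)/(1-t)^2$ for $A =
\left(\begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array}\right)$.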
We now introduce a Morse complex that can be used to keep track of
gradient flow lines between critical points of $\phi$. Write
$L_{{\mathbb Z}}$ for the ring of Laurent series in
the variable $t$, and let $M^i$ denote the free $L_{{\mathbb Z}}$--module
generated by
the index-$i$ critical points of $\phi$. The differential $d_M\co M^i\to
M^{i+1}$ is defined to be
\begin{equation}
d_Mx_\mu = \sum_\nu a_{\mu\nu}(t) y_\nu
\label{diffdef}
\end{equation}
where $x_\mu$ is an index-$i$ critical point, $\{y_\nu\}$
is the set of index-$(i+1)$ critical points, and $a_{\mu\nu}(t)$ is a
series in $t$ whose coefficient of $t^n$ is defined to be the number
of gradient flow lines of $\phi$ connecting $x_\mu$ with $y_\nu$ that
cross $S$ $n$ times. Here we count the gradient flows with sign
determined by orientations on the ascending and descending manifolds
of the critical points; see \cite{HL1} for more details.
\begin{thm}[Hutchings--Lee] In this situation, the relation
(\ref{HLeqn}) holds up to multiplication by $\pm t^k$.
\label{HLthm}
\end{thm}
\subsection{Results}
The main result of this work is that the left hand side of
(\ref{HLeqn}) is equal to the left hand side of (\ref{MTthm}),
without using the results of \cite{MT}. Hence the current work,
together with that of Hutchings and Lee, gives an alternative
proof of the theorem of Meng and Taubes in \cite{MT}.
Our proof of this fact is based on ideas of
Donaldson for computing the Seiberg--Witten invariants of 3--manifolds.
We outline Donaldson's construction here; see Section \ref{tqftsec}
below for
more details. Given $\phi\co X\to S^1$ a generic Morse
function as above and $S$ the inverse image of a regular value,
let $W = X\setminus nbd(S)$ be the complement of a small
neighborhood of $S$. Then $W$ is a cobordism between two copies
of $S$ (since we assumed $\phi$ has no extrema---note we may
also assume $S$ is connected). Note that two spin${}^c$ structures on
$X$ that
differ by an element $a\in H^2(X; {\mathbb Z})$ with $a([S])
= 0$
restrict to the same spin${}^c$ structure on $W$, in particular,
spin${}^c$
structures $\sigma$ on $W$ are determined by their degree $m = \langle
c_1(\sigma), S\rangle$. Note that the degree of a {\mbox{\rm spin${}^c$}} structure is
always even.
Now, a solution of the Seiberg--Witten equations on $W$ restricts to a
solution of the {\it vortex equations} on $S$ at each end of
$W$ (more accurately, we should complete $W$ by adding infinite
tubes $S\times (-\infty, 0]$, $S\times [0,\infty)$ to each
end, and consider the limit of a finite-energy solution on this completed
space)---see \cite{D2}, \cite{MOY} for example. These equations have
been extensively studied, and it is known that the moduli space of
solutions to the vortex equations on $S$ can be identified with
a symmetric power $\mathrm{Sym}^n S$ of $S$ itself: see \cite{B}, \cite{JT}. Donaldson
uses the restriction maps on the Seiberg--Witten moduli space of $W$ to
obtain a self-map $\kappa_n$ of the cohomology of
$\mbox{Sym}^{n}S$, where $n$ is defined by $n=g(S)-1-\frac{1}{2}|m|$ if
$b_1(X)>1$ and $n= g(S)-1 + \frac{1}{2}m$ if $b_1(X) = 1$ (here
$g(S)$ is the genus of the orientable surface $S$). The alternating trace
${\mbox{\rm Tr}}\,\kappa_n$ is identified as the sum of Seiberg--Witten
invariants of spin${}^c$ structures on $X$ that restrict to the
given spin${}^c$ structure on $W$---that is, the coefficient of
$t^n$ on the left hand side of (\ref{MTthm}). For a precise statement,
see Theorem \ref{tracethm}.
Our main result is the following.
\begin{thm} Let $X$ be a Riemannian 3--manifold with $b_1(X)\geq 1$,
and fix an integer $n\geq 0$ as above. Then we have
\begin{equation}
{\mbox{\rm Tr}}\,\kappa_n = [\tau(M^*)\,\zeta(F)]_{n},
\end{equation}
where $\tau(M^*)$ is represented by $t^N\det(d_M)$, and $N$ is the
number of index 1 critical points of $\phi$. Here ${\mbox{\rm Tr}}$ denotes the
alternating trace and $[\,\cdot\,]_n$ denotes the coefficient of $t^n$
of the polynomial enclosed in brackets.
\label{mainthm}
\end{thm}
This fact immediately implies the conjecture of Hutchings and Lee.
Furthermore, we will make the following observation:
\begin{thm}
There is a smooth connected representative $S$ for the Poincar\'e dual
of $[\phi]\in H^1(X;{\mathbb Z})$ such that ${\mbox{\rm Tr}}\, \kappa_n$ is
given by the intersection number of a pair of totally real embedded
submanifolds in $\mathrm{Sym}^{n+N}S \times\mathrm{Sym}^{n+N}S$.
\label{intthm}
\end{thm}
This may be the first step in defining a Lagrangian-type Floer
homology theory parallel to that of Ozsv\'ath and Szab\'o, one whose
Euler
characteristic is {\it a priori} a combination of Seiberg--Witten
invariants. In the case that $X$ is a mapping torus, a program along
these lines has been initiated by Salamon \cite{S}. In this
case the two totally real submanifolds in Theorem \ref{intthm} reduce
to the diagonal and the graph of a symplectomorphism of
$\mathrm{Sym}^n S$ determined by the monodromy of the mapping
torus, both of which are in fact Lagrangian.
The remainder of the paper is organized as follows: Section 3 gives a
brief overview of some elements of Seiberg--Witten theory and the
dimensional reduction we will make use of, and Section 4 gives a few
more details on this reduction and describes the TQFT we use to
compute Seiberg--Witten invariants. Section 5 proves a theorem that
gives a means of calculating as though a general cobordism coming from
an $S^1$--valued Morse function of the kind we are considering possessed a
naturally-defined monodromy map; Section 6 collects a few other
technical results of a calculational nature, the proof of one of
which is the content of Section 9. In Section 7 we prove Theorem
\ref{mainthm} by a calculation that is fairly involved but is not
essentially difficult, thanks to the tools provided by the TQFT.
Section 8 proves Theorem \ref{intthm}.
\section{Review of Seiberg--Witten theory}
We begin with an outline of some aspects of Seiberg--Witten
theory for a 3--manifolds. Recall that a {\mbox{\rm spin${}^c$}} structure on
a 3--manifold $X$ is a lift of the oriented orthogonal frame bundle
of $X$ to a principal ${\mbox{\rm spin${}^c$}} (3)$--bundle $\sigma$. There are two
representations of ${\mbox{\rm spin${}^c$}} (3) = \mbox{Spin}(3)\times
U(1)/\pm1 = SU(2)\times U(1)/\pm 1$ that will interest us, namely
the spin representation ${\mbox{\rm spin${}^c$}} (3)\to SU(2)$ and also the
projection ${\mbox{\rm spin${}^c$}} (3)\to U(1)$ given by $[g, e^{i\theta}]\mapsto
e^{2i\theta}$. For a {\mbox{\rm spin${}^c$}} structure $\sigma$ the first of these
gives rise to the associated {\it spinor bundle} $W$ which is a
hermitian 2--plane bundle, and the second to the {\it determinant
line bundle} $L\cong \wedge^2 W$. We define $c_1(\sigma) :=
c_1(L)$. The Levi--Civita connection on $X$ together with a choice
of hermitian connection $A$ on $L^{1/2}$ gives rise to a
hermitian connection on $W$ that is compatible with the action of
Clifford multiplication $c\co T^*_{\mathbb C} X\to \mbox{End}_0 W = \{$traceless
endomorphisms of $W\}$, and thence to a Dirac
operator $D_A\co \Gamma(W)\to \Gamma(W)$.
The {\it Seiberg--Witten equations} are equations for a pair
$(A,\psi)\in{\cal A}(L)\times \Gamma(W)$ where ${\cal A}(L)$ denotes
the space of hermitian connections on $L^{1/2}$, and read:
\begin{equation}
\begin{array}{rcl} D_A \psi &\,\,=\,\,& 0 \\
c(\star F_A + i\star\mu) &=& \psi\otimes\psi^* - \frac{1}{2}|\psi|^2
\end{array}
\label{sweqns}
\end{equation}
Here $\mu\in\Omega^2(X)$ is a closed form used as a perturbation; if
$b_1(X)>1$ we may choose $\mu$ as small as we like.
On a closed oriented 3--manifold the {\it Seiberg--Witten moduli space} is
the set of $L^{2,2}$ solutions to the
above equations modulo the action of the gauge group ${\cal G}=
L^{2,3}(X;S^1)$, which acts on connections by conjugation and on
spinors by complex multiplication. For generic choice of
perturbation $\mu$ the moduli space ${\cal M}_\sigma$ is a compact
zero--dimensional manifold that is smoothly cut out by its defining
equations (if $b_1(X)>0$). There is a way to orient ${\cal M}_\sigma$
using a so-called homology orientation of $X$, and the {\it
Seiberg--Witten invariant} of $X$ in the {\mbox{\rm spin${}^c$}} structure $\sigma$ is
defined to be the signed count of points of ${\cal M}_\sigma$. One
can show that if $b_1(X)>1$ then the resulting number is independent
of all choices involved and depends only on $X$ (with its orientation);
while if $b_1(X) =
1$ there is a slight complication: in this case we need to make a
choice of generator $o$ for the free part of $H^1(X;{\mathbb Z})$ and require
that $\langle[\mu]\cup o, [X]\rangle > \pi \langle c_1(\sigma)\cup o,
[X]\rangle$.
Suppose now that rather than a closed manifold, $X$ is isometric to a
product $\Sigma\times {\mathbb R}$ for some Riemann surface $\Sigma$. If $t$
is the coordinate in the ${\mathbb R}$ direction, then Clifford
multiplication by $dt$ is an automorphism of square $-1$ of $W$ and
therefore splits $W$ into eigen-bundles $E$ and $F$ on which $dt$
acts
as multiplication by $-i$ and $i$, respectively. In fact
$F = K^{-1}E$ where $K$ is the canonical bundle of $\Sigma$, and
$2E-K = L$, the determinant line of $\sigma$. Writing a section
$\psi$
of $W$ as $(\alpha,\beta)\in\Gamma(E\oplus K^{-1}E)$, we can express
the Dirac operator in this decomposition as:
\[
D_A\psi = \left(\begin{array}{cc} -i\frac{\partial}{\partial t} &
\bar{\partial}_{B,J}^* \\ \mbox{$\bar{\partial}$}_{B,J} & i\frac{\partial}{\partial t}
\end{array}\right)\left( \begin{array}{c} \alpha \\ \beta
\end{array}\right)
\]
Here we have fixed a spin structure (with connection) $K^{1/2}$ on
$\Sigma$
and noted
that the choice of a connection $A$ on $L^{1/2} = E-K^{1/2}$ is
equivalent
to a choice of connection $B$ on $E$. The metric on
$\Sigma\times{\mathbb R}$ induces a complex structure $J$ and area form
$\omega_\Sigma$ on $\Sigma$. Then $\mbox{$\bar{\partial}$}_{B,J}$ is the associated $\mbox{$\bar{\partial}$}$
operator on sections of $E$ with adjoint operator
$\mbox{$\bar{\partial}$}_{B,J}^*$.
The 2--forms $\Omega^2_{\mathbb C}(\Sigma\times{\mathbb R})$ split as
$\Omega^{1,1}(\Sigma)
\oplus
[(\Omega^{1,0}(\Sigma)\oplus\Omega^{0,1}(\Sigma))\otimes\Omega^1_{\mathbb C}({\mathbb R})]$,
and we will write a form $\nu$ as $\Lambda\nu\cdot\omega_\Sigma +
\nu^{1,0} dt + \nu^{0,1} dt$ in this splitting. Thus $\Lambda
\nu$ is a complex function on $\Sigma\times{\mathbb R}$, while
$\nu^{1,0}$ and $\nu^{0,1}$ are 1--forms on $\Sigma$. With these
conventions, the Seiberg--Witten equations become
\begin{equation}
\begin{array}{rcl} i\dot{\alpha} &\,\,=\,\,& \mbox{$\bar{\partial}$}^*_{B,J}\beta \\
i\dot{\beta} &=& -\mbox{$\bar{\partial}$}_{B,J} \alpha \\
2\Lambda F_B- \Lambda F_K + 2i\Lambda\mu &=&
i(|\alpha|^2-|\beta|^2) \\
(2F_B - F_K)^{1,0} + 2i\mu^{1,0} &=& \alpha\otimes
\bar{\beta}
\end{array}
\label{redsweqns}
\end{equation}
One can show
that for a finite-energy solution either $\alpha$ or $\beta$ must
identically vanish; it follows that any such solution is
constant, and the above system of equations descends to $\Sigma$ when
written in temporal gauge (ie, so the connection has no $dt$ component).
The above equations (with
$\beta=0$) therefore reduce to the {\it vortex equations} in $E$,
which are for a pair $(B, \alpha)\in {\cal A}(E)\times \Gamma(E)$ and
read
\begin{eqnarray}
\mbox{$\bar{\partial}$}_{B,J} \alpha &=& 0 \label{vortex1}\\
i\star F_B + \frac{1}{2}|\alpha|^2 &=& \tau \label{vortex2}
\end{eqnarray}
where $\tau$ is a function on $\Sigma$ satisfying $\int \tau >
2\pi\deg(E)$ and
incorporates the curvature $F_K$ and perturbation above. These
equations are well-understood, and it is known that the space of
solutions to the vortex equations modulo $\mbox{Map}(\Sigma, S^1)$
is isomorphic to the space of solutions $(B,\alpha)$ of the single
equation
\[
\mbox{$\bar{\partial}$}_{B,J} \alpha = 0
\]
modulo the action of $\mbox{Map}(\Sigma, {\mathbb C}^*)$. The latter is
naturally identified with the space of divisors of degree $d=
\deg(E)$ on $\Sigma$ via the zeros of $\alpha$, and forms a K\"ahler
manifold isomorphic to the $d$-th symmetric power $\mathrm{Sym}^d
\Sigma$, which for brevity we will abbreviate as $\Sigma^{(d)}$
from now on. We write ${\cal M}_d(\Sigma,J)$ (or simply ${\cal
M}(\Sigma)$) for the moduli space of vortices in a bundle $E$ of
degree $d$ on $\Sigma$.
The situation for $\alpha \equiv 0$ is analogous to
the above: in this case $\beta$ satisfies $\mbox{$\bar{\partial}$}^*_{B,J}\beta = 0$ so
that $\star_2\beta$ is a holomorphic section of $K\otimes E^*$.
Replacing $\beta$ by $\star_2\beta$ shows that the Seiberg--Witten
equations reduce to the vortex equations in the bundle $K\otimes E^*$,
giving a moduli space isomorphic to $\Sigma^{(2g-2-d)}$.
\section{A TQFT for Seiberg--Witten invariants}
\label{tqftsec}
In this section we describe Donaldson's ``topological quantum field
theory'' for computing the Seiberg--Witten invariants. Suppose $W$ is
a
cobordism between two Riemann surfaces $S_-$ and $S_+$. We complete
$W$ by adding tubes $S_\pm\times [0,\infty)$ to the boundaries and
endow the completed manifold $\hat{W}$ with a Riemannian metric that
is a product on the ends. By considering finite-energy solutions to
the Seiberg--Witten equations on $\hat{W}$ in some {\mbox{\rm spin${}^c$}} structure
$\sigma$, we can produce a Fredholm problem and show that such
solutions must approach solutions to the vortex equations on $S_\pm$.
Following a solution to its limiting values, we obtain smooth maps
between moduli spaces, $\rho_\pm\co {\cal M}(\hat{W})\to {\cal
M}(S_\pm)$. Thus we can form
\begin{eqnarray*}
\kappa_\sigma = (\rho_-\otimes\rho_+)_*[{\cal M}(\hat{W})]&\in&
H_*({\cal M}(S_-)) \otimes H_*({\cal M}(S_+)) \\
&\cong& \hom (H^*({\cal M}(S_-)),H^*({\cal M}(S_+))).
\end{eqnarray*}
Here we use Poincar\'e duality and work with rational coefficients.
This is the basis for our ``TQFT:'' to a surface $S$ we associate the
cohomology of the moduli space ${\cal M}(S)$, and to a
cobordism $W$ between $S_-$ and $S_+$ we assign the homomorphism
$\kappa_\sigma$:
\begin{eqnarray*}
S&\longmapsto & V_S = H^*({\cal M}(S))\\
W&\longmapsto & \kappa_\sigma\co V_{S_-}\to V_{S_+}
\end{eqnarray*}
In the sequel we will be interested only in cobordisms $W$ that
satisfy the topological assumption $H_1(W,\partial W) = {\mathbb Z}$. Under
this assumption, gluing theory for Seiberg--Witten solutions provides a
proof of the central property of TQFTs, namely that if $W_1$ and $W_2$
are composable cobordisms then $\kappa_{W_1\cup W_2} =
\kappa_{W_2}\circ\kappa_{W_1}$.
If $X$ is a closed oriented 3--manifold with $b_1(X)>0$ then the above
constructions can be used to calculate the Seiberg--Witten invariants
of $X$, as seen in \cite{D1}. We now describe the procedure involved.
Begin with a Morse function $\phi\co X\to
S^1$ as in the introduction, and cut $X$ along the level set $S$
to produce a cobordism $W$ between two copies of $S$, which come
with an identification or ``gluing map'' $\partial_- W\to \partial_+
W$. Write $g$ for the genus of $S$. The cases $b_1(X)>1$ and $b_1(X)=1$ are
slightly different and we consider them separately.
Suppose $b_1(X)>1$, so the perturbation $\mu$ in (\ref{sweqns}) can
be taken to be small. Consider the constant solutions to the equations
(\ref{redsweqns}) on the ends of $\hat{W}$, or equivalently the
possible values of $\rho_\pm$. If $\beta \equiv 0$ then $\alpha$ is a
holomorphic section of $E$ and so the existence of a nonvanishing
solution requires $\deg(E)\geq 0$. Since $\mu$ is small,
integrating the third equation in (\ref{redsweqns}) tells us that
$2E-K$ is nonpositive. Hence existence of nonvanishing solutions
requires $0\leq \deg(E)\leq \frac{1}{2}\deg(K) = g-1$. If
$\alpha\equiv 0$, then $\star_2\beta$ is a holomorphic section of
$K-E$ so to avoid triviality we must have $0\leq\deg(K)-\deg(E)$,
ie, $\deg(E)\leq 2g-2$. On the other hand, integrating the
third Seiberg--Witten equation tells us that $\deg(2E-K)$ is
nonnegative, so that $\deg(E)\geq g-1$. To summarize we have
shown that constant solutions to the Seiberg--Witten equations on
the ends of $\hat{W}$ in a {\mbox{\rm spin${}^c$}} structure $\sigma$ are just the
vortices on $S$ (with the finite-energy hypothesis). If
$\det(\sigma) = L$ a necessary condition for the existence of
such solutions is $-2g+2\leq \deg(L) \leq 2g-2$ (recall $L =
2E-K$ so in particular $\deg(L)$ is even). If this condition is
satisfied then the moduli space on each end is isomorphic to
${\cal M}_n(S) \cong S^{(n)}$ where $n =
g-1-\frac{1}{2}|\deg(L)|$. Note that by suitable choice of
perturbation $\mu$ we can eliminate the ``reducible'' solutions,
ie, those with $\alpha \equiv 0 \equiv \beta$, which otherwise
may occur at the extremes of our range of values for $\deg(L)$.
Now assume $b_1(X) = 1$. Integrating the third equation in
(\ref{redsweqns}) shows
\[
\langle c_1(\sigma), S\rangle - \frac{1}{\pi}\langle
[\mu],S\rangle = \frac{1}{2\pi}\int_S |\beta|^2- |\alpha|^2.
\]
The left hand side of this is negative by our assumption on $\mu$, and
we know that either $\alpha\equiv 0$ or $\beta \equiv 0$. The first
of these possibilities gives a contradiction; hence $\beta\equiv 0$
and the system (\ref{redsweqns}) reduces to the vortex equations in $E$
over $S$. Existence of nontrivial solutions therefore requires
$\deg(E)\geq 0$, ie, $\deg(L)\geq 2-2g(S)$. Thus the moduli
space on each end of $\hat{W}$ is isomorphic to ${\cal
M}_n(S) \cong S^{(n)}$, where $n = \deg(E) =
g-1+\frac{1}{2}\deg(L)$ and $\deg(L)$ is any even integer at
least $2-2g(S)$.
\begin{thm}[Donaldson] Let $X$, $\sigma$, $\phi$, $S$, and $W$ be
as above. Write $\langle c_1(\sigma),[S]\rangle = m$ and define
either $n = g(S)-1 -\frac{1}{2}|m|$ or $n = g(S) - 1
+\frac{1}{2}m$ depending whether $b_1(X)>1$ or $b_1(X) = 1$. Then
if $n\geq 0$,
\begin{equation}
{\mbox{\rm Tr}} \,\kappa_\sigma = \sum_{\tilde{\sigma}\in{\cal S}_m}
SW(\tilde{\sigma})
\label{traceeqn}
\end{equation}
where ${\cal S}_m$ denotes the set of {\mbox{\rm spin${}^c$}} structures
$\tilde{\sigma}$ on $X$ such that $\langle
c_1(\tilde{\sigma}),[S]\rangle = m$. If $n<0$ then the right hand
side of (\ref{traceeqn}) vanishes. Here ${\mbox{\rm Tr}}$ denotes the graded
trace.
\label{tracethm}
\end{thm}
Note that with $n$ as in the theorem, $\kappa_\sigma$ is a linear map
\[
\kappa_\sigma\co H^*(S^{(n)})\to H^*(S^{(n)});
\]
as the trace of $\kappa_\sigma$ computes a sum of Seiberg--Witten
invariants rather than just $SW(\sigma)$, we use the notation
$\kappa_n$ rather than $\kappa_\sigma$.
Since $\kappa_n$ obeys the composition law, in order to determine
the map corresponding to $W$ we need only determine the map generated
by elementary cobordisms, ie, those consisting of a single 1-- or
2--handle addition (we need not consider 0-- or 3--handles by our
assumption on $\phi$). In \cite{D1}, Donaldson uses an elegant
algebraic argument to determine these elementary homomorphisms. To
state the result, recall that the cohomology of the $n$-th symmetric
power $S^{(n)}$ of a Riemann surface $S$ is given over ${\mathbb Z}$,
$\cue$, ${\mathbb R}$, or ${\mathbb C}$ by
\begin{equation}
H^*(S^{(n)}) = \bigoplus_{i=0}^n \Lambda^iH^1(S)\otimes
{\mbox{\rm Sym}}^{n-i}(H^0(S)\oplus H^2(S)).
\label{symprodcohom}
\end{equation}
Suppose that $W$ is an elementary cobordism connecting two surfaces
$\Sigma_g$ and $\Sigma_{g+1}$. Thus there is a unique critical point (of index 1) of the
height function
$h\co W\to {\mathbb R}$, and the ascending manifold of this critical point
intersects $\Sigma_{g+1}$ in an essential simple closed curve that we will
denote by $c$.
Now, $c$ obviously bounds a disk $D\subset W$; the
Poincar\'e--Lefschetz dual
of $[D]\in H_2(W,\partial W)$ is a 1--cocycle that we will denote
$\eta_0\in H^1(W)$. It is easy to check that $\eta_0$ is in the kernel
of the restriction $r_1\co H^1(W)\to H^1(\Sigma_g)$, so we may complete
$\eta_0$ to a basis $\eta_0,\eta_1,\ldots,\eta_{2g}$ of $H^1(W)$ with the
property that
$\xi_1:=r_1(\eta_1),\ldots,\hspace{1ex}\xi_{2g}:=r_1(\eta_{2g})$
form a basis for $H^1(\Sigma_g)$. Since the restriction
$r_2\co H^1(W)\to H^1(\Sigma_{g+1})$ is injective, we know
$\bar{\xi}_0:=r_2(\eta_0), \ldots,\hspace{1ex} \bar{\xi}_{2g}:=
r_2(\eta_{2g})$ are linearly independent; note that $r_2(\eta_0)$
is just $c^*$, the Poincar\'e dual of $c$.
The choice of basis $\eta_j$ with its restrictions $\xi_j$,
$\bar{\xi}_j$ gives rise to an inclusion $i\co H^1(\Sigma_g)\to
H^1(\Sigma_{g+1})$ in the obvious way, namely $i(\xi_j) =
\bar{\xi}_j$. One may check that this map is independent of the
choice of basis $\{\eta_j\}$ for $H^1(W)$ having $\eta_0$ as above.
From the decomposition (\ref{symprodcohom}), we can extend $i$ to an
inclusion $i\co H^*(\Sigma_g^{(n)})\hookrightarrow
H^*(\Sigma_{g+1}^{(n)})$. Having produced this inclusion, we now
proceed to suppress it from the notation, in particular in the
following theorem.
\begin{thm}[Donaldson] In this situation, and with $\sigma$ and $n$ as
previously, the map $\kappa_n$ corresponding to the elementary cobordism
$W$ is given by
\[
\kappa_n(\alpha) = c^*\wedge\alpha.
\]
If $\bar{W}$ is the ``opposite'' cobordism between $\Sigma_{g+1}$ and
$\Sigma_g$, the corresponding $\kappa_n$ is given by the contraction
\[
\kappa_n(\beta) = \iota_{c^*}\beta,
\]
\label{wedgethm}
where contraction is defined using the intersection pairing on
$H^1(\Sigma_{g+1})$.
\end{thm}
This result makes the calculation of Seiberg--Witten invariants
completely explicit, as we see in the next few sections.
\section{Standardization of $X$}
\label{stdsec}
We now return to the situation of the introduction: namely, we
consider a closed 3--manifold $X$ having $b_1(X)\geq 1$, with its
circle-valued Morse function $\phi\co X\to S^1$ having no critical
points of index $0$ or $3$, and $N$ critical points of each index $1$
and $2$. We want to
show how to identify $X$ with a ``standard'' manifold $M(g, N, h)$
that depends only on $N$ and a diffeomorphism $h$ of a Riemann
surface of genus $g+N$. This standard manifold will be obtained from two
``compression bodies,'' ie, cobordisms between surfaces
incorporating handle additions of all the same index. Two copies
of the same compression body can be glued together along their
smaller-genus boundary by the identity map, then by a
``monodromy'' diffeomorphism of the other boundary component to
produce a more interesting 3--manifold. Such a manifold lends
itself well to analysis using the TQFT from the previous section,
as the interaction between the curves $c$ corresponding to each
handle is completely controlled by the monodromy. We now will
show that every closed oriented 3--manifold $X$ having $b_1(X)>0$
can be realized as such a glued-up union of compression bodies.
To begin with, we fix a closed oriented genus 0 surface
$\Sigma_0$ (that is, a standard 2--sphere) with an
orientation-preserving embedding $\psi_{0,0}\co S^0\times D^2\to
\Sigma_0$. Here we write $D^n = \{x\in{\mathbb R}^n||x|<1\}$ for the
unit disk in ${\mathbb R}^n$. There is a standard way to perform surgery
on the image of $\psi_{0,0}$ (see \cite{milnor2}) to obtain a new
surface $\Sigma_1$ of genus 1 and an orientation-preserving
embedding $\psi_{1,1}\co S^1\times D^1\to \Sigma_1$. In fact we can
get a cobordism $(W_{0,1},\Sigma_0,\Sigma_1)$ with a
``gradient-like vector field'' $\xi$ for a Morse function
$f\co W_{0,1}\to [0,1]$. Here $f^{-1}(0) = \Sigma_0$, $f^{-1}(1) =
\Sigma_1$, and $f$ has a single critical point $p$ of index 1
with $f(p) = \frac{1}{2}$. We have that $\xi[f] >0$ away from $p$
and that in local coordinates near $p$, $f = \frac{1}{2} -{x_1}^2
+ {x_2}^2 + {x_3}^2$ and $\xi = -x_1\frac{\partial}{\partial
x_1}+x_2\frac{\partial}{\partial x_2}
+x_3\frac{\partial}{\partial x_3} $. The downward flow of $\xi$
from $p$ intersects $\Sigma_0$ in $\psi_{0,0}(S^0\times 0)$ and
the upward flow intersects $\Sigma_1$ in $\psi_{1,1}(S^1\times
0)$.
Choose an embedding $\psi_{0,1}\co S^0\times D^2\to \Sigma_1$ whose
image
is disjoint from $\psi_{1,1}(S^1\times D^1)$. Then we can repeat the
process above to get another cobordism $(W_{1,2}, \Sigma_1,
\Sigma_2)$
with Morse function $f\co W_{1,2}\to [1,2]$ having a single critical
point of index 1 at level $\frac{3}{2}$, and gradient-like vector
field $\xi$ as before.
Continuing in this way, we get a sequence of cobordisms
$(W_{g,g+1},\Sigma_g,\Sigma_{g+1})$ between surfaces of genus
difference 1, with Morse functions $f\co W_{g,g+1}\to [g,g+1]$
and gradient-like vector fields $\xi$. To each $\Sigma_g$, $g\geq 1$, is
also associated a pair of embeddings $\psi_{i,g}\co S^i\times D^{2-i}\to
\Sigma_g$, $i=0,1$. These embeddings have disjoint images, and are
orientation-preserving with respect to the given, fixed orientations
on the $\Sigma_g$. Note that the orientation on $\Sigma_g$ induced
by $W_{g,g+1}$ is opposite to the given one, so the map
$\psi_{0,g}\co S^0\times D^2\to -\Sigma_g=\partial_- W_{g,g+1}$ is
orientation-reversing.
Since the surfaces $\Sigma_g$
are all standard, we have a natural way to compose $W_{g-1,g}$ and
$W_{g,g+1}$ to produce a cobordism $W_{g-1,g+1} = W_{g-1,g} +
W_{g,g+1}$
with a Morse function
to $[g-1,g+1]$ having two index-1 critical points. Furthermore, by
replacing $f$ by $-f$ we can obtain cobordisms $(W_{g+1,g},
\Sigma_{g+1}, \Sigma_g)$ with Morse functions having a single
critical point of index 2, and these cobordisms may be naturally
composed with each other or with the original index-1 cobordisms
obtained before (after appropriately adjusting the values of the
corresponding Morse functions), whenever such composition makes sense.
We may think of $W_{g+1,g}$ as being simply $W_{g,g+1}$ with the
opposite orientation.
In particular, we can fix integers $g,N\geq 0$ and proceed as
follows.
Beginning with $\Sigma_{g+N}$, compose the cobordisms
$W_{g+N,g+N-1}, \ldots, W_{g+1, g}$ to form a ``standard''
compression body, and glue this with the
composition $W_{g,g+1}+\cdots + W_{g+N-1,g+N}$ using the identity
map on $\Sigma_g$. The result is a cobordism $(W, \Sigma_{g+N},\Sigma_{g+N})$
and a Morse function $f\co W\to {\mathbb R}$ that we may rescale to have range $[-N,N]$,
having $N$ critical points each of index 1 and 2. By our
construction, the first half of this cobordism, $W_{g+N,g}$, is
identical with the second half, $W_{g,g+N}$: they differ only in
their choice of Morse function and associated gradient-like
vector field.
Now, by our construction the circles $\psi_{1,g+k}\co S^1\times 0\to
f^{-1}(-k) = \Sigma_{g+k}\subset W$, $1\leq k\leq N$, all survive to
$\Sigma_{g+N}$ under downward flow of $\xi$. This is because the
images of $\psi_{1,q}$ and $\psi_{0,q}$ are disjoint for all $q$.
Thus on the ``lower'' copy of $\Sigma_{g+N}$ we have $N$ disjoint
primitive circles $c_1,\ldots, c_N$ that, under upward flow of
$\xi$, each converge to an index 2 critical point. Similarly,
(since $W_{g,g+N} = W_{g+N,g}$) the circles $\psi_{1,g+k}\co S^1\times
0 \to f^{-1}(k)=\Sigma_{g+k}\subset W$, $1\leq k\leq N$, survive to
$\Sigma_{g+N}$ under upward flow of $\xi$, and intersect the
``upper'' copy of $\Sigma_{g+N}$ in the circles $c_1,\ldots,c_N$.
Now suppose $h\co \partial_+W =\Sigma_{g+N}\to \Sigma_{g+N}=-\partial_-W$
is a diffeomorphism;
then we can use $h$ to identify the boundaries $f^{-1}(-N)$,
$f^{-1}(N)$ of $W$, and produce a manifold that we will denote by
$M(g, N, h)$. Note that this manifold is entirely determined by the
isotopy class of the map $h$, and that if $h$ preserves orientation
then $M(g, N, h)$ is an orientable manifold having $b_1\geq 1$.
\begin{thm} Let $X$ be a closed oriented 3--manifold and $\phi\co X\to
S^1$ a circle-valued Morse function with no critical points of index
0
or 3, and with $N$ critical points each of index 1 and 2. Assume
that $[\phi]\in H^1(X;{\mathbb Z})$ is of infinite order and indivisible.
Arrange
that $0<\arg\phi(p)<\pi$ for $p$ an index 1 critical point and
$\pi<\arg\phi(q)<2\pi$ for $q$ an index 2 critical point, and let $S_g =
\phi^{-1}(1)$, where $S_g$ has genus $g$. Then $X$ is
diffeomorphic to $M(g, N, h)$ for some
$h\co \Sigma_{g+N}\to\Sigma_{g+N}$ as above.
\label{stdthm}
\end{thm}
Note that $S_g$ has by construction the smallest genus among smooth
slices for $\phi$.
\proof By assumption $-1$ is a regular value of $\phi$, so
$S_{g+N} = \phi^{-1}(-1)$ is a smooth orientable submanifold of $X$;
it is easy to see that $S_{g+N}$ is a closed surface of genus $g+N$.
Cut $X$ along $S_{g+N}$; then we obtain a cobordism $(W_\phi, S_-,
S_+)$ between two copies $S_\pm$ of $S_{g+N}$, and a Morse function $f\co
W_\phi\to [-\pi,\pi]$ induced by $\arg\phi$. The critical points
of $f$ are exactly those of $\phi$ (with the same index), and by
our arrangement of critical points we have that $f(q)<0$ for any
index 2 critical point $q$ and $f(p)>0$ for any index 1 critical
point $p$. It is well-known that we can arrange for the critical
points of $f$ to have distinct values, and that in this case
$W_\phi$ is diffeomorphic to a composition of elementary
cobordisms, each containing a single critical point of $f$. For
convenience we rescale $f$ so that its image is the interval
$[-N,N]$ and the critical values of $f$ are the half-integers
between $-N$ and $N$. Orient each smooth level set $f^{-1}(x)$ by
declaring that a basis for the tangent space of $f^{-1}(x)$ is
positively oriented if a gradient-like vector field for $f$
followed by that basis is a positive basis for the tangent space
of $W_\phi$.
We will show that $W_\phi$ can be standardized by working ``from the
middle out.'' Choose a gradient-like vector field $\xi_f$ for $f$,
and
consider $S_g = f^{-1}(0)$---the ``middle level'' of
$W_\phi$, corresponding to $\phi^{-1}(1)$. There is exactly one
critical point of $f$ in the region $f^{-1}([0,1])$, of index 1, and
as above $\xi_f$ determines a ``characteristic embedding''
$\theta_{0,g}\co S^0\times D^2\to S_g$. Choose a diffeomorphism
$\Theta_0\co S_g\to \Sigma_g$ such that $\Theta_0\circ\theta_{0,g} =
\psi_{0,g}$; then it follows from \cite{milnor2}, Theorem 3.13, that
$f^{-1}([0,1])$ is diffeomorphic to $W_{g,g+1}$ by some
diffeomorphism $\Theta$
sending $\xi_f$ to $\xi$. (Recall that $\xi$ is the gradient-like
vector field fixed on $W_{g,g+1}$.)
Let $\Theta_1\co S_{g+1}\to \Sigma_{g+1}$ be the restriction of
$\Theta$
to $S_{g+1} = f^{-1}(1)$, and let $\mu_{0,g+1} =
\Theta_1^{-1}\circ\psi_{0,g+1}\co S^0\times D^2\to S_{g+1}$. Now
$\xi_f$ induces an embedding $\theta_{0,g+1}\co S^0\times D^2\to
S_{g+1}$, by considering downward flow from the critical point in
$f^{-1}([1,2])$. Since any two orientation-preserving
diffeomorphisms $D^2\to D^2$ are isotopic and $S_{g+1}$ is connected,
we have that $\mu_{0,g+1}$ and $\theta_{0,g+1}$ are isotopic. It is
now a simple matter to modify $\xi_f$ in the region
$f^{-1}([1,1+\epsilon])$ using the isotopy, and arrange that
$\theta_{0,g+1} = \mu_{0,g+1}$. Equivalently,
$\Theta\circ\theta_{0,g+1} = \psi_{0,g+1}$, so the theorem quoted
above shows that $f^{-1}([1,2])$ is diffeomorphic to $W_{g+1, g+2}$.
In fact, since the diffeomorphism sends $\xi_f$ to $\xi$, we get that
$\Theta$ extends smoothly to a diffeomorphism $f^{-1}([0,2])\to
W_{g,g+2}$.
Continuing in this way, we see that after successive modifications of
$\xi_f$ in small neighborhoods of the levels $f^{-1}(k)$, $k =
1,\ldots, N-1$, we obtain a diffeomorphism $\Theta\co f^{-1}([0,N])\to
W_{g,g+N}$ with $\Theta_*\xi_f = \xi$.
The procedure is entirely analogous when we turn to the ``lower
half'' of $W_\phi$, but the picture is upside-down. We have the
diffeomorphism $\Theta_0\co S_g\to \Sigma_g$, but before we can extend
it to a diffeomorphism $\Theta\co f^{-1}([-1,0])\to W_{g+1,g}$ we must
again make sure the characteristic embeddings match. That is,
consider the map $\theta'_{0,g}\co S^0\times D^2\to S_g$ induced by
upward flow from the critical point, and compare it to
$\Theta_0^{-1}\circ\psi_{0,g}$. As before we can isotope $\xi_f$ in
(an open subset whose closure is contained in) the region
$f^{-1}([-\epsilon, 0])$ so that these embeddings agree, and we then
get the desired extension of $\Theta$ to $f^{-1}([-1, N])$. Then the
procedure is just as before: alter $\xi_f$ at each step to make the
characteristic embeddings agree, and extend $\Theta$ one critical
point at a time.
Thus $\Theta\co W_\phi \cong W = W_{g+N,g+N-1}+ \cdots+
W_{g+1,g}+W_{g,g+1}+\cdots + W_{g+N-1,g+N}$. Since $W_\phi$ was
obtained by cutting $X$, it comes with an identification $\iota\co
S_+\to S_-$. Hence $X\cong M(g, N, h)$ where $h =
\Theta\circ\iota\circ\Theta^{-1}\co \Sigma_{g+N}\to\Sigma_{g+N}$.
\endproof
\begin{rem} The identification $X\cong M(g, N, h)$ is not canonical, as it
depends on the initial choice of diffeomorphism $\phi^{-1}(1)\cong
\Sigma_g$, the final gradient-like vector field on
$W_\phi$ used to produce $\Theta$, as well as the function $\phi$. As
with a Heegaard decomposition, however, it is the existence of such a
structure that is important.
\end{rem}
\section{Preliminary calculations}
This section collects a few lemmata that we will use in the proof of
Theorem \ref{mainthm}. Our main object here is to make the quantity
$[\zeta(F)\det(d_M)]_n$ a bit more explicit.
We work in the standardized setup of the previous section, identifying
$X$ with $M(g, N, h)$. The motivation for doing so is mainly that our
invariants are purely algebraic---ie, homological---and the
standardized situation is very easy to deal with on this level.
Choose a metric $k$ on $X=M(g, N, h)$; then gradient flow with
respect to $k$ on $(W, \Sigma_{g+N},\Sigma_{g+N})$ determines
curves $\{c_i\}_{i=1}^N$ and $\{d_j\}_{j=1}^N$ on $\Sigma_{g+N}$,
namely $c_i$ is the intersection of the descending manifold of
the $i$th index-2 critical point with the lower copy of
$\Sigma_{g+N}$ and $d_j$ is the intersection of the ascending
manifold of the $j$th index-1 critical point with the upper copy
of $\Sigma_{g+N}$.
\begin{defn} The pair $(k,\phi)$ consisting of a metric $k$ on $X$
together with the Morse function $\phi\co X\to S^1$ is said to be
{\em symmetric}
if the following conditions are satisfied. Arrange the critical
points of $\phi$ as in Theorem \ref{stdthm}, so that all critical
points have distinct values. Write $W_\phi$ for the cobordism
$X\setminus\phi^{-1}(-1)$, and $f\co W_\phi\to[-N,N]$ for the
(rescaled) Morse function induced by $\phi$ as in the proof of Theorem
\ref{stdthm}. Write $I$ for the (orientation-reversing) involution
obtained by swapping the factors in the expression $W_\phi \cong
W_{g+N,g}\cup W_{g,g+N}$. We require:
\begin{enumerate}
\item $I^*f = -f$.
\item For every $x\in W_{g+N,g}$ we have $(\nabla f)_{I(x)} = -I_*(\nabla
f)_x$.
\end{enumerate}
\end{defn}
Symmetric pairs $(k,\phi)$ always exist: choose any metric on
$X$, and then in the construction used in the proof of Theorem
\ref{stdthm}, take our gradient-like vector field $\xi_f$ to be a
multiple of the gradient of $f$ with respect to that metric. It is a
straightforward exercise to see that the isotopies of $\xi_f$ needed in
that proof may be obtained by modifications of the metric.
We use the term ``symmetric'' here because the gradient flows of the
Morse function $f$ on the portions $W_{g+N,g}$ and $W_{g,g+N}$ are
mirror images of each other. We will also say that the flow of $\nabla
f$ or of $\nabla\phi$ is symmetric in this case.
Suppose $M(g,N,h)$ is endowed with a symmetric pair, and consider the
calculation of $\zeta(F)\tau(M^*)$ in this case. Recall that $F$ is
the return map of the flow of $\nabla\phi$ from $\Sigma_g$ to itself
(though $F$ is only partially defined due to the existence of
critical points). Because of the symmetry of the flow, it is easy to see
that:
\begin{enumerate}
\item[(I)] The fixed points of iterates $F^k$ are in 1--1 correspondence
with fixed points of iterates $h^k$ of the gluing map in the
construction of $W$, and the Lefschetz signs of the fixed points
agree. Indeed, if $h$ is sufficiently generic, we can
assume that the fixed points of $h^k$ for $1\leq k\leq n$ (an
arbitrary, but fixed, $n$) occur away from the $d_j$ (which
agree with the $c_i$ under the identification
$I$ by symmetry). \item[(II)] The $(i,j)$th entry of the matrix of
$d_M\co M^1\to M^2$ in the Morse complex is given by the series
\begin{equation}
\sum_{k\geq 1} \langle {h^k}^* c_i,c_j\rangle t^{k-1},
\label{dformula}
\end{equation}
where $\langle \cdot,\cdot\rangle$ denotes the cup product pairing on
$H^1(\Sigma_{g+N},{\mathbb Z})$ and we have identified the curves $c_i$ with
the Poincar\'e duals of the homology classes they represent.
\end{enumerate}
We should remark that a symmetric pair is not {\it a priori}
suitable for calculating the invariant $\zeta(F)\tau(M^*)$ of
Hutchings and Lee, since it is not generic. Indeed, for a
symmetric flow each index-2 critical point has a pair of upward
gradient flow lines into an index-1 critical point. However, this
is the only reason the flow is not generic: our plan now is to
perturb a symmetric metric to one which does not induce the
behavior of the flow just mentioned; then suitable genericity of
$h$ guarantees that the flow is Morse--Smale.
\begin{lemma} Assume that there are no ``short'' gradient flow lines
between critical points, that is, every flow line between critical
points intersects $\Sigma_g$ at least once.
Given a symmetric pair $(g_0,\phi)$ on $M(g, N, h)$ and suitable genericity
hypotheses on $h$, there exists a $C^0$--small
perturbation of $g_0$ to a metric $\tilde{g}$ such that for given $n\geq 0$
\begin{enumerate}
\item The gradient flow of $\phi$ with respect to $\tilde{g}$ is
Morse--Smale; in particular the hypotheses of Theorem \ref{HLthm}
are satisfied.
\item The quantity $[\zeta(F)\tau(M^*)]_m$, $m\leq
n$ does not change under this perturbation.
\end{enumerate}
\label{perturblemma}
\end{lemma}
We defer the proof of this result to Section \ref{lemmapfsec}.
\begin{rem} We can always arrange that there are no short gradient
flow lines, at the expense of increasing $g = \mathrm{genus}(\Sigma_g)$.
To see this, begin
with $X$ and $\phi\co X\to S^1$ as before, with $\Sigma_g =
\phi^{-1}(1)$ and the critical points arranged according to
index. Every gradient flow line then intersects $\Sigma_{g+N}$.
Now rearrange the critical points by an isotopy of $\phi$ that is
constant near $\Sigma_{g+N}$ so that the index-1 points occur in
the region $\phi^{-1}(\{e^{i\theta}|\pi<\theta<2\pi\})$ and the
index-2 points in the complementary region. This involves moving
all $2N$ of the critical points past $\Sigma_g$, and therefore
increasing the genus of the slice $\phi^{-1}(1)$ to $g+2N$; we
still have that every gradient flow line between critical points
intersects $\Sigma_{g+N}$. Cutting $X$ along this new $\phi^{-1}(1)$ gives
a cobordism $\tilde{W}$ between two copies of $\Sigma_{g+2N}$ and thus
standardizes $X$ in the way we need while ensuring that there are no
short flows.
\end{rem}
\begin{cor} The coefficients of the torsion $\tau(X,\phi)$ may be
calculated homologically, as the coefficients of the quantity
$\zeta(h)\tau(M^*_0)$ where $M^*_0$ is the Morse complex coming
from a symmetric flow.
\label{symcor}
\end{cor}
That is, we can use properties I and II of symmetric pairs to
calculate each coefficient of the right-hand side of (\ref{HLeqn}).
\begin{lemma}\label{detlemma}
If the flow of $\nabla\phi$ is symmetric, the torsion
$\tau(M^*)$ is represented by a polynomial whose $k$th coefficient is given by
\[
[\tau(M^*)]_k = \sum_{s_1+\cdots + s_N = k \atop \sigma\in{\mathfrak
S}_N} (-1)^{{\mathrm{sgn}}(\sigma)}\langle {h^{s_1}}^*c_1,c_{\sigma(1)}\rangle\cdots
\langle {h^{s_N}}^*c_N,c_{\sigma(N)}\rangle.
\]
\end{lemma}
\proof Since there are only two nonzero terms in the Morse
complex, the torsion is represented by the determinant of the differential
$d_M\co M^1\to M^2$. Our task is to calculate a single coefficient of
the determinant of this matrix of polynomials. It will be convenient
to multiply the matrix of $d_M$ by $t$; this multiplies $\det(d_M)$ by
$t^N$, but $t^N\det(d_M)$ is still a representative for $\tau(M^*)$.
Multiplying formula (\ref{dformula}) by $t$ shows
\begin{eqnarray*}
t^N\det(d_M) &=& \sum_{\sigma\in{\mathfrak S}_N} (-1)^{{\mathrm{sgn}}(\sigma)}
\prod_i \left( \sum_k \langle {h^{k}}^* c_i,c_{\sigma(i)}\rangle
t^k\right) \\
&=& \sum_{k} \sum_{\sigma\in{\mathfrak S}_N} \sum_{s_1+\cdots +s_N =
k}(-1)^{{\mathrm{sgn}}(\sigma)} \left( \prod_i \langle {h^{s_i}}^*c_i,
c_{\sigma(i)}\rangle\right) t^k
\end{eqnarray*}
and the result follows.\endproof
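As a quick sanity check, the coefficient extraction in Lemma \ref{detlemma} can be carried out by computer. The following is a minimal sketch (Python with sympy assumed available; the pairing numbers $\langle {h^k}^*c_i,c_j\rangle$ are random placeholders, not geometric data): it builds the matrix of $t\,d_M$ from a truncated series and compares a coefficient of its determinant with the permutation sum of the lemma.
\begin{verbatim}
# Sketch: check the coefficient formula for tau(M^*) = t^N det(d_M).
# The pairing data <h^{k*} c_i, c_j> below is random placeholder data.
import random
from itertools import permutations, product
import sympy as sp
from sympy.combinatorics import Permutation

t = sp.symbols('t')
N, kmax = 2, 5                       # N critical points; truncate at t^kmax
random.seed(0)
pair = {(k, i, j): random.randint(-1, 1)
        for k in range(1, kmax + 1) for i in range(N) for j in range(N)}

# t * d_M has (i,j) entry  sum_{k>=1} <h^{k*} c_i, c_j> t^k
M = sp.Matrix(N, N, lambda i, j:
              sum(pair[(k, i, j)] * t**k for k in range(1, kmax + 1)))
det = sp.expand(M.det())

k0 = 3                               # coefficient to test
rhs = 0
for s in product(range(1, kmax + 1), repeat=N):
    if sum(s) != k0:
        continue
    for sigma in permutations(range(N)):
        sgn = Permutation(sigma).signature()   # (-1)^{sgn(sigma)}
        term = sgn
        for i in range(N):
            term *= pair[(s[i], i, sigma[i])]
        rhs += term
assert det.coeff(t, k0) == rhs
\end{verbatim}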
\section{Proof of Theorem \ref{mainthm}}
We are now in a position to explicitly calculate ${\mbox{\rm Tr}} \,\kappa_n$ using Theorem
\ref{wedgethm} and as a result prove Theorem \ref{mainthm}, assuming
throughout that $X$ is identified with $M(g, N, h)$ and the flow
of $\nabla\phi$ is symmetric. Indeed, fix the nonnegative integer $n$ as
in Section \ref{tqftsec} and consider the cobordism $W_\phi$ as
above, identified with a composition of standard elementary
cobordisms. Using Theorem \ref{wedgethm} we see that the first
half of the cobordism, $W_{g+N,g}=f^{-1}([0,N])$, induces the map:
\begin{eqnarray*}
A_1\co H^*(\Sigma_{g+N}^{(n+N)})&\to&
H^*(\Sigma_{g}^{(n)})\\
\alpha&\mapsto&\iota_{c_N^*}\cdots\iota_{c_1^*}\,\alpha
\end{eqnarray*}
The second half, $f^{-1}([N,2N])$, induces:
\begin{eqnarray*}
A_2\co H^*(\Sigma_g^{(n)})&\to& H^*(\Sigma_{g+N}^{(n+N)})\\
\beta&\mapsto& c_1^*\wedge\cdots\wedge c_N^*\wedge\beta
\end{eqnarray*}
To obtain the map $\kappa_n$ we compose the above with the gluing
map $h^*$ acting on the symmetric power $\Sigma_{g+N}^{(n+N)}$. The
alternating trace ${\mbox{\rm Tr}} \,\kappa_n$ is then given by ${\mbox{\rm Tr}}(h^*\circ
A_2\circ A_1)$.
Following MacDonald \cite{MD}, we can take a monomial basis for
$H^*(\Sigma_g^{(n)})$. Explicitly, if $\{x_i\}_{i = 1}^{2g}$ is a
symplectic basis for $H^1(\Sigma)$ having $x_i\cup x_{j+g} =
\delta_{ij}$ for $1\leq i,j\leq g$, and $x_i\cup x_j = 0$ for other
values of $i$ and $j$, $1\leq i<j\leq 2g$, and $y$ denotes the
generator of $H^2(\Sigma_g)$ coming from the orientation class, the
expression (\ref{symprodcohom}) shows that the set
\[
B_g^{(n)} = \{\alpha\} = \{x_Iy^q = x_{i_1}\wedge\cdots\wedge x_{i_k}\cdot
y^q| I = \{i_1<\ldots<i_k\}\subset\{1,\ldots,2g\}\},
\]
where $q=0,\ldots,n$ and $k=0,\ldots,n-q$, forms a basis for
$H^*(\Sigma_g^{(n)})$. We take $H^*(\Sigma_{g+k}^{(n+k)})$ to have
similar bases $B_{g+k}^{(n+k)}$, using the images of the $x_i$ under the inclusion
$i\co H^1(\Sigma_{g+k-1})\to H^1(\Sigma_{g+k})$ constructed in section
\ref{tqftsec}, the (Poincar\'e duals of the) curves $c_1,\ldots,c_k$,
and (the Poincar\'e duals of) some chosen dual curves $d_i$ to the $c_i$ as a
basis for $H^1(\Sigma_{g+k})$. Our convention is that $c_i\cup d_j =
\delta_{ij}$, where we now identify $c_i$, $d_j$ with their Poincar\'e duals.
The dual basis for $B_{g+k}^{(n+k)}$ under the cup product pairing will be denoted
${B}_{g+k}^{(n+k)\circ} = \{\alpha^{\circ}\}$. Thus $\alpha^\circ\cup \beta =
\delta_{\alpha\beta}$ for basis elements $\alpha$ and
$\beta$. By abuse of notation, we will write $B_g^{(n)}\subset
B_{h}^{(m)}$ for $g\leq h$ and $n\leq m$; this makes use of the
inclusions on $H^1(\Sigma_g)$ induced by our standard cobordisms.
With these conventions, we can write:
\begin{eqnarray*}
{\mbox{\rm Tr}}\,\kappa_n &=& \sum_{\alpha\in B_{g+N}^{(n+N)}}(-1)^{\deg(\alpha)}
\alpha^\circ\cup h^*\circ A_2\circ A_1(\alpha)\\
&=& \sum_{\alpha\in B_{g+N}^{(n+N)}} (-1)^{\deg(\alpha)}\alpha^\circ\cup
h^*(c_1\wedge\cdots\wedge c_N \, \iota_{c_N}\cdots\iota_{c_1}
\alpha)
\end{eqnarray*}
For a term in this sum to be nonzero, $\alpha$ must be of a particular
form. Namely, we must be able to write $\alpha = d_1\wedge\cdots
\wedge d_N\wedge \beta$ for some $\beta\in B_g^{(n)}$. The sum then can
be written:
\begin{eqnarray}
&=& \sum_{\beta\in B_g^{(n)}}
(-1)^{\deg(\beta)+N}(d_1\wedge\cdots \wedge d_N\wedge \beta)^\circ \cup
h^*(c_1\wedge\cdots \wedge c_N\wedge \beta)
\label{traceform}
\end{eqnarray}
In words, this expression asks us to find the coefficient of
$d_1\wedge \cdots \wedge d_N\wedge \beta$ in the basis expression of
$h^*(c_1\wedge \cdots \wedge c_N \wedge\beta)$, and add up the results
with particular signs. Our task is to express this coefficient in
terms of intersection data among the $c_i$ and the Lefschetz numbers
of $h$ acting on the various symmetric powers of $\Sigma_g$.
Consider the term of (\ref{traceform}) corresponding to $\beta =
x_Iy^q$ for $I=\{i_1,...,i_k\}\subset\{1,...,2g\}$ and $x_I =
x_{i_1}\wedge\cdots\wedge x_{i_k}$. The coefficient of
$d_1\wedge\cdots\wedge d_N\wedge x_Iy^q$ in the basis expression of
$h^*(c_1\wedge\cdots\wedge c_N\wedge x_Iy^q)$ is computed by pairing
each of $\{c_1,...,c_N,x_{i_1},...,x_{i_k}\}$ with each of
$\{d_1,...,d_N,x_{i_1},...,x_{i_k}\}$ in every possible way, and
summing the results with signs corresponding to the permutation
involved. To make the notation a bit more compact, for given $I$ let $\bar{I} =
\{1,...,N,i_1,...,i_k\}$ and write the elements of $\bar{I}$ as
$\{\bar{\imath}_m\}_{m=1}^{N+k}$. Likewise, set $\bar{I}' =
\{N+1,...,2N,i_1,...,i_k\} =
\{\bar{\imath}'_1,...,\bar{\imath}'_{N+k}\}$.
Write $\{\xi_i\}_{i=1}^{2N+2g}$ for our basis of $H^1(\Sigma_{g+N})$:
\begin{eqnarray*}
& \xi_1 = c_1, \quad\cdots,\quad \xi_N = c_N,\,\,\xi_{N+1} =
d_1,\quad\cdots,\quad \xi_{2N} = d_N &\\
&\xi_{2N+1} =x_1,\quad\cdots,\quad \xi_{2N+2g} = x_{2g}&
\end{eqnarray*}
and let $\{\xi_i'\}$ be the dual basis: $\langle \xi_i,\xi'_j
\rangle = \delta_{ij}$. Define $\zeta_i = h^*(\xi_i)$.
Then since $\deg\beta = |I| = k$ modulo 2, the term of
(\ref{traceform}) corresponding to $\beta = x_Iy^q$ is
\begin{equation}
(-1)^{k+N} \sum_{\sigma\in\mathfrak{S}_{k+N}} (-1)^{{\mathrm{sgn}}(\sigma)}
\langle \zeta_{\bar{\imath}_1},
\xi'_{\bar{\imath}_{\sigma(1)}'}\rangle \cdots \langle
\zeta_{\bar{\imath}_{k+N}}, \xi'_{\bar{\imath}'_{\sigma(k+N)}}\rangle,
\label{coeffform}
\end{equation}
and (\ref{traceform}) becomes
\begin{equation}
{\mbox{\rm Tr}}\,\kappa_n =\sum_{k=0}^{\mathrm{min}(n,2g+2N)}\hspace*{-1em}
(2(n-k)+1)\hspace{-2em}\sum_{I\subset\{2N+1,\ldots,2N+2g\}\atop
|I| = k}\hspace*{-1.5em} [\mbox{formula (\ref{coeffform})}].
\label{traceform2}
\end{equation}
Here we are using the fact that for each $k =
0,\ldots,\mathrm{min}(n,2g+2N)$ the space
$\Lambda^kH^1(\Sigma_{g+N})$ appears in $H^*(\Sigma^{(n)})$
precisely $2(n-k)+1$ times, each in cohomology groups of all the
same parity.
Note that from (\ref{traceform}) we can see that the result is unchanged
if we allow not just sets $I\subset\{2N+1,\ldots,2N+2g\}$ in our
sum as above, but extend the sum to include sets $I =
\{i_1,\ldots,i_k\}$, where $i_1\leq\cdots \leq i_k$ and each
$i_j\in\{1,\ldots,2N+2g\}$. That is, we can allow $I$ to include
indices referring to the $c_i$ or $d_i$, and allow repeats: terms
corresponding to such $I$ contribute 0 to the sum. Likewise, we
may assume that the sum in (\ref{traceform2}) is over $k =
0,\ldots,n$ since values of $k$ larger than $2g+2N$ necessarily
involve repetitions in $\bar{I}$.
Consider the permutations $\sigma\in\mathfrak{S}_{k+N}$ used in the
above. The fact that the first $N$ elements of $\bar{I}$ and
$\bar{I}'$ are distinguished (corresponding to the $c_j$ and $d_j$,
respectively) gives such permutations an additional structure.
Indeed, writing $A =\{1,...,N\}\subset\{1,...,N+k\}$, let $\bar{A}$
denote the orbit of $A$ under powers of $\sigma$, and set $B =
\{1,...,N+k\}\setminus \bar{A}$. Then $\sigma$ factors into a
product $\sigma = \rho\cdot\tau$ where $\rho = \sigma|_{\bar{A}}$
and $\tau = \sigma|_B$. By construction, $\rho$ has the property that
the orbit of $A$ under $\rho$ is all of $\bar{A}$. Given any
integers $0\leq m\leq M$, we let $\mathfrak{S}_{M;m}$ denote the collection
of permutations $\alpha$ of $\{1,...,M\}$ such that the orbit of
$\{1,...,m\}$ under powers of $\alpha$ is all of $\{1,...,M\}$. The
discussion above can be summarized by saying that if $\bar{A} =
\{a_1,...,a_N,a_{N+1},...,a_{N+r}\}$ (where $a_i = i$ for $i=1,...,N$)
and $B = \{b_1,...,b_t\}$ then $\sigma$ preserves each of $\bar{A}$
and $B$, and $\sigma(\bar{A})= \{a_{\rho(1)},...,a_{\rho(N+r)}\}$,
$\sigma(B) = \{b_{\tau(1)},...,b_{\tau(t)}\}$ for some
$\rho\in\mathfrak{S}_{N+r;N}$, $\tau\in\mathfrak{S}_t$. Furthermore,
${\mathrm{sgn}}(\sigma) = {\mathrm{sgn}}(\rho)+ {\mathrm{sgn}}(\tau)$ mod 2.
Finally, for $\rho\in \mathfrak{S}_{N+r;N}$ as above, we define
\[
s_i = \min\{m>0|\rho^m(i)\in\{1,...,N\}\}.
\]
The definition of $\mathfrak{S}_{N+r;N}$ implies that $\sum_{i=1}^N
s_i = r+N$.
In (\ref{traceform2}) we are asked to sum over all sets $I$ with $|I|=k$ and all
permutations $\sigma\in\mathfrak{S}_{N+k}$ of the subscripts of
$\bar{I}$ and $\bar{I}'$. From the preceding remarks, this is
equivalent to taking a sum over all sets
$\bar{A}\supset\{1,...,N\}$ and $B$ with $|\bar{A}|+|B|=N+k$, and
all permutations $\rho$ and $\tau$,
$\rho\in\mathfrak{S}_{N+r;N}$, $\tau\in\mathfrak{S}_t$ (where
$|\bar{A}| = N+r$, $|B| = t$). Since we are to sum over all $I$
and $k$ and allow repetitions, we may replace $\bar{I}$ by
$\bar{A}\cup B$, meaning we take the sum over all $\bar{A}$ and
$B$ and all $\rho$ and $\tau$ as above, and eliminate reference
to $I$. Thus, we replace $\xi_{\bar{\imath}_{a_j}}$ by
$\xi_{a_j}$ and $\xi_{\bar{\imath}_{a_j}'}$ by $\xi_{a_j'}$ if we
define $\bar{A}' =
\{N+1,...,2N\}\cup(\bar{A}\setminus\{1,...,N\})$. (Put another
way, pairs $(\bar{I},\sigma)$ are in 1--1 correspondence with
4--tuples $(\bar{A}, B, \rho, \tau)$.) Then we can write
${\mbox{\rm Tr}}\,\kappa_n$ as:
\begin{eqnarray*}
&&\hspace*{-2em}\sum_{k=0}^n
(2(n-k)+1)(-1)^{k+N}\hspace{-1em} \sum_{\bar{A},B\atop |\bar{A}|+|B| =
k+N}
\sum_{\rho\in\mathfrak{S}_{|\bar{A}|;N}\atop\tau\in\mathfrak{S}_{|B|}}
(-1)^{{\mathrm{sgn}}(\rho)}\hspace{-1.1em}\prod_{i=1,\ldots,N\atop
m=0,\ldots,s_i-1}\langle
\zeta_{a_{\rho^m(i)}},\xi'_{a'_{\rho^{m+1}(i)}}\rangle\\
&&\hspace*{2in}\times
(-1)^{{\mathrm{sgn}}(\tau)}\prod_{r=1}^{|B|}\langle\zeta_{b_r},\xi'_{b_{\tau(r)}}\rangle
\end{eqnarray*}
Carrying out the sum over all $B$ of a given size $t$ and all
permutations $\tau$, this becomes:
\begin{eqnarray*}
&&\hspace*{-2em}\sum_{k=0}^n \sum_{\bar{A};
|\bar{A}| = k+N-t\atop t=0,\ldots,k}
\sum_{\rho\in\mathfrak{S}_{|\bar{A}|;N}}
(-1)^{{\mathrm{sgn}}(\rho)+k+N}(2(n-k)+1)\hspace{-1em}\prod_{i=1,\ldots,N\atop
m=0,\ldots,s_i-1}\langle
\zeta_{a_{\rho^m(i)}},\xi'_{a'_{\rho^{m+1}(i)}}\rangle\\
&&\hspace*{2in}\times
\mathrm{tr}(h^*|_{\Lambda^tH^1(\Sigma_{g+N})})
\end{eqnarray*}
Reordering the summations so that the sum over $\bar{A}$ is on
the outside and the sum on $t$ is next, we find that $k =
|\bar{A}|-N+t$ and the expression becomes:
\begin{eqnarray*}
&&\hspace*{-2em} \sum_{\bar{A}\atop |\bar{A}|-N =
0,\ldots,n}\sum_{t=0}^{n-(|\bar{A}|-N)}
\sum_{\rho\in\mathfrak{S}_{|\bar{A}|;N}}(-1)^{|\bar{A}|+{\mathrm{sgn}}(\rho)}
\prod_{i=1,\ldots,N\atop m=0,\ldots,s_i-1}\langle
\zeta_{a_{\rho^m(i)}},\xi'_{a'_{\rho^{m+1}(i)}}\rangle\\
&&\hspace*{1in}\times (-1)^t(2[n-(t-(|\bar{A}|-N))]+1)
\mathrm{tr}(h^*|_{\Lambda^tH^1(\Sigma_{g+N})})
\end{eqnarray*}
Again using the fact that $\Lambda^tH^1(\Sigma_{g+N})$ appears exactly
$2(|\bar{A}|-t)+1$ times in $H^*(\Sigma^{(|\bar{A}|-N)})$ and
writing $|\bar{A}| = N+r$, we can
carry out the sum over $t$ to get that ${\mbox{\rm Tr}}\,\kappa_n$ is:
\[
\sum_{r=0}^n \left[\sum_{\bar{A}\atop|\bar{A}|-N =r}
\sum_{\rho\in\mathfrak{S}_{r+N;N}}
(-1)^{{\mathrm{sgn}}(\rho)+|\bar{A}|}\hspace{-1em}\prod_{i=1,\ldots,N \atop
m=0,\ldots,s_i-1}
\langle\zeta_{a_{\rho^m(i)}},\xi'_{a'_{\rho^{m+1}(i)}}\rangle \right]\cdot
L(h^{(n-r)})
\]
Here $L(h^{(n-r)})$ is the Lefschetz number of $h$ acting on
the $(n-r)$th symmetric power of $\Sigma_{g+N}$ which, as remarked
in (\ref{zetasym}), is the $(n-r)$th coefficient of $\zeta(h)$. In
view of Corollary \ref{symcor}, we will be done if we show
that the quantity in brackets is the $r$th coefficient of the
representative $t^N\det(d_M)$ of $\tau(M^*)$. Recalling the definition
of $\bar{A}$, $\zeta_i$, and $\xi_i$, note that the terms that we are
summing in the brackets above are products over all $i$ of formulae
that look like
\begin{equation}
\langle c_i,\xi'_{a'_{\rho(i)}}\rangle\langle
h^*(\xi_{a_{\rho(i)}}),\xi'_{a'_{\rho^2(i)}}\rangle\cdots\langle
h^*(\xi_{a_{\rho^{s_i-1}(i)}}),c_{\tilde{\rho}(i)}\rangle
\label{example}
\end{equation}
where $\tilde{\rho}(i)\in\{1,\ldots,N\}$ is defined to be
$\rho^{s_i}(i)$. If we sum this quantity over all $\bar{A}$ and all
$\rho$ that induce the same permutation $\tilde{\rho}$ of
$\{1,\ldots,N\}$, we find that (\ref{example}) becomes simply $\langle
{h^*}^{s_i}(c_i),c_{\tilde{\rho}(i)}\rangle$. Therefore the
quantity in brackets is a sum of terms like
\[
(-1)^{{\mathrm{sgn}}(\rho)+r+N}\langle{h^*}^{s_1}c_1,c_{\tilde{\rho}(1)}\rangle\cdots\langle
{h^*}^{s_N}(c_N),c_{\tilde{\rho}(N)}\rangle,
\]
where we have fixed $s_1,\ldots,s_N$ and $\tilde{\rho}$ and carried out the
sum over all $\rho$ such that
\begin{enumerate}
\item $\mathrm{min}\{m>0|\rho^m(i)\in\{1,\ldots,N\}\} = s_i$, and
\item The permutation $i\mapsto \rho^{s_i}(i)$ of
$\{1,\ldots,N\}$ is $\tilde{\rho}$.
\end{enumerate}
(As we will see, ${\mathrm{sgn}}(\rho)$ depends only on
$\tilde{\rho}$ and $|\bar{A}|$.) It remains to sum over partitions
$s_1+\cdots +s_N$ of $s = |\bar{A}| = r+N$ and over permutations
$\tilde{\rho}$. But from Corollary \ref{symcor} and Lemma
\ref{detlemma}, the result of those two summations is precisely
$[\tau(M^*)]_r$, if we can see just that ${\mathrm{sgn}}(\tilde{\rho}) =
{\mathrm{sgn}}(\rho) + |\bar{A}|$ mod 2. That is the content of the next lemma.
\begin{lemma}
Let $A = \{1,\ldots,N\}$ and $\bar{A} = \{1,\ldots,s\}$ for some
$s\geq N$. Let $\rho\in\mathfrak{S}_{s;N}$ and define
\[
\tilde{\rho}\in\mathfrak{S}_N, \,\,\tilde{\rho}(i) = \rho^{s_i}(i)
\]
where $s_i$ is defined as above.
Then ${\mathrm{sgn}}(\rho) = {\mathrm{sgn}}(\tilde{\rho}) + s$ modulo 2.
\end{lemma}
\proof Suppose $\rho = \rho_1\cdots\rho_p$ is an expression
of $\rho$ as a product of disjoint cycles; we may assume that the
initial elements $a_1,\ldots,a_p$ of $\rho_1,\ldots,\rho_p$ are
elements of $A$ since $\rho\in\mathfrak{S}_{s;N}$. For convenience we
include any 1--cycles among the $\rho_i$, noting that the only elements
of $\bar{A}$ that may be fixed under $\rho$ are in $A$. It is easy to
see that cycles in $\rho$ are in 1--1 correspondence with cycles
of $\tilde{\rho}$, so the expression of $\tilde{\rho}$ as a
product of disjoint cycles is $\tilde{\rho} =
\tilde{\rho}_1\cdots\tilde{\rho}_p$ where each $\tilde{\rho}_i$
has $a_i$ as its initial element. For $a\in A$, define
\begin{eqnarray*}
n(a) &=& \min\{m>0| \rho^m(a)\in A\} \\
\tilde{n}(a) &=& \min\{m>0 | \tilde{\rho}^m(a) = a\}.
\end{eqnarray*}
Note that $n(a_i) = s_i$ for $i=1,...,N$, $\sum s_i = s$,
and $\tilde{n}(a_i)$ is the length of the cycle
$\tilde{\rho}_i$. The cycles $\rho_i$ are of the form
\[
\rho_i =
(a_i\cdots\tilde{\rho}(a_i)\cdots\tilde{\rho}^2(a_i)\cdots\cdots
\tilde{\rho}^{\tilde{n}(a_i) - 1}(a_i) \cdots)
\]
where ``$\cdots$'' stands for some number of elements of
$\bar{A}$. Hence the cycles $\rho_i$ have length
\[
l(\rho_i)\hspace{1ex} = \sum_{m=0}^{\tilde{n}(a_i) - 1}
(n(\tilde{\rho}^m(a_i))+1) \hspace{1ex}=\hspace{1ex}
\tilde{n}(a_i) +\hspace{-.5em}
\sum_{m=0}^{\tilde{n}(a_i)-1}\hspace{-.5em}n(\tilde{\rho}^m(a_i)).
\]
Modulo 2, then, we have
\begin{eqnarray*}
{\mathrm{sgn}}(\rho) &=& \sum_{i=1}^p (l(\rho_i) - 1) \\
&=& \sum_{i=1}^p\left[\left(\tilde{n}(a_i) +
\sum_{m=0}^{\tilde{n}(a_i)-1}n(\tilde{\rho}^m(a_i))\right) - 1\right]
\\
&=& \sum_{i=1}^p (\tilde{n}(a_i)-1) +
\sum_{i=1}^p\sum_{m=0}^{\tilde{n}(a_i)-1} n(\tilde{\rho}^m(a_i)) \\
&=& {\mathrm{sgn}}(\tilde{\rho}) + s,
\end{eqnarray*}
since because $\rho\in\mathfrak{S}_{s;N}$ we have
$\sum_{i=1}^p\sum_{m=0}^{\tilde{n}(a_i)-1} n(\tilde{\rho}^m(a_i)) =
\sum_{i=1}^N s_i = s$.\endproof
\section{Proof of Theorem \ref{intthm}}
The theorem of Hutchings and Lee quoted at the beginning of this
work can be seen as (or more precisely, the logarithmic derivative
of formula (\ref{HLeqn}) can be seen as) a kind of Lefschetz
fixed-point theorem for partially-defined maps, specifically the
return map $F$, in which the torsion $\tau(M^*)$ appears as a
correction term (see \cite{HL1}). Now, the Lefschetz number of a
homeomorphism $h$ of a closed compact manifold $M$ is just the
intersection number of the graph of $h$ with the diagonal in
$M\times M$; such consideration motivates the proof of Theorem
\ref{HLthm} in \cite{HL1}. With the results of Section
\ref{stdsec}, we can give another construction.
Given $\phi\co X = M(g,N,h)\to S^1$ our circle-valued Morse function, cut along
$\phi^{-1}(-1)$ to obtain a cobordism $W_\phi$ between two copies
of $\Sigma_{g+N}$. Write $\gamma_i$, $i=1,\ldots,N$ for the
intersection of the ascending manifolds of the index-1 critical
points with $\partial_+W$ and $\delta_i$ for the intersection of
the descending manifolds of the index-2 critical points with
$\partial_-W$. Since the homology classes $[\gamma_i]$ and
$[\delta_i]$ are the same (identifying
$\partial_+W=\partial_-W=\Sigma_{g+N}$), we may perturb the
curves $\gamma_i$ and $\delta_i$ to be parallel, ie, so that they do not
intersect one another (or any other $\gamma_j$, $\delta_j$ for
$j\neq i$ either). Choose a complex structure on $\Sigma_{g+N}$
and use it to get a complex structure on the symmetric powers
$\Sigma_{g+N}^{(k)}$ for each $k$. Write $T_\gamma$ for the
$N$--torus $\gamma_1\times\cdots\times \gamma_N$ and let
$T_\delta = \delta_N\times\cdots\times\delta_1$. Define a function
\[
\psi\co T_\gamma\times \Sigma_{g+N}^{(n)}\times T_\delta\to
\Sigma_{g+N}^{(n+N)}\times\Sigma_{g+N}^{(n+N)}
\]
by mapping the point $(q_1,\ldots,q_N,\sum p_i,q'_N,\ldots,q'_1)$
to $(\sum p_i + \sum q_j, \sum p_i + \sum q_j')$.
The perhaps unusual-seeming orders on the $\delta_i$ and in the
domain of $\psi$ are chosen to obtain the correct sign in the sequel.
\begin{prop} $\psi$ is a smooth embedding, and $D =
\mathrm{Im}\psi$ is a totally real submanifold of
$\Sigma_{g+N}^{(n+N)}\times\Sigma_{g+N}^{(n+N)}$.
\end{prop}
The submanifold $D$ plays the role of the diagonal in the
Lefshetz theorem.
\proof That $\psi$ is one-to-one is clear since the
$\gamma_i$ and $\delta_j$ are all disjoint. For smoothness, we
work locally. Recall that the symmetric power $\Sigma_g^{(k)}$ is
locally isomorphic to ${\mathbb C}^{(k)}$, and a global chart on the latter is
obtained by mapping a point $\sum w_i$ to the coefficients of the
monic polynomial of degree $k$ having zeros at each $w_i$. Given a
point $(\sum p_i +\sum q_j, \sum p_i+\sum q_j')$
of $\mathrm{Im}(\psi)$ we can choose a coordinate chart on
$\Sigma_{g+N}$ containing all the points $p_i,q_j,q_j'$ so that the $\gamma_i$ and
$\delta_j$ are described by disjoint curves in ${\mathbb C}$. Thinking of
$q_j\in\gamma_j\subset{\mathbb C}\cong{\mathbb C}^{(1)}$ and similarly for $q_j'$, we
have that locally $\psi$ is just the multiplication map:
\begin{eqnarray*}
&&\hspace*{-.5in}\left(
(z-q_1),\ldots,(z-q_N),\prod_{i=1}^n(z-p_i),(z-q_1'),\ldots,(z-q_N')\right)\\
&&\hspace*{.5in}\mapsto \left(\prod_{i=1}^n
(z-p_i)\prod_{j=1}^N(z-q_j),\hspace{1ex}\prod_{i=1}^n(z-p_i)\prod_{j=1}^N(z-q_j')\right)
\end{eqnarray*}
It is clear that the coefficients of the polynomials on the right hand side
depend smooth\-ly on the coefficients of the one on the left
and on the $q_j$, $q_j'$.
On the other hand, if $(f(z),g(z))$ are the polynomials whose
coefficients give the local coordinates for a point in
$\mathrm{Im}(\psi)$, we know that $f(z)$ and $g(z)$ share exactly
$n$ roots since the $\gamma_i$ and $\delta_j$ are disjoint. If
$p_1$ is one such shared root then we can write $f(z) =
(z-p_1)\tilde{f}(z)$ and similarly for $g(z)$, where
$\tilde{f}(z)$ is a monic polynomial of degree $n+N-1$ whose
coefficients depend smoothly (by polynomial long division!) on
$p_1$ and the coefficients of $f$. Continue factoring in this way
until $f(z) = f_0(z)\prod_{i=1}^n(z-p_i)$, using the fact that
$f$ and $g$ share $n$ roots to find the $p_i$. Then $f_0$ is a
degree $N$ polynomial having one root on each $\gamma_i$, hence
having all distinct roots. Those roots (the $q_j$) therefore depend smoothly on
the coefficients of $f_0$, which in turn depend smoothly on the
coefficients of $f$. Hence $D$ is smoothly embedded.
That $D$ is totally real is also a local calculation, and is a
fairly straightforward exercise from the definition.\endproof
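Though not needed for the proof, the smoothness just established is concrete enough to demonstrate numerically: in the local chart, $\psi$ is multiplication of monic polynomials, and a shared root is stripped off by long division, both of which are polynomial (hence smooth) operations on the coefficients. A minimal sketch (Python with numpy assumed; the sample points are hypothetical):
\begin{verbatim}
# Sketch: the local model of psi as polynomial multiplication, and recovery
# of a shared root by long division.  Sample points are hypothetical.
import numpy as np

p = [0.2 + 0.1j, -0.3j]          # the n = 2 shared points sum p_i
q = [1.0 + 0j, 2.0 + 0j]         # one point on each gamma_j
qp = [1.0 + 0.5j, 2.0 + 0.5j]    # one point on each delta_j

f = np.poly(p + q)               # coeffs of prod(z - p_i) prod(z - q_j)
g = np.poly(p + qp)              # coeffs of prod(z - p_i) prod(z - q_j')

# strip the shared root p[0] from f; the quotient's coefficients depend
# polynomially on p[0] and the coefficients of f
quot, rem = np.polydiv(f, np.poly([p[0]]))
assert np.allclose(rem, 0)
\end{verbatim}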
We are now ready to prove the ``algebraic'' portion of Theorem
\ref{intthm}.
\begin{thm}\label{algintthm}
Let $\Gamma$ denote the graph of the map $h^{(n+N)}$
induced by the gluing map $h$ on the symmetric product
$\Sigma_{g+N}^{(n+N)}$. Then
\[
D.\Gamma = {\mbox{\rm Tr}}\,\kappa_n.
\]
\end{thm}
\proof Using the notation from the previous section, we
have that in cohomology the duals of $D$ and $\Gamma$ are
\begin{eqnarray*}
D^* &=& \sum_{\beta\in B_{g+N}^{(n)}} (-1)^{\epsilon_1(\beta)}
(c_1\wedge\cdots \wedge c_N\wedge \beta^{\circ})\times
(c_1\wedge\cdots \wedge c_N\wedge\beta) \\
\Gamma^*&=& \sum_{\alpha\in B_{g+N}^{(n+N)}}(-1)^{\deg(\alpha)}
\alpha^\circ\times {h^*}^{-1}(\alpha).
\end{eqnarray*}
Here $\epsilon_1(\beta) =
\deg(\beta)(N+1)+\frac{1}{2}N(N-1)$. Indeed, since
the diagonal is the pushforward of the graph by $1\times
h^{-1}$, we get that the dual of the graph is the pullback of the
diagonal by $1\times h^{-1}$. We will find it convenient to write
\[
D^* = \sum_\beta (-1)^{\epsilon_1(\beta) + \epsilon_2(\beta)}
(c_1\wedge\cdots \wedge c_N\wedge \beta)\times (c_1\wedge\cdots
\wedge c_N\wedge\beta^{\circ}),
\]
by making the substitution $\beta\mapsto\beta^{\circ}$ in the
previous expression. Since $\beta^{\circ\circ} =
\pm\beta$, the result is still a sum over the monomial basis with
an additional sign denoted by $\epsilon_2$ in the above but which
we will not specify.
Therefore the
intersection number is
\begin{equation}\begin{split}
D^*\cup \Gamma^* =& \sum_{\alpha,\beta}
(-1)^{\epsilon_1 + \epsilon_2+ \epsilon_3(\alpha,\beta)}\\&
(\alpha^\circ\cup(c_1\wedge\cdots\wedge c_N\wedge\beta))\times
({h^*}^{-1}\alpha\cup(c_1\wedge\cdots\wedge
c_N\wedge\beta^\circ))\end{split}
\label{intformula1}
\end{equation}
where $\epsilon_3(\alpha,\beta) = \deg(\alpha)(1+\deg(\beta) +
N)$. Since this is a sum over a monomial basis $\alpha$, the
first factor in the cross product above vanishes unless $\alpha =
c_1\wedge\cdots\wedge c_N\wedge\beta$, and in that case is 1.
Therefore $\deg(\alpha) = \deg(\beta)+N$, which gives
$\epsilon_3(\alpha,\beta) \equiv 0$ mod 2, and (\ref{intformula1}) becomes
\begin{eqnarray}
D^*\cup\Gamma^* &=& \sum_\beta (-1)^{\epsilon_1 + \epsilon_2}
{h^*}^{-1}(c_1\wedge\cdots\wedge c_N\wedge\beta)\cup
(c_1\wedge\cdots\wedge c_N\wedge\beta^\circ)\nonumber\\
&=& \sum_\beta (-1)^{\epsilon_1+\epsilon_2}
(c_1\wedge\cdots\wedge c_N\wedge\beta) \cup
h^*(c_1\wedge\cdots\wedge c_N\wedge\beta^\circ)\nonumber\\
&=& \sum_\beta (-1)^{\epsilon_1} (c_1\wedge\cdots\wedge
c_N\wedge\beta^\circ) \cup h^*(c_1\wedge\cdots\wedge c_N\wedge\beta)
\label{intformula2}
\end{eqnarray}
where we have again used the substitution
$\beta\mapsto\beta^\circ$ and therefore cancelled the sign $\epsilon_2$.
Now, some calculation using the cup product structure of
$H^*(\Sigma_{g+N}^{(n+N)})$ derived in \cite{MD} shows that
\[
c_1\wedge\cdots\wedge c_N\wedge\beta^\circ =
(-1)^{\epsilon_4(\beta)}(d_1\wedge\cdots\wedge d_N\wedge\beta)^\circ,
\]
where $\epsilon_4(\beta) = N\deg(\beta) +
\frac{1}{2}N(N+1) \equiv \epsilon_1(\beta) + \deg(\beta) + N$ mod 2.
Note that ${(\cdot)}^\circ$ refers to duality in
$H^*(\Sigma_{g+N}^{(n)})$ on the left hand side and in
$H^*(\Sigma_{g+N}^{(n+N)})$ on the right. Returning with this to
(\ref{intformula2}) gives
\[
D^*\cup\Gamma^* = \sum_\beta (-1)^{\deg(\beta)+N}
(d_1\wedge\cdots\wedge d_N\wedge\beta)^\circ\cup
h^*(c_1\wedge\cdots\wedge c_N\wedge\beta),
\]
which is ${\mbox{\rm Tr}}\,\kappa_n$ by (\ref{traceform}). Theorem
\ref{algintthm} follows.\endproof
To complete the proof of Theorem \ref{intthm}, we recall that we
have already shown that $D$ is a totally real submanifold of
$\Sigma_{g+N}^{(n+N)}\times\Sigma_{g+N}^{(n+N)}$. The graph of
$h^{(n+N)}$, however, is not even smooth unless $h$ is an
automorphism of the chosen complex structure of $\Sigma_{g+N}$:
in general the set-theoretic map induced on a symmetric power by
a diffeomorphism of a surface is only Lipschitz continuous.
Salamon \cite{S} has shown that if we choose a path of
complex structures on $\Sigma$ between the given one $J$ and
$h^*(J)$, we can construct a symplectomorphism of the moduli
space ${\cal M}(\Sigma,J)\cong\Sigma_{g+N}^{(n+N)}$ that is
homotopic to the induced map $h^{(n+N)}$. Hence $\Gamma$ is
homotopic to a Lagrangian submanifold of
$\Sigma_{g+N}^{(n+N)}\times-\Sigma_{g+N}^{(n+N)}$. Since
Lagrangians are in particular totally real, and since
intersection numbers do not change under homotopy, Theorem
\ref{intthm} is proved.
\section{Proof of Lemma \ref{perturblemma}}
\label{lemmapfsec}
We restate the lemma:
{\sl Assume that there are no ``short'' gradient flow lines
between critical points, that is, every flow line between critical
points intersects $\Sigma_g$ at least once.
Given a symmetric pair $(g_0,\phi)$ on $M(g, N, h)$ and suitable genericity
hypotheses on $h$, there exists a $C^0$--small
perturbation of $g_0$ to a metric $\tilde{g}$ such that for given $n\geq 0$
\begin{enumerate}
\item The gradient flow of $\phi$ with respect to $\tilde{g}$ is
Morse--Smale; in particular the hypotheses of Theorem \ref{HLthm}
are satisfied.
\item The quantity $[\zeta(F)\tau(M^*)]_m$, $m\leq
n$, does not change under this perturbation.
\end{enumerate}}
\proof
Alter $g_0$ in a small neighborhood of $\Sigma_g\subset M(g, N, h)$ as
follows, working in a half-collar neighborhood of $\Sigma_g$ diffeomorphic to
$\Sigma_g\times (-\epsilon, 0]$ using the flow of $\nabla_{g_0}\phi$ to
obtain the product structure on this neighborhood.
Let $p_1,\ldots,p_{2N}$ denote the
points in which the ascending manifolds (under gradient flow of $f$
with respect to the symmetric metric $g_0$) of the index-2 critical points
intersect $\Sigma_g$ in $W_\phi$. Since $g_0$ is symmetric, these
points are the same as the points $q_1,\ldots,q_{2N}$ in which the
descending manifolds of the index-1 critical points intersect
$\Sigma_g$. Let ${\cal O}$ denote the union of all closed orbits of
$\nabla\phi$ (with respect to $g_0$) of degree no more than $n$, and
all gradient flow lines connecting index-1 to index-2 critical points. We may
assume that this is a finite set. Choose small disjoint coordinate disks
$U_i$ around each $p_i$ such that $U_i\cap ({\cal O}\cap \Sigma_g) =
\emptyset$.
In $U_i\times (-\epsilon,0]$, we may suppose the Morse function $f$ is given by
projection onto the second factor, $(u,t)\mapsto t$, and the metric is
a product $g_0 = g_{\Sigma_g}\oplus (1)$. Let ${X}_i$ be a
nonzero constant
vector field in the coordinate patch $U_i$ and $\mu$ a cutoff function
that is equal to $1$ near $p_i$ and zero off a small neighborhood of
$p_i$ whose closure is in $U_i$. Let $\nu(t)$ be a bump function that
equals 1 near $t = -\epsilon/2$ and vanishes near the ends of the
interval $(-\epsilon,0]$. Define the vector field $v$ in the set
$U_i\times (-\epsilon, 0]$ by $v(u,t) = \nabla_{g_0}\phi + \nu(t)\mu(u)
X_i(u)$. Now define the metric $g_{X_i}$ in $U_i\times (-\epsilon,0]$
by declaring that $g_{X_i}$ agrees with $g_0$
on tangents to slices $U_i\times\{t\}$, but that $v$ is orthogonal to the
slices. Thus, with respect to $g_{X_i}$, the gradient $\nabla\phi$ is
given by a multiple of $v(u,t)$ rather than $\partial/\partial t$.
It is easy to see that replacing $g_0$ by $g_{X_i}$ in $U_i\times
(-\epsilon,0]$ for each $i = 1,\ldots,2N$ produces a metric $g_X$ for
which upward gradient flow of $\phi$ on $W_\phi$ does not connect index-2
critical points to index-1 critical points with ``short'' gradient
flow lines. Elimination of gradient flows of $\phi$ from index-2 to index-1
points that intersect $\Sigma_{g+N}$ is easily arranged by small
perturbation of $h$, as are transverse intersection of ascending and
descending manifolds and nondegeneracy of fixed points of $h$ and its
iterates. Hence the new metric $g_X$ satisfies condition (1) of the
Lemma.
For condition (2), we must verify that we have neither created nor
destroyed either closed orbits of $\nabla\phi$ or flows from index-1
critical points to index-2 critical points. The fact that no such
flow lines have been destroyed is assured by our choice of neighborhoods
$U_i$. We now show that we can choose the vector fields $X_i$ such
that no fixed points of $F^k$ are created, for $1\leq k\leq n$.
Let $F_1\co \Sigma_g\to \Sigma_{g+N} = \partial_+W_\phi$ be the map induced by gradient
flow with respect to $g_0$, defined away from the $q_j$, and let
$F_2\co \Sigma_{g+N} = \partial_-W_\phi\to\Sigma_g$ be the similar map
from the bottom of the cobordism, defined away from the $c_j$. Then
the flow map $F$, with respect to $g_0$, is given by the composition
$F = F_2\circ h\circ F_1$ where this is defined. The return map with
respect to the $g_X$--gradient, which we will write $\tilde{F}$, is
given by $F$ away from the $U_i$ and by $F + cX$ in the coordinates on
$U_i$ where $c$ is a nonnegative function on $U_i$ depending on $\mu$
and $\nu$, vanishing near $\partial U_i$.
Consider the graph $\Gamma_{F^k}\subset \Sigma_g\times\Sigma_g$. Since
$F^k$ is
not defined on all of $\Sigma_g$ the graph is not closed, nor is its
closure a cycle since $F^k$ in general has no continuous extension to
all of $\Sigma$. Indeed, the boundary of $\Gamma_{F^k}$ is given by a
union of products of ``descending slices'' (ie, the intersection of
a descending manifold of a critical point with $\Sigma_g$) with
ascending slices. Restrict attention to the neighborhood
$U$ of $p$, where for convenience $p$ denotes any of the
$p_1,\ldots,p_{2N}$ above. We have chosen $U$ so that there are no fixed
points of $F^k$ in this neighborhood, ie, the graph and the diagonal
are disjoint over $U$. If there is an open set around $\Gamma_{F^k}\cap
(U\times U)$ that misses the diagonal $\Delta\subset U\times
U$, then any sufficiently small choice of $X$ will keep $\Gamma_{F^k}$
away from $\Delta$ and therefore produce no new closed orbits of the
gradient flow. However, it may be that $\partial \Gamma_{F^k}$ has points
on $\Delta$. Indeed, if $c\subset\partial_+W_\phi = \Sigma_{g+N}$ is
the ascending slice of the critical point corresponding to $p=q$,
suppose $h^k(c)\cap c\neq \emptyset$. Then it is not hard to see that
$(p,p)\in\partial\Gamma_{F^k}$, and this situation cannot
be eliminated by genericity assumptions on $h$. Essentially, $p$ is
both an ascending slice and a descending slice, so
$\partial\Gamma_{F^k}$ can contain both $\{p\}\times(\mathrm{asc.\ slice})$
and $(\mathrm{desc.\ slice})\times\{p\}$, and ascending and
descending slices can have $p$ as a boundary point.
Our perturbation of $F$ using $X$ amounts, over $U$, to a ``vertical''
isotopy of $\Gamma_{F^k}\subset U\times U$. The question of whether there is an $X$
that produces no new fixed points is that of whether there is a
vertical direction to move $\Gamma_{F^k}$ that results in the
``boundary-fixed'' points like $(p,p)$ described above remaining
outside of $\mathrm{int}(\Gamma_{F^k})$. The existence of such a direction is
equivalent to the jump-discontinuity of $F^k$ at $p$. This argument is
easy to make formal in the case $k=1$, and for $k>1$ the ideas are the
same, with some additional bookkeeping. We leave the general argument
to the reader.
Turn now to the question of whether any new flow lines between
critical points are created. Let $D = (h\circ F_1)^{-1}(\bigcup c_i)$ denote
the first time that the descending manifolds of the critical points
intersect $\Sigma_g$, and let $A = F_2\circ h (\bigcup c_i)$ be the
similar ascending slices. Then except for short flows, the flow lines
between critical points are in 1--1 correspondence with intersections
of $D$ and $F^k(A)$, for various $k\geq 0$. We must show that our
perturbations do not introduce new intersections between these sets.
It is obvious from our constructions that only $F^k(A)$ is affected by
the perturbation, since only $F_2$ is modified.
Since there are no short flows by assumption, there are no
intersections of $h^{-1}(c_j)$ with $c_i$ for any $i$ and $j$. This means
that $D$ consists of a collection of embedded circles in $\Sigma_g$,
where in general it may have included arcs connecting various $q_i$.
Hence, we can choose our neighborhoods $U_i$ small enough that
$U_i\cap D = \emptyset $ for all $i$, and therefore the perturbed
ascending slices $\tilde{F}^k(A)$ stay away from $D$. Hence no new
flows between critical points are created.
This concludes the proof of Lemma \ref{perturblemma}.\endproof
\section{Introduction}
In recent years, uniformly doped quantum wells (QWs) have generated increasing interest due to the long relaxation times measured therein.\cite{tribollet2, eble,chamarro}
The long relaxation times are due to spins localized on donor centers.
While similar relaxation times have been measured in modulation doped systems, they have proven less robust there,
owing to the weaker binding energy of localized states and to potential fluctuations from remote impurities.\cite{tribollet1,astakhov, zhukov}
Localization is either not seen at all\cite{tribollet1} or localization centers
thermally ionize rapidly with increasing temperature due to a small binding energy.\cite{zhukov, astakhov}
QWs uniformly doped within the well have the advantage of being characterized by well defined impurity centers with a larger
binding energy.
The experimental control over the amount of doping and the well size makes doped QWs particularly appealing for the study of quasi-two-dimensional spin dynamics.
Much of the theoretical study of spin relaxation in semiconducting systems (QWs in particular) has either focused solely on itinerant
electrons\cite{weng, zhou,kainz} or solely on localized electrons\cite{kavokin1, kavokin2} without regard for either the presence of the other state or the
interaction between the two states. Recently the existence of, and interaction between, itinerant and localized states have been dealt with in bulk systems
by Putikka and Joynt\cite{putikka} and Harmon et al.\cite{harmon}. The results of these calculations are in very good quantitative and qualitative agreement with experimental
observations\cite{kikkawa, ghosh} in bulk n-GaAs and n-ZnO. In this paper, the theory of two interacting spin subsystems is applied to QWs.
The paper is structured as follows:
Section \ref{section2} describes the optical generation of spin polarization in QWs;
Section \ref{blochSection} introduces a set of modified Bloch equations to model spin dynamics;
Section \ref{occupations} calculates the equilibrium populations of localized and conductions states;
Section \ref{relaxationSection} determines the relaxation rates for all pertinent mechanisms for localized and conduction electrons;
Sections \ref{resultsGaAs} and \ref{resultsCdTe} compare our results to two GaAs QWs (uniformly doped and undoped) and one uniformly doped CdTe QW;
Section \ref{discussion} discusses our findings, suggests future work, and proposes QWs for spin lifetime optimization;
we conclude in Section \ref{conclusion}.
\section{Spin Polarization in Quantum Wells}\label{section2}
In QWs at low temperatures the creation of non-zero spin polarization, in the conduction band and donor states, proceeds from the formation of
trions (charged excitons, $X^{\pm}$) and exciton-bound-donor complexes ($D^0 X$) respectively, from the absorption of circularly polarized light.
Polarization via the trion avenue is most relevant for modulation doped QWs where donor centers in the well are sparse.\cite{tribollet1, chamarro}
Due to the modulation doping outside the well, the number of conduction electrons in the well may be plentiful.
In such cases, assuming an incident $\sigma^+$ pump pulse, a $+\frac{3}{2}$ hole and $-\frac{1}{2}$ electron are created. These bind with a resident electron from
the electron gas in the QW to form a trion ($X^-_{3/2}$). The `stolen' electron will be $+\frac{1}{2}$ to form a singlet state with the exciton's electron.
Hence, the electron gas will be left negatively polarized since the excitons are preferentially formed with spin up resident electrons.
If the hole spin relaxes faster than the trion decays, the electron gas will remain polarized.\cite{tribollet1} Selection rules dictate that $+\frac{3}{2}$ ($-\frac{3}{2}$) holes
will recombine only with $-\frac{1}{2}$
($+\frac{1}{2}$) electrons. Therefore if the hole spins relax rapidly, the released electrons will have no net polarization and the polarized electron gas will remain
predominantly negatively oriented.
A very similar picture is given for the polarization of donor bound electrons in uniformly doped QWs where the donor bound electrons play the role of the
resident electrons.
\cite{tribollet2, eble} At low temperatures the donors are nearly all occupied and the density of the electron gas will be negligible.
When excitations are tuned at the exciton-bound-donor resonance, instead of photo-excitons binding with the resident electron gas,
they bind with neutral donors to form the complexes $D^0 X_{3/2}$. This notation implies that
a $+\frac{3}{2}$ hole -
$-\frac{1}{2}$ electron exciton bound to a $+\frac{1}{2}$ donor bound electron. Once again for very short hole relaxation times, the donor bound electrons can be spin
polarized.
The measured long spin relaxation times in uniformly doped QWs imply that spin polarization remains after short time processes such as
$X$ and $D^0 X$ recombination have completed. In other words, the translational degrees of freedom thermalize much more quickly than the spin degrees of freedom.
The occupational statistics of itinerant and localized electrons are important and can be determined from equilibrium thermodynamics.
As the temperature is increased, the electrons bound to donors thermally ionize and become itinerant.
In analogy with the trion case,
if the excitation energy is maintained at the $D^0 X$ frequency, the initial polarization should decrease as there are fewer $D^0 X$ complexes allowed.\cite{zhukov}
However as the number of electrons in the conduction states increases, the spin that exists on the donors will equilibrate by cross relaxing to
conduction states by the isotropic exchange
interaction.
If cross relaxation is rapid enough, the total spin, which is conserved by exchange, will now exist
in the donor and conduction states weighted by their respective equilibrium densities.\cite{putikka, mahan, harmon} The polarized electron moments will then
proceed to relax via different processes
for the localized and itinerant states.
Since trion binding energies ($\sim 2$ meV)\cite{zhukov, astakhov} are smaller than donor-exciton binding energies ($\sim 4.5$ meV),\cite{kheng}
polarization of itinerant electrons via trion formation should be negligible as the temperature is increased.
The above description is complicated when the photoexcitation energy is at the exciton resonance and not the exciton-bound-donor resonance.
In such a case, the excitons may
recombine, or the electron-in-exciton spin may relax before binding to a donor, so one expects the low temperature spin relaxation to
reflect also the exciton spin dynamics instead of the donor electron spin dynamics alone.\cite{zhukov}
In essence, the electrons in an exciton represent a third spin environment with a characteristic spin relaxation time scale different
from that of the localized donor and itinerant electrons. Because of the electron's proximity to a hole, relaxation may result from spin exchange or recombination.
Therefore to understand the spin dynamics in QWs, it is imperative to examine the relaxation processes that affect the polarized spin moments
of the various spin systems.
\section{Modified Bloch Equations}\label{blochSection}
After rapid exciton-donor-bound complex formation, recombination, and hole relaxation, we model the zero field spin dynamics of the system in terms of modified Bloch equations:
\begin{align}
\label{bloch0}\frac{d m_{c}}{dt} & = -\Big( \frac{1}{\tau_{c}} + \frac
{n_{l}}{\gamma^{cr}_{c,l}} \Big) m_{c} + \frac{n_{c}}{\gamma^{cr}_{c,l}} m_{l}\nonumber\\
\frac{d m_{l}}{dt} & = \frac{n_{l}}{\gamma^{cr}_{c,l}} m_{c} -\Big( \frac{1}%
{\tau_{l}} + \frac{n_{c}}{\gamma^{cr}_{c,l}} \Big) m_{l}
\end{align}
where $m_c$ ($m_l$) are the conduction (localized) magnetizations, $n_c$ ($n_l$) are the conduction (localized) equilibrium occupation densities,
$\tau_c$ ($\tau_l$) are the conduction (localized) spin relaxation times, and $\gamma^{cr}_{c,l}$ is a parameter describing the cross relaxation time between the two
spin subsystems. Mahan and Woodworth\cite{mahan} have shown the cross relaxation time between impurity and conduction electron spins to be much shorter than any of the other spin relaxation times relevant here. We shall assume below that the same is true for the cross relaxation between electrons bound in an exciton and conduction or impurity electron spins. The motivation of these modified Bloch equations is set forth in Refs. (\onlinecite{putikka}) and (\onlinecite{harmon}).
Eqs. (\ref{bloch0}) are valid for photoexcitation energies that do not cause free exciton formation (only two relevant spin systems). It is important to note that Eqs. (\ref{bloch0}) hold only for intermediate time scales. These scales are long compared with laser pulse times, energy relaxation times that determine subsystem populations, and donor-bound exciton formation times. Fortunately, these intermediate time scales are the ones probed in the experiments.
Standard methods can be used to solve these differential equations with initial conditions $m_c(0)$ and $m_l(0)$. We assume that the initial spin polarization is perpendicular to the QW's growth plane and that the excitation density, $N_x$, is small enough such that the resultant spin relaxation time, $\tau_s$, will not depend strongly on $N_x$.\cite{eble} The solutions yield a time dependence of the total magnetization
$m(t) = m_c(t) + m_l(t)$ to be a sum of two exponentials - one of which is $\exp(-t/\tau_s)$ and the other of which has a time constant proportional to the cross
relaxation time.
In the case of rapid cross relaxation (faster than all spin relaxation mechanisms), only one exponential survives and we express the total relaxation rate as
\begin{equation}\label{bloch}
\frac{1}{\tau_s} = \frac{n_l}{n_{imp}} \frac{1}{\tau_l} + \frac{n_c}{n_{imp}} \frac{1}{\tau_c}
\end{equation}
where $n_{imp} = n_l + n_c$ is the total impurity concentration.
This model, or variations of it, has been successfully applied to bulk n-GaAs and bulk n-ZnO. \cite{putikka, harmon}
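The passage from Eq. (\ref{bloch0}) to the single rate of Eq. (\ref{bloch}) in the fast-cross-relaxation limit can be checked directly. The sketch below (Python with numpy/scipy assumed; all parameter values are illustrative, not fitted values) integrates the coupled equations and compares the slow decay rate of $m_c+m_l$ with the occupation-weighted average of the subsystem rates.
\begin{verbatim}
# Sketch: modified Bloch equations in the fast cross-relaxation limit.
# All numbers are illustrative.  Units: ns; n_imp normalized to 1.
import numpy as np
from scipy.integrate import solve_ivp

tau_c, tau_l = 0.1, 10.0     # conduction / localized spin lifetimes
n_c, n_l = 0.3, 0.7          # equilibrium occupation fractions
g_cr = 1e-4                  # cross-relaxation parameter (fast)

def rhs(t, m):
    mc, ml = m
    return [-(1/tau_c + n_l/g_cr)*mc + (n_c/g_cr)*ml,
            (n_l/g_cr)*mc - (1/tau_l + n_c/g_cr)*ml]

sol = solve_ivp(rhs, (0.0, 2.0), [n_c, n_l], method='Radau',
                dense_output=True, rtol=1e-10, atol=1e-14)
m1, m2 = (sol.sol(t).sum() for t in (1.0, 2.0))
print(np.log(m1/m2))          # numerical slow decay rate (per ns)
print(n_l/tau_l + n_c/tau_c)  # 1/tau_s from the weighted-rate formula
\end{verbatim}
The two printed rates agree closely once the cross relaxation is much faster than either spin relaxation rate, which is the regime assumed in the text.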
If the photoexcitation energy is set near the exciton energy, the Bloch equations must be modified to take into account exciton spin relaxation and multiple cross relaxations: $\gamma_{i,j}$ for $i,j \in c, l, x$ for conduction, localized, and exciton spins respectively.
We model exciton spin relaxation as electron-in-exciton spin relaxation\cite{adachi2} and assume that hole spin relaxation is very rapid. Eq. (\ref{bloch0}) generalizes to
\begin{align}
\label{bloch2}
\frac{d m_{c}}{dt} & = -\Big( \frac{1}{\tau_{c}} + \frac
{n_{l}}{\gamma^{cr}_{c,l}}+\frac{n_x}{\gamma^{cr}_{c,x}} \Big) m_{c} + \frac{n_{c}+N_x - n_x}{\gamma^{cr}_{c,l}} m_{l} + \frac{n_{c}+N_x - n_x}{\gamma^{cr}_{c,x}} m_{x} \nonumber\\
\frac{d m_{l}}{dt} & = \frac{n_{l}}{\gamma^{cr}_{c,l}} m_{c} -\Big( \frac{1}%
{\tau_{l}} + \frac{n_{c} + N_x - n_x}{\gamma^{cr}_{c,l}}+\frac{n_x}{\gamma^{cr}_{l,x}} \Big) m_{l} + \frac{n_{l}}{\gamma^{cr}_{l,x}} m_{x} \nonumber\\
\frac{d m_{x}}{dt} & = \frac{n_x}{\gamma^{cr}_{c,x}} m_{c} + \frac{n_x}{\gamma^{cr}_{l,x}} m_{l} -\Big( \frac{1}%
{\tau_{x}} + \frac{n_{c} + N_x - n_x}{\gamma^{cr}_{c,x}} + \frac{n_l}{\gamma^{cr}_{l,x}} \Big) m_x,
\end{align}
where $\tau_x$ represents the spin lifetime of an electron bound to a hole. $n_x$ ($m_x$) is the number (magnetization) of electrons bound in an exciton.
$N_x$ is the initial density of photoexcited electrons and the quantity $N_x - n_x$ is the number of photoexcited electrons that do not participate in an exciton.
We assume quasi-equilibrium such that $n_x$ is determined from thermodynamics (see Section \ref{occupations}). It should be stated that Eq. (\ref{bloch2}) is valid only for
times shorter than the recombination time; in other words, on a time scale where $N_x$ can be assumed to not change significantly.
Recombination times have been measured\cite{adachi} in systems similar to those studied here to be longer than the observed spin relaxation times, so this
approximation seems justified.
In Section \ref{resultsGaAs}, we find that the effects of recombination of free carriers can be added to $1/\tau_c$ to obtain excellent agreement with the experimental data.
If we solve the system of equations in Eq. (\ref{bloch2}) as we did for Eq. (\ref{bloch0}), we obtain the relaxation rate
\begin{equation}\label{bloch3}
\frac{1}{\tau_s} = \frac{n_l}{n_{imp}+N_x} \frac{1}{\tau_l} + \frac{n_c+N_x-n_x}{n_{imp}+N_x} \frac{1}{\tau_c} + \frac{n_x}{n_{imp}+N_x} \frac{1}{\tau_x}.
\end{equation}
For both Eqs. (\ref{bloch}) and (\ref{bloch3}), we allow $\tau_l$, $\tau_c$, and $\tau_x$ to be phenomenological parameters of the form
$\tau_i^{-1} = \sum_j 1/\tau_j$ where $j$ refers to a type of spin relaxation mechanism.
From the experimental constraints and results, we can determine which relaxation mechanisms are important.
\section{Occupation Concentrations}\label{occupations}
As shown above, the relative occupations of localized and itinerant states play an important role in our theory.
Fortunately, in two dimensional systems, the occupation probabilities of the two states ($n_l/n_{imp}$ and $n_c/n_{imp}$) can be determined
exactly. The densities we are interested in are dilute enough such that the non-degenerate
limit (Boltzmann statistics) can be utilized.
\begin{figure}[hbtp]\label{occ}
\begin{centering}
\includegraphics[scale = 0.32,trim = 0 00 00 00, angle = 0,clip]{qwFig1}
\caption[]{Occupation probabilities of localized (solid line) and conduction (dash-dotted line) states with impurity density $n_{imp} = 4 \times 10^{10}$ cm$^{-2}$ determined from Eqs. (\ref{oc1}, \ref{oc2}). Other parameters
for GaAs are $a_B^{\ast} = 10.4$ nm and $m^{\ast} = 0.067 m$.}
\end{centering}
\end{figure}
The probability for a donor to be singly occupied (only the ground state, at energy $-E_b$ below the conduction band edge, needs to be considered\cite{look}) is\cite{ashcroft}
\begin{equation}
\frac{n_l}{n_{imp}} = \frac{1}{\frac{1}{2} e^{(-E_b - \mu)/k_B T} + 1}.
\end{equation}
The density of itinerant states is given by
\begin{equation}
n_c = N_c e^{ \mu/k_B T}
\end{equation}
where $N_c = m^{\ast} k_B T / \hbar^2 \pi$ and the conduction band edge is taken to be zero energy.
The chemical potential $\mu$ can be found using the constraint
\begin{equation}\label{oc1}
\frac{n_l}{n_{imp}}+\frac{n_c}{n_{imp}} = 1.
\end{equation}
Using the result for $\mu$, one obtains
\begin{equation}\label{oc2}
\frac{n_l}{n_{imp}} = \frac{\sqrt{1+Q(T, n_{imp})}-1}{\sqrt{1+Q(T, n_{imp})}+1},
\end{equation}
where
\begin{equation}
Q(T,n_{imp}) = \frac{8 n_{imp}}{N_c} e^{E_b/k_B T}.
\end{equation}
An example of the temperature dependence of these occupation probabilities is shown for a GaAs QW in Figure 1 where $n_{imp} = 4 \times 10^{10}$ cm$^{-2}$.
At the lowest temperatures, the donors are fully occupied. As the temperature increases, $n_l$ decreases and $n_c$ increases until, at around $50$ K, the two occupation probabilities
are equal. From Eqs. (\ref{bloch}, \ref{bloch3}), it is evident that these occupational statistics have ramifications in the measured spin relaxation times.
The results here are also applied to the excitons in quasi-equilibrium.
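For concreteness, Eq. (\ref{oc2}) is easily evaluated numerically. The minimal sketch below (Python with numpy assumed; the temperature grid is an illustrative choice) reproduces the behavior of Figure 1 for the GaAs parameters quoted there, including the crossing of the two fractions near $50$ K.
\begin{verbatim}
# Sketch: occupation fractions from the two-dimensional occupation
# statistics, for the GaAs parameters of Figure 1.
import numpy as np

hbar, kB, m_e = 1.0546e-34, 1.3807e-23, 9.1094e-31   # SI units
m_star = 0.067 * m_e
a_B = 10.4e-9                                        # m
n_imp = 4e10 * 1e4                                   # m^-2
E_b = hbar**2 / (2 * m_star * a_B**2)                # donor binding energy

T = np.linspace(2.0, 300.0, 600)
N_c = m_star * kB * T / (np.pi * hbar**2)            # 2D effective DOS
Q = (8 * n_imp / N_c) * np.exp(E_b / (kB * T))
n_l = (np.sqrt(1 + Q) - 1) / (np.sqrt(1 + Q) + 1)    # localized fraction

# donors fully occupied at low T; fractions cross (n_l = 1/2) near 50 K
print(T[np.argmin(np.abs(n_l - 0.5))])
\end{verbatim}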
\section{Spin Relaxation}\label{relaxationSection}
We now discuss the relevant spin relaxation mechanisms for both localized and conduction electrons.
The electron-in-exciton spin relaxation, $\tau_x$, is a combination of electron-hole recombination and electron-hole exchange relaxation.
Due to its complicated nature we defer the calculation of $\tau_x$ to future work. Here we treat it as a phenomenological parameter.
\subsection{Localized Spin Relaxation}
First we discuss spin relaxation via the anisotropic spin exchange for donor bound electrons. This has been treated extensively elsewhere.
\cite{dzyaloshinskii, moriya, gorkov2, kavokin1} Most recently it has been examined by Kavokin in Ref. (\onlinecite{kavokin2}). It is his treatment that we
detail below for semiconducting QWs.
Kavokin argues\cite{kavokin2} that some portion of localized relaxation results from spin diffusion due to the exchange interaction between donors. Anisotropic
corrections to the isotropic exchange Hamiltonian cause a spin to rotate through an angle $\gamma_{i,j}$ when it is transferred between two donor centers located at
positions $r_i$ and $r_j$.
The angle-averaged rotation angle is $\langle \gamma_{i,j}^2 \rangle ^{1/2} = \langle r_{i,j}^2 \rangle ^{1/2}/L_{s.o.}$ where $L_{s.o.}$ is the spin orbit length.\cite{kavokin2}
The spin is relaxed when the accumulated rotation angle $\Gamma$ becomes on the order of unity such that $\Gamma^2 =\sum \langle \gamma_{i,j} ^2
\rangle = \sum \langle r_{i,j}^2 \rangle/L_{s.o.}^2 = 2 D_{ex} \tau_{ex}/L_{s.o.}^2 = 1$ where $D_{ex}$ is the diffusion coefficient and
the relaxation time is
\begin{equation}
\tau_{ex} =\frac{L^2_{so}}{2 D_{ex}}.
\end{equation}
In quasi-2D (100) QWs where Dresselhaus bulk inversion asymmetry (BIA) terms dominate,\cite{kavokin1}
\begin{equation}
L_{s.o.}= \Big(\frac{2 \alpha \hbar}{\sqrt{2 m^* E_g}} \langle k_z^2 \rangle\Big)^{-1},
\end{equation}
where $\alpha$ is a dimensionless measure of the spin orbit strength and $\langle k_z^2 \rangle$ is due to the quasi-2D confinement and is of the form $\beta^2/L^2$. For infinite well confinement $\beta = \pi$.
The diffusion coefficient is approximately\cite{kavokin2}
\begin{equation}
D_{ex}= \frac{1}{2}\langle r_{i,j}^2 \rangle \langle J \rangle/\hbar.
\end{equation}
with exchange constant\cite{ponomarev} in 2D
\begin{equation}
J_{2D}= 15.21 E_b \Big(\frac{r_{i,j}}{a_B}\Big)^{7/4} e^{-4 r_{i,j}/a_B}
\end{equation}
where $E_b$ is the binding energy: $E_b = \hbar^2/(2 m^* a_B^2)$.
How $r_{i,j}$ is to be determined will be discussed in Section \ref{resultsGaAs}.
These results can be combined to obtain the relaxation rate in terms of a dimensionless impurity separation scale, $x$:
\begin{equation}\label{dm2}
\frac{1}{\tau_{ex}} =15.21 \frac{\alpha^2 \hbar^3 \langle k_z^2 \rangle^2}{E_g {m^{\ast}}^2}\langle x^2 \rangle \langle x^{7/4} e^{-4 x} \rangle
\end{equation}
where $x= r_{i,j}/a_B$.
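To get a feel for the magnitudes in Eq. (\ref{dm2}), the sketch below (Python with numpy assumed) evaluates the rate with GaAs-like numbers. The separation distribution used for the averages, nearest-neighbor statistics of a 2D Poisson process, and the value $\alpha = 0.07$ are our assumptions for illustration; the prescription actually used for $r_{i,j}$ is discussed in Section \ref{resultsGaAs}.
\begin{verbatim}
# Sketch: anisotropic-exchange relaxation rate 1/tau_ex.
# Assumptions: 2D-Poisson nearest-neighbor distribution for r_ij and
# alpha = 0.07; both are illustrative choices, not the paper's fit.
import numpy as np

hbar, m_e, eV = 1.0546e-34, 9.1094e-31, 1.602e-19
m_star, a_B = 0.067 * m_e, 10.4e-9
E_g = 1.52 * eV                       # GaAs gap
alpha = 0.07                          # spin-orbit strength (assumed)
L = 7.5e-9
kz2 = (np.pi / L)**2                  # infinite-well <k_z^2>
n_imp = 4e14                          # m^-2

r = np.linspace(1e-10, 3e-7, 40000)
P = 2 * np.pi * n_imp * r * np.exp(-np.pi * n_imp * r**2)
x = r / a_B
avg = lambda f: np.trapz(f * P, r)

prefac = 15.21 * alpha**2 * hbar**3 * kz2**2 / (E_g * m_star**2)
rate = prefac * avg(x**2) * avg(x**1.75 * np.exp(-4 * x))
print(1.0 / rate, 's')                # exchange-driven spin relaxation time
\end{verbatim}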
Localized electron spins may also relax due to nuclear fields. A localized electron is coupled to many nuclear spins by the hyperfine interaction.
To the electrons, these nuclear spins appear as a randomly fluctuating field, but the nuclear fields can be assumed quasi-stationary since the nuclear evolution
time is much longer than the electron evolution time due to the contrast in magnetic moments.\cite{kavokin2}
What governs the electron spin evolution is the electron correlation time, $\tau_{corr}$.
If $\tau_{corr}$ is long such that $\mu_B g^{\ast}/\hbar \langle B_N^2 \rangle ^{1/2} \tau_{corr}> 1$ (where $\langle B_N^2 \rangle ^{1/2} =
B_N^{max}/\sqrt{N_{L}}$ is the root-mean-square field, $B_N^{max}$ is the maximum nuclear field, and $N_{L}$ is the number of nuclei in the electron's localization
volume),
then the electron polarization decays due to ensemble
dephasing; there will be random electron precession frequencies due to a random distribution of frozen nuclear fields.\cite{merkulov}
If the mechanism contributing to the electron correlation time is exchange induced spin diffusion,
$\tau_{corr}$ is estimated to be $(n_{imp} D_{ex})^{-1}$ in quasi-two dimensions.\cite{kavokin2}
Merkulov et al.\cite{merkulov} find a dephasing rate for quantum dots to be
\begin{equation}\label{hyperfine}
\frac{1}{\tau_{nuc}} = \sqrt{\frac{16 \sum_j I_j (I_j+1) A_j^2}{3 \hbar^2 N_L}}
\end{equation}
where the sum over $j$ is a sum over all nuclei in the unit cell, $I_j$ is the nuclear spin, $A_j$ is the hyperfine constant, and $N_L$ is the number
of nuclei in the electron's localized volume.
It is important to state that this spin dephasing does not decay exponentially but decreases to $10\%$ of the original spin polarization in $\tau_{nuc}$
and then recovers to $33\%$ of the original spin polarization by $2 \tau_{nuc}$,
after which it decays at a much slower rate.\cite{merkulov, braun}
If $\tau_{corr}$ is short such that $\mu_B g^{\ast}/\hbar \langle B_N^2 \rangle ^{1/2} \tau_{corr} \ll 1$, then the relaxation will be of the motional narrowing type.\cite{kavokin2}
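A quick order-of-magnitude estimate of Eq. (\ref{hyperfine}) for GaAs-like parameters is given below (Python with numpy assumed). The hyperfine couplings are representative literature values that we assume here, and $N_L$ is illustrative, so the output should be read as an order of magnitude only.
\begin{verbatim}
# Sketch: frozen-nuclear-field dephasing time 1/tau_nuc.
# Hyperfine couplings are representative values (assumed); N_L illustrative.
import numpy as np

hbar, eV = 1.0546e-34, 1.602e-19
I = 1.5                                         # 69Ga, 71Ga, 75As: I = 3/2
A = np.array([38.2e-6, 48.5e-6, 46.0e-6]) * eV  # hyperfine constants (assumed)
w = np.array([0.6, 0.4, 1.0])                   # site abundances (approximate)
N_L = 1e5                                       # nuclei in localization volume

rate = np.sqrt(16 * np.sum(w * I * (I + 1) * A**2) / (3 * hbar**2 * N_L))
print(1e9 / rate, 'ns')                         # of order 1 ns for these numbers
\end{verbatim}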
\subsection{Conduction Spin Relaxation}
Conduction band states undergo ordinary impurity and phonon scattering. Each scattering event gives a change in the wave vector $\mathbf{k}$, which in turn
changes the effective magnetic field on the spin that comes from spin-orbit coupling. This fluctuating field relaxes the spin. This is known as the D'yakonov-Perel' (DP) spin relaxation mechanism.\cite{dyakonov1, dyakonov2} The effective field strength
is proportional to the conduction band splitting.
In this article, we are interested in conduction spin relaxation in (001) and (110) oriented QWs.
For (001) QWs the spin relaxation rate results from a spin-orbit term in the Hamiltonian,
$H_{s.o} =\frac{\hbar}{2} \Omega(\mathbf{k_{||}})\cdot \mathbf{\sigma}$ where\cite{kainz}
\begin{displaymath}
\Omega(\mathbf{k_{||}})= \frac{2 \gamma}{\hbar}
\left(\begin{array}{c}
k_x (k_y^2 - \langle k_z^2 \rangle)\\
k_y (\langle k_z^2 \rangle - k_x^2)\\
0
\end{array}\right).
\end{displaymath}
The angular brackets denote spatial averaging across the well width. $\gamma$ is a band parameter that governs the magnitude of the spin-orbit splitting. For GaAs, $\gamma \sim 17$ meV nm$^3$.\cite{fu} We assume the QWs have been grown symmetrically and therefore ignore any
Rashba contribution.\cite{rashba}
The resulting spin relaxation has been worked out in detail by Kainz et al. in Ref. (\onlinecite{kainz}). For the experiment\cite{ohno} we compare to, we find the non-degenerate limit to be
applicable and hence use the relaxation rate for spin oriented in the z-direction,
\begin{eqnarray}\label{DP}
\frac{1}{\tau_z} &=&
\frac{4}{\hbar^2} \tau_p(T) \Bigg[\gamma^2 \langle k_z^2 \rangle^2 \frac{2 m^{\ast}k_B T}{\hbar^2} -
\frac{\gamma^2 \langle k_z^2 \rangle}{2} \Big(\frac{2 m^{\ast}k_B T}{\hbar^2 }\Big)^2 j_2 +{}\nonumber\\
& & {}\gamma^2 \frac{1+\tau_3/\tau_1}{16} \Big(\frac{2 m^{\ast}k_B T}{\hbar^2 }\Big)^3 j_3\Bigg]
\end{eqnarray}
where $j_2 \approx 2$ and $j_3 \approx 6$ depend on the type of scattering mechanism.
We assume Type I scattering as defined in Ref. (\onlinecite{kainz}).
The ratio $\tau_3/\tau_1$ is unity for Type I scattering. $\tau_p(T)$ is the momentum relaxation time which can be extracted
from mobility measurements.
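As an illustration of the size of Eq. (\ref{DP}), the sketch below (Python with numpy assumed) evaluates the thermally averaged rate for the (001) well. The momentum relaxation time used is an assumed stand-in; in practice $\tau_p(T)$ is taken from the measured mobility.
\begin{verbatim}
# Sketch: non-degenerate DP rate for the (001) GaAs QW.
# tau_p here is an assumed placeholder; extract it from mobility in practice.
import numpy as np

hbar, kB, m_e, eV = 1.0546e-34, 1.3807e-23, 9.1094e-31, 1.602e-19
m_star = 0.067 * m_e
gamma = 17e-3 * eV * (1e-9)**3       # 17 meV nm^3 in J m^3
L = 7.5e-9
kz2 = (np.pi / L)**2
j2, j3, r31 = 2.0, 6.0, 1.0          # Type I scattering

def dp_rate(T, tau_p):
    a = 2 * m_star * kB * T / hbar**2        # thermal <k^2>
    bracket = (gamma**2 * kz2**2 * a
               - 0.5 * gamma**2 * kz2 * a**2 * j2
               + gamma**2 * (1 + r31) / 16 * a**3 * j3)
    return 4 * tau_p / hbar**2 * bracket

print(1e12 / dp_rate(100.0, 1e-13), 'ps')    # tau_z at 100 K, tau_p = 0.1 ps
\end{verbatim}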
A more interesting case is that of (110) QWs where the spin-orbit Hamiltonian is\cite{hassenkam}
\begin{equation}
H_{s.o.} = -\gamma \mathbf{\sigma}_z k_x \big(\frac{1}{2}\langle k_z^2 \rangle - \frac{1}{2} (k_x^2 - 2 k_y^2) \big)
\end{equation}
which is obtained from the (001) Hamiltonian by transforming the coordinate system such that $x||[\overline{1}10]$, $y||[001]$, and $z||[110]$. As can be seen from
the form of this Hamiltonian, the effective magnetic field points along the growth direction. Hence, spins oriented along the effective field will experience
no spin relaxation.
Conduction spins also relax due to the Elliott-Yafet (EY) mechanism\cite{elliott, yafet} which arises from spin mixing in the wavefunctions. Due to spin-orbit
interaction, when a conduction electron is scattered by a spin-independent potential from state $\mathbf{k}$ to $\mathbf{k}'$, the initial and final
states are not eigenstates of the spin projection operator $S_z$ so the process relaxes the spin. In bulk, the relaxation rate is known to be of the form
$1/\tau_{EY} = \alpha_{EY} T^2/\tau_p(T)$ where $\alpha_{EY}$ is a material-dependent parameter and $\tau_p$ is the momentum relaxation time.\cite{chazalviel}
However the EY mechanism in quasi-two dimensions will not take the same form since $\mathbf{k}$ will be quantized in one direction (the direction of confinement).
The treatment in bulk\cite{ridley} has been extended to QWs to obtain\cite{tackeuchi}
\begin{equation}
\frac{1}{\tau_{EY}} \approx \Big(\frac{\Delta_{s.o.}}{\Delta_{s.o.}+E_g}\Big)^2 \Big(1-\frac{m^{\ast}}{m}\Big)^2 \frac{E_c k_B T}{E_g^2} \frac{1}{\tau_p(T)},
\end{equation}
where $\Delta_{s.o.}$ is the spin-orbit splitting energy and $E_c$ is the QW confinement energy.
Spins may also relax due to the Bir-Aronov-Pikus (BAP) mechanism\cite{bir} which arises from the scattering of electrons and holes.
This relaxation mechanism
is commonly considered efficient only in $p$-type materials, where the number of holes is large.\cite{song}
We fit the experimental data in Section \ref{resultsGaAs} without consideration of this mechanism.
We will now examine how these relaxation mechanisms are manifest in two different QWs.
\section{Results for G\MakeLowercase{a}A\MakeLowercase{s}/A\MakeLowercase{l}G\MakeLowercase{a}A\MakeLowercase{s} Quantum Well}\label{resultsGaAs}
We apply our method to the spin relaxation times measured by Ohno et al.\cite{ohno, ohno2} in two GaAs/AlGaAs QWs:
an n-doped (100) QW with doping $n_{imp} = 4 \times 10^{10}$ cm$^{-2}$ and well width $L = 7.5$ nm,
and an undoped (110) QW with well width $L = 7.5$ nm.
In both (pump-probe) experiments, the pump or photoexcitation energy was tuned to the heavy hole exciton resonance and normally incident on the sample.
As mentioned in Section \ref{section2}, the exciton spin becomes important at low temperatures for such excitation energies.
The experimental spin relaxation times as a function of temperature are displayed (solid circles) in Figures 2 and 3.\cite{footnote}
\begin{figure}[hbtp]\label{110}
\begin{centering}
\includegraphics[scale = 0.32,trim = 0 00 00 00, angle = 0,clip]{qwFig2}
\caption[]{Spin relaxation versus temperature in the undoped (110) GaAs QW. Points are the experiment of Ref. (\onlinecite{ohno2}).
Dash-dotted line: using only the conduction portion of
Eq. (19) with $1/\tau_c = 1/\tau_{EY} + 1/\tau_r$. Dashed line: using only the excitonic portion of
Eq. (19).
Solid line: Eq. (19). The exciton spin relaxation rate decreases with increasing temperature due to thermal ionization. The conduction spin relaxation time is longer in the (110) QW than in QWs of other orientations due to the vanishing DP mechanism.}
\end{centering}
\end{figure}
For the undoped (110) QW, Eq. (4) is modified to become
\begin{equation}\label{bloch4}
\frac{1}{\tau_s} = \frac{N_x-n_x}{N_x} \frac{1}{\tau_c} + \frac{n_x}{N_x} \frac{1}{\tau_x}.
\end{equation}
For this sample, at low temperatures, $n_x = N_x$, so $\tau_s = \tau_x \approx 0.15$ ns.
At higher temperatures, recombination (in time $\tau_r$) and EY act to relax conduction spins since DP relaxation is significantly reduced for the (110) QW orientation.
To account for the quasi-two dimensional nature of the QW, we use an intermediate value (between 2D and 3D values) for the exciton's binding energy.\cite{harrison}
Eq. (\ref{bloch4}) (solid line) fits the data (points) with excellent agreement in Figure 2 when
$N_x = 1.5 \times 10^{10}$ cm$^{-2}$ and $\tau_r = 2$ ns which are near the experimentally reported values\cite{adachi} ($N_x \approx 10^{10}$ cm$^{-2}$ and $\tau_r \approx 1.6$ ns).
The contributions from the excitons and conduction electrons are also shown (dashed and dash-dotted lines respectively).
The trend in the data is well described by our theory: at low temperatures excitons predominate and the spin relaxation time is $\tau_x$.
As the temperature increases, the excitons thermally ionize, leading to a net moment in the conduction band.
Since the conduction band spin relaxation time is longer than the exciton spin relaxation time, the measured relaxation time increases with temperature as
described in Eq. (\ref{bloch4}). We expect the relaxation times to eventually level out as the excitons disappear.
Eventually, the relaxation time will decrease as the temperature dependence of EY takes effect.
For the doped (100) QW, Eq. (4) should be used to describe the temperature dependence of the relaxation rate.
Using the values from above, $n_{imp} = 4 \times 10^{10}$ cm$^{-2}$, and $\tau_s = 0.35$ ns, we can extract the approximate value of $\tau_l$.
In doing so we obtain $\tau_l \approx 0.5$ ns. We stress that this value has considerable uncertainty due to the uncertainty in the parameters (namely $N_{x}$) that determine $\tau_l$.
The presence of impurities has lengthened the observed low temperature spin relaxation time by more than a factor of two. The relaxation
time in the doped sample can be further increased by reducing the excitation density.
As the temperature is increased, donors become unoccupied and conduction electrons will play a larger role in relaxation as expressed in Eq. (4).
We can determine the main conduction spin relaxation mechanism by investigating its temperature dependence.
\begin{figure}[hbtp]\label{gaasQW}
\begin{centering}
\includegraphics[scale = 0.32,trim = 0 00 00 00, angle = 0,clip]{qwFig3}
\caption[]{Spin relaxation versus temperature in n-doped (100) GaAs QW.
Points are from Ref. (\onlinecite{ohno}). Dashed line: excitonic contribution in Eq. (4). Dotted line: localized contribution in Eq. (4).
Dash-dotted line: conduction contribution in Eq. (4). Solid black line: Eq. (4). Both exciton and localized spin relaxation contribute to the observed low temperature spin relaxation. Conduction spin relaxation is the strongest contributor to the observed relaxation at higher temperatures. }
\end{centering}
\end{figure}
We are now left with the task of determining what the localized and conduction spin relaxation mechanisms are.
We plot the relaxation rate for the n-doped GaAs QW as a function of temperature in Figure 3.
The dashed, dotted, and dash-dotted lines refer to the three terms of Eq. (4), i.e., the density-weighted averages of the respective relaxation rates.
The solid line is the sum of all three terms.
We begin by calculating spin relaxation due to spin exchange diffusion in Eq. (\ref{dm2}).
This is difficult due to the exponential dependence on $r_{i,j}$.
For GaAs, $\alpha = 0.06$, $E_g = 1.52$ eV, $m^{\ast} = 0.067 m$, and $a_B = 10.4$ nm.
To calculate $ \langle k_z^2 \rangle = \beta^2/L^2$, we need to know the band offsets and assume a finite square well.
The potential depth for an AlGaAs QW is about $V_0 = 0.23$
eV. This comes from $\frac{\Delta E_c}{\Delta E_g} = 0.62$ and $\Delta E_g = 0.37$ eV in GaAs.\cite{davies}
From this we can determine $\beta$ which will also depend on the well width $L$.
For $L = 7.5$ nm, $\beta = 2.19$. Of course in the limit of $V_0 \rightarrow \infty$, $\beta \rightarrow \pi$.
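As a check, a minimal sketch of this determination, assuming the textbook even-parity ground-state condition for a symmetric finite square well (an assumption consistent with, though not spelled out in, the discussion above):
\begin{displaymath}
z \tan z = \sqrt{z_0^2 - z^2}, \qquad z_0 = \frac{L}{2\hbar}\sqrt{2 m^{\ast} V_0}, \qquad \beta = 2z.
\end{displaymath}
With $V_0 = 0.23$ eV, $m^{\ast} = 0.067m$, and $L = 7.5$ nm, one finds $z_0 \approx 2.38$; solving numerically gives $z \approx 1.10$ and hence $\beta = 2z \approx 2.19$, as quoted above.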
What remains to be determined is the inter-donor separation $r_{i,j}$, which scales as $r_{i,j} = \gamma\, n_{imp}^{-1/2}$.
For average inter-donor spacing in two dimensions, $\gamma_{av} = 0.564$.
When we allow $\gamma$ to be a fitting parameter, we obtain $r_{i,j} = 19.5$ nm, which corresponds to $\gamma = 0.4$.
We now determine the relaxation rate due to the hyperfine interaction.
Since $\mu_B g^{\ast}/\hbar \langle B_N^2 \rangle ^{1/2} \tau_{corr} \gg 1$ when $n_{imp} = 4 \times 10^{10}$ cm$^{-2}$, the hyperfine relaxation is described by Eq. (\ref{hyperfine}).
Since nearly all nuclei have the same spin\cite{schliemann} ($I = 3/2$), we can express Eq. (\ref{hyperfine}) as
\begin{equation}
\frac{1}{\tau_{nuc}} = 2 \sqrt{\frac{5\sum_j A_j^2}{\hbar^2 N_L}},
\end{equation}
with
$\sum_j A_j^2 = 1.2 \times 10^{-3}$ meV$^2$ and $N_L \sim 2.1 \times 10^5$.\cite{merkulov} This yields $\tau_{nuc} = 3.9$ ns.
Due to the donor's confinement in the QW, its wavefunction may shrink, thereby reducing the localization volume and therefore also reducing $N_L$ and $\tau_{nuc}$.\cite{harrison}
In Figure 3, we find excellent agreement with experiment over a large temperature range when $\tau_p(T)$ in Eq. (\ref{DP}) is made a factor of three smaller than what is reported in Ref. (\onlinecite{kainz}).
We attain approximately the same quantitative accuracy as in Ref. (\onlinecite{kainz}) but since we also take into account the localized spins, we find excellent qualitative agreement
as well.
It should be emphasized that the quadratic and cubic terms of Eq. (\ref{DP}) are important in the high temperature regime.
The EY rate is qualitatively and quantitatively different from the data. For instance, $1/\tau_{EY} \approx 0.1$ ns$^{-1}$ at 300 K so we rule it out of contention.
We also now ignore recombination of carriers since an appreciable number of
equilibrium carriers exists (n-doped system), leading to recombination of primarily unpolarized spins.
One would not expect these results to agree with spin relaxation measurements in modulation doped QWs.
In modulation doped systems, the occupation densities $n_l$ and $n_c$ cannot be calculated as we have done here.
In such systems different spin relaxation dependencies are seen.\cite{adachi, dohrmann}
\section{Results for C\MakeLowercase{d}T\MakeLowercase{e}/C\MakeLowercase{d}M\MakeLowercase{g}T\MakeLowercase{e} Quantum Well}\label{resultsCdTe}
The experiment by Tribollet et al. on an n-CdTe QW offers an instructive complement to the previous experiments on GaAs.
In their experiment, Tribollet et al. measure spin relaxation times $\tau_s \approx 20$ ns for CdTe/CdMgTe QWs with $n_{imp} = 1 \times 10^{11}$ cm$^{-2}$.
Importantly, they excited with laser energies at the donor bound exciton frequency instead of the heavy hole exciton frequency.
For CdTe, $E_g = 1.61$ eV, $m^{\ast} = 0.11 m$, and $a_B = 5.3$ nm. The spin-orbit parameter $\alpha$ is not known, but we approximate it by noting that
the spin-orbit splitting energy in CdTe is $\Delta_{s.o} = 0.927$ eV, whereas in GaAs it is $\Delta_{s.o} = 0.34$ eV.
Since $\alpha$ is approximately proportional to $\Delta_{s.o}$, we obtain $\alpha = 0.164$ for CdTe.
To obtain the potential well depth for the CdTe QW, we use $E_g(x_{Mg}) = 1.61+1.76 x_{Mg}$, where $x_{Mg}$ is the fraction of Mg in Cd$_{1-x}$Mg$_{x}$Te.\cite{zaitsev}
If we use $x_{Mg}=0.1$, we get $V_0 = 0.12$ eV, which leads to $\beta = 2.18$.
We now determine the relaxation rate due to the hyperfine interaction.
Since all nuclei with non-zero spin will have the same spin\cite{schliemann} ($I = 1/2$), we can express Eq. (\ref{hyperfine}) as
\begin{equation}
\frac{1}{\tau_{nuc}} = 2 \sqrt{\frac{\sum_j A_j^2 P_j}{\hbar^2 N_L}},
\end{equation}
where $P_j$ has been appended to account for isotopic abundances.\cite{tribollet2} The natural abundances of spin-1/2 Cd and Te nuclei dictate that $P_{Cd} = 0.25$ and $P_{Te} =0.08$.
The remaining isotopes are spin-0. $N_L = 1.8 \times 10^{4}$,
$A_{\textrm{Cd}} = 31$ $\mu$eV, and $A_{\textrm{Te}} = 45$ $\mu$eV which yields $\tau_{nuc} = 4.4$ ns.\cite{tribollet2}
The confined donor wavefunction in CdTe should shrink less than in GaAs since the effective Bohr radius is half as large.
The experimental time is within an order of magnitude of what we have calculated for relaxation due to the hyperfine interaction.
We can also compare the experimental time to what we obtain for spin exchange diffusion.
When we allow $\gamma$ to be a fitting parameter, we obtain $r_{i,j} = 19.3$ nm which corresponds to $\gamma = 0.61$. This is in reasonable agreement with $\gamma_{av}$.
Unfortunately no relaxation measurements have been performed at higher temperatures in n-doped CdTe QWs that we are aware of.
We are also not aware of mobility measurements in n-doped CdTe QWs.
The prevalent mechanism (DP or EY) will depend on the mobility, so we forgo determining the more efficient rate.
However, in analogy to bulk systems, we expect the CdTe QW mobilities to be less than
the GaAs QW mobilities.\cite{putikka,segall}
In the next section we analyze CdTe's spin relaxation rate for a (110)-grown crystal, so that DP can be ignored.
\section{Comparison of G\MakeLowercase{a}A\MakeLowercase{s} and C\MakeLowercase{d}T\MakeLowercase{e} Quantum Wells}\label{discussion}
First we discuss the low temperature spin relaxation.
Interestingly, the localized relaxation time in CdTe is about 20 times longer than in GaAs.
This can be explained by spin exchange relaxation: although the spin-orbit parameter is larger in CdTe,
this is more than offset by its smaller effective Bohr radius ($5.3$ nm vs. $10.4$ nm) and the exponential behavior of the anisotropic exchange relaxation.
However, due to the exponential factor, any discrepancy between the two QWs can be explained by adjusting their respective $\gamma$s appropriately, though the fitted $\gamma$s do fall near $\gamma_{av}$.
The discrepancy in times is difficult to explain by the hyperfine interaction since the two calculated relaxation times are very near each other.
Additionally, no plateau effect is seen that is indicative of hyperfine dephasing.\cite{tribollet2, braun}
Another possibility is that one QW is governed by relaxation from spin exchange and the other from hyperfine interactions.
Without experimental data, answering these questions is difficult.
It is our hope that further experiments will be done to sort out these questions.
However, we can propose ways in which these answers can be discovered.
Relaxation by anisotropic spin exchange is strongly dependent on the impurity density.
By altering the impurity doping within the well, one should see large changes in the spin relaxation time if this mechanism is dominant.
From Eq. (\ref{dm2}) we see that this mechanism will also depend on the confinement energy.
Hence this mechanism should also be affected by changing the well width.
The hyperfine dephasing mechanism should be largely unaffected by impurity concentration differences as long as they are not so extensive as to
cause the correlation time to become very short and enter a motional narrowing regime.
Varying the well width will have an effect on the donor wavefunctions, but as long as they are not squeezed too thin the effect should
not be dramatic.
\begin{figure}[hbtp]\label{temp2}
\begin{centering}
\includegraphics[scale = 0.32,trim = 0 00 00 00, angle = 0,clip]{qwFig4}
\caption[]{Spin relaxation in GaAs (100) QWs with different well widths (all other parameters, including $\tau_l$ and $\tau_x$, do not change).
Points are from Ref. (\onlinecite{ohno}) where $L_0 = 7.5$ nm.
Dotted: $2 L_0$; dash-dotted: $3 L_0/2$; solid: $L_0$;
dashed: $L_0/2$.}
\end{centering}
\end{figure}
For spin relaxation at higher temperatures, DP prevails in (100) GaAs QWs as mentioned earlier.
Whether DP or EY is more efficient in CdTe depends on the momentum relaxation time.
By changing the momentum relaxation time (through the well width or the impurity concentration),
we predict the possibility of inducing a clear `dip' in the temperature dependence, which we show in
Figure 4. The same non-monotonicity has been observed in bulk GaAs and ZnO.\cite{kikkawa, ghosh, putikka, harmon}
Using our results, we propose that n-doped (110) QWs should optimize spin lifetimes (when excited at the exciton-bound-donor frequency) since DP is suppressed.
Figure 5 displays our results for GaAs and CdTe (110) QWs at impurity densities $n_{imp} = 4 \times 10^{10}$ cm$^{-2}$ and $n_{imp} = 1 \times 10^{11}$ cm$^{-2}$
respectively.
The decrease seen in GaAs is now due to depopulation of donor states instead of exciton thermalization.
The depopulation is much slower in CdTe since the doping is higher.
The up-turn in the CdTe curve as room temperature is reached is due to EY which is too weak to be seen in GaAs.
We plot the data points from the undoped (110) GaAs
QW for comparison. By avoiding the creation of excitons and their short lifetimes, long spin relaxation times can be achieved.
\begin{figure}[hbtp]\label{gaasCDTE}
\begin{centering}
\includegraphics[scale = 0.32,trim = 0 00 00 00, angle = 0,clip]{qwFig5}
\caption[]{Spin relaxation in (110) GaAs ($n_{imp} = 4 \times 10^{10}$ cm$^{-2}$): dashed-dotted line.
Spin relaxation in (110) CdTe ($n_{imp} = 1 \times 10^{11}$ cm$^{-2}$): solid line. Points from the undoped (110) GaAs QW experiment\cite{ohno2} are included for comparison. For both systems, $\tau_p(T)$ from Ref. (\onlinecite{kainz}) was used. EY is too weak over the temperature range depicted to be seen in the GaAs system.
However EY is the cause of the increase in spin relaxation rate for the CdTe system.}
\end{centering}
\end{figure}
\section{Conclusions}\label{conclusion}
We find that the spin relaxation times in n-doped QWs can be well described by a theory invoking spin exchange between spin species. In undoped (110) QWs, where DP is absent, we find that exciton spin relaxation is important and leads to the observed surprising temperature dependence. We predict that a similar temperature dependence (though with longer relaxation times) should be observed in n-doped (110) QWs when excited at the exciton-bound-donor frequency.
We have suggested future experimental work to resolve what mechanisms relax spin localized on donors in n-doped GaAs and CdTe QWs. The theory allows us to predict experimental conditions that should optimize the measured spin relaxation times in GaAs and CdTe QWs.
\section{Acknowledgements}
Financial support was provided by the National Science Foundation, Grant Nos.
NSF-ECS-0523918 (NH and WP) and NSF-ECS-0524253 (RJ). NH also acknowledges the Center for Emergent Materials at the Ohio State University, an NSF MRSEC (Award Number DMR-0820414), for providing partial funding for this research.
\section{Introduction}
Linear algebra is a fundamental building block of many of today's critical applications; from weather modeling~\cite{Coiffier2011} to ubiquitous DNN~\cite{Deisenroth2020} workloads.
Its importance is reflected in the large number of accelerator libraries
and hardware devices devoted to fast linear algebra.
These range from specialized devices such as Google's TPU~\cite{tpu} to the tensor cores on NVIDIA GPUs~\cite{voltauarch}
among many others~\cite{IntelAIweb, Arm2020, Jouppi2021, Anderson2021, Fowers2019}.
While such devices promise significant performance for
an important class of
applications~\cite{Dally2020}, their uptake is limited by their
programmability~\cite{Domke2021}. Typically, these accelerators and libraries are accessed
via calls to specialized APIs, meaning existing code has to be rewritten. Given
the
volume~\cite{Kalliamvakou2014} and variety~\cite{Livshits2015} of existing legacy code, such
rewriting is a %
significant undertaking~\cite{Dally2020}.
The combined importance of linear algebra acceleration and the difficulty
of rewriting legacy code to accelerators has led to recent %
work which attempts to automate the process.
These techniques search user code for matrix multiplications using constraints~\cite{idl,kernelfarer} or polyhedral analyses~\cite{Bhaskaracharya2020}
and replace regions of code with appropriate API calls or instructions.
However, as we show in Section~\ref{sec:cprograms}, these approaches are fragile.
Constraints capture only a limited set of program patterns
and small variations in the user code defeat them. While they work
well on curated benchmarks, they perform poorly on real-world code~\cite{kernelfarer,facc},
defeated by function calls, optimized code and inline assembler.
Neural classification %
(e.g.~\cite{Cummins2021})
can effectively
detect code despite these challenges.
However, it does not provide a path to acceleration, but requires
further steps.
These include generating variable mappings and checking
for equivalence~\cite{facc}, which has shown promising results
for Fourier transforms.
However, one of the key challenges in matching code to APIs is the cost of searching for user program variables that map to API formal parameters.
As the width of the API and complexity of the user program increase, this becomes combinatorially expensive.
As we show in Section
\ref{sec:complexity}
existing approaches \cite{facc} fail to scale to the
challenges that %
linear algebra APIs present.
We present ATC, a compiler that applies program synthesis to compile
general user-code to
linear algebra
accelerators.
We identify and solve key %
challenges %
enabling the detect/synthesize paradigm
to scale to the more complex APIs of linear algebra acceleration.
In addition, ATC employs
a trained platform predictor
to determine whether acceleration is profitable or not.
We applied our approach to 50 GitHub GEMM and 15 convolution projects and discovered between 2.6 and 7x more linear operators
compared to KernelFaRer~\cite{kernelfarer}, IDL~\cite{idl}, Polly~\cite{llvmpolly} or FACC~\cite{facc}. This resulted in more than an order of magnitude performance improvement.
This paper makes the following contributions:
\begin{itemize}[nolistsep,noitemsep]
\item We present ATC, which maps matrix
multiplication and convolution programs to hardware accelerators,
up to 7x more frequently than existing techniques.
\item We introduce novel heuristics to reduce the mapping search space
by four orders of magnitude.
\item We develop novel dynamic analyses to determine higher-level
information about variables, enabling synthesis without
costly whole-program analyses.
\end{itemize}
\section{Motivation} \label{sec:motivation}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{figures/motivation}
\caption{Example application of API replacement. The program above, taken from the widely-used parboil benchmark suite~\cite{parboil}, is transformed into a call to an optimized matrix-multiplication accelerator API.} \label{fig:example}
\end{figure}
\subsection{Existing match and replace}
\paragraph{IDL and KernelFaRer} Both aim to detect linear algebra operations in user programs and replace them with an appropriate accelerator library call.
To illustrate this consider the code in Figure~\ref{fig:example}.
This shows a straightforward matrix multiplication program fragment from the parboil benchmark suite~\cite{parboil}. They aim to
detect this matrix-multiplication and replace it with a call to the library, shown at the bottom of the diagram.
To replace code with an API call they have to both detect the code performing a matrix multiplication and also determine which user program variables correspond to the arguments of the API call.
Both approaches are able to detect that this is a matrix multiplication, and
can determine the mapping between user variables and API parameters.
\subsection{Examples of complex GEMM programs}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/motivation2}
\caption{GEMM code optimized for AVX2 found on GitHub consisting of 120 lines of hand-optimized intrinsics and how ATC matches the code to the accelerator API} \label{fig:avx2}
\end{figure}
Unfortunately, in practice, user code can be complex such that code structure or pattern-based approaches
inevitably fail.
As an example, consider the code found on GitHub shown in Figure~\ref{fig:avx2}
which implements a matrix-multiplication algorithm (only a fragment of the 120 lines of user code are shown here).
The code structure is complex and difficult to understand as it makes extensive use
of hand-optimized vector intrinsics, which defeats the code-structure analysis approaches of IDL and KernelFaRer, preventing acceleration.
\begin{figure*}[ht]
\includegraphics[width=0.9\linewidth]{figures/figure2v2}
\caption{ATC compiler architecture} \label{fig:architecture}
\end{figure*}
\subsection{Our approach - ATC}
Rather than relying on code structure to guide detection, ATC uses behavioral equivalence
to determine if a section of code is a linear algebra operation.
Firstly, ATC uses neural program classification \cite{Cummins2021} to detect that the code in Figure \ref{fig:avx2} is probably a GEMM.
It then searches variable matches to determine the potential source and output arrays. As the search space is combinatorially large, we introduce scalable, algorithm-independent heuristics (which we discuss in Section~\ref{ReduingMatchingSpace}) that keep the number of mappings manageable.
Next, ATC generates different input values for the arrays and records the output.
After generating many randomized inputs, it observes that it has the equivalent behavior to the corresponding API and is able to replace the AVX2 code with the GEMM call at the bottom of Figure \ref{fig:avx2}.
\paragraph{Legality}
Now, IO behavioral equivalence is not proof
that a section of code is a particular linear algebra operation; similarly, IDL and KernelFaRer do not prove equivalence. For proof, bounded model checking based on Kleene~\cite{Collie2022} can be deployed.
In practice, as demonstrated in our experimental section, IO equivalence gives no false positives. For further guarantees, we can ask for programmer sign-off or employ model checking.
\paragraph{Profitable}
Once we have detected and can replace a section of code with an accelerator call, we need to determine if it is profitable to do so. %
Due to hardware evolution, we do not use a hard-wired heuristic to determine profitability.
Instead, we learn, off-line, a simple predictive model to determine if the target accelerator is faster than a CPU implementation.
The model is called at runtime, determining if offloading is worthwhile.
\paragraph{FACC}
Behavioral equivalence is also employed in FACC \cite{facc}.
Unfortunately, it is restricted to FFTs and one-dimensional arrays, and cannot detect the replacement in Figure~\ref{fig:example}.
Therefore, we extended FACC to FACC* to consider GEMMs and multi-dimensional arrays. This, however, exposes its weak variable binding model which is combinatorial in the number of user array variables and their dimensionality.
Furthermore, it relies on program synthesis to determine the length of arrays,
which scales poorly to problems with many potential length parameters
for arrays such as GEMM\@.
FACC also relies on brittle inter-procedural liveness analyses to determine
the liveness status of variables.
This restricts it to running only at link time, rendering it invalid
for use in shared libraries. %
We will see in
Section \ref{sec:evaluation} that the combination
of these issues results in excessively large search spaces.
\section{System overview} \label{sec:overview}
Figure~\ref{fig:architecture} gives a system flow overview of ATC.
We first detect regions of code that are likely to be linear algebraic operations using a neural program classifier. The classifier is trained ahead of time, based on programs that are equivalent to the accelerator
and prior examples of linear algebra code.
Once candidate code sections have been identified, we apply program analysis to match user program variables with the particular API formal parameters. Given the combinatorially large search space, we develop novel
techniques to make the problem tractable.
For each candidate matching, we generate multiple data inputs, execute the user code section and record the output values.
If the input/output pairs correspond to the input/output behavior of the accelerator API, we can say they are behaviorally equivalent and candidates for replacement.
While candidate user code may be replaceable with a call to an accelerator API, it may not be profitable.
Therefore, we employ a simple ML classifier, trained offline, and invoked at runtime to see if acceleration is appropriate for the user code for the runtime known array sizes.
\subsection{ Neural Program Classification}
To detect potentially acceleratable parts of a program, we use prior work in neural program classification~\cite{Cummins2021}.
A network is trained with multiple instances of different program classes.
We use the OJClone dataset~\cite{ojclone}, which includes 105 classes of different programs, and add examples of the programs that we want to detect {\em e.g.} GEMMs and convolutions, gathered from benchmark suite repositories other than GitHub.
At compile time a new candidate program is divided into functions, which are presented to the neural classifier.
The classifier assigns each function in the program a probability of belonging to a certain class.
We consider the most probable class, which in the case of a GEMM or convolution is then considered for variable matching and eventual code replacement
as described in the following sections.
Classification is fast ($\leq$ 1.5 sec) and has negligible impact on compilation time (see Section~\ref{sec:complexity}).
\section{Variable Matching} \label{sec:t1}
\label{VariableMatchingSection}
To check if a section of user code is behaviorally equivalent
to the API, we have to match up the user program variables with API
formal parameters.
We first detect what variables are livein/liveout (Section~\ref{sec:liveinliveout}) and then the dimensions of arrays (Section~\ref{sec:dimensions}).
\subsection{Detecting livein and liveout variables} \label{sec:liveinliveout}
Detecting livein and liveout variables via standard static analysis is
straightforward for well-structured programs
but fails for more diverse real-world codes,
which may use assembly code or intrinsic functions.
ATC uses dynamic analysis to determine which variables are livein and liveout inside a function.
In C, variables are passed by value, so non-pointer variables are always livein.
In the case of pointers (or arrays), we generate random inputs with arbitrary sizes.
If the values in memory change after executing the program, the array is considered liveout.
This allows us to detect which variables are livein or liveout, but not both livein and liveout at the same time.
We generate a new random input for liveout variables and re-execute the function.
If the output differs from the first execution, it is both livein and liveout.
We implement this algorithm as a just-in-time compiler pass in LLVM~\cite{llvm}.
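The essence of this analysis can be captured in a minimal sketch (Python pseudocode of the driver only; \texttt{run\_candidate} is a hypothetical harness that JIT-executes the candidate function, mutating its argument buffers in place):
\begin{verbatim}
import copy, random

def random_buffer(n):
    return [random.random() for _ in range(n)]

def classify_pointer_args(run_candidate, n_args, size=64):
    # First run on random inputs: an argument is liveout
    # if the callee wrote to its buffer.
    ins = [random_buffer(size) for _ in range(n_args)]
    outs = copy.deepcopy(ins)
    run_candidate(outs)
    liveout = [a != b for a, b in zip(ins, outs)]
    # Assumption of this sketch: a pointer that is never
    # written is read-only, hence livein.
    livein = [not lo for lo in liveout]
    # A liveout argument is also livein if re-running with
    # fresh random contents for it changes the final output.
    for i, lo in enumerate(liveout):
        if not lo:
            continue
        ins2 = copy.deepcopy(ins)
        ins2[i] = random_buffer(size)
        outs2 = copy.deepcopy(ins2)
        run_candidate(outs2)
        livein[i] = outs2[i] != outs[i]
    return livein, liveout
\end{verbatim}
For a function computing \texttt{C = A*B}, for example, this sketch reports \texttt{A} and \texttt{B} as livein only and \texttt{C} as liveout only.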
\subsection{Detecting the dimensions of arrays} \label{sec:dimensions}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{figures/figure1}
\caption{Dimension detection algorithm overview for a target example array called A.} \label{fig:iodetection}
\end{figure}
Detecting array lengths enables offloading of appropriately-sized
regions of codes, so it is a critical step in ATC.
For some programs, lengths can be found using static analysis (e.g.~\cite{Ravitch2009}), but this fails in more complex cases.
We use runtime analysis to determine which program variables define
array size using a modified form of runtime array bound checking.
For each set of variables that could define an array's size (typically, from the argument list), we set such variables to a fixed value.
We then execute the user code that is modified to check runtime array accesses.
\begin{algorithm}[t]
\caption{Dimensions detection algorithm} \label{alg:dims}
\begin{algorithmic}[1]
\For{arr in function}
\State $found \gets False$
\State $\Call{fakeLoadAndStoresExcept}{arr}$
\State $\Call{replaceLoadAndStores}{arr}$
\Repeat
\State $c = \Call{getNextCombination}{arr}$
\State $\Call{ffi\_call}{function, c}$
\If{not failed}
\State $found \gets True$
\EndIf
\Until{$found$}
\State Add $c$ to $C$
\EndFor
\State \Return $C$
\end{algorithmic}
\end{algorithm}
First, the compiler selects a target array to find its size.
Then, to generate the modified program, we tweak the load and store instructions in the user program, replacing them with custom function calls in the IR.
If a load or store does not access the array we are interested in, we modify it to load and store at a constant, safe location.
If it does, the instruction is replaced with a function call that will check at runtime if the access is out of bounds.
If so, the program exits with a custom error code.
If not, we have found a valid array size.
The basic idea is depicted in Figure~\ref{fig:iodetection}.
This is used by our JIT analysis as shown in Algorithm~\ref{alg:dims} and implemented in LLVM\@.
This way, the compiler can assign different input sizes to a given array and check the exit code.
Therefore, the compiler iterates over all the possible dimension combinations until one of the executions does not end with the custom error exit code.
That means the program completed without any illegal access to the target array, which indicates that the right dimensions of the array have been found.
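A minimal sketch of the driver loop (Python), assuming a hypothetical instrumented binary that reads the candidate size assignment from its environment and exits with a distinguished code on the first out-of-bounds access to the target array; the name \texttt{GUARD\_EXIT} and the environment protocol are illustrative, not ATC's actual interface:
\begin{verbatim}
import itertools, os, subprocess

GUARD_EXIT = 42  # hypothetical out-of-bounds exit code

def find_dims(binary, size_params, n_dims, probe=64):
    # Try each assignment of size-defining parameters to
    # the target array's dimensions until one execution
    # finishes without an illegal access.
    for combo in itertools.permutations(size_params, n_dims):
        env = dict(os.environ,
                   **{p: str(probe) for p in combo})
        result = subprocess.run([binary], env=env)
        if result.returncode != GUARD_EXIT:
            return combo  # dimensions found
    return None
\end{verbatim}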
\section{Automatic profitability detection} \label{sec:t3}
We assume %
that user code runs faster when replaced by a platform-specific library. The question is whether it is best to run on a CPU or accelerator version (XPU) of the library. This in turn depends on
the input size, which is only known at runtime.
We use a predictive model based on empirical data; as platforms and libraries evolve,
predictions remain accurate because the model can be retrained.
\paragraph{SVM} We use the well-known
support vector machine (SVM) classifier with a polynomial kernel of degree 3, $\gamma = 1$, and $C = 100$.
We sample the CPU and the accelerator with a common
dataset of input sizes, which produces a dataset that is small enough to be processed in less than five minutes, but large enough to be highly accurate.
Data is labeled with 0 or 1, meaning that the CPU or the XPU is faster, respectively. The model is then trained and deployed at runtime, when matrix sizes are known.
The training phase is done only once, at ``factory time'', and the resulting model when deployed has negligible ($\leq 0.3$ msec) runtime overhead (see Section~\ref{sec:performance}).
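As an illustration, a minimal sketch of the offline training step with scikit-learn, using the kernel parameters stated above; the profiling data here is synthetic stand-in data rather than our measured dataset, and the feature scaling is a detail of the sketch:
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

# Stand-in profiling samples: (m, n, k) GEMM sizes in
# thousands, and a label per sample:
# 0 = CPU faster, 1 = XPU faster.
rng = np.random.default_rng(0)
X = rng.integers(128, 10_000, size=(200, 3)) / 1000.0
y = (X.prod(axis=1) > 4.0 ** 3).astype(int)  # synthetic

model = SVC(kernel="poly", degree=3, gamma=1, C=100)
model.fit(X, y)

# At runtime, once the matrix sizes are known:
use_xpu = bool(model.predict([[4.096, 4.096, 4.096]])[0])
\end{verbatim}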
\section{Reducing the matchings search space} \label{sec:t2}
\label{ReduingMatchingSpace}
To match code to APIs, the compiler generates
different candidates for the variable to formal parameter mappings
and then tests them using IO equivalence.
For small APIs, all mappings can be explored,
but the combinatorial cost makes it prohibitive for real-world accelerator APIs.
We develop techniques that reduce the mapping space by exploiting array information and human coding styles.
\subsection{Exploiting array information}
Using array dimensions (Section~\ref{sec:dimensions}),
we can reduce the number of possible matches that must
be checked, as assigning one array to another means that the
dimensions of each array must line up.
\subsubsection{Automatic matching algorithm} \label{sub:arrdim1}
\begin{algorithm}[t]
\caption{Automatic matching algorithm} \label{alg:algo}
\begin{algorithmic}[1]
\Function{dimsMatch}{$f1a, f2a, p, n$}
\State $S = \emptyset$
\State $idx \gets 0$
\For{$args1$ in f1a}
\State $args2$ = f2a[p[idx]]
\State Add $\{args1, args2\}$ to $S$
\State $idx \gets idx+1$
\EndFor
\State \Return \Call{Size}{S} = $n$
\EndFunction
\\
\Function{outMatch}{$f1o, f2o, p$}
\State idx = \Call{IndexOf}{$f2o$, 1}
\State \Return \Call{IndexOf}{$p$, idx} = \Call{IndexOf}{$f1o$, 1}
\EndFunction
\\
\Function{findMatchings}{$f1a, f2a, f1o, f2o, n$}
\State $B = \emptyset$
\For{p in \Call{permutations}{$0$...$n$}}
\If{\Call{dimsMatch}{f1a, f2a, p, n} \textbf{and} \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Call{outMatch}{f1o, f2o, p}}
\State Add $p$ to $B$
\EndIf
\EndFor
\State \Return $B$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{figures/algo}
\caption{Example application of the matching algorithm. The right match is found by the algorithm automatically. Permutations in red are invalid; the permutation in green is valid.} \label{fig:examplealgo}
\end{figure}
We first generate all $n!$ permutations of the $n$ array variables to $n$ parameters mapping.
We discard all permutations where variable livenesses do not match. Then for each candidate user array and parameter array pair, we generate the constraints defining how their dimensions match. If we find contradictory constraints for any permutation, we discard it. The algorithm is shown in Algorithm~\ref{alg:algo}.
\subsubsection{Automatic Matching Algorithm: Example}
To illustrate this, Figure~\ref{fig:examplealgo} shows an example where we have two functions with three 2D arrays each.
First, the algorithm generates all the permutations of $0 \ldots n-1$ ($n=3$ in this example).
Then, for each permutation, it tries matching each variable in every array in the user code with the corresponding variable in the array of the API (here we show only three of the six possible permutations).
In the first case (with the permutation $[0, 1, 2]$), the algorithm tries matching the array
variables of the user program $X, Y, Z$ with the API parameters $A, B, C$. We then examine each of the variables defining each of the corresponding arrays. Comparing $X$ and $A$
gives the matches $x0 \rightarrow y0$ and $x1 \rightarrow y1$.
For the second array variable $Y$ and API parameter $B$, we have $x1 \rightarrow y1$ and $x2 \rightarrow y2$, and for the third variable pair $Z, C$ we have $x2 \rightarrow y2$ and $x0 \rightarrow y0$.
All of these are mutually consistent, yielding exactly $n=3$ distinct constraints, which
satisfies the condition (\texttt{dimsMatch} in Algorithm~\ref{alg:algo}).
Liveout information is also satisfied
so this permutation is added as a potential mapping.
In the second permutation $[1,0,2]$, where $X,Y,Z$ map to $B,A,C$, the constraints are inconsistent, {\em e.g.}
$x1\rightarrow y2$ and $x1 \rightarrow y0$, yielding $6$ distinct constraints rather than $n=3$,
so it is not a valid match.
In the third and last example, the number of distinct constraints equals $n$, but the liveout arrays do not match.
Thus, the only valid match is the one found in the first permutation.
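A runnable sketch of this procedure (Python; each array is described by the tuple of variable names defining its dimensions plus a liveout flag, taking the third array on each side as the liveout one, as in a GEMM):
\begin{verbatim}
from itertools import permutations

def find_matchings(user_arrs, api_arrs, user_out, api_out):
    valid = []
    for p in permutations(range(len(user_arrs))):
        fwd, bwd, ok = {}, {}, True
        for i, j in enumerate(p):
            if user_out[i] != api_out[j]:
                ok = False
                break
            for u, a in zip(user_arrs[i], api_arrs[j]):
                # Each user dimension variable must map to
                # exactly one API variable, and vice versa.
                if (fwd.setdefault(u, a) != a
                        or bwd.setdefault(a, u) != u):
                    ok = False
                    break
            if not ok:
                break
        if ok:
            valid.append(p)
    return valid

# The example above: only the identity permutation survives.
X, Y, Z = ("x0", "x1"), ("x1", "x2"), ("x2", "x0")
A, B, C = ("y0", "y1"), ("y1", "y2"), ("y2", "y0")
print(find_matchings([X, Y, Z], [A, B, C],
                     [0, 0, 1], [0, 0, 1]))  # -> [(0, 1, 2)]
\end{verbatim}
Note that the consistency check here (no variable bound twice) is equivalent to the distinct-constraint count of \texttt{dimsMatch}, and the per-array liveout comparison generalizes \texttt{outMatch}.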
\subsection{Using argument names}
Programs are developed by humans, so we can assume that the functions that humans write follow common patterns.
We exploit this by analyzing the argument names of the API and the user program to find lexical similarities.
\begin{figure}[ht]
\centering
\begin{equation} \label{eq2}
lev(a,b) = \left\lbrace
\begin{array}{ll}
|a| & \textup{if }|b| = 0,\\
|b| & \textup{if }|a| = 0,\\
lev(tail(a), tail(b)) & \textup{if }a[0] = b[0],\\
1 + min \left\lbrace
\begin{array}{l}
lev(tail(a),b) \\
lev(a, tail(b)) \\
lev(tail(a),tail(b))
\end{array}
\right. & \textup{otherwise}
\end{array}
\right.
\end{equation}
\caption{Levenshtein recursive definition} \label{fig:editdistance}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{figures/distance}
\caption{Levenshtein distance calculation for the arguments of the tensor core API (above) and an example user program.} \label{fig:distancexample}
\end{figure}
To compare argument names, we use the Levenshtein distance~\cite{levenshtein} to compute the distance between each of the user programs and API arguments.
Figure~\ref{fig:editdistance} shows the definition of the Levenshtein distance, which is the minimal number of single-character modifications needed to transform one word into another and thus measures how close two words are.
After computing the distance, the compiler selects the combination that minimizes the Levenshtein distance.
Figure~\ref{fig:distancexample} shows an application example of the Levenshtein distance to a real case of GEMM matching.
For calculating the distance, we strip the API prefix (\texttt{tc\_}) and convert all names to lowercase.
Results show that the most probable mapping for \texttt{tc\_A} is \texttt{A} in the user code, and for \texttt{tc\_lda} is \texttt{lda}, which are the right matches.
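A minimal sketch of this heuristic (Python; the exhaustive minimization over permutations is for illustration and is cheap for signatures of this width):
\begin{verbatim}
from functools import lru_cache
from itertools import permutations

def lev(a, b):
    @lru_cache(maxsize=None)
    def d(i, j):  # distance between a[:i] and b[:j]
        if i == 0:
            return j
        if j == 0:
            return i
        if a[i - 1] == b[j - 1]:
            return d(i - 1, j - 1)
        return 1 + min(d(i - 1, j), d(i, j - 1),
                       d(i - 1, j - 1))
    return d(len(a), len(b))

def best_mapping(user_args, api_args, prefix="tc_"):
    # Strip the API prefix and lowercase, as described above.
    norm = [a[len(prefix):].lower() if a.startswith(prefix)
            else a.lower() for a in api_args]
    best = min(permutations(user_args),
               key=lambda p: sum(lev(u.lower(), a)
                                 for u, a in zip(p, norm)))
    return dict(zip(api_args, best))

print(best_mapping(["A", "lda", "B"],
                   ["tc_A", "tc_lda", "tc_B"]))
# -> {'tc_A': 'A', 'tc_lda': 'lda', 'tc_B': 'B'}
\end{verbatim}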
\subsection{IO generation} \label{sec:iogen}
Once we have a candidate match we generate
random inputs of different sizes and test for input-output (IO) equivalence. We use 30 inputs of varying sizes.
Although IO behavioral equivalence is not proof, we can increase the number of tests for increased confidence. No existing technique, including IDL and KernelFaRer, can prove that a matched piece of code is
equivalent to an API; all therefore rely on user sign-off.
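As a sketch of the test harness (Python; \texttt{candidate} and \texttt{reference} stand for the extracted user code and the accelerator API wrapped as callables returning flat output lists, and the tolerance is an assumption of this sketch, since acceptable floating-point deviation is application dependent):
\begin{verbatim}
import random

def io_equivalent(candidate, reference, gen_sizes,
                  trials=30, tol=1e-5):
    # 30 random inputs of varying sizes, as described above.
    for _ in range(trials):
        sizes = gen_sizes()  # flat length per argument buffer
        args = [[random.uniform(-1.0, 1.0) for _ in range(n)]
                for n in sizes]
        out_c = candidate(*[list(a) for a in args])
        out_r = reference(*[list(a) for a in args])
        if len(out_c) != len(out_r) or any(
                abs(x - y) > tol
                for x, y in zip(out_c, out_r)):
            return False
    return True
\end{verbatim}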
\subsubsection{Behavioral Equivalence and the Limits of Verification}
ATC, like prior work on floating-point accelerators~\cite{facc}, uses
behavioral equivalence. The downside of this strategy is that it requires
programmer sign-off to make any substitution. However, due to the complexities
of verifying floating-point programs~\cite{facc}, verification of such liftings
are some way off.
In summary, the key challenges that all competing techniques face are:
\begin{itemize}[nolistsep,noitemsep]
\item Floating-point numbers often raise challenges in theorem provers
because they are hard to reason about formally.
\item Floating-point functions may have different accuracies in different
input ranges, meaning that the obvious checks of correctness
(even within bounds) are difficult to apply.
\end{itemize}
The backend of ATC is not tied to using behavioral equivalence. As we will
see, the use of such behavioral equivalence results in no false positives.
Further development of theorem prover technologies would mean that the weak
behavioral equivalence in ATC could easily be replaced with a theorem prover
guaranteeing correctness and enabling automatic transformations.
\section{Setup} \label{sec:setup}
We evaluate GEMM and convolution acceleration on specialized platforms.
For GEMM, we used an Intel i7-11700 (CPU) with an NVIDIA Quadro RTX 5000 (tensor cores) (XPU).
For convolution, we used the Google Cloud Platform (GCP) services equipped with a TPUv3 with 8 TPU cores.
Compilation benchmarks in Section~\ref{sec:complexity} are executed in an AMD EPYC 7413.
The Intel/NVIDIA platform runs CentOS 8.3 with kernel 4.18.0.
LLVM was downloaded from the official Git repository, using commit \texttt{329fda3}\@.
User codes were compiled using gcc 11.2.0 with \texttt{-O3 -march=native} flags.
We used cuBLAS 11.2 and MKL 2020.2.254 for compiling codes to the XPU and CPU, respectively.
For compiling convolution programs to the CPU, we used oneDNN v1.96.
The TPU system
runs Debian 10 with kernel 4.19.0-14.
\subsection{User code} \label{sec:setupsw}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.47\linewidth}
\resizebox{\linewidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Algorithm & Code & LoC & Layout & Sizes & Optimizations \\ \hline \hline
\multirow{12}{*}{Naive} & 1 & 22 & Column-major & Squared & None \\ \cline{2-6}
& 2 & 127 & Both & Any & None \\ \cline{2-6}
& 3 & 18 & Row-major & Any & None \\ \cline{2-6}
& 4 & 41 & Column-major & Squared & None \\ \cline{2-6}
& 5 & 11 & Row-major & Any & None \\ \cline{2-6}
& 6 & 11 & Row-major & Any & None \\ \cline{2-6}
& 7 & 30 & Row-major & Any & None \\ \cline{2-6}
& 8 & 18 & Column-major & Any & None \\ \cline{2-6}
& 9 & 40 & Column-major & Any & None \\ \cline{2-6}
& 10 & 39 & Column-major & Any & None \\ \cline{2-6}
& 11 & 43 & Row-major & Any & None \\ \cline{2-6}
& 12 & 11 & Row-major & Squared & None \\ \hline
\multirow{5}{*}{\begin{tabular}[c]{@{}l@{}}Naive \\ parallel\end{tabular}} & 13 & 39 & Row-major & Squared & OpenMP \\ \cline{2-6}
& 14 & 28 & Column-major & Squared & OpenMP \\ \cline{2-6}
& 15 & 164 & Row-major & Any & OpenMP \\ \cline{2-6}
& 16 & 22 & Row-major & Multiple of nthreads & C++ threads \\ \cline{2-6}
& 17 & 107 & Row-major & Squared & C++ threads \\ \hline
\multirow{4}{*}{Unrolled} & 18 & 57 & Row-major & Any & None \\ \cline{2-6}
& 19 & 50 & Row-major & Any & None \\ \cline{2-6}
& 20 & 63 & Row-major & Squared & OpenMP \\ \cline{2-6}
& 21 & 38 & Row-major & Squared, multiple of bs & None \\ \hline
\multirow{4}{*}{Kernel Calls} & 22 & 46 & Column-major & Any & None \\ \cline{2-6}
& 23 & 115 & Column-major & Any & OpenMP \\ \cline{2-6}
& 24 & 61 & Column-major & Any & None \\ \cline{2-6}
& 25 & 105 & Column-major & Any & Unrolled \\ \hline
\end{tabular}
}
\end{minipage}\qquad
\begin{minipage}[t]{0.47\linewidth}
\resizebox{\linewidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Algorithm & Code & LoC & Layout & Sizes & Optimizations \\ \hline \hline
Kernel Calls & 26 & 164 & Column-major & Any & Unrolled \\ \hline
\multirow{9}{*}{Blocked} & 27 & 104 & Row-major & Any & Block \\ \cline{2-6}
& 28 & 30 & Row-major & Squared & OpenMP \\ \cline{2-6}
& 29 & 52 & Column-major & Any & None \\ \cline{2-6}
& 30 & 35 & Row-major & Squared & None \\ \cline{2-6}
& 31 & 38 & Column-major & Squared & None \\ \cline{2-6}
& 32 & 42 & Row-major & Multiple of bs & Unrolled \\ \cline{2-6}
& 33 & 49 & Row-major & Squared & None \\ \cline{2-6}
& 34 & 18 & Row-major & Squared & None \\ \cline{2-6}
& 35 & 21 & Row-major & Squared & None \\ \hline
\multirow{2}{*}{Goto} & 36 & 247 & Column-major & Squared & Intrinsics (SSE) \\ \cline{2-6}
& 37 & 89 & Row-major & Squared & None \\ \hline
\multirow{3}{*}{Strassen} & 38 & 210 & Row-major & Squared & None \\ \cline{2-6}
& 39 & 315 & Row-major & Squared, power of 2 & None \\ \cline{2-6}
& 40 & 162 & Row-major & Squared & None \\ \hline
\multirow{10}{*}{Intrinsics} & 41 & 102 & Row-major & Squared & Intrinsics (AVX2) \\ \cline{2-6}
& 42 & 91 & Row-major & Multiple of 8 & Intrinsics (AVX2) \\ \cline{2-6}
& 43 & 82 & Row-major & Multiple of 8 & Intrinsics (AVX2) \\ \cline{2-6}
& 44 & 58 & Row-major & Any & Intrinsics (SSE) \\ \cline{2-6}
& 45 & 112 & Row-major & Multiple of bs & Intrinsics (AVX2) \\ \cline{2-6}
& 46 & 136 & Row-major & Multiple of bs & Intrinsics (AVX2) \\ \cline{2-6}
& 47 & 120 & Row-major & Any & Intrinsics (AVX2) \\ \cline{2-6}
& 48 & 143 & Row-major & Multiple of bs & Intrinsics (AVX2) \\ \cline{2-6}
& 49 & 57 & Row-major & Multiple of bs & Intrinsics (AVX2) \\ \cline{2-6}
& 50 & 60 & Row-major & Any & Intrinsics (SSE) \\ \hline
\end{tabular}
}
\end{minipage}
\vspace*{-0.35cm}
\caption{List of GEMM codes} \label{gemmcodes}
\end{figure*}
\begin{table}[t]
\resizebox{1\linewidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Algorithm & Code & LoC & Nº Args & Optimizations & Constraints & C struct? \\ \hline \hline
\multirow{9}{*}{Direct} & 1 & 35 & 12 & None & None & No \\ \cline{2-7}
& 2 & 36 & 10 & OpenMP & FW = FH = 3 & No \\ \cline{2-7}
& 3 & 34 & 8 & OpenMP & FW = FH = 3 & No \\ \cline{2-7}
& 4 & 43 & 11 & None & FW = FH = 3 & No \\ \cline{2-7}
& 5 & 39 & 8 & OpenMP & FW = FH = 3 & No \\ \cline{2-7}
& 6 & 76 & 16 & None & N = 1 & No \\ \cline{2-7}
& 7 & 209 & 18 & Vectorized & N = 1 & Yes \\ \cline{2-7}
& 8 & 102 & 12 & None & None & No \\ \cline{2-7}
& 9 & 42 & 16 & None & None & No \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}im2col+\\ gemm\end{tabular}} & 10 & 189 & 15 & None & N = 1 & Yes \\ \cline{2-7}
& 11 & 286 & 15 & BLAS & N = 1 & Yes \\ \cline{2-7}
& 12 & 179 & 17 & BLAS & FW = FH & Yes \\ \hline
\multirow{3}{*}{Winograd} & 13 & 687 & 17 & Intrinsics + OpenMP & FW = FH = 3 & No \\ \cline{2-7}
& 14 & 254 & 12 & None & N = 1 & Yes \\ \cline{2-7}
& 15 & 782 & 12 & Intrinsics + OpenMP & FW = FH = 3 & No \\ \hline
\end{tabular}
}
\caption{List of convolution codes} \label{convcodes}
\end{table}
We explored GitHub looking for C and C++ GEMM codes, analyzing more than 400 programs from which we selected 50.
We discarded the rest because of incorrect implementations, compilation errors or duplicated code.
The final list of programs is shown in Table~\ref{gemmcodes}.
We categorize the codes as follows:
{\em Naive:} naive implementations with the traditional 3-loop structure;
{\em Naive Parallel:} as Naive but with simple outer loop parallelization;
{\em Unrolled:} naive implementation with unrolled loops;
{\em Kernel Calls:} implementations that divide the loops into different function calls;
{\em Blocked:} tiled implementations;
{\em Goto:} implementations of the Goto algorithm~\cite{goto};
{\em Strassen:} implementations of the Strassen algorithm~\cite{strassen};
{\em Intrinsics:} implementations using Intel intrinsics.
In addition, we selected 50 non-GEMM projects to check whether any of the approaches gave false positives.
\paragraph{Convolutions} We explored GitHub looking for C and C++ 4D convolution implementations.
We analyzed around 50 programs, from which we selected 15 based on the same methodology used for selecting GEMMs.
The list of convolution programs is shown in Table~\ref{convcodes}.
We have included codes from the most relevant convolution implementations:
{\em Direct:} the direct convolution algorithm;
{\em im2col+gemm:} an algorithm that casts the input as matrices (im2col) and later uses a GEMM, as in Caffe~\cite{caffe};
{\em Winograd:} the Winograd algorithm.
\subsection{Methods}
We evaluate our approach against four well-known schemes:\\
{\bf IDL:}
Idioms are described using an idiom description language \cite{idl}, which is translated into a set of constraints over LLVM IR.\\
{\bf KernelFaRer:}
Uses different pattern matching to detect specific code constructs, matching specific matrix-multiplication structures \cite{kernelfarer}.\\
{\bf Polly:}
Detects static control parts (SCoPs) in the code using the polyhedral model \cite{llvmpolly}.
It does not replace the code with a call to an optimized library.\\
{\bf FACC*:}
FACC uses neural embeddings and behavioral synthesis to detect candidates for acceleration~\cite{facc}. It is limited to 1D arrays
so we developed an extended version, FACC*, which supports multi-dimensional arrays. \\
\section{Results} \label{sec:evaluation}
\subsection{Detection} \label{sec:cprograms}
\begin{figure*}[t]
\includegraphics[width=\linewidth]{evaluation/matched_plots}
\caption{Percentage of matched GEMM codes by different techniques.}
\label{fig:matched}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=\linewidth]{evaluation/matched_reasons}
\caption{Percentage of matched GEMM codes by ATC divided by failure reason.} \label{fig:atcsuccess}
\end{figure}
Figure~\ref{fig:matched} shows the percentage of GEMM programs matched by each technique across each of 8 categories listed in Table~\ref{gemmcodes}.
\paragraph{IDL} The constraint based scheme~\cite{idl} only matches 6 out of 50 cases.
These programs are largely naive implementations of GEMM, with a simple loop structure.
It is able to manage 2 programs containing unrolled loops but fails on anything more complex.
Matching more diverse cases would require writing a new IDL constraint description for each sub-class.
\paragraph{KernelFaRer} This code matching approach~\cite{kernelfarer} is more successful, matching
11 GEMMs due to a more robust pattern matcher. For straightforward sequential
implementations, it is able to match all but one of the cases.
However, any code variation, including loop unrolling, defeats it.
\paragraph{Polly}
Although it does not match and replace GEMMs, it can detect SCoPs which may be candidates for replacement
with appropriate API calls. It is less successful than KernelFaRer in detecting naive implementations but is more robust across other, more complex categories, including parallel and unrolled versions and two blocked cases.
It slightly outperforms KernelFaRer, matching 13 vs. 11 out of 50 cases.
\paragraph{FACC*}
Unlike the other approaches, FACC* performed poorly on naive implementations, but better on others.
Here, the size of the mapping search space is the limiting factor.
It was able to find 10 cases in the available time
(timeout $\leq$ 10 mins). We examine the reasons for this in Section~\ref{sec:complexity}.
\paragraph{ATC} Our approach is significantly more robust across all categories,
matching 42 out of 50 cases.
It is able to detect all naive implementations and the majority within each other category.
It detects more naive parallel implementations, unrolled and blocked programs than Polly and is the only technique to detect GEMMs in codes containing kernel calls and intrinsic instructions.
\subsubsection{Accuracy}
Figure~\ref{fig:atcsuccess} provides a summary of ATC's success and failure by
type. In 8 cases ATC failed to detect that the program contained a GEMM. In one
case, program 23, this is due to
there being too many candidate matches: 280, which is above our limit of 100 candidates.
The remaining cases are due to overly aggressive search pruning, missing a legal match.
Improved search heuristics are likely to improve program coverage.
\paragraph{False positives} None of the methods classified any of the 50 non-GEMMs as a GEMM\@. Across all methods, there were no false positives.
\subsection{Performance}
\label{sec:performance}
\begin{figure*}[t]
\includegraphics[width=0.9\linewidth]{evaluation/speedup_big}
\caption{Geometric mean speedup obtained by IDL, KernelFaRer, FACC* and ATC in GEMM programs with
$n=8192$.}
\label{fig:speedups}
\end{figure*}
The performance of each approach
is shown in Figure
\ref{fig:speedups}.
Polly is not included here as although it can detect SCoPs, it does not explicitly identify them as GEMMs for API replacement.
We show two bars for KernelFaRer: KFR, which corresponds to replacing the GEMM code with an optimized CPU implementation as described in~\cite{kernelfarer}, and KFR (XPU), our extension that replaces the CPU library with the optimized XPU implementation.
IDL and FACC* directly target the accelerator, while ATC chooses the CPU or accelerator based on its SVM platform predictor. This runtime prediction cost is negligible ($\leq 0.3$ msec) and is included in Figure~\ref{fig:speedups}.
What is immediately clear is that detecting more GEMMs leads to better overall speedup. In the Naive category, KFR and ATC are both able to achieve good performance, with a speedup of 726x and 1031x, respectively. The gap is narrowed when using KFR (XPU). However, KFR is unable to detect GEMMs in any other category leading to just a 6.2x speedup overall while ATC achieves 344.0x. Unsurprisingly, there is more performance available on naive sequential implementations than in those cases where the programmer has spent effort in optimizing the program.%
\subsection{Candidate search complexity and compile time}
\label{sec:complexity}
\begin{figure*}[t]
\includegraphics[width=0.95\linewidth]{evaluation/candidates_log}
\caption{Comparison of the number of candidates generated for matching GEMM codes: FACC* vs our approach.} \label{fig:numcand}
\end{figure*}
One of the key challenges in matching code to APIs is searching for program variables that map to API formal parameters.
As the width of the API and complexity of the user program increase, this becomes combinatorially expensive.
Figure~\ref{fig:numcand} compares FACC*'s naive matching of variables with our approach based on the Levenshtein distance.
Naive matching varies considerably from just 4 candidates to over 1 million.
Our approach greatly reduces the number of candidates for the majority of the programs.
There is one special case, code 23, where we reduce the number of candidates, but it is still too high. %
\begin{figure}[t]
\includegraphics[width=\linewidth]{evaluation/timing}
\caption{Compilation time for different number of candidates.} \label{fig:comptime}
\end{figure}
Figure~\ref{fig:comptime} shows the compilation time of ATC.
The initial neural classifier has a negligible constant execution time of 1.3 seconds, while the other phases' compilation time grows with the number of candidates.
As the number of candidates increases,
compilation time becomes prohibitively expensive.
Code 23 has 280 candidates, which would take 35 more minutes to evaluate.
We limit the number of candidates considered to 100, which corresponds to a timeout of $\leq 10$ minutes.
\subsection{Profitability accuracy}
To measure the accuracy of the SVM platform predictor, we built a model offline and
tested it on unseen data values.
\begin{table}[t]
\resizebox{\linewidth}{!}{%
\begin{tabular}{lllllll}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\\[-16pt]Parameter\\[-0pt] Value\\[-0pt] (mnk)\end{tabular}} & \multicolumn{5}{l}{\begin{tabular}[c]{@{}l@{}} \\m\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Global \\ Accuracy\end{tabular}} \\ \cmidrule{2-6}
& \multicolumn{1}{l}{2000} & \multicolumn{1}{l}{4000} & \multicolumn{1}{l}{6000} & \multicolumn{1}{l}{8000} & 10000 & \\ \midrule
111 & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{70.0\%} & 100\% & 93.8\% \\
123 & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{78.9\%} & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{100\%} & 100\% & 95.9\% \\
312 & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{84.3\%} & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{100\%} & 100\% & 96.9\% \\
136 & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{89.5\%} & \multicolumn{1}{l}{100\%} & \multicolumn{1}{l}{100\%} & 100\% & 97.9\% \\
\bottomrule
\end{tabular}
}
\caption{SVM accuracy for different sizes and shapes.
The (mnk) code gives each dimension as a multiple of the base size $m$:
111 means $n = m$ and $k = m$; 123 means $n = 2m$ and $k = 3m$; etc.} \label{tab:svm}
\end{table}
Table~\ref{tab:svm} summarizes the SVM accuracy with different input sizes and shapes.
The SVM achieves a global accuracy of 99.7\%, with mispredictions occurring between $m=2000$ and $m=8000$, which is the ``edge'' between the CPU and the XPU.
In all other intervals, the prediction is always correct.
The best accuracy is achieved with non-squared matrices, while square matrices give slightly lower accuracy.
Overall, this is a highly accurate predictor with a negligible runtime overhead of $\leq 0.3$ msec.
\subsection{Convolutions}
\begin{figure}[t]
\includegraphics[width=\linewidth]{evaluation/matched_plots_conv}
\caption{Matched convolution codes by ATC.}
\label{fig:matched_conv}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\linewidth]{evaluation/speedup_conv}
\caption{ATC speedup in convolution programs with $h=w=224$, $kw=kh=11$, $c=3$, $k=96$ and $n=100$.}
\label{fig:speedups_conv}
\end{figure}
Our approach is generic and can be applied to APIs other than GEMM.
As an example, we consider tensor convolutions, which are a significant component of DNN workloads.
While IDL, KernelFaRer, Polly and FACC* were unable to detect any of the convolutions, ATC detected 10 of the 15 convolutions, as shown in Figure~\ref{fig:matched_conv};
we were unable to match 5 due to the excessive number of candidates.
Figure~\ref{fig:speedups_conv} shows the performance achieved by replacing the original code with library calls
for each of the programs we are able to accelerate. For all codes, the SVM predicts that the TPU accelerator outperforms the CPU, giving an average 17.8x performance improvement across the programs.
\section{Related work}
\paragraph{Matching in Programs}
Matching high-level program structure has been used
to discover parallelism~\cite{di1996pap},
heterogeneous offloading~\cite{andion2015compilation,Murray2011} and
many other core compiler tasks~\cite{MatchPD}.
Constraint languages make these tasks easier~\cite{idl,Blindell2018,MatchPD} but their
constraints are very sensitive to code structure~\cite{kernelfarer}.
For matrix multiplications in particular,
KernelFaRer~\cite{kernelfarer} provides a more robust approach,
detecting characteristics that define matrix multiplications.
Polyhedral analyses can also be used to target matrix multiplication
accelerators~\cite{Bhaskaracharya2020,Sun2021}, but both these techniques
fail to scale to the diversity of real code.
FACC~\cite{facc} uses IO equivalence, which is robust to program structure, but only addresses the challenges of FFTs and does not scale to longer function signatures used for GEMM.
Supporting arbitrary accelerator types requires the compiler to handle multi-dimensional arrays, whereas FACC supports only 1D arrays.
Because the search space for matching API parameters is small for 1D arrays and FFTs, FACC includes no mechanism to reduce it.
With more complex programs and domains, this limitation makes compilation intractable.
Mask~\cite{Samak2020} uses symbolic execution to prove equivalence, which does not work well for floating-point problems.
Fuzzy classification techniques based on code clone
detection~\cite{Lu2021,Su2016}, domain-classification~\cite{Uhrie2021}, pattern matching~\cite{collie2019type}, code embeddings~\cite{Alon2019,Allamanis2015,DeFreez2018} and
identifiers~\cite{Numata2016,Klainongsuang2019} can be used
to help compile to accelerators~\cite{facc}. These
classification strategies are able to classify diverse
code structures, but do not provide a compilation strategy
for using an accelerator on their own.
A large class of techniques focuses on migrating \textit{between}
APIs. These techniques often use program synthesis~\cite{Collie2020},
NLP~\cite{Ni2021} and code embeddings~\cite{Nguyen2017,Phan2017}.
These techniques are unable to extract existing code into APIs.
\paragraph{Compiling for GEMM Accelerators}
Existing compilation strategies largely focus on \textit{lowering}
code from intrinsics to accelerators using rewrite rules~\cite{Steuwer2015a,Schlaak2022,Weng2021}
and synthesis techniques~\cite{Cowan2020}.
Existing approaches to extracting matrix multiplications~\cite{idl,kernelfarer} are brittle.
Synthesis-based techniques~\cite{Ahmad2019,Mendis2015,Angstadt2020a}
and rewriting-based techniques~\cite{chelini2021progressive,Steuwer2016}
have been developed to extract code into DSLs that can then
be lowered, but they largely require flexible DSLs rather than
the fixed APIs presented by hardware accelerators.
\paragraph{Performance Prediction}
Predicting the performance of code on hardware accelerators
is challenging, as the break-even point may depend on
many different arguments within a function's interface~\cite{logca}.
LogCA~\cite{logca} introduces static performance comparison
models for hardware accelerators, and similar models have been applied
in offloading tasks~\cite{Yuan2020}.
Machine learning has often been applied in profitability settings,
such as OpenCL kernels~\cite{openclmapping,mergeorseparate} and
OpenMP~\cite{Mishra2020}.
Similar techniques have been applied to FPGAs,
by estimating power/performance~\cite{Fuhr2019} and tracking actual performance~\cite{Rigamonti2016}.
\section{Conclusions} \label{sec:conclusions}
This work presented ATC, a flexible domain-agnostic compiler that matches legacy linear algebra code to accelerators.
By using IO behavioral equivalence and smart search space reduction, we are able to match over 80\% of challenging real-world programs to accelerator APIs, significantly outperforming all alternative approaches.
Supporting new domains beyond GEMM and convolution is straightforward because ATC focuses on behavior rather than code structure, which makes it flexible and extensible.
Furthermore, to support other GEMM or convolution accelerators, only the accelerator API is needed: ATC adapts to the new specification automatically.
Future work will examine how to further reduce the search space using online learning and to expand the complexity of user code considered.
Longer-term, we wish to automatically target a range of accelerators with diverse functionality, matching and transforming user code to maximize performance.
\section{Introduction and Motivation}
The notion of residual properties was first introduced by Philip
Hall in 1954 \cite{PH54}. Let $X$ be a class of groups. A group $G$ is
residually-$X$ if, for every non-identity element $g$ in $G$,
there is an epimorph of $G$ in $X$ such that the element
corresponding to $g$ is not the identity.
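For instance, the infinite cyclic group $\mathbb{Z}$ is residually
finite: given a non-identity element $n \in \mathbb{Z}$, choose a
prime $p$ not dividing $n$; the canonical epimorphism $\mathbb{Z}
\rightarrow \mathbb{Z}/p\mathbb{Z}$ then carries $n$ to a
non-identity element. Since these epimorphs are abelian, the same
maps show that $\mathbb{Z}$ is residually solvable.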
In this paper we focus on residual solvability. The notion of
residual solvability of groups was pioneered particularly by
Gilbert Baumslag in his celebrated paper \cite{GB71} where he
proved that positive one-relator groups are residually solvable.
The author in \cite{DK} studied this notion in general. In
\cite{DK2005} the author studied residual solvability of
residually solvable groups.
In 1963 Gilbert Baumslag studied residual finiteness of
generalized free products of finitely generated torsion-free
nilpotent groups \cite{GB63}. He showed first that if both factors
are non-abelian then this amalgamation need not be residually finite,
and he found conditions that ensure $G$ is residually finite. He
actually proved in general that this kind of structure is
free-by-residually-finite, or meta-residually-finite. A few years
later, in 1968, one of his students, Joan Landman Dyer, continued
his work \cite{JD68} on residual finiteness of these structures.
She showed in particular that if the factors are not finitely
generated nilpotent groups, such products need not be
free-by-residually-finite (by taking isomorphic factors of class
three), but rather are
residually-finite-by-free-by-residually-finite.
It is interesting to mention that the generalized free product of
two finitely generated nilpotent groups (or two finitely generated
free groups) with a finitely generated subgroup amalgamated has a
solvable word problem (see page 150 \cite{GB93}). In this work we
consider the generalized free product of finitely generated
nilpotent groups, and discuss how close such groups are to being
residual solvable. We show how conditions on the amalgamating
subgroup affects residual solvability. Note that the
free products of finitely generated nilpotent groups are residually solvable.\\
\newline
\noindent {\bf Effect of abelianization on amalgamated products of
nilpotent groups}
\newline
\noindent We give a complete description of the abelianization of
generalized free products. In particular, we see that the
abelianization of such groups with nilpotent factors is not
trivial, using the Frattini theorem, which states that the
commutator subgroup of a finitely generated nilpotent group
consists of non-generators. As a result, we conclude that generalized free
products of two finitely generated nilpotent groups are not
perfect. Note that the condition of being finitely generated is
required in these results since we use the Frattini theorem.
\begin{theorem} \label{Not_Perfect} The generalized free product
of two finitely generated nilpotent groups amalgamating a proper
subgroup of each of them is not perfect.
\end{theorem}
\noindent This property of finitely generated nilpotent groups
plays a critical role in our approach to our question. For example,
finitely generated polycyclic factors satisfy the maximal
condition but may fail to satisfy the conclusion of the Frattini
theorem, and the resulting amalgams can turn out to be perfect under certain conditions.\\\\
\noindent
{\bf Cyclic amalgamated subgroup and residual solvability}
\newline
\noindent We first consider the case when the amalgamating
subgroup is cyclic. It turns out that choosing an appropriate
solvable filtration of each factor, so that the generator of the
amalgamating subgroup does not lie in the $n$-th term of the upper
central series, enables us to prove that such a group is
residually-solvable-by-solvable.
\begin{theorem} \label{Cyclic_amalgam} The generalized free product of two
finitely generated nilpotent groups amalgamated by a cyclic
subgroup is residually solvable.
\end{theorem}
\noindent {\bf The amalgamated subgroup is central in all factors}
\newline
\noindent We next define the generalized central product of an
arbitrary number of groups, and show that each factor injects into
such a product. We then look at the case where the amalgamating
subgroup is central in all factors, and we show using the
generalized central products that these structures turn out to be
free-by-nilpotent. We note that a nilpotent extension of a free
group need not be residually nilpotent, but it is residually
solvable.
\begin{theorem} \label{finiteNilpotentRS} The generalized free product of an arbitrary number of finitely
generated nilpotent groups of bounded class, amalgamating a
central subgroup in each of the factors, is an extension of a free
group by a nilpotent group. Furthermore, such groups are
residually solvable.
\end{theorem}
\noindent {\bf The case where one of the factors is abelian}
\newline
\noindent We then consider the case where just one of the factors
is abelian. It turns out that such groups with an abelian factor
are free-by-nilpotent.
\begin{theorem} \label{abelianonefactor} The generalized free product of a finitely generated
torsion-free abelian and a nilpotent group is residually
solvable-by-abelian-by-finite abelian. Furthermore such groups are
residually solvable.
\end{theorem}
\noindent {\bf The case when the amalgamated subgroup is of finite
index in at least one of the factors}
\begin{theorem} \label{finind} The generalized free product of two finitely
generated torsion-free nilpotent groups amalgamating a proper
subgroup which is of finite index in at least one of the factors,
is an extension of a free group by a torsion-free nilpotent group.
Furthermore, such groups are residually solvable.
\end{theorem}
Note that here the condition of being torsion-free is necessary
for the factors, because of Mal'cev's Fundamental Theorem, as we
see later.\\\\
\noindent {\bf Doubles of nilpotent groups and residual
solvability, arbitrary number of factors}
\newline
\noindent By a double we mean the amalgamated product of two
groups where the factors are isomorphic and the amalgamated
subgroups are identified under the same isomorphism. One can
generalize the definition of doubles to an arbitrary number of
factors.
\begin{theorem}\label{finite_double}
Let $\{A_i| i \in I \}$ be an arbitrary indexed family of
isomorphic torsion-free nilpotent groups, such that $\bigcap_{i
\in I} A_i = C$, and let $G$ be the generalized free product of
$A_i$s amalgamated by $C$:
\begin{eqnarray*}
G = \{ {\prod_{i \in I}}^* A_i; C \}.
\end{eqnarray*}
Then $G$ is an extension of a free group by a nilpotent group.
Furthermore, $G$ is residually solvable.
\end{theorem}
\noindent {\bf Generalized free products of nilpotent groups
sometimes may fail to be residually solvable.}
\newline
\noindent Gilbert Baumslag gave a counter example in \cite{GB72}
that shows that not every subgroup of the generalized free product
of two finitely generated torsion-free nilpotent groups is
indicable. We use this example to prove the following proposition:
\begin{proposition} \label{nil_neg} There exist two finitely generated torsion-free non-abelian free
nilpotent groups $A$ and $B$ such that the amalgamated product of
them with abelian amalgamating subgroup $C_A = C_B$,
\begin{eqnarray*}
G = \{ A \ast B ; C_A = C_B \},
\end{eqnarray*}
is not residually solvable and not poly-residually solvable.
\end{proposition}
This implies that an abelian amalgamating subgroup is not
sufficient for residual solvability.\\
\newline
\noindent {\bf Poly-residual solvability of the generalized free
product of nilpotent groups}\\
\noindent As Proposition \ref{nil_neg} suggests, we need to impose
some conditions on the amalgamating subgroup to ensure that the
generalized free product of finitely generated nilpotent groups is
poly-residually solvable; by a poly-residually solvable group we
mean a group that has at least one poly-residually solvable series
(see Section \ref{Poly_RS} for a precise definition). Here is the
theorem that gives us these conditions:
\begin{theorem} \label{Poly-RS} The generalized free product of two finitely generated
nilpotent groups $A$ and $B$, amalgamating subgroups
$C_A$ and $C_B$ of them respectively, is poly-residually solvable if the
following condition holds: the solvable filtrations of $A$ and $B$
defined by the upper central series of $A$ and $B$,
respectively, are compatible, i.e.
$\xi_i A \cap C_A \stackrel{\phi_i}{=} \xi_i B \cap C_B$.
\end{theorem}
\subsection*{Acknowledgment} I thank my Ph.D. supervisor
G.Baumslag and also K.J.Falconer and P.de la Harpe for helpful
comments.
\section{Background and Preliminary Results}
In this section we recall some definitions and facts and prove
some lemmas to be used later.
Recall that $G$ is called an extension of $A$ by $Q$ if there is
a short exact sequence $1 \rightarrow A \rightarrow G
\rightarrow Q \rightarrow 1$. We say $G$ is meta-$X$, where $X$ is
a property (or class), if $G$ is an extension of $A$ by $Q$ where
$A$ and $Q$ have property (or class) $X$.
\subsection{Subgroups of amalgamated products}
\label{Hanna} We will use a theorem of Hanna Neumann \cite{HN49}
extensively in this paper. With regard to abstract groups, Hanna
Neumann showed in the 1950s that, in general, subgroups of
amalgamated products are no longer amalgamated products, but
generalized free products; indeed, she proved the following: let
$K$ be a subgroup of $G =\{ A \ast B; C \}$, then $K$ is an
HNN-extension of a tree product in which the vertex groups are
conjugates of subgroups of either $A$ or $B$ and the edge groups
are conjugates of subgroups of $C$. The associated subgroups
involved in the HNN-extension are also conjugates of subgroups of
$C$. If $K$ misses the factors $A$ and $B$ (i.e. $K \cap A = \{1\}
= K \cap B$), then $K$ is free; and if $K$ misses the amalgamated
subgroup $C$ (i.e. $K \cap C = \{1\}$), then $K = {\prod_{i \in
I}}^* X_i \ast F$, where the $X_i$ are conjugates of subgroups of
$A$ and $B$ and $F$ is free (see \cite{GB93} for more
information).
Let us mention that later a description was given by the
Bass-Serre theory \cite{S80}, with groups acting on graphs to give
geometric intuition: the fundamental group of a graph of groups
generalizes both amalgamated products, HNN-extensions and tree
products.
\section{Some results on the structure of the abelianization}
\label{Abelian} The following lemma formulates the abelianization
of the amalgamated product of two groups. (We leave it as an
exercise to the reader to check this.)
\begin{lemma}
\label{abelianization_lemma} Let $G$ be the amalgamated product of
two groups $A$ and $B$ amalgamating $C$,
\begin{eqnarray*}
G=\{A \ast B; C \}.
\end{eqnarray*}
Then the abelianization $G_{ab}$ of $G$ takes the following
form:
\begin{eqnarray*}
G_{ab} = (A_{ab} \times B_{ab})/{gp(\bar{c} \alpha \bar{c}^{-1}
\beta | c \in C)},
\end{eqnarray*}
where $\alpha$ is the monomorphism from $C$ into $A$ and $\beta$
is the monomorphism from $C$ into $B$.
\end{lemma}
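To illustrate the lemma, consider the familiar decomposition of
the modular group, $SL(2,\mathbb{Z}) \simeq \{A \ast B; C\}$, where
$A = gp(a)$ is cyclic of order $4$, $B = gp(b)$ is cyclic of order
$6$, and $C$ is cyclic of order $2$, embedded via $c \alpha = a^2$
and $c \beta = b^3$ for the generator $c$ of $C$. Both factors are
abelian, so the lemma gives
\begin{eqnarray*}
G_{ab} = (A \times B)/{gp(a^2 b^{-3})},
\end{eqnarray*}
a group of order $24/2 = 12$; one checks that it is cyclic,
recovering the well-known fact that the abelianization of
$SL(2,\mathbb{Z})$ is cyclic of order $12$.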
\begin{lemma} \label{onto_abelianization} Let $G$ be the amalgamated product of $A$ and $B$ identifying $C_A$ with $C_B$,
\begin{eqnarray*}
G = \{A \ast B;C_A = C_B\}.
\end{eqnarray*}
Then the abelianization $G_{ab}$ of $G$ maps onto
\begin{eqnarray*}
D = A_{ab}/{{gp({\bar{c}}_a | c_a \in C_A)}} \times
B_{ab}/{gp({{\bar{c}}_b}^{-1} | c_b \in C_B)}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
By Lemma \ref{abelianization_lemma}, the abelianization of $G$ can
be expressed as:
\begin{eqnarray*}
G_{ab} = (A_{ab} \times B_{ab})/{gp(\bar{c_a}{\bar{c_b}}^{-1})}.
\end{eqnarray*}
Put
\begin{eqnarray*}
N & = & gp(\bar{c_a}{\bar{c_b}}^{-1})\\
N_1 & = & {gp({\bar{c}}_a)}\\
N_2 & = & {gp({{\bar{c}}_b}^{-1})}.
\end{eqnarray*}
Let $\theta$ be the map from $G_{ab}$ into $D$,
\begin{eqnarray*}
\theta : G_{ab} & \rightarrow & D\\
\theta : (A_{ab} \times B_{ab})/N & \rightarrow & A_{ab}/{N_1} \times B_{ab}/{N_2}
\end{eqnarray*}
defined by:
\begin{eqnarray*}
\bar{a} N \mapsto \bar{a} N_1 \text{ and } \bar{b} N \mapsto
\bar{b} N_2.
\end{eqnarray*}
A typical element $w$ in $N$ takes the form $w = \bar{c_a}
{\bar{c_b}}^{-1}$, so
\begin{eqnarray*}
w = \bar{c_a} {\bar{c_b}}^{-1} \mapsto (\bar{c_a} N_1,
{\bar{c_b}}^{-1} N_2) = (N_1 , N_2).
\end{eqnarray*}
and by Von Dyck's theorem $\theta$ is an epimorphism as requested.
\end{proof}
The following is an alternative proof of Lemma
\ref{onto_abelianization}, using presentations.
\begin{proof}
Let $A$ and $B$ have presentations:
\begin{eqnarray*}
A = \langle X;R \rangle \; \text{ and } B = \langle Y;S \rangle.
\end{eqnarray*}
So the abelianizations of $A$ and $B$ have presentations:
\begin{eqnarray*}
A_{ab} & = & \langle X; R \cup \{[x_i, x_j]| x_i, x_j \in X\} \rangle \\
B_{ab} & = & \langle Y; S \cup \{[y_i, y_j]| y_i, y_j \in Y\} \rangle .
\end{eqnarray*}
The direct product of $A_{ab}$ and $B_{ab}$ has the following
presentation since, by definition of the direct product, all
elements of $A_{ab}$ commute with all elements of $B_{ab}$:
\begin{eqnarray*}
A_{ab} \times B_{ab} = \langle X \cup Y;&& R \cup S \cup \{ [x_i, x_j]|\; x_i, x_j \in X \}\\
&& \cup \{ [y_i, y_j]|\; y_i, y_j \in Y \}\\
&& \cup \{\;[x, y] \;|\; x \in X, y \in Y \} \rangle .
\end{eqnarray*}
With $\alpha$ the monomorphism from $C$ into $A$, and $\beta$ the
monomorphism from $C$ into $B$, put
\begin{eqnarray*}
N & = & gp(c \alpha \; c {\beta}^{-1} | c \in C )\\
N_1 & = & gp(c \alpha | c \in C)\\
N_2 & = & gp(c {\beta}^{-1} | c \in C).
\end{eqnarray*}
By Lemma \ref{abelianization_lemma}, $G_{ab}$, the abelianization
of $G$, has the following presentation:
\begin{eqnarray*}
G_{ab} = (A_{ab} \times B_{ab})/N = \langle X \cup Y; R \cup S & \cup & \{[x_i, x_j]| x_i, x_j \in X\} \\
& \cup & \{[y_i, y_j]| y_i, y_j \in Y\} \\
& \cup & \{[x, y]\;\;| x \in X, y \in Y \} \\
& \cup & \{ c \alpha \; c {\beta}^{-1} | c \in C \} \rangle .
\end{eqnarray*}
Then the presentation of each factor of $D$ is
\begin{eqnarray*}
{A_{ab}}/{N_1} & = & \langle X; R \cup \{ [x_i, x_j]|\; x_i, x_j \in X \} \cup \{c \alpha | c \in C\} \rangle \\
{B_{ab}}/{N_2} & = & \langle Y; S \cup \{ [y_i, y_j]|\; y_i, y_j
\in Y \} \cup \{c {\beta}^{-1} | c \in C\} \rangle .
\end{eqnarray*}
Their direct product $D$ has the following presentation:
\begin{eqnarray*}
D = {A_{ab}}/{N_1} \times {B_{ab}}/{N_2} = && \langle X \cup Y; R \cup S \cup \{ [x_i, x_j]|\; x_i, x_j \in X \}\\
&& \cup \{ [y_i, y_j]|\; y_i, y_j \in Y \}\\
&& \cup \{ c \alpha| c \in C\}\\
&& \cup \{ c {\beta}^{-1} | c \in C\}\\
&& \cup \{ [x ( c \alpha), y (c {\beta}^{-1})]| x \in X, y \in Y, c \in C \} \rangle .
\end{eqnarray*}
Now define $\theta$ to be a map from $G_{ab}$ into $D$, by sending
$x_i \mapsto x_i$ and $y_i \mapsto y_i$. Then $N \mapsto 1$, so by
Von Dyck's Theorem, $\theta$ defines a homomorphism from $G_{ab}$
onto $D$.
\end{proof}
\section{Effects of the order of the abelianization of amalgamated products of nilpotent groups}
In order to study the effect of the order of the abelianization of
the amalgamated products of nilpotent groups, we first study the
effect of indicability on the order of finitely generated
nilpotent groups. Recall that a group $A$ is termed indicable if
there exists a homomorphism of $A$ onto the infinite cyclic group.
A finitely generated group $G$ is indicable if and only if
$G_{ab}$ is infinite. Higman \cite{GH40} proves that every
finitely generated torsion-free nilpotent group is indicable and
is therefore infinite. We conclude that the abelianization of
every finitely generated torsion-free nilpotent group is also
infinite. Note that the abelianization of an infinite finitely
generated nilpotent group $A$ is again infinite: there is a
canonical homomorphism from $A$ onto $A/{\tau A}$, where $\tau A$
is the torsion subgroup of $A$. Since $\tau A$ is finite, $A/{\tau A}$ is an infinite finitely
generated torsion-free nilpotent group, hence indicable, so $(A/{\tau A})_{ab}$ is infinite.
Since $A_{ab}$ maps onto $(A/{\tau A})_{ab}$, $A_{ab}$ is infinite.
To prove the following proposition we use a theorem of G. Baumslag
\cite{GB72} that states that the amalgamated product of two
finitely generated torsion-free nilpotent groups is indicable.
\begin{proposition} \label{Abel_Amal_t-f_Nilp} The abelianization of the amalgamated product of two
finitely generated nilpotent groups is infinite if one of the factors
is infinite.
\end{proposition}
\begin{proof} First let $A$ and $B$ be two finitely generated torsion-free nilpotent
groups, let $C$ be a proper subgroup of both of them, and $G$ be
the amalgamated product of them, amalgamating $C$,
\begin{eqnarray*}
G= \{A \ast B;C\}.
\end{eqnarray*}
$G$ is indicable by the theorem of G.Baumslag. Therefore $G_{ab}$,
the abelianization of $G$, is infinite (see \cite{GB93} for more
information). Now let just one of the factors be torsion-free.
There exists an epimorphism from $G$ onto
\begin{eqnarray*}
G^* = \{A/{\tau A} \ast B/{\tau B} ; C/{\tau C}\},
\end{eqnarray*}
the amalgamated product of finitely generated torsion-free
nilpotent groups. Therefore ${G^*}_{ab}$ is infinite by the
first case, implying that $G_{ab}$ is infinite.
\end{proof}
\section{Proof of Theorem \ref{Not_Perfect}}
In this section we first recall the Frattini Subgroup Theorem.
Recall that the Frattini subgroup of $G$, $\Phi G$, is the
intersection of all maximal subgroups of $G$. In particular, $\Phi
G$ is characteristic. We also recall that $g \in G$ is called a
non-generator of $G$, if whenever $G=gp(X,g)$ then $G=gp(X)$. A
theorem of Frattini states that $\Phi G$ is the set of all
non-generators in $G$. In particular if $G$ is a finitely
generated nilpotent group, then $\delta_2 G \leqslant \Phi G$,
where $\delta_2 G = [G, G]$ (see \cite{DR95} for more
information).
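For example, let $H = gp(x,y)$ be the free nilpotent group of
class $2$ of rank $2$, and put $z = [x,y]$, so that $\delta_2 H =
gp(z)$ is central. Then $z$ is a non-generator: $H = gp(x,y,z) =
gp(x,y)$. More generally, $gp(C, \delta_2 H) = H$ forces $gp(C) =
H$ for any subgroup $C$ of $H$; this is precisely how the Frattini
theorem is used in the proof below.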
\begin{proof} Let $A$ and $B$ be two finitely generated nilpotent
groups, and let $C_A$ and $C_B$ be proper subgroups of $A$ and $B$
respectively. We want to show that the generalized free product $G
= \{ A \ast B ; C_A = C_B \}$ is not perfect, that is that the
abelianization $G_{ab}$ of $G$ is not trivial.
\newline
\noindent By Lemma \ref{onto_abelianization}, $G_{ab}$ maps onto
\begin{eqnarray*}
D = A_{ab}/{{gp({\bar{c}}_a | c_a \in C_A)}} \times
B_{ab}/{gp({{\bar{c}}_b}^{-1} | c_b \in C_B)}.
\end{eqnarray*}
\noindent
where $\bar{c_a}$ and $\bar{c_b}$ are the images of $c_a
\in C_A$ and $c_b \in C_B$ in $A_{ab}$ and $B_{ab}$ respectively.
\newline
We claim that under the conditions of the Theorem $D$ is not
trivial. In order to prove the claim, we will show that
\begin{eqnarray*}
A_{ab}/{gp({\bar{c}}_a)} \not= \{1\} \;\; \text{ and }
B_{ab}/{gp({{\bar{c}}_b}^{-1})} \not= \{1\}.
\end{eqnarray*}
Since $C_A$ is a proper subgroup of $A$, and $\delta_2 A= [A,A]$
consists of non-generators by the Frattini Subgroup Theorem,
$gp(C_A , \delta_2 A)$ is a proper subgroup
of $A$, so
\begin{eqnarray*}
A/{gp(C_A , \delta_2 A)} & \not\simeq & \{1\}.
\end{eqnarray*}
\noindent
By using the third isomorphism theorem we have:
\begin{eqnarray*}
\frac{A_{ab}}{gp(\bar{c_a})} \simeq \frac{\frac{A}{\delta_2
A}}{\frac{gp(C_A, \delta_2 A)}{\delta_2 A}} \simeq \frac{A}{gp(C_A
,\delta_2 A)} \not\simeq \{1\}.
\end{eqnarray*}
\noindent Similarly
\begin{eqnarray*}
{B_{ab}} /{gp({\bar{c_b}}^{-1})} \not= \{1\}.
\end{eqnarray*}
\noindent This proves our claim that $D \not= \{1\}$, and so
$G_{ab} \not= \{1\}$. Hence $G$ is not perfect.
\end{proof}
\section{Proof of Theorem \ref{Cyclic_amalgam}}
\begin{proof} Let $A$ and $B$ be two finitely generated nilpotent
groups. Let $a$ be a non-identity element of $A$, and $b$ be a
non-identity element of $B$. We can find $m \geqslant 1$ and $n
\geqslant 1$ such that $1 \not= a \in {\xi}_{m+1} A \backslash
{\xi}_{m} A$, and $1 \not= b \in {\xi}_{n+1} B \backslash
{\xi}_{n} B$, (where ${\xi}_i A$ is the $i-$th term of the upper
central series of $A$, and ${\xi}_j B$ is the $j-$th term of the
upper central series of $B$). Let $G=\{A \ast B; a=b \}$ be the
generalized free product of $A$ and $B$ amalgamating $a$ with $b$.
Let $D$ be the central product of $A/{{\xi}_m A}$ and $B/{{\xi}_n
B}$ amalgamating $a{\xi}_m A$ with $b{\xi}_n B$,
\begin{eqnarray*}
D= \{{A/{{\xi}_m A}} \times B/{{\xi}_n B}; a{\xi}_m A= b{\xi}_n B
\}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
C {\xi}_{n}B / {\xi}_{n}B \simeq C / (C \cap {\xi}_{n}B)
\end{eqnarray*}
and
\begin{eqnarray*}
C {\xi}_{m}A / {\xi}_{m}A \simeq C / (C \cap {\xi}_{m}A)
\end{eqnarray*}
are cyclic groups, so the identification
\begin{eqnarray*}
a{\xi}_m A= b{\xi}_n B
\end{eqnarray*}
is legitimate.
Map $G$ into $D$ and let $K$ be the kernel of this map. Then
observe that $K \cap C = \{1\}$, where $C = gp(a)=gp(b)$.
Therefore $K$ is a free product of conjugates of subgroups of $A$
and $B$, and a free group by the Hanna Neumann Theorem, see
Section \ref{Hanna}. So $K$ is residually solvable. Hence $G$ is
an extension of a residually solvable group by a solvable group.
\end{proof}
\section{The amalgamated subgroup is central in all factors}
\subsection{Generalized central products and some related results}
Here we introduce a general definition of the generalized central
product of an arbitrary number of factors. Let us mention the work
of D.Robinson in \cite{DR95} for finite number of factors. We
adopt the notation used for generalized free products \cite{GB93}:
\begin{definition} \label{gen_cent_prod_def}
Suppose that
\begin{eqnarray*}
\{ A_i = \langle X_i;R_i \rangle |i \in I\}
\end{eqnarray*}
is an indexed family of presentations
\begin{eqnarray*}
A_i = \langle X_i ; R_i \rangle
\end{eqnarray*}
of the groups $A_i$, and suppose $C$ is another group equipped
with monomorphisms
\begin{eqnarray*}
\phi_i : C \rightarrow A_i \; \text{ and } C \phi_i \leqslant \xi A_i
\;\; (\text{for all } i \in I, \text{ where } \xi A_i \text{ is
the center of } A_i).
\end{eqnarray*}
We term the group $A$ defined by the presentation
\begin{eqnarray*}
A = \langle \cup X_i ; \cup R_i \cup \{ c \phi_i c^{-1} \phi_j |
c\in C,\; i, j \in I \} \cup \{ [x_i,x_j] \;|\; x_i \in X_i,\, x_j \in X_j,\; i \not= j \} \rangle,
\end{eqnarray*}
where we assume that the $X_i$ are disjoint, the generalized
central product of $A_i$ amalgamating the central subgroup $C$.
\begin{eqnarray*}
A = {\prod_{i \in I}}^{\times} \{A_i ; C\}.
\end{eqnarray*}
If $C=1$, then $A$ is termed the direct product of the $A_i$.
\end{definition}
According to Von Dyck's theorem, there are canonical homomorphisms
$\mu_i$ of each $A_i$ to $A$. We will prove that the $\mu_i$ are
monomorphisms, and, if we identify $A_i$ with $A_i \mu_i$, then
\begin{eqnarray*}
c \phi_i = c \phi_j \; \text{for all } c \in C, i,j \in I.
\end{eqnarray*}
Thus, we can identify $C$ with any of its images $C \phi_i$ which
are already identified with $C \phi_j \mu_i$. Then $A_i \cap A_j =
C \; (i,j \in I, i \not= j)$ and $A=gp({\cup}_{i \in I}A_i)$. $A$
can also be written as
\begin{eqnarray*}
A = {({\prod_{i \in I}}^{\times} A_i)}/{gp(c \phi_i c^{-1} \phi_j|
i,j \in I,\; c \in C)}.
\end{eqnarray*}
\begin{lemma} \label{central_any_factor}
$\mu_i : A_i \rightarrow {({\prod_{i \in I}}^{\times} A_i)} / gp(c
\phi_i c^{-1} \phi_j | i, j \in I, c\in C)$ is a monomorphism for
all $i \in I$.
\end{lemma}
\begin{proof}
Put $S = gp(c \phi_i c^{-1} \phi_j | i, j \in I, c \in C),
\;\text{ where } C \stackrel{\phi_i}{\rightarrow} A_i.$ To show
that $\mu_i$ is a monomorphism we must show that $\ker \mu_i =1$.
Let $a \in A_i$ and $a \in \ker \mu_i$, so that $a \mu_i = 1$. We
want to show that $a=1$. A generic element in $S$ has the
following form:
\begin{eqnarray*}
s = (c_1 \phi_{i_1} {c_1}^{-1} \phi_{j_1}) \cdots (c_n \phi_{i_n}
{c_n}^{-1} \phi_{j_n}).
\end{eqnarray*}
Since $a \mu_i = 1$, then $a \in S$, and
\begin{eqnarray*}
a = (c_1 \phi_{i_1} {c_1}^{-1} \phi_{j_1}) \cdots (c_n \phi_{i_n}
{c_n}^{-1} \phi_{j_n}).
\end{eqnarray*}
Let us consider two cases: the case where none of $i_r$ and $j_r$
are equal to $i$,
and the case where some of them are equal to $i$.\\
Case 1: Since none of the subscripts are equal to $i$, this
implies that
\begin{eqnarray*}
a \in gp(A_k |k \not=i).
\end{eqnarray*}
But $a$ is also an element in $A_i$. Therefore
\begin{eqnarray*}
a \in A_i \cap gp(A_k | i \not= k) =1
\end{eqnarray*}
(by a property of the direct product), so $a=1$.
\newline
Case 2: Now suppose that some of the indexes are equal to $i$, say
$i_l =i$. Note that for $c \in C$, $c \phi_i$ is central in $A_i$.
Thus we have:
\begin{eqnarray*}
a=(\prod_{i_l} c_l \phi_{i_l} {c_l}^{-1} \phi_{j_l})(\prod_{k_m
\not= i} c_1 \phi_{k_1} {c_1}^{-1} \phi_{k_1} \cdots c_n
\phi_{k_n} {c_n}^{-1} \phi_{k_n}).
\end{eqnarray*}
By a similar argument,
\begin{eqnarray*} (\prod_{k_m \not= i} c_1 \phi_{k_1}
{c_1}^{-1} \phi_{k_1} \cdots c_n \phi_{k_n} {c_n}^{-1} \phi_{k_n}
) = 1.
\end{eqnarray*}
Now
\begin{eqnarray*}
\prod_{i_l} c_l \phi_{i_l} & = & ({\prod} c_l) \phi_{i} = c
\phi_i.
\end{eqnarray*}
Hence
\begin{eqnarray*}
a & = & ( c \phi_{i})(\prod_{j_l \not= i} {c_l}^{-1} \phi_{j_l}).
\end{eqnarray*}
We have
\begin{eqnarray*}
a c^{-1} \phi_{i} = (\prod_{j_l \not= i} {c_l}^{-1} \phi_{j_l})
\in A_i \cap gp(A_k| k \not=i) =1,
\end{eqnarray*}
so $a = c \phi_{i}$. This implies that $a= 1$, since
\begin{eqnarray*}
a= c \phi_i = (\prod_{i_l} c_l \phi_{i_l} {c_l}^{-1} \phi_{j_l}).
\end{eqnarray*}
The right hand side has an even number of factors, so this is
possible only if it is equal to $1$, as required.
\end{proof}
\subsection{Proof of Theorem \ref{finiteNilpotentRS}}
Before proving the Theorem, we bring together some of the facts
and related lemmas that we will use in the proof. One of these
facts is that a direct product of finitely many nilpotent groups
is nilpotent. However, we note that a direct product of infinitely
many nilpotent groups is not necessarily nilpotent (see D. Segal's
book \cite{DS83} for more information).
\begin{lemma} \label{Central_Prod_Nil} The generalized central product of finitely many nilpotent groups is nilpotent.
\end{lemma}
\begin{proof}
The generalized central product of nilpotent groups is the
quotient of the direct product of finitely many nilpotent groups,
(which is again nilpotent, \cite{DS83} page 6) and another
nilpotent group. The quotient of nilpotent groups is nilpotent.
Therefore the generalized central product of finitely many
nilpotent groups is nilpotent.
\end{proof}
We note that the generalized central product of finitely many
solvable groups is solvable. For the case of abelian factors, the
generalized central product of an arbitrary number of abelian
groups is abelian.
\newline
Now we prove Theorem \ref{finiteNilpotentRS}:
\begin{proof} Suppose that $\{A_i| i \in I\}$ is an indexed family of
finitely generated nilpotent groups and let $G$ be the generalized
free product of the $A_i$, with amalgamating subgroup $C$:
\begin{eqnarray*}
G = \{{\prod_{i \in I}}^{\ast} A_i ; C \}.
\end{eqnarray*}
Let $S$ be the generalized central product of $A_i$ (see
Definition \ref{gen_cent_prod_def}):
\begin{eqnarray*}
S = {{\prod_{i \in I}}^{\times} A_i}/{gp(c \phi_i \; c^{-1}
\phi_j| i,j \in I,\; c \in C)}.
\end{eqnarray*}
There are canonical homomorphisms of the $A_i$ into $S$.
Since these homomorphisms coincide on $C$, they can be extended to
a homomorphism $\mu$ from $G$ into $S$. Let $K$ be the kernel of
$\mu$. Since the $A_i$ are of bounded class, $S$ is nilpotent, and
it follows that $G/K$ is nilpotent. By Lemma
\ref{central_any_factor}, $\mu$ is one-to-one restricted to each
factor, i.e.
\begin{eqnarray*}
K \cap A_i =1 \text{ for all } i \in I.
\end{eqnarray*}
So, by the theorem of Hanna Neumann mentioned in Section
\ref{Hanna}, $K$ is free. Hence $G$ is a nilpotent extension of a
free group, so is also residually solvable.
\end{proof}
The following proposition is for the case of two finitely
generated solvable factors.
\begin{proposition} \label{central} The generalized free product of two finitely generated
solvable groups amalgamated by central subgroups, is a solvable
extension of a residually solvable group. Furthermore such groups
are residually solvable.
\end{proposition}
\begin{proof} Let $N$ be the normal closure of $B$ in $G$, i.e.
$N = {gp}_G (B)$. Since $A/C$ is solvable and $G/N \simeq A/C$,
$G/N$ is a solvable group. Since $N$ is an iterated, infinite
untwisted double of copies of $B$, which therefore can be mapped
onto $B$ with free kernel, $N$ is residually solvable. So $G$ is a
solvable extension of a residually solvable group.
\end{proof}
For the case of abelian factors, we have the following corollary:
\begin{corollary} \label{infiniteabelianRS} The generalized free product of an
arbitrary number of abelian groups is an extension of a free group
by an abelian group. Further, such groups are residually solvable.
\end{corollary}
\section{Proof of Theorem \ref{abelianonefactor}}
Here we prove Theorem \ref{abelianonefactor}. Note that this is a
special instance of the case where the amalgamating subgroup is
central in only one of the factors.
\begin{proof} Let $A$ and $B$ be finitely generated groups, with $A$ torsion-free abelian and $B$ torsion-free nilpotent,
and let $G$ be their amalgamated product with amalgamating
subgroup $C$,
\begin{eqnarray*}
G= \{A \ast B;C_A = C_B\}.
\end{eqnarray*}
Then $C$ is a direct factor of a subgroup $A_1$ of $A$ of finite
index, i.e.
\begin{eqnarray*}
A_1 = C \times H \;\; \text{ where } [A : A_1] = n < \infty.
\end{eqnarray*}
There is an epimorphism from $G$ onto $A/{A_1} = \bigcup_{i =1}^n
a_i A_1$, where the $a_i$ are a distinct set of coset
representatives of $A_1$ in $A$. The kernel of this map is $K =
gp_G (B, A_1) = \{{\prod_{i = 1}^n}^{\ast} B^{a_i} \ast A_1; C\}$.
Now let $D$ be the normal closure of finitely many copies of $B$
in $K$:
\begin{eqnarray*}
D = gp_K (\bigcup_{i=1}^n B^{a_i}) = \{{{\prod^n_{i=1}}}^*
B^{a^i}; C\}.
\end{eqnarray*}
Since $[a_i, C]=1$, $D$ is the generalized free product of a
finite number of doubles, and hence $D$ is residually solvable. So
$K$ is an extension of a residually solvable group by an abelian
group. Therefore $G$ is residually solvable-by-abelian-by-finite
abelian, and hence residually solvable.
\end{proof}
\subsection{An example where one of the factors is abelian}
Here we construct an example to illustrate the above theorem.
\begin{example} Let $A = gp(a,b,c)$ be a free abelian group, and $B
= gp(x, y, z)$ be a free nilpotent group of class $2$.
$C_A=gp(a^2, b)$ and $C_B = gp(y, z)$ are free abelian groups of
rank $2$. If we form
\begin{eqnarray*}
G=\{ A \ast B ; a^2 = y , b = z \},
\end{eqnarray*}
then $G$ is residually solvable. Note that $C_B$ is normal but not
central in $B$.
\end{example}
\begin{proof} The presentations of $A$, $B$, $C_A$ and $C_B$ are as
follows:
\begin{eqnarray*}
&& A = \langle a, b, c; [a,b], [b,c], [a,c] \rangle \; \text{ free of rank $3$},\\
&& C_A = gp(a^2 , b) = \langle a^2, b; [a^2,b] \rangle \; \text{ free abelian of rank $2$},\\
&& B = \langle x, y, z; [x,y]=z, [x,z], [y,z] \rangle \; \text{ free nilpotent of class $2$ and of rank $2$},\\
&& C_B = gp(y ,z)=\langle y, z; [y, z] \rangle \; \text{free
abelian of rank $2$}.
\end{eqnarray*}
Define an isomorphism $\phi$ which maps
\begin{eqnarray*}
a^2 \mapsto y \; \text{ and } b \mapsto z.
\end{eqnarray*}
Form the generalized free product of $A$ and $B$ identifying $C_A$
with $C_B$:
\begin{eqnarray*}
G = \{A \ast B; C_A = C_B\}.
\end{eqnarray*}
$C_B$ is normal in $B$ but is not central in $B$, and observe that $\xi B \cap C_B \not= \{1\}$.\\
However, $C_A$ is central and therefore normal in $A$. On the
other hand the quotient group $B/{C_B}$ is an infinite cyclic
group generated by $x C_B$,
\begin{eqnarray*}
B/{C_B} = gp(x C_B).
\end{eqnarray*}
Let $K$ be the normal closure of $A$ in $G$,
\begin{eqnarray*}
K= gp_G (A) = gp_B (A) = gp(A^{x^i}| i \in \mathbb{Z}).
\end{eqnarray*}
$G / K$ is an infinite cyclic group,
\begin{eqnarray*}
G / K = gp ( x K).
\end{eqnarray*}
Note that
\begin{eqnarray*}
a^{x^i} =b^i a^2 \;\; \text{ and } b^{x^i} =b \; \text{for all } i
\in \mathbb{Z}.
\end{eqnarray*}
Hence
\begin{eqnarray*}
A \cap A^{x^i} = C_A \;\;\; (\forall i \in \mathbb{Z},\, i \not= 0)
\end{eqnarray*}
so defining
\begin{eqnarray*}
A_i := gp(A, A^{x^i})= \{ A \ast A^{x^i}; C_A \}
\end{eqnarray*}
we have
\begin{eqnarray*}
\forall w_i \in A_i \; \exists w_j \in A_j \text{ s.t. } w_i =
w_j^{x^{(i-j)}}
\end{eqnarray*}
Thus \begin{eqnarray*} K = \langle A_i, x^i (i \in \mathbb{Z});
H_i = {H_j}^{x^{(i-j)}}(i,j \in \mathbb{Z}) \rangle.
\end{eqnarray*}
Then $K$ is an HNN-extension, where the base groups are the union
of $A_i \; (i \in {\mathbb Z})$, the stable letters are $x_i = x^i
(i \in \mathbb{Z})$, and the associated subgroups are $H_i$, where
$H_i$ is a subgroup of $A_i$ (for $i \in {\mathbb Z}$). In
particular by using the subgroup theorem, we can express $K$ as
the generalized free product of conjugates of free abelian groups.
\begin{eqnarray*}
K = \{{\prod_{i \in {\mathbb Z}}}^* A^{x^i}; C_A\}.
\end{eqnarray*}
Hence $K$ is an extension of a free group by an abelian group.
Consider the following short exact sequence:
\begin{eqnarray*}
1 \rightarrow K \rightarrow G \rightarrow B/{C_B} \simeq \mathbb{
Z } \rightarrow 1.
\end{eqnarray*}
$G/K$ is an infinite cyclic group, and therefore $G$ is an
extension of a residually solvable group by an infinite cyclic
group; thus $G$ is residually solvable.
\end{proof}
\section{Proof of Theorem \ref{finind}}
We first give some background we need to prove Theorem
\ref{finind}.
\subsection{The Mal'cev completion and the fundamental theorem of
torsion-free nilpotent groups} Here we recall some definitions and
theorems that we use to prove Theorem \ref{finind}. A group $G$ is
called complete if, for every element $a$ of $G$ and every natural
number $n$, the equation $x^n =a$ has at least one solution in
$G$; in other words, every root of every element of $G$
belongs to $G$. There is a theorem that states that a finitely
generated nilpotent group is complete if and only if it contains
no proper subgroup of finite index; moreover, every group is contained
in a complete group. Let $\mathcal{D}$ denote the class
of groups in which extraction of roots is always uniquely
possible. Let us term a minimal torsion-free nilpotent
$\mathcal{D}$-group, $m(G)$, containing a given torsion-free
nilpotent group $G$, a Mal'cev completion of $G$.
Let $G$ be a finitely generated torsion-free nilpotent group of
class $c$, let $g$ be an element of $G$ and let $n$ be a positive
integer. Then $G$ can be embedded in a finitely generated
torsion-free nilpotent group of class $c$ in which $g$ has an
$n$-th root. Therefore every finitely generated torsion-free
nilpotent group can be embedded in a nilpotent
$\mathcal{D}$-group.
We recall the Mal'cev Theorem: let $A$ be a torsion-free nilpotent
group, then $A$ can be embedded in a torsion-free nilpotent
$\mathcal{D}$-group. As a corollary we have: let $G$, $H$ be
torsion-free nilpotent groups, let $\phi$ be a homomorphism of $G$
into $H$ and let $m(G)$ and $m(H)$ be any Mal'cev completions of
$G$ and $H$; then $\phi$ can be extended uniquely to a
homomorphism $m(\phi)$ of $m(G)$ onto $m(H)$. Also if $m(G)$ and
$m'(G)$ are Mal'cev completions of $G$ then they are isomorphic.
We now state Mal'cev's fundamental theorem of torsion-free
nilpotent groups. If $G_1 ^*$ and $G_2 ^*$ are two completions of
$G$, then there exists an isomorphism between them that extends
the identity automorphism of $G$, and this isomorphism is unique
(for more details see \cite{GB71_}, \cite{AGK60}).
\subsection{Proof of Theorem \ref{finind}}
Note that here the condition of being torsion-free is necessary,
because of Mal'cev's Fundamental Theorem.
\begin{proof} Let $A$ and $B$ be two finitely generated torsion-free nilpotent
groups, and let $C$ be a proper subgroup of $A$ and $B$, such that
the index of $C$ in, say, $A$ is finite. We want to show that the
generalized free product of $A$ and $B$ with $C$ amalgamating,
\begin{eqnarray*}
G=\{ A \ast B; C \},
\end{eqnarray*}
is a finitely generated torsion-free nilpotent extension of a free
group. Let $m(A)$ and $m(B)$ be Mal'cev completions of $A$ and $B$
respectively. Since the index of $C$ in $A$ is finite, $m(A)$
is also a Mal'cev completion of $C$. There exists a monomorphism
$\mu$ from $C$ into $B$. Then $\mu$ can be extended uniquely to a
homomorphism $m(\mu)$ of $m(A)$ onto $m(B)$. Now, there is a
homomorphism from $G$ into $m(B)$, which is consistent on $C$, and
injective when restricted to $A$ and $B$. Let $K$ be the kernel of
this homomorphism, then we have
\begin{eqnarray*}
K \cap A = \{1\} = K \cap B.
\end{eqnarray*}
Therefore $K$ is free by the result of Hanna Neumann mentioned in
Section \ref{Hanna}. Hence $G$ is a torsion-free nilpotent
extension of a free group.
\end{proof}
\section{Proof of Theorem \ref{finite_double}}
We first need to prove the following lemma which will be used in
proving Theorem \ref{finite_double}.
\begin{lemma} \label{double_key_lemma} If $A$ is a
group, $C$ is a subgroup of $A$, $\phi$ is an isomorphic mapping
of $A$ onto a group $B$, and $D$ is the amalgamated product of $A$
and $B$ amalgamating $C$ with $C \phi$, that is
\begin{eqnarray*}
D = \{ A \ast B ; C = C \phi \},
\end{eqnarray*}
then there is a homomorphism, $\psi$, from $D$ onto one of the
factors, with kernel:
\begin{eqnarray*}
\ker \psi = gp(a (a \phi)^{-1} | a \in A).
\end{eqnarray*}
Furthermore $\psi$ is injective when restricted to each factor.
\end{lemma}
\begin{proof} Let $\alpha$ be the identity homomorphism from $A$ onto itself, and let $\beta = {\phi}^{-1}$ be
the homomorphism from $B$ onto $A$. These homomorphisms can be
extended to a homomorphism $\psi$ from $D$ onto $A$ (\cite{GB93} page
103, \cite{BN49}). Since $\psi(C a b) = C a (b \phi^{-1})$, it
follows easily that $K = \ker \psi = gp (a (a \phi)^{-1}| a \in A).$ By
the way that $\alpha$ and $\beta$ are defined, it follows that
this homomorphism is one-to-one when restricted to either $A$ or $B$.
\end{proof}
Now we are ready to prove Theorem \ref{finite_double}.
\begin{proof} For a fixed $i \in I$, let $\phi : G \rightarrow A_i$ be an epimorphism, and let $K$ be the kernel of $\phi$.
$K$ is free, since $\phi$ restricted to each factor is injective,
and
\begin{eqnarray*}
A_i \cap K =\{1\} \;\;(\forall i \in I).
\end{eqnarray*}
Therefore $G$ is free-by-nilpotent, and hence residually solvable.
\end{proof}
\section{Proof of Proposition \ref{nil_neg}}
\begin{proof}
Let $A = gp(a,x)$ be a free nilpotent group of class $2$, and $B =
gp(b,y)$ be a free nilpotent group of class $3$. Then $C_A=gp(a,
a^x)$ and $C_B = gp( b, [b, b ^y])$ are free abelian groups of
rank $2$. If we form
\begin{eqnarray*}
G & = & \{ A \ast B ; C_A = C_B \}\\
& = & \{ A \ast B ; a=b, a^x = [b , b^y] \},
\end{eqnarray*}
then
\begin{eqnarray*}
a & = & [a, a^y]^x = [a^x , a^{y^x}],
\end{eqnarray*}
so $a$ lies in every term of the derived series of $G$. Hence $G$
is not residually solvable. Now put $N={gp}_{G} (a)$, which is
perfect. Since a poly-residually solvable group cannot have a
non-trivial perfect subgroup, $G$ is not poly-residually solvable.
\end{proof}
Note that this contrasts with the case where residual finiteness
of the groups has been studied, where Baumslag proved that
generalized free products of finitely generated torsion-free
nilpotent groups in general are meta-residually finite.
We have shown that if the amalgamating subgroup of the generalized
free product of torsion-free nilpotent groups is cyclic
(Theorem \ref{Cyclic_amalgam}) or central in both factors
(Proposition \ref{central}) then these groups are residually solvable.
One may conjecture that the same would hold when the amalgamating
subgroup is abelian. However, this is false by the example of
Baumslag mentioned above.
This can be easily shown using Baumslag's example of an
amalgamated product, which can be modified to give an example
where the factors are finite, but the product is not residually
solvable.
\section{Proof of Theorem \ref{Poly-RS}}
\label{Poly_RS} Let us recall the definition of a poly-property.
Let $X$ be a property (or class) of groups. A finite normal series
\begin{eqnarray*}
1 = G_0 \leqslant G_{1} \leqslant \cdots \leqslant G_l = G
\;\;\;\;\; (1)
\end{eqnarray*}
of $G$ is termed a poly-$X$ series for $G$ if
\begin{eqnarray*}
{G_{i+1}}/{G_i} \in X \text{ for } i=0, \cdots, l-1.
\end{eqnarray*}
A group $G$ is termed poly-$X$ if it has at least one poly-$X$
series. The length of the series (1) is $l$.
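For instance, a metabelian group is poly-abelian with a series of
length $l = 2$. Similarly, the free-by-nilpotent groups occurring
in Theorems \ref{finiteNilpotentRS}, \ref{finind} and
\ref{finite_double} are poly-residually solvable of length $2$:
the free kernel is residually solvable (indeed residually
nilpotent), and the nilpotent quotient, being solvable, is
trivially residually solvable.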
The proof of Theorem \ref{Poly-RS} uses an upper central
filtration approach:
\begin{proof}
Let $G=\{A \ast B; C_A=C_B\}$ be the generalized free product of
finitely generated nilpotent groups $A$ and $B$. Assume that
$\xi_i A$ and $\xi_i B$ are the $i$-th terms of the upper central
series of $A$ and $B$ respectively, and that $(\xi_i A), (\xi_i B)$ (where
$i$ is bounded by the maximum of the nilpotency classes of $A$ and $B$)
are solvable filtrations of $A$ and $B$ which are also
compatible. We want to show that $G$ has an invariant
series
\begin{eqnarray*}
1=G_0 \leq G_1 \leq \cdots \leq G_k = G \;\; (k < \infty)
\end{eqnarray*}
such that ${G_{i+1}}/{G_i}$ is residually solvable (for $i=0,
\cdots, k-1$). Map
\begin{eqnarray*}
G \rightarrow G/{gp_G (C_A \cap \xi_1 A)}.
\end{eqnarray*}
Note that, since $\xi G = \xi A \cap \xi B \cap C_A$, we have
\begin{eqnarray*}
gp_G (C_A \cap \xi_1 A) < \xi_1 G.
\end{eqnarray*}
\begin{eqnarray*}
G/{gp_G (C_A \cap \xi_1 A)} \simeq &&\{A/{gp_G (C_A \cap \xi_1 A)} \ast B/{gp_G (C_A \cap \xi_1 A)};\\
&& {C_A}/{gp_G (C_A \cap \xi_1 A)} = {C_B}/{gp_G (C_A \cap \xi_1 A)}\}\\
G/{gp_G (C_A \cap \xi_1 A)} \simeq &&\{ A/{C_A \cap \xi_1 A} \ast B/{C_B \cap \xi_1 B};\\
&& {C_A}/{C_A \cap \xi_1 A} = {C_B}/{C_B \cap \xi_1 B)}\}.
\end{eqnarray*}
One can check that this is residually solvable.
\begin{eqnarray*}
&& {G/{gp_G (C_A \cap \xi_1 A)}}/{gp_G ({\xi_1 A}/{C_A \cap \xi_1
A},
{\xi_1 B}/{C_B \cap \xi B})} \simeq \\
&& \{ A/{\xi_1 A} \ast B/{\xi_1 B}; {C_A \xi_1 A}/{\xi_1 A} = {C_B
\xi_1 B}/{\xi_1 B}\}.
\end{eqnarray*}
Inductively we can show that each successive quotient is residually
solvable, and this completes the proof of the theorem.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
\IEEEPARstart{T}{he} performance of modern communication networks is largely constrained by the limited battery life of wireless devices (WDs). Once the energy is depleted, a WD needs manual replacement/recharging of its battery, which can result in frequent interruption to normal device operation and severe communication performance degradation. Alternatively, the recent development of wireless energy transfer (WET) technology enables a novel networking paradigm named wireless powered communication network (WPCN) \cite{2015:Bi,2015:Lu,2016:Bi}, where the information transmissions of WDs can be continuously and remotely powered by the microwave energy transmitted by dedicated energy nodes. The use of WET can effectively reduce the battery replacement/recharging cost and also improve the communication quality with reduced energy outages. Given its potential to tackle the critical energy constraints, WET can be expected to be an important building block in future wireless communication networks.
There are extensive studies on implementing WPCN in low-power applications, such as wireless sensor networks (WSNs) and radio frequency identification (RFID) networks \cite{2012:Xie}, to prolong the network operating lifetime or increase the data rate \cite{2016:Bi1,2016:Bi2,2014:Huang}. In a WPCN, the energy node and the information access point (that receives information from WDs) can either be separately located or co-located as a hybrid access point (HAP) \cite{2016:Bi1}. While the former scheme enjoys more degrees of freedom in device placement, the latter saves network deployment cost and makes it easier for the HAP to centrally coordinate the energy and information transmissions. In this paper, we focus on studying a WPCN using a HAP for both energy provision and information access.
The throughput performance of a multi-user WPCN coordinated by a HAP is first studied in \cite{2014:Ju}, which proposes a harvest-then-transmit protocol in which the HAP first broadcasts radio frequency (RF) energy to all WDs in the downlink, and the WDs then transmit their individual information to the HAP with time-division-multiple-access (TDMA) in the uplink using their harvested energy. It is also revealed in \cite{2014:Ju} that such a design leads to a severe user unfairness problem, namely the ``doubly near-far'' problem, due to distance-dependent power loss. In particular, some devices' data rates can be two orders of magnitude smaller than the others', which directly decreases the sensing accuracy of a WPCN. One effective method to improve the throughput fairness is through user cooperation, where close-to-HAP users help forward the messages of far-away users \cite{2014:Ju1,2015:Chen,2017:Zhong}. Interestingly, \cite{2014:Ju1} shows that by helping the far-away user in a two-user WPCN, the close-to-HAP user can also improve its data rate, resulting in a win-win situation. Further, the two-user cooperation is later studied when the two cooperating users form a distributed virtual antenna array for information transmission in \cite{2017:Zhong}, and extended to a general multi-user cooperation scenario in \cite{2015:Chen}.
The above studies on the throughput performance of WPCN mostly consider using a single-antenna HAP and focus on optimizing the transmit time allocation to improve the throughput performance. The single-antenna HAP, however, suffers from very low energy transfer efficiency due to the fast signal power attenuation of omnidirectional energy transmission. Instead, when the HAP is equipped with multiple antennas, it can apply the energy beamforming (EB) technique \cite{2013:Zhang} to focus the transferred energy in desired directions, enhancing the energy transfer efficiency to specific devices and thus the data rates of the energy-harvesting devices. The optimal EB design has been studied in several practical setups, e.g., training sequence design \cite{2015:Zeng}, hardware feedback complexity constraints \cite{2014:Xu}, and per-antenna transmit power constraints \cite{2017:Rezaei}. Besides, the multiple antennas can also improve the communication performance by leveraging spatial diversity or multiplexing gains in the uplink.
A number of recent works have considered the design of WPCN when a multi-antenna HAP is applied. For instance, \cite{2014:Liu} first studies the optimal energy and information beamforming design and uplink/downlink transmit time allocation, and shows that the use of multiple antennas can significantly improve the throughput performance compared to the single-antenna counterpart in \cite{2014:Ju}. The throughput optimization is then studied in \cite{2015:Yang} when the HAP has a large number of antennas (i.e., massive MIMO). Nonetheless, the doubly near-far problem in WPCN still exists regardless of the number of antennas at the HAP. Therefore, cooperation methods are also widely adopted when a multi-antenna HAP is concerned. For instance, \cite{2017:Liang} considers using a fixed single-antenna relay to forward the message of an energy-harvesting user to a multi-antenna HAP, and studies the optimal beamforming design and transmit time allocation. \cite{2017:Xiong} proposes a group collaboration where two communication groups cooperate with each other under the coordination of a multi-antenna HAP.
In this paper, we consider a cluster-based user cooperation in a WPCN as shown in Fig.~\ref{101}, where a multi-antenna HAP applies WET to power a cluster of remote WDs and receives their data transmissions. This may correspond to a practical scenario in WSNs, where a mobile HAP pauses in its route to power a cluster of densely deployed sensors monitoring a particular area. As in a conventional WSN, we designate one of the WDs as the cluster head (CH) to forward the information transmissions of the other WDs, referred to as cluster members (CMs), to the HAP. Intuitively, the throughput performance of some far-away WDs can be improved thanks to the cooperation. However, as with cluster-based cooperation in conventional WSNs (e.g., \cite{2009:Chen}), the CH inevitably suffers from high energy consumption as it needs to transmit all the users' messages, including its own. For a cluster with a large number of WDs, the CH's limited battery will become the performance bottleneck of the network. To solve this energy imbalance problem, we propose to exploit the capability of multi-antenna energy beamforming at the HAP, where the HAP can focus more transferred power on the CH to compensate for the energy it consumes in assisting the other WDs. The detailed contributions of this paper are as follows.
\begin{itemize}
\item We propose a cluster-based cooperation method in WPCN, where a WD is designated as the CH to forward the information transmissions of the other sensors. To address the high energy consumption of the CH in conventional cluster-based cooperation schemes, we apply the EB technique at the multi-antenna HAP to balance the different energy consumption rates of the WDs.
\item With the proposed cooperation method, we formulate a joint optimization problem of EB design, the transmit time allocation among the HAP and WDs, and the transmit power allocation of the CH, to maximize the minimum data rate achievable among all the WDs (i.e., the max-min throughput) for improved user fairness. An efficient optimal solution algorithm is proposed to solve the non-convex optimization problem.
\item We perform numerical analysis to study the impact of different system setups on the performance of the proposed method. By comparing with other benchmark schemes, we show that the proposed cooperation can effectively improve the throughput performance. Besides, the proposed cooperation method is most effective when the WD that is closest to the cluster center is selected as the CH, the WDs are closely located with strong intra-cluster channels, and the number of cooperating WDs is moderate to support efficient cooperation.
\end{itemize}
The rest of the paper is organized as follows. We introduce the system model and propose the cluster-based cooperation method in Section II. We analyze the per-WD throughput performance in Section III. In Section IV, we formulate the max-min throughput optimization problem and propose an optimal solution algorithm. In Section V, we evaluate the performance of the proposed cooperation using simulations. Finally, the paper is concluded in Section VI.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig1.eps}\\
\caption{A schematic of the considered cluster-based cooperation in WPCN, where W$_0$ is the cluster head and the remaining $(N-1)$ WDs are cluster members.}
\label{101}
\end{figure}
\section{System Model}
\subsection{Channel Model}
As shown in Fig.~\ref{101}, we consider a WPCN consisting of a HAP and $N$ WDs. The HAP is equipped with $M$ antennas ($M \ll N$ in practice), while each WD is equipped with a single antenna. Specifically, the HAP broadcasts wireless energy to, and receives wireless information transmission (WIT) from, the WDs. The HAP has a stable power supply and each WD has a rechargeable battery to store the wireless energy harvested from the HAP. The HAP and all the WDs operate over the same frequency band, where a time division duplexing (TDD) circuit \cite{2013:Zhou} is implemented at both the HAP and the WDs to separate the energy and information transmissions.
In this paper, one of the WDs is selected as the CH that helps relay the WIT of the other CMs. The impact of the CH selection method on the system performance will be discussed in Section V. Without loss of generality, the CH is indexed as W$_0$, and the CMs are indexed as W$_1$, $\cdots$, W$_{N-1}$. All the channels are assumed to be independent and reciprocal and follow quasi-static flat-fading, such that all the channel coefficients remain constant during each block transmission time, denoted by $T$, but can vary across different blocks. The channel coefficient vector between the HAP and W$_{i}$ is denoted by $\mathbf{a}_{i} \in\mathcal{C}^{M\times1}$, where $\mathbf{a}_{i} \sim \mathcal{CN} (\mathbf{0}, \sigma_{i}^2 \mathbf{I})$ and $\sigma_i^2$ denotes the average channel gain, $i=0,1,\cdots, N-1$. Besides, the channel coefficient between the $j$-th CM and the CH is denoted by $c_j\sim \mathcal{CN} ( 0, \delta_{j}^2)$, $j=1,\cdots, N-1$. Here, we use $h_{i} \triangleq |\mathbf{a}_{i}|^2$ and $g_i\triangleq |c_i|^2$ to denote the corresponding channel gains, where $|\cdot|$ denotes the 2-norm operator.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Fig2.eps}\\
\caption{The proposed cluster-based cooperation protocol in WPCN.}
\label{102}
\end{figure}
\subsection{Cluster-based Cooperation Protocol}
The operation of the proposed cluster-based cooperation in a transmission time block is illustrated in Fig.~\ref{102}. At the beginning of a transmission block, channel estimation (CE) is performed within a fixed duration $\tau_{0}$. During the CE stage, the WDs take turns to broadcast their pilot signals, so that the HAP has knowledge of $\mathbf{a}_i$, $i=0,1,\cdots, N-1$, and the CH knows $c_i$, $i=1, \cdots, N-1$. Then, the CH sends its estimates of the $c_i$'s to the HAP, such that the HAP has full knowledge of the CSI in the network.
After the CE stage, the system operates in three phases. In the first phase of duration $\tau_{1}$, the HAP broadcasts wireless energy with fixed transmit power $P$. In the next two phases, of total duration $T-\tau_{0}-\tau_{1}$, the $N$ WDs transmit their independent information to the HAP using their individually harvested energy. Specifically, the $(N-1)$ CMs first transmit in turn to the CH, where the $i$-th CM transmits for $\tau_{2,i}$ amount of time, $i=1, \cdots, N-1$. In the third phase, the CH transmits the decoded messages of the $(N-1)$ CMs along with its own message to the HAP. The time taken to transmit the $i$-th WD's message is denoted as $\tau_{3,i}$, $i=0,1,\cdots, N-1$. Evidently, the time allocations satisfy the following inequality
\begin{equation}
\label{1}
\tau_0+\tau_1+\sum^{N-1}_{i=1}\tau_{2,i}+\sum^{N-1}_{i=0}\tau_{3,i} \leq T.\\
\end{equation}
Notice that $\tau_0$ is a known parameter. Without loss of generality, we assume $T=1$ throughout this paper. Based on the knowledge of global CSI, the HAP can calculate the optimal time allocation and then broadcast it to all the WDs, such that they can keep their time-switching circuits synchronized for either energy or information transmission. Notice that, besides the transmission in the third phase, the HAP can also overhear each CM's message in the second phase, although it is not dedicated to it; this can be used to improve the overall transmission rate compared to decoding the message in the third phase alone. In the next section, we derive the throughput performance of the proposed cooperation protocol and formulate the max-min throughput optimization problem.
\section{Per-WD Throughput Analysis}
In this section, we derive the throughput of each WD achieved by the proposed cluster-based cooperation protocol. The results will be used in the next section to optimize the throughput fairness of the WPCN.
\subsection{Phase I: Energy Transfer}
We notice that the CH needs to transmit $N$ messages in total, which would consume significantly more energy than the other CMs, making the CH the performance bottleneck of the network. To balance the energy consumed and harvested by each WD, the HAP adopts EB to deliver different power to WDs located in different directions. Specifically, in the first phase of duration $\tau_1$, the HAP transmits random energy signals $\mathbf{w}(t) \in \mathcal{C}^{M\times 1}$ on the $M$ antennas, where the transmit power of the HAP is constrained by
\begin{equation}
\label{2}
E\left[|\mathbf{w}(t)|^2\right] = \text{tr}\left(E\left\{\mathbf{w}(t)\mathbf{w}(t)^H\right\}\right) \triangleq \text{tr}(\mathbf{Q}) \leq P,
\end{equation}
where $\text{tr}(\cdot)$ denotes the trace of a matrix, $(\cdot)^H$ denotes the complex conjugate operator, and $\mathbf{Q}\succeq \mathbf{0}$ is the beamforming matrix. Then, the received energy signal by the $i$-th WD is
\begin{equation}
\label{3}
y_i^{(1)}(t) = \mathbf{a}_i^T\mathbf{w}(t) + n^{(1)}_i(t), i = 0, \cdots, N-1,
\end{equation}
where $n_i^{(1)}(t)$ denotes the receiver noise. With the noise power neglected, the amount of energy harvested by the $i$-th WD can be expressed as [7]
\begin{equation}
\label{4}
E_{i}=\eta \tau_1 E\left[|y_i^{(1)}(t)|^2\right] = \eta \tau_1 \cdot \text{tr}(\mathbf{A}_i\mathbf{Q}).
\end{equation}
Here, $\mathbf{A}_i \triangleq \mathbf{a}_i\mathbf{a}_i^H$ and $\eta\in(0,1]$ denotes the energy harvesting efficiency, which is assumed equal for all the WDs.
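To make the energy-harvesting expression concrete, the following Python sketch (illustrative only; the channel realisation and all constants are assumptions, not values prescribed by the analysis) compares the closed form $E_{i}=\eta \tau_1 \text{tr}(\mathbf{A}_i\mathbf{Q})$ with a Monte Carlo average of $\eta\tau_1|y_i^{(1)}(t)|^2$ for the isotropic beamformer $\mathbf{Q}=\frac{P}{M}\mathbf{I}$, for which $\text{tr}(\mathbf{A}_i\mathbf{Q})$ and $E[|\mathbf{a}_i^T\mathbf{w}(t)|^2]$ coincide.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, P, eta, tau1 = 5, 3.0, 0.51, 0.2          # illustrative constants
a = (rng.standard_normal(M) + 1j*rng.standard_normal(M)) / np.sqrt(2)
A = np.outer(a, a.conj())                    # A_i = a_i a_i^H
Q = (P / M) * np.eye(M)                      # isotropic beamformer, tr(Q) = P

closed = eta * tau1 * np.trace(A @ Q).real   # E_i = eta*tau1*tr(A_i Q)

# Monte Carlo over energy signals w(t) ~ CN(0, Q)
w = np.sqrt(P/M) * (rng.standard_normal((10**5, M))
                    + 1j*rng.standard_normal((10**5, M))) / np.sqrt(2)
mc = eta * tau1 * np.mean(np.abs(w @ a)**2)
print(closed, mc)                            # agree up to Monte Carlo error
\end{verbatim}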
\subsection{Phase II: Intra-cluster Transmissions}
We assume that the CMs exhaust their harvested energy to transmit to the CH during the second phase. Then, the transmit power of the $i$-th CM is
\begin{equation}
\label{5}
P_{2, i}=\frac {E_{i}}{\tau_{2,i}} =\eta \frac{\tau_1}{\tau_{2,i}}\text{tr}(\mathbf{A}_i\mathbf{Q}),\ i=1,\cdots,N-1.
\end{equation}
Let $s_{i}^{(2)}(t)$ denote the baseband signal of the $i$-th WD transmitted in the second phase, with $E[|s_{i}^{(2)}(t)|^2]=1$. The received signal at the CH is then expressed as
\begin{equation}
\label{6}
\begin{aligned}
&y_{0,i}^{(2)}(t)= c_i \sqrt{P_{2, i}}s_{i}^{(2)}(t)+n_{i}^{(2)}(t),
\end{aligned}
\end{equation}
where $n_{i}^{(2)}(t)$ denotes the receiver noise with power $E\left[|n_{i}^{(2)}(t)|^2\right]=N_0$. Then, the CH can decode the $i$-th CM's message at a rate given by
\begin{equation}
\label{7}
\begin{aligned}
R_{i}^{(2)}&=\tau_{2,i} \log_{2}\left(1 + \frac{g_i P_{2,i}}{N_0}\right),i=1,\cdots,N-1.
\end{aligned}
\end{equation}
Meanwhile, the HAP can also overhear the transmission of the CMs, such that it receives
\begin{equation}
\label{8}
\mathbf{y}_{H,i}^{(2)}(t) = \mathbf{a}_i \sqrt{P_{2, i}}s_{i}^{(2)}(t)+ \mathbf{n}_{H,i}^{(2)}(t)
\end{equation}
during the $i$-th CM's transmission, where $i=1,\cdots,N-1$, and $\mathbf{n}_{H,i}^{(2)}(t) \sim \mathcal{CN}(\mathbf{0},N_0\mathbf{I})$. For simplicity, we neglect the energy consumed on decoding and assume that only data transmission consumes the harvested energy. However, the proposed method can be easily extended to the case with non-zero decoding energy consumption by including a constant circuit power term.
\subsection{Phase III: Cluster-to-HAP Transmission}
After decoding the CMs' messages, the CH transmits the $(N-1)$ CMs' messages along with its own message one by one to the HAP. Let $s_{0}^{(3)}(t)$ denote CH's baseband signal and $s_{i}^{(3)}(t)$ denote the re-encoded baseband signal of the $i$-th CM transmitted in the third phase. Besides, we assume $E[|s_i^{(3)}(t)|^2]=1$, $i=0,\cdots,N-1$. Let $P_{3,i}$ denote the power used to transmit the $i$-th WD's message. Then, the received signal of the $i$-th WD's message at the HAP is
\begin{equation}
\label{9}
\mathbf{y}_{i}^{(3)}(t) = \mathbf{a}_0 \sqrt{P_{3,i}}s_{i}^{(3)}(t) + \mathbf{n}_{i}^{(3)}(t),\ i=0,1,\cdots, N-1.
\end{equation}
The total energy consumed by CH is upper bounded by its harvested energy $E_0$, i.e.,
\begin{equation}
\label{10}
\sum^{N-1}_{i=0}\tau_{3,i} P_{3,i} \leq \eta \tau_1 \text{tr}(\mathbf{A}_0 \mathbf{Q}).
\end{equation}
We assume that the HAP uses maximal ratio combining (MRC) to maximize the receive signal-to-noise power ratio (SNR), where the combiner output SNR of the $i$-th WD is
\begin{equation}
\label{11}
\begin{aligned}
\gamma^{(3)}_{i} = \frac{|\mathbf{a}_0|^2 P_{3,i}}{N_0}=\frac{h_0 P_{3,i}}{N_0}, \ i=0,\cdots, N-1.
\end{aligned}
\end{equation}
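As a quick numerical check of (\ref{11}) (illustrative only; the symbol model and all constants below are assumptions), one can simulate the received vectors in (\ref{9}), apply the MRC combiner $\mathbf{a}_0/|\mathbf{a}_0|$, and compare the empirical output SNR with $h_0 P_{3,i}/N_0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M, P3, N0, T = 5, 1.0, 1e-2, 10**5           # illustrative constants
a0 = (rng.standard_normal(M) + 1j*rng.standard_normal(M)) / np.sqrt(2)
h0 = np.linalg.norm(a0)**2

s = np.exp(2j*np.pi*rng.random(T))           # unit-power symbols
n = np.sqrt(N0/2) * (rng.standard_normal((T, M))
                     + 1j*rng.standard_normal((T, M)))
# after MRC combining with a0/|a0|, the signal and noise parts are
sig = np.sqrt(P3) * np.linalg.norm(a0) * s
noi = n @ (a0.conj() / np.linalg.norm(a0))
snr = np.mean(np.abs(sig)**2) / np.mean(np.abs(noi)**2)
print(snr, h0 * P3 / N0)                     # empirical SNR vs. (11)
\end{verbatim}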
Denote the time allocation as $\pmb{\tau}=[\tau_1, \tau_{2,1}, \cdots, \tau_{2,N-1}, \tau_{3,0}, \\ \tau_{3,1}, \cdots, \tau_{3,N-1}]'$, and the transmit power allocation as $\pmb{P}=[P_{3,0}, P_{3,1}, \cdots, P_{3,N-1}]'$. Then, the data rate of the CH at the HAP is
\begin{equation}
\label{12}
\begin{aligned}
R_{0}(\pmb{\tau}, \pmb{P})&= \tau_{3,0} \log_{2}\left(1 + \frac{h_0 P_{3,0}}{N_0}\right).
\end{aligned}
\end{equation}
Each CM's message, however, is received in both the second and third phases. In this case, the HAP can jointly decode each CM's message across the two phases at a rate given by \cite{2014:Ju}
\begin{equation}
\label{13}
R_{i}(\pmb{\tau}, \pmb{P}, \mathbf{Q}) = \min\left\{R_{i}^{(2)}(\pmb{\tau}, \mathbf{Q}), V_{i}^{(2)}(\pmb{\tau}, \mathbf{Q}) + V_{i}^{(3)}(\pmb{\tau}, \pmb{P})\right\},
\end{equation}
where $i=1, \cdots,N-1$, and $R_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})$ is given in (\ref{7}).
$V_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})$ denotes the information that can be extracted by the HAP from the received signal in (\ref{6}) (in the second phase) using an optimal MRC receiver, which is given by
\begin{equation}
\label{14}
\begin{aligned}
V_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})&=\tau_{2,i} \log_{2}\left(1 + \eta \frac{\tau_1}{\tau_{2,i}} \frac{h_i\text{tr}(\mathbf{A}_i\mathbf{Q})}{N_0}\right).
\end{aligned}
\end{equation}
$V_{i}^{(3)}(\pmb{\tau}, \pmb{P})$ denotes the achievable rate of the transmission of the $i$-th CM's message from the CH to the HAP, which is given by
\begin{equation}
\label{15}
\begin{aligned}
V_{i}^{(3)}(\pmb{\tau}, \pmb{P})&=\tau_{3,i} \log_{2}\left(1+\frac{h_0 P_{3,i}}{N_0}\right).
\end{aligned}
\end{equation}
An important performance metric of a WPCN is the max-min throughput, defined as
\begin{equation}
S = \min_{0 \leq i \leq N-1} R_i,
\end{equation}
i.e., the minimum achievable per-WD throughput, which reflects the throughput fairness among the WDs. The max-min throughput has important practical implications. For instance, the max-min throughput in a WSN reflects the accuracy of the data reported by the ``bottleneck'' sensor, which can directly affect the overall sensing accuracy of the network. In the next section, we formulate the max-min throughput optimization problem and solve it optimally. In fact, our proposed method can also be extended to maximize the (weighted) sum throughput of the WDs; the details are omitted for brevity.
\section{Max-min Throughput Optimization}
\subsection{Problem Formulation}
In this section, we are interested in maximizing the minimum (max-min) throughput of all WDs in each block, by jointly optimizing the energy beamforming $\mathbf{Q}$, the time allocation $\pmb {\tau}$, and the transmit power allocation $\pmb P$, i.e.,
\begin{equation}
\label{16}
\begin{aligned}
(P1):\; &\max_{\mathbf{\pmb {\tau, P}}, \mathbf{Q}}& &S= \min_{0 \leq i \leq N-1} R_i(\pmb {\tau, P}, \mathbf{Q})\\
&\text{s. t.}& & (\ref{1}) \;\text{and}\; (\ref{10}), \\
& & &\tau_{1} \geq 0, \; \tau_{2,i} \geq 0, \;i=1,\cdots, N-1,\\
& & & \tau_{3,i} \geq 0,\;P_{3,i}\geq 0,\;i=0,1,\cdots, N-1,\\
& & & \text{tr}(\mathbf{Q}) \leq P, \;\mathbf{Q} \succeq \mathbf{0}, \; \pmb {\tau} \geq \mathbf{0}.\\
\end{aligned}
\end{equation}
By introducing an auxiliary variable $\overline S$, problem (\ref{16}) can be equivalently transformed into its epigraph form,
\begin{equation}
\label{17}
\begin{aligned}
(P2):\quad &\max_{\pmb {\tau}, \pmb{P}, \mathbf{Q}, \overline S} & & \overline S\\
&\text{s. t.} & & (\ref{1}) \;\text{and}\; (\ref{10}), \\
& & & R_{0}(\pmb{\tau}, \pmb{P}) \geq \overline S,\\
& & &V_{i}^{(2)}(\pmb{\tau}, \mathbf{Q}) + V_{i}^{(3)}(\pmb{\tau}, \pmb{P}) \geq \overline S,\\
& & &R_{i}^{(2)}(\pmb{\tau}, \mathbf{Q}) \geq \overline S, i=1,\cdots, N-1,\\
& & & \text{tr}(\mathbf{Q}) \leq P, \;\mathbf{Q} \succeq \mathbf{0}, \; \pmb {\tau} \geq \mathbf{0}.\\
\end{aligned}
\end{equation}
Due to the joint design of user cooperation and energy beamforming, neither the data rates in the intra-cluster communication (i.e., $R_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})$ and $V_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})$) nor those in the cluster-to-HAP communication (i.e., $R_{0}(\pmb{\tau}, \pmb{P})$ and $V_{i}^{(3)}(\pmb{\tau}, \pmb{P})$) are concave functions. Besides, the LHS of (\ref{10}) is not a convex function. Therefore, (P2) is a non-convex problem in its current form, for which no efficient optimal algorithm is known. In the next subsection, we propose an algorithm to solve (P2) optimally.
\subsection{Optimal Algorithm to (P2)}
We first define $\mathbf{W} \triangleq \tau_1\mathbf{Q} \succeq \mathbf{0}$. With the sum transmit power constraint in (\ref{2}), we have
\begin{equation}
\text{tr}\left(\mathbf{W}\right) = \text{tr}\left(\tau_1\mathbf{Q}\right) \leq \tau_1 P.
\end{equation}
Accordingly, we change the variables as
\begin{equation}
z_i \triangleq \tau_1\text{tr}\left(\mathbf{A}_i\mathbf{Q}\right) = \text{tr}\left(\mathbf{A}_i\mathbf{W}\right),
\end{equation}
for $i=0,\cdots, N-1$. Thus, $R_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})$ and $V_{i}^{(2)}(\pmb{\tau}, \mathbf{Q})$ in (\ref{7}) and (\ref{14}) can be re-expressed as functions of $\pmb {\tau}$ and $\pmb z=\left[z_{1}, \cdots, z_{N-1}\right]'$,
\begin{equation}
\label{18}
R_{i}^{(2)}(\pmb{\tau},\pmb {z})=\tau_{2,i} \log_{2}\left(1+ \overline{\rho}_i \frac { z_i}{\tau_{2,i}}\right),
\end{equation}
\begin{equation}
\label{19}
V_{i}^{(2)}(\pmb{\tau},\pmb {z})=\tau_{2,i} \log_{2}\left(1+ \rho_i \frac {z_i}{\tau_{2,i}}\right),
\end{equation}
where $i=1,\cdots,N-1$, and $\overline {\rho}_i\triangleq\eta \frac{g_i}{N_0}$ and $\rho_i\triangleq\eta \frac{h_i}{N_0}$ are parameters.
Subsequently, we define $\theta_{3,i}\triangleq\frac{\tau_{3,i} P_{3,i}}{\eta}$, $i=0,1,\cdots,N-1$, then $R_{0}(\pmb{\tau}, \pmb{P})$ and $V_{i}^{(3)}(\pmb{\tau}, \pmb{P})$ in (\ref{12}) and (\ref{15}) can be reformulated as functions of $\pmb {\tau}$ and $\pmb {\theta}=\left[\theta_{3,0}, \cdots, \theta_{3,N-1}\right]'$, i.e.,
\begin{equation}
\label{20}
\begin{aligned}
R_{0}(\pmb {\tau, \theta})= \tau_{3,0} \log_{2}\left(1 + \rho_0 \frac { \theta_{3,0}}{\tau_{3,0}}\right),
\end{aligned}
\end{equation}
\begin{equation}
\label{21}
\begin{aligned}
V_{i}^{(3)}(\pmb {\tau, \theta})=\tau_{3,i} \log_{2}\left(1+\rho_0 \frac { \theta_{3,i}}{\tau_{3,i}}\right),
\end{aligned}
\end{equation}
where $i=1,\cdots,N-1$, and $\rho_0\triangleq\eta \frac{h_0}{N_0}$. Thus, the power constraint given in (\ref{10}) can be re-expressed as
\begin{equation}
\label{22}
\begin{aligned}
\sum^{N-1}_{i=0}\theta_{3,i}\leq z_0.
\end{aligned}
\end{equation}
Accordingly, problem (\ref{17}) can be transformed into the following equivalent problem.
\begin{equation*}
\label{23}
\begin{aligned}
(P3):\quad &\max_{\pmb {\tau}, \pmb{\theta}, \pmb z, \overline S, \mathbf{W}} & & \overline S\\
&\text{s. t.} & & R_{0}(\pmb{\tau, \theta}) \geq \overline S,\\
& & &V_{i}^{(2)}(\pmb{\tau},\pmb{z}) + V_{i}^{(3)}(\pmb{\tau, \theta}) \geq \overline S,\\
& & &R_{i}^{(2)}(\pmb{\tau},\pmb{z}) \geq \overline S,\ i=1,\cdots, N-1,\\
& & &\tau_0+\tau_1+\sum^{N-1}_{i=1}\tau_{2,i}+\sum^{N-1}_{i=0}\tau_{3,i} \leq 1,\\
& & &z_i=\text{tr}(\mathbf{A}_i\mathbf{W}),\ i=0,1,\cdots, N-1,\\
& & &\sum^{N-1}_{i=0}\theta_{3,i}\leq z_0,\ \pmb {\tau} \geq \mathbf{0},\\
& & & \text{tr}(\mathbf{W}) \leq \tau_1 P,\ \mathbf{W}\succeq \mathbf{0}.
\end{aligned}
\end{equation*}
Before solving (P3), we have the following Lemma 1.
$\underline{Lemma} \; \emph {1:}$ When $x>0$ and $y>0$, $z=x\log_2(1+y/x)$ is jointly concave in ($x$, $y$).
\emph {Proof:} The Hessian of $z(x, y)$ is
\begin{equation}
\label{24}
\bigtriangledown^2 z (x, y)=
\frac{1}{\ln 2 (x+y)^2}\left[
\begin{array}{cc}
-\frac{y^2}{x} & y\\
y & -x\\
\end{array}
\right].
\end{equation}
When $x, y>0$, for any arbitrary vector $\mathbf{d} = (d_1,d_2)'$, we have
\begin{equation}
\mathbf{d}' \cdot \bigtriangledown^2 z \cdot \mathbf{d} = -\frac{\left(\frac{d_1y}{\sqrt{x}}-d_2\sqrt{x}\right)^2}{\ln 2 (x+y)^2} \leq 0.
\end{equation}
Therefore, $\bigtriangledown^2 z$ is a negative semi-definite matrix, which completes the proof. $\hfill\blacksquare$
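The Hessian computation in the proof of Lemma 1 can also be verified symbolically; the following SymPy sketch (an illustrative check, not part of the derivation) reproduces (\ref{24}) and the sign of the quadratic form.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols("x y", positive=True)
z = x * sp.log(1 + y/x) / sp.log(2)          # z = x*log2(1 + y/x)
H = sp.hessian(z, (x, y))
target = sp.Matrix([[-y**2/x, y], [y, -x]]) / (sp.log(2)*(x + y)**2)
print(sp.simplify(H - target))               # zero matrix, i.e. (24) holds

d1, d2 = sp.symbols("d1 d2", real=True)
quad = (sp.Matrix([[d1, d2]]) * H * sp.Matrix([d1, d2]))[0]
print(sp.factor(sp.simplify(quad)))  # -(d1*y - d2*x)**2/(x*(x+y)**2*log(2)) <= 0
\end{verbatim}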
From Lemma 1, we can see that both $R^{(2)}_{i}$'s in (\ref{18}) and $V_{i}^{(2)}$'s in (\ref{19}) are concave functions in $(\pmb{\tau},\pmb{z})'$. Besides, $R_{0}$ in (\ref{20}) and $V_{i}^{(3)}$'s in (\ref{21}) are also concave functions in $(\pmb{\tau}, \pmb{\theta})'$. Therefore, the first three sets of constraints in (P3) are convex constraints, while the remaining constraints are affine. Accordingly, the objective and all the constraints of (P3) are convex, and therefore (P3) is a convex optimization problem, which can be efficiently solved by off-the-shelf optimization algorithms, e.g., the interior point method \cite{2004:Boyd}. Let us denote the optimal solution to (P3) as $\left\{\pmb {\tau}^*, \pmb{\theta}^*, \pmb z^*, \overline S^*, \mathbf{W}^*\right\}$. Then, the optimal solution $\pmb {\tau}^*$ of (P1) is the same as that of (P3). The optimal $\mathbf{Q}^*$ and $\mathbf{P}^*$ of (P1) can be recovered by setting $\mathbf{Q}^* = \mathbf{W}^*/\tau_1^*$ and $P_{3,i}^* = \eta\theta_{3,i}^*/\tau_{3,i}^*,\ i = 0,\cdots,N-1.$
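As an illustration of how (P3) can be encoded in practice, the following CVXPY sketch expresses the concave rate $x\log_2(1+y/x)$ of Lemma 1 through the relative-entropy atom, using the identity $x\log_2(1+y/x)=-\mathrm{rel\_entr}(x,x+y)/\ln 2$. The channel realisations, noise level and CE overhead below are illustrative assumptions, and the script is a minimal sketch rather than the solver used for the reported simulations.
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, N = 5, 4                                  # antennas, number of WDs
P, N0, eta, tau0 = 3.0, 1e-10, 0.51, 0.05    # illustrative constants

a = 1e-4 * (rng.standard_normal((N, M))
            + 1j*rng.standard_normal((N, M))) / np.sqrt(2)
A = [np.outer(a[i], a[i].conj()) for i in range(N)]   # A_i = a_i a_i^H
h = np.array([np.linalg.norm(a[i])**2 for i in range(N)])
g = np.abs(1e-3 * rng.standard_normal(N - 1))**2      # intra-cluster gains

def rate(x, y):
    # x*log2(1+y/x) = -rel_entr(x, x+y)/ln 2, jointly concave (Lemma 1)
    return -cp.rel_entr(x, x + y) / np.log(2)

W = cp.Variable((M, M), hermitian=True)
tau1 = cp.Variable(nonneg=True)
tau2 = cp.Variable(N - 1, nonneg=True)       # tau_{2,1},...,tau_{2,N-1}
tau3 = cp.Variable(N, nonneg=True)           # tau_{3,0},...,tau_{3,N-1}
theta = cp.Variable(N, nonneg=True)
S = cp.Variable()

z = cp.hstack([cp.real(cp.trace(A[i] @ W)) for i in range(N)])
rho, rho_bar = eta * h / N0, eta * g / N0

cons = [tau0 + tau1 + cp.sum(tau2) + cp.sum(tau3) <= 1,
        cp.sum(theta) <= z[0],                        # CH energy constraint
        cp.real(cp.trace(W)) <= tau1 * P, W >> 0,
        rate(tau3[0], rho[0]*theta[0]) >= S]          # R_0 >= S
for i in range(1, N):
    cons += [rate(tau2[i-1], rho_bar[i-1]*z[i]) >= S,             # R_i^(2)
             rate(tau2[i-1], rho[i]*z[i])
             + rate(tau3[i], rho[0]*theta[i]) >= S]   # V_i^(2) + V_i^(3)

cp.Problem(cp.Maximize(S), cons).solve(solver=cp.SCS)
print("max-min throughput:", S.value)
\end{verbatim}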
\subsection{Benchmark Methods}
For performance comparison, we consider two representative benchmark methods. For simplicity, we assume that the time spent on CE ($\tau_0$) is equal for all the schemes.
\subsubsection{Cluster-based cooperation w/o EB}
The only difference from the proposed cooperation method is that the HAP does not apply EB and instead transmits wireless energy isotropically to the WDs during the WET phase. In this case, the optimal time allocation $\boldsymbol{\tau}^*$ and transmit power allocation $\mathbf{P}^*$ can be obtained by fixing $\mathbf{Q}^*=\frac{P}{M}\mathbf{I}$ in (P1), where $\mathbf{I}$ denotes an identity matrix.
\subsubsection{Independent transmission with EB}
In this case, all the WDs transmit independently to the HAP following the harvest-then-transmit protocol in \cite{2014:Ju}. Specifically, the HAP first applies EB to perform WET for $\tau_1'$ amount of time for the WDs to harvest. Then, the WDs take turns to transmit their messages to the HAP, where the $i$-th WD's transmission takes $\tau_{2,i}'$ ($i=0,1,\cdots, N-1$) amount of time. Meanwhile, the HAP uses MRC to decode the message of each user.\footnote{Spatial multiplexing is not used at the HAP as the number of WDs is often much larger than the number of antennas at the HAP. Otherwise, either strong interference or high computational complexity will be induced when the WDs transmit to the HAP simultaneously.} The data rate of the $i$-th user is then given by
\begin{equation}
\label{28}
R_{i}'(\pmb{\tau}', \mathbf{Q'}) = \tau_{2,i}' \log_{2}\left(1 + \gamma_{i}'\right), \ i=0,\cdots,N-1,
\end{equation}
where
\begin{equation}
\label{27}
\gamma_{i}' = \frac{\eta \tau_1' h_i \text{tr}(\mathbf{A}_i\mathbf{Q'})}{ N_0\tau_{2,i}'}
\end{equation}
denotes the output SNR, $\mathbf{Q'}$ denotes the beamforming matrix, and $\pmb{\tau}' \triangleq[\tau_1', \tau_{2,0}', \cdots, \tau_{2,N-1}']'$. Then, the max-min throughput can be obtained by solving the following problem
\begin{equation}
\label{29}
\begin{aligned}
&\max_{\mathbf{\pmb {\tau}'}, \mathbf{Q}'}& &\min_{i=0,\cdots, N-1} R_{i}'(\pmb{\tau}', \mathbf{Q'})\\
&\text{s. t.}& & \tau_0+\tau_1'+\sum^{N-1}_{i=0}\tau_{2,i}' \leq 1, \\
& & & \tau_{1}' \geq 0,\;\tau_{2,i}'\geq 0, \ i=0,\cdots, N-1,\\
& & & \text{tr}(\mathbf{Q'}) \leq P,\; \;\mathbf{Q}' \succeq \mathbf{0}. \\
\end{aligned}
\end{equation}
The optimal solution to the above problem can be obtained similarly to that of (P3); the details are omitted for brevity.
\begin{figure}
\centering
\subfigure[Cooperation with EB.]{
\label{figa}
\includegraphics[width=0.45\textwidth]{3a.eps}}
\hspace{1in}
\subfigure[Cooperation without EB.]{
\label{fig:subfig:b}
\includegraphics[width=0.45\textwidth]{3b.eps}}
\caption{The impact of the cluster head selection method on the max-min throughput with $d=6$ meters. The upper figure adopts the EB technique at the HAP and the lower one does not.}
\label{103}
\end{figure}
\section{Simulation Results}
In this section, we evaluate the performance of the proposed cooperation method. In all simulations, we use the Powercast TX91501-3W transmitter as the energy transmitter at the HAP with transmit power $P=3$ watts and the P2110 Powerharvester as the energy receiver at each WD with energy harvesting efficiency $\eta= 0.51$.\footnote{Please see the detailed product specifications on the website of Powercast Co. (http://www.powercastco.com).} Without loss of generality, it is assumed that the number of antennas at the HAP is $M=5$ and the noise power is $N_{0}=10^{-10}$~W in the considered bandwidth for all receivers. The average channel gain between any two nodes, either the HAP or a WD, follows a path-loss model. For instance, let $d_{H,i}$ denote the distance between the HAP and the $i$-th WD; then the average channel gain is $\sigma_i^2 = G_A(\frac{3\times 10^8}{4\pi d_{H,i}f_c})^{\alpha}$, where $G_A$ denotes the antenna gain, $\alpha$ denotes the path-loss exponent and $f_{c}$ denotes the carrier frequency. Unless otherwise stated, we assume $G_A=2$, $\alpha =3$, and $f_{c}=915$~MHz. Besides, $15$ WDs are uniformly distributed within a circle with radius equal to $r$ meters, whose center is $d$ meters away from the HAP. Each point in the figures is an average of $20$ independent WD placements.
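For reference, the path-loss formula above can be evaluated directly; a minimal sketch with the stated parameters (the distance $d_{H,i}=6$ m is an arbitrary example) gives an average channel gain of about $1.6\times 10^{-7}$, i.e., roughly $-68$ dB.
\begin{verbatim}
import numpy as np

G_A, alpha, f_c = 2.0, 3.0, 915e6
def avg_gain(d):          # sigma^2 = G_A * (3e8 / (4*pi*d*f_c))**alpha
    return G_A * (3e8 / (4*np.pi*d*f_c))**alpha

print(avg_gain(6.0))      # ~1.6e-7, i.e. about -68 dB at 6 meters
\end{verbatim}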
\begin{figure}
\centering
\subfigure[Cooperation with EB.]{
\label{figa}
\includegraphics[width=0.45\textwidth]{4a.eps}}
\hspace{1in}
\subfigure[Cooperation without EB.]{
\label{fig:subfig:b}
\includegraphics[width=0.45\textwidth]{4b.eps}}
\caption{The impact of the cluster head selection method on the max-min throughput with $r=3$ meters. The upper figure adopts the EB technique at the HAP and the lower one does not.}
\label{104}
\end{figure}
In Fig.~\ref{103}, we investigate the impact of the cluster head selection method on the throughput performance. Specifically, we consider three CH selection methods: selecting the WD that is closest to the cluster center,\footnote{The location of the cluster center can be obtained by taking the average of the location coordinates of the $N$ WDs.} closest to the HAP, or randomly\footnote{The performance is an average of $5$ random CH selections for each WD placement.} from the WDs. We fix the distance $d=6$ meters and change the radius of the cluster $r$, and consider two different methods with EB either adopted (the proposed cooperation method) or not (cooperation without EB in Section IV.B) at the HAP. As expected, the data rates of the three CH selection methods decrease as the cluster radius increases, because the intra-cluster communication links become weaker when the distances between the CMs and the CH increase. Meanwhile, regardless of whether EB is used at the HAP, selecting the WD closest to the cluster center achieves the best performance. Interestingly, we can also see that selecting the WD closest to the HAP performs even worse than selecting a random WD as the CH. This is because, on average, the largest distance between the CMs and the CH is larger for the former scheme than for the latter. A similar result is also observed in Fig.~\ref{104} when we fix the cluster radius $r=3$ meters and vary the distance $d$. Both Fig.~\ref{103} and Fig.~\ref{104} show that to achieve high throughput fairness, \emph{efficient intra-cluster cooperation} is required, such that the distance disparity between the CMs and the CH should be minimized, e.g., by selecting the WD closest to the cluster center. Therefore, we designate the WD \emph{closest to the cluster center as the CH} when cluster-based cooperation is considered in the following simulations.
\begin{figure}
\centering
\subfigure[Max-min throughput.]{
\label{figa}
\includegraphics[width=0.45\textwidth]{5a.eps}}
\hspace{1in}
\subfigure[Sum throughput.]{
\label{fig:subfig:b}
\includegraphics[width=0.45\textwidth]{5b.eps}}
\caption{Performance comparison of the different transmission schemes when $d=6$ and the cluster radius $r$ varies. The figures above and below compare the max-min throughput and sum throughput, respectively.}
\label{105}
\end{figure}
We then compare the throughput performance of the proposed cluster-based cooperation with the two benchmark methods in Section IV.B. In particular, both the max-min throughput (user fairness) and the sum throughput (spectral efficiency) are compared. In Fig.~\ref{105}, we first investigate the impact of the intra-cluster communication links on the overall throughput performance by fixing $d=6$ and varying $r$. We can see that all the schemes are very sensitive to the degradation of the intra-cluster communication links, where both the max-min throughput and the sum throughput drop by more than $50\%$ for all the schemes when $r$ increases from $1$ to $3$. Nonetheless, we can still observe that the max-min throughput drops more quickly than the sum throughput as $r$ increases, because the max-min throughput is directly determined by the users close to the cluster edge. In both Fig.~\ref{105}(a) and (b), we can see the evident advantage of the proposed method compared to the two benchmark methods, where either cooperation or energy beamforming is absent. On average, the proposed cooperation method achieves around $40\%$ higher max-min throughput than cooperation without EB, and over $200\%$ higher max-min throughput than the independent transmission method. Moreover, the advantage is even more evident in the case of the sum throughput performance.
\begin{figure}
\centering
\subfigure[Max-min throughput.]{
\label{figa}
\includegraphics[width=0.45\textwidth]{6a.eps}}
\hspace{1in}
\subfigure[Sum throughput.]{
\label{fig:subfig:b}
\includegraphics[width=0.45\textwidth]{6b.eps}}
\caption{Performance comparison of the different transmission schemes when $r=3$ and the cluster-to-HAP distance $d$ varies. The figures above and below compare the max-min throughput and sum throughput, respectively.}
\label{106}
\end{figure}
We also investigate in Fig.~\ref{106} the impact of the cluster-to-HAP communication links on the overall throughput performance by fixing $r=3$ and varying $d$. Similar to Fig.~\ref{105}, we can see that the proposed cooperation method achieves evident performance advantages over the two benchmark methods, especially when the cluster-to-HAP distance is small to moderate, e.g., $d<8$ meters. However, as we further increase $d$, all the schemes achieve very low data rates because of the dramatic energy signal attenuation over distance. The results show that the effective operating range of the considered cooperation method is fundamentally limited by the relatively low efficiency of energy transmission. In fact, wireless powered communication is \emph{effective only when the power transmission distance is not too large}, such that the WDs can harvest sufficient energy to perform information transmission. In practice, we can improve the performance by several methods, e.g., increasing the number of antennas at the HAP, optimizing the route of a mobile HAP, or increasing the HAP's transmit power. These methods are beyond the scope of this paper, so we omit the corresponding simulations. The results in Fig.~\ref{105} and \ref{106} show that the proposed cooperation method can \emph{effectively enhance user fairness and spectral efficiency.}
In Fig.~\ref{107}, we evaluate the stability of the throughput performance when the number of WDs $N$ increases from $15$ to $30$. Without loss of generality, we set $d=6$ and $r=3$. We can see from Fig.~\ref{107}(a) that the max-min throughput decreases with the number of WDs for all the schemes. This is because, on average, each WD is allocated a shorter transmission time, and thus the data rate of the worst-performing WD decreases. In particular, the decrease of the max-min throughput is moderate when $N$ increases from $15$ to $25$, but becomes significant as $N$ further increases. However, we observe in Fig.~\ref{107}(b) that the sum throughput increases with $N$, although the data rate of each individual WD may decrease. This indicates that a tradeoff exists between each individual user's throughput and the aggregate network throughput. In practice, \emph{the number of WDs should be kept moderate}, e.g., less than $25$ in the considered network setup. Nonetheless, we can still observe a significant performance gain of the proposed method over the two benchmark methods, where the worst-performing WD can still maintain a relatively high data rate when the network size is large (e.g., $N=30$).
\begin{figure}
\centering
\subfigure[Max-min throughput.]{
\label{figa}
\includegraphics[width=0.45\textwidth]{7a.eps}}
\hspace{1in}
\subfigure[Sum throughput.]{
\label{fig:subfig:b}
\includegraphics[width=0.45\textwidth]{7b.eps}}
\caption{Performance comparison of the different transmission schemes when the number of WDs $N$ varies. The figures above and below compare the max-min throughput and sum throughput, respectively.}
\label{107}
\end{figure}
\vspace{-2ex}
\section{Conclusions}
In this paper, we have proposed a cluster-based cooperation method in a WPCN, where a WD is designated as the CH to assist the transmissions of the other WDs. In particular, the energy beamforming technique is applied at the multi-antenna HAP to achieve directional energy transfer, which balances the different energy consumption rates of the WDs, especially the high power consumption of the CH. We proposed an efficient algorithm to achieve the optimal max-min throughput among the WDs, by jointly optimizing the EB design, the transmit time allocation among the HAP and the WDs, and the transmit power allocation of the CH. Extensive simulations under practical network setups showed that the proposed method can significantly improve both the user fairness and the spectral efficiency compared to non-trivial benchmark methods. Moreover, we also found that the proposed cooperation is most effective when the WD closest to the cluster center is selected as the CH, both the intra-cluster and cluster-to-HAP communication links are strong, and the number of cooperating WDs is moderate.
\section{Introduction}\label{sec:intro}
We consider a general probabilistic model on the torus $\T_L=\Z^d/L\Z^d$,
whose realisations live in a product of local spaces. Each local space is associated to one of the vertices of $\T_L$ and elements of the local spaces interact with each other according to a probability measure.
Such a general setting includes various important models in statistical mechanics, for example the spin O(N) model, the quantum Heisenberg anti-ferromagnet and $XY$ model, the dimer and the double-dimer model, lattice permutations, and the loop O(N) model.
We prove that, if a linear functional acting on functions of our state space is \emph{reflection positive}, then several site-monotonicity properties for the two-point function hold. This generalises the monotonicity and positivity results of \cite{L-T} to a very general system.
This general result has the following implications.
Firstly, in their seminal paper \cite{F-S-S}, Fr\"ohlich, Simon and Spencer introduced a method for proving the non-decay of correlations of the two-point function of several statistical mechanics models in dimension $d > 2$.
This method was further developed in \cite{F-I-L-S} and used in many other research works (we additionally refer to \cite{B} for an overview).
More precisely, this method is used to prove that the Ces\`aro sum of the two-point function is uniformly positive.
Our general monotonicity result shows that, when this method works, a stronger result can often be obtained: not only is the Ces\`aro sum of the two-point function uniformly positive in the system size, but the two-point function is also uniformly positive \emph{point-wise} for a positive fraction of vertices.
This result was derived by Lees and Taggi in \cite{L-T} in a special case and here it is generalised to an abstract statistical mechanics setting.
As an example of a new application we consider quantum spin systems, including the Heisenberg antiferromagnet and the XY model, which were not covered by the framework of \cite{L-T}. Quantum spin systems are an important class of statistical mechanics models whose realisation space is the tensor product of local Hilbert spaces and which can be `represented' as systems of random interacting loops; we refer to \cite{U2} for an overview. It is already known \cite{D-L-S, F-I-L-S, K-L-S,K-L-S2} that the Gibbs states of this model are reflection positive in the presence of anti-ferromagnetic interactions and that, in dimension $d > 2$, the Ces\`aro sum of the two-point function is uniformly positive for large enough values of the inverse temperature parameter and system size. Our result implies that the spin-spin correlation is point-wise uniformly positive for vertices with all odd coordinates, extending the existing results. We fully expect that this uniform positivity should extend to all vertices, not just `odd' vertices.
Our third main result involves a general class of random loop soup models, which we refer to as the random path model. This class includes the loop representation of the spin O(N) model \cite{ B-U, L-T}, the double-dimer model \cite{Kenyon}, lattice permutations \cite{B-T, T}, and the loop O(N) model \cite{P-S}. In \cite{L-T}, site-monotonicity properties were derived for the two-point function, defined as the ratio of the partition function of a system of loops with a walk connecting the two points to the partition function with only loops. Here we extend the result to a general class of two-point functions, including the probability that two fixed vertices have a loop passing through both of them.
\section{Model and main result}\label{sec:model}
Consider the torus $\T_L=\Z^d/L\Z^d$ with $d\geq 2$ and $L\in 2\N$.
Denote by $o=(0,\dots,0)$ the origin of the torus. For each $x\in \T_L$ let $\Sigma_x$ be a Polish space of local states (for example $\mathbb{S}^{N-1}$, $\C^{2S+1}$, $\{-1,+1\}$,...). Further let $\otimes$ be some associative product between the $\Sigma_x$'s (for example the cartesian product or the tensor product). Our state space is
\begin{equation}
\Scal=\otimes_{x\in \T_L} \Sigma_x.
\end{equation}
We denote elements of $\Scal$ by $w=(w_x)_{x\in\T_L}$ where $w_x\in \Sigma_x$.
Let $\Acal_L$ be a real, finite dimensional, algebra of functions on $\Scal$ with unit (for example if $\Sigma_x=\mathbb{S}^{N-1}$ then we could take the cartesian product and $\Acal_L$ to be the algebra of functions $\Scal\to \R$ that are measurable with respect to the Haar measure on $\Scal$). Further, let $\langle\cdot\rangle$ be a linear functional on $\Acal_L$ such that $\langle 1\rangle=1$. Our key requirement is that $\langle\cdot\rangle$ is \emph{reflection positive}, which we describe briefly.
\subsection{Reflection Positivity}
Consider a plane $R=\{z\in \R^d\, :\, z\cdot \boldsymbol{e}_i =m\}$ for some $m\in \tfrac12\Z\cap [0,L)$ and some $i\in\{1,\dots, d\}$. Let $\vartheta:\T_L\to\T_L$ be the reflection operator that reflects vertices of $\T_L$ in the plane $R$. More precisely, for any $x=(x_1,\dots,x_d)\in \T_L$
\begin{equation}
\vartheta(x)_k:=\begin{cases} x_ k & \text{if }k\neq i, \\ 2m-x_k\mod L & \text{if }k=i. \end{cases}
\end{equation}
If $m\in \tfrac12\Z\setminus \Z$ we call such a reflection a \emph{reflection through edges}, if $m\in\Z$ we call such a reflection a \emph{reflection through vertices}. We denote by $\T_L^+,\T_L^-$ the partition of $\T_L$ into two halves with the property that $\vartheta(\T_L^{\pm})=\T_L^{\mp}$.
We say a function $A\in\Acal_L$ has domain $D\subset \T_L$ if for any $w_1,w_2\in \Scal$ that agree on $D$ we have $A(w_1)=A(w_2)$.
Consider the algebras $\Acal_L^+,\Acal_L^-\subset\Acal_L$, of functions with domain $\T_L^+,\T_L^-$ respectively. The reflection $\vartheta$ acts on elements $w\in \Scal$ as $(\vartheta w)_x=w_{\vartheta x}$ and for $A\in\Acal^+_L$ it acts as $\vartheta A(w)=A(\vartheta w)$.
We say that $\langle\cdot\rangle$ is \emph{reflection positive} with respect to $\vartheta$ if, for any $A,B\in\Acal^+_L$,
\begin{enumerate}
\item $\langle A\vartheta B\rangle=\langle B\vartheta A\rangle $,
\item $\langle A\vartheta A\rangle\geq 0$.
\end{enumerate}
A consequence of this, obtained by expanding $\langle (A+tB)\,\vartheta (A+tB)\rangle\geq 0$ in $t\in\R$, is the Cauchy-Schwarz inequality
\begin{equation}\label{eq:refpos}
\langle A\vartheta B\rangle^2\leq \langle A\vartheta A\rangle\langle B\vartheta B\rangle.
\end{equation}
We say $\langle\cdot\rangle$ is \emph{reflection positive for reflections through edges resp. vertices} if, for any reflection $\vartheta$ through edges resp. vertices, $\langle\cdot\rangle$ is reflection positive with respect to $\vartheta$.
\subsection{Main results}
For $j\in\{1,2\}$ let $F^j_o\in\Acal_L$ be functions with domain $\{o\}$.
Fix an arbitrary site $x \in \T_L$ and let $o=t_0$, $t_1$, $\ldots$, $t_k = x$
be a self-avoiding nearest-neighbour path from $o$ to $x$,
and for any $i \in \{1, \ldots, k\}$, let $\Theta_i$ be the reflection with respect to the plane going through the edge $\{ t_{i-1}, t_{i} \}$.
Define
$$
(F^j_o)^{[x]} : = \Theta_k \circ \Theta_{k-1} \, \ldots \, \circ \Theta_1 \, ( F^j_o ).
$$
Observe that the function $(F^j_o)^{[x]}$ does not depend on the chosen path (See Figure \ref{Fig:refexample} for an illustration).
For a lighter notation denote by $F^j_x=(F^j_o)^{[x]}$ the function obtained from $F^j_o$ by applying a sequence of reflections that send $o$ to $x$.
\begin{figure}
\includegraphics[scale=0.26]{Fig_Functions.pdf}
\centering
\caption{An example of a sequence of reflections sending a function with domain $o$ to a function with domain $x$.
}
\label{Fig:refexample}
\end{figure}
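The construction of $(F^j_o)^{[x]}$ only uses the image of the domain $\{o\}$ under the composed reflections. As an illustrative sketch (not part of the formal development), the following Python snippet composes edge reflections along an axis path and checks that they indeed send $o$ to $x$; the choice of path is immaterial, in agreement with the path independence noted above.
\begin{verbatim}
L, d = 8, 2                        # torus Z^d / L Z^d

def reflect_edge(v, i, m):
    # reflect v in the plane {z : z . e_i = m}, with m a half-integer
    w = list(v)
    w[i] = int(2*m - v[i]) % L
    return tuple(w)

def image_of_origin(x, axis_order):
    # compose edge reflections along an axis path from o towards x
    v = tuple(0 for _ in range(d))
    for i in axis_order:
        step = 1 if x[i] >= 0 else -1
        for _ in range(abs(x[i])):
            v = reflect_edge(v, i, v[i] + step/2)  # plane through next edge
    return v

x = (3, 2)
print(image_of_origin(x, (0, 1)), image_of_origin(x, (1, 0)))  # both equal x
\end{verbatim}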
We define the \textit{two-point function},
$$
G_L(x,y) : = \Big \langle \, F^2_x \, \, F^2_{y} \, \, \big ( \prod_{z \in \T_L \setminus \{ x,y\} }F^1_z \big ) \Big \rangle,
$$
omitting the dependence on the functions $F_o^j$ in the notation. For spin system examples we would usually take $F^2_o$ to be the spin at $o$ and $F^1_o=1$, meaning that $G_L(x,y)$ is a spin-spin correlation. We say that the two-point function is \textit{torus symmetric} if,
for any $A,B\subset \T_L$ and $z\in \T_L$
\begin{equation}\label{eq:refinvariance}
\big\langle \prod_{x\in A}F^1_x \prod_{x\in B} F^2_x\big\rangle=\big\langle \prod_{x\in A+z}F^1_x \prod_{x\in B+z} F^2_x\big\rangle,
\end{equation}
where the sum is with respect to the torus metric.
As a consequence, for any $x, y, z \in \T_L$,
\begin{equation}\label{eq:refinvariance2}
G_L(x,y) = G_L(x + z, y + z), \quad \quad G_L(o,x) = G_L(-x, o).
\end{equation}
Our first theorem states several site-monotonicity properties for the two-point function.
\begin{thm}\label{thm:monotonicity}
Consider the torus $\T_L=\Z^d/L\Z^d$ for $d\geq 2$ and $L\in2\N$.
Take $i\in\{1,\dots,d\}$. Suppose that $\langle\cdot\rangle$ is reflection positive for reflections through edges and that the two-point function is torus symmetric. For any $z=(z_1,\dots,z_d)$,
\begin{align}
G_L(o,z) &\leq G_L(o, z_i \boldsymbol{e}_i) & \mbox{ if $z_i$ odd}
\label{eq:oddinequality}
\\
G_L(o,z) & \leq \frac{1}{2} \Big ( G_L \big (o, \boldsymbol{e}_i (z_i - 1) \Big )
+ G_L \big (o, \boldsymbol{e}_i (z_i + 1) \big ) \Big ) & \mbox{ if $z_i$ even} \label{eq:eveninequality}
\end{align}
Further, for $y\in\T_L$ such that $y\cdot\boldsymbol{e}_i=0$ (possibly $y=o$) the function
\begin{equation}\label{eq:oddmonotonicity}
G_L \big (o, y + n \boldsymbol{e}_i \big ) + G_L \big (o,
n \boldsymbol{e}_i \big )
\end{equation}
is a non-increasing function of $n\in(0,L/2)\cap 2\N+1$.
If, in addition, $\langle\cdot\rangle$ is reflection positive for reflections through vertices then $(\ref{eq:oddinequality})$ also holds for $z_i$ even and (\ref{eq:oddmonotonicity}) holds for any $n\in(0,L/2]$.
\end{thm}
Our next theorem is a consequence of Theorem \ref{thm:monotonicity} and consists of the following statements.
Suppose that the two-point function is uniformly bounded from above by a constant $M$.
(i) Whenever the Ces\`aro sum of the two-point function is uniformly positive, the two-point function is \textit{point-wise} uniformly positive on the cartesian axes. (ii)--(iii) If the uniformly positive lower bound on the Ces\`aro sum is close enough to $M$, then the two-point function is point-wise uniformly positive not only on the cartesian axes, but also at any site in a box centred at the origin whose side length is of order $O(L)$.
\begin{thm}\label{thm:positivity}
Consider the torus $\T_L=\Z^d/L\Z^d$ for $d\geq 2$ and $L\in2\N$.
Take $i\in\{1,\dots,d\}$. Suppose that $\langle\cdot\rangle$ is reflection positive for reflections through edges and that the two-point function is torus symmetric. Moreover, suppose that for some $C_1>0$ we have
\begin{equation}\label{eq:FSS}
\liminf_{\substack{L\to\infty\\ L\text{ even}}}\frac{1}{|\T_L|}\sum_{x\in\T_L}
G_L(o,x) \, \geq \, C_1>0,
\end{equation}
and that for some $M\in(0,\infty)$ we have that,
\begin{equation}\label{eq:corrrequirement}
\forall L \in 2 \mathbb{N} \quad \forall x, y \in \T_L \quad G_L(x,y) \leq M.
\end{equation}
Then, the following properties hold,
\begin{enumerate}[(i)]
\item For any $\varphi \in (0, \frac{C_1}{2})$ there exists $\varepsilon > 0$ such that for any integer $n \in (- \varepsilon \, L, \varepsilon L )$ and any $i\in\{1,\dots,d\}$,
$$
G_L(o, \boldsymbol{e}_i n ) \geq \varphi. $$
\item For $\varepsilon\in(0,\tfrac12)$ and $L \in 2 \mathbb{N}$ sufficiently large, for any $x\in\T_L$ such that $|x\cdot\boldsymbol{e}_i|\in(0,\varepsilon L)\cap (2\N+1)$ for every $i\in\{1,\dots,d\}$,
$$
G_L(o,x) \geq M-\big(\tfrac14-\tfrac12\varepsilon\big)^{-d}(M-C_1).
$$
\item If $\langle\cdot\rangle$ is also reflection positive for reflections through vertices then
for any $\varepsilon\in(0,\tfrac12)$ and $L \in 2 \mathbb{N}$ sufficiently large,
for all $x\in\T_L$ such that $|x\cdot\boldsymbol{e}_i| \in (0, \varepsilon L)$ for every $i\in\{1,\dots,d\}$,
$$
G_L(o,x)\geq M-\big(\tfrac12-\varepsilon\big)^{-d}(M-C_1).
$$
\end{enumerate}
\end{thm}
\begin{rem} \label{remark}
\begin{enumerate}[(i)]
\item For many statistical mechanics models one has that there exists some constant $c > 0$ such that, if $x$ and $y$ are nearest neighbours, then $G_L(o,x) \geq c \, G_L(o,y)$. When such a property is fulfilled,
the properties of point-wise positivity of the two-point function stated in (i) and (ii) can be extended to vertices which are not necessarily odd.
\item If we do not care about the size of the box around $o$ where we can show that the two-point function is uniformly positive, then we can simply look at the limit $\varepsilon\to 0$. In this case the bound in (ii) becomes $M-4^d(M-C_1)$ and the bound in (iii) becomes $M-2^d(M-C_1)$.
\end{enumerate}
\end{rem}
\section{Applications}\label{sec:examples}
\subsection{Quantum Heisenberg model}\label{sec:quantumspin}
For $S\in \tfrac12\N$ we define $\Sigma_x=\C^{2S+1}$ and $\otimes$ to be the tensor product, hence
$
\Scal=\otimes_{x\in\T_L}\C^{2S+1}$.
Let $S^1,S^2,S^3$ denote the spin-$S$ operators on $\C^{2S+1}$. They are hermitian matrices defined by \begin{equation}
[S^1,S^2]=iS^3,\qquad [S^2,S^3]=iS^1,\qquad [S^3,S^1]=iS^2,
\end{equation}
\begin{equation}
(S^1)^2+(S^2)^2+(S^3)^2=S(S+1)\1,
\end{equation}
where $\1$ is the identity matrix. Each spin matrix has spectrum $\{-S,-S+1,\dots,S\}$. We denote by $S^i_x=S^i\otimes \1_{\T_L\setminus\{x\}}$ the operator on $\Scal$ that acts as $S^i$ on $\Sigma_x$ and as $\1$ on each $\Sigma_y$, $y\neq x$. For $u\in [-1,1]$ consider the Hamiltonian
\begin{equation}
H_u=-2\sum_{\{x,y\}\in\Ecal_L}(S^1_xS^1_y+uS^2_xS^2_y+S^3_xS^3_y).
\end{equation}
The case $u=1$ gives the Heisenberg ferromagnet, $u=-1$ is equivalent to the Heisenberg antiferromagnet, and $u=0$ is the quantum XY model.
For $\beta\geq0$, corresponding to the \emph{inverse temperature}, our linear functional is given by the usual Gibbs state at inverse temperature $\beta$. More precisely, for an operator $A$ on $(\C^{2S+1})^{\T_L}$ the expectation of $A$ in the Gibbs state is
\begin{equation}
\langle A\rangle =\frac{1}{Z_{u}(\beta)}{\operatorname {Tr}} \,A e^{-\beta H_u}, \qquad Z_u(\beta)={\operatorname {Tr}} \,e^{-\beta H_u}.
\end{equation}
Take
\begin{equation}
F^1_x=\1_x\quad \text{ and } \quad F^2_x=S^3_x.
\end{equation}
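To illustrate the Gibbs-state formulas on a toy example (a $4$-site ring with $S=\tfrac12$, which is only a stand-in for the torus $\T_L$ with $d\geq 2$ considered in our theorems; all parameter values below are illustrative assumptions), one can compute $\langle S^3_oS^3_x\rangle$ by exact diagonalisation.
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

Sz = np.diag([0.5, -0.5])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sx, Sy = (Sp + Sp.T)/2, (Sp - Sp.T)/2j
I2 = np.eye(2)

def site_op(op, x, n):             # op acting on site x, identity elsewhere
    mats = [I2]*n; mats[x] = op
    return reduce(np.kron, mats)

n, u, beta = 4, -1.0, 2.0          # 4-site ring, u = -1, beta = 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = sum(-2*(site_op(Sx, x, n) @ site_op(Sx, y, n)
            + u*site_op(Sy, x, n) @ site_op(Sy, y, n)
            + site_op(Sz, x, n) @ site_op(Sz, y, n)) for x, y in edges)

rho = expm(-beta*H); Z = np.trace(rho)
for x in range(n):                 # <S^3_o S^3_x> in the Gibbs state
    print(x, (np.trace(site_op(Sz, 0, n) @ site_op(Sz, x, n) @ rho)/Z).real)
\end{verbatim}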
For $u\leq 0$ we have reflection positivity for reflections through edges \cite{F-S-S, K-L-S2,U}. The following theorem is a direct consequence of Theorem \ref{thm:monotonicity}.
\begin{thm}\label{cor:quantummonotonicity}
Let $\beta\geq 0$, $L\in 2\N$, $S\in\tfrac12\N$, $d\geq 2$ and $u\leq 0$. For any $z\in\T_L$ and any $i\in\{1,\dots,d\}$,
\begin{equation}
\langle S^3_oS^3_z\rangle \leq \begin{cases}\langle S^3_oS^3_{(z\cdot\boldsymbol{e}_i)\boldsymbol{e}_i}\rangle & \text{if }z\cdot\boldsymbol{e}_i\in 2\N+1, \\
\tfrac12 \left( \langle S^3_oS^3_{(z\cdot\boldsymbol{e}_i+1)\boldsymbol{e}_i}\rangle + \langle S^3_oS^3_{(z\cdot\boldsymbol{e}_i-1)\boldsymbol{e}_i}\rangle \right) & \text{if }z\cdot\boldsymbol{e}_i\in 2\N\setminus \{0\}. \end{cases}
\end{equation}
Further, for $y\in\T_L$ such that $y\cdot\boldsymbol{e}_i=0$ (for example $y=o$) the function
\begin{equation}
\langle S^3_oS^3_{y+n\boldsymbol{e}_i}\rangle + \langle S^3_oS^3_{n\boldsymbol{e}_i}\rangle,
\end{equation}
is a non-increasing function of $n$ for odd $n\in (0,L/2)$.
\end{thm}
We now turn our attention to the consequence of Theorem \ref{thm:positivity}.
It is known from the famous result of Dyson, Lieb and Simon \cite{D-L-S} and various extensions of this result \cite{K-L-S,K-L-S2,U} that for $d\geq 3$ and $S\in\tfrac12 \N$ there are constants $c_1,c_2>0$ such that for $L\in2\N$ sufficiently large
\begin{equation}\label{eq:uniformpositivityquantum}
\frac{1}{|\T_L|}\sum_{x\in\T_L}\langle S^3_oS^3_x\rangle\geq c_1-\frac{c_2}{\beta}.
\end{equation}
Our next theorem extends such a result by showing that the two-point function is \textit{point-wise} uniformly positive on vertices whose coordinates are all odd.
\newpage
\begin{thm}\label{prop:thm2.8}
Suppose that $d\geq 3$ and $u\leq 0$.
\begin{enumerate}[(i)]
\item For any $\varphi \in (0, \frac{c_1}{2})$ there exists $\beta$ large enough and $\varepsilon > 0$ such that, for any $L \in
2 \mathbb{N}$, any odd integer $n\in(-\varepsilon L,\varepsilon L)$ and any $i\in\{1,\dots,d\}$,
\begin{equation}
\langle S_o^3S_{n\boldsymbol{e}_i}^3\rangle \geq \varphi.
\end{equation}
\item
There exists an explicit $Q(d, u) \in (0 , \infty)$ such that if $S > Q(d, u)$
and $\beta$ is large enough, then there exist $\varphi, \varepsilon > 0 $ such that, for any $L \in 2 \mathbb{N}$ and $y \in \T_L$ such that $\|y \|_{\infty} \leq \varepsilon L$ and, for each $i\in\{1,\dots,d\}$, $y\cdot\boldsymbol{e}_i\in2\Z+1$,
\begin{equation}\label{eq:quantuniformbound}
\langle S_o^3S_{y}^3\rangle \geq \varphi.
\end{equation}
\end{enumerate}
\end{thm}
In particular, $Q(3, 0)$ can be taken equal to $8$ and $Q(3, -1)$ can be taken equal to
$11$. If we could find a constant $c>0$ as in Remark \ref{remark} (i) then we could extend \eqref{eq:quantuniformbound} to all vertices $y$ such that $\|y\|_{\infty}\leq\varepsilon L$.
\begin{proof}
The first claim follows from (\ref{eq:uniformpositivityquantum})
and from an immediate application of claim (i) in Theorem \ref{thm:positivity}. We now prove claim (ii). We start from (\ref{eq:uniformpositivityquantum}); here the uniform bound (\ref{eq:corrrequirement}) holds with $M=S(S+1)/3$. From \cite{U} we obtain an explicit expression for $c_1$,
\begin{equation}
c_1=\frac{S(S+1)}{3}-\frac{1}{\sqrt{2}}\frac{1}{|\T_L|}\sum_{k\in \T_L^*\setminus\{o\}}\sqrt{\frac{\varepsilon_u(k)}{\varepsilon(k)}}
\end{equation}
where $\T_L^*$ is the Fourier dual lattice, $\varepsilon(k)=2\sum_{i=1}^d(1-\cos(k_i))$ and $\varepsilon_u(k)=\sum_{i=1}^d\big[(1-u\cos(k_i))\langle S^1_oS^1_{e_i}\rangle + (u-\cos(k_i))\langle S^2_oS^2_{e_i}\rangle\big]$. Now it is easy to check that $\varepsilon_u(k)\leq \tfrac{S(S+1)}{6}(1-u)\varepsilon(k+\boldsymbol{\pi})$, which gives
\begin{equation}\label{eq:quantumc1}
c_1\geq \frac{S(S+1)}{3}-\frac{\sqrt{1-u}}{2}\sqrt{\frac{S(S+1)}{3}}J_{d,L}
\end{equation}
where
\begin{equation}\label{eq:J}
J_{d,L}=\frac{1}{|\T_L|}\sum_{k\in \T_L^*\setminus\{o\}}\sqrt{\frac{\varepsilon(k+\boldsymbol{\pi})}{\varepsilon(k)}}
\end{equation}
satisfies $\lim_{d\to\infty}\lim_{L\to\infty}J_{d,L}=1$. Further, $\lim_{L\to\infty}J_{d,L}$ is a decreasing function of $d$ and $\lim_{L\to\infty}J_{3,L}=1.15672\cdots$.
Using these bounds, the inequality (ii) of Theorem \ref{thm:positivity} shows that there is some $\varphi>0$ such that for any $x\in\T_L$ with $|x\cdot\boldsymbol{e}_i|\in(0,\varepsilon L)\cap 2\N+1$ for every $i\in\{1,\dots,d\}$ we have $\langle S_o^3S_x^3\rangle \geq \varphi$ once $\beta$ is sufficiently large if
\begin{equation}
S^2+S-\tfrac34 (1-u)(J_{d,L})^2\big(\tfrac14-\tfrac12\varepsilon\big)^{-2d}>0,
\end{equation}
which is fulfilled for any large enough $S$. This completes the proof.
\end{proof}
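The constant $J_{d,L}$ in (\ref{eq:J}) is straightforward to evaluate numerically. The following sketch (an illustrative check of the value $\lim_{L\to\infty}J_{3,L}=1.15672\cdots$ quoted in the proof above) sums over the Fourier dual lattice, taken here as $\T_L^*=\{2\pi m/L : m\in\{0,\dots,L-1\}\}^d$.
\begin{verbatim}
import numpy as np

def J(d, L):
    k1 = 2*np.pi*np.arange(L)/L
    grids = np.meshgrid(*([k1]*d), indexing="ij")
    eps = 2*sum(1 - np.cos(g) for g in grids)      # epsilon(k)
    eps_pi = 2*sum(1 + np.cos(g) for g in grids)   # epsilon(k + pi)
    eps[(0,)*d] = np.inf                           # exclude the k = 0 term
    return np.sum(np.sqrt(eps_pi/eps)) / L**d

for L in (8, 16, 32):
    print(L, J(3, L))      # approaches 1.15672... as L grows
\end{verbatim}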
\subsection{The Random Path Model}\label{sec:rpmpath}
The Random Path Model (RPM) was introduced in \cite{L-T}.
It can be viewed as a random loop model with an arbitrary number of
coloured loops and walks, with loops and walks possibly sharing the same edge and, at every vertex, a pairing function which pairs up links touching that vertex or leaves them unpaired.
It was shown in \cite{L-T} that, for different choices of the parameters of the RPM, we can obtain many interesting models such as the loop $O(N)$ model, the spin $O(N)$ model, the dimer and double-dimer model and random lattice permutations.
Here we introduce the RPM in a more general setting than in \cite{L-T}. Such a generalisation consists of allowing pairings
of links with different colours and allows us to derive site monotonicity properties for a more general class of two-point functions, for example, for the probability that a loop connects two distinct vertices of the torus.
Let $\mathcal{E}_L$ be the set of edges connecting nearest neighbour vertices of the torus.
Let $m=(m_e)_{e\in\Ecal_L}\in\N^{\Ecal_L}$ be an assignment of a number of \emph{links} to each edge of $\Ecal_L$ and, for $N\in\N_{>0}$, let $c(m)\in\bigtimes_{e\in\Ecal_L}\big(\{1,\dots,N\}^{m_e}\big)$ be a function, which we call a \emph{colouring}, that for each $e\in\Ecal_L$ assigns to each of the $m_e$ links on $e$ a colour in $\{1,\dots,N\}$. Lastly we define $\pi(m,c(m))=(\pi_x(m,c(m)))_{x\in\T_L}$, consisting of a collection of partitions of links. $\pi_x(m,c(m))$ is a partition of the links incident to $x$ into sets with at most two links each. If, for some $x\in\T_L$, two links are in the same element of the partition at $x$ then we say the links are \emph{paired at $x$} and call this element a \emph{pairing}. If a link is not paired to any other link at $x$ then we say the link is \emph{unpaired at $x$}. Links can be paired or unpaired at both end points of their corresponding edge. We denote by $\Wcal_L$ the set of all such triples $(m,c(m),\pi(m,c(m)))$ and refer to elements $w=(m(w),c(w),\pi(w))\in\Wcal_L$ as \emph{configurations}. Configurations can be interpreted as a collection of multicoloured loops and walks on $(\T_L,\Ecal_L)$.
Now for $x\in\T_L$ and $i\in\{1,\dots,N\}$ let $u^i_x$ be the number of unpaired links of colour $i$ at $x$, let $K_x$ be the number of pairings at $x$ between two differently coloured links, and let $n_x$ be the number of elements of $\pi_x$. If $K_x=0$ we define $v^i_x$ to be the number of pairings at $x$ between links with colour $i$, otherwise we define $v^i_x=0$. Finally let $t_x$ be the number of pairings at $x$ between links on the same edge (this is required to recover, for example, the spin $O(N)$ model from the RPM).
Let $U:\N^{2N+3}\to \R$ and $\beta\geq 0$. We define our measure $\mu_{L,N,\beta,U}$ on $\Wcal_L$ as
\begin{equation}
\mu_{L,N,\beta,U}(w)=\prod_{e\in\Ecal_L}\frac{\beta^{m_e(w)}}{m_e(w)!}\prod_{x\in\T_L}U_x(w)\qquad \forall w\in\Wcal_L
\end{equation}
where $U_x(w)=U(u^1_x,\dots,u^N_x,v^1_x,\dots,v^N_x,K_x,n_x,t_x)$. We refer to $U$ as a vertex \emph{weight function}. For $f:\Wcal_L\to\R$ we use the same notation for the expectation of $f$, $\mu_{L,N,\beta,U}(f):=\sum_{w\in\Wcal_L}f(w)\mu_{L,N,\beta,U}(w)$.
The measure $\mu_{L, N,
\beta, U }$ was proven to be reflection positive for reflections through edges in \cite[Proposition 3.2]{L-T}. The same result holds for the more general
random path model defined in this note, since allowing pairings of links with different colours does not modify the proof.
It can be shown that the random path model fits the general framework introduced in the present note, by considering local state spaces for $x\in\T_L$ that consist of a specification of the number of coloured links on each edge incident to $x$ (an element of $\N^{2dN}$) together with a function that maps $\N^{2dN}$ to partitions of $\sqcup_{m\geq0}\{1,\dots,m\}$. The measure is then supported on configurations whose functions partition the correct value of $m$ (the value corresponding to the total number of incident links) at each $x\in\T_L$ and which, for each $e\in\Ecal_L$ specify the same link numbers on $e$ for both end points of $e$.
Suppose that $U_x(w)=0$ whenever $K_x\neq 0$; then $\mu_{L,N,\beta,U}$ is supported on configurations of monochromatic loops and walks. From this we can recover the RPM introduced in \cite{L-T}, which reduces to the specific examples mentioned above if we further specify $U$ in an appropriate way. In this case we could take
\begin{equation}
\langle\cdot\rangle=\frac{1}{Z^{loop}_{L,N,\beta,U}}\mu_{L,N,\beta,U}(\cdot)
\end{equation}
where $Z^{loop}_{L,N,\beta,U}$ is the total measure under $\mu_{L,N,\beta,U}$ of configurations with only loops. We then take
\begin{equation}
F^1_x=\1_{u^1_x=0}\qquad \text{ and } F^2_x=\1_{u^1_x=1}
\end{equation}
and find that $G_L(x,y)$ corresponds to the two-point function introduced in \cite{L-T}; when $U$ is chosen appropriately this is equal to the spin-spin correlation of the spin $O(N)$ model. From this we can recover Theorems 2.4, 2.6 and 2.8 in \cite{L-T}.
Now suppose that $N > 1$, that $U_x$ allows links of different colours to be paired, and that it is $0$ if $\sum_i u^i_x\neq 0$ (meaning the model only has loops and no walks). Our linear functional $\langle\cdot\rangle$ could then be given by
\begin{equation}
\langle\cdot\rangle=\frac{1}{Z^{mono}_{L,N,\beta,U}}\mu_{L,N,\beta,U}(\cdot)
\end{equation}
where $Z^{mono}_{L,N,\beta,U}$ is the total measure under $\mu_{L,N,\beta,U}$ of configurations with $\sum_xK_x=0$ and only loops.
Now we take
\begin{equation}
F^1_x=\1_{K_x=0}\quad \text{ and }\quad F^2_x=\1_{K_x=1}.
\end{equation}
We have that $G_L(x,y)=2\binom{N}{2}\P(x\leftrightarrow y)$ where the probability is in the system with only monochromatic loops with colours in $\{1,\dots,N\}$ and there are no walks.
The event $x\leftrightarrow y$ is the event that there is a loop that passes through $x$ and $y$.
Theorem \ref{thm:monotonicity} leads then to the following theorem.
\begin{thm}
Let $\mathbb{P}( x \leftrightarrow y )$ be the probability that
a loop passes through $x$ and $y$ in the random path model with only monochromatic loops and no open paths.
For any $z=(z_1,\dots,z_d)$,
\begin{align}
\P(o\leftrightarrow z)&\leq \P(o\leftrightarrow z_i\boldsymbol{e}_i) \qquad \qquad\qquad\qquad\qquad\quad \,\,& \text{ if } z_i\in2\Z+1,
\\
\P(o\leftrightarrow z)&\leq \tfrac12\P(o\leftrightarrow (z_i-1)\boldsymbol{e}_i)+\tfrac12 \P(o\leftrightarrow (z_i+1)\boldsymbol{e}_i) \, & \text{ if } z_i\in 2\Z\setminus \{0\},
\end{align}
Moreover, for $y\in\T_L$ such that $y\cdot\boldsymbol{e}_i=0$, the function
\begin{equation}
\P(o\leftrightarrow y+n\boldsymbol{e}_i)+\P(o\leftrightarrow n\boldsymbol{e}_i)
\end{equation}
is a non-increasing function of $n$ for all odd $n\in(0,L/2)$.
\end{thm}
Note that $\mathbb{P}( x \leftrightarrow y )$ equals the probability that a loop connects $x$ and $y$ in the loop O(N) model, in the double dimer model, in lattice permutations or in the loop representation of the spin O(N) model under an appropriate choice of $U$ \cite{L-T}.
Further, it has been proven \cite{B-U} that, when $U$ is chosen appropriately, such a probability equals the following correlation, $\mathbb{P}( x \leftrightarrow y ) = \langle S_x^1 S_x^2 S_y^1 S_y^2 \rangle$, in the spin O(N) model with $N> 1$, hence our theorem provides monotonicity properties for such a four-spin correlation function.
\section{Proof of Theorem \ref{thm:monotonicity}}\label{sec:proofmonotonicity}
Suppose that $\langle\cdot\rangle$ is reflection positive with respect to the reflection $\vartheta$. Let $Q\subset \T_L$ and define $Q^{\pm}:=(Q\cap \T_L^{\pm})\cup\vartheta(Q\cap\T_L^{\pm})$. The key to the proof is the following lemma.
\begin{lem} \label{lem:keylem}
For $Q\subset\T_L$
\begin{equation}
\sum_{\substack{x,y\in Q\\ x\neq y}}G_L(x,y)\leq \frac12 \sum_{\substack{x,y\in Q^+\\ x\neq y}}G_L(x,y)+\frac12\sum_{\substack{x,y\in Q^-\\ x\neq y}}G_L(x,y).
\end{equation}
\end{lem}
\begin{proof}
For $0<\eta \ll 1$ we consider the following functions
\begin{equation}
A=\prod_{x\in Q\cap\T_L^+}(1+\eta F^2_x\prod_{z\in\T^+_L\setminus\{x\}}F^1_z),\qquad B=\prod_{x\in Q\cap\T_L^-}(1+\eta F^2_{\vartheta x}\prod_{z\in\T^-_L\setminus\{ x\}}F^1_{\vartheta z}).
\end{equation}
Now for simplicity of notation we write $\T_L(x)$ for $\T_L^+\setminus\{x\}$ if $x\in \T_L^+$ and $\T_L^-\setminus\{x\}$ if $x\in \T_L^-$.
A simple calculation gives
\begin{equation}
\begin{aligned}
\langle A\vartheta B\rangle&=\big\langle\prod_{x\in Q}\big(1+\eta F^2_x\prod_{z\in\T_L(x)}F^1_z\big)\big\rangle
\\
&=1+\eta\sum_{x\in Q}\big\langle F^2_x\prod_{z\in\T_L(x)}F^1_z\big\rangle+\eta^2\sum_{\substack{x,y\in Q\\ x\neq y}}\big\langle F^2_xF^2_y\prod_{z\in\T_L(x)}F^1_z\prod_{z\in\T_L(y)}F^1_z\big\rangle +O(\eta^3),
\end{aligned}
\end{equation}
and analogously
\begin{align}
\langle A\vartheta A\rangle&=1+\eta\sum_{x\in Q^+}\big\langle F^2_x\prod_{z\in\T_L(x)}F^1_z\big\rangle+\eta^2\sum_{\substack{x,y\in Q^+\\ x\neq y}}\big\langle F^2_xF^2_y\prod_{z\in\T_L(x)}F^1_z\prod_{z\in\T_L(y)}F^1_z\big\rangle +O(\eta^3),
\\
\langle B\vartheta B\rangle&=1+\eta\sum_{x\in Q^-}\big\langle F^2_x\prod_{z\in\T_L(x)}F^1_z\big\rangle+\eta^2\sum_{\substack{x,y\in Q^-\\ x\neq y}}\big\langle F^2_xF^2_y\prod_{z\in\T_L(x)}F^1_z\prod_{z\in\T_L(y)}F^1_z\big\rangle +O(\eta^3).
\end{align}
Now suppose that $x,y\in Q\cap\T_L^+$, then $x,y,\vartheta x,\vartheta y\in Q^+$ and we further note that
\begin{equation}\label{eq:mixedterms}
\big\langle F^2_xF^2_y\prod_{z\in\T_L(x)}F^1_z\prod_{z\in\T_L(y)}F^1_z\big\rangle=\big\langle F^2_{\vartheta x}F^2_{\vartheta y} \prod_{z\in\T_L(\vartheta x)}F^1_z\prod_{z\in\T_L(\vartheta y)}F^1_z\big\rangle.
\end{equation}
An analogous identity holds for $x,y\in Q\cap \T_L^-$.
Now we use \eqref{eq:refpos}. Note that the terms of order $\eta$ cancel by \eqref{eq:refinvariance}.
It remains to compare the $\eta^2$ terms. The terms $\big\langle F^2_xF^2_y\prod_{z\in\T_L(x)}F^1_z\prod_{z\in\T_L(y)}F^1_z\big\rangle$ with $x,y\in Q\cap\T_L^{\pm}$ cancel due to \eqref{eq:mixedterms}. Using \eqref{eq:refinvariance} repeatedly on the remaining terms to group those terms that are equal gives the result.
\end{proof}
We take $Q=\{o,z\}$ and $\vartheta$ the reflection in the plane bisecting $\{p\boldsymbol{e}_i,(p+1)\boldsymbol{e}_i\}$ for $p:=\tfrac12 (z\cdot \boldsymbol{e}_i-1+q)$; this requires $z\cdot\boldsymbol{e}_i+q\in2\N+1$ and $z\cdot\boldsymbol{e}_i\pm q\in(0,L)$. If we take $q=0$ when $z_i\in 2\N+1$ and $q=1$ when $z_i\in2\N\setminus\{0\}$, then Lemma \ref{lem:keylem} gives us \eqref{eq:oddinequality} and \eqref{eq:eveninequality}.
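For concreteness, let us spell out this application of the lemma (a worked instance; here we also use the translation invariance of the model on the torus and the symmetry $G_L(o,x)=G_L(o,-x)$). For $Q=\{o,z\}$ (say $o\in\T_L^-$ and $z\in\T_L^+$) we have $Q^+=\{z,\vartheta z\}$ and $Q^-=\{o,\vartheta o\}$ with $\vartheta o=(z_i+q)\boldsymbol{e}_i$ and $\vartheta z=z-(z_i-q)\boldsymbol{e}_i$, so that Lemma \ref{lem:keylem} reads
\begin{equation}
G_L(o,z)\leq \tfrac12 G_L(z,\vartheta z)+\tfrac12 G_L(o,\vartheta o)=\tfrac12 G_L\big(o,(z_i-q)\boldsymbol{e}_i\big)+\tfrac12 G_L\big(o,(z_i+q)\boldsymbol{e}_i\big),
\end{equation}
which is \eqref{eq:oddinequality} for $q=0$ and \eqref{eq:eveninequality} for $q=1$.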
If we also have reflection positivity for reflections through sites then we can reflect in the plane $R=\{x\in\R\,:\, x\cdot\boldsymbol{e}_i=\tfrac12(z\cdot\boldsymbol{e}_i+q)\}$, requiring that $z\cdot\boldsymbol{e}_i+q$ is even. If we apply Lemma \ref{lem:keylem} with $q=0$ we find that for $z\cdot\boldsymbol{e}_i\in2\N\setminus\{0\}$ we also have \eqref{eq:oddinequality}.
For the monotonicity result \eqref{eq:oddmonotonicity} we take $Q=\{o,z,z_i\boldsymbol{e}_i,z-z_i\boldsymbol{e}_i\}$ with the same reflection as above. We define the function
\begin{equation}
G^{\boldsymbol{e}_i}_L(x):=\tfrac12\big(G_L(o,x)+G_L(o,(x\cdot\boldsymbol{e}_i)\boldsymbol{e}_i)\big),
\end{equation}
and find, using Lemma \ref{lem:keylem} together with \eqref{eq:refinvariance} and after rearranging, that for $z_i+q$ odd
\begin{equation}\label{eq:iterativeinequality}
G^{\boldsymbol{e}_i}_L(z+q\boldsymbol{e}_i)-G^{\boldsymbol{e}_i}_L(z)\geq G^{\boldsymbol{e}_i}_L(z)-G^{\boldsymbol{e}_i}_L(z-q\boldsymbol{e}_i).
\end{equation}
The proof follows the proof of \cite[Proposition 4.2]{L-T}.
We can now prove \eqref{eq:oddmonotonicity} by contradiction. Suppose that $y\in\T_L$ with $y\cdot\boldsymbol{e}_i=0$ and an odd $n\in(0,L/2)$ satisfy $G^{\boldsymbol{e}_i}_L(y+n\boldsymbol{e}_i)>G^{\boldsymbol{e}_i}_L(y+(n-2)\boldsymbol{e}_i)$. By using \eqref{eq:iterativeinequality} with $q=2$ we find
\begin{equation}
G^{\boldsymbol{e}_i}_L(y+(n+2)\boldsymbol{e}_i)-G^{\boldsymbol{e}_i}_L(y+n\boldsymbol{e}_i)\geq G^{\boldsymbol{e}_i}_L(y+n\boldsymbol{e}_i)-G^{\boldsymbol{e}_i}_L(y+(n-2)\boldsymbol{e}_i)>0,
\end{equation}
and, iterating, the increments remain strictly positive, so that $G^{\boldsymbol{e}_i}_L(y+m\boldsymbol{e}_i)$ is strictly increasing along $m=n,n+2,n+4,\dots$ Once we have used this inequality $(L-2n)/2$ times we find $G^{\boldsymbol{e}_i}_L(y+(L-n)\boldsymbol{e}_i)>G^{\boldsymbol{e}_i}_L(y+n\boldsymbol{e}_i)$. On the torus $y+(L-n)\boldsymbol{e}_i=y-n\boldsymbol{e}_i$ and, by reflection invariance, we must have $G^{\boldsymbol{e}_i}_L(y-n\boldsymbol{e}_i)=G^{\boldsymbol{e}_i}_L(y+n\boldsymbol{e}_i)$. This contradiction completes the proof of \eqref{eq:oddmonotonicity}. If, in addition, we have reflection positivity for reflections through sites we can use the reflection in $R=\{x\in\R\,:\, x\cdot\boldsymbol{e}_i=\tfrac12(z\cdot\boldsymbol{e}_i+q)\}$. We then obtain the inequality \eqref{eq:iterativeinequality} for $z_i+q$ even. By alternating between the odd and the even versions of \eqref{eq:iterativeinequality} with $q=1$, a contradiction can be obtained as before, and we find that for any $y\in\T_L$ such that $y\cdot\boldsymbol{e}_i\pm 1\in(0,L)$
\begin{equation}
G^{\boldsymbol{e}_i}_L(y+\boldsymbol{e}_i)-G^{\boldsymbol{e}_i}_L(y)\geq G^{\boldsymbol{e}_i}_L(y)-G^{\boldsymbol{e}_i}_L(y-\boldsymbol{e}_i).
\end{equation}
The full monotonicity result then follows similarly to \eqref{eq:oddmonotonicity}.
\section{Proof of Theorem \ref{thm:positivity}}\label{sec:proofpositivity}
We start with the proof of (i) and we present the proof of (ii) and (iii) afterwards.
To begin, fix an arbitrary $\varphi \in (0, C_1)$. We claim that there must exist an $\epsilon > 0$ small enough such that for any $L \in 2 \mathbb{N}$ there exists $z_L \in \T_L \setminus [0, \epsilon L]^d $ such that $G_L(o,z_L) \geq \varphi$.
The proof of this claim is by contradiction. Suppose that this were not the case; then, under the assumptions of the theorem, we would have that
$$
\sum_{x \in \T_L} G_L(o,x) \leq \, \varphi \, \big( L^d - \lceil \epsilon L \rceil^d \big) \, + \, M \lceil \epsilon L \rceil^d,
$$
which would be in contradiction with (\ref{eq:FSS}) for small enough $\epsilon$, since we assumed that $\varphi< C_1$.
Now define $y_L : = z_L \cdot \boldsymbol{e}_1$. If $y_L$ is odd, we use the first claim in Theorem \ref{thm:monotonicity} and deduce that
$
G_L \big (o, y_L \boldsymbol{e}_1 \big ) \, \, \geq \, \, \varphi,
$
otherwise we use the second claim in Theorem \ref{thm:monotonicity} and deduce that,
$
\max \big \{G_L \big (o, (y_L + 1) \boldsymbol{e}_1 \big ), G_L \big (o, (y_L - 1) \boldsymbol{e}_1 \big ) \big \} \, \, \geq \, \, \frac{\varphi}{2}.
$
Using the fact that $y_L+ 1 \geq \epsilon L$ and the last claim in Theorem \ref{thm:monotonicity}, we deduce that, for any odd integer $n$ in the interval $(0, \epsilon L)$,
$
G_L \big (o, n \boldsymbol{e}_1 \big ) \geq \frac{\varphi}{2}.
$
This concludes the proof of (i).
We now proceed with the proof of (ii) and (iii).
To begin, for $z\in\T_L$ we define
\begin{equation}
\Q_z:=\{(x_1,\dots,x_d)\in\Z^d\,:\, \forall i\in\{1,\dots,d\},\, x_i\leq |z\cdot\boldsymbol{e}_i| \text{ or } x_i>L-|z\cdot\boldsymbol{e}_i|\}.
\end{equation}
The proof relies on the following lemmas.
\begin{lem}\label{lem:keylem2}
Let $z\in\T_L$ and $y\in\Q_z$ be such that $z_i$ and $y_i$ are odd for every $i\in\{1,\dots,d\}$. Then, under the same assumptions as in Theorem \ref{thm:positivity},
\begin{equation}
G_L(o,y)\geq 2^dG_L(o,z) -(2^d-1)M.
\end{equation}
If, in addition, $\langle\cdot\rangle$ is reflection positive for reflections through vertices then the inequality holds for any $z\in\T_L$ and $y\in\Q_z$.
\end{lem}
\begin{proof}
The proof is as in the proof of \cite[Proposition 4.7]{L-T} with minor changes as we only have the monotonicity result \eqref{eq:oddmonotonicity} for odd $n$. For convenience we assume that $z_i,y_i>0$ for every $i\in\{1,\dots,d\}$, other cases follow by symmetry. For $i\in\{1,\dots, d\}$ define
\begin{equation}
D_i:=(z-y)\cdot \boldsymbol{e}_i,
\end{equation}
then $D_i\in 2\N$. There is a ``path''
\begin{equation}
(z^1_0,z^1_1,\dots,z^1_{D_1/2},z^2_0,z^2_1,\dots,z^2_{D_2/2},\dots,z^d_0,z^d_1,\dots,z^d_{D_d/2})
\end{equation}
with the properties that $z^1_0=z$, $z^d_{D_d/2}=y$, and, for every $i\in\{1,\dots,d-1\}$, $z^i_{D_i/2}=z^{i+1}_0$. Further, for each $i\in\{1,\dots,d\}$ and $j\in [1,D_i/2]$
\begin{equation}
z^i_{j-1}-z^i_j=2\boldsymbol{e}_i.
\end{equation}
Now we use both \eqref{eq:oddinequality} and \eqref{eq:oddmonotonicity},
\begin{equation}
\begin{aligned}
2G_L(o,z^i_0) &\leq G_L(o,z^i_0) +G_L(o,(z^i_0\cdot\boldsymbol{e}_i)\boldsymbol{e}_i)
\\
&\leq G_L(o,z^i_{D_i/2}) + G_L(o,(z^i_{D_i/2}\cdot\boldsymbol{e}_i)\boldsymbol{e}_i),
\end{aligned}
\end{equation}
and hence using that $G_L(o,x)\leq M$ for any $x\in\T_L$ we have that
\begin{equation}
G_L(o,z^i_{D_i/2}) \geq 2G_L(o,z^i_0)-M.
\end{equation}
Iterating this for $i=1,\dots, d$ gives
\begin{equation}
\begin{aligned}
G_L(o,y)=G_L(o,z^d_{D_d/2})&\geq 2G_L(o,z^d_0)-M\geq \dots
\\
& \geq 2^dG_L(o,z)-(2^d-1)M,
\end{aligned}
\end{equation}
which completes the proof. If $\langle\cdot\rangle$ is also reflection positive for reflections through vertices, the proof is exactly as in \cite[Proposition 4.7]{L-T}: we define the $D_i$'s and the path $(z_0^1,\dots, z^d_{D_d/2})$ as before, except that we can take $z^i_{j-1}-z^i_j=\boldsymbol{e}_i$; the rest of the proof then proceeds as before.
\end{proof}
Now, for $r\in \N$ let
\begin{equation}
\mathbb{S}_{r,L}:=\{z\in\T_L\, :\, \exists i\in\{1,\dots,d\}\text{ such that } z\cdot \boldsymbol{e}_i<r\text{ or }L-z\cdot\boldsymbol{e}_i\leq r\}.
\end{equation}
\begin{lem}\label{lem:keylem3}
Under the same assumptions as in Theorem \ref{thm:positivity}, there are $x_L\in \T_L\setminus\mathbb{S}_{\varepsilon L,L}$ and $z_L\in\T_L\setminus\mathbb{S}_{\varepsilon L,L}$ with $|z_L\cdot\boldsymbol{e}_i|\in 2\N+1$ for every $i\in\{1,\dots,d\}$ such that
\begin{align}
G_L(o,x_L)&\geq M-(1-2\varepsilon)^{-d}(M-C_1), \label{eq:xL}
\\
G_L(o,z_L)&\geq M-\big(\tfrac12-\varepsilon\big)^{-d}(M-C_1) \label{eq:zL}.
\end{align}
\end{lem}
\begin{proof}
The proof of \eqref{eq:xL} is exactly as in \cite[Lemma 4.9]{L-T}. The proof of \eqref{eq:zL} is a simple adaptation of \cite[Lemma 4.9]{L-T}, which we sketch here.
A simple proof by contradiction shows that there must be a $z_L$ as in the statement of the lemma. Indeed, suppose that for every $z_L\in\T_L$ with $|z_L\cdot\boldsymbol{e}_i|\in [\varepsilon L,L)\cap(2\N+1)$ for every $i\in\{1,\dots,d\}$ we had $G_L(o,z_L)< M-\big(\tfrac12-\varepsilon\big)^{-d}(M-C_1)$. Using this together with the worst-case bound $M$ for every other vertex and the bound $|\T_L\setminus \mathbb{S}_{r,L}|=(L-2r)^d$ gives a contradiction.
\end{proof}
Statement (i) of Theorem \ref{thm:positivity} follows immediately from \eqref{eq:xL} and Theorem \ref{thm:monotonicity}. For statement (ii) of Theorem \ref{thm:positivity} note that if $z_L$ is as in the statement of Lemma \ref{lem:keylem3} then, by Lemma \ref{lem:keylem2}, for any $y\in\mathbb{Q}_{z_L}$ such that $y_i$ is odd for each $i\in\{1,\dots,d\}$ we have (after rearranging)
\begin{equation}
G_L(o,y)\geq 2^dG_L(o,z_L)-(2^d-1)M\geq M-2^d\big(\tfrac12-\varepsilon\big)^{-d}(M-C_1),
\end{equation}
which is the bound stated in the theorem.
Finally for statement (iii) of Theorem \ref{thm:positivity} we note that by Lemmas \ref{lem:keylem2} and \ref{lem:keylem3} for any $y\in\Q_{x_L}$ we have (after rearranging)
\begin{equation}
G_L(o,y)\geq 2^dG_L(o,x_L)-(2^d-1)M\geq M-2^d(1-2\varepsilon)^{-d}(M-C_1).
\end{equation}
\nocite{*}
\section{Introduction}
In the design of beam transfer lines, one often encounters
the problem of finding a combination of quadrupole lenses
and field free spaces (drifts) that will produce particular
transfer matrices in both the horizontal and the vertical
planes. Nowadays this problem is typically approached
with the help of computer routines which minimize the
deviations from the desired matrices as function of the
quadrupole strengths, lengths and distances between them.
Although very sophisticated software became available for
these purposes during the past decades, there is an important
theoretical question which has not been answered yet and
whose answer could affect the strategy and efficiency of
numerical computations. Searching for a numerical solution,
one has to remember that it is not proven yet that an arbitrary
four by four uncoupled beam transfer matrix can be represented
by using a finite number of drifts and quadrupoles
(representation problem), and the answer to this question is
not known not only for more or less realistic quadrupole
field models but also for both of the most commonly used
approximations of quadrupole focusing, namely thick
and thin quadrupole lenses.
In this paper we make a step forward in resolving the
representation problem and prove that an arbitrary four
by four uncoupled beam transfer matrix actually can be
obtained as a product of a finite number of thin lenses and
drifts. Even though our proof uses more thin lenses than
probably needed, we believe that the solution provided is
not only of theoretical interest, but could also find some
practical applications because it uses explicit analytical
formulas connecting thin-lens parameters with the elements
of the input beam transfer matrix.
Though the thin-lens kick is the simplest model of the
quadrupole focusing, its role in accelerator physics can
hardly be overestimated. The thin-lens quadrupole approximation
reveals the analogy between light optics and charged
particle optics and, if one takes into account difficulties of
analytical manipulations with the next model in complexity, the
thick-lens quadrupole model ~\cite{Regenstreif_1, Regenstreif_2},
is an indispensable tool for
understanding principles and limitations of the already
available optics modules and for development of the new
optics solutions (see, as good examples,
papers ~\cite{BrownServranckx, MontagueRuggiero, Zotter, Napoly, dAmigoGuignard}).
The paper by itself is organized as follows. In Sec. II we
introduce all needed notations and give the lower bound on
the number of drifts and lenses which are required for a
solution of the representation problem by providing an
example of a matrix which cannot be obtained using five
thin lenses and five independently variable drift spaces. This
result is somewhat unexpected and to some extent contradicts
a rather widespread opinion that the typical problem
can be solved by taking a number of parameters equal to the
number of constraints available. We see that although
the four by four uncoupled beam transfer matrix has only
6 degrees of freedom, there are matrices which cannot be
represented not only by three thin lenses and three drifts
(six parameters), but also by five thin lenses and five drifts
(ten parameters). This example, the example provided by the
matrix (\ref{i2_gb_ex2}), other of our attempts (though omitted in this
paper) to find thin-lens decompositions for particular beam
transfer matrices and the properties of the explicit solution
given below in this paper, lead us to the conjecture that in
order to represent an arbitrary four by four uncoupled beam
transfer matrix one needs at least six thin lenses if the
distances between them can be varied (independently or
not) or at least seven thin lenses if this variation is not
allowed.
In Sec. III we prove that an arbitrary four by four
uncoupled beam transfer matrix can be obtained as a
product of a finite number of thin lenses and drifts by
giving an explicit solution of the thin-lens representation
problem which uses equally spaced thin lenses. The core
idea of our approach is the representation of the matrix of
the thin-lens multiplet as a product of elementary $P$ matrices
(the definition and the properties of the matrix $P$ can be
found in Appendix A) with subsequent reduction of the
initial 2D problem to two independent 1D problems. We
use in this section the equally spaced thin-lens system
because it allows one to make such a reduction with a
minimum of technical details. The solution obtained
utilizes 13 lenses if the spacing between them is fixed
beforehand and 12 lenses if this distance can be used as
an additional parameter. Thus, it uses six more lenses than
the minimal number stated in our conjecture, but the
setting of these six lenses depends only on the distance
between lenses and therefore does not depend (at least
directly) on the particular input beam transfer matrix.
In Sec. IV we consider the case of arbitrarily spaced thin
lenses. First, we show that the solution of the representation
problem presented in the previous section is still valid
after some minor modifications. Next we study in greater
detail the ways to transform the matrix of the drift-lens
system to the product of the elementary $P$ matrices (see formulas
(\ref{b2})-(\ref{b6_2}) and (\ref{ff_1})-(\ref{ff_6}) below).
The representation of the matrix of the thin-lens multiplet as a product
of elementary $P$ matrices (together with the multiplication formula (\ref{e1_r3}))
is a useful new tool for the analytical study
of the properties of thin-lens systems. It also gives some
clarification of the question why the role of the variable
drift spaces and the role of the variable lens strengths
are different when they are used as fitting parameters.
This paper is mostly a theoretical paper and its main
purpose is to turn the common belief that an arbitrary four
by four uncoupled beam transfer matrix can be obtained as
a product of a finite number of thin lenses and drifts into
a proven scientific fact. Still, both the new technique developed
for the analytical study of the properties of thin-lens
multiplets and the explicit thin-lens solution presented in
this paper are of independent interest. To illustrate that, in
Appendix B we apply our $P$ matrix approach to the study
of four-lens beam magnification telescopes and find new,
previously unknown analytical solutions for this important
optics module. In Appendix C, we apply the explicit
solution developed in this paper to the design of a beam
line which allows an independent scan of horizontal and
vertical phase advances while preserving the entrance and
exit matching conditions for the Twiss parameters.
Besides that, the thin-lens blocks with decoupled transverse
actions introduced in this paper are another point of general
interest. Although the idea of decoupled tuning knobs by
itself is not new in the field of accelerator physics (see, for
example, ~\cite{Roser, WalkerIrwinWoodley}),
our approach is new and is not based on an iterative usage of small
steps in the lens strengths obtained at each iteration by linearization.
\section{Statement of the problem and preliminary considerations}
Let $M$ be an arbitrary four by four uncoupled beam
transfer matrix and let the two by two symplectic matrices
$M_x$ and $M_y$ be its horizontal and vertical focusing blocks,
respectively. Let us denote by $Q(g)$ the transfer matrix of
the one-dimensional thin lens of strength $g$ and by $D(l)$
the transfer matrix of the one-dimensional drift space of length $l$:
\noindent
\begin{eqnarray}
Q(g) \,=\,
\left(
\begin{array}{rr}
1 & 0 \\
g & 1
\end{array}
\right),
\;\;\;\;\;
D(l) \,=\,
\left(
\begin{array}{rr}
1 & l \\
0 & 1
\end{array}
\right).
\label{i1}
\end{eqnarray}
\noindent
The problem of representation of the matrix $M$
by a thin-lens system can then be written as
\noindent
\begin{eqnarray}
D(l_n)\, Q(\pm g_n) \cdot \ldots \cdot D(l_1)\, Q(\pm g_1) \,=\, M_{x, y},
\label{i2}
\end{eqnarray}
\noindent
where (here and later on) one has to take the upper sign in
the combinations $\pm$ and $\mp$ together with the index $x$ and the
lower sign together with the index $y$.
Note that the drift-lens system presented on the left-hand
side of Eq. (\ref{i2}) consists of equal numbers of drifts and
lenses and the first element which the beam sees during
its passage is a thin lens.
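As an aid to the reader, here and below we occasionally accompany the formulas with short numerical sketches. These sketches are illustrations of ours only (we assume Python with NumPy; they are not part of any established beam optics package and play no role in the proofs). The first sketch builds the matrices (\ref{i1}) and the left-hand side of Eq. (\ref{i2}):
\begin{verbatim}
import numpy as np

def Q(g):  # thin lens of strength g
    return np.array([[1.0, 0.0], [g, 1.0]])

def D(l):  # drift space of length l
    return np.array([[1.0, l], [0.0, 1.0]])

def multiplet(gs, ls, sign):
    # D(l_n) Q(sign g_n) ... D(l_1) Q(sign g_1); sign = +1 (x), -1 (y)
    M = np.eye(2)
    for g, l in zip(gs, ls):
        M = D(l) @ Q(sign * g) @ M
    return M

gs, ls = [1.1, -0.7, 0.4], [0.5, 1.0, 0.8]   # a hypothetical triplet
Mx, My = multiplet(gs, ls, +1), multiplet(gs, ls, -1)
assert abs(np.linalg.det(Mx) - 1.0) < 1e-12  # both blocks stay symplectic
assert abs(np.linalg.det(My) - 1.0) < 1e-12
\end{verbatim}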
Alternatively, one can consider equation
\noindent
\begin{eqnarray}
Q(\pm g_n)\,D(l_n) \cdot \ldots \cdot Q(\pm g_1)\,D(l_1) \,=\, M_{x, y},
\label{i2_sfd}
\end{eqnarray}
\noindent
where the first element is a drift space, or one can use the
drift-lens system with a nonequal number of drifts and
lenses which starts and ends with a drift (or a lens), but
for the moment this is not important.
There are many unanswered questions related to Eq. (\ref{i2}),
the most interesting one for us in this paper being the following:
given a matrix $M$, does there exist a number $n$ such that
these equations have a solution? If the answer to this
question is positive, could the number $n$ be chosen
independently of the input matrix $M$ and, if so,
what is the minimal $n$ required?
From a mathematical point of view, Eq. (\ref{i2}) is a system
of eight polynomial equations in $2 n$ unknowns and for any
polynomial system considered over an algebraically closed
field of complex numbers there is an algorithmic way to
answer the question if this system has infinitely many
solutions or has a finite number of solutions, or has no
solutions at all. This can be done by transforming the
original system to a special form called a Gr$\ddot{\mbox{o}}$bner basis
and, very loosely speaking, is an analogue of the Gaussian
elimination process in linear algebra ~\cite{CoxLittleOshea}.
The Gr$\ddot{\mbox{o}}$bner basis can be computed in finitely many steps
and, moreover, nowadays its calculation can be done
with the help of symbolic manipulation programs like
MATHEMATICA and MAPLE.
Unfortunately, we are interested in the real solutions
of Eq. (\ref{i2}) constrained additionally by the requirements
for the drift lengths to be nonnegative
and therefore we cannot use all benefits provided by the Gr$\ddot{\mbox{o}}$bner basis theory.
Nevertheless, although the Gr$\ddot{\mbox{o}}$bner basis approach
did not help us to solve the problem in general,
it was very useful in providing examples of particular matrices
which cannot be obtained using a certain number of thin lenses and drift spaces.
For example, using the
Gr$\ddot{\mbox{o}}$bner basis technique,
it is possible to prove that the matrix $M$ with
\noindent
\begin{eqnarray}
M_x \,=\,M_y\,=\,
\left(
\begin{array}{rr}
1 & 0\\
-1 & 1
\end{array}
\right)
\label{i2_gb}
\end{eqnarray}
\noindent
cannot be represented by five thin lenses and five
variable drift spaces, starting either from a lens
as in Eq. (\ref{i2})
or from a drift
as in Eq. (\ref{i2_sfd}).
This example, the example provided by the matrix (\ref{i2_gb_ex2}),
many other of our attempts to study the representation problem
for particular beam transfer matrices,
and the properties of the explicit solution given below in this paper lead
us to the conjecture that in order to be able to represent
an arbitrary four by four uncoupled beam transfer matrix
one needs at least six thin lenses if the distances between them can
be varied (independently or not) or at least seven thin lenses
with nonzero drift spaces between them if this variation is not allowed.
To finish this section, let us note that
in the above discussions we made no use of the fact that we are interested
not in the general system of polynomial equations, but only in the polynomial
system produced by a product of matrices with simple inversion
properties:
\noindent
\begin{eqnarray}
Q^{-1}(g)\,=\,Q(-g),
\;\;\;\;\;
D^{-1}(l)\,=\,D(-l).
\label{i2_invp}
\end{eqnarray}
\noindent
Choosing some $k = 1, \ldots, n-1$
and using (\ref{i2_invp}),
one can rewrite the system (\ref{i2}) in the equivalent form:
\noindent
\begin{eqnarray}
D(l_k)\, Q(\pm g_k) \cdot \ldots \cdot D(l_1)\, Q(\pm g_1) \,=
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
Q(\mp g_{k+1}) \,D(-l_{k+1}) \cdot \ldots \cdot Q(\mp g_n) \,D(-l_n)\,
M_{x, y}.
\label{i2_do}
\end{eqnarray}
\noindent
This trick can be used for the elimination of a part of the unknowns
from the original system
by solving Eq. (\ref{i2_do}) with respect to the variables
$g_1, \ldots, g_k, l_1, \ldots, l_k$
or one may even think to construct an iterative
solution method
which could be considered as matrix version of the
method of successive elimination of unknowns
~\cite{Napoly, ChaoIrwin}.
This method was developed especially to deal with
the thin-lens multiplets and was
used in ~\cite{ChaoIrwin} in an attempt to characterize
all uncoupled beam transfer matrices which can be obtained by using three
thin lenses and three drift spaces.
Unfortunately, this approach did not give us any
noticeable additional simplifications
in the solution of the general representation problem.
\section{Solution of 2D problem using equally spaced thin lenses}
In this section we will give an explicit solution of the thin-lens representation
problem which uses equally spaced thin lenses.
Instead of Eq. (\ref{i2}) or Eq. (\ref{i2_sfd}), we will consider the system
\noindent
\begin{eqnarray}
B(m_n,\, \pm g_n, \,p_n) \cdot \ldots \cdot B(m_1,\, \pm g_1,\, p_1) \,=\, M_{x, y},
\label{a1_0}
\end{eqnarray}
\noindent
where as an elementary building block we take a thin lens sandwiched
between two drift spaces
\noindent
\begin{eqnarray}
B(m, \,\pm g, \,p) \,=\,D(p)\,Q(\pm g)\,D(m).
\label{a1}
\end{eqnarray}
If the block length $\,l = m + p > 0$, then
one can represent the block transfer matrix in the form
\noindent
\begin{eqnarray}
B(m, \,\pm g, \, p) \,=\,S^{-1}(m, \,p)\,P(2 \pm l g)\,S(m,\, p),
\label{a2}
\end{eqnarray}
\noindent
where
\noindent
\begin{eqnarray}
S(m,\,p) \,=\, \frac{1}{\sqrt{l}}
\left(
\begin{array}{rr}
1 & m \\
-1 & p
\end{array}
\right)
\label{a3}
\end{eqnarray}
\noindent
and
\noindent
\begin{eqnarray}
P(a) \,=\,
\left(
\begin{array}{rr}
a & 1 \\
-1 & 0
\end{array}
\right).
\label{a3_1}
\end{eqnarray}
\noindent
Note that the properties of the matrix $P$
(and other elementary matrices used in this paper)
can be found in Appendix A.
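The representation (\ref{a2}) is easy to confirm numerically (a Python/NumPy sketch of ours, with the same conventions as in the sketch above):
\begin{verbatim}
import numpy as np

def S(m, p):
    return np.array([[1.0, m], [-1.0, p]]) / np.sqrt(m + p)

def P(a):
    return np.array([[a, 1.0], [-1.0, 0.0]])

m, p, g, sign = 0.4, 0.9, 1.3, +1            # any values with m + p > 0
l = m + p
B = np.array([[1.0, p], [0.0, 1.0]]) \
    @ np.array([[1.0, 0.0], [sign * g, 1.0]]) \
    @ np.array([[1.0, m], [0.0, 1.0]])       # B = D(p) Q(sign g) D(m)
assert np.allclose(np.linalg.inv(S(m, p)) @ P(2 + sign * l * g) @ S(m, p), B)
\end{verbatim}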
Let us assume that in the system (\ref{a1_0})
all $m_k$ and all $p_k$ are equal to each other, i.e., that
\noindent
\begin{eqnarray}
m_1 = \ldots = m_n = m,
\;\;\;\;
p_1 = \ldots = p_n = p,
\label{es0}
\end{eqnarray}
\noindent
and let $\,l = m + p > 0$.
The principle simplification that occurs in this case
is that after the substitution of the representation (\ref{a2})
into Eq. (\ref{a1_0}) the matrices $S(m, p)$ and
$S^{-1}(m, p)$ cancel each other
and we obtain
\noindent
\begin{eqnarray}
P(2 \pm l g_n) \cdot \ldots \cdot P(2 \pm l \,g_1) \,=\, \hat{M}_{x, y},
\label{es1}
\end{eqnarray}
\noindent
where
\noindent
\begin{eqnarray}
\hat{M}_{x,y} \,=\, S(m, \,p) \,M_{x, y}\, S^{-1}(m,\, p).
\label{es2}
\end{eqnarray}
\noindent
Equations (\ref{es1}) give the dimensionless form of
Eq. (\ref{a1_0}) and, additionally,
one sees that while the original system (\ref{a1_0}) is formed by
the product of $2 n + 1$ interleaved thin-lens and drift matrices
(with neighboring drifts lumped together), the system (\ref{es1}) includes
only $n+2$ matrices depending on unknowns
(there are $n+2$ unknowns: $n$ lens strengths plus
two variables characterizing the block length and the position of the lens
inside the block)
and $n$ of them are $P$ matrices.
Nevertheless, the system (\ref{es1}) is still too complicated to
find easily its solutions (or even to prove their existence)
for an arbitrary matrix $M$ and with the number of lenses
$n$ equal to six or seven as required by our conjecture.
Instead we will provide an explicit solution which
utilizes 13 lenses if the parameters $m$ and $p$ are fixed
and are independent from the input matrix $M$,
and 12 lenses if $m$ and $p$ can be varied.
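The telescoping of the $S$ matrices that leads from (\ref{a1_0}) to (\ref{es1}) and (\ref{es2}) can likewise be checked numerically (a sketch as before):
\begin{verbatim}
import numpy as np

def S(m, p): return np.array([[1.0, m], [-1.0, p]]) / np.sqrt(m + p)
def P(a):    return np.array([[a, 1.0], [-1.0, 0.0]])
def D(l):    return np.array([[1.0, l], [0.0, 1.0]])
def Q(g):    return np.array([[1.0, 0.0], [g, 1.0]])

m, p, sign = 0.3, 0.7, +1                    # equal spacing, x-plane
gs = [1.2, -0.5, 0.9, 0.1]
l = m + p
M, lhs = np.eye(2), np.eye(2)
for g in gs:
    M = D(p) @ Q(sign * g) @ D(m) @ M        # product of the blocks B
    lhs = P(2 + sign * l * g) @ lhs          # product of the P matrices
assert np.allclose(lhs, S(m, p) @ M @ np.linalg.inv(S(m, p)))
\end{verbatim}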
The main idea of our solution is the reduction of the 2D problem (\ref{es1})
to two independent or, more exactly, almost independent 1D problems
by constructing thin-lens blocks
which can act in the horizontal and the vertical planes
similar to a single $P$ matrix, but whose actions for both planes
can be chosen independently.
At first we will consider a solution of the 1D problem in terms
of $P$ matrices. As the next step we will introduce a four-lens block with decoupled
transverse actions and then will give an explicit solution of the complete 2D problem.
Besides that we will discuss the recipe for constructing lens blocks
with decoupled transverse actions with more than four lenses.
Before giving the technical details
let us consider one more example obtained with the help of the
Gr$\ddot{\mbox{o}}$bner basis technique.
Let us assume that $m$ and $p$ are fixed and let the matrix $M$ be such
that the matrix $\hat{M}$ in (\ref{es1}) is equal to the symplectic unit matrix:
\noindent
\begin{eqnarray}
\hat{M}_x \,=\,\hat{M}_y\,=\,
\left(
\begin{array}{rr}
0 & 1\\
-1 & 0
\end{array}
\right).
\label{i2_gb_ex2}
\end{eqnarray}
\noindent
Then this matrix $M$ cannot be represented by fewer than seven thin lenses
and with seven lenses there are many solutions which
geometrically can be viewed as six distinct parallel straight lines in the
seven-dimensional
space of lens strengths.
\subsection{1D problem in terms of $P$ matrices}
According to our plan we will prove in this subsection that
every real symplectic $2 \times 2$ matrix $M = (m_{ij})$ can be
represented as a product of at most four $P$ matrices.
First, we will consider the
case of three $P$ matrices and will find that three $P$ matrices
are insufficient for the representation of an arbitrary $2 \times 2$
symplectic matrix.
Next we will switch to the case of four $P$ matrices and
will show that with four $P$ matrices a solution can always be found,
but it is always nonunique.
Let us start with the case of three $P$ matrices, i.e., from the
equation
\noindent
\begin{eqnarray}
P(z_3)\, P(z_2)\, P(z_1) \,=\,M.
\label{s1d1}
\end{eqnarray}
\noindent
This matrix equation is, in fact,
a system of four equations for the four matrix elements
\noindent
\begin{eqnarray}
\left\{
\begin{array}{l}
z_3 \cdot (z_1 \, z_2 - 1) - z_1 = m_{11}\\
z_2 = -m_{22}\\
z_2 \, z_3 - 1 = m_{12}\\
z_1 \, z_2 - 1 = -m_{21}
\end{array}
\right.
\label{s1d4_01}
\end{eqnarray}
\noindent
and, as it is well known, due to symplecticity of the matrices
on both sides of (\ref{s1d1}) these four equations should be
equivalent to some system consisting of three equations only.
In order to obtain such a system let us first substitute
$z_1 \, z_2 - 1 = -m_{21}$ into the first equation of the system
(\ref{s1d4_01}) and then plug $z_2 = -m_{22}$
in the equations three and four. Because in the resulting system
\noindent
\begin{eqnarray}
\left\{
\begin{array}{l}
z_1 = -m_{11} - m_{21} \cdot z_3\\
z_2 = -m_{22}\\
m_{22} \cdot z_3 = - 1 - m_{12}\\
m_{22} \cdot z_1 = - 1 + m_{21}
\end{array}
\right.
\label{s1d4_02}
\end{eqnarray}
\noindent
the fourth equation is equal to the first equation
multiplied by $m_{22}$ minus the third equation
multiplied by $m_{21}$, it can be omitted.
Thus we obtain that the system of the four third order
polynomial equations (\ref{s1d4_01}) is equivalent to the system
\noindent
\begin{eqnarray}
\left\{
\begin{array}{l}
z_1 = -m_{11} - m_{21} \cdot z_3\\
z_2 = -m_{22}\\
m_{22} \cdot z_3 = - 1 - m_{12}
\end{array}
\right.
\label{s1d4}
\end{eqnarray}
\noindent
which is linear in the unknowns $z_1$, $z_2$, and $z_3$.
Moreover, this system already has a triangular form and
its solvability depends only on the solvability of the
third equation with respect to the variable $z_3$.
Elementary analysis shows that there are three possibilities
for the solutions of the system (\ref{s1d4}).
If $m_{22} \neq 0$, then there exists a unique solution
\noindent
\begin{eqnarray}
z_1 = \frac{m_{21} - 1}{m_{22}}, \;\;\;
z_2 = -m_{22}, \;\;\;
z_3 = -\frac{m_{12} + 1}{m_{22}}.
\label{s1d2}
\end{eqnarray}
\noindent
If $m_{22} = 0$ and $m_{21} = 1$ (i.e., if
$M = -P(-m_{11})$), then there exists a one-parameter family of solutions:
\noindent
\begin{eqnarray}
z_1 + z_3 = -m_{11}, \;\;\;
z_2 = 0.
\label{s1d3}
\end{eqnarray}
\noindent
Finally, if $m_{22} = 0$ and $m_{21} \neq 1$, then there is no solution at all.
Very loosely speaking, the condition $m_{22} = 0$ defines the two-dimensional surface of
singularities in the three-dimensional space of $2 \times 2$ real symplectic matrices.
This surface, in the next turn, contains the one-dimensional curve selected by the
additional relation $m_{21} = 1$. If the matrix $M$ (represented as a point in our
three-dimensional space) lies outside of the surface of singularities, then a solution
for such a matrix exists and is unique. If the point representing the matrix $M$
belongs to the surface of singularities, then we either have many solutions
or none depending on whether this point lies on the above defined
one-dimensional curve or not.
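For illustration, the generic case $m_{22}\neq 0$ can be exercised numerically (a Python/NumPy sketch, as before):
\begin{verbatim}
import numpy as np

def P(a): return np.array([[a, 1.0], [-1.0, 0.0]])

M = np.array([[2.0, 3.0], [1.0, 2.0]])       # det M = 1 and m22 != 0
m11, m12, m21, m22 = M.ravel()
z1 = (m21 - 1.0) / m22
z2 = -m22
z3 = -(m12 + 1.0) / m22
assert np.allclose(P(z3) @ P(z2) @ P(z1), M)
\end{verbatim}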
Let us now turn our attention to the equation
\noindent
\begin{eqnarray}
P(z_4)\,P(z_3)\, P(z_2)\, P(z_1) \,=\,M,
\label{s1d5}
\end{eqnarray}
\noindent
which includes four $P$ matrices.
The system equivalent to this equation is given below:
\noindent
\begin{eqnarray}
\left\{
\begin{array}{l}
z_1 = m_{21} - (m_{11} + m_{21} \cdot z_4) \cdot z_3\\
z_2 = -m_{12} - m_{22} \cdot z_4\\
(m_{12} + m_{22} \cdot z_4) \cdot z_3 = m_{22} - 1
\end{array}
\right.
\label{s1d6}
\end{eqnarray}
\noindent
and the easiest way to obtain it is to substitute
into the system (\ref{s1d4}) the elements of the matrix $P^{-1}(z_4)\cdot M$
instead of the $m_{ij}$.
The system (\ref{s1d6}) is not linear anymore, but still
has a triangular form
and its solvability depends again only on the solvability of the
third equation with respect to the variables $z_3$ and $z_4$.
Because the matrix $M$ is nondegenerate, its elements
$m_{12}$ and $m_{22}$ cannot be equal to zero simultaneously
and therefore the expression $m_{12} + m_{22} \cdot z_4$
considered as a function of $z_4$
cannot be equal to zero in more than one point. It means
that the last equation in (\ref{s1d6}) always has
solutions and a good way to understand their
complete structure is to consider this equation
as the equation of a curve on the plane $(z_3, z_4)$.
If $m_{22} \cdot (m_{22}-1) \neq 0$ this curve is a hyperbola
with two separate branches, if $m_{22} = 1$ it is a degenerate
hyperbola consisting of two intersecting lines
$z_3 = 0$ and $z_4 = -m_{12}$, and, finally, if $m_{22} = 0$
we have a single straight line $z_3 = - m_{12}^{-1}$.
So we see that with the help of the four $P$ matrices a solution of our problem
can always be found and is always nonunique.
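For example (a sketch as before), a matrix with $m_{22}=0$ and $m_{21}\neq 1$, which admits no representation by three $P$ matrices, is handled by the triangular system (\ref{s1d6}) with $z_4$ as a free parameter:
\begin{verbatim}
import numpy as np

def P(a): return np.array([[a, 1.0], [-1.0, 0.0]])

M = np.array([[1.0, -2.0], [0.5, 0.0]])      # det M = 1, m22 = 0, m21 != 1
m11, m12, m21, m22 = M.ravel()
z4 = 2.0                                     # free: m12 + m22*z4 must be != 0
z3 = (m22 - 1.0) / (m12 + m22 * z4)
z2 = -m12 - m22 * z4
z1 = m21 - (m11 + m21 * z4) * z3
assert np.allclose(P(z4) @ P(z3) @ P(z2) @ P(z1), M)
\end{verbatim}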
\subsection{Four-lens block with decoupled transverse actions}
Let us denote by $W^{x,y}$ the following combination
of four $P$ matrices:
\noindent
\begin{eqnarray}
W^{x,y}=P(2 \pm l g_4) P(2 \pm l g_3) P(2 \pm l g_2) P(2 \pm l g_1),
\label{es3}
\end{eqnarray}
\noindent
which in the original variables (\ref{a1_0}) includes
four thin lenses (a four-lens block).
If one chooses $\delta = \pm 1$ and if one takes
\noindent
\begin{eqnarray}
g_2 = \frac{\delta \sqrt{3}}{l},\;\;\;\;\;
g_3 = -\frac{\delta \sqrt{3}}{l},
\label{es4}
\end{eqnarray}
\noindent
then the block matrix can be written as
\noindent
\begin{eqnarray}
W^{x, y} \,=\, -\Lambda^{-1} \left(\sqrt{u^{x,y}}\right)\,
P (w^{x,y})\,\Lambda \left(\sqrt{u^{x,y}}\right),
\label{es5}
\end{eqnarray}
\noindent
where $\Lambda(a) = \mbox{diag}(a, 1 / a)$ is a diagonal scaling matrix,
\noindent
\begin{eqnarray}
u^{x,y} \,=\, 2 \,\mp\, \delta \sqrt{3},\;\;\;\;\;
u^x \cdot u^y \,=\,1
\label{es6_3}
\end{eqnarray}
\noindent
and
\noindent
\begin{eqnarray}
w^x \,=\, 7 \,+ \,u^y \cdot l g_1 \,+\,
u^x \cdot l g_4,
\label{es6_1}
\end{eqnarray}
\noindent
\begin{eqnarray}
w^y \,=\, 7 \,- \,u^x \cdot l g_1 \,-\,
u^y \cdot l g_4.
\label{es6_2}
\end{eqnarray}
\noindent
Since for any given value of $w^x$ and $w^y$ Eqs.
(\ref{es6_1}) and (\ref{es6_2}) can be solved with respect to
the variables $g_1$ and $g_4$,
\noindent
\begin{eqnarray}
g_1 = -\frac{\delta \sqrt{3}}{l}\cdot
\frac{28 \,-\, u^y \cdot w^x \,-\, u^x \cdot w^y}{24},
\label{es7}
\end{eqnarray}
\noindent
\begin{eqnarray}
g_4 = \;\,\frac{\delta \sqrt{3}}{l}\cdot
\frac{28\,-\,u^x \cdot w^x \,-\, u^y \cdot w^y}{24},
\label{es8}
\end{eqnarray}
\noindent
the formula (\ref{es5}) gives the result which we were looking for.
Both matrices $W^x$ and $W^y$ are similar to a single $P$ matrix
(with an inessential minus sign) and both parameters $w^x$ and $w^y$
can be chosen independently,
and then the setting of the first and the last
lenses in the block is determined according to the
formulas (\ref{es7}) and (\ref{es8}).
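The following sketch (Python/NumPy, as before) assembles the four-lens block for independently prescribed $w^x$ and $w^y$ and confirms the similarity (\ref{es5}):
\begin{verbatim}
import numpy as np

def P(a):   return np.array([[a, 1.0], [-1.0, 0.0]])
def Lam(a): return np.array([[a, 0.0], [0.0, 1.0 / a]])

l, delta = 2.0, 1.0
wx, wy = 3.5, -1.2                           # independently chosen targets
ux, uy = 2.0 - delta * np.sqrt(3.0), 2.0 + delta * np.sqrt(3.0)
g2, g3 = delta * np.sqrt(3.0) / l, -delta * np.sqrt(3.0) / l
g1 = -delta * np.sqrt(3.0) / l * (28.0 - uy * wx - ux * wy) / 24.0
g4 =  delta * np.sqrt(3.0) / l * (28.0 - ux * wx - uy * wy) / 24.0
for sign, u, w in ((+1, ux, wx), (-1, uy, wy)):
    W = np.eye(2)
    for g in (g1, g2, g3, g4):
        W = P(2.0 + sign * l * g) @ W
    T = Lam(np.sqrt(u))
    assert np.allclose(W, -np.linalg.inv(T) @ P(w) @ T)
\end{verbatim}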
\subsection{Reduction of 2D problem to two independent or almost independent 1D problems}
Since with four $P$ matrices we can always solve the 1D problem, let us first consider a
combination of four blocks of the type (\ref{es5}). Using (\ref{e9}), one can show that
the total matrix of this 16-lens system can be written as follows:
\noindent
\begin{eqnarray}
W^{x, y}_4 \, W^{x, y}_3 \, W^{x, y}_2 \, W^{x, y}_1 \,=\,
\Lambda \left(a^{x,y}\right) \cdot
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
P \left(\hat{w}^{x,y}_4\right)
P \left(\hat{w}^{x,y}_3\right)
P \left(\hat{w}^{x,y}_2\right)
P \left(\hat{w}^{x,y}_1\right)
\Lambda \left(a^{x,y}\right),
\label{am1_0}
\end{eqnarray}
\noindent
where
\noindent
\begin{eqnarray}
a^{x,y} \,=\, \sqrt{\frac{u^{x,y}_1 u^{x,y}_3}{u^{x,y}_2 u^{x,y}_4}}
\label{am2_0}
\end{eqnarray}
\noindent
and
\noindent
\begin{eqnarray}
\hat{w}^{x,y}_1 = \frac{u^{x,y}_2 u^{x,y}_4}{u^{x,y}_3}\cdot w^{x,y}_1,
\;\;\;
\hat{w}^{x,y}_2 = \frac{u^{x,y}_3}{u^{x,y}_1 u^{x,y}_4}\cdot w^{x,y}_2,
\label{am3_0}
\end{eqnarray}
\noindent
\begin{eqnarray}
\hat{w}^{x,y}_3 = \frac{u^{x,y}_1 u^{x,y}_4}{u^{x,y}_2}\cdot w^{x,y}_3,
\;\;\;
\hat{w}^{x,y}_4 = \frac{u^{x,y}_2}{u^{x,y}_1 u^{x,y}_3}\cdot w^{x,y}_4.
\label{am4_0}
\end{eqnarray}
\noindent
Plugging this representation into Eq. (\ref{es1}) we obtain
\noindent
\begin{eqnarray}
P \left(\hat{w}^{x,y}_4\right)
P \left(\hat{w}^{x,y}_3\right)
P \left(\hat{w}^{x,y}_2\right)
P \left(\hat{w}^{x,y}_1\right) =
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
\Lambda^{-1} \left(a^{x,y}\right)
\hat{M}_{x,y}
\Lambda^{-1} \left(a^{x,y}\right).
\label{am1_001}
\end{eqnarray}
\noindent
Let us choose arbitrary nonnegative $m$ and $p$ with $l = m + p > 0$
and select for each four-lens block its own $\delta = \pm 1$.
This, in accordance with formula (\ref{es4}),
gives us the setting of the eight lenses in our system
and this completely determines the matrix on the right-hand side of
Eq. (\ref{am1_001}). As the last step we take
$\hat{w}^{x}_k$ and $\hat{w}^{y}_k$ as some solutions of two
independent 1D problems of the type (\ref{s1d5}) and define
the strengths of the remaining eight lenses
using the formulas (\ref{am3_0}), (\ref{am4_0}), (\ref{es7}), and (\ref{es8}).
One sees that using four blocks with decoupled transverse actions
the complete 2D problem can always be reduced to two easily solvable
independent 1D problems. But do we really need four blocks for making such a reduction?
The answer is no and the reason for this is as follows. We know that for most of the
$2 \times 2$ symplectic matrices the 1D problem can be solved with three $P$ matrices,
which means that for most of the $4 \times 4$ uncoupled beam transfer matrices
the 2D problem can also be solved with three blocks. The problem is what to do
with the rest. Happily, it turns out that by an appropriate choice of the parameters
$m$ and $p$ one can always move the input matrix $M$ away from the region of
unsolvability and, if the variation of $m$ and $p$ is not allowed,
this can be done by using only one additional thin lens.
Thus, we arrive at the solution announced in the Introduction,
namely 13 lenses if the spacing between them is fixed
and 12 lenses if this distance can be used as an additional parameter.
Below we will consider in detail the case of 12 lenses (three blocks) with variable spacing;
the check that the use of an additional lens for fixed spacing also works
we leave as an exercise for the interested reader.
In analogy with (\ref{am1_0}) the
combination of three blocks can be written as
\noindent
\begin{eqnarray}
W^{x, y}_3 \, W^{x, y}_2 \, W^{x, y}_1 \,=\,
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
-\Lambda^{-1} \left(a^{x,y}\right)
P \left(\hat{w}^{x,y}_3\right)
P \left(\hat{w}^{x,y}_2\right)
P \left(\hat{w}^{x,y}_1\right)
\Lambda \left(a^{x,y}\right)
\label{am1}
\end{eqnarray}
\noindent
where
\noindent
\begin{eqnarray}
a^{x,y} \,=\, \sqrt{\frac{u^{x,y}_1 u^{x,y}_3}{u^{x,y}_2}}
\label{am2}
\end{eqnarray}
\noindent
and
\noindent
\begin{eqnarray}
\hat{w}^{x,y}_1 = \frac{u^{x,y}_2}{u^{x,y}_3}\cdot w^{x,y}_1,
\label{am3}
\end{eqnarray}
\noindent
\begin{eqnarray}
\hat{w}^{x,y}_2 = \frac{u^{x,y}_3}{u^{x,y}_1}\cdot w^{x,y}_2,
\label{am4}
\end{eqnarray}
\noindent
\begin{eqnarray}
\hat{w}^{x,y}_3 = \frac{u^{x,y}_1}{u^{x,y}_2}\cdot w^{x,y}_3.
\label{am5}
\end{eqnarray}
\noindent
Plugging again this representation into system (\ref{es1}) we obtain
the equation
\noindent
\begin{eqnarray}
P \left(\hat{w}^{x,y}_3\right)
P \left(\hat{w}^{x,y}_2\right)
P \left(\hat{w}^{x,y}_1\right) =
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
-\Lambda \left(a^{x,y}\right)
\hat{M}_{x,y}
\Lambda^{-1} \left(a^{x,y}\right).
\label{am1_0011}
\end{eqnarray}
\noindent
We know that the sufficient condition for this equation to be solvable with respect to
the unknowns $\hat{w}^{x,y}_k$ is
that the horizontal and vertical parts of the matrix
on the right-hand side both have nonvanishing $r_{22}$ elements.
The direct calculation gives us
\noindent
\begin{eqnarray}
r_{22}^{x,y} \,=\,
\frac{
m_{12}^{x,y} -
m \,m_{11}^{x,y} -
p \,m_{22}^{x,y} +
m p \,m_{21}^{x,y}}{m + p},
\label{am1_0012}
\end{eqnarray}
\noindent
where $m_{ij}^{x,y}$ are the elements of the input matrix $M$.
Looking for a solution
one can proceed further in the same manner as in the four-block case
with only one difference. At the first step one has to take not arbitrary
nonnegative $m$ and $p$, but $m$ and $p$ such that both $r_{22}^x$
and $r_{22}^y$ are nonzero, which, due to the symplecticity of the matrices $M_x$ and $M_y$,
is always possible.
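In a numerical implementation this first step is easily automated; for instance (a sketch as before, with hypothetical input blocks), one can simply scan over pairs $(m, p)$ and test the elements (\ref{am1_0012}):
\begin{verbatim}
import numpy as np

Mx = np.array([[1.0, 0.0], [-1.0, 1.0]])     # hypothetical input blocks
My = np.array([[0.0, 2.0], [-0.5, 0.0]])     # (both have det = 1)

def r22(M, m, p):                            # the element given above
    return (M[0, 1] - m * M[0, 0] - p * M[1, 1] + m * p * M[1, 0]) / (m + p)

pairs = [(m, p)
         for m in np.linspace(0.0, 2.0, 21)
         for p in np.linspace(0.1, 2.0, 20)
         if abs(r22(Mx, m, p)) > 1e-6 and abs(r22(My, m, p)) > 1e-6]
print(pairs[0])                              # one usable pair (m, p)
\end{verbatim}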
\subsection{Recipe for the construction of lens blocks with
decoupled transverse actions}
In this subsection we give the recipe for the construction of lens blocks with
decoupled transverse actions. As we will see, this recipe works not only for
the four-lens combination considered above, but is also applicable to blocks
with a larger number of lenses.
Let us consider a $q$-lens block with $q \geq 4$:
\noindent
\begin{eqnarray}
W^{x,y} = P(2 \pm l g_q) \cdot \ldots \cdot P(2 \pm l g_1),
\label{db_0}
\end{eqnarray}
\noindent
and let us assume that the product of the $(q-2)$ inner matrices in our block takes
the form
\noindent
\begin{eqnarray}
P(2 \pm l g_{q-1}) \cdot \ldots \cdot P(2 \pm l g_2) =
\left(
\begin{array}{cc}
0 & u^{x,y}\\
-1 / u^{x, y} & \tau^{x,y}
\end{array}
\right).
\label{db_1}
\end{eqnarray}
\noindent
Then, as one can show by direct multiplication,
both matrices $W^x$ and $W^y$
become similar to a single $P$ matrix
(with an inessential minus sign possibly present), namely
\noindent
\begin{eqnarray}
W^{x,y} \,=\,-\mbox{sign}(u^{x,y}) \cdot
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
\Lambda^{-1} \left(\sqrt{\left|u^{x,y}\right|}\right)
P (w^{x,y}) \Lambda \left(\sqrt{\left|u^{x,y}\right|}\right),
\label{db_2}
\end{eqnarray}
\noindent
where
\noindent
\begin{eqnarray}
w^{x, y} =
\frac{2 \pm l g_1 }{\left|u^{x,y}\right|}
+ \left|u^{x,y}\right| \, \left(2 \pm l g_q \right)
+ \mbox{sign}(u^{x,y})\, \tau^{x,y}.
\label{db_3}
\end{eqnarray}
\noindent
If for arbitrary given values of $w^x$ and $w^y$ Eq. (\ref{db_3})
can be solved with respect to the variables
$g_1$ and $g_q$, then it will be exactly what we need,
and the necessary and sufficient condition for such solvability is
\noindent
\begin{eqnarray}
\left|u^x\right| \;\neq \;\left|u^y\right|.
\label{db_4}
\end{eqnarray}
\noindent
So, in order to construct the $q$-lens block with the decoupled transverse actions,
one has to solve two equations making the $r_{11}$ elements of the $x$ and $y$
parts of the product of the $(q-2)$ inner matrices equal to zero and
one has to satisfy one
additional inequality constraint (\ref{db_4}).
The solution for the four-lens block was already given above and is unique
up to a sign change ($\delta = \pm 1$). Let us now consider the more complicated
(but still analytically solvable) case of five lenses. In this situation all
possible solutions which bring the product of the three
inner $P$ matrices
\noindent
\begin{eqnarray}
P(2 \pm l g_4)\, P(2 \pm l g_3)\, P(2 \pm l g_2),
\label{db_0008}
\end{eqnarray}
\noindent
to the form (\ref{db_1}) can be expressed as a function of
parameters $l$ and $g_3$ as follows:
\noindent
\begin{eqnarray}
g_2 \,=\, \frac{1}{l} \cdot \frac{l g_3 + \delta
\sqrt{\left((l g_3)^2 - 2\right) \cdot\left((2 l g_3)^2 - 9\right)}}
{(l g_3)^ 2 - 3},
\label{db_6}
\end{eqnarray}
\noindent
\begin{eqnarray}
g_4 \,=\, \frac{1}{l} \cdot \frac{l g_3 - \delta
\sqrt{\left((l g_3)^2 - 2\right) \cdot\left((2 l g_3)^2 - 9\right)}}
{(l g_3)^ 2 - 3},
\label{db_7}
\end{eqnarray}
\noindent
$\delta = \pm 1$, and $l > 0$ and $g_3$ are such that
\noindent
\begin{eqnarray}
l g_3 \,\in\, \left(-\infty, \,-\sqrt{3}\right)
\cup \left(-\sqrt{3}, \,-1.5\right]
\cup
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
\left[-\sqrt{2}, \,\sqrt{2}\right]
\cup \left[1.5,\, \sqrt{3}\right)
\cup \left(\sqrt{3},\, +\infty\right).
\label{db_5}
\end{eqnarray}
\noindent
To complete the block construction we have to select from all these solutions
a subset on which the functions
\noindent
\begin{eqnarray}
u^{x,y} \,=\, 1 \,-\, (l g_2 \,\mp \,2) \cdot (l g_3 \,+\, l g_4)
\label{db_8}
\end{eqnarray}
\noindent
satisfy the inequality (\ref{db_4}). As one can check, this can be achieved
simply by removing from the set (\ref{db_5}) the endpoints of the given set intervals,
i.e., by removing the points $\pm 1.5$ and $\pm \sqrt{2}$.
So we see that there are many solutions which allow us to construct from five lenses
a block with decoupled transverse
actions, and some additional optimization criteria could be invoked to select one of them.
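For illustration (a sketch as before), one can pick an admissible value of $l g_3$, evaluate (\ref{db_6}) and (\ref{db_7}), and confirm both the block form (\ref{db_1}) and the inequality (\ref{db_4}):
\begin{verbatim}
import numpy as np

def P(a): return np.array([[a, 1.0], [-1.0, 0.0]])

l, g3, delta = 1.0, 0.5, 1.0                 # l*g3 = 0.5 is admissible
root = np.sqrt(((l * g3)**2 - 2.0) * ((2.0 * l * g3)**2 - 9.0))
g2 = (l * g3 + delta * root) / (l * ((l * g3)**2 - 3.0))
g4 = (l * g3 - delta * root) / (l * ((l * g3)**2 - 3.0))
us = []
for sign in (+1, -1):
    T = P(2 + sign * l * g4) @ P(2 + sign * l * g3) @ P(2 + sign * l * g2)
    assert abs(T[0, 0]) < 1e-9               # vanishing r11 element
    us.append(T[0, 1])                       # this entry is u^{x,y}
assert abs(abs(us[0]) - abs(us[1])) > 1e-9   # decoupling inequality holds
\end{verbatim}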
Note that in the blocks constructed according to our recipe the setting of the internal
lenses does not depend on the setting of the first and the last lenses and depends only
on the geometrical block parameters (distances between the lenses), which will be seen more
clearly in the following section where we will consider the case of arbitrarily spaced
thin lenses.
Note also that the horizontal and the vertical matrices between the first and
the last lenses in the block,
when calculated using not the $P$ matrix notation, but the original
variables in which Eq. (\ref{a1_0}) is written
\noindent
\begin{eqnarray}
D(m) B(m, \pm g_{q-1}, p) \cdot \ldots \cdot
B(m,\pm g_{2},p) D(p) =
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
D(m) S^{-1}(m, p) P(2\pm l g_{q-1}) \cdot \ldots \cdot
P(2\pm l g_{2}) \cdot
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
S (m, p) D(p) =
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
-
\left(
\begin{array}{cc}
u^{x,y} & 0\\
1 / u^{x, y}
+\left(u^{x, y} + \tau^{x, y}\right) / l
&
\;\;1 / u^{x, y}
\end{array}
\right),
\label{db_9}
\end{eqnarray}
\noindent
both have $r_{12}$ elements
equal to zero (i.e. the phase advances between the first and the last lenses in
the block are always multiples of $180^{\circ}$),
but this alone, without the inequality (\ref{db_4}) being satisfied, does not give us a block with
decoupled transverse actions.
\section{Generalization to the case of arbitrarily spaced thin lenses}
When the distances between the lenses are not equal to each other, we immediately
lose the advantage of the cancellation of $S$ matrices between the $P$ matrices after
substitution of the representation (\ref{a2}) into Eq. (\ref{a1_0}).
Nevertheless, as we will show below, this case can also be treated with the tools
developed in the previous section.
Let us denote by $d_{k_1, k_2}$ the distance between the lenses with the indices
$k_1$ and $k_2$ ($k_1 \leq k_2$).
We start from the observation that for $k = 2, \ldots,n$ the following
identity holds:
\noindent
\begin{eqnarray}
S(m_k,\, p_k)\,S^{-1}(m_{k-1},\, p_{k-1}) \,=
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
L\left(\frac{l_k}{d_{k-1,k}} - 1 \right)
\Lambda \left(\frac{d_{k-1,k}}{\sqrt{l_{k-1}\,l_k }} \right)
U\left(1- \frac{l_{k-1}}{d_{k-1, k}}\right),
\label{b1}
\end{eqnarray}
\noindent
which can be shown by direct multiplication and which requires
that all $l_k$ and $d_{k-1, k}$ are positive.
Note that in this identity $L$ and $U$ are the lower and upper triangular matrices
with unit diagonal elements (see Appendix A for more details).
Let us now substitute the representation (\ref{a2}) into Eq. (\ref{a1_0})
and then plug in the corresponding places the right-hand side of the
identity (\ref{b1}).
After that the property (\ref{e11}) allows us to eliminate from the result
all $L$ and $U$ matrices while shifting their arguments to the arguments of the
neighboring $P$ matrices, and leaving us with a
product consisting of alternating $P$ and $\Lambda$ matrices.
Although the $\Lambda$ matrices cannot be eliminated completely, they
can be moved either on the left or on the right-hand side of all
$P$ matrices with the help of the property (\ref{e9}).
As the last step we transfer all matrices from the left and right sides
of the obtained solid block of the $P$ matrices to the right-hand side of our equation,
hide them in the matrix $\tilde{M}_{x, y}$ and end up with the equation
\noindent
\begin{eqnarray}
P(\tilde{v}^{x,y}_n) \cdot \ldots \cdot P(\tilde{v}^{x, y}_1) \,=\, \tilde{M}_{x, y},
\label{b2}
\end{eqnarray}
\noindent
which already has the desired form.
The detailed structure of the arguments $\tilde{v}^{x, y}_k$ and of the matrix
$\tilde{M}_{x, y}$ depends on the particular way in which the individual $\Lambda$
matrices were moved (to the left or to the right) and is given below
for the case when during transformations all $\Lambda$ matrices were moved
to the left-hand side of the $P$ matrix block.
Nevertheless, the expressions given below are general in the sense that they
contain an arbitrary positive parameter $c_1$, and with the proper choice of
this parameter one can account for all possible ways of movement of the individual
$\Lambda$ matrices:
\noindent
\begin{eqnarray}
\tilde{M}_{x,y} = \Lambda(c_n) \,S(m_n, p_n)\, M_{x, y} S^{-1}(m_1, p_1) \Lambda(c_1),
\label{b3}
\end{eqnarray}
\noindent
\begin{eqnarray}
\tilde{v}^{x,y}_k = c_k^2 l_k \left(\frac{d_{k-1,k+1}}{d_{k-1,k} \,d_{k,k+1}} \pm g_k\right),
\;\;k = 1, \ldots, n,
\label{b4}
\end{eqnarray}
\noindent
\begin{eqnarray}
c_k \,=\, \frac{d_{k-1, k}}{\sqrt{ l_{k-1}\,l_k}} \cdot \frac{1}{c_{k-1}},
\;\;\;\;k = 2, \ldots, n,
\label{b5}
\end{eqnarray}
\noindent
$c_1$ is an arbitrary positive parameter
and, because we do not have lenses with indices $0$ and $n+1$,
we use the conventions that
\noindent
\begin{eqnarray}
d_{0,1} \,=\, l_1,
\;\;\;\;d_{0,2} \,=\, d_{0,1} + d_{1,2},
\label{b6_1}
\end{eqnarray}
\noindent
\begin{eqnarray}
d_{n,n+1} \,=\, l_n,\;\;\;\;
d_{n-1,n+1} \,=\,d_{n-1,n} + d_{n,n+1}.
\label{b6_2}
\end{eqnarray}
\noindent
Note that, if the parameter $c_1$ is taken to be a positive number or
a dimensionless function of the thin-lens multiplet parameters (drift lengths
and lens strengths), then Eq. (\ref{b2}) and the variables
(\ref{b4}) are also dimensionless. One of the possible choices
is to take $c_1$ for even $n$ as solution of the equation $c_n = c_1$
and for odd $n$ as solution of the equation $c_n = c_1^{-1}$.
If the condition (\ref{es0}) holds, then the solution of these equations for
both cases (even and odd $n$) is $c_1 = 1$ and the representation (\ref{b2})
turns into the representation (\ref{es1}) as one can expect.
Now in order to continue we need a lens block with the decoupled transverse actions
and, as it is not difficult to check, the recipe given in the previous section
is applicable without any changes. For the construction of the $q$-lens block we still
need to bring the product of the $(q-2)$ inner matrices to the form (\ref{db_1})
while also satisfying the inequality constraint (\ref{db_4}).
For the four-lens case
\noindent
\begin{eqnarray}
W^{x,y}\,=\,P(\tilde{v}^{x,y}_{4}) \,P(\tilde{v}^{x,y}_{3}) \,
P(\tilde{v}^{x,y}_{2}) \,P(\tilde{v}^{x,y}_{1})
\label{db1}
\end{eqnarray}
\noindent
the two equations making the $r_{11}$ elements of the $x$ and $y$
parts of the product of the two inner matrices equal to zero are
\noindent
\begin{eqnarray}
\tilde{v}^{x,y}_2 \cdot \tilde{v}^{x,y}_3 = 1,
\label{db4}
\end{eqnarray}
\noindent
and have a solution
\noindent
\begin{eqnarray}
g_{2} = \;\,\frac{\delta}{d_{1,2}} \cdot
\sqrt{
\frac{d_{1,4}}{d_{2,3}} \cdot
\frac{d_{1,3}}{d_{2,4}}},
\label{db2}
\end{eqnarray}
\noindent
\begin{eqnarray}
g_{3} = -\frac{\delta}{d_{3,4}}\cdot
\sqrt{
\frac{d_{1,4}}{d_{2,3}}\cdot
\frac{d_{2,4}}{d_{1,3}}},
\label{db3}
\end{eqnarray}
\noindent
which again is unique up to a sign change ($\delta = \pm 1$).
The values $u^{x, y}$ for this solution are
\noindent
\begin{eqnarray}
u^{x, y} \,=\,\tilde{v}_3^{x,y}\,=\,
\frac{c_3^2 \,l_3}{d_{3, 4}} \cdot
\left(\frac{d_{2,4}}{d_{2,3}} \,\mp\, \delta \cdot
\sqrt{
\frac{d_{1,4}}{d_{2,3}}\cdot
\frac{d_{2,4}}{d_{1,3}}}
\right).
\label{db3_0}
\end{eqnarray}
\noindent
Both of them are positive and clearly satisfy the inequality (\ref{db_4}).
With this choice for $g_2$ and $g_3$ the total block matrix takes
the form
\noindent
\begin{eqnarray}
W^{x, y} \,=\, -\Lambda^{-1} \left(\sqrt{u^{x,y}}\right)\,
P (w^{x,y})\,\Lambda \left(\sqrt{u^{x,y}}\right)
\label{db5}
\end{eqnarray}
\noindent
where
\noindent
\begin{eqnarray}
w^{x,y} \;=\;\left(u^{x,y}\right)^{-1}\cdot \tilde{v}^{x,y}_1
\;+\; u^{x,y} \cdot \tilde{v}^{x,y}_4 \;-\;1.
\label{db6}
\end{eqnarray}
\noindent
Equation (\ref{db6}) is the analogue of the
formulas (\ref{es7}) and (\ref{es8}) and,
for any given values $w^x$ and $w^y$, allows one to
determine the corresponding lens strengths $g_1$ and $g_4$.
Thus, all results of the previous section
concerning the reduction of the 2D problem to two 1D
problems become applicable with some minor changes
connected with the difference in the matrices $\hat{M}_{x,y}$
and $\tilde{M}_{x,y}$ defined by the relations (\ref{es2}) and (\ref{b3}), respectively.
Note that if, when placed in the beam line, the actual decoupling block
starts from the lens with the index $k$,
one has simply to add $k-1$ to the indices $1,2,3$ and $4$ in all
above formulas.
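Note that, by the recursion (\ref{b5}), $c_2^2 l_2 \cdot c_3^2 l_3 = d_{2,3}^2$, so the condition (\ref{db4}) can be tested independently of $c_1$; a numerical sketch (as before):
\begin{verbatim}
import numpy as np

d12, d23, d34, delta = 0.8, 1.5, 0.6, 1.0    # arbitrary positive spacings
d13, d24, d14 = d12 + d23, d23 + d34, d12 + d23 + d34
g2 =  delta / d12 * np.sqrt(d14 * d13 / (d23 * d24))
g3 = -delta / d34 * np.sqrt(d14 * d24 / (d23 * d13))
for sign in (+1, -1):                        # x- and y-plane
    v2v3 = d23**2 * (d13 / (d12 * d23) + sign * g2) \
                  * (d24 / (d23 * d34) + sign * g3)
    assert abs(v2v3 - 1.0) < 1e-12
\end{verbatim}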
\subsection{Removal of superfluous parameters}
Equation (\ref{a1_0})
contains $2 n$ parameters which specify the drift lengths
($m_1, p_1, \ldots , m_n, p_n$)
while only $n + 1$ parameters,
namely $m_1, d_{1,2}, \ldots, d_{n-1, n}, p_n$ have a clear physical
meaning and are independent.
Let us have a closer look at formulas (\ref{b2})-(\ref{b6_2})
and count how many superfluous parameters are still left in them
and then show ways to remove them.
The superfluous parameters $p_1$ and $m_n$ are clearly present,
either directly as the arguments of $S$ matrices or
through the lengths of the first and the last building blocks $l_1$ and $l_n$.
And actually that is all.
The presence of the other superfluous parameters through
the values $l_2, \ldots, l_{n-1}$ is only apparent.
To show this let us note that these values can enter the main formulas
(\ref{b2})-(\ref{b4}) only through the values $c_1$ and $c_n$ and
through the combinations $c_1^2 l_1, \ldots, c_n^2 l_n$. So if we
choose $c_1$ to be independent from $l_2, \ldots, l_{n-1}$, then
these parameters can enter none of the combinations $c_k^2 l_k$
due to the recursion relation
\noindent
\begin{eqnarray}
c_k^2\,l_k \,=\,d_{k-1,k}^2 \cdot
\frac{1}{c_{k-1}^2\,l_{k-1}}
\;\;\;\;k = 2, \ldots , n,
\label{db8}
\end{eqnarray}
\noindent
which follows from the recursion relation (\ref{b5}),
and likewise they cannot enter the value $c_n$ because one
can write that $c_n = \sqrt{c_n^2 l_n / l_n}$.
Thus, there are only two superfluous parameters, $p_1$ and $m_n$,
present in our formulas, either directly or through the values $l_1$ and $l_n$.
Do we need to remove them? In general, no: it is clear that no
physically meaningful answer will depend on them and, in this sense,
their absence from the final results (as in formulas (\ref{db2}) and (\ref{db3}))
can serve as an indirect indicator of the correctness of the calculations.
From another point of view, however, it seems better not to carry
any superfluous parameters, from which one can expect nothing
but possible additional complications.
The simplest way to remove the parameters $p_1$ and $m_n$ from
the formulas (\ref{b2})-(\ref{b4})
is to make them functions of the physically meaningful parameters.
For example, one can take $p_1 = 0.5 \cdot d_{1, 2}$ and $m_n = 0.5 \cdot d_{n-1, n}$.
However, the way which we prefer is the modification of
the formulas (\ref{b2})-(\ref{b4}) in such a way that the
superfluous parameters will disappear automatically.
In doing so let us first present the final result and then
make some remarks on how it can be obtained:
\noindent
\begin{eqnarray}
P(v^{x,y}_n) \cdot \ldots \cdot P(v^{x, y}_1) \,=\, \breve{M}_{x, y},
\label{ff_1}
\end{eqnarray}
\noindent
\begin{eqnarray}
\breve{M}_{x,y} \,=\, J\Lambda^{-1}(b_n) U(-p_n)\, M_{x, y}\,U(-m_1) \Lambda(b_1),
\label{ff_2}
\end{eqnarray}
\noindent
\begin{eqnarray}
v^{x,y}_1 = b_1^2 \left(\frac{1}{d_{1,2}} \pm g_1\right),
\label{ff_3}
\end{eqnarray}
\noindent
\begin{eqnarray}
v^{x,y}_k = b_k^2 \left(\frac{d_{k-1,k+1}}{d_{k-1,k} \,d_{k,k+1}} \pm g_k\right),
\;\,k = 2, \ldots, n-1,
\label{ff_4}
\end{eqnarray}
\noindent
\begin{eqnarray}
v^{x,y}_n = b_n^2 \left(\frac{1}{d_{n-1,n}} \pm g_n\right),
\label{ff_5}
\end{eqnarray}
\noindent
\begin{eqnarray}
b_1 > 0, \;\;\;b_k \,=\, d_{k-1, k} \cdot \frac{1}{b_{k-1}},
\;\;\;\;\;k = 2, \ldots, n,
\label{ff_6}
\end{eqnarray}
\noindent
and $J$ is the $2 \times 2$ symplectic unit matrix.
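Before turning to the derivation of these formulas, the following Python
sketch (an illustration we add here; the numerical values are arbitrary)
evaluates the quantities entering (\ref{ff_1})-(\ref{ff_6}): the chain $b_k$
from the recursion (\ref{ff_6}) and the strengths $v_k^{x,y}$ from
(\ref{ff_3})-(\ref{ff_5}), using the additivity
$d_{k-1,k+1} = d_{k-1,k} + d_{k,k+1}$ of distances along the beam line:
\begin{verbatim}
def b_chain(b1, d):
    """b_1 > 0 arbitrary; b_k = d_{k-1,k} / b_{k-1}, k = 2, ..., n."""
    b = [b1]
    for dk in d:
        b.append(dk / b[-1])
    return b

def v_params(b, d, g, sign):
    """Strengths v_k for one transverse plane: sign = +1 for x, -1 for y."""
    n = len(b)
    v = [b[0] ** 2 * (1.0 / d[0] + sign * g[0])]           # (ff_3)
    for k in range(1, n - 1):                              # (ff_4), with
        s = (d[k - 1] + d[k]) / (d[k - 1] * d[k])          # d_{k-1,k+1} =
        v.append(b[k] ** 2 * (s + sign * g[k]))            # d_{k-1,k}+d_{k,k+1}
    v.append(b[-1] ** 2 * (1.0 / d[-1] + sign * g[-1]))    # (ff_5)
    return v

d = [2.0, 1.5, 3.0]            # d_{1,2}, d_{2,3}, d_{3,4}  (n = 4 lenses)
g = [0.4, -0.3, 0.25, -0.2]    # g_1, ..., g_4
b = b_chain(1.0, d)
vx, vy = v_params(b, d, g, +1), v_params(b, d, g, -1)
\end{verbatim}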
In order to obtain formulas (\ref{ff_1})-(\ref{ff_6}) from
formulas (\ref{b2})-(\ref{b6_2})
let us first introduce the parameters $b_k = c_k \sqrt{l_k}$ and
then assume that $c_1$ is chosen in such a way that
$b_1$ does not depend on any superfluous parameter
(for example, one can simply take $c_1 = 1 / \sqrt{l_1}$).
After this one sees that the parameters $l_1$ and $l_n$ enter the left-hand side
of Eq. (\ref{b2}) only through the matrices $P(\tilde{v}^{x,y}_1)$
and $P(\tilde{v}^{x,y}_n)$.
Because of the property (\ref{e11}) these matrices can be
decomposed into the following products:
\noindent
\begin{eqnarray}
P(\tilde{v}^{x,y}_1) \,=\, P(v^{x,y}_1) \, L(c_1^2)
\,=\, P(v^{x,y}_1) \, L(b_1^2 \,/\, l_1),
\label{ff_7}
\end{eqnarray}
\noindent
\begin{eqnarray}
P(\tilde{v}^{x,y}_n) = U(-c_n^2) \,P(v^{x,y}_n)
= U(-b_n^2 \,/\, l_n)\,P(v^{x,y}_n).
\label{ff_8}
\end{eqnarray}
\noindent
As the last step, one has to substitute these decompositions
back into Eq. (\ref{b2}),
transfer $U$ and $L$ to the right-hand side and, after
some straightforward manipulations, arrive at the final result
described in the above formulas (\ref{ff_1})-(\ref{ff_6}).
Note that the whole story about the presence of the superfluous parameters
is the result of our desire to have expressions for the problem
description (expressions (\ref{b2})-(\ref{b6_2})) which reduce to
the highly symmetric expressions (\ref{es1}) and (\ref{es2})
in the limit of equal distances between thin lenses. If one does not
require that, then, as we outline below, it is possible to arrive
at the representation (\ref{ff_1})-(\ref{ff_6}) without using the
identity (\ref{a2}).
According to (\ref{e5}) and (\ref{e6}) the matrix of
the building block can be written as
\noindent
\begin{eqnarray}
B(m, \,\pm g, \,p) \,=\,P(-p) \,P(\pm g)\, P(-m)\, J.
\label{rsp_1}
\end{eqnarray}
\noindent
Substituting this representation into the original Eq. (\ref{a1_0})
and using that, due to (\ref{e4_2}),
\noindent
\begin{eqnarray}
P(-m_k)\,J\,P(-p_{k-1})\,=\,-P(-d_{k-1, k})
\label{rsp_2}
\end{eqnarray}
\noindent
we obtain
\noindent
\begin{eqnarray}
P(\pm g_n) P(-d_{n-1, n}) \cdot \ldots \cdot P(-d_{1, 2}) P(\pm g_1) \Lambda(b_1) =
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
(-1)^{n-1} J\, U(-p_n)\,
M_{x,y} \,U(-m_1) \,\Lambda(b_1),
\label{rsp_3}
\end{eqnarray}
\noindent
where we have already introduced an arbitrary
positive parameter $b_1$.
Now, assuming that all distances between lenses are positive
and using (\ref{e4_8}), we can, for each $k = 2, \ldots, n$, replace the matrix
$P(-d_{k-1, k})$ by the matrix $-\Lambda(d_{k-1, k})$,
simultaneously adding the value $d_{k-1, k}^{-1}$ to the arguments
of the two neighboring $P$ matrices.
After these manipulations we arrive at the expression
\noindent
\begin{eqnarray}
P(d_{n-1, n}^{-1} \pm g_n)\, \Lambda(d_{n-1, n}) \cdot
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
P(d_{n-1, n}^{-1} + d_{n-2, n-1}^{-1} \pm g_{n-1})\,
\Lambda(d_{n-2, n-1})
\cdot
\ldots
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
\ldots
\cdot
P(d_{2, 3}^{-1} + d_{1, 2}^{-1} \pm g_2)
\Lambda(d_{1, 2}) P(d_{1, 2}^{-1} \pm g_1) \Lambda(b_1) =
\nonumber
\end{eqnarray}
\noindent
\begin{eqnarray}
J\, U(-p_n)\,
M_{x,y} \,U(-m_1) \,\Lambda(b_1),
\label{rsp_4}
\end{eqnarray}
\noindent
and the last step, which is still necessary in order
to obtain formulas (\ref{ff_1})-(\ref{ff_6}),
is to move all $\Lambda$ matrices to the left
in the left-hand side of Eq. (\ref{rsp_4})
using the identity (\ref{e9}) with a subsequent transfer of
the matrix
$\Lambda(b_n^{-1})$
from the left to the right-hand side of the
obtained equality.
\begin{acknowledgments}
The authors are thankful to Winfried Decking, Nina Golubeva and Helmut Mais
for support and their interest in this work.
The careful reading of the manuscript by Helmut Mais
and his useful advice are gratefully acknowledged.
\end{acknowledgments}
\section{Representation functions of sumsets}
Let $A$ be a set of integers and let $h \geq 2$ be an integer.
The \emph{counting function} $A(x)$ counts the number of positive
integers in the set $A$ that do not exceed $x$.
The $h$-fold sumset $hA$ is the set of all integers $n$
that can be written as sums of $h$ not necessarily distinct elements of $A$.
For every integer $n$, the
\emph{representation function}
$r_{A,h}(n)$ counts the number of $h$-tuples
$(a_1, a_2, \ldots, a_h) \in A^h$
such that
\[
a_1 \leq a_2 \leq \cdots \leq a_h
\]
and
\[
a_1 +a_2 + \cdots + a_h = n.
\]
A \emph{Sidon set} is a set $A$ of nonnegative integers such that every element
in the sumset $2A$ has a unique representation, that is,
$r_{A,2}(n) = 1$ for all $n \in 2A$.
More generally, for positive integers $h$ and $s$, a $B_{h,s}$-set is
a set $A$ of nonnegative integers such that
$r_{A,h}(n) \leq s$ for all $n \in hA$.
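For concreteness, $r_{A,h}(n)$ can be computed for a finite set by
enumerating nondecreasing $h$-tuples; the following short Python sketch
(our illustration) does this and verifies, for example, that
$A = \{0,1,3,7\}$ is a Sidon set:
\begin{verbatim}
from itertools import combinations_with_replacement

def r(A, h, n):
    """Number of nondecreasing h-tuples from A summing to n: r_{A,h}(n)."""
    return sum(1 for t in combinations_with_replacement(sorted(A), h)
               if sum(t) == n)

A = {0, 1, 3, 7}
sums = {a + b for a in A for b in A}          # the sumset 2A
print(all(r(A, 2, n) == 1 for n in sums))     # True: A is a Sidon set
\end{verbatim}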
Sets whose sumsets have few representations
have been studied intensively (cf. Halberstam-Roth~\cite{halb-roth66},
O'Bryant~\cite{obry04}).
In this paper we consider sets whose $h$-fold sumsets have many representations.
A basic result is that if $h \geq 2$ and $A$ is an infinite set of nonnegative integers
with $r_{A,h}(n) \geq 2$ for all sufficiently large
integers $n \in hA$, then
\[
A(x) \gg \log x.
\]
In the special case $h=2$, Balasubramanian and Prakash~\cite{bala-prak04}
proved that there is a number $c > 0$ such that, if $A$ is an infinite set of
nonnegative integers with $r_{A,2}(n) \geq 2$ for all sufficiently large
integers $n \in 2A$, then
\[
A(x) \geq c\left(\frac{\log x}{\log\log x} \right)^2.
\]
This improved a previous result of
Nicolas, Ruzsa, and S\'ark\"ozy~\cite{nico-ruzs-sark98},
who also proved the existence of an infinite set $A$ of
nonnegative integers with $r_{A,2}(n) \geq 2$ for all sufficiently large
integers $n \in 2A$ such that
\[
A(x) \ll (\log x)^2.
\]
It is an open problem to extend these results to $h$-fold sumsets for $h \geq 3$.
\emph{Acknowledgements.} I thank Michael Filaseta for bringing these problems to my attention, and Quan-Hui Yang
for the reference to the paper of Balasubramanian and Prakash.
\section{Growth of sets with many representations}
Let $[u,v)$ denote the interval of integers $i$ such that $u \leq i < v$.
Let $|X|$ denote the cardinality of the set $X$.
\begin{theorem} \label{filaseta:theorem:h-ell2}
Let $h \geq 2$ be an integer,
and let $A$ be an infinite set of nonnegative integers.
If $r_{A,h}(n) \geq 2$ for all sufficiently large integers $n \in hA$,
then there is a positive number $w_0$ such that
\[
A(x) > \frac{\log x}{\log h} - w_0
\]
for all $x \geq h$.
\end{theorem}
\begin{proof}
For every positive integer $k$, let
\[
I_k = \left[ h^{k-1}, h^k \right)
\]
and
\[
A_k = A \cap I_k.
\]
The sets $\{A_k:k=1,2,\ldots \}$ partition $A\setminus \{ 0\}$.
There exists a positive integer $n_0$ such that, if $n \geq n_0$ and $n \in hA$,
then $r_{A,h}(n) \geq 2$. Because $A$ is infinite, there exists $a_0 \in A$
with $ha_0 \geq n_0$. Choose $k_0$ such that $a_0 \in A_{k_0}$.
Suppose that $k \geq k_0$ and $A_k \neq \emptyset$.
Let
\[
a_k^* = \max(A_k).
\]
Then
\[
h^{k-1} \leq a_k^* < h^k
\]
and
\[
A \cap \left[a_k^*+1,h^k \right) = \emptyset.
\]
Consider the integer
\[
ha_k^* \in hA.
\]
Because $a_k^* \geq a_0$, we have $ha_k^* \geq ha_0 \geq n_0$,
and so $r_{A,h}(ha_k^*) \geq 2$.
It follows that the set $A$ contains nonnegative integers $a_1,\ldots, a_h$ such that
\begin{equation} \label{filaseta:ineq1}
a_1 < a_h
\end{equation}
\begin{equation} \label{filaseta:ineq2}
a_1 \leq a_2 \leq \cdots \leq a_h
\end{equation}
and
\[
a_1 +a_2 + \cdots + a_h = ha_k^*.
\]
Because $A$ is a set of nonnegative integers, we have
\[
a_h \leq ha_k^*.
\]
Inequalities~\eqref{filaseta:ineq1} and~\eqref{filaseta:ineq2} imply that
\[
ha_k^*= a_1 +a_2 + \cdots + a_h < ha_h
\]
and so
\[
a_k^* < a_h \leq ha_k^* < h^{k+1}.
\]
Therefore,
\[
a_k^* + 1 \leq a_h < h^{k+1}.
\]
Equivalently,
\[
a_h \in \left[ a_k^*+1,h^{k+1} \right) = \left[ a_k^*+1,h^k \right) \cup I_{k+1}.
\]
Because $A \cap \left[a_k^*+1,h^k \right) = \emptyset$,
we see that $a_h \in A_{k+1}$
and so $A_{k+1} \neq \emptyset$.
It follows by induction that $A_k \neq \emptyset$ for all $k \geq k_0$.
Let $x \geq h$, and choose the positive integer $t$ such that
\[
h^t\leq x < h^{t+ 1}.
\]
Because
\[
A \setminus \{ 0\} = \bigcup_{k= 1}^{\infty} A_k
\]
it follows that
\[
A(x) \geq A\left( h^t \right) \geq t - (k_0 -1) > \frac{\log x}{\log h} - k_0
\]
for all $x \geq h$.
Let $w_0 = k_0$. This completes the proof.
\end{proof}
\begin{theorem}
Let $\ell \geq 2$ be an integer,
and let $A$ be an infinite set of nonnegative integers.
If $r_{A,2}(n) \geq \ell$ for all sufficiently large integers $n \in 2A$,
then there is a positive number $w_0$ such that
\[
A(x) > \frac{(\ell - 1)\log x}{\log 2} - w_0
\]
for all $x \geq 2$.
\end{theorem}
\begin{proof}
For every positive integer $k$, let
\[
I_k = \left[ 2^{k-1}, 2^k \right)
\]
and
\[
A_k = A \cap I_k.
\]
The sets $\{A_k:k=1,2,\ldots \}$ partition $A\setminus \{ 0\}$.
There exists a positive integer $n_0$ such that if $n \geq n_0$ and $n \in 2A$,
then $r_{A,2}(n) \geq \ell$. Because $A$ is infinite, there exists $a_0 \in A$
with $2a_0 \geq n_0$. Choose $k_0$ such that $a_0 \in A_{k_0}$.
Suppose that $k \geq k_0$ and $A_k \neq \emptyset$.
Let
\[
a_k^* = \max(A_k).
\]
Then
\[
2^{k-1} \leq a_k^* < 2^k
\]
and
\[
A \cap \left[a_k^*+1,2^k \right) = \emptyset.
\]
Consider the integer
\[
2a_k^* \in 2A.
\]
Because $a_k^* \geq a_0$, we have $2a_k^* \geq 2a_0 \geq n_0$,
and so $r_{A,2}(2a_k^*) \geq \ell$.
It follows that the set $A$ contains a subset
\[
\{ a_{i,j}:i=1,2 \text{ and } j = 1,\ldots, \ell -1\}
\]
such that, for $j=1,2,\ldots, \ell -1$,
\[
a_{1,j} < a_{2,j}
\]
\[
a_{1,1} < a_{1,2} < \cdots < a_{1,\ell -1} < a_k^*
< a_{2,\ell-1}< \cdots < a_{2,2} < a_{2,1} \]
and
\[
a_{1,j} +a_{2,j} = 2a_k^*
\]
for $j=1,2,\ldots, \ell-1$.
Moreover, $a_{1,j} \geq 0$ implies that
\[
a_{2,j} \leq 2a_k^*
\]
for $j =1,2,\ldots, \ell-1$.
Because $0 \leq a_{1,j} < a_{2,j} $, we have
\[
2a_k^*= a_{1,j} + a_{2,j} < 2a_{2,j}
\]
and so
\[
a_k^* < a_{2,j} \leq 2a_k^* < 2^{k+1}.
\]
Therefore,
\[
a_k^* + 1 \leq a_{2,j} < 2^{k+1}.
\]
Equivalently,
\[
a_{2,j} \in \left[ a_k^*+1,2^{k+1} \right) = \left[ a_k^*+1, 2^k \right) \cup I_{k+1}
\]
for $j=1,2,\ldots, \ell-1$.
Because $A \cap \left[a_k^*+1, 2^k \right) = \emptyset$,
we see that $a_{2,j} \in A_{k+1}$
for $j =1,2,\ldots, \ell-1$,
and so $\left| A_{k+1}\right| \geq \ell - 1$.
It follows by induction that $\left| A_k \right| \geq \ell - 1$ for all $k \geq k_0 + 1$.
Let $x \geq 2$, and choose the positive integer $t$ such that
\[
2^t \leq x < 2^{t + 1}.
\]
Because
\[
A \setminus \{ 0\} = \bigcup_{k= 1}^{\infty} A_k
\]
it follows that
\begin{align*}
A(x) & \geq A\left( 2^t \right) \geq (\ell - 1)(t - k_0 ) \\
& > (\ell - 1) \left( \frac{\log x}{\log 2} - k_0 - 1 \right) \\
& = \frac{(\ell - 1)\log x}{\log 2} - w_0
\end{align*}
for all $x \geq 2$.
Let $w_0 = (\ell - 1) (k_0 + 1)$.
This completes the proof.
\end{proof}
\begin{theorem}
Let $h \geq 2$ and $\ell \geq 2$ be integers,
and let $A$ be an infinite $B_{h-1, s}$-set of nonnegative integers.
If $r_{A,h}(n) \geq \ell$ for all sufficiently large integers $n \in hA$,
then there is a positive number $w_0$ such that
\[
A(x) > \frac{(\ell - 1)\log x}{s \log h} - w_0
\]
for all $x \geq h$.
\end{theorem}
\begin{proof}
For every positive integer $k$, let
\[
I_k = \left[ h^{k-1}, h^k \right)
\]
and
\[
A_k = A \cap I_k.
\]
The sets $\{A_k:k=1,2,\ldots \}$ partition $A\setminus \{ 0\}$.
There exists a positive integer $n_0$ such that, if $n \geq n_0$ and $n \in hA$,
then $r_{A,h}(n) \geq \ell$. Because $A$ is infinite, there exists $a_0 \in A$
with $ha_0 \geq n_0$. Choose $k_0$ such that $a_0 \in A_{k_0}$.
Suppose that $k \geq k_0$ and $A_k \neq \emptyset$.
Let
\[
a_k^* = \max(A_k).
\]
Then
\[
h^{k-1} \leq a_k^* < h^k
\]
and
\[
A \cap \left[a_k^*+1,h^k \right) = \emptyset.
\]
Consider the integer
\[
ha_k^* \in hA.
\]
Because $a_k^* \geq a_0$, we have $ha_k^* \geq ha_0 \geq n_0$,
and so $r_{A,h}(ha_k^*) \geq \ell$.
It follows that the set $A$ contains a subset
\[
\{ a_{i,j}:i=1,\ldots, h \text{ and } j = 1,\ldots, \ell -1\}
\]
such that, for $j=1,2,\ldots, \ell -1$,
\begin{equation} \label{filaseta:ineq3}
a_{1,j} < a_{h,j}
\end{equation}
\begin{equation} \label{filaseta:ineq4}
a_{1,j} \leq a_{2,j} \leq \cdots \leq a_{h-1,j} \leq a_{h,j}
\end{equation}
\[
a_{1,j} +a_{2,j} + \cdots + a_{h-1,j} + a_{h,j} = ha_k^*
\]
and
\[
\left( a_{1,j} , a_{2,j}, \ldots , a_{h-1,j} , a_{h,j} \right) \neq
\left( a_{1,j'} , a_{2,j'}, \ldots , a_{h-1,j'} , a_{h,j'} \right)
\]
for $1 \leq j < j' \leq \ell -1$.
Moreover, because $a_{i,j} \geq 0$ for $i=1,\ldots, h-1$, we have
\[
a_{h,j} \leq ha_k^*
\]
for $j =1,2,\ldots, \ell-1$.
Let $b \in A$ and let $J$ be a subset of $\{1,...,\ell -1\}$ such
that $a_{h,j} = b$ for all $j \in J$.
If $j \in J$, then
\[
a_{1,j} +a_{2,j} + \cdots + a_{h-1,j} = ha_k^* - a_{h,j} = ha_k^* - b
\]
and so
\[
r_{A,h-1}(ha_k^* - b) \geq |J|.
\]
Because $A$ is a $B_{h-1,s}$-set, we have
\[
r_{A,h-1}(ha_k^* - b) \leq s
\]
and so $ |J| \leq s$.
The pigeonhole principle implies that
\[
\left| \left\{ a_{h,j}:j=1,\ldots, \ell -1 \right\} \right| \geq \frac{\ell - 1}{s}.
\]
It follows from inequalities~\eqref{filaseta:ineq3} and~\eqref{filaseta:ineq4} that
\[
ha_k^*= a_{1,j} +a_{2,j} + \cdots + a_{h,j} < ha_{h,j}
\]
and so
\[
a_k^* < a_{h,j} \leq ha_k^* < h^{k+1}.
\]
Therefore,
\[
a_k^* + 1 \leq a_{h,j} < h^{k+1}.
\]
Equivalently,
\[
a_{h,j} \in \left[ a_k^*+1,h^{k+1} \right) = \left[ a_k^*+1, h^k \right) \cup I_{k+1}
\]
for $j =1,2,\ldots, \ell-1$.
Because $A \cap \left[a_k^*+1, h^k \right) = \emptyset$,
we see that
\[
\{ a_{h,j} : j=1,2,\ldots, \ell-1 \} \subseteq A_{k+1}
\]
and so
\[
\left| A_{k+1}\right| \geq \frac{\ell - 1}{s}.
\]
It follows by induction that $\left| A_k \right| \geq (\ell - 1)/s$ for all $k \geq k_0 + 1$.
Let $x \geq h$, and choose the positive integer $t$ such that
\[
h^t \leq x < h^{t + 1}.
\]
Because
\[
A \setminus \{ 0\} = \bigcup_{k= 1}^{\infty} A_k
\]
it follows that
\begin{align*}
A(x) & \geq A\left( h^t \right) \geq \left(\frac{\ell - 1}{s}\right)(t - k_0 ) \\
& > \left( \frac{\ell - 1}{s} \right) \left( \frac{\log x}{\log h} - k_0 - 1 \right)
\end{align*}
for $x \geq h$. Let $w_0 = (\ell - 1) (k_0 + 1)/s$.
This completes the proof.
\end{proof}
\section{Introduction}
Two common approaches to probabilistically describe conditional dependence structures are based on undirected graphs (Markov networks) and directed acyclic graphs (DAG) (Bayesian networks), e.g.~\cite{pearl}.
The vertices of the graph represent variables, while the presence (absence) of an edge between two vertices indicates possible dependence (independence) between the two corresponding variables. Applications of Markov networks include biological networks, e.g., to study the dependence structure among genes from expression data \cite{Chun 2014,Dobra 2004} and financial time series for forecasting and predictive portfolio analysis \cite{Carvalho 2007,Wang 2011}; while Bayesian networks occur in expert system research for providing rapid absorption and propagation of evidence \cite{lau88, pearl86}, path analysis for computing implied correlations \cite{werm80}, and psychometrics for causal pathways \cite{boere, kana}.
Chain graphs \cite{lau84,laur,werm} provide an elegant unifying point of view on Markov and Bayesian networks.
These models allow for edges that are both directed and undirected and do not contain any semi-directed
cycles.
Although chain graphs were first introduced in the late eighties, most research has focused on Bayesian networks and Gaussian Graphical Models.
Recently, they have been receiving more attention, as they have proved very useful in applications due to their ability to represent both symmetric and non-symmetric relationships between the random variables of interest. However, several different ways of interpreting chain graphs and the conditional independencies they encode exist in the literature, giving rise to different so-called \textit{chain graph interpretations}. Whilst a directed edge works as in DAGs when it comes to representing independence, an undirected edge can be understood in different ways, giving rise to the different interpretations. This implies that chain graphs can represent every independence model achievable by a DAG, whereas the opposite does not hold. The most common interpretations
are the Lauritzen-Wermuth-Frydenberg (LWF) interpretation, the Andersson-Madigan-Perlman (AMP) interpretation and the multivariate regression (MVR) interpretation. Each interpretation has its own way of determining conditional independences in a chain graph, and each interpretation subsumes another in terms of representable independence models; see \cite{sonn}. Moreover,
\cite{lau02} discuss causal interpretation of chain graphs.
This work focuses on chain graphs of the AMP type \cite{and,levi01} and develops a statistical framework to infer the chain graph structure from data measured on a set of variables of interest with the goal of understanding their association pattern. Many inferential procedures have concentrated on the estimation of the parameters characterizing the graph, \textit{given the chain graph topology}. For example, \cite{drton} consider likelihood-based parameter inference for chain graph models that are directly observed. Nevertheless, in the machine learning literature, methods based on optimization have been proposed to perform inference on the graph structure itself in the context of chain graphs, e.g.~\cite{mcc14,pena14}. We consider the case where
not only are the parameters unknown, but the chain graph itself is unobserved. The main objective is to perform Bayesian inference in such a context.
There are several contributions that propose Bayesian methods in related contexts. In \cite{silva1}, the author proposes a method to perform full Bayesian inference for acyclic directed mixed graphs using
Markov chain Monte Carlo (MCMC) methods. There are several similarities between the model there (based upon structural equation models (SEM), e.g.~\cite{bollen}) and the one developed here. However, the main difference
is given by the structure of the latent graph and the associated constraints in our case. These latter constraints are more stringent and lead to more complexity in the computational strategy. In \cite{wang} a similar model to the one presented in \cite{silva1} is developed, which uses a spike-and-slab prior to induce sparsity in the graph. The computational scheme is expected to outperform that in \cite{silva1} for a large number
of nodes. The prior structure in \cite{wang} could in principle be extended to accommodate chain graph structures at the cost of introducing constraints that could make computations infeasible.
The contributions of this article are as follows. Based upon the likelihood method in \cite{drton} we develop a new Bayesian model for latent AMP chain graphs. This model can also incorporate covariate information, if available.
We introduce a sequential Monte Carlo (SMC) method as in \cite{delm:06} that
improves upon MCMC-based methods in this context. Our approach is applied to real case studies on university graduation rates and pharmacokinetics. We find the performance of our algorithm to be stable and robust.
This article is structured as follows. In Section \ref{sec:model} we describe our model and prior specifications. In Section \ref{sec:alg} we introduce the SMC algorithm. In Section \ref{sec:res} we present a simulation study and two real-life applications. The appendix details further elements of the algorithm introduced in Section \ref{sec:alg}.
\section{Model}\label{sec:model}
\subsection{Likelihood}
Let $Y=(Y_1,\ldots, Y_p)\in\mathbb{R}^p$ be a random vector whose elements correspond to $p\in\mathbb{N}$ nodes of a graph. We assume we have $m\in\mathbb{N}$ observations on $Y$: $y_{1:m}$, $y_i\in\mathbb{R}^{p}$.
A graph $G = (V,E)$ is described by a set of nodes $V$ and edges $E$, with variables $Y_1,\ldots,Y_p$ placed at the nodes. The edges define the global conditional independence structure of the distribution. An AMP chain graph is a graph whose every edge is directed or undirected such that it does not contain any semi-directed cycles, that is, it contains no
path from a node $v$ to itself with at least one directed edge such that all directed edges have the same orientation. Each graph can be identified by an adjacency matrix. Let $A$ be a $p \times p$ matrix with entries $(a_{ij})_{1\leq i,j\leq p}$, where
$a_{ij}\in\{0,\dots,r\}$ (we set $r=3$; this notation will be used later), with $a_{ii}=0$ and, for $i\neq j$,
$$
a_{ij} = \left\{\begin{array}{ll}
0 & \textrm{ no edge between}~i~\textrm{and}~j\\
1 & \textrm{ undirected edge between}~i~\textrm{and}~j\\
2 & \textrm{ directed edge from}~i~\textrm{to}~j \\
3 & \textrm{ directed edge from}~j~\textrm{to}~i
\end{array}\right.
$$
and, given the upper-triangular part of $A$, for $j<i$
$$
a_{ji} = \left\{\begin{array}{ll}
a_{ij} & \textrm{if}~a_{ij}\in\{0,1\} \\
2 & \textrm{if}~a_{ij}=3 \\
3 & \textrm{if}~a_{ij}=2.
\end{array}\right.
$$
Given $p$, a labelling of the nodes and the adjacency matrix $A$ define a graph $G(A)$. Let $\mathcal{C}$ denote the set of possible chain graphs for a set of $p$ vertices.
Given $A$, let $\Omega$ be a real-valued positive definite $p\times p$ matrix such that $\omega_{ij}=0$ whenever $i\neq j$ and $a_{ij}\neq 1$. In addition, given $A$, let
$B$ be a real-valued $p\times p$ matrix such that $b_{ij}=0$ whenever $a_{ij}\neq 2$. Finally, set
$$
\Sigma(B,\Omega) = (I-B)^{-1} \Omega^{-1} (I-B')^{-1}
$$
where $I$ is the $p\times p$ identity matrix. To inherit the AMP chain graph property, $\Sigma(B,\Omega)$ should be a $p\times p$ positive definite matrix.
The absence of semi-directed cycles implies that the vertex set of a chain graph can be partitioned into so-called chain components such that edges within a chain component are undirected whereas the edges between two chain components are directed and point in the same direction. More precisely, the vertex set of a chain graph $G(A)$ can be partitioned into subsets $\mathscr{T}(G(A))$
such that all edges within each subset $\tau$ are un-directed and edges between two different subsets
$\tau\neq \tau'$ are directed. In the following, we assume that the partition $\tau \in\mathscr{T}(G(A))$ is maximal, that is, any two
vertices in a subset $\tau$ are connected by an un-directed path.
Recall that
the set of parents of a set of nodes $\mathcal{X}$ of $G$ is $ \textrm{pa}
( \mathcal{X} ) =\{ V_j \text{ s.t. }
V_j \rightarrow V_i \in G,\, V_j\not\in \mathcal{X} \ \text{and} \ V_i \in\mathcal{X} \}$.
Let $\textrm{pa}(\tau)$ denote the parents of $\tau$, and let $B_{\tau}=(b_{ij})_{i\in\tau,j\in\textrm{pa}(\tau)}$ and $\Omega_{\tau}=(\omega_{ij})_{i,j\in\tau}$ be the corresponding sub-matrices of $B$ and $\Omega$, respectively. For a single observation $y\in\mathbb{R}^p$, $y_{\tau}\mid y_{\textrm{pa}(\tau)},B,\Omega\sim\mathcal{N}_{|\tau|}(B_{\tau}y_{\textrm{pa}(\tau)},\Omega_{\tau}^{-1})$, where, for $d\in\mathbb{N}$,
$\mathcal{N}_d(\mu,\Sigma)$ denotes the $d-$dimensional Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. Then the joint density of $y\in\mathbb{R}^p$ given $B,\Omega,A$ is
$$
p(y\mid B,\Omega,A) = \prod_{\tau\in\mathscr{T}(G(A))}p(y_{\tau}\mid y_{\textrm{pa}(\tau)},B,\Omega).
$$
The likelihood is then given by
$$
p(y_{1:m}\mid B,\Omega,A) = \prod_{i=1}^m p(y_i \mid B,\Omega,A)
$$
that is, the observations are i.i.d.~given $B,\Omega,A$. We remark that this structure is similar to the structural equation model proposed by \cite{silva}, where a Gaussian distribution is also assumed for the variables corresponding to the nodes of the graph. Moreover, the above assumptions on the precision matrix ensure that the corresponding graph satisfies the AMP Markov property, as discussed in \cite{and}.
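To illustrate the factorization over chain components, the following Python sketch (an illustration we add here, not the implementation used for the experiments) computes the log-density of a single observation given $(A,B,\Omega)$, with $A$ coded as above ($a_{ij}=1$ undirected, $a_{ij}=2$ directed from $i$ to $j$):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def chain_components(A):
    """Connected components of the undirected part of A (entries == 1)."""
    p = A.shape[0]
    seen, comps = set(), []
    for s in range(p):
        if s in seen:
            continue
        comp, stack = [], [s]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(j for j in range(p) if A[v, j] == 1)
        comps.append(sorted(comp))
    return comps

def log_lik(y, A, B, Omega):
    """log p(y | B, Omega, A) for one observation y in R^p."""
    ll = 0.0
    for tau in chain_components(A):
        pa = sorted({j for i in tau for j in range(A.shape[0])
                     if A[j, i] == 2 and j not in tau})
        mean = B[np.ix_(tau, pa)] @ y[pa] if pa else np.zeros(len(tau))
        cov = np.linalg.inv(Omega[np.ix_(tau, tau)])
        ll += multivariate_normal.logpdf(y[tau], mean=mean, cov=cov)
    return ll
\end{verbatim}
For $m$ i.i.d.~observations, the full log-likelihood is simply the sum of \texttt{log\_lik} over the rows of the data matrix.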
\subsection{Prior Distribution on the Chain Graph Space}
We specify the prior on the set of possible chain graphs, by specifying a prior on the elements of the corresponding adjacency matrix.
For $1\leq i <j\leq p$, let
$$
\mathbb{P}(a_{ij}=l\mid \pi_{ij}) = \pi_{ij}(l), \quad l = 0,\dots,r
$$
where $\pi_{ij}=(\pi_{ij}(0),\dots,\pi_{ij}(r) )$. We assume each vector $\pi_{ij}$ to follow a Dirichlet distribution with parameter $\alpha=(\alpha_0, \dots, \alpha_r )$ and the $\pi_{ij}$ to be conditionally independent given $\alpha$. Note that, if available, covariate information could be incorporated to model $\pi_{ij}$, as in \cite{tan} for example.
Marginally, we have
\begin{align*}
\mathbb{P}(a_{ij}= l \mid \alpha) &=\frac{1}{B(\alpha)} \int \prod_{k=0}^{r} \pi_{ij}(k)^{ \mathbb{I}(k=l) } \prod_{k=0}^{r} \pi_{ij}(k)^{ \alpha_k-1 } \, d\pi_{ij}(0) \cdots d\pi_{ij}(r) \\
& = \frac{ \Gamma(1+\alpha_{l}) } { \Gamma(\alpha_{l}) } \, \frac{\Gamma\left( \sum_{k=0}^{r}\alpha_k \right)}{\Gamma\left( 1+\sum_{k=0}^{r}\alpha_k \right)} \;=\; \frac{\alpha_l}{\sum_{k=0}^{r}\alpha_k}, \qquad l= 0,\ldots, r
\end{align*}
where
$$ B(\alpha)= \frac{\prod_{k=0}^r \Gamma(\alpha_k) }{ \Gamma\left( \sum_{k=0}^r \alpha_k \right)}. $$
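The last expression above is simply the mean of the Dirichlet distribution; a quick Monte Carlo sanity check of this marginal (our illustration, not part of the model specification):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([3.0, 1.0, 1.0, 1.0])    # values used later in Section 3
pis = rng.dirichlet(alpha, size=200_000)  # draws of pi_ij
print(pis.mean(axis=0))  # ~ alpha / alpha.sum() = [0.5, 1/6, 1/6, 1/6]
\end{verbatim}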
The prior for a graph $G(A)$ then becomes:
$$
p\big((a_{ij})_{i<j} \mid \alpha\big) \propto \mathbb{I}_{\mathcal{C}}(G(A))\Big\{\prod_{i<j}\mathbb{P}(a_{ij}|\alpha) \Big\}.
$$
Note that the prior does not integrate to 1, but has a finite mass.
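The indicator $\mathbb{I}_{\mathcal{C}}(G(A))$ requires checking that a candidate adjacency matrix contains no semi-directed cycle; equivalently, no directed edge may lie within an undirected connected component, and the directed graph obtained by contracting these components must be acyclic. A minimal Python sketch of this check (our illustration, using the edge coding introduced above):
\begin{verbatim}
import numpy as np

def is_chain_graph(A):
    """True iff A (coded 0/1/2/3 as above) has no semi-directed cycle."""
    p = A.shape[0]
    # Union-find over undirected edges -> component label per vertex.
    parent = list(range(p))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(p):
        for j in range(p):
            if A[i, j] == 1:
                parent[find(i)] = find(j)
    label = [find(v) for v in range(p)]
    idx = {c: k for k, c in enumerate(sorted(set(label)))}
    ncomp = len(idx)
    # Directed edges between components; none allowed inside one.
    succ = {k: set() for k in range(ncomp)}
    for i in range(p):
        for j in range(p):
            if A[i, j] == 2:                 # directed edge i -> j
                a, b = idx[label[i]], idx[label[j]]
                if a == b:
                    return False             # directed edge in a component
                succ[a].add(b)
    # Kahn's algorithm: the component graph must be a DAG.
    indeg = [0] * ncomp
    for a in succ:
        for b in succ[a]:
            indeg[b] += 1
    queue = [k for k in range(ncomp) if indeg[k] == 0]
    removed = 0
    while queue:
        c = queue.pop()
        removed += 1
        for b in succ[c]:
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)
        succ[c] = set()
    return removed == ncomp
\end{verbatim}
A rejection sampler for the prior can repeatedly draw the $a_{ij}$ from their Dirichlet-multinomial marginals and keep a draw only when this check passes.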
We assume that, given $G(A)$, $\Omega$ has a $G$-Wishart prior distribution \cite{lenk} with parameters $\delta,D$, and that, given $A$, the
non-zero entries of $B$ are i.i.d. univariate Gaussian random variables with mean $\xi$ and variance $\kappa$.
Let $\mathcal{P}$ be the cone of $p\times p$ real-valued positive definite matrices. Then the joint prior for $\Omega$ and $B$ is given by:
$$
p(\Omega,B|A) = \mathbb{I}_{\mathcal{P}}(\Sigma(B,\Omega)) p(\Omega|A) p(B|A)
$$
which does not integrate to 1.
\subsection{Posterior Inference}
\label{sec:alg}
The posterior distribution $\pi$ of all the unknown of interest is given by:
$$
\pi(B,\Omega,(a_{ij})_{i<j}\mid y_{1:m},\alpha) \propto p(y_{1:m}|B,\Omega,A) p(\Omega,B|A)p\big((a_{ij})_{i<j}|\alpha\big).
$$
The structure of the model bears some resemblance to the models considered in \cite{silva1,wang}. Besides the graph structure being different (namely, we do not allow
bi-directional edges), we also impose the AMP constraint, mathematically represented by the term $ \mathbb{I}_{\mathcal{P}}(\Sigma(B,\Omega))$ in $p(\Omega,B|A)$, which leads to a much more
complex structure of the graph space than the aforementioned references. As a result, posterior exploration of graph space is more challenging and computationally demanding and requires carefully devised algorithms, in a sense more sophisticated, than the ones considered in \cite{silva1,wang}.
Our approach is to design an adaptive SMC sampler as in \cite{jasra} (see \cite{delm:06} for the original algorithm and \cite{beskos} for convergence results).
SMC-based algorithms offer the advantage of being easily parallelisable, often reducing computation times relative to serial methods.
to approximate the sequence of densities
$$
\pi_t(B,\Omega,(a_{ij})_{i<j}\mid y_{1:m},\alpha) \propto \nu_t(B,\Omega,(a_{ij})_{i<j}|\alpha)
$$
where
$$
\nu_t(B,\Omega,(a_{ij})_{i<j}\mid \alpha) = \Big[p(y_{1:m}\mid B,\Omega,A)\Big]^{\phi_t}p(\Omega,B\mid A)p\big((a_{ij})_{i<j}\mid \alpha\big)
$$
and $0=\phi_0<\cdots<\phi_T=1$. The motivation for this algorithm is well-documented (see the aforementioned references for details), and SMC algorithms have been successfully employed in many contexts to sample from high-dimensional posterior distributions.
In the implementation of the algorithm, we require a Markov kernel (e.g., an MCMC kernel) $K_t$, $t\in\{1,2,\dots,T\}$, that admits $\pi_t$ as an invariant distribution; this step is detailed in the appendix.
The algorithm is summarised in Algorithm \ref{algo:smc_samp}. For notational convenience, we set $u_t=(B_t,\Omega_t,(a_{t,ij})_{i<j})$.
In the initialization stage, one simulates exactly from $\pi_0$ using rejection sampling.
The sequence of $\{\phi_t\} $ is set as proposed in \cite{Zhou 2016}, and the choice of parameters for the MCMC kernel follows \cite{jasra}.
\begin{algorithm}
\begin{itemize}
\item{\textbf{Initialize}. Set $t=0$, for $i\in\{1,\dots,N\}$ sample $u_0^{(i)}$ from $\pi_0$.}
\item{\textbf{Iterate}: Set $t=t+1$. Determine $\phi_t$. If $\phi_t=1$ set $t=T$, otherwise determine the parameters of the MCMC kernel $K_t$.
\begin{itemize}
\item{Resample
$(\hat{u}_{t-1}^{(1)},\dots,\hat{u}_{t-1}^{(N)})$ using the weights $(w_{t}^{(1)},\dots,w_{t}^{(N)})$
where, for $i\in\{1,\dots,N\}$,
$$
w_t^{(i)} = \Big(\frac{\nu_t(u_{t-1}^{(i)})}{\nu_{t-1}(u_{t-1}^{(i)})}\Big)\Big(\sum_{j=1}^N \frac{\nu_t(u_{t-1}^{(j)})}{\nu_{t-1}(u_{t-1}^{(j)})}\Big)^{-1}
$$
}
\item{Sample $u_t^{(i)}\mid \hat{u}_{t-1}^{(i)}$ from $K_t$ for $i\in\{1,\dots,N\}$.}
\end{itemize}
}
\end{itemize}
\caption{SMC Sampler.
}
\label{algo:smc_samp}
\end{algorithm}
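The adaptive choice of $\phi_t$ can be implemented by bisection on the effective sample size (ESS) of the incremental weights, in the spirit of \cite{Zhou 2016}. A minimal sketch (our illustration; function and variable names are ours), exploiting the fact that the incremental weight of particle $i$ is proportional to $p(y_{1:m}\mid u^{(i)})^{\phi_t-\phi_{t-1}}$ and assuming the ESS decreases in $\phi$:
\begin{verbatim}
import numpy as np

def ess(logw):
    """Effective sample size of (unnormalised) log-weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def next_phi(ell, phi_prev, target, tol=1e-8):
    """Bisection for phi_t given per-particle log-likelihoods ell."""
    if ess((1.0 - phi_prev) * ell) >= target:
        return 1.0                  # can move straight to the posterior
    lo, hi = phi_prev, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ess((mid - phi_prev) * ell) >= target:
            lo = mid
        else:
            hi = mid
    return lo

# Usage with N = 500 particles and target ESS = N / 2:
# phi_t = next_phi(loglik, phi_prev, target=250.0)
\end{verbatim}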
\section{Simulation and Real Data Study}\label{sec:res}
In the following numerical experiments, we set $\delta=3$ and $D=I_{p}$ for the G-Wishart prior, and $\xi=0, \kappa=1$ for the distribution of the non-zero element of $B$. The number of particles for the SMC is $N=500$. The MCMC steps are Metropolis-Hastings kernels.
\subsection{Simulated Example}
In the simulated example, we assume that the $p$ random variables corresponding to the nodes of the graph are independent, i.e., $a_{ij} = 0 $ for all $1 \leq i,j \leq p$.
This very simple graph structure serves as a benchmark to evaluate the ability of our algorithm to recover the true structure.
We consider $p=10$ vertices and $m=100$ observations. To generate the data, we set the precision matrix $\Omega$ equal to the identity matrix $I_p$, and the entries of the matrix $B$ are set to $0$. For the Dirichlet prior, we consider $\alpha=(3,1,1,1)$. This prior implies a higher probability of no connection and assumes the same prior probability for each type of edge between two nodes. The reason we prefer this choice of hyper-parameters over $\alpha=(1,1,1,1)$ (which corresponds to a uniform prior) is the computational time required by the initialization step to generate the chain graph.
\begin{figure}[H]
\centering\includegraphics[width=\textwidth]{simulation1.pdf}
\caption{\textit{Simulation results for the independent case: (a) ESS in each SMC step; (b) plot of $\Omega[1,1]$ across on particles; (c) - (e) acceptance rates in each MCMC step; (f) distribution of the log(target) (i.e., log of posterior probabilities) of the samples at the end of the algorithm.}}
\label{fig1}
\end{figure}
The simulation results are summarized in Figure \ref{fig1}. The effective sample size (ESS) drops quickly in the first iterations and then settles into a stable lower regime after several resampling steps. The acceptance rates of the MCMC moves remain at reasonable levels throughout, never collapsing towards zero. Using the weighted samples $\{ W_{T}^{(n)} , A_{T}^{(n)}\}_{n=1}^N$ obtained at the last step of the SMC, we estimate the posterior probability $\mathbb{P} (a_{ij}=0 \mid y_{1:m},\alpha), 1\leq i < j \leq p $, by
$$
\widehat{\mathbb{P}} (a_{ij}=0 \mid y_{1:m},\alpha) = \sum_{n=1}^{N} W_{T}^{(n)}\mathbb{I}_{\{0\}}( a_{T,ij}^{(n)}),
$$
and summarize the results in Table~\ref{table1}. From the table, we can see that all the estimated posterior probabilities are greater than 0.7, and most are above 0.9. This suggests that our algorithm is able to recover the (independence) structure of the graph used to generate the data.
\begin{table}[!htbp]\centering
\caption{Posterior probability $\mathbb{P} (a_{ij}=0 | y_{1:m},\alpha), 1\leq i < j \leq p $.}
\label{table1}
\begin{tabular}{crrrrrrrrc}
\hline
Nodes & \multicolumn{1}{c}{$2$} & \multicolumn{1}{c}{$3$} & \multicolumn{1}{c}{$4$} &
\multicolumn{1}{c}{$5$} & \multicolumn{1}{c}{$6$} & \multicolumn{1}{c}{$7$} & \multicolumn{1}{c}{$8$} &
\multicolumn{1}{c}{$9$} & \multicolumn{1}{c}{$10$} \\
\hline
$~~1$ & 0.882 & 0.892 & 0.920 & 0.932 & 0.920 & 0.870 & 0.874 & 0.908 & 0.934 \\
$~~2$ & & 0.902 & 0.906 & 0.828 & 0.898 & 0.934 & 0.784 & 0.804 & 0.906 \\
$~~3$ & & & 0.914 & 0.890 & 0.880 & 0.890 & 0.908 & 0.900 & 0.890 \\
$~~4$ & & & & 0.900 & 0.866 & 0.882 & 0.770 & 0.918 & 0.900 \\
$~~5$ & & & & & 0.788 & 0.952 & 0.912 & 0.830 & 0.918 \\
$~~6$ & & & & & & 0.806 & 0.932 & 0.906 & 0.908 \\
$~~7$ & & & & & & & 0.904 & 0.738 & 0.916 \\
$~~8$ & & & & & & & & 0.918 & 0.740 \\
$~~9$ & & & & & & & & & 0.952 \\
\hline
\end{tabular}
\end{table}
\subsection{University Graduation Rates}
We investigate the performance of our algorithm when the latent graph has a more complex structure. We consider data first presented in \cite{Druzdzel} that stem from a 1993 study of college rankings. After initial analysis, Druzdzel and Glymour focus on $p=8$ variables:
\begin{table}[!htbp]\centering
\begin{tabular}{ll}
$spend$ & average spending per student,\\
$strat$ & student-teacher ratio, \\
$salar$ & faculty salary, \\
$rejr$ & rejection rate, \\
$pacc$ & percentage of admitted students who accept university's offer, \\
$tstsc$ & average test scores of incoming students, \\
$top10$ & class standing of incoming freshmen, and \\
$apgra$ & average percentage of graduation.
\end{tabular}
\end{table}
Based on $m=159$ universities, the correlation matrix of these eight variables is estimated in \cite{Druzdzel}. Conditional on this correlation matrix, \cite{drton} obtain a chain graph through the SIN model selection with significance level 0.15. To get a chain graph with different AMP and LWF Markov properties, \cite{drton} further deleted the undirected edge between $top10$ and $rejr$ and the undirected edge between $salar$ and $top10$, and introduced an undirected edge between $top10$ and $pacc$. The resulting graph is shown in Figure~\ref{figSIN}, and the corresponding adjacency matrix is shown in Table~\ref{tableSIN}. This graph has three chain components $\tau_1=\{ spend, strat, salar \}, \tau_2=\{top10, tstsc, rejr, pacc \}$ and $\tau_3=\{ apgra \}$. In the original article, \cite{Druzdzel} provide some insight about the causal relationships between some of the variables. The average spending per student ($spend$), student-teacher ratio ($strat$) and faculty salary ($salar$) are determined based on budget considerations and are not influenced by any of the five remaining variables. Rejection rate ($rejr$) and percentage of students who accept the university's offer from among those who are offered admission ($pacc$) precede the average test scores ($tstsc$) and class standing ($top10$) of incoming freshmen. The latter two are measures taken over matriculating students. Finally, graduation rate ($apgra$) does not influence any of the other variables.
\begin{figure}[!htbp]
\begin{minipage}[b]{.5\linewidth}
\centering
\includegraphics[width=7cm]{true.pdf}
\caption{\textit{Chain Graph Estimate presented in \cite{drton}. }}
\label{figSIN}
\end{minipage}%
\begin{minipage}[b]{.5\linewidth}
\centering
\renewcommand\arraystretch{1.5}
\small
\begin{tabularx}{\textwidth}{XYYYYYYYY}
\hline
& strat & spend & salar & top10 & tstsc & rejr & pacc & apgra \\
\hline
strat & 0 & 1 & 1 & 2 & 0 & 0 & 0 & 0 \\
spend & 1 & 0 & 1 & 2 & 2 & 2 & 0 & 0 \\
salar & 1 & 1 & 0 & 0 & 2 & 2 & 2 & 2 \\
top10 & 3 & 3 & 0 & 0 & 1 & 0 & 1 & 0 \\
tstsc & 0 & 3 & 3 & 1 & 0 & 1 & 0 & 2 \\
rejr & 0 & 3 & 3 & 0 & 1 & 0 & 1 & 0 \\
pacc & 0 & 0 & 3 & 1 & 0 & 1 & 0 & 2 \\
apgra & 0 & 0 & 3 & 0 & 3 & 0 & 3 & 0 \\
\hline
\end{tabularx}
\captionof{table}{The adjacency matrix corresponding to chain graph in Figure~\ref{figSIN}.}
\label{tableSIN}
\end{minipage}
\end{figure}
To test the performance of the proposed Bayesian chain graph model, we build an empirical baseline graph through the following procedure. Starting with a graph $G$ with only one node, we add one new node at a time to the existing graph. For every node in the subgraph not containing the new node, we consider the candidate graphs generated by adding one of the four types of connection (no edge, undirected edge, and directed edges in either direction) between the new node and that node. We choose the graph with the smallest BIC value obtained by fitting a structural equation model (SEM). SEM is a multivariate statistical analysis technique that is commonly used to analyse structural relationships. In general, the formulation of the SEM is given by the basic equation $v=Av+u$, where $v$ and $u$ are vectors of random variables. The parameter matrix $A$ contains regression coefficients and the matrix $P=\mathbb{E}(uu^{\prime})$ gives covariances among the elements of $u$. The chain graph model described in this work can be written as $Y = BY+U$, where $U$ is a multivariate Gaussian random vector with covariance matrix $\Omega^{-1}$. This is consistent with the structural equation model formulation. Figure~\ref{figbase} shows the derived graph and Table~\ref{tablebase} shows the corresponding adjacency matrix.
\begin{figure}[!htbp]
\begin{minipage}[b]{.5\linewidth}
\centering
\includegraphics[width=7cm]{base.pdf}
\caption{\textit{Empirical Graph.}}
\label{figbase}
\end{minipage}%
\begin{minipage}[b]{.5\linewidth}
\centering
\renewcommand\arraystretch{1.5}
\small
\begin{tabularx}{\textwidth}{XYYYYYYYY}
\hline
& strat & spend & salar & top10 & tstsc & rejr & pacc & apgra \\
\hline
strat & 0 & 1 & 2 & 2 & 2 & 2 & 2 & 2 \\
spend & 1 & 0 & 1 & 2 & 2 & 2 & 2 & 2 \\
salar & 3 & 1 & 0 & 1 & 2 & 2 & 3 & 2 \\
top10 & 3 & 3 & 1 & 0 & 2 & 1 & 0 & 2 \\
tstsc & 3 & 3 & 3 & 3 & 0 & 1 & 0 & 2 \\
rejr & 3 & 3 & 3 & 1 & 1 & 0 & 3 & 0 \\
pacc & 3 & 3 & 2 & 0 & 0 & 2 & 0 & 2 \\
apgra & 3 & 3 & 3 & 3 & 3 & 0 & 3 & 0 \\
\hline
\end{tabularx}
\captionof{table}{The adjacency matrix corresponding to the chain graph in Figure~\ref{figbase}.}
\label{tablebase}
\end{minipage}
\end{figure}
We compare posterior inference from our model with the chain graphs in Figure~\ref{figSIN} and Figure~\ref{figbase}. For the SMC sampler, we set the number of samples $N=5000$. For the Dirichlet prior, we first consider a prior based on the analysis of \cite{drton}, i.e., choosing $\alpha=(0.39 , 0.25 , 0.36 , 0.05)$ by matching the probabilities of each type of edge in Figure~\ref{figSIN}. More precisely, the numbers of no-edges, un-directed edges and directed edges from $i$ to $j$ are 11, 7 and 10 according to the graph in Figure~\ref{figSIN}, so the corresponding proportions of these three types of edge are the first three components of $\alpha$. The last component of $\alpha$ is chosen to be 0.05 to allow the occurrence of directed edges from $j$ to $i$ while keeping the probability of this edge type small compared to the other three. We also perform posterior inference with $\alpha=(1,1,1,1)$, which is a uniform prior, and with $\alpha=(1,3,3,3)$, which favours more connections. The remaining parameters are specified as in the previous section.
Based on the weighted samples $\{ W_{T}^{(n)} , A_{T}^{(n)}\}_{n=1}^N$, we first estimate the posterior probability of occurrence of each edge $\widehat{\mathbb{P}} (a_{ij}=k \mid y_{1:m},\alpha),$ $1 \leq i < j \leq p $ by
$$
\widehat{\mathbb{P}} (a_{ij}=k \mid y_{1:m},\alpha) = \sum_{n=1}^{N} W_{T}^{(n)}\mathbb{I}_{\{k\}}( a_{T,ij}^{(n)}), \quad k = 0,1,2,3.
$$
Then the entries of the estimated adjacency matrix $A$ are given by
$$
a_{ij} = \mathop{\arg\max}_{k=0,1,2,3} { \widehat{\mathbb{P}} (a_{ij}=k \mid y_{1:m},\alpha) }.
$$
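In code, these two estimation steps amount to a weighted average of indicator arrays followed by an entrywise argmax; a minimal sketch (our illustration):
\begin{verbatim}
import numpy as np

def edge_probabilities(W, A_samples, r=3):
    """W: (N,) normalised weights; A_samples: (N, p, p) adjacency draws.
    Returns probs[k, i, j] = estimated P(a_ij = k | y, alpha)."""
    N, p, _ = A_samples.shape
    probs = np.zeros((r + 1, p, p))
    for k in range(r + 1):
        probs[k] = np.tensordot(W, (A_samples == k).astype(float), axes=1)
    return probs

def map_adjacency(probs):
    """Entrywise argmax over the edge types k = 0, ..., r."""
    return probs.argmax(axis=0)
\end{verbatim}
Note that the entrywise argmax need not itself define a valid chain graph; the highest-posterior-probability sample, as used in the tenofovir study below, avoids this issue.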
Figure \ref{fig_post} shows the estimated chain graphs obtained under each prior. The structure of the chain graph obtained by setting $\alpha=(0.39,0.25,0.36,0.05)$ is obviously more similar to Figure~\ref{figSIN} than that obtained using $\alpha=(1,1,1,1)$. This is not surprising, since the prior with $\alpha=(0.39,0.25,0.36,0.05)$ is very informative and forces the posterior distribution of the edges to be similar to Figure~\ref{figSIN}. The estimated chain graph obtained by setting the hyper-parameters equal to $\alpha=(1,3,3,3)$ favours more connections than the other two priors. If we compare our results with the conclusions in \cite{Druzdzel}, the estimated chain graph obtained under the informative prior indeed shows that $spend$, $strat$ and $salar$ are \textit{parents} of other variables, and $apgra$ is a \textit{child} of the others. However, it does not show that $rejr$ and $pacc$ precede $tstsc$ and $top10$ (the graph shows just the opposite relation). Similarly, the graph obtained under the prior with $\alpha=(1,3,3,3)$ shows that $tstsc$ is a parent of $salar$, $strat$ and $spend$, which contrasts with the available prior information. This can be improved by modifying the ordering of nodes and choosing a small value of $\alpha_3$.
\begin{figure}[!htbp]
\centering
\subfigure[$\alpha=(0.39 , 0.25 , 0.36 , 0.05)$]{
\includegraphics[width=7cm]{est_A.pdf}
}
\quad
\subfigure[$\alpha=(1 , 1 , 1 , 1)$]{
\includegraphics[width=7cm]{est_dir1.pdf}
}
\quad
\subfigure[$\alpha=(1,3,3,3)$]{
\includegraphics[width=7cm]{est_dir_1333.pdf}
}
\caption{\textit{(a): posterior estimated chain graph using a Dirichlet prior with $\alpha=(0.39 , 0.25 , 0.36 , 0.05)$; (b) posterior estimated chain graph using a Dirichlet prior with $\alpha=(1 , 1 , 1 , 1)$; (c) posterior estimated chain graph using a Dirichlet prior with $\alpha=(1 , 3 , 3 , 3)$.}}\label{fig_post}
\end{figure}
Finally, we compare our results with those obtained using SEM. We use the package {\tt sem} in R to fit an SEM to these data. Note that when fitting the SEM the graph topology is fixed.
Table \ref{tablesummary} presents a brief summary of the different chain graphs fitted with the \textit{sem} function. From the table, we can see that the chain graph selected by the Bayesian chain graph model with prior $\alpha=(0.39,0.25,0.36,0.05)$ has smaller AIC and BIC values than the graph selected by the SIN model selection procedure and the empirical graph, which suggests that our algorithm can take advantage of available prior information and perform better.
\begin{table}[!htbp]\centering
\caption{Summaries of different chain graphs using package SEM.}
\label{tablesummary}
\begin{tabularx}{1\textwidth}{XY|XY|XY}
\hline
\multicolumn{4}{c|}{\multirow{3}*{Base chain graph}} & \multicolumn{2}{c}{\multirow{3}*{Chain graph selected by SIN } } \\
\multicolumn{4}{c|}{}&\multicolumn{2}{c}{} \\
\multicolumn{4}{c|}{}&\multicolumn{2}{c}{} \\
\hline
Edge & p-value & Edge & p-value & Edge & p-value \\
\hline
strat --- spend & 1.630e-14 & pacc $\rightarrow$ salar & 1.137e-06 & strat --- spend & 1.630e-14 \\
strat $\rightarrow$ salar & 1.382e-06 & pacc $\rightarrow$ rejr & 1.470e-03 & strat --- salar & 1.082e-05 \\
spend --- salar & 2.629e-11 & strat $\rightarrow$ apgra & 8.237e-02 & strat $\rightarrow$ top10 & 1.935e-09 \\
strat $\rightarrow$ top10 & 4.743e-07 & spend $\rightarrow$ apgra & 6.067e-02 & spend --- salar & 7.156e-13 \\
spend $\rightarrow$ top10 & 2.822e-28 & salar $\rightarrow$ apgra & 4.794e-03 & spend $\rightarrow$ top10 & 5.979e-34 \\
top10 --- salar & 8.931e-03 & top10 $\rightarrow$ apgra & 4.253e-01 & spend $\rightarrow$ tstsc & 3.995e-12 \\
strat $\rightarrow$ tstsc & 3.140e-03 & tstsc $\rightarrow$ apgra & 1.096e-10 & spend $\rightarrow$ rejr & 2.909e-03 \\
spend $\rightarrow$ tstsc & 6.634e-01 & pacc $\rightarrow$ apgra & 2.711e-03 & salar $\rightarrow$ tstsc & 2.350e-05 \\
salar $\rightarrow$ tstsc & 2.008e-04 & & & salar $\rightarrow$ rejr & 1.323e-03 \\
top10 $\rightarrow$ tstsc & 6.831e-19 & & & salar $\rightarrow$ pacc & 1.827e-14 \\
strat $\rightarrow$ rejr & 1.954e-01 & & & salar $\rightarrow$ apgra & 1.570e-02 \\
spend $\rightarrow$ rejr & 3.621e-03 & & & top10 --- tstsc & 1.256e-09 \\
salar $\rightarrow$ rejr & 2.575e-04 & & & top10 --- pacc & 5.020e-01 \\
top10 --- rejr & 1.816e-04 & & & tstsc --- rejr & 8.297e-03 \\
tstsc --- rejr & 1.003e-02 & & & tstsc $\rightarrow$ apgra & 8.352e-19 \\
strat $\rightarrow$ pacc & 2.585e-02 & & & rejr --- pacc & 5.617e-03 \\
spend $\rightarrow$ pacc & 4.109e-07 & & & pacc $\rightarrow$ apgra & 5.481e-03 \\
\hline
\multicolumn{2}{c}{AIC} & \multicolumn{2}{c|}{BIC} &\centering{AIC} & BIC \\
\multicolumn{2}{c}{67.887} & \multicolumn{2}{c|}{-13.319} & \centering{80.838} & -24.919 \\
\hline
\multicolumn{2}{c}{\multirow{3}*{Chain graph selected by algorithm } } \vline & \multicolumn{2}{c|}{\multirow{3}*{Chain graph selected by algorithm } } &\multicolumn{2}{c}{\multirow{3}*{Chain graph selected by algorithm } } \\
&&&&&\\
\multicolumn{2}{c}{($\alpha=(0.39 , 0.25 , 0.36 , 0.05)$)} \vline& \multicolumn{2}{c|}{($\alpha=(1,1,1,1)$)} &\multicolumn{2}{c}{($\alpha=(1,3,3,3)$)}\\
\hline
Edge & p-value & Edge & p-value & Edge & p-value \\
strat --- spend & 1.630e-14 & spend $\rightarrow$ strat & 2.150e-52 & spend $\rightarrow$ strat & 1.536e-42\\
strat $\rightarrow$ salar & 1.727e-06 & strat $\rightarrow$ salar & 9.127e-07 & salar $\rightarrow$ strat & 4.952e-06 \\
strat $\rightarrow$ top10 & 3.597e-09 & spend $\rightarrow$ salar & 2.484e-24 & strat $\rightarrow$ top10 & 1.636e-07 \\
spend $\rightarrow$ salar & 4.956e-33 & top10 $\rightarrow$ spend & 2.068e-25 & tstsc $\rightarrow$ strat & 8.237e-01\\
spend $\rightarrow$ top10 & 1.086e-33 & salar $\rightarrow$ pacc & 7.304e-10 & salar $\rightarrow$ spend & 9.659e-11 \\
spend $\rightarrow$ tstsc & 4.059e-12 & apgra $\rightarrow$ salar & 3.751e-11 & spend $\rightarrow$ top10 & 3.881e-10\\
salar $\rightarrow$ tstsc & 5.524e-05 & top10 --- tstsc & 2.163e-14 & tstsc $\rightarrow$ spend & 2.886e-08 \\
salar $\rightarrow$ pacc & 1.012e-18 & top10 $\rightarrow$ rejr & 2.504e-03 & tstsc $\rightarrow$ salar & 3.654e-27 \\
salar $\rightarrow$ apgra & 3.201e-05 & tstsc $\rightarrow$ rejr & 2.847e-04 & salar $\rightarrow$ rejr & 8.844e-02 \\
top10 --- tstsc & 2.295e-10 & tstsc $\rightarrow$ apgra & 1.428e-43 & salar $\rightarrow$ pacc & 1.062e-08 \\
top10 $\rightarrow$ rejr & 1.951e-03 & rejr $\rightarrow$ pacc & 1.480e-04 & salar $\rightarrow$ apgra & 1.036e-04 \\
tstsc $\rightarrow$ rejr & 2.348e-04 & apgra $\rightarrow$ pacc & 3.587e-03 & tstsc $\rightarrow$ top10 & 1.033e-20 \\
tstsc $\rightarrow$ apgra & 1.367e-18 && & top10 $\rightarrow$ rejr & 9.160e-03 \\
rejr $\rightarrow$ pacc & 1.032e-03 && & tstsc $\rightarrow$ rejr & 5.443e-03 \\
pacc --- apgra & 5.315e-03 && & tstsc $\rightarrow$ apgra & 2.644e-17 \\
&&&& rejr $\rightarrow$ pacc & 3.048e-04 \\
&&&& apgra $\rightarrow$ pacc & 5.759e-03 \\
\hline
\centering{AIC} & BIC & \centering{AIC} & BIC & \centering{AIC} & BIC \\
\centering{61.372} & -50.522 & \centering{102.93} & -18.169 & \centering{58.401} & -47.356 \\
\hline
\end{tabularx}
\end{table}
\subsection{Tenofovir study}
\noindent In this section, we illustrate our Bayesian model for AMP chain graphs on a real data application from the RMP-02/MTN-006 study (\cite{AntonEtAl2012}). Tenofovir (TFV) is a medication used to treat chronic hepatitis B and to prevent and treat HIV. TFV $1\%$ gel demonstrated $39\%$ protective efficacy in women using the gel within 12 hours before and after sexual activity in the Centre for the AIDS Programme of Research in South Africa 004 study \cite{Abdool}. Daily dosing of tenofovir disoproxil fumarate (TDF)/emtricitabine provides 62 to $73\%$ protection against HIV transmission in serodiscordant men and women enrolled in the Partners PrEP study \cite{Baeten}. The RMP-02 study was designed to evaluate the systemic safety and biologic effects of oral TFV combined with a gel formulation of the drug for application rectally and vaginally. The study enrolled 18 patients, all of whom received a single oral dose of TFV, and randomized each patient to receive either the gel formulation or a placebo several weeks later. Details about the phase 1 study are given in \cite{AntonEtAl2012}. \cite{RichardsonHarmanEtAl2014} present analyses of the ancillary studies into the treatment's biologic effects.
The biologic effects we examine here concern the pharmacokinetics (PK) of TFV and its active metabolite tenofovir diphosphate (TFVdp), as well as the pharmacodynamics of the drug. The PK studies evaluate subjects' TFV and TFVdp concentrations in multiple physiologic compartments (i.e., tissues and cells) across multiple time points during the study. Table \ref{tab:Compartments} lists the compartments.
\begin{table}[htp]
\caption{Tissues and cell types examined in the PK studies}
\begin{center}
\begin{tabular}{llc}
\hline
Compound & Compartment & Notation \\
\hline
TFV & Blood plasma & TFV$_{plasma}$\\
TFV & Rectal biopsy tissue & TFV$_{tissue}$\\
TFV & Rectal fluid & TFV$_{rectal}$ \\
TFVdp & Rectal biopsy tissue & TFVdp$_{tissue}$ \\
TFVdp & Total mononuclear cells in rectal tissue & Total$_{\text{MMC}}$ \\
TFVdp & CD4$^+$ lymphocytes from MMC & CD4$^+_{\text{MMC}}$ \\
TFVdp & CD4$^-$ lymphocytes from MMC & CD4$^-_{\text{MMC}}$ \\
\hline
\end{tabular}
\end{center}
\label{tab:Compartments}
\end{table}%
\cite{RichardsonHarmanEtAl2014} demonstrate that tissue HIV infectibility (cumulative p24) is correlated with \textit{in vivo} concentrations of both TFV and TFVdp. Statistically significant, non-linear dose-response relationships with reduced tissue infectibility are found for one TFV compartment and four TFVdp compartments; the dose-response relationships are highly significant for TFVdp in whole rectal tissue, CD4$^+_{\text{MMC}}$, CD4$^-_{\text{MMC}}$ and Total$_{\text{MMC}}$ compartments. Furthermore, \cite{Yang} conduct a comprehensive pharmacokinetic study of rectally administered TFV gel that describes the distribution of TFV and TFVdp into various tissue compartments relevant to HIV infection. They argue that TFV rectal fluid concentrations may be reasonable bio-indicators of plasma and rectal tissue concentrations, making it easier to estimate adherence and TFV concentrations in the target tissue. Therefore, the correlations between TFV and TFVdp across compartments can be helpful in studying the HIV suppression process and in providing a measure of drug efficacy, enabling more advanced population pharmacokinetic modelling methods. In the following analysis, we investigate the correlation structure of the $p = 7$ concentration levels in Table \ref{tab:Compartments} collected at visit 12. From clinical knowledge, we would expect the following associations:
\begin{itemize}
\item TFV$_{plasma}$ is associated with TFV$_{tissue}$ (blood levels and tissue levels),
\item TFV$_{tissue}$ is associated with TFV$_{rectal}$ (rectal tissue and rectal fluid),
\item TFV$_{tissue}$ is associated with Total$_{\text{MMC}}$ (rectal tissue and mononuclear cells in rectal tissue),
\item Total$_{\text{MMC}}$ is associated with CD4$^+_{\text{MMC}}$ and CD4$^-_{\text{MMC}}$ (total and constituents).
\end{itemize}
We normalise the observations of each variable to have zero mean and unit standard deviation. The number of observations is $m=11$, and the number of particles is set to $N=5000$. We fit the model using, as hyper-parameters in the Dirichlet prior, both $\alpha=(1,1,1,1)$, which corresponds to the uniform prior, and $\alpha=(1,3,3,3)$, which favours the presence of connections. This latter prior choice is more suitable for small sample sizes. The remaining parameters are set as in the previous section.
Based on the weighted samples $\{ W_{T}^{(n)} , (A,B,\Omega)_{T}^{(n)}\}_{n=1}^N$, we calculate the posterior probabilities $\mathbb{P}\big((A,B,\Omega)_{T}^{(n)}\mid y_{1:m},\alpha\big)$. As posterior estimate of the resulting chain graph, we report the one obtained from the adjacency matrix $A_{T}^{(n^*)}$, where $ n^* = \mathop{\arg\max}_{n} \, \mathbb{P}\big((A,B,\Omega)_{T}^{(n)}\mid y_{1:m},\alpha\big)$; it is shown in Figure \ref{fig.tenofovir} (for both prior settings). The chain graph obtained under the prior with $\alpha=(1,3,3,3)$ clearly has more connections. Moreover, this graph shows that TFV$_{tissue}$ is related to TFV$_{plasma}$ and TFV$_{rectal}$, that TFV$_{tissue}$ is associated with Total$_{\text{MMC}}$ through TFV$_{plasma}$, and that CD4$^+_{\text{MMC}}$ causes Total$_{\text{MMC}}$ and CD4$^-_{\text{MMC}}$. This is consistent with the clinical knowledge mentioned before. However, some of these associations are missing in the chain graph obtained under the uniform prior, e.g. the edge between TFV$_{plasma}$ and TFV$_{tissue}$ (they are independent given Total$_{\text{MMC}}$ and CD4$^+_{\text{MMC}}$ under the AMP chain graph property).
\begin{figure}[!htbp]
\centering
\subfigure[$\alpha=(1,1,1,1)$]{
\includegraphics[width=8.3cm]{RMP_a_1.pdf}
}
\quad
\subfigure[$\alpha=(1,3,3,3)$]{
\includegraphics[width=8.3cm]{RMP_a_3.pdf}
}
\quad
\caption{\textit{Chain graph with highest posterior probability.}}\label{fig.tenofovir}
\end{figure}
We compare our results to those obtained by fitting an SEM using the \textit{sem} function in the R package {\tt sem}. We summarise the results in Table \ref{tableRMP}. The AIC and BIC values of the model corresponding to the chain graph obtained under the uniform prior are missing, which may be due to a singular Hessian matrix when estimating the covariance matrix of the parameters. From the table, it is evident that the p-values of some edges are large. For example, the edge CD4$^+_{\text{MMC}}$ $\rightarrow$ TFV$_{tissue}$ in the chain graph obtained under the uniform prior has a p-value of 0.704. These results suggest that these relationships are not significant in an SEM.
When inadequate fit of a structural equation model is observed, model modification is often conducted, followed by retesting of the modified model. The most popular statistic is the modification index, which is a chi-square score test statistic with one degree of freedom. The modification index provides an estimated value by which the model's chi-square test statistic would decrease if the corresponding parameter were added to the model and respecified as a free parameter. We perform model modification using the \textit{modIndices} function in the {\tt sem} package, which calculates modification indices and estimates parameter changes for the fixed and constrained parameters in a structural equation model. Table~\ref{tableMI} shows the five largest modification indices for both the $A$ matrix and the $P$ matrix for the model corresponding to the chain graph obtained under the Dirichlet prior with $\alpha=(1,3,3,3)$. The modification indices suggest that a better fit to the data would be achieved by adding an association between TFV$_{rectal}$ and TFV$_{plasma}$ to the model. The small sample size obviously makes it challenging to estimate associations. Furthermore, the level in rectal tissue may depend on whether or not the patient received the TFV gel or the placebo, which may make the correlation of TFV$_{tissue}$ with other variables unclear.
\begin{table}[!htbp]\centering
\caption{Summaries of two different chain graphs using package SEM.}
\label{tableRMP}
\begin{tabularx}{1.03\textwidth}{XY|XY}
\hline
\multicolumn{2}{c|}{\multirow{3}*{Chain graph selected by algorithm}} & \multicolumn{2}{c}{\multirow{3}*{Chain graph selected by algorithm } } \\
&&&\\
\multicolumn{2}{c}{($\alpha=(1,1,1,1)$)}\vline & \multicolumn{2}{c}{($\alpha=(1,3,3,3)$)} \\
\hline
Edge & p-value & Edge & p-value \\
CD4$^-_{\text{MMC}}$ $\rightarrow$ Total$_{\text{MMC}}$ & 9.881e-19 & CD4$^+_{\text{MMC}}$ $\rightarrow$ TFVdp$_{tissue}$ & 5.687e-01 \\
CD4$^-_{\text{MMC}}$ $\rightarrow$ CD4$^+_{\text{MMC}}$ & 4.091e-81 &
CD4$^+_{\text{MMC}}$ $\rightarrow$ CD4$^-_{\text{MMC}}$ & 2.941e-46 \\
CD4$^-_{\text{MMC}}$ $\rightarrow$ TFV$_{rectal}$ & 1.496e-01 & CD4$^+_{\text{MMC}}$ $\rightarrow$ TFV$_{plasma}$ & 1.210e-02 \\
Total$_{\text{MMC}}$ --- TFVdp$_{tissue}$ & 1.589e-02 & CD4$^+_{\text{MMC}}$ $\rightarrow$ Total$_{\text{MMC}}$ & 7.028e-10 \\
TFVdp$_{tissue}$ --- CD4$^+_{\text{MMC}}$ & 1.812e-02 & CD4$^+_{\text{MMC}}$ $\rightarrow$ TFV$_{rectal}$ & 1.874e-03 \\
CD4$^+_{\text{MMC}}$ --- TFV$_{rectal}$ & 8.583e-03 & TFVdp$_{tissue}$ --- CD4$^-_{\text{MMC}}$ & 8.477e-02 \\
Total$_{\text{MMC}}$ $\rightarrow$ TFV$_{plasma}$ & 1.815e-03 & TFVdp$_{tissue}$ $\rightarrow$ TFV$_{rectal}$ & 2.991e-01 \\
CD4$^+_{\text{MMC}}$ $\rightarrow$ TFV$_{tissue}$ & 7.043e-01 & TFVdp$_{tissue}$ $\rightarrow$ TFV$_{plasma}$ & 2.584e-01 \\
&& TFVdp$_{tissue}$ $\rightarrow$ Total$_{\text{MMC}}$ & 1.162e-13 \\
&& CD4$^-_{\text{MMC}}$ $\rightarrow$ Total$_{\text{MMC}}$ & 2.352e-22 \\
&& TFV$_{plasma}$ $\rightarrow$ TFV$_{tissue}$ & 4.259e-01 \\
&& TFV$_{plasma}$ --- Total$_{\text{MMC}}$ & 5.719e-02 \\
&& TFV$_{tissue}$ $\rightarrow$ TFV$_{rectal}$ & 7.550e-01 \\
&& Total$_{\text{MMC}}$ $\rightarrow$ TFV$_{rectal}$ & 2.337e-02\\
\hline
\centering{AIC} & BIC & \centering{AIC} & BIC \\
\centering{Inf} & Inf & \centering{46.875} & -11.909 \\
\hline
\end{tabularx}
\end{table}
\begin{table}[!htbp]\centering
\caption{Summaries of modification indices for the model corresponding to the chain graph obtained under the prior with $\alpha=(1,3,3,3)$.}
\label{tableRMPmod}
\begin{tabularx}{\textwidth}{XY|XY}
\hline
\multicolumn{2}{c|}{\multirow{3}*{5 largest modification indices, $A$ matrix}} & \multicolumn{2}{c}{\multirow{3}*{5 largest modification indices, $P$ matrix} } \\
&&&\\
\multicolumn{2}{c}{(regression coefficients)}\vline & \multicolumn{2}{c}{(variances/covariances)} \\
\hline
TFV$_{rectal}$ $\rightarrow$ Total$_{\text{MMC}}$ & 3.281 & TFV$_{rectal}$ --- Total$_{\text{MMC}}$ & 3.394 \\
TFV$_{rectal}$ $\rightarrow$ TFV$_{plasma}$ & 2.219 & TFV$_{rectal}$ --- TFV$_{plasma}$ & 2.881 \\
CD4$^-_{\text{MMC}}$ $\rightarrow$ TFV$_{rectal}$ & 0.709 & TFV$_{rectal}$ --- CD4$^-_{\text{MMC}}$ & 0.709 \\
TFV$_{rectal}$ $\rightarrow$ TFVdp$_{tissue}$ & 0.654 & TFV$_{rectal}$ --- TFVdp$_{tissue}$ & 0.709 \\
TFV$_{rectal}$ $\rightarrow$ CD4$^-_{\text{MMC}}$ & 0.555 & TFV$_{tissue}$ --- TFVdp$_{tissue}$ & 0.389 \\
\hline
\end{tabularx}
\end{table}
\section{Conclusions}
In this article we propose a novel Bayesian model for latent AMP chain graphs, for which observations are available only on the nodes of the graph. Posterior inference is performed through a specially devised SMC algorithm. We investigate the ability of the model to recover a range of structures, also when prior knowledge is available. The performance of the SMC sampler is stable and consistent in our numerical study. However, the sampler is not suitable when the number of nodes $p$ is large, as the initializing step is difficult. Moreover, the proposed algorithm does not scale well with respect to $p$. The computational cost of computing the probabilities in the adjacency matrix is $\mathcal{O}(p^2)$, and the computation of the normalizing constant in the $G$-Wishart distribution is quite expensive when $p$ is large (approximately $\mathcal{O}(p^3)$).
Several extensions of this work are possible. First, the algorithm can be extended to large $p$ by choosing a more efficient initial proposal $q$ and by actually exploiting parallel computing techniques. Second, the model can be extended to accommodate multiple groups of observations, allowing borrowing information across groups or time periods. Third, we could avoid sampling $\Omega$ and $B$ by using a Laplace approximation when calculating the posterior probability.
A time-dependent spacetime metric can result in quantum particle creation, as was first discussed by Parker~\cite{Parker}
in the context of the expansion of the universe. The cosmological creation of gravitons was discussed by
Grishchuk~\cite{Grishchuk:1975}, using the equation for tensor perturbations of an expanding universe found by Lifshitz~\cite{Lifshitz}.
The process of quantum particle creation has been studied subsequently in the context of inflation. After the end of inflation, quantum
creation of particles, including gravitons, can contribute to the matter and radiation of the universe~\cite{Ford:1987}.
We here focus on a different scenario involving graviton production due to rapid oscillations
around a mean expansion rate in a spatially flat Friedmann-Robertson-Walker (FRW) background. We consider two cosmological
models in which these kinds of oscillations arise. The first one involves the usual matter fields in standard general relativity plus a
minimally coupled scalar field (GRSF) in a harmonic potential. The second model involves $f(R)$ gravity, in which a term
proportional to the square of the Ricci scalar is added to the Einstein-Hilbert action; such a term can also arise in semiclassical
gravity, where the renormalized expectation value of a quantum matter stress tensor acts as a source. Although both
models lead to quantum graviton creation, the graviton wave equation, which determines the creation rates, is different for
each case. The framework of the GRSF model is standard general relativity, so the graviton equation is that obtained
by Lifshitz~\cite{Lifshitz}, and in the transverse, tracefree gauge, has the form of the Klein-Gordon equation for a massless,
minimally coupled scalar field. For this reason, the problem of calculating graviton creation in the GRSF model can be reduced to
that of calculating scalar particle production~\cite{FordParker:1977}. In the case of $f(R)$ gravity, the modified Einstein equation includes higher order derivative terms which lead to a modified graviton wave equation~\cite{Hawng:1991, HawngNoh:1996}.
This paper is organized as follows: In Sec.~\ref{perturbationcalculation}, we review a perturbation formalism which will be used to
calculate the graviton production rate. We also describe how, in both models, an oscillating
scale factor in a spatially flat FRW background can arise, and give explicit results for the number and energy density of the
gravitons created by oscillations around a flat background. In Sec.~\ref{scalinggravitondensity}, we calculate
the graviton energy density for both models in an expanding universe. In Sec.~\ref{cosmlogicalconstraints}, we discuss observational
constraints on the energy density of the created gravitons, and hence on the oscillation amplitude of the scale factor.
In Sec.~\ref{quantumdecoherenceinducedbygravitonbath}, we estimate the decoherence time of quantum systems induced
by spacetime geometry fluctuations due to the graviton bath.
In Sec.~\ref{summarydiscussion}, we summarize and discuss our main results.
In the Appendices, we derive in detail the oscillating scale factor and the Friedmann equation for each model.
Units in which $\hbar = c =1$ are used
throughout the paper. We define the reduced Planck mass to be $M_{pl}\equiv (8\pi G)^{-1/2}$, where $G$ is Newton's
constant. The metric signature is $(-,+,+,+)$,
Greek indices run from 0 to 3, and Latin indices for spatial components run from 1 to 3 .
\section{Perturbation calculation of graviton creation}
\label{perturbationcalculation}
\subsection{Perturbation expansion about conformal coupling}
We take the metric to be that of a spatially flat FRW universe, with the following line element:
\begin{equation}
ds^2=-dt^2+a^2(t)d\textbf{x}^2=a^2(\eta)(-d\eta^2+d\textbf{x}^2) \, ,
\label{eq:metric}
\end{equation}
where the conformal time $\eta$ is related to the scale factor $a(t)$ by $\eta=\int^{t}a^{-1}(t')dt'$. In this conformally flat spacetime,
gravitons in general relativity, using the transverse tracefree gauge, are equivalent to a pair of massless minimally coupled
scalar fields~\cite{FordParker:1977}. Each scalar field corresponds to one of the independent polarization states of the gravitons.
In our case, we calculate scalar particle production in the metric that we are interested in, Eq.~(\ref{eq:metric}), and then multiply the final expressions for the number density and energy density of the massless scalar field by a factor of 2. (For
discussions about graviton creation in Robertson-Walker universes, including calculations of number and energy densities,
see Refs.~\cite{Ford:1987, FordParker:1977}.)
The massless scalar field $\phi(x)$ satisfies the wave equation
\begin{equation}
\left[\square - {\xi} R(x) \right]\phi(x)=0 \, ,
\label{waveequation}
\end{equation}
where $\square=\nabla_{\mu}\nabla^{\mu}$ is the covariant d'Alembert operator, $R(x)$ is the Ricci scalar, and ${\xi}$ is the coupling constant between the scalar field and scalar curvature. The minimal coupling corresponds to $\xi=0$, which is a necessary condition to study graviton production using the scalar field equation.
In general, obtaining an exact solution of Eq.~(\ref{waveequation}) in a given metric can be difficult. We adopt an approximation
developed by Birrell and Davies~\cite{BD80,BirrellDavies:1982}, which is a perturbation expansion about the conformally invariant case,
$\xi =1/6$. After decomposing the field into modes $u_\mathbf{k}$ which satisfy Eq.~(\ref{waveequation}), and separating
these modes as $u_\mathbf{k}(x)=(2\pi)^{-\frac{3}{2}}\exp(i{\bf{k\cdot x}})a^{-1}(\eta)\chi_k(\eta)$, the equation for
$\chi_k(\eta)$ becomes
\begin{equation}
\frac{d^2 \chi_k(\eta)}{d\eta^2} + \left[k^2 -V(\eta)\right]\chi_k(\eta)=0 \, .
\label{xkequation}
\end{equation}
Here $k =|\mathbf{k}|$ and
\begin{equation}
V(\eta)= \left(\frac{1}{6} -\xi \right)\, a^2(\eta)\,R(\eta) \, .
\label{vetaequation}
\end{equation}
The Ricci scalar for the spacetime of Eq.~(\ref{eq:metric}) can be expressed as
\begin{equation}
R=C^{-1}\left(3\dot D + \frac{3}{2}D^2 \right) \, ,
\label{ricciscalar}
\end{equation}
where $D=\dot{C}/C$, $C(\eta)=a^2(\eta)$, and dot denotes the derivative with respect to $\eta$.
We impose the conditions
$V(\eta)\rightarrow0$ as $\eta\rightarrow \pm \infty$. Then the normalized solution of Eq.~(\ref{xkequation}) which has
positive frequency in the past is denoted by $\chi_k(\eta)$, and has the asymptotic form $\chi_k(\eta) \sim\chi_k^{in}(\eta) $,
as $\eta\rightarrow -\infty$, where
\begin{equation}
\chi_k^{in}(\eta)=(2 k)^{-\frac{1}{2}}\exp(-i k \eta) \, .
\end{equation}
With this initial condition, Eq.~(\ref{xkequation}) can be replaced by an integral equation
\begin{equation}
\chi_k(\eta)=\chi_k^{in}(\eta)+k^{-1} \int_{-\infty}^{\eta} V(\eta')\,\sin\left[k(\eta-\eta')\right]\chi_k(\eta')d \eta' \, .
\label{integralequation}
\end{equation}
The perturbation expansion results from successive iterations of this equation, and may be viewed as an expansion in
powers of $1/6 -\xi$. We will work to first order, and replace $\chi_k(\eta')$ by $\chi_k^{in}(\eta')$ in the integrand
of Eq.~(\ref{integralequation}). The resulting solution for $\chi_k(\eta)$ may be expressed in the late time region as
\begin{equation}
\chi_k^{out}(\eta)=\alpha_k \,\chi_k^{in}(\eta)+\beta_k \,\chi_k^{in *}(\eta)\,,
\end{equation}
where the Bogoliubov coefficient, $\beta_k$, is given by
\begin{equation}
\beta_k=-\frac{i}{2k}\int^{\infty}_{-\infty}\exp(-2i k\eta)V(\eta)d\eta \,.
\label{betacoefficients}
\end{equation}
The number density per unit of proper volume of created particles at late times is
\begin{equation}
n=2\times\left[2\pi^2 a^{3}(\eta)\right]^{-1}\int_0^{\infty} \vert\beta_{k}\vert^2 k^2 dk \, ,
\label{densitynumber}
\end{equation}
and the corresponding energy density is
\begin{equation}
\rho=2\times\left[2\pi^2 a^{4}(\eta)\right]^{-1}\int_0^{\infty}\vert\beta_{k}\vert^2 k^3 dk \, .
\label{energydensity}
\end{equation}
Here the factors of $2$ account for the polarization states, and the factors of $1/a^3$ and $1/a^4$ describe the dilution and
redshifting of massless particles by the continued
expansion of the universe after the creation process has essentially finished.
After substituting Eqs.~(\ref{vetaequation}) and (\ref{betacoefficients}) into Eqs.~(\ref{densitynumber}) and (\ref{energydensity}),
and performing the respective integrals in $k$, the number and energy density can be rewritten as coordinate-space integrals, as shown
in Refs.~\cite{BD80,BirrellDavies:1982},
\begin{equation}
n=2\times[16\pi a^3(\eta)]^{-1}\int^{\infty}_{-\infty}V^2(\eta_1)d\eta_1 \, ,
\label{densitynumbercoordinatespace}
\end{equation}
and
\begin{equation}
\rho=-2\times[32\pi^2a^4(\eta)]^{-1}\int^{\infty}_{-\infty}d\eta_1 \int^{\infty}_{-\infty}d\eta_2 \;
\frac{\ln\left|( \eta_2-\eta_1 ) \mu\right|^2}{2} \times \dot{V}(\eta_1)\dot{V}(\eta_2) \, .
\label{energydensitycoordinatespace}
\end{equation}
Here $\mu$ is an arbitrary mass. The energy density $\rho$ is independent of $\mu$, provided that
$\dot{V}(\eta) \rightarrow 0$ as $\eta \rightarrow \pm \infty$. In general, the energy density of gravitational waves, and hence of
gravitons, may not be clearly defined. However, when the wavelength of the gravitational waves is short compared to the
radius of curvature of the background spacetime, there is a well-defined effective energy momentum tensor for gravity,
as is discussed, for example, in Ref.~\cite{MTW}. This will be the case in the models we examine, as the period of the
scale factor oscillations is very short compared to the Hubble time of the FRW background. The graviton energy density
used here is obtained from this effective energy momentum tensor, as discussed in Ref.~\cite{FordParker:1977}.
Note that we are working to first order in a perturbation expansion in powers of $1/6 - \xi = 1/6$ (since $\xi = 0$ here), so the
lowest order results are only approximate, but should be adequate for the order of magnitude estimates which we seek.
\subsection{Oscillating scale factors in a spatially flat FRW background}
\label{oscillatingscalefactor}
We consider small oscillations around a FRW background, with a scale factor of the form
\begin{equation}
a(t)= \bar{a}(t)\left[ 1 + A_{\text{eff}}(t) \cos(\omega_0 t) \right] \, ,
\label{scalefactorinanexpandinguniverse}
\end{equation}
where $\bar{a}(t)$ is the background scale factor time averaged over oscillations, $A_{\text{eff}}(t) \ll 1$ is a nonconstant
oscillation amplitude, and $\omega_0$ is the angular frequency of oscillations. Note that if we take the background scale factor
to be that of flat spacetime and use $t \approx \eta$ to leading order, then Eq.~(\ref{scalefactorinanexpandinguniverse}) takes
the following form in conformal time:
\begin{equation}
a(\eta)= 1 + A_0 \cos(\omega_0 \eta) \, ,
\label{scalefactor}
\end{equation}
where $A_0 \ll 1$ is the constant amplitude of the metric oscillations.
We analyze two models in which a scale factor of the form in Eq.~(\ref{scalefactorinanexpandinguniverse}) can arise.
First, we consider the standard matter fields in general relativity, consisting of a perfect fluid, plus a minimally coupled
scalar field in a harmonic potential. Second, we consider a specific model in $f(R)$ gravity in which the gravitational
action is expanded in a power series to second order in the Ricci scalar.
\subsubsection{Standard matter fields in general relativity plus a minimally coupled scalar field (GRSF model)}
Coherent scalar field oscillations in an expanding universe were studied by Turner \cite{Turner:1983}, and have been widely
considered in the literature in the context of
inflation and the reheating epoch after inflation~\cite{Shtanovetal:1994} or as a dark matter
candidate~\cite{PeeblesVilenkin:1999,SuarezMatos:2011}. We focus on the oscillations of the scale factor driven by scalar field
oscillations. The action for this model is given by
\begin{equation}
S = \frac{M_{pl}^2}{2}\int d^4x \sqrt{-g}R + \int d^4x \left[ \mathscr{L}_M(g_{\mu\nu},\Psi_M) +
\mathscr{L}_{\text{scalar}}(g_{\mu\nu},\varphi)\right]\,,
\label{G(R)plusscalaraction}
\end{equation}
where $\mathscr{L}_M(g_{\mu\nu},\Psi_M)$ is the Lagrangian for the matter fields $\Psi_M$, and
$\mathscr{L}_{\text{scalar}}(g_{\mu\nu},\varphi) = (\sqrt{-g}/2)[-g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi - 2V(\varphi)]$,
where $\varphi$ is a homogeneous scalar field with a harmonic potential, $V(\varphi)=(\omega^2\varphi^2)/2$. The
Friedmann equation for the scale factor is
\begin{equation}
3H^2M_{pl}^2 =\rho_M + \rho_{\varphi}\,,
\label{scalar1}
\end{equation}
and the scalar field equation of motion is
\begin{equation}
\partial_{t}^2\varphi + 3H\partial_{t}\varphi + \omega^2 \varphi = 0\,.
\label{cosmologicalequationonescalar}
\end{equation}
Here $H = \dot{a}(t)/a(t)$ is the Hubble parameter, and $\rho_M$ and $\rho_{\varphi} =(\partial_t \varphi)^2/2 + V(\varphi)$ are
the energy densities of the matter fields and the scalar field, respectively. In the regime $H \ll \omega$, the friction term in
Eq.~(\ref{cosmologicalequationonescalar}) is subdominant, and the scalar field oscillates around the minimum of the potential
with an angular frequency $\omega$ according to $\varphi(t) \approx A(t) \cos(\omega t)$. Let $A(t)\propto 1/\bar{a}(t)^{\gamma}$.
Then, if we neglect $\ddot{\bar{a}}(t)$ and $\dot{\bar{a}}^2(t)$ terms and take $H \approx \bar{H} \equiv \dot{\bar{a}}(t)/\bar{a}(t)$, the expression for $\varphi(t)$
satisfies Eq.~(\ref{cosmologicalequationonescalar}) with $\gamma = 3/2$.
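To see how the value $\gamma = 3/2$ arises, substitute $\varphi = A(t)\cos(\omega t)$ with $\dot{A} = -\gamma \bar{H} A$ into Eq.~(\ref{cosmologicalequationonescalar}); neglecting the $\ddot{A}$ and $\bar{H}\dot{A}$ terms, the residual is
\begin{equation*}
\partial_{t}^2\varphi + 3H\partial_{t}\varphi + \omega^2 \varphi \approx
-\left( 2\dot{A} + 3\bar{H}A \right) \omega \sin(\omega t) = \left( 2\gamma - 3 \right) \bar{H} A\, \omega \sin(\omega t)\,,
\end{equation*}
which vanishes precisely when $\gamma = 3/2$.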
It follows that the time evolution of the scalar field can be expressed as
\begin{equation}
\varphi(t) = \varphi_i\left( \frac{\bar{a}_i}{\bar{a}} \right)^{3/2} \cos(\omega t)\,,
\label{timeevolutionscalar}
\end{equation}
where $\varphi_i$ is the oscillation amplitude when oscillations start at time $t_i$ and $\bar{a}_i \equiv \bar{a}(t_i)$.
The oscillating behavior of the scalar field causes the scale factor in this model also to have an oscillating behavior.
In Appendix~\ref{GRscalar} we calculate this scale factor in detail, and find
\begin{equation}
a(t)=\bar{a}(t) \left[ 1 - D_i \left(\frac{\bar{a}_i}{\bar{a}(t)}\right)^{3} \cos(2\omega t) \right]\,,
\label{scalefactorGRscalar}
\end{equation}
where $D_i\equiv (\varphi_i^2) / (16 M_{pl}^2)$ is the initial amplitude of the metric oscillations. Thus the scale factor oscillates
at twice the frequency of the scalar field.
If we consider this background scale factor to be that of flat spacetime and $D_i \ll 1$, Eq.~(\ref{scalefactorGRscalar}) takes
the form, to leading order, of Eq.~(\ref{scalefactor}) with $D_i = A_0$ and $ \omega = \omega_0 / 2$, where $\omega$ is
the mass of the scalar field $\varphi$.
The generation of gravitons in this model is governed by Eq.~(\ref{xkequation}), because we are working in standard general relativity.
\subsubsection{Modified Einstein's gravity: Quadratic terms in the curvature [$f(R)$ model]}
Oscillations of the scale factor shown by Eq.~(\ref{scalefactorinanexpandinguniverse}) can also arise from modifications
of Einstein's equation by terms quadratic in the curvature.
An example is $f(R)$ gravity, where the Einstein-Hilbert action is taken to be $S_H=\frac{1}{2}\, M_{pl}^2 \,\int d^4x \sqrt{-g} f(R)$,
with $f(R)$ being an analytic function of the Ricci scalar $R$. Expand $f(R)$ to second order as
\begin{equation}
f(R)=a_0 + a_1R+\frac{a_2}{2!}R^2 +\ldots \,
\label{ricciscalarexpansion}
\end{equation}
and set $a_0=0$ and $a_1=1$, so that $f(R) \approx R+(a_2/2)R^2$. The resulting modified vacuum Einstein's
equation and its trace equation are, respectively,
\begin{equation}
G_{\mu\nu} + a_2\, \left(RR_{\mu\nu}-\frac{1}{4}R^2g_{\mu\nu}+
g_{\mu\nu}\nabla^{\alpha}\nabla_{\alpha}R-\nabla_{\mu}\nabla_{\nu}R\right)=0 \, ,
\label{modifiedeinsteinequation}
\end{equation}
\begin{equation}
\left( \Box - \frac{1}{3a_2} \right)R = 0\,,
\label{tracemodifiedeinsteinequation}
\end{equation}
where $R_{\mu\nu}$ is the Ricci tensor, and $G_{\mu\nu} = R_{\mu\nu}-g_{\mu\nu}\,{R}/2$ is the Einstein tensor. The term proportional to $a_2$ in Eq.~(\ref{modifiedeinsteinequation}) need not arise from a modification of the gravitational action, but perhaps more plausibly, can also arise in semiclassical gravity where the renormalized expectation value of a quantum matter stress tensor acts as the source of gravity.
In either case, the modified Einstein's equation, Eq.~(\ref{modifiedeinsteinequation}), contains terms which are fourth
order in the metric and can cause flat spacetime to be unstable or to oscillate, as was discussed by
Horowitz and Wald~\cite{HorowitzWald:1978}. Let the spacetime metric be that of Eq.~(\ref{eq:metric}) with
$a(\eta) = 1+ \gamma$. To first order in $\gamma$, Eq.~(\ref{modifiedeinsteinequation}) becomes
\begin{equation}
-\partial_{\mu}\partial_{\nu}\gamma + (\Box\gamma)\eta_{\mu\nu}+3a_2\partial_{\mu}\partial_{\nu}(\Box\gamma)-3a_2\Box(\Box\gamma)\eta_{\mu\nu}=0 \, ,
\end{equation}
where $\Box=\partial^{\alpha}\partial_{\alpha}$. The spatially homogeneous solutions of this equation grow exponentially
in $\eta$ if $a_2<0$, so flat spacetime becomes unstable. If $a_2>0$, the scale factor oscillates, as described by
Eq.~(\ref{scalefactor}),
with an angular frequency of
\begin{equation}
\omega=\frac{1}{\sqrt{3a_2}}\;\;\;\;(a_2>0) \, .
\label{relationomegaa2}
\end{equation}
Note the peculiar fact that as $a_2$ becomes smaller, the frequency of oscillation $\omega$ becomes larger.
Laboratory tests of the inverse square law of gravity place an upper bound on $a_2$ of about~\cite{BerryGair:2011}
$a_2 \alt 2\times10^{-9}\text{ m}^2$. From Eq.~(\ref{relationomegaa2}), this bound leads to a lower bound on $\omega$ of
\begin{equation}
\omega \agt \omega_B = 1.3\times 10^4\text{ m}^{-1}=4\times10^{12}\text{ Hz}.
\label{omegalowerbound}
\end{equation}
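As a check of this number, Eq.~(\ref{relationomegaa2}) with $a_2 = 2\times10^{-9}\text{ m}^2$ gives
\begin{equation*}
\omega_B = \left( 3 \times 2\times10^{-9}\,{\rm m}^2 \right)^{-1/2} \approx 1.3\times10^{4}\text{ m}^{-1}\,,
\end{equation*}
which, upon multiplying by $c = 3\times10^{8}\,{\rm m/s}$ to convert to SI units, corresponds to approximately $4\times10^{12}\text{ Hz}$.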
The possible effect of these oscillations in causing radiation by charged particles was discussed by
Horowitz and Wald~\cite{HorowitzWald:1978}, and their possible role in causing enhanced quantum fluctuation effects
through noncancellation of anticorrelated fluctuations was treated in Ref.~\cite{ParkinsonFord:2014}. Our primary
interest is their effect on graviton creation, which will be treated in the next subsection. Quantum creation
of particles by metric oscillations plays a role in the Starobinsky model of inflation~\cite{Starobinsky:1980}, and
was discussed by Vilenkin~\cite{Vilenkin:1985}. More recent treatments of graviton creation in oscillating metrics
have been given in Ref.~\cite{Bag:2014} in the context of emergent cosmology and in Ref.~\cite{Ema:2015} in
a model of inflaton decay.
Gravitational waves in general relativity are associated with a massless spin two graviton field with two different polarizations.
However, the presence of higher order derivative terms in the modified Einstein's equation in Eq.~(\ref{modifiedeinsteinequation})
gives rise, in addition to the graviton field, to an extra scalar mode associated with a massive spin zero field.
To see this extra scalar mode, consider small deviations from a flat background of the form $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$,
where $|h_{\mu\nu} | \ll 1$. If we work to first order in the perturbation, then the linearized version of the trace field equation, Eq.~(\ref{tracemodifiedeinsteinequation}), predicts scalar modes satisfying a massive Klein-Gordon equation~ \cite{BerryGair:2011}
\begin{equation}
\left( \Box - \omega^2 \right)R^{(1)} = 0\,,
\end{equation}
where $R^{(1)} = \partial_{\mu}\partial_{\nu}h^{\mu\nu}-\eta^{\alpha\beta}\Box h_{\alpha\beta}$ is the linearized Ricci scalar to
first order. Thus, if we take this modified gravity theory seriously, we should expect massive scalar particle creation in addition to
graviton creation. We will focus solely upon graviton creation in the present paper. We expect the scalar particle creation
rate to be somewhat suppressed compared to that for gravitons due to the nonzero mass of the scalar particle and the two
polarization degrees of freedom of the gravitons. In any case, the observational constraints which
we will derive using gravitons alone may be regarded as lower bounds on the slightly tighter constraints which would
arise if the effects of scalar particles were also included.
Gravitational waves in a spatially flat FRW background can be analyzed by
considering a transverse and traceless perturbation of the metric.
Rewrite Eq.~(\ref{modifiedeinsteinequation}) and define an effective energy momentum tensor, $T^{\text{eff}}_{\mu\nu}$, by
\begin{equation}
G_{\mu\nu}=\frac{1}{M_{pl}^2}T^{\text{eff}}_{\mu\nu}\equiv \frac{a_2}{1+a_2R}\left(-\frac{1}{4}R^2g_{\mu\nu}-g_{\mu\nu}\nabla^{\alpha}\nabla_{\alpha}R+\nabla_{\mu}\nabla_{\nu}R\right)\,.
\end{equation}
One may express $T^{\text{eff}}_{\mu\nu}$ in terms of effective fluid quantities and describe the perturbations
of the above equation using a gauge invariant formulation. Let $H_{{\bf{k}}}({\bf{x}},t) \propto \text{exp}(i{\bf{k}}\cdot{\bf{x}}) u_{k}(t)$
be the graviton mode function. Then the evolution of $u_{k}(t)$ is given by~\cite{HawngNoh:1996, Hawng:1991}
\begin{equation}
\frac{1}{a^3(t)F(R)}\left[ a^3(t)F(R)\,\partial_{t}u_k(t)\right],_{t} + \frac{k^2}{a(t)^2}u_k(t)=0\,,
\label{101Hawng}
\end{equation}
where $F(R)\equiv d f(R)/dR$. Note that this equation differs from the general relativity case by an extra term,
$(F,_{t}/F)(\partial_{t} u_k)$, which comes from the nonzero anisotropic pressure part of the imperfect fluid $T_{\mu\nu}^{\text{eff}}$.
Defining $u_k(\eta) = v_k(\eta) /(a\sqrt{F}) $, Eq.~(\ref{101Hawng}) becomes
\begin{equation}
\frac{d^2 v_k(\eta)}{d\eta^2} + \left[k^2 -\frac{1}{a\sqrt{F}}\frac{d^2 (a\sqrt{F})}{d\eta^2}\right]v_k(\eta)=0 \,.
\label{115Hawng}
\end{equation}
In the limit $a_2 \rightarrow 0$, where $F \rightarrow 1$, we recover the known results from general relativity.
In this limit, after setting
$\chi_k(\eta) = v_k(\eta)$ and using $R(\eta)=6\ddot{a}(\eta)/a(\eta)^3$, Eq.~(\ref{115Hawng}) becomes Eq.~(\ref{xkequation}),
as expected.
The easiest way to analyze the behavior of oscillations in this model in a spatially flat FRW background is to take advantage of the
equivalence between $f(R)$ theories and scalar-tensor gravity. [For a review and discussion about $f(R)$ gravity and its equivalence
with the scalar-tensor theory for gravitation see Refs.~\cite{DeFeliceTsujikawa:2010,Faulkneretal:2006}.]
The usual approach to obtain a scalar-tensor gravity from $f(R)$ gravity is to perform a conformal transformation,
$\tilde{g}_{\mu\nu}=F(R)g_{\mu\nu}$ with $F(R)\equiv df(R)/dR$, and to introduce an auxiliary scalar field $\phi$ according
to $F(R(\phi))=\text{exp}[\sqrt{2/3}~(\phi/M_{pl})]$. In the new frame, or Einstein frame, the theory looks like conventional
general relativity plus a minimally coupled auxiliary scalar field $\phi$. It is, however, not identical to the GRSF model of the
previous subsection. Note that we use $\varphi$ to denote the scalar field in the GRSF model, and $\phi$ to denote that in the $f(R)$ model. The scalar field $\phi$ can oscillate around the minimum of its potential, which leads to oscillatory behavior
of the scale factor in the original frame, or Jordan frame, of the form
\begin{equation}
a(t) = \bar{a}(t) \left[ 1 - E_i\, \left(\frac{ \bar{a}_i }{\bar{a}(t)}\right)^{3/2} \, \cos(\omega t) \right]\,,
\label{scalefactorf(R)}
\end{equation}
where $E_i=(\phi_i)/(\sqrt{6}M_{pl})$ is the initial amplitude of metric oscillations, $\phi_i > 0$ is the initial value of the scalar field,
$\bar{a}(t)$ is the background scale factor time averaged over the oscillations, and $\bar{a}_i = \bar{a}(t_i)$ where $t_i$ is the time
at which oscillations start.
The equivalence between $f(R)$ gravity and scalar-tensor gravity and the derivation of Eq.~(\ref{scalefactorf(R)}) are discussed in
detail in Appendix~\ref{f(R)gravity}.
If we consider the background scale factor to be flat spacetime and $E_i \ll 1$, Eq.~(\ref{scalefactorf(R)}) takes the form, to
leading order, of Eq.~(\ref{scalefactor}) with $E_i = A_0$ and $\omega = \omega_0$, where $\omega$ is the mass of the scalar
field $\phi$. Thus in the $f(R)$ model, the scale factor and the scalar field oscillate at the same frequency.
We can express the scale factors of both models, Eqs.~(\ref{scalefactorGRscalar}) and (\ref{scalefactorf(R)}), as
\begin{equation}
a(t) = \bar{a}(t)[1 + \delta a(t)]\, ,
\label{eq:del-a}
\end{equation}
where $\delta a(t) \ll 1$ is the oscillatory part of the scale factor.
Figure~\ref{figure3} illustrates the behavior of this oscillatory part in both models in a radiation dominated universe.
Here $\bar{a}(t) \propto t^{1/2}$, so $\delta a(t) \propto \bar{a}(t)^{-3} \propto t^{-3/2}$ in the GRSF model and
$\delta a(t) \propto \bar{a}(t)^{-3/2} \propto t^{-3/4}$ in the $f(R)$ gravity model. Thus the oscillations are at twice the
frequency and decay more rapidly in the GRSF model as compared to the $f(R)$ gravity model.
\begin{figure}
\centering
\includegraphics[scale=0.7]{Figure3.png}
\caption{The oscillatory part of the scale factor in a radiation dominated universe
is illustrated for the GRSF model and for $f(R)$ gravity. Here $\delta a(t)_{\text{norm}}$
is $\delta a(t)$ expressed in units where $\delta a(t)_{\text{norm}} = 1$ at an initial time given by $\omega t =1$.}
\label{figure3}
\end{figure}
\subsection{Calculation of graviton creation caused by oscillations around flat spacetime}
We now consider graviton creation in both models using the oscillating
scale factor defined by Eq.~(\ref{scalefactor}), which describes small oscillations around flat spacetime. Note that even though oscillations are present in both scenarios, the gravitational wave equation, which governs the graviton creation, is different for each case.
\subsubsection{Graviton creation in standard general relativity plus a minimally coupled scalar field}
We analyze the asymptotic behavior of the number and energy density of created gravitons on time scales long compared
to the period of oscillation.
From Eqs.~(\ref{vetaequation}) and (\ref{ricciscalar}), the expression for $V(\eta)$ is
\begin{equation}
V(\eta)= \frac{1}{2C(\eta)^2}\left[ \ddot{C}(\eta)C(\eta)-\frac{1}{2}\, \dot{C}(\eta)^2 \right] \, .
\end{equation}
Substituting the expression for $a(\eta)$, Eq.~(\ref{scalefactor}), into this equation, we obtain, to first order in $A_0$,
\begin{equation}
V(\eta)= -\frac{A_0\,\omega_0^2\cos(\omega_0\eta)}{1+A_0\cos(\omega_0\eta)}\approx -A_0\,\omega_0^2\cos(\omega_0 \eta) \, .
\label{vetaapproximation}
\end{equation}
Here we treat the case of oscillations around flat spacetime, and hence set
$a(\eta) = 1$ in the prefactors to the integrals in Eqs.~(\ref{densitynumbercoordinatespace}) and
(\ref{energydensitycoordinatespace}). The graviton number density becomes
\begin{align}
n &\approx \frac{1}{8\pi}\times \int d\eta_1 \left[ -A_0\,\omega_0^2\cos(\omega_0 \eta_1) \right]^2 \, \\
&= \frac{A_0^2\,\omega_0^4}{8\pi}\times \int d\eta_1 \frac{1+\cos(2\omega_0 \eta_1)}{2} \, ,
\end{align}
where the integral on $\eta_1$ is to be taken over a long, but finite interval. On time scales long compared to $\omega_0^{-1}$, the average number density creation rate is the same in both conformal time $\eta$ and comoving time $t$, and is given by
\begin{equation}
\frac{dn}{d\eta}\Bigr|_{\text{GRSF}} = \frac{dn}{dt}\Bigr|_{\text{GRSF}} = \frac{A_0^2\,\omega_0^4}{16\pi} \, .
\label{eq:number-rate}
\end{equation}
If the oscillations last for a comoving time $t$, then the number density of created gravitons becomes, to leading
order in $A_0$,
\begin{equation}
n_{g}\sim\frac{A_0^2\,\omega_0^4t}{16\pi} \, .
\label{gravitonnumberdensity}
\end{equation}
Let $\lambda = 2\pi/\omega_0$ be the period of oscillation, and, in $c=1$ units, the wavelength associated with angular
frequency $\omega_0$. The number density creation rate of Eq.~(\ref{eq:number-rate}) can be expressed as
\begin{equation}
\frac{dn}{dt}\Bigr|_{\text{GRSF}}= \, \pi^3\, A_0^2\, \lambda^{-3} \, \lambda^{-1} \, .
\label{eq:number-rate2}
\end{equation}
This result tells us that an average of $\pi^3\, A_0^2$ gravitons are created in volume $ \lambda^{3}$ per
oscillation.
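Explicitly, multiplying the rate of Eq.~(\ref{eq:number-rate}) by one oscillation period $2\pi/\omega_0$ and by the volume $\lambda^3 = (2\pi/\omega_0)^3$ gives
\begin{equation*}
\frac{A_0^2\,\omega_0^4}{16\pi} \left( \frac{2\pi}{\omega_0} \right)^4 = \pi^3 A_0^2 \,.
\end{equation*}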
For the case of the graviton energy density, we use the approximate expression of $V(\eta)$ in Eq.~(\ref{vetaapproximation})
and calculate its derivative with respect to the conformal time,
\begin{equation}
\dot{V}(\eta) \approx A_0\,\omega_0^3\sin(\omega_0\eta) \, .
\end{equation}
Substituting this equation into Eq.~(\ref{energydensitycoordinatespace}), we now have for the graviton energy density
\begin{equation}
\rho \approx -\frac{A_0^2\,\omega_0^6}{16\pi^2}\int d\eta_1 \sin(\omega_0\eta_1) \int d\eta_2
\frac{\ln\left|( \eta_2-\eta_1 )\mu \right|^2}{2} \sin(\omega_0\eta_2) \, ,
\label{eq:rho}
\end{equation}
where the integrals on $\eta_1$ and $\eta_2$ are to be taken over long, but finite intervals. First, let us focus on the inner integral
\begin{equation}
I(\eta_1)= \int_{-T}^{T} d\eta_2 \frac{\ln\left|( \eta_2-\eta_1 )\mu \right|^2}{2} \sin(\omega_0 \eta_2) \, ,
\label{Iintegralprevious}
\end{equation}
where we examine the limit $T \rightarrow \infty$ for fixed $\eta_1$. With the change of variable $y=\omega_0 \eta_2$, we have
\begin{align}
I(\eta_1)&=\omega_0^{-1} {\rm Re}\int_{-T\omega_0}^{T\omega_0} dy \left\lbrace \frac{\ln\left[(y-\omega_0\eta_1)^2\right]}{2} +
\ln\left(\frac{\mu}{\omega_0}\right) \right\rbrace \sin(y) \, \\
&=\omega_0^{-1} {\rm Re} \int_{-T\omega_0}^{T\omega_0} dy \frac{\ln\left[(y-\omega_0\eta_1)^2\right]}{2} \sin(y) \,\\
&\sim -\frac{\pi}{\omega_0}\,\sin(\omega_0\eta_1) + O\left(\frac{1}{T\omega_0}\right) \, ,
\label{Iintegral}
\end{align}
where we have, in the second line, dropped the $\mu$ dependent part because it is proportional to
$\int_{-T\omega_0}^{T\omega_0} dy \sin(y) = 0$, and, in the third line, used the asymptotic values at $\pm\infty$ of the cosine
and sine integral functions. Note that the assumption of $\omega_0 \gg T^{-1}$ in Eq.~(\ref{Iintegral}) makes the integrand
in the expression for the graviton energy density, Eq.~(\ref{eq:rho}), approximately local.
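Equivalently, the $T \rightarrow \infty$ result can be read off from the distributional Fourier transform of the logarithm, $\int_{-\infty}^{\infty} \ln|u| \,{\rm e}^{iu}\, du = -\pi$ (up to a delta function term at zero frequency, which does not contribute here):
\begin{equation*}
\int_{-\infty}^{\infty} dy \, \ln\left|y-\omega_0\eta_1\right| \sin(y)
= {\rm Im}\left[ {\rm e}^{i\omega_0\eta_1} \int_{-\infty}^{\infty} du \, \ln|u| \, {\rm e}^{iu} \right]
= -\pi \sin(\omega_0\eta_1)\,,
\end{equation*}
in agreement with Eq.~(\ref{Iintegral}).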
Now, we have
\begin{equation}
\rho \sim \frac{A_0^2\,\omega_0^5}{16\pi} \int d\eta_1 \, \sin^2(\omega_0\eta_1) \, .
\end{equation}
Again, on long time scales, the average energy density creation rate is the same in both conformal and comoving time, so
\begin{equation}
\frac{d\rho}{d\eta}\Bigr|_{\text{GRSF}}=\frac{d\rho}{dt}\Bigr|_{\text{GRSF}}=\frac{A_0^2\,\omega_0^5}{32\pi} \, .
\label{eq:rate}
\end{equation}
The leading order for the graviton energy density after a time $t$ is
\begin{equation}
\rho_{g} \Bigr|_{\text{GRSF}}
\sim\frac{A_0^2\,\omega_0^5t}{32\pi} \, .
\label{oldgravitonenergydensity}
\end{equation}
Here we are ignoring any possible interference terms. That is, we assume that the energy density of gravitons created at
earlier times adds incoherently to that of gravitons created later.
Equations~(\ref{eq:number-rate}) and (\ref{eq:rate}) show that the graviton number density creation rate, as
well as the energy density creation rate, are proportional to the square of the metric oscillations $A_0$, and that the
mean graviton energy
is $\omega_0/2$. This latter result can be explained using the analogy with the spontaneous parametric down-conversion
in nonlinear optics, where a nonlinear crystal is used to split photon beams into pairs of photons. Here, in accordance with the
law of conservation of energy, the sum of the energies of the pair equals the energy of the original photon. Graviton production
in pairs with energy $\omega_0/2$ per particle has previously been found in the context of the Starobinsky model for inflation \cite{Vilenkin:1985}.
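This pairing can be seen directly from the Bogoliubov coefficient: for oscillations of very long duration, substituting the approximate potential of Eq.~(\ref{vetaapproximation}) into Eq.~(\ref{betacoefficients}) gives, for $k>0$,
\begin{equation*}
\beta_k = \frac{i A_0\, \omega_0^2}{2k} \int_{-\infty}^{\infty} {\rm e}^{-2ik\eta} \cos(\omega_0\eta)\, d\eta \propto \delta(2k-\omega_0)\,,
\end{equation*}
so that $|\beta_k|^2$ is sharply peaked at $k = \omega_0/2$; for a finite duration, the delta function is smeared over a width of order the inverse duration.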
\subsubsection{Graviton creation in $f(R)$ gravity}
Now we can obtain the number and energy density creation rates in $f(R)$ gravity from those in the GRSF model. Substituting the expression for $V(\eta)$, Eq.~(\ref{vetaapproximation}), into the gravitational wave equation in standard general relativity, Eq.~(\ref{xkequation}), we obtain for the GRSF model
\begin{equation}
\frac{d^2 v_k(\eta)}{d\eta^2} + \left[k^2 +A_0\omega_0^2\cos(\omega_0\eta)\right]v_k(\eta)=0 \,.
\label{GRcase}
\end{equation}
For the case of $f(R)$ gravity, using $F(R) = 1 + a_2R$ and working to second order in $A_0$ in the term $(a\sqrt{F}),_{\eta \eta}/(a\sqrt{F})$ in the modified gravitational wave equation, Eq.~(\ref{115Hawng}), we have
\begin{equation}
\frac{d^2 v_k(\eta)}{d\eta^2} + \left[k^2 -3A_0^2\omega_0^2\cos(2\omega_0\eta)\right]v_k(\eta)=0 \,.
\label{fRcase}
\end{equation}
The difference between Eqs.~(\ref{GRcase}) and (\ref{fRcase}) lies in the amplitude and frequency of their respective sinusoidal terms.
Note that the overall sign is not important and does not change the particle creation rate.
Making the replacements $\omega_0\rightarrow 2\omega_0$ and $A_0 \rightarrow (3/4)A_0^2$ in Eqs.~(\ref{eq:number-rate})
and (\ref{eq:rate}) for the GRSF model, we can obtain the corresponding results for $f(R)$ gravity:
\begin{equation}
\frac{dn}{dt} \Bigr|_{\text{f(R)}} = \frac{9A_0^4\omega_0^4}{16\pi} \,,
\label{eq:ratef(R)casenumber}
\end{equation}
\begin{equation}
\frac{d\rho}{dt} \Bigr|_{\text{f(R)}} = \frac{9A_0^4\omega_0^5}{16\pi}\,.
\label{eq:ratef(R)case}
\end{equation}
These last equations show that the graviton number density and energy density creation rates are proportional to the
fourth power of the metric oscillation amplitude, $A_0$, and that the mean graviton energy is $\omega_0$.
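As a consistency check, substituting $A_0 \rightarrow (3/4)A_0^2$ and $\omega_0 \rightarrow 2\omega_0$ into Eq.~(\ref{eq:rate}) indeed gives
\begin{equation*}
\frac{d\rho}{dt}\Bigr|_{\text{f(R)}} = \frac{1}{32\pi} \left( \frac{3}{4}A_0^2 \right)^2 (2\omega_0)^5 = \frac{9A_0^4\,\omega_0^5}{16\pi}\,,
\end{equation*}
in agreement with Eq.~(\ref{eq:ratef(R)case}).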
\section{Graviton energy density in an expanding universe}
\label{scalinggravitondensity}
Now we wish to extend the results for the energy density creation rate in flat spacetime obtained in both cases, Eqs.~(\ref{eq:rate})
and (\ref{eq:ratef(R)case}), to
an expanding universe. The general scale factor in a spatially flat FRW background is given by Eq.~(\ref{scalefactorinanexpandinguniverse}),
where the amplitude of the oscillations decreases with time.
So long as the expansion rate of the background is slow compared to the oscillation rate,
\begin{equation}
\frac{1}{\bar{a}(t)}\, \frac{d \bar{a} }{dt} \ll \omega \,,
\label{eq:expandrate}
\end{equation}
we may treat the background spacetime as approximately flat, and use the results of Eqs.~(\ref{eq:rate}) and (\ref{eq:ratef(R)case}) with
$A_0 \rightarrow A_{\text{eff}}(t)$. Recall that $A_{\text{eff}}(t) = D_i (\bar{a}_i/\bar{a})^3$ in the GRSF model and $A_{\text{eff}}(t) = E_i (\bar{a}_i/\bar{a})^{3/2}$ in the case of $f(R)$ gravity. Then the energy density creation rates in the expanding universe become
\begin{equation}
\frac{d\rho}{dt} \approx
J\omega_0^5 \left[\frac{\bar{a}_i}{\bar{a}(t)}\right]^6 \,,
\label{eq:ratewithdampingGR}
\end{equation}
where $J = (D_i^2)/(32 \pi)$ in the GRSF model and $J = (9E_i^4)/(16\pi)$ in $f(R)$ gravity. Note that $d\rho/dt \propto \bar{a}^{-6}$
in both cases.
In addition to the damping effect on the metric oscillations, the expansion causes redshifting and dilution of the created gravitons.
After creation, the graviton energy density scales as $1/\bar{a}^4(t)$.
Including both effects, the energy density at $t=t_0$ due to gravitons created in an interval $dt$ at an earlier time $t$ is
\begin{equation}
d\rho_g(t_0)= J\omega_0^5\, \left[ \frac{\bar{a}_i}{\bar{a}(t)}\right]^6 \left[ \frac{\bar{a}(t)}{\bar{a}_0}\right]^4 dt \,,
\label{creationrateGR}
\end{equation}
where $\bar{a}_0 = \bar{a}(t_0)$. If we take $t_0$ to be the present time, the gravitons in question were created at redshift $z$,
where $1+z = \bar{a}_0/ \bar{a}(t)$.
These expressions tell us that the present contribution of earlier graviton production is suppressed by a factor of $( 1+z)^{-4}$ due to
redshifting and increased by a factor proportional to $( 1+z)^{6}$ due to the greater oscillation amplitude at earlier times.
If we substitute into Eq.~(\ref{creationrateGR}) the values of $\omega_0$ and $J$ for each model, which depend upon the scalar field initial values,
either $\varphi_i$ or $\phi_i$, we find that
the energy density creation rate in the $f(R)$ gravity case is 4 times that in the GRSF model, if the scalar field masses
and initial values are the same. Specifically we have
\begin{equation}
d\rho_g(t_0)\Bigr|_{\text{GRSF}}= \frac{\varphi_i^4\,\omega^5}{256\pi M_{pl}^4}\, \left[ \frac{\bar{a}_i}{\bar{a}(t)}\right]^6 \left[ \frac{\bar{a}(t)}{\bar{a}_0}\right]^4 dt \,,
\label{creationrateGR2}
\end{equation}
\begin{equation}
d\rho_g(t_0)\Bigr|_{\text{f(R)}}= 4\times\frac{\phi_i^4\,\omega^5}{256\pi M_{pl}^4}\, \left[ \frac{\bar{a}_i}{\bar{a}(t)}\right]^6 \left[ \frac{\bar{a}(t)}{\bar{a}_0}\right]^4 dt \,.
\label{creationratef(R)2}
\end{equation}
If the oscillations start at time $t_i$, then the graviton energy density at time $t_0$ will be given by
\begin{equation}
\rho_g(t_0) = J\omega_0^5 \bar{a}_i^6 \,\int_{t_i}^{t_0} \bar{a}(t)^{-2} dt \, ,
\label{scalinggravitonenergydensityGR}
\end{equation}
with $\bar{a}_0 =1$. We assume that $t_i$ is after the end of inflation and that gravitons created at earlier times do not cause
interference with gravitons created at later times, as was assumed in Eq.~(\ref{oldgravitonenergydensity}).
Consider a model of the universe which is spatially flat and contains radiation (photons, neutrinos, and gravitons),
nonrelativistic matter (baryonic and nonbaryonic dark matter) and a cosmological constant associated with the dark energy.
The model is first radiation dominated, then nonrelativistic matter dominated, and is now entering into its dark energy
dominated phase. On time scales much longer than the period of oscillations, the Friedmann equation in this model of universe,
which is derived in detail in the Appendices, can be expressed as
\begin{equation}
3\bar{H}(t)^2 M_{pl}^2 \approx \frac{\rho_{r,0}}{\bar{a}^4(t)} +\frac{\rho_{m,0}}{\bar{a}^3(t)}+\rho_{\Lambda,0} + \frac{\omega^2\chi_i^2}{2}\left( \frac{\bar{a}_i}{\bar{a}} \right)^3\, ,
\label{Friedmannequation}
\end{equation}
where $\bar{H}(t)\equiv[\dot{\bar{a}}(t)]/[\bar{a}(t)]$ is the Hubble parameter as a function of the time-averaged scale factor,
$\bar{a}(t)$. Here $\rho_{r,0}$, $\rho_{m,0}$,
and $\rho_{\Lambda,0}$ are the radiation, nonrelativistic matter, and dark energy densities today, respectively, and the
scalar field energy density is $\rho_{\chi} \approx (\omega^2 \chi_i^2/2) (\bar{a}_i/\bar{a})^3$, where $\chi$ refers to either the
$\varphi$ scalar field in the GRSF model or the $\phi$ scalar field in $f(R)$ gravity.
Since we are interested in cosmological implications of the quantum graviton creation, we assume that oscillations of the scale
factor continue through the present epoch. This is equivalent to requiring
that the scalar field in each model continues in its oscillatory phase. Note that the scalar energy density in both cases scales
like nonrelativistic matter, and could grow to dominate the radiation energy density before the expected beginning
of the matter-dominated epoch.
In order to avoid this, the scalar energy density, $\rho_{\chi}(t)$, should always be less than that of the nonrelativistic matter,
$\rho_m(t)$, through the present epoch. Indeed, this conclusion is supported by observational data, as will be explained in detail
in Sec.~\ref{cosmlogicalconstraints}.
If we assume $\rho_{\chi} (t) < \rho_{m}(t)$, the Friedmann equation for both models, Eq.~(\ref{Friedmannequation}), becomes
\begin{equation}
\frac{\bar{H}(t)^2}{H_0^2} \approx \frac{\Omega_{r,0}}{\bar{a}^4(t)} +\frac{\Omega_{m,0}}{\bar{a}^3(t)}+\Omega_{\Lambda,0} \, ,
\label{Friedmannequationapproximation}
\end{equation}
where $\Omega_{r,0}=\rho_{r,0}/\rho_{c,0}$, $\Omega_{m,0}=\rho_{m,0}/\rho_{c,0}$, and
$\Omega_{\Lambda,0}=\rho_{\Lambda,0}/\rho_{c,0}$. Here $\rho_{c,0} = (3 H_0^2)/(8 \pi G)$ is the critical density today
and $G$ is Newton's constant. Then
$\Omega_0=\Omega_{r,0}+\Omega_{m,0}+\Omega_{\Lambda,0} \approx 1$ is the energy density parameter today.
We use the values $H_0\equiv 100~h_0\text{ km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$, $\Omega_{r,0} =4.15\times10^{-5}~h_0^{-2}$,
and $\rho_{c,0}=1.88\times10^{-26}~h_0^2\text{ kg m}^{-3}$. We take $h_0 = 0.673$ and $\Omega_{m,0} = 0.315$ from the
Planck temperature power spectrum data including WMAP polarization at low multipoles~\cite{Planckdata:2013}.
Substituting Eq.~(\ref{Friedmannequationapproximation}) into Eq.~(\ref{scalinggravitonenergydensityGR}), the graviton energy density today is found
to be
\begin{equation}
\rho_g(t_0) = \frac{J\omega_0^5 \bar{a}_i^6}{H_0}\, \int_{\bar{a}_i}^{1} \left( \frac{\bar{a}^{-1}}{\sqrt{{\Omega_{r,0}}+
{\Omega_{m,0}}\,{\bar{a}}+\Omega_{\Lambda,0}\, \bar{a}^4}}\right) d\bar{a} \, .
\label{rho_g(t_0)G(R)}
\end{equation}
This integral cannot be expressed in terms of elementary functions and must be calculated numerically.
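As an illustration, the following minimal Python sketch (assuming the parameter values quoted above, and a hypothetical lower limit $\bar{a}_i$ corresponding to $T_i = 1$ GeV) evaluates the dimensionless integral in Eq.~(\ref{rho_g(t_0)G(R)}) after the substitution $\bar{a} = {\rm e}^{u}$, which tames the $1/\bar{a}$ behavior at small $\bar{a}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Planck 2013 parameter values quoted in the text
h0 = 0.673
Omega_r = 4.15e-5 / h0**2
Omega_m = 0.315
Omega_L = 1.0 - Omega_r - Omega_m   # spatial flatness

def integrand_log(u):
    # with a = exp(u), da = a du, so da/(a*sqrt(...)) -> du/sqrt(...)
    a = np.exp(u)
    return 1.0 / np.sqrt(Omega_r + Omega_m * a + Omega_L * a**4)

# hypothetical initial scale factor for T_i = 1 GeV (about 1.16e13 K)
a_i = 3.0 / 1.16e13

value, err = quad(integrand_log, np.log(a_i), 0.0, limit=200)
print(value)   # roughly 2e3
\end{verbatim}
The result is of order $2\times10^{3}$, consistent with the estimate quoted in the next section.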
The graviton energy density during the radiation dominated epoch can be calculated more easily. At some time $t_{r} \alt t_{rm}$,
where $t_{rm}$ is the time of radiation-matter equality, the scale factor can be approximated as
\begin{equation}
\bar{a}(t)\approx{({2\sqrt{\Omega_{r,0}}\, H_0\, t})^{1/2}} \propto \sqrt{t} \,.
\label{eq:root-t}
\end{equation}
This is a solution of Eq.~(\ref{Friedmannequationapproximation}) when the nonrelativistic matter and dark energy terms may be neglected
compared to the radiation term, and the latter term is assumed
to come entirely from photons and neutrinos. If other relativistic particles are present, then the constant of proportionality increases
by a factor of the fourth root of the number of types of particles present. This factor will be assumed to be of order
one, and will be ignored in our rough estimates.
As a result, the graviton energy density at time $t_r \gg t_i$ is given by
\begin{equation}
\rho_g(t_r) = J\omega_0^5\int^{tr}_{t_i}\left[ \frac{\bar{a}_i}{\bar{a}(t)} \right]^6 \left[ \frac{\bar{a}(t)}{\bar{a}(t_r)} \right]^4dt \approx
J\omega_0^5\left( \frac{t_i^{3}}{t_r^{2}}\right)\ln({t_r/t_i})\,.
\label{rho_g(t^*)GR}
\end{equation}
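In evaluating this integral we have used $\bar{a}(t) \propto \sqrt{t}$ from Eq.~(\ref{eq:root-t}), so that the integrand reduces to
\begin{equation*}
\left[ \frac{\bar{a}_i}{\bar{a}(t)} \right]^6 \left[ \frac{\bar{a}(t)}{\bar{a}(t_r)} \right]^4
= \left( \frac{t_i}{t} \right)^3 \left( \frac{t}{t_r} \right)^2 = \frac{t_i^3}{t\, t_r^2}\,,
\end{equation*}
and the remaining integral of $1/t$ produces the logarithm.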
Here we are assuming that the oscillations begin during the radiation dominated era. Clearly some significant event is needed to cause the oscillations
to begin and to determine the initial amplitude. Two possibilities are the reheating at the end of inflation, or a subsequent
phase transition. Note that the graviton energy density in Eq.~(\ref{rho_g(t^*)GR}) vanishes in the limit $t_i \rightarrow t_r$, as expected.
Thus far we have not discussed the decay of the scalar fields caused by direct coupling with other fields, such as radiation or
nonrelativistic matter, or by quantum production of particles other than gravitons. Even though in the GRSF model we have
not considered a direct coupling between the scalar field and the matter fields, the field $\varphi$ couples with those fields through
gravity by means of the scale factor (the oscillatory part of the scale factor is proportional to $\overline{\varphi^2}$). This coupling
results in quantum particle production not only of gravitons (which behave as a pair of minimally coupled massless
scalar fields) but also, for instance, of massive scalar particles, vector bosons and fermions~\cite{Ema:2015}. In any case, if we
are interested in values for $\omega$ below the masses of these particles, we expect that these processes are
mass suppressed. We have a similar scenario for $f(R)$ gravity, with the difference that in this theory there is a direct coupling
between the auxiliary scalar field $\phi$ and the matter fields.
However this coupling is suppressed in the regime in which we are working, where $E_i \propto (\phi_i/M_{pl}) \ll 1$.
\section{Cosmological constraints on the oscillation amplitude of the scale factor}
\label{cosmlogicalconstraints}
In this section, we explore three cosmological constraints on the graviton creation. The first two are observational
constraints on the effects of the created gravitons,
one from big bang nucleosynthesis (BBN) and another from observational Hubble parameter measurements. The third
comes from an observational constraint on scalar field energy density, which in the context of the specific models we
treat, implies a strong constraint on the amplitude of oscillations.
All of these constraints will depend on the value for $\omega$ considered.
In $f(R)$ gravity the angular frequency of oscillations is bounded from below, $\omega \geq \omega_B$. There is no analogous
bound in the GRSF model, but in both models we will consider a range of angular frequencies beginning at $\omega_B$
and extending upward by several orders of magnitude. The upper bound on $\omega$ could be as high as the Planck
frequency, $10^{31}\, \omega_B$, where our semiclassical approach is expected to break down. However, we will be
primarily concerned with more typical particle physics energy scales.
\subsection{Big bang nucleosynthesis constraint}
Commonly, the BBN bound is expressed as an equivalent number of extra neutrino species,
$\Delta N_{\nu}$. (For a review, see the big bang cosmology and big bang nucleosynthesis reviews in Ref.~\cite{Oliveetal:2014}.)
In the early universe, relativistic particles dominate the total energy density. For this reason, at $T=1$ MeV (before
electron-positron annihilation), the total energy density is $\rho_{BBN} = N(T)(\pi^2/30) T^4$, where $N(T)$ is the
equivalent number of degrees of freedom at temperature $T$, approximately given by the contribution of photons, electrons,
positrons and neutrinos. Any additional contribution at that time to the total energy density from a component with a radiation-like
equation of state can be described as an equivalent number of extra neutrinos. Thus, the graviton energy density $\rho_{gBBN}$
at $T=1$ MeV is
\begin{equation}
\rho_{gBBN} = \frac{7}{8}\Delta N_{\nu} \, \rho_{\gamma} \, ,
\label{gravitonenergydensityBBN}
\end{equation}
where $\rho_{\gamma}=\left[(2\pi^2)/(30)\right]T^4$ refers to the photon energy density.
It is possible to find in the literature several constraints on $\Delta N_{\nu}$, which depend upon the specific light element
abundances considered, from $\Delta N_{\nu} \leq 0.2$ to $\Delta N_{\nu} \leq 1$~\cite{Giovannini:2010}. The constraint
can be relaxed in some nonstandard nucleosynthesis scenarios~\cite{Giovannini:2002}. For our purposes, we take
$\Delta N_{\nu} \approx 1$. Then, using Eq.~(\ref{rho_g(t^*)GR}) for the graviton energy density in the radiation-dominated
epoch, we have
\begin{equation}
\rho_g(t_r) \approx J\omega_0^5 \left(\frac{t_i^{3}}{t_r^{2}}\right)\ln{(t_r/t_i)} \leq \frac{7}{8} \rho_{\gamma} \, ,\\
\label{gravitonenergydensityBBNboundGR}
\end{equation}
where $t_r$ refers to the time when $T=1$ MeV, which is approximately one second.
Equation~(\ref{gravitonenergydensityBBNboundGR}) gives a bound on $D_i$, in the GRSF model, and $E_i$, in $f(R)$ gravity, for a given $\omega$ of
\begin{equation}
D_i\Bigr|_{\text{GRSF}}\, \alt 10^{-5}\,
\left(\frac{10^{-6} \,{\rm s}}{t_i}\right)^\frac{3}{2}\, \left(\frac{10^{10}\, \omega_B}{\omega}\right)^\frac{5}{2}\,
\left[\ln{(1~ \text{s}/t_i)}\right]^{-1/2} ,
\label{eq:BBN-boundGR}
\end{equation}
\begin{equation}
E_i\Bigr|_{\text{f(R)}}\, \alt 3\times 10^{-3}\,
\left(\frac{10^{-6} \,{\rm s}}{t_i}\right)^\frac{3}{4}\, \left(\frac{ 10^{10}\, \omega_B}{\omega}\right)^\frac{5}{4}\,
\left[\ln{(1~ \text{s}/t_i)}\right]^{-1/4} \, .
\label{eq:BBN-boundfR}
\end{equation}
Recall that the initial oscillation amplitudes, $D_i$ and $E_i$, need to be small for the consistency of our treatment. This
condition can be fulfilled if $\omega \agt 10^{10}\, \omega_B \approx 26\, {\rm MeV}$.
Note that an initial time $t_i = 10^{-6} \,{\rm s}$ corresponds to a temperature of $T_i \approx 1\, {\rm GeV}$.
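This correspondence can be checked from Eq.~(\ref{eq:root-t}): with $\bar{a}_i \approx (3\,{\rm K})/T_i \approx 2.6\times10^{-13}$ for $T_i = 1\,{\rm GeV}$, one finds
\begin{equation*}
t_i = \frac{\bar{a}_i^2}{2\sqrt{\Omega_{r,0}}\, H_0} \approx 10^{-6}\,{\rm s}\,.
\end{equation*}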
\subsection{Constraint from the expansion rate of the universe}
Observational data on the late universe can be used to obtain an upper bound on the present density of gravitons.
Rewriting the scale factor as a function of the redshift in Eq.~(\ref{Friedmannequationapproximation}) using
$\bar{a}(z)=1/(1+z)$, we obtain
\begin{equation}
\bar{H}(z)=H_0\left[\Omega_{r,0}(1+z)^4 +\Omega_{m,0}(1+z)^3+ (1-\Omega_{r,0}-\Omega_{m,0})\right]^{1/2} \, ,
\label{H(z)}
\end{equation}
which shows the dependence of $\bar{H}(z)$ on the cosmological parameters. Taking into account graviton production,
Eq.~(\ref{H(z)}) becomes
\begin{equation}
\bar{H}(z)=H_0\left[(\Omega_{r,0}+\Omega_{g,0})(1+z)^4 +\Omega_{m,0}(1+z)^3+ (1-\Omega_{r,0}-\Omega_{m,0}-\Omega_{g,0})\right]^{1/2} \, ,
\label{H(z)plusgraviton}
\end{equation}
where $\Omega_{g,0}$ is the graviton energy density parameter today.
We use a sample of 18 observational measurements of the Hubble parameter in the range $0.09 \leq z \leq 1.75$, with their respective
standard errors, reported in Table 1 of Moresco {\it et al.}~\cite{Morescoetal:2012}. The measurements are obtained
from passively evolving galaxies, high-quality spectra of red-envelope galaxies in galaxy clusters, and the spectroscopic evolution of early-type
galaxies. The least-squares method is applied by minimizing the reduced chi-square, i.e., the error-weighted sum of squared residuals $\chi^2_\nu$, according to
\begin{equation}
\chi^2_{\nu}(\Omega_{g,0})= \frac{1}{\nu}\sum_{i=1}^{18}\frac{[H^{obs}(z_i) - \bar{H}(z_i;\Omega_{g,0})]^2}{\sigma_{H^{obs}(z_i)}^2} \, ,
\label{chisquare}
\end{equation}
where $H^{obs}(z_i)$ is the $i$th observational value of $H(z)$ at redshift $z_i$,
$\bar{H}(z_i;\Omega_{g,0})$ is the theoretical $i$th value of $H(z)$ obtained by means of Eq.~(\ref{H(z)plusgraviton}) at redshift $z_i$,
$\sigma_{H^{obs}(z_i)}$ is the error associated
with the $i$th observational value of $H(z)$ at redshift $z_i$, and $\nu$ is the number of degrees of freedom (18 observational data
points minus one parameter to be adjusted, i.e., $\Omega_{g,0}$). The standard errors, $\sigma_{\Omega_{g,0}}(\Omega^*_{g,0})$ and $\sigma_{H}(z;\Omega^*_{g,0})$, associated with the graviton energy density today and the fitted function $\bar{H}(z)$, respectively, are calculated
following standard procedures~\cite{Richter:1995}. Here we have defined $\Omega^*_{g,0}$ as the value of the graviton energy density parameter today which minimizes $\chi^2_\nu$.
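A minimal Python sketch of this one-parameter fit is shown below; the arrays {\tt z}, {\tt Hobs}, and {\tt sigma} are synthetic stand-ins, and would have to be replaced by the actual 18 measurements from Table 1 of Ref.~\cite{Morescoetal:2012}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

H0, h0 = 67.3, 0.673                  # km/s/Mpc
Om_r = 4.15e-5 / h0**2
Om_m = 0.315

def H_model(z, Og):
    # Hubble parameter including the graviton density parameter Og
    return H0 * np.sqrt((Om_r + Og) * (1 + z)**4 + Om_m * (1 + z)**3
                        + (1.0 - Om_r - Om_m - Og))

# synthetic stand-ins for the 18 measurements of Moresco et al. (2012)
z = np.linspace(0.09, 1.75, 18)
rng = np.random.default_rng(0)
Hobs = H_model(z, 0.0) + rng.normal(0.0, 5.0, z.size)
sigma = np.full(z.size, 5.0)

def chi2_nu(Og):
    nu = z.size - 1                    # one fitted parameter
    return np.sum(((Hobs - H_model(z, Og)) / sigma)**2) / nu

res = minimize_scalar(chi2_nu, bounds=(0.0, 0.10), method='bounded')
print(res.x, chi2_nu(res.x))
\end{verbatim}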
\begin{figure}
\centering
\includegraphics[scale=0.7]{Figure1.png}
\caption{Observational data for $H(z)$ and their errors (see Ref.~\cite{Morescoetal:2012}) are plotted. The solid line gives
the fiducial cosmology, which assumes no gravitons, with $h_0 = 0.673$, $\Omega_{r,0}=4.15\times10^{-5}~h_0^{-2}$,
$\Omega_{m,0}=0.315$, and $\Omega_{\Lambda}=1-(\Omega_{r,0}+\Omega_{m,0})$. The dashed line is a best fit using
the least-squares method with nonzero $\Omega_{g,0}$. The boundaries of the regions associated with confidence levels
of $\sigma_{H}(z;\Omega^*_{g,0})$ and of $3 \sigma_{H}(z;\Omega^*_{g,0})$ are also illustrated.}
\label{figure1}
\end{figure}
Figure~\ref{figure1} shows the fiducial cosmology without gravitons, obtained from Eq.~(\ref{H(z)}), and the best fit with a
nonzero value for the graviton energy density parameter today, obtained from Eq.~(\ref{H(z)plusgraviton}). Including
gravitons in the evolution of the Hubble parameter is equivalent to increasing the radiation energy density parameter.
This produces
an increase of the Hubble parameter for a given $z$ in comparison to the fiducial cosmology.
The best fit is found to be $\Omega^*_{g,0} = 0.011 \pm 0.015$ (for 1 standard deviation) with $\chi^2_\nu = 0.75$.
The value of $\chi^2_\nu$ is reasonably close to 1, indicating that the fit can be considered meaningful. (See, for example,
Ref.~\cite{Richter:1995}.)
At the level of two standard deviations, we obtain an upper bound for the graviton energy density parameter today of
$\Omega^*_{g,0} \leq 0.04$.
Because the graviton energy density increases as the comoving time increases, it is in principle possible to obtain constraints
on the oscillation amplitude for each case:
\begin{equation}
\frac{\rho_{g}(t_0)}{\rho_{c,0}} = \Omega^*_{g,0} \lesssim 0.04 \, .
\label{newconstraint}
\end{equation}
Using Eq.~(\ref{rho_g(t_0)G(R)}) for $\rho_{g}(t_0)$, the constraints on the oscillation amplitudes may be expressed as
\begin{equation}
D_i\Bigr|_{\text{GRSF}} \alt 10^{-5} \,
\left(\frac{T_i}{1 \, {\rm GeV}}\right)^3 \, \left(\frac{ 10^{10}\, \omega_B}{\omega}\right)^\frac{5}{2}\,,
\label{constraintlate1}
\end{equation}
\begin{equation}
E_i\Bigr|_{\text{f(R)}} \alt 10^{-2}\,
\left(\frac{T_i}{1 \, {\rm GeV}}\right)^\frac{3}{2} \,\left(\frac{10^{10}\, \omega_B}{\omega}\right)^\frac{5}{4}\, .
\label{constraintlate2}
\end{equation}
Here we have used $\bar{a}_i \approx (3\,{\rm K})/T_i$, where $T_i$ is the initial energy scale, for the factor $\bar{a}_i^6$ in
Eq.~(\ref{rho_g(t_0)G(R)}). Moreover, since the definite integral in this equation
is slowly varying with respect to its lower limit,
$\bar{a}_i$, we have evaluated it at $T_i$ = 1 GeV, where its value is about $2 \times 10^3$.
Note that these constraints from late time dynamics of the universe are comparable to those obtained from nucleosynthesis
in Eqs.~(\ref{eq:BBN-boundGR}) and (\ref{eq:BBN-boundfR}). There seem to be competing effects which nearly cancel
one another. Nucleosynthesis occurs earlier in the history of the universe when the characteristic amplitude of the
oscillations is greater and there has been less redshifting of the created gravitons. However, in the late universe, there has
been far more time for graviton creation.
\subsection{Constraints on the scalar field energy density}
Now we consider a constraint on the scalar energy density, $ \rho_{\chi}$, and its implications. Data from the dynamics of galaxy
clusters~\cite{Bahcalletal:2014} lead to an estimate of the current matter density of $\Omega_{m,0} = 0.26$. This estimate
includes all matter, including dark matter, which is localized on the scale of a cluster of galaxies, but would not include
a homogeneous background density, such as that due to a scalar field. CMB data from the Planck collaboration
2013~\cite{Planckdata:2013} leads to a slightly larger value of $\Omega_{m,0} = 0.315$. Given that about 70$\%$ of the
current energy density is dark energy, the scalar field energy density must be less than the matter density,
\begin{equation}
\rho_{\chi}(t) < \rho_m (t) \,.
\label{eq:scalar-density}
\end{equation}
Note that this is also a constraint on $\chi_i$, the initial value of the scalar field. Because
$\rho_{m} \approx \rho_{m,0}/\bar{a}^3 \approx \rho_{m,0}(T/T_0)^3$ and
$\rho_{\chi} \approx (\omega^2\chi_i^2/2)(\bar{a}_i/\bar{a})^3 \approx (\omega^2\chi_i^2/2)(T/T_i)^3 $, we have
\begin{equation}
\frac{\chi_i}{M_{pl}} \lesssim 10^{-11}~\left(\frac{T_i}{1~\text{GeV}}\right)^{3/2}\left(\frac{\omega_B} {\omega} \right)\,,
\label{firstassumption}
\end{equation}
where $T_i$ and $T_0$ are the temperature at time $t_i$ and the current temperature of the cosmic microwave background,
respectively.
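For clarity, the algebra behind this bound is elementary: inserting the two scalings into Eq.~(\ref{eq:scalar-density}), the common factor $T^{3}$ cancels and
\begin{equation*}
\frac{\omega ^{2}\chi _{i}^{2}}{2\,T_{i}^{3}} \lesssim \frac{\rho _{m,0}}{T_{0}^{3}}
\qquad \Longrightarrow \qquad
\chi _{i} \lesssim \frac{\sqrt{2\rho _{m,0}}}{\omega }\left( \frac{T_{i}}{T_{0}}\right)^{3/2},
\end{equation*}
and Eq.~(\ref{firstassumption}) follows upon inserting numerical values for $\rho_{m,0}$, $T_{0}$ and $\omega_{B}$.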
This constraint on the scalar energy density leads to a very strong constraint on the initial amplitude of
oscillations in both models:
\begin{equation}
D_i\Bigr|_{\text{GRSF}} \alt 10^{-23} \,
\left(\frac{T_i}{1 \, {\rm GeV}}\right)^3 \, \left(\frac{ \omega_B}{\omega}\right)^2\,,
\label{strongboundDi}
\end{equation}
\begin{equation}
E_i\Bigr|_{\text{f(R)}} \alt 10^{-12}\,
\left(\frac{T_i}{1 \, {\rm GeV}}\right)^\frac{3}{2} \,\left(\frac{ \omega_B}{\omega}\right)\, .
\label{strongboundEi}
\end{equation}
These constraints are much stronger than the constraints which come directly from the observable effects of the created
gravitons. This is presumably related to the weakness of the graviton creation process. However, the scalar field
energy density constraint is more model dependent, and comes from the key role played by scalar fields in both of
the specific models treated here.
\section{Quantum decoherence induced by the graviton energy density}
\label{quantumdecoherenceinducedbygravitonbath}
A realistic quantum system cannot be considered isolated, but is in interaction with the surrounding environment. This
interaction can induce in the system a loss of quantum coherence, namely, a local suppression of interference between
two different states~\cite{Schlosshauer:2004}. The environment can refer to ordinary matter, quantum fields, or gravitational
fields. For a recent review and discussion about quantum decoherence and gravitational interactions,
see Ref.~\cite{AnastopoulosandHu:2013}.
De Lorenci and Ford~\cite{LorenciandFord:2015} studied the decoherence rate of quantum systems induced by a bath of long wavelength gravitons. The basic mechanism arises from quantum geometry fluctuations produced by the graviton bath, which
in turn produce length and hence phase fluctuations in a quantum system. These phase fluctuations lead to a loss of contrast
in interference patterns, and hence decoherence by dephasing.
We will apply these results to quantum systems in a bath of graviton created by the mechanism discussed in the GRSF model.
First, we summarize the essential results of Ref.~\cite{LorenciandFord:2015}.
Adopt the transverse-tracefree gauge and
define $h$ as the root-mean-square fractional length fluctuations in a particular direction, such as the $x$-direction by
\begin{equation}
h^2=\langle (h_{xx})^2 \rangle = (1/9)\langle h^{TT}_{ij}h^{ij}_{TT}\rangle \,.
\end{equation}
We can reexpress $h$ as a function of the graviton energy density as
\begin{equation}
h = \frac{4}{3}\sqrt{2\pi}\frac{\lambda_g \sqrt{\rho_g}}{E_p} \, ,
\label{expressionforh}
\end{equation}
where $\lambda_g = 2\pi/\omega_g$ is the characteristic graviton wavelength and $E_p$ is the Planck energy.
Suppose we have a quantum system in which $\Delta \omega$ is the energy difference between the interfering states.
The decoherence time $t_d$ induced by length fluctuations is approximately $t_d \approx 1/(h \Delta \omega)$.
If the graviton wavelength is large compared to the geometric size of the quantum system,
the decoherence time may be written as
\begin{equation}
t_d = \frac{3}{4\sqrt{2\pi}}\frac{E_p}{\lambda_g \sqrt{\rho_g}\Delta\omega}\, .
\label{generalformuladecoherencetime}
\end{equation}
Note that decoherence by the effects of a graviton bath seems to be compatible with the assumption, stated after Eq.~(\ref{oldgravitonenergydensity}), that the graviton energy density accumulates incoherently. A thermal bath of gravitons is maximally incoherent, but is expected to produce length and hence phase fluctuations. The key issue is that the typical graviton wavelength be larger than the size of the quantum system.
In our case, the graviton energy density may be taken to be the present value given by Eq.~(\ref{rho_g(t_0)G(R)}), and $\lambda_g$
is understood to be an average wavelength at the present time. For the purpose of an estimate, we take the energy density
to be at the upper bound of $4\%$ of the total energy density of the universe found in Eq.~(\ref{newconstraint}). We also take
$\lambda_g = 2\pi/\omega_g \approx 4\pi/\omega_0$. That is, we use the GRSF model, where
the gravitons are created with an angular
frequency of $\omega_0/2$, and we are assuming that the present graviton bath is composed of gravitons which have not been
significantly redshifted since their creation. This is reasonable, given that in the time that a given graviton's energy has been
redshifted by a factor of $1/2$, its contribution to the energy density has decreased by a factor of $1/16$.
With these assumptions, we obtain a lower bound on the decoherence time of
\begin{equation}
t_d \agt 10^{7} \text{ yr}\, \left(\frac{\omega}{\omega_B}\right) \left( \frac{1 \text{ eV}}{\Delta \omega} \right)\, ,
\label{eq:td-bound}
\end{equation}
where we have associated the mass of the scalar field $\varphi$ with the angular frequency of oscillations using
$\omega = \omega_0/2$.
For $\omega \approx \omega_B$, this lower bound holds for quantum systems with a geometric size small compared to $\lambda_{g} \approx 0.05 \text{ cm}$.
This decoherence time is quite long unless the energy difference $\Delta \omega$ is large.
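As a rough cross-check of Eq.~(\ref{eq:td-bound}), the estimate can be reproduced numerically. The inputs below are order-of-magnitude values in natural units assumed for this sketch only (in particular the approximate critical density and the unit conversions):
\begin{verbatim}
import math

E_p     = 1.22e19            # Planck energy [GeV]
rho_c0  = 4.0e-47            # critical density today [GeV^4], approximate
rho_g   = 0.04 * rho_c0      # graviton density at the 4% upper bound
lam_g   = 0.05 * 5.07e13     # lambda_g ~ 0.05 cm in GeV^-1 (omega ~ omega_B)
dE      = 1.0e-9             # Delta omega = 1 eV in GeV

t_d = 3.0/(4.0*math.sqrt(2.0*math.pi)) * E_p/(lam_g*math.sqrt(rho_g)*dE)
years = t_d * 6.58e-25 / 3.15e7   # GeV^-1 -> s -> yr
print(f"t_d ~ {years:.0e} yr")    # ~1e7 yr, consistent with Eq. (eq:td-bound)
\end{verbatim}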
\section{Summary and discussion}
\label{summarydiscussion}
We have studied quantum creation of gravitons by small scale factor oscillations in a spatially flat FRW background.
We use the perturbative method
of Birrell and Davies~\cite{BD80,BirrellDavies:1982}, which is an expansion in powers of a parameter describing the deviation
from conformal coupling. In our case,
the effective expansion parameter has the value $1/6$, which should be small enough for order of magnitude estimates, but
not for precise results.
Sinusoidal scale factor oscillations can arise in various cosmological models and we consider two examples. The first consists of
the standard matter fields in general relativity plus the addition of a minimally coupled scalar field, $\varphi(x)$, in a harmonic
potential (GRSF model). The second model involves a modification of Einstein gravity in which a term proportional to the
square of the Ricci scalar is added to the gravitational action [$f(R)$ gravity model].
The same modified Einstein equation also arises, perhaps more naturally, in semiclassical gravity theory, where the classical
gravitational field is coupled to the renormalized expectation value of a quantum matter stress tensor. The $f(R)$ gravity model
is equivalent to a scalar-tensor theory of gravity, and the scale factor oscillations may be described in terms of oscillations
of the scalar field in the scalar-tensor theory. Laboratory tests of the
inverse square law for gravity give an upper bound on the coefficient of the $R^2$ term in $f(R)$ gravity, which leads to a
lower bound, $\omega_B$, on the oscillation frequency $\omega$. By contrast, in the GRSF model the value of $\omega$ is
not bounded from below. In both models the amplitude of oscillations is a free
parameter and presumably determined by initial conditions. In the GRSF model, the quantum graviton production is ruled by the
standard gravitational wave equation from general relativity, but in $f(R)$ gravity, the graviton creation is ruled by a modification of
this equation. This leads to different expressions for the graviton creation rates in the two models. In both models, the amplitude
of the scale factor oscillations decays as the universe expands. If $\bar{a}(t)$ is the background scale factor, time averaged
over oscillations, then the amplitude decreases as $\bar{a}(t)^{-3}$ in the GRSF model, and as $\bar{a}(t)^{-3/2}$ in the $f(R)$
model.
We first obtained expressions for the number and energy
density creation rates on an average background of flat spacetime in both models, Eqs.~(\ref{eq:number-rate}),
(\ref{eq:rate}), (\ref{eq:ratef(R)casenumber}), and (\ref{eq:ratef(R)case}). We then extended our analysis to an expanding universe
by including two effects: damping of the metric oscillations and density dilution and redshifting of the created gravitons.
The results show the differences between the two models with respect to the dependence upon initial amplitude, angular frequency, and damping rate
of the oscillations. If the mass of the scalar field in each model is $\omega$, the angular frequency of the metric oscillations is
2$\omega$ in the GRSF model, and $\omega$ in $f(R)$ gravity. The angular frequency of the created gravitons is
$\omega$ in both models. The initial amplitude of oscillations is expected to be
determined by processes in the early universe, such as at reheating or a subsequent phase transition.
We assumed the matter fields in both models to be the usual perfect fluids associated with radiation, nonrelativistic matter and
a cosmological constant. We examined two cosmological constraints on the energy density of the created gravitons, and hence
on the initial amplitude of the oscillations for fixed $\omega$. The first constraint comes from big bang nucleosynthesis and the second
from data on the expansion rate of the late universe. Both constraints lead to similar bounds on the initial metric oscillation
amplitudes. These bounds become meaningful if $\omega \agt 26 \,{\rm MeV}$. The expansion rate data indicate that gravitons
cannot comprise more than about $4\%$ of the present mass density of the universe. We also used data from the dynamics
of galaxy clusters and the cosmic microwave background to argue that the energy density of the scalar fields, which appear in both
of our models, must be small compared to the current density of nonrelativistic matter. This in turn places strong constraints on
the amplitudes of the scalar field oscillations, and hence on the amplitudes of the scale factor oscillations. The latter constraints
are much stronger than those obtained from the effects of the created gravitons, but are more dependent upon the details of
our specific models, and potentially less robust.
Finally, we examined the role of the bath of gravitons produced by the GRSF model in decohering quantum systems,
using the results of
Ref.~\cite{LorenciandFord:2015}. Long wavelength gravitons produce quantum spacetime geometry fluctuations
which in turn lead to length and phase fluctuations in a system exhibiting quantum interference. The phase fluctuations
lead to a loss of contrast in the interference pattern. Our upper bound on the present graviton energy density, obtained from data on the Hubble parameter in the late universe,
leads to a lower bound on the characteristic decoherence time, $t_d$, given in Eq.~(\ref{eq:td-bound}).
This bound allows the decoherence time to be quite long unless the energy difference of interfering components
of the system is large.
\acknowledgments
We thank Mark Hertzberg, Alexander Vilenkin, and Xiaozhe Hu for valuable discussions. This work was supported in part by the National Science Foundation under Grants No. PHY-1506066 and No. PHY-1607118.
\section{Introduction}
In economics, an agent is defined by her preferences and beliefs, in
psychology by her values, attitudes and feelings. One also talks about
\textquotedblleft eliciting\textquotedblright\ or \textquotedblleft
revealing\textquotedblright\ preferences and attitudes. This tacitly
presumes that those properties are sufficiently well-defined (determined)
and stable. In particular, it is assumed that the mere fact of subjecting a
person to an elicitation procedure, i.e., to \textquotedblleft
measure\textquotedblright\ her taste does not affect the taste. Yet,
psychologists are well aware that simply answering a question about a
feeling may modify a person's state of mind. For instance when asking a
person \textquotedblleft Do you feel angry?\textquotedblright\ a
\textquotedblleft yes" answer may take her from a blended emotional state to
an experience of anger. But before answering the question, it may be neither
true nor false that the person was angry. It may be a \textquotedblleft
jumble of emotions\textquotedblright \cite{Wri}. Similarly, Erev, Bornstein
and Wallsten (1993) show in an experiment that simply asking people to state
the subjective probability they assign to some event affects the way they
make subsequent decisions. The so-called \textquotedblleft disjunction
effect\textquotedblright\ (Tversky and Shafir (1992)) may also be viewed in
this perspective. In a well-known experiment, the authors find that
significantly more students report they would buy a non-refundable Hawaii
vacation if they knew whether they passed or failed the exam than if
they don't know the outcome of the examination. In the case they
passed, some buy the vacation to reward themselves. In the case they failed,
some purchase the vacation to console themselves. When they don't know, a
seemingly inconsistent behavior is observed: fewer vacations are being
purchased than in either of the two possible events.
In the examples above, the mere fact of subjecting an agent to a procedure
that reveals her feeling, preferences or beliefs seems to affect her. In
this paper, we propose to adopt a measurement theoretical approach to
behavior: actual behavior reveals preferences (or beliefs) in the sense of
being the outcome of a measurement of those preferences. Interestingly,
Kahneman and A. Tversky explicitly discuss some behavioral anomalies in
terms of measurement theory: \textit{\textquotedblleft Analogously, - to
classical physical measurement - the classical theory of preference assumes
that each individual has a well-defined preference order and that different
methods of elicitation produce the same ordering of
options\textquotedblright . But, \textquotedblright In these situations - of
violation of procedural invariance - observed preferences are not simply
read off from some master list; they are actually constructed in the
elicitation process.\textquotedblright } (\cite{Katver00} p. 504). A. Sen
\cite{sen97} also emphasizes that the \textquotedblleft act of
choice\textquotedblright\ has implications for preferences. In this work we
adopt the view that performing a measurement on a system generally changes
its state. In particular, an experiment or a decision situation that reveals
a person's preferences affects that person's preferences.
Is it possible to build a predictive model of a system whose state changes
as we perform measurements on it? We assert that it is if the interaction
between systems and measurement instruments satisfies some natural
conditions. We formulate them as axioms and show that the state space is
endowed with the structure of an atomistic orthomodular orthospace and the
states are realized as probability measures on the state space.
Of course, our formalization does not build on an empty spot. The question
of modeling a system that changes when being measured is at the heart of
Quantum Mechanics (QM). Birkhoff and von Neumann's seminal article from 1936
initiated a rich literature on the mathematical foundations of QM. For an
excellent review of the field see the introductory chapter in Coecke, Moore
and Wilce (2000). Recently, the interest for QM has been rapidly expanding
to other fields. Partly, this is due to the development of quantum
computing, which inspires physicists and more recently economists to
investigate the use of quantum information in games (Eisert (1999), La Mura
(2004)). Another avenue of research has emerged in response to observations
that classical (or macro) objects (e.g. human perception or preferences) can
exhibit properties specific to QM-objects. In Lambert-Mogiliansky, Zamir and
Zwirn (2003), a Hilbert space model is proposed to describe economic agents'
preferences and decision-making. Aerts (1994), Busemeyer and Townsend (2004)
and Khrenikov et al. (2003) investigate quantum-like phenomena in
psychology. The basic idea is that the mathematical formalism of QM, often
referred to as \textquotedblleft quantum logic\textquotedblright\ rather
than its physical content, is a suitable model for describing, explaining
and predicting human behavioral phenomena in psychology and social sciences.
In this paper we expose the foundations of a general measurement theory. The
objective of the proposed formulation is to allow an assessment of the relevance
of this framework for the social sciences, including for the analysis of
individual choice and in particular for modelling bounded rationality.
Section 2 offers a few examples of quantum and quantum-like behavior. In
Section 3 we introduce basic notions of measurement theory, namely that of
measurement and of state. They are illustrated in models of rational choice
in Section 4. Axioms and their consequences are exposed in Sections 5 and 6.
Section 7 discusses an interpretation of the basic axioms and properties for
behavioral sciences.
\section{Examples}
\textbf{Example 1:} \textit{The spin of an electron}
An electron is endowed with several characteristics including the spin. The
spin is an intrinsic property of any particle and corresponds to a magnetic
moment which can be measured.\footnote{%
Stern and Gerlach created an instrument such that the interaction between
the magnetic moment of the electron and that of the experimental setup
generates the splitting of a beam of electrons. A measure of the deviation
can be interpreted as a measurement of spin (along some orientation).}
It is well-known that the outcome of the measurement is always $\pm 1/2$ (in
some units) independently of the orientation of the measurement device. If
we measure a concrete electron along some axis $x$ and obtain result $+1/2$,
then a new measurement along the same axis will give the same result. Assume
we prepare a number of electrons this way. If we, for the second
measurement, modify the orientation of the axis, e.g., the measurement
device is turned by $90^{\circ }$, the result now shows equal probability
for both outcomes. As we anew perform the measurement along the $x$-axis, we
do not recover our initial result. Instead, the outcome will be $-1/2$ with
probability 0.5.
We limit ourselves to noting that once the spin of the electron along some
axis is known, the results of the measurement of the spin of that electron
along some other axis has a probabilistic character. This is a central
feature. In the classical world, we are used to dealing with probabilities. But
there the explanation for the random character of the outcome is easily
found. We simply do not know the exact state of the system, which we
represent by a probability mixture of other states. If we sort out this
mixture in the end we obtain a pure state and then the answer will be
determinate. In the case with the spin, it is not possible to simultaneously
eliminate randomness in the outcome of measurements relative to different
axes.\medskip
\textbf{Example 2:} \textit{A fly in a box}
Consider a box divided by two baffles into four rooms: Left/Front (LF),
Left/Back (LB), Right/Front (RF) and Right/Back (RB). In this box, we hold a fly that flies around.
Because of the baffles, it is limited in its movements to the room where it
is.
Assume that we only have access to two types of measurements. The first
allows answering the question whether the fly is in the Left (L) or the
Right (R) half of the box. And, in the process of measurement, the baffle
between the Front (F) and the Back (B) half of the box is lifted while the
separation between Right and Left is left in place. During that process, the
fly flies back and forth from Front to Back. When the measurement operation
is over and the baffle between Front and Back put back in place, the
position of the fly is therefore quite random (LF or LB). The same applies
for the measurement of Front/Back.
Assume that we have performed the measurement L/R and obtained answer L.
Repeating that same measurement even 100 times we will always obtain the
same answer L. But if we perform, in between, the F/B measurement, we have equal
(for the sake of simplicity) chances to obtain R or L. We see that the
behavior of our system is reminiscent of that of the spin (when the Stern-Gerlach
device is rotated by an angle of $90^{\circ }$). Here the position of the
fly cannot be determined with certainty with respect to the two measurements
(LR) and (FB) simultaneously. The measurement affects the system in an
uncontrollable and unavoidable way. This simple example exhibits all basic
features of the non-classical measurement theory developed in this
paper.\medskip
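The example can be replayed in a few lines of Python; the rendering below is ours, with the state recorded as the room the fly occupies and each measurement revealing one coordinate while randomizing the other:
\begin{verbatim}
import random

def measure(state, axis):
    # state is a pair like ('L', 'F'); axis is 'LR' or 'FB'
    lr, fb = state
    if axis == 'LR':                 # lifting the F/B baffle randomizes F/B
        return lr, (lr, random.choice('FB'))
    else:                            # lifting the L/R baffle randomizes L/R
        return fb, (random.choice('LR'), fb)

random.seed(1)
state = ('L', 'F')
o1, state = measure(state, 'LR')     # 'L'
o2, state = measure(state, 'LR')     # 'L' again: first-kindness
o3, state = measure(state, 'FB')     # disturbs the L/R coordinate
o4, state = measure(state, 'LR')     # now 'L' or 'R' with equal chances
print(o1, o2, o3, o4)
\end{verbatim}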
\textbf{Example 3:} \textit{Attitudes and preferences}
Consider the following situation. We are dealing with a group of individuals
and we are interested in their preferences (or attitudes). We have two
tests at our disposal.
The first test is a questionnaire corresponding to a Prisoners' Dilemma
against an anonymous opponent. The options are cooperate (C) and defect (D).
The second test corresponds to the first mover's choice in an Ultimatum Game
(UG). The choice is between making an offer of (9,1) or of (4,6).\footnote{%
In the Ultimatum game the first mover makes an offer. The respondent either
accepts the deal and the payoffs are distributed accordingly. Or he refuses
in which case no one receives any payoff.}
The observations we are about to describe cannot be obtained in a world of
rational agents whose preferences are fully described by their monetary
payoff.\footnote{%
Game Theory uniquely predicts behavior: people defect (D) in the PD and
(with common knowledge of rationality) they offer (9,1) in UG.
Experimentalists have however taught us to distinguish between monetary
payoffs, which can be controlled and preferences, which may include features
beside monetary payoffs unknown to the designer of the experiment.} But this
is not our point. Our point is that such observations exhibit the same
patterns as the ones we described in the spin and fly example above.
Suppose that we have the following observations. The respondents who answer
C to the first questionnaire repeat (with probability close to one) their
answer when asked immediately once more. We now perform the second test (UG)
and the first test (PD) again. In that last PD test we observe that not all
respondents repeat their initial answers. A (significant) share of those who
previously chose to cooperate now chooses to defect.\footnote{%
D. Balkenberg and T. Kapplan (University of Exeter, unpublished) conducted
an experiment with those two same games but with two populations of
respondents. They investigate the frequency of the choices when the two
games are played in one order compared to when they are played in the
reverse order. The data shows an impact of the first choice on the second
which is characteristic of non-classical measurements.}
How do we understand this kind of behavior? When deciding in the PD our
respondent may feel conflicted: she wants to give trust and encourage
cooperation, but she does not like to be taken advantage of. Consider the
case when her optimistic `I' takes over: she decides to cooperate. When
asked again immediately after, her state of mind is that of the optimistic
`I' so she feels no conflict: she confirms her first choice. Now she
considers the UG. The deal (4,6) is very generous but it may be perceived as
plain stupid. The (9,1) offer is not generous but given the alternative it
should not be perceived as insulting. She feels conflicted again because her
optimistic `I' does not provide clear guidance. Assume she chooses (9,1).
Now considering the Prisoners' Dilemma again, she feels conflicted anew.
Indeed, her choice of (9,1) is not in line with the earlier optimistic mood
so she may now choose to defect.\footnote{%
We do not in any manner mean that the proposed description in terms of inner
conflict is the only possible one. A variety of psychological stories are
consistent with such phenomena of non-commutativity.}
As in the spin and the fly example, the measurement (elicitation of
preferences) affects the agent in an uncontrollable way so the observed
behavior (measurement outcomes) may exhibit instances characteristic for
quantum-like systems.
\section{Measurements and states}
In this section we introduce and discuss two basic concepts of the theory,
namely the concepts of measurement and of state.
\subsection{Measurements}
A system is anything that we can perform \textit{measurements} on. A
measurement is an interaction between a system and some measurement device,
which yields some result, the \textit{outcome of the measurement} that we
can observe and record. The set of possible outcomes of a measurement $M$ is
denoted $O(M)$. For instance in the case with the Stern-Gerlach experimental
setup, we let the electron travel through a non-homogeneous magnetic field
and observe deviation either up or down. In the example with the fly we lift
up a baffle and observe in which half of the box the fly is located. In our
third example, we let people play the Prisoners' Dilemma (and the UG) and
observe their choice.
\
\textbf{First-kindness}
Measurements constitute a special class of interactions. We focus on
non-destructive measurements, which means that the system is not destroyed
in the process of measurement so we can perform new measurements on the
system. In particular, we can perform a measurement $M$ twice in a row. If
the outcomes of the two measurements always coincide, we say that the
measurement $M$ is a \emph{first-kind} measurement.\footnote{%
The term \textquotedblleft first-kind\textquotedblright\ measurement was
proposed by W. Pauli. J. von Neumann used the following formulation:
\textquotedblleft If the physical quantity is measured twice in succession
on a system $S$, then we get the same value each time.\textquotedblright}
In other words the results of a first-kind measurement are repeatable
(reproducible). This is a very important point that deserves some additional
comments. One may wonder why a \textquotedblleft
measurement\textquotedblright\ would fail to satisfy the property of
first-kindness. There are several reasons for that. A first and most
important reason is that the system is evolving. For instance, the thirst of
a person running a marathon is not the same from one time to another along
the race. In this paper we focus on systems that do not have an own dynamics
(or alternatively on situations where measurements are made so close in time
that we can disregard the own dynamics). A second reason for failing
first-kindness is noise in the measurement instrument itself. We shall
assume that measurements do not bring in own uncertainty. A third reason is
that the measurement operation actually is a combination of \textit{%
incompatible}\ measurements. We return to this point soon.
In what follows, we assume that all measurements are first-kind. Indeed, if
a measurement is not first-kind it is unclear what we measure and what the
relation is between the outcome of the measurement and our system. Of
course, the question about first-kindness of any concrete measurement is an
experimental one.
\medskip
\textbf{Compatibility}
Two measurements are \textit{compatible} if they, roughly speaking, can be
performed simultaneously or more precisely, if the performance of one
measurement does not affect the result of the other. Suppose that the first
measurement gave outcome $o$; then we perform the second measurement and the
first one anew. In case we are dealing with compatible measurements we
obtain outcome $o$ with certainty.
Given two compatible measurements $M$ and $N$ we can construct a third finer
measurement. We may perform $M$ and thereafter $N$ and view this as a new
(compound) measurement $M\ast N$ with outcome set $O(M)\times O(N)$. Because
of compatibility, the measurement $M\ast N$ is a first-kind measurement.
If all measurements are compatible we can substitute them with a single
finest (complete) measurement, which is also first-kind. Performing that
measurement we learn everything about the system. Such a system is \emph{%
classical}.
The existence of incompatible measurements is a distinctive feature of
non-classical systems. It is closely related to the impact of measurements
on the state and the existence of \textquotedblleft
dispersed\textquotedblright\ states (see next subsection).
In the examples of Section 2 all measurements were incompatible.
\subsection{States}
\textbf{Measurable systems}
As we perform a measurement and observe its result we learn something about
a system. All the information that we have about a system is
\textquotedblleft encapsulated\textquotedblright\ in the \emph{state} of the
system. The state is the result of past measurements and it is the basis for
making predictions of future measurements. A theory (or a model) of a system
should describe the set of states, the results of any measurement in every
state and the change in the state induced by any measurement.\footnote{%
If the system has an own dynamic the model should be enriched with a
description of its evolution over time.}
The state of a system predicts the result of any measurement. But we do not
assume that it predicts a unique outcome. We only assume that the state
determines the probabilities for the outcomes, that is it determines a
random outcome.
In order to avoid technical subtleties associated with the notion of
probability, we shall in what follows assume that the sets $O(M)$ are
finite. In such a case, a probabilistic measure (or a random element) on $%
O(M)$ is a collection of non-negative numbers (probabilities) $\mu (o)$ for
each $o\in O(M)$ subjected to the condition $\sum_{o\in O(M)}\mu (o)=1$. The
set (a simplex indeed) of probabilistic measures on $O(M)$ is denoted $%
\Delta (O(M))$. In such a way the state $s$ defines a random outcome in $%
O(M) $, that is a point $\mu _{M}(s)\in \Delta (O(M))$ for every measurement
$M$.
Of course, the random outcome $\mu _{M}(s)$ can be degenerated, that is $\mu
_{M}(o|s)=1$ for some outcome $o\in O(M)$. In the general case, the outcome
is random; moreover, we are interested in systems with \textquotedblleft
intrinsic uncertainty\textquotedblright . We return to this central point
later, for now we note that in the general case measurements impact on
(change) the state. Indeed, let $s$ be a state such that the outcome of a
measurement $M$ is not uniquely determined. After having performed
measurement $M$ (and obtained outcome $o$) the new state $s^{\prime }$ of
the system differs from $s$ because (according to the first-kindness of $M$)
now the result of $M$ is uniquely determined and equal to $o$.\medskip
\textbf{Definition.} A \emph{measurable system} is a system equipped with a
set $\mathcal{M}$ of first-kind measurements. A \emph{model} of a measurable
system includes the following three collections of data:
1) a set of \emph{states\ }$\mathbb{S}$ ;
2) an \emph{outcome mapping,} $\mu _{M}:\mathbb{S}\rightarrow \Delta (O(M))$
for every measurement $M\in \mathcal{M}$;
3) a \emph{transition mapping,} $\tau _{M,o}:\mathbb{S}\rightarrow \mathbb{S}
$ for every measurement $M\in \mathcal{M}$ and any of its outcome $o\in O(M)$%
.\medskip
The first mapping defines the probabilities for the possible outcomes when
performing measurement $M$ in an arbitrary state $s$. The second mapping $%
\tau _{M,o}$ points out where the state $s$ goes (transits) as we perform
measurement $M$ and obtain outcome $o\in O(M)$. We have to recognize that
the mappings $\tau_{M,o} $ are not defined for those states in which the
outcome $o $ is impossible.
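For concreteness, the three collections of data above can be rendered as a small data structure. The following Python sketch (finite sets of states and outcomes assumed) is only an illustration of the definition:
\begin{verbatim}
import random

class MeasurableSystem:
    # mu maps (M, state) to a dict {outcome: probability};
    # tau maps (M, outcome, state) to the post-measurement state
    def __init__(self, mu, tau):
        self.mu, self.tau = mu, tau

    def measure(self, M, state):
        probs = self.mu[(M, state)]                # mu_M(.|s)
        outcomes = list(probs)
        o = random.choices(outcomes, [probs[x] for x in outcomes])[0]
        return o, self.tau[(M, o, state)]          # transit to tau_{M,o}(s)
\end{verbatim}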
It is useful at this point to introduce a few notions that we also use
later. Let $M$ be a measurement and $A\subset O(M)$. Denote
\begin{equation*}
E_{M}(A)=\{s\in \mathbb{S},\ \mu _{M}(A|s):=\sum_{o\in A}\mu _{M}(o|s)=1\}.
\end{equation*}%
The set $E_{M}(A)$ consists of the states endowed with the following
property: the result of the measurement $M$ belongs to $A$ for sure. The set
$E_{M}(o)$ for $o\in O(M)$ is called the \emph{eigenset} of measurement $M$
corresponding to outcome $o$.
In these terms the mapping $\tau _{M,o}$ is not defined on the subset $%
E_{M}(O(M)\setminus \{o\})$, where $o$ cannot be an outcome of $M$. The
image of $\tau _{M,o}$ coincides with the eigenset $E_{M}(o)$.\medskip
\textbf{Pure states}
Although it is not necessary, we shall suppose that two states coincide if
all their predictions are the same. (Here we follow Mackey:
\textquotedblleft A state is a possible simultaneous set of statistical
distributions of the observables.\textquotedblright ). In that case, we can
consider the set $\mathbb{S}$ as some subset of the convex set $\times
_{M\in \mathcal{M}}\Delta (O(M))$.\medskip
This allows to speak about mixtures of states. A state $\sigma $ is called a
(convex or probabilistic) \emph{mixture} of states $s$ and $t$ with
(non-negative) weights $\alpha $ and $1-\alpha $, if
\begin{equation*}
\mu _{M}(o|\sigma )=\alpha \mu _{M}(o|s)+(1-\alpha )\mu _{M}(o|t)
\end{equation*}
for any $M\in \mathcal{M}$ and any $o\in O(M)$. Mixtures of three or more
states are defined similarly. A state is said to be \emph{pure} if it is not
a non-trivial mixture of other states.
Without loss of generality one can suppose that the set of states $\mathbb{S}
$ is convex (as a subset of $\times _{M\in \mathcal{M}}\Delta (O(M))$). The
subset $\mathbb{P}$ of pure states is the set of extreme points of the
convex set $\mathbb{S}$, $\mathbb{P}=\text{ext}(\mathbb{S})$. In the sequel
we assume that $\mathbb{S}$ is the convex hull of $\mathbb{P}$, $\mathbb{S}=%
\text{co}(\mathbb{P})$. Moreover, it is quite natural to assume that the
transition mappings are linear (i.e., are compatible with the convex
structure on $S$). For this reason we can work with the set of pure
states $\mathbb{P}$ instead of $\mathbb{S}$. Of course, we should keep in
mind that the transition state $\tau _{M,o}(s)$ can be mixed.
In the classical world, pure states are \emph{dispersion-free}, that is the
outcome of any measurement performed on a system in a pure state is uniquely
determined. Randomness in the results of a measurement indicates that the
system is in a mixed state. One can sort out (or filter) this mixture by
making measurements so as to eventually obtain a pure state.
A distinctive feature of non-classical systems is the existence of dispersed
(that is non dispersion-free) pure states. This feature can be called
\textquotedblleft intrinsic uncertainty\textquotedblright . It is closely
related to two other properties of non-classical systems: the existence of
incompatible measurements and the impact of measurements on states. If a
state is dispersion-free i.e., the outcome of every possible measurement is
uniquely determined, there is no reason for the state to change. If all pure
states are dispersion-free then measurements do not impact on pure states
and therefore all measurements are compatible. On the contrary, if a state
is dispersed then by necessity it will be modified by an appropriate
measurement. On the other hand, the change in a pure state is the reason for
incompatibility of measurements. The initial outcome of the first
measurement $M$ is not repeated because the system has been modified by the
second measurement $N$.
\section{An illustration: non-classical rational choice}
Let us illustrate the above introduced notions in (thought) examples of
non-classical rational choice behavior. \medskip
We shall consider a situation where an agent is making a choice out of a set
of alternatives $X$. A primitive measurement is a choice from a subset $%
A\subset X$; the set of outcomes of the measurement is $A$ (i.e. we consider
single-valued choices). For this reason we denote such a choice-measurement $%
A$. A main idea is that a choice out of \textquotedblleft
small\textquotedblright\ subsets is well-defined and rational. By
well-defined we mean that the corresponding measurement is first-kind. By
rational we mean that consecutive choices from \textquotedblleft
small\textquotedblright\ subsets satisfy Houthakker's axiom (or the
principle of independence of irrelevant alternatives, IIA). Our motivation
is that an agent may, in his mind, structure any \textquotedblleft
small\textquotedblright\ set of alternatives, i.e., he is capable of
simultaneously comparing those alternatives. He may not be able to do that
within a \textquotedblleft big\textquotedblright\ set (which we interpret as
bounded rationality, see Section 7). This does not means that our agent
cannot make a choice from a \textquotedblleft big\textquotedblright\ set.
For example, he might use an appropriate sequence of binary comparisons and
choose the last winning alternative. However, such a compound
choice-measurement would not in general be first-kind.\medskip
We formulate Houthakker's axiom in the following way. Let $A$ and $B$ be two
\textquotedblleft small\textquotedblright\ subsets, and $A\subset B$. In our
context, Houthakker's axiom consists of two parts:
1) Suppose the agent chooses from $B$ an element $a$ which also belongs to $%
A $. If the consecutive measurement is $A$ then the agent chooses $a$.
2) Suppose the agent chooses from $A$ an element $a$. If the consecutive
measurement is $B$ then the outcome of the choice is not in $A\setminus
\{a\} $.\medskip
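The two parts of the axiom can be stated as a predicate on a pair of consecutive choice observations. The following Python fragment is our own hypothetical formalization, with a menu encoded as a \texttt{frozenset} and an observation as a pair (menu, chosen item):
\begin{verbatim}
def houthakker_ok(first, second):
    (M1, c1), (M2, c2) = first, second
    if M2 <= M1 and c1 in M2:    # part 1: the second menu is a subset
        return c2 == c1
    if M1 <= M2:                 # part 2: the second menu is a superset
        return c2 == c1 or c2 not in M1
    return True

# choosing b from {a,b,c} and then a from {a,b} violates part 1:
print(houthakker_ok((frozenset('abc'), 'b'), (frozenset('ab'), 'a')))
\end{verbatim}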
In order to be more concrete, we shall consider as \textquotedblleft
small\textquotedblright\ the subsets of size 3 or less.\medskip
We begin with the case when all choice-measurements are compatible.
Performing binary choices we obtain some binary relation $\prec $ on $X$.
From ternary choices we see that this relation is transitive so that the
relation $\prec $ is a linear order. Therefore, it is natural to identify
the set $\mathbb{P}$ of pure states with the set of linear orders on $X$. We
obtain the well-known classical model.
We now relax the assumption of compatibility of all choice-measurements and
consider three different models.\medskip
\textbf{Model 1.} $X$ includes three alternatives $a$, $b$ and $c$. To
define a model we need to define the set of states, the outcome and the
transition mappings.
Let the set of pure states $\mathbb{P}$ consists of three states denoted $%
[a] $, $[b]$ and $[c]$. When the agent is asked to choose an item out of $X$
he chooses $a$ in state $[a]$, $b$ in $[b]$, and $c$ in $[c]$. The
choice-measurement $X$ does not change the state. When the agent is asked to
choose an item out of $\{a,b\},$ a choice-measurement that we denote by $ab$%
, he chooses $a$ in the state $[a]$, $b$ in $[b]$; in the state $[c]$ he
chooses $a$ and $b$ with equal chances (and transits into $[a]$ or $[b]$
correspondingly). Symmetrically for the choice-measurements $ac$ and $bc$.
It is clear that Houthakker's axiom is satisfied in this model. Note that
the choice measurements are incompatible. Indeed, let for instance the agent
be in the state $[c]$ and choose $c$ out of $X$. We then ask her to choose
out of $\{a,b\}$. After that choice-measurement she chooses $a$ or $b$ out
of $X$ but not $c$.\medskip
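Using the \texttt{MeasurableSystem} sketch from Section 3, Model 1 can be encoded in a few lines; the run below replays the incompatibility just described (the encoding is ours, purely for illustration):
\begin{verbatim}
mu, tau = {}, {}
for z in 'abc':                                  # complete measurement X
    mu[('X', z)] = {z: 1.0}
    tau[('X', z, z)] = z
for x, y, third in (('a','b','c'), ('a','c','b'), ('b','c','a')):
    M = x + y                                    # binary choice {x, y}
    for s in (x, y):
        mu[(M, s)] = {s: 1.0}                    # eigenstates: no change
        tau[(M, s, s)] = s
    mu[(M, third)] = {x: 0.5, y: 0.5}            # dispersed in state [third]
    tau[(M, x, third)], tau[(M, y, third)] = x, y

system = MeasurableSystem(mu, tau)
o1, s = system.measure('X', 'c')     # 'c'
o2, s = system.measure('ab', s)      # 'a' or 'b'; the state leaves [c]
o3, s = system.measure('X', s)       # no longer 'c': incompatibility
\end{verbatim}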
\textbf{Model 2.} The set of alternatives $X$ is the same as in the previous
model. But the set of states differs. Now we identify (pure) states with
outcomes of our four choice-measurements $ab,\ ac,\ bc,\ abc$. That is $%
\mathbb{P}=\{\underline{a}b,\ a\underline{b},\ \underline{a}c,\ a\underline{c%
},\ \underline{b}c,\ b\underline{c},\ \underline{a}bc,\ a\underline{b}c,\ ab%
\underline{c}\}$. Here $\underline{a}c$ denotes the choice of item $a$ out
of $\{a,c\}$ and so on.
To define the model we have to specify the outcomes of the measurements and
the corresponding state transition.
Let the state be $a\underline{b}c$. The outcome of measurement $abc$ is
obvious, as well as that of measurements $ab$ and $bc$ (by Houthakker's
axiom). The corresponding new states are $a\underline{b}c$, $a\underline{b}$
and $\underline{b}c$ respectively. But what about choice-measurement $ac$?
We assume that (with equal chances) the new state is $\underline{a}c$ or $a%
\underline{c}$.
Let now the state be $a\underline{b}$. By definition $b$ is the outcome of
measurement $ab$ and the state does not change. Suppose that we perform the
choice-measurement $abc$. We assume that the new state is $ab\underline{c}$
with probability 1/3 and is $a\underline{b}c$ with probability 2/3.
Houthakker's axiom says that $a$ cannot be the outcome of the $abc$
measurement. Suppose that the measurement $ac$ is performed. The new state
is $a\underline{c}$ with probability 2/3 and $\underline{a}c$ with
probability 1/3. Similarly, if we perform measurement $bc$ we obtain $c$
with probability 1/3 and $b$ with probability 2/3.
The outcomes and the transitions into other states are defined
symmetrically. This completes the definition of our model which obviously
satisfies the Houthakker axiom. Model 2 describes a rational (non-classical)
choice behavior as does Model 1. Yet, the pure states cannot be identified
with orderings of the alternatives as in the classical model.\medskip
Clearly, the eigensets of the measurement $abc$ are the one-element sets $\{
\underline{a}bc\}$, $\{a\underline{b}c\}$ and $\{ab\underline{c}\}$. The
eigensets of measurement $ab$ look more interesting. They are the
two-elements sets $\{\underline{a}b,\underline{a}bc\}$ and $\{a\underline{b}%
,a\underline{b}c\}$. The eigensets of the measurements $bc$ and $ac$ are
defined similarly. Here is the full list of the properties:
a) 3 one-element subsets: $\{\underline{a}bc\}$, $\{a\underline{b}c\}$, $\{ab%
\underline{c}\}\ $(represented by the nodes of the second line from below in
the lattice below);
b) 6 two-element subsets: $\{\underline{a}b,\ \underline{a}bc\}$, $\{a%
\underline{b},\ a\underline{b}c\}$, $\{\underline{a}c,\ \underline{a}bc\}$, $%
\{a\underline{c},\ ab\underline{c}\}$, $\{\underline{b}c,\ a\underline{b}c\}$%
, $\{b\underline{c},\ ab\underline{c}\}\ $(they\ are represented by the
nodes of the third line from below in the lattice);
c) 3 four-element subsets: $\{\underline{a}bc,\ a\underline{b}c,\ \underline{%
a}c,\ \underline{b}c\}$, $\{\underline{a}bc,\ ab\underline{c},\ \underline{a}%
b,\ b\underline{c}\}$, $\{a\underline{b}c,\ ab\underline{c},\ a\underline{c}%
,\ a\underline{b}\}$ (represented by the nodes of the fourth line of the
lattice);
d) the empty set $\emptyset $ and the whole set $\mathbb{P}$\ (respectively
the bottom and the upper node in the lattice).\medskip
Note that the intersection of properties is a property as well. The lattice
of the properties is drawn below:
\unitlength=.800mm \special{em:linewidth 0.4pt} \linethickness{0.4pt}
\begin{picture}(81.00,64.00)(-30,5)
\put(60.00,5.00){\circle{2.00}} \put(40.00,20.00){\circle{2.00}}
\put(60.00,20.00){\circle{2.00}} \put(80.00,20.00){\circle{2.00}}
\put(57.00,30.00){\circle{2.00}} \put(63.00,30.00){\circle{2.00}}
\put(40.00,45.00){\circle{2.00}} \put(60.00,45.00){\circle{2.00}}
\put(80.00,45.00){\circle{2.00}} \put(60.00,60.00){\circle{2.00}}
\put(59.00,59.00){\vector(-4,-3){17.00}}
\put(60.00,59.00){\vector(0,-1){13.00}}
\put(61.00,59.00){\vector(4,-3){17.00}}
\put(61.00,44.00){\vector(1,-1){13.00}}
\put(59.00,44.00){\vector(-1,-1){13.00}}
\put(41.00,19.00){\vector(4,-3){18.00}}
\put(60.00,19.00){\vector(0,-1){13.00}}
\put(79.00,19.00){\vector(-4,-3){18.00}}
\put(45.00,30.00){\circle{2.00}} \put(75.00,30.00){\circle{2.00}}
\put(76.00,29.00){\vector(1,-2){4.00}}
\put(44.00,29.00){\vector(-1,-2){4.00}}
\put(50.00,30.00){\circle{2.00}} \put(70.00,30.00){\circle{2.00}}
\put(41.00,44.00){\vector(2,-3){8.67}}
\put(51.00,29.00){\vector(1,-1){8.00}}
\put(80.00,44.00){\vector(-3,-4){10.00}}
\put(69.00,29.00){\vector(-1,-1){8.00}}
\put(42.00,44.00){\vector(3,-2){20.00}}
\put(78.00,44.00){\vector(-3,-2){20.00}}
\put(56.00,29.00){\vector(-2,-1){15.00}}
\put(64.00,29.00){\vector(2,-1){15.00}}
\end{picture}
\begin{center}
Figure 1
\end{center}
\noindent We note also that the lattice is not atomistic (not all elements
can be written as the join of atoms).\medskip
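The closure of this family under intersection can also be checked mechanically. In the following Python fragment, states are encoded as strings with the chosen item capitalized (for instance \texttt{Abc} stands for $\underline{a}bc$); the encoding is ours:
\begin{verbatim}
from itertools import combinations

P = [frozenset(s.split()) for s in [
    "", "Abc aBc abC Ab aB Ac aC Bc bC",        # the 0 and 1 of the lattice
    "Abc", "aBc", "abC",                        # one-element properties
    "Ab Abc", "aB aBc", "Ac Abc",               # two-element properties
    "aC abC", "Bc aBc", "bC abC",
    "Abc aBc Ac Bc", "Abc abC Ab bC",           # four-element properties
    "aBc abC aC aB",
]]
assert all((p & q) in P for p, q in combinations(P, 2))
\end{verbatim}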
\textbf{Model 3.} Here the set $X$ consists of four items: $a,b,c$ and $d$.
For any $A\subset X$ with 3 or 2 elements the corresponding
choice-measurement $A$ is first-kind. We assume as before that Houthakker's
axiom holds in any two consecutive choice-measurements. In addition, we
assume that choices out of $A$ and $B$ are compatible if $A\cup B\neq X$.
Thus, our agent is \textquotedblleft more classical\textquotedblright\ than
in Model 2.
As in the classical case we can perform binary and ternary measurements on
each triple of items taken separately and reveal a linear order on that
triple. It is therefore natural to identify the set of pure states with the
collection of those linear orders.\ There are 24 such orders: $[a>b>c]$, $%
[c>d>a]$, and so on.
Suppose that the state is $[a>b>c]$.
1) If $A\subset \{a,b,c\}$ then the outcome of choice-measurement $A$ is
determined by the order $a>b>c$; the measurement does not change the state.
2) If we perform measurement $A$ with outcome set$\ \{a,b,d\},$ the new
state will be $[a>b>d]$, $[a>d>b]$ or $[d>a>b]$ with equal chances; the
outcomes are $a$, $a$ and $d$ correspondingly. For $A=\{a,c,d\}$ or $%
\{b,c,d\}$ the new states are defined similarly.
3) If we perform measurement $A$ with outcome set $\{a,d\},$ the new state
can be one of $[a>b>d]$, $[a>c>d]$, $[a>d>b]$, $[a>d>c]$, $[d>a>b]$, and $%
[d>a>c]$ with equal chances. In the first four cases the outcome is $a$; in
the two last cases the outcome is $d$. Similarly for $A=\{b,d\}$ and $%
A=\{c,d\}$.
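These transition rules can be condensed into a short routine. The following Python fragment is our own reading of rules 1)--3), with a state given as a best-first triple, and is meant only as an illustration:
\begin{verbatim}
import random

def transit(state, A):
    # state: tuple ordered best-first, e.g. ('a','b','c') for [a>b>c]
    if set(A) <= set(state):                 # rule 1: compatible menu
        return next(i for i in state if i in A), state
    new = (set(A) - set(state)).pop()        # the item foreign to the triple
    kept = [i for i in state if i in A]      # retained items, old order
    if len(A) == 2:                          # rule 3: keep one more old item
        extra = random.choice([i for i in state if i not in A])
        kept = [i for i in state if i in A or i == extra]
    pos = random.randrange(len(kept) + 1)    # insert the new item anywhere
    order = tuple(kept[:pos] + [new] + kept[pos:])
    return next(i for i in order if i in A), order
\end{verbatim}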
The eigenset of the measurement $abc$ corresponding to $a$ is $%
\{[a>b>c],[a>c>b]\}$. The eigenset of the measurement $ab$ corresponding to $%
a$ is $\{[a>b>c],\ [a>c>b],\ [a>b>d],\ [a>d>b],\ [c>a>b],\ [d>a>b]\}$%
.\medskip
Consider now a choice out of the set $\left\{ a,b,c,d\right\}$. It
is not a first-kind choice-measurement. Here many scenarios are
possible. We shall assume the agent proceeds by making two (first-kind)
measurements. First she chooses from a pair, then from the triple consisting
of the first selected item and the two remaining items. We can call this
behavior \textquotedblleft procedurally rational\textquotedblright\ because the agent proceeds as if she had
preferences over the 4 items.
Let us consider the following scenario. Assume that the agent just made a
choice in $abc$ and that the outcome was $ab\underline{c}$, so the state
belongs to $\{[c>b>a],[c>a>b]\}$. Suppose that when confronted with $abcd$
the agent follows this procedure: she performs the measurement $ab$
and then the measurement $bcd$ (this means that the first outcome is
$a\underline{b}$). After the first measurement, the state can be any one
of $\left[ c>b>a\right]$, $\left[ b>a>d\right]$, $\left[ b>d>a\right]$
and $\left[ d>b>a\right]$. Therefore the outcome of $bcd$ can be $b$.
This violates the principle of independence of irrelevant alternatives
(IIA) and demonstrates preference reversal.\footnote{%
In models 1 and 2 we could also obtain a phenomenon of preference reversal
but not in two consecutive choices. Model 3 allows for that because the
choice out of four items is a compound measurement.}
The examples above demonstrate that there can be many different models of
one and the same measurable system. Which of them is the correct one? It is an
empirical question. We also see from this illustration that as we consider
the possibility of incompatible choice-measurements on subsets of $X$, the
behavior of a non-classical rational man differs from that of a classical
rational man in ways that can accommodate behavioral anomalies.
\section{Basic structure on the state space}
In this section we go further with the formal investigation. We show that if
the model of a measurable system $(\mathcal{M},\mathbb{S},\mu ,\tau )$ is
endowed with some additional properties (that we formulate as axioms) then
the set of states is equipped with the structure of an orthomodular
ortho-separable orthospace.
\subsection{Properties}
We remind that, for $M\in \mathcal{M}$ and $A\subset O(M)$, we introduced
the set $E_{M}(A)$ as the set of states $s$ such that when performing
measurement $M$ on a system in state $s$, the result of the measurement
belongs to $A$ for sure. The sets of the form $E_{M}(A)$ are called \emph{%
properties} of our system, namely the property to imply $A$ when performing
measurement $M$.
Different instruments measure different properties of a system. But it may
happen that one and the same property can be measured by several
instruments. We require that in such a case the probability for the property
does not depend on the instrument. We note that Model 1 from Section 4 does
not meet that requirement. Indeed, property $\left[ a\right] $ can be
obtained by measurements $abc$ and $ab$. But in state $\left[ c\right]$,
outcome $a$ never obtains with the first instrument while $a$ obtains with
probability 1/2 with the second instrument. Yet, this is a very reasonable
requirement and we impose the following (slightly stronger) monotonicity
condition:
\begin{axiom}
Let $M$ and $M^{\prime }$ be two measurements, let $A\subset O(M)$, and $%
A^{\prime }\subset O(M^{\prime })$. Suppose that $E_{M}(A)\subset
E_{M^{\prime }}(A^{\prime })$. Then $\mu _{M}(A|s)\leq \mu _{M^{\prime
}}(A^{\prime }|s)$ for any state $s$.
\end{axiom}
In particular, if $P=E_{M}(A)=E_{M^{\prime }}(A^{\prime })$ is a property
then the probabilities $\mu _{M}(A|s)$ and $\mu _{M^{\prime }}(A^{\prime
}|s) $ are equal and depend only on the property $P$. We denote this number
as $s(P)$ and understand it as the probability to obtain property $P$ when
performing a suitable measurement of the system in state $s$. By definition
we have
\begin{equation*}
s(P)=1\Leftrightarrow s\in P.
\end{equation*}
The set of all properties is denoted by $\mathcal{P}$. As a subset of $2^{%
\mathbb{S}}$, $\mathcal{P}$ is a poset (a partially ordered set). It has a
minimal element $\mathbf{0}=\emptyset $ and a maximal element $\mathbf{1}=%
\mathbb{S}$. Any state $s\in \mathbb{S}$ defines the monotone function $s:%
\mathcal{P}\rightarrow \mathbb{R}_{+}$, $s(\mathbf{0})=0$, $s(\mathbf{1})=1$%
.\medskip
We now describe another basic structure on the property poset $\mathcal{P}$.
Let $P=E_{M}(A)$ be a property. The subset $P^{op}=E_{M}(O(M)\setminus A)$
is a property too. We have $s(P^{op})=1-s(P)$ for any state $s\in \mathbb{S}$%
. Thus
\begin{equation*}
P^{op}=\{s\in \mathbb{S},\ s(P^{op})=1\}=\{s\in \mathbb{S},\ s(P)=0\}
\end{equation*}%
and the set-property $P^{op}$ depends only on $P$. We call it the \emph{%
opposite} property to $P$.
\begin{lemma}
The poset $\mathcal{P}$ equipped with the operation $P\mapsto P^{op}$ is an orthoposet. That
is, the following assertions hold:
1) If $P\subset Q$ then $Q^{op}\subset P^{op}$;
2) $P\cap P^{op}=\varnothing $ for every property $P$;
3) $(P^{op})^{op}=P$ for every property $P$.
\end{lemma}
\textbf{Proof.} 1) Suppose $s\in Q^{op}.\ $Then\ $s\left( Q^{op}\right) =1$
and $s\left( Q\right) =1-s\left( Q^{op}\right) =0.$ Since $P\subset Q$ by
Axiom 1 we have that $s\left( P\right) =0$ as well. Therefore $s\left(
P^{op}\right) =1-s\left( P\right) =1$ and $s\in P^{op}$. Assertions 2) and
3) are obvious. $\Box$\medskip
From Lemma 1 we obtain that $\mathbf{1}$ is the only property which contains
both $P$ and $P^{op}$. Indeed, if $P\cup P^{op}\subset Q$ then by 1)
$Q^{op}\subset P^{op}\cap (P^{op})^{op}=P^{op}\cap P=\varnothing$, so $Q^{op}=\mathbf{0}$
and $Q=\mathbf{1}$. In other words the supremum $P\vee P^{op}$ equals $\mathbf{1}$
in the \emph{orthoposet} $\mathcal{P}$. Note that in general $P\cup P^{op}\neq \mathbb{S}$.
\subsection{Orthospaces}
The set of states $\mathbb{S}$ possesses a similar orthogonality structure.
We say that two states $s$ and $t$ are \textit{orthogonal} (and write $%
s\perp t$) if there exists a property $P$ such that $s(P)=1$ and $t(P)=0$.
Since, for the opposite property $P^{op}$, it holds $s(P^{op })=0$ and $%
t(P^{op})=1$, we have $t\perp s$, so that $\perp $ is a symmetric relation
on the set $\mathbb{S}$. Clearly, $\perp $ is an irreflexive relation.
This leads us to the following general notion.\medskip
\textbf{Definition.} A symmetric irreflexive binary relation $\bot $ on a
set $X$ is called an \emph{\ orthogonality relation}. A set $X$ equipped
with an orthogonality relation $\bot $ is called an \emph{orthospace}%
.\medskip
\textbf{Example 4.} Consider an Euclidean space $H$ equipped with a scalar
product $\left( x,y\right) .\ $We say that vectors $x$ and $y$ are
orthogonal if $(x,y)=0$. The symmetry of the orthogonality relation follows
from the symmetry of the scalar product. To obtain the irreflexivity we have
to remove the null vector. So that $X=H\backslash \{0\}$ is an orthospace. A
Hilbert space over the field of complex numbers is another example. Such a
model is standard for Quantum Mechanics.\medskip
When graphically representing an orthospace, one may connect orthogonal
elements and obtain a (non-oriented) graph. This representation is often the
most \textquotedblleft economic\textquotedblright , since few edges need to
be written. Alternatively, one may connect non-orthogonal (or \textit{tolerant}\footnote{%
The term \textquotedblleft tolerant\textquotedblright\ is used in
mathematics to refer to a symmetric and reflexive relation.}) elements. The
tolerance graph quickly becomes extremely complex. In simple cases, we
combine the two representations. Dotted lines depict orthogonality and solid
lines depict tolerance. The graph of Example 2 (A fly in a box) is shown in
Figure 2.
\begin{center}
\includegraphics[width=1.83in]{qlbfig3.png}

Figure 2
\end{center}
For $A\subset X$, we denote by $A^{\perp }$ the set of elements that are
orthogonal to all elements of $A$,
\begin{equation*}
A^{\perp }=\{x\in X,\ x\perp A\}.
\end{equation*}%
For instance $\emptyset ^{\perp }=X$ and $X^{\perp }=\emptyset $. If $%
A\subset B$ then $B^{\perp }\subset A^{\perp }$.\medskip
\textbf{Definition.} The subset $A^{\perp \perp }$ is called the \emph{%
ortho-closure} of a subset $A$. A set $F$ is said to be \emph{ortho-closed}
(also called a \emph{flat }) if $F=F^{\perp \perp }$.\medskip
It is easily seen that for any $A\subset X$, the set $A^{\perp }$ is
ortho-closed. Indeed, let $F=A^{\perp }$. Then $F\subset F^{\perp \perp }$.
On the other hand, $A \subset A^{\perp \perp }=F^{\perp }$; applying $\perp $
we reverse the inclusion relation: $F^{\perp \perp }\subset A^{\perp }=F$. In
particular, the ortho-closure of any $A$ is ortho-closed.\medskip
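In the finite case these operations are directly computable. The sketch below uses the orthospace suggested by the fly example, assuming its four pure states L, R, F, B with $L\perp R$ and $F\perp B$:
\begin{verbatim}
def perp(A, X, orth):
    # A^perp: the elements of X orthogonal to every element of A
    return {x for x in X if all(frozenset((x, a)) in orth for a in A)}

def closure(A, X, orth):
    return perp(perp(A, X, orth), X, orth)      # A^(perp perp)

X = {'L', 'R', 'F', 'B'}
orth = {frozenset(('L', 'R')), frozenset(('F', 'B'))}
print(closure({'L'}, X, orth))        # {'L'}: singletons are flats
print(closure({'L', 'F'}, X, orth))   # the whole space: its perp is empty
\end{verbatim}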
Let $\mathcal{F}(X,\perp )$ denote the set of all flats of the orthospace $(X,\bot )$ ordered by set-theoretical inclusion, which we denote $\leq $. It contains the largest element $X$, denoted $\mathbf{1}$, and the smallest element $\emptyset $, denoted $\mathbf{0}$. Moreover, the poset $\mathcal{F}\left( X,\perp \right) $ is a (complete) lattice. The intersection of two (or more) flats is a flat (since every flat has the form $B^{\perp }$ and $\cap _{i}B_{i}^{\perp }=(\cup _{i}B_{i})^{\perp }$), implying that $A\wedge B$ exists and equals $A\cap B$. The join $A\vee B$ also exists and is given by the formula
\begin{equation*}
A\vee B=\left( A\cup B\right) ^{\perp \perp }.
\end{equation*}
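To see that the join can be strictly larger than the union (an illustration of our own in the setting of Example 4, with $H=\mathbb{R}^{3}$): take $A=\mathrm{span}(e_{1})\backslash \{0\}$ and $B=\mathrm{span}(e_{2})\backslash \{0\}$. Then $(A\cup B)^{\perp }=\mathrm{span}(e_{3})\backslash \{0\}$, so $A\vee B=\mathrm{span}(e_{1},e_{2})\backslash \{0\}$: the whole punctured plane, and not merely the two lines of $A\cup B$.\medskip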
\textbf{Definition.} An \emph{ortholattice} is a lattice equipped with a
mapping $\perp :\mathcal{F} \rightarrow \mathcal{F} $ such that
i. $x=x^{\perp \perp }$;
ii. $x\leq y$ if and only if $y^{\perp }\leq x^{\perp }$;
iii. $x\vee x^{\perp }=\mathbf{1}$.\medskip
Thus the poset $\mathcal{F}(X,\perp )$ is an ortholattice.
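Indeed, property i holds by the definition of a flat; property ii follows because $\perp $ reverses inclusions and is involutive on flats; and for iii note that, by irreflexivity, $(F\cup F^{\perp })^{\perp }=F^{\perp }\cap F^{\perp \perp }=F^{\perp }\cap F=\emptyset $, whence $F\vee F^{\perp }=\emptyset ^{\perp }=X=\mathbf{1}$.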
\subsection{The intersection axiom}
We have associated to a measurable system two objects: the orthoposet of
properties $\mathcal{P}$ and the ortholattice of flats $\mathcal{F}(\mathbb{S%
},\bot )$. It is intuitively clear that these two objects are closely
related. But for now we can only assert the following inclusion
\begin{equation*}
P^{op}\subset P^{\bot }
\end{equation*}%
for a property $P$. Indeed, by the definition of $\bot $, any element of $P $
is orthogonal to any element of the opposite property $P^{op}$.
In order to go further we impose the following \textit{intersection axiom}
\begin{axiom}
The intersection of any properties is a property.
\end{axiom}
Axiom 2 puts conditions on the set of measurements, as it requires that if $P$ and $Q$ are two properties, there must exist a measurement such that $P\cap Q$ is one of its eigensets. Axiom 2 is fulfilled in Models 2 and 3 from Section 4. As we shall see, this axiom implies that properties and flats are the same. To prove this we first derive a few consequences of Axiom 2.
\begin{lemma}
$P^{op}=P^{\bot }$ for any property $P$.
\end{lemma}
\textit{Proof}. Since $P^{op}\subset P^{\bot }$, we have to check the opposite inclusion $P^{\bot }\subset P^{op}$. Let $t$ be an arbitrary element of $P^{\bot }$, and let $s$ be an arbitrary element of $P$. Since $t\bot s$, by the definition of $\bot $ there exists a property $E_{s}$ such that $t\in E_{s}$ and $s\in E_{s}^{op}$. Set $E=\cap _{s\in P}E_{s}$; by Axiom 2, $E$ is a property. Since $E\subset E_{s}$ for any $s$, we have $E_{s}^{op}\subset E^{op}$. Together with $s\in E_{s}^{op}$ we obtain that $P\subset E^{op}$. Therefore $E\subset P^{op}$, and we have that $t\in \cap _{s}E_{s}=E\subset P^{op}$. $\Box $\medskip
In particular, any property $P=(P^{op})^{op}=(P^{op})^\bot$ is orthoclosed. Hence the inclusion $\mathcal{P}\subset \mathcal{F}$ holds. We now prove the inverse inclusion, that is, that any flat is a property. We first establish this fact for flats of the form $\{s\}^{\bot }$, where $s$ is a state. Let $P(s)$ denote the least property containing the state $s$, that is, the intersection of all properties containing the state $s$.
\begin{lemma}
$\{s\}^{\bot }=P(s)^{op}$.
\end{lemma}
\textit{Proof.} Since $s\in P(s)$, we have that $P(s)^{op}=P(s)^{\bot }$ (by Lemma 1) is contained in $\{s\}^{\bot }$. In order to check the reverse inclusion, we consider an arbitrary element $t$ of $\{s\}^{\bot }$. By definition this means that $s\in E$ and $t\in E^{op}$ for some property $E$. Since $P(s)$ is the minimal property containing $s$, we have $P(s)\subset E$. Hence $E^{op}\subset P(s)^{op}$ and $t\in P(s)^{op}$. This proves the inclusion $\{s\}^{\bot }\subset P(s)^{op}$. $\Box $\medskip
In particular, flats of the form $\{s\}^{\bot }$ are properties. Since any flat is an intersection of subsets of the form $\{s\}^{\bot }$, from the intersection axiom we obtain that any flat is a property. Thus, we have proved the following important theorem.
\begin{theorem}
$\mathcal{P}=\mathcal{F}(\mathbb{S},\bot )$.
\end{theorem}
From Theorem 1 we see that the orthoclosure $A^{\bot \bot }$ of a set $A$ is
the least property containing $A$. It consists of states having the
properties that are common to all elements of $A$. The elements of $A^{\bot
\bot }$ are also called \textit{superpositions} of $A$. The following
Proposition implies that any mixture of $A$ is a superposition of $A$.
\begin{proposition}
Suppose that a state $s$ is a convex mixture of states $s_{1},...,s_{n}$ with strictly positive coefficients $\alpha _{i}$. Then the orthoclosure of $\{s\}$ is the same as the orthoclosure of $\left\{ s_{1},...,s_{n}\right\} $.
\end{proposition}
\textit{Proof.} We have to show that $s$ is endowed with property $P$ if and only if $s_{1},...,s_{n}$ are endowed with property $P$. Now $s\in P\Leftrightarrow s\left( P\right) =1$, and $s\left( P\right) =\sum_{i}\alpha _{i}s_{i}\left( P\right) $ with $\sum_{i}\alpha _{i}=1$. Since $s_{i}\left( P\right) \leq 1$ and $\alpha _{i}>0$ for all $i$, we have that $\sum_{i}\alpha _{i}s_{i}\left( P\right) =1$ if and only if $s_{i}\left( P\right) =1$ for all $i$, i.e., if and only if $s_{1},...,s_{n}\in P$. $\Box $\medskip
\textbf{Corollary.} \textit{The natural mapping }$\mathcal{F}(\mathbb{P}%
,\bot )\rightarrow \mathcal{F}(\mathbb{S},\bot )$\textit{, where }$\mathbb{P}
$\textit{\ is the set of pure states with the induced orthogonality
relation, is an isomorphism of ortholattices.}\medskip
For this reason we can work with the orthospace $\mathbb{P}$ of pure states, keeping in mind that mixtures are in principle possible.
\subsection{Atomicity and the preparation axiom}
A state $s$ is an \emph{atom} if the set $\{s\}$ is orthoclosed. By Proposition 1, any atom is a pure state. Model 2 from Section 4 shows that the converse is not true. Nevertheless, in the sequel we restrict our attention to systems in which any pure state is an atom. We formulate this requirement in terms of measurements.
\begin{axiom}
For any pure state $s\in \mathbb{P}$, there exists a measurement $M$ such
that $\{s\}$ is one of its eigensets.
\end{axiom}
In other words, any pure state is fully characterized by its properties. Substantively (or operationally) it means that, given any state $s$, there exists an experimental set-up which can \textquotedblleft prepare\textquotedblright\ the system in that state $s$. Axiom 3 is rather reasonable (it is fulfilled in Model 3 from Section 4) and we explore its consequences.
Let $s$ and $t$ be two pure states. Due to Axiom 3, the set $\{t\}$ is a
property and therefore we can speak about $s(t):=s(\{t\})$, the probability
for a transition from the state $s$ to the state-property $t$.
\begin{lemma}
Suppose Axiom 3 is fulfilled. Then $t(s)=0$ if and only if $s\bot t$.
\end{lemma}
\textit{Proof.} Suppose that $s\bot t$ and let $P$ be a property such that $s\in P$ and $t(P)=0$. From the inclusion $\{s\}\subset P$ and the monotonicity axiom we have $t(\{s\})=0$. The converse assertion is even more obvious, because $s$ belongs to the property $\{s\}$ on which $t$ vanishes. $\Box $\medskip
\begin{corollary}
$s(t)=0$ \emph{if and only if} \ $t(s)=0$.\medskip
\end{corollary}
In the general case, $s(t)$ can differ from $t(s)$.\medskip
Axiom 3 implies the ortho-separability of the orthospace $\mathbb{P}$. Let us recall that an orthospace $(X,\bot )$ is called \emph{ortho-separable} if any single-element subset $\{x\}$ of $X$ is a flat. It is easy to check that $\{x\}$ is a flat if and only if for any $y\neq x$ there exists $z$ orthogonal to $x$ but not to $y$. For example, the orthospace in Figure~3
\begin{center}
\includegraphics[width=2.5953in]{qlbfig4.png}\\[2pt]
Figure~3
\end{center}
is not ortho-separable, since $a^{\perp \perp }=\{a,c,d\}\neq \{a\}$.\medskip
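Note also that the Euclidean orthospace of Example 4 is not ortho-separable: for any nonzero vector $x$ we have $\{x\}^{\perp \perp }=\mathrm{span}(x)\backslash \{0\}\neq \{x\}$. This is why, in the continuation of Example 4 below, one passes from nonzero vectors to one-dimensional subspaces.\medskip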
It is worthwhile noting that any ortho-separable orthospace $(X,\bot )$ can be reconstructed from the ortho-lattice $\mathcal{F}(X,\bot )$. Recall that an \emph{atom} of a lattice $\mathcal{F}$ is a minimal non-zero element of $\mathcal{F}$. A lattice $\mathcal{F}$ is called \emph{atomistic} if any element of the lattice is the join of atoms. If $(X,\perp )$ is an ortho-separable orthospace then $\mathcal{F}(X,\perp )$ is a complete atomistic ortho-lattice.
Conversely, let $\mathcal{F}$ be a complete atomistic ortho-lattice. Then,
there exists an ortho-separable orthospace $(X,\perp )$ (unique up to
isomorphism) such that $\mathcal{F}$ is isomorphic to $\mathcal{F}\left(
X,\perp \right) $. One needs to take the set of atoms of $\mathcal{F}$ as the set $X$; atoms $x$ and $y$ are orthogonal if $x\leq y^{\perp }$. For more details see, for example, \cite{Mo}. Roughly speaking, ortho-separable
orthospaces and atomistic ortholattices are equivalent objects.
\subsection{Orthomodularity}
It was recognized early on that the failure of classical logic to accommodate quantum phenomena was due to the requirement that the lattice of properties should satisfy the distributivity law. Birkhoff and von Neumann \cite{BirvNeu36} proposed to substitute the distributivity law by the modularity law. As it turned out, the weaker notion of orthomodularity proved to be more adequate; see \cite{Kalm}.\medskip
\textbf{Definition.} An orthospace $(X,\bot )$ is said to be \emph{orthomodular} if, for every two flats $F$ and $G$ such that $F$ is strictly contained in $G$, there exists an element $x\in G$ which is orthogonal to $F$.\medskip
In other words, if $x$ does not belong to a flat $F$ then there exists a superposition of $F$ and $x$ which is orthogonal to $F$. Orthomodularity permits constructing orthogonal bases in the same way as in Euclidean spaces. An \emph{orthobasis} of a flat $F$ is a subset $B$ of mutually orthogonal elements such that $F=B^{\bot \bot }$. It is easy to see that (for any orthospace) the maximal flat $\mathbf{1}$ has an orthobasis. If $X$ is orthomodular then each flat has an orthobasis. More precisely, the following holds.
\begin{lemma}
Let $F$ be a flat in an orthomodular space, and $B^{\prime }\subset F$ be a
subset consisting of mutually orthogonal elements. Then, there exists an
orthobasis $B$ of $F$ containing $B^{\prime }$.
\end{lemma}
\emph{Proof.} Let $B$ be a maximal (by inclusion) subset of $F$ which contains $B^{\prime }$ and consists of mutually orthogonal elements. We claim that the orthoclosure of $B$ coincides with $F$. In the opposite case $B^{\bot \bot }$ is strictly contained in $F$, and by orthomodularity there exists an element $x$ in $F$ orthogonal to $B$. Adding $x$ to $B$ would extend $B$, which contradicts the maximality of $B$. $\Box$\medskip
In particular, beginning with the empty set $B^{\prime}$ we can construct an orthobasis of any flat. Note, however, that orthobases of the same flat (even the maximal flat $\mathbf{1}$) can have different numbers of elements.
The orthomodularity of the space $(X,\bot )$ implies the orthomodularity of the corresponding ortho-lattice of flats $\mathcal{F}(X,\bot )$. Recall that an ortho-lattice $\mathcal{F}$ is called \emph{orthomodular} if, for every pair of its elements $a$ and $b$ such that $a\le b$, the following equality holds
\begin{equation*}
b=a\vee (b\wedge a^\bot).
\end{equation*}
\begin{lemma}
An orthospace $(X,\bot )$ is orthomodular if and only if its ortholattice $\mathcal{F}(X,\bot )$ is orthomodular.
\end{lemma}
\emph{Proof.} Let $(X,\bot )$ be an orthomodular space and let $F$ and $G$ be two flats such that $F\subset G$. We have to show that $G=F\vee (G\wedge F^{\bot })$. Denote by $G^{\prime }$ the right-hand side of this expression; it is clear that $G^{\prime }\subset G$. If the inclusion is strict then $G$ contains an element $x$ orthogonal to $G^{\prime }$. In particular, $x$ is orthogonal to $F$, that is, $x$ belongs to $F^{\bot }$. Since $x$ belongs to $G$ as well, $x$ belongs to $G\cap F^{\bot }$ and all the more to $G^{\prime }$. But then $x$ is orthogonal to $x$, which contradicts the irreflexivity of the orthogonality relation $\bot $. Conversely, let the lattice $\mathcal{F}(X,\bot )$ be orthomodular and let $F\subset G$ be two different flats. Since $G=F\vee (G\cap F^{\bot })$, the set $G\cap F^{\bot }$ is nonempty. Every element of $G\cap F^{\bot }$ belongs to $G$ and is orthogonal to $F$. $\Box$\medskip
We now impose the following requirement
\begin{axiom}
Let $P$ and $Q$ be comparable properties (that is, either $P\subset Q$ or $Q\subset P$). Then there exists a measurement $M\in \mathcal{M}$ such that $P=E_{M}(A)$ and $Q=E_{M}(B)$ for some subsets $A,B$ of $O(M)$.
\end{axiom}
This is a serious restriction. For example, it is violated in Model 2 of Section 4. A first consequence of Axiom 4 is the orthomodularity of the state space.\medskip
\begin{proposition}
If Axiom 4 is fulfilled then the state space $\mathbb{S}$ (or $\mathbb{P}$) is orthomodular.\medskip
\end{proposition}
Indeed, suppose that $P\subset Q$ are two different properties. Let $M$ be a
measurement such that $P=E_{M}(A)$ and $Q=E_{M}(B)$ for $A,B\subset O(M)$.
Obviously, $A\subset B$ and this inclusion is strict. If $b\in B\setminus A$
then every element of $E_{M}(b)$ belongs to $Q$ and is orthogonal to $P$. $%
\Box $\medskip
Another important consequence of Axiom 4 is that any state $s\in \mathbb{S}$
can be considered as a probability measure on the orthospace $\mathbb{P}$%
.\medskip
\textbf{Definition.} A \emph{probability measure} on an orthospace $(X,\bot
) $ is a function $p: \mathcal{F}(X,\bot )\rightarrow \mathbb{R}_{+}$
satisfying the following two requirements:
1) if $F$ and $F^{\prime }$ are orthogonal flats then $p(F\vee F^{\prime
})=p(F)+p(F^{\prime })$;
2) $p(\mathbf{1})=1$.\medskip
By induction we obtain the equality $p(F_1 \vee ...\vee F _n)=p(F_1
)+...+p(F_n )$ for any mutually orthogonal flats $F_1 ,...,F _n$. It is
natural to call this property \emph{ortho-additivity}. The requirement 2) is
simply a normalization. Note that 1) implies $p(\mathbf{0})=0$. If the orthospace $(X,\bot )$ is orthomodular (as we shall assume) then $p$ is monotone (that is, $p(F)\le p(G)$ for $F\subset G$). When all elements of $X$ are orthogonal to each other (and $X$ is a finite set) we arrive at the conventional notion of a probability measure on $X$, see 5.1.
We have already represented (see 5.1) an arbitrary state $s$ as a function on $\mathcal{F}(\mathbb{P},\bot )$. We now assert that this function is ortho-additive.
\begin{proposition}
Axiom 4 implies that any state $s$ (as a function on $\mathcal{F}(\mathbb{P}%
,\bot )$) is a probability measure.
\end{proposition}
\textit{Proof}. Since $s(\mathbf{1})=1$, we have to check the ortho-additivity of $s$. Let $F$ and $F^{\prime }$ be two orthogonal flats, and let $G=F\vee F^{\prime }$. Since $F\subset G$, by Axiom 4 there exists a measurement $M$ such that $F=E_{M}(A)$ and $G=E_{M}(B)$ for some $A\subset B\subset O(M)$. Set $A^{\prime }=B\setminus A$.
We claim that $F^{\prime }=E_{M}(A^{\prime })$. We begin with the inclusion $\subset $. Consider an arbitrary state $t$ in $F^{\prime }$. The outcome of the measurement $M$ in the state $t$ cannot belong to $A$, owing to the orthogonality of $t$ and $F$. Hence the outcome of the measurement belongs to $A^{\prime }$ with certainty, that is, $t\in E_{M}(A^{\prime })$. Thus, $F^{\prime }\subset E_{M}(A^{\prime })$.
Suppose now that $t$ is a state from $E_{M}(A^{\prime })$, but not from $F^{\prime }$. Due to orthomodularity, we can assume that $t$ is orthogonal to $F^{\prime }$. Since $t$ is orthogonal to $F=E_{M}(A)$ as well, we conclude that $t$ is orthogonal to $F\vee F^{\prime }=G$ and therefore cannot belong to $E_{M}(A^{\prime })$, a contradiction. The claim is proven.
Now $s(F\vee F^{\prime })=s(G)=\mu _{M}(B|s)=\mu _{M}(A|s)+\mu _{M}(A^{\prime }|s)=s(F)+s(F^{\prime })$. $\Box$\medskip
In particular, if $\{b_{1},...,b_{n}\}$ is an orthobasis of a property $F$
then $s(F)=s(b_{1})+...+s(b_{n})$ for any state $s$.
\section{Impact of measurements}
Here we assume that $(\mathcal{M},\mathbb{S},\mu ,\tau )$ is a model of some measurable system satisfying Axioms 1--4. In the preceding section we have shown that the state space $\mathbb{P}$ is an ortho-separable orthomodular space. In this section we show that measurements act as orthogonal projections in this orthospace.
\subsection{Ideal measurements}
We know that measurements impact the state; that is, the state of a system is modified by the performance of measurements. Here we investigate measurements that \textquotedblleft minimally\textquotedblright\ impact the state. These measurements are called \textit{ideal}. Let us give a precise definition. Let $M\in \mathcal{M}$ be a measurement with eigensets $F(o)=E_{M}(o),\ o\in O(M)$.\medskip
\textbf{Definition.} A measurement $M$ is \emph{ideal} if, for every state $%
s $, the new state $\tau _{M,o}(s)$ belongs to the convex hull of the flat $%
F(o)\wedge (s\vee F(o)^{\bot })$.\medskip
Note that we earlier said that the transition mapping $\tau _{M,o}$ is
undefined for states belonging to $F(o)^{\bot }$. This is in agreement with
the fact that, for $s\in F(o)^{\bot }$, the flat $F(o)\wedge (s\vee
F(o)^{\bot })=F(o)\wedge F(o)^{\bot }$ is empty. On the contrary, if $s$
does not belong to $F(o)^{\bot }$ then the flat $s\vee F(o)^{\bot }$ is
strictly larger than $F(o)^{\bot }$. According to orthomodularity it
contains an element orthogonal to $F(o)^{\bot }$, that is, an element belonging to $F(o)$. Therefore the flat $F(o)\wedge (s\vee F(o)^{\bot })$ is indeed nonempty.
\begin{proposition}
Let $M$ be an ideal measurement. Suppose that a pure state $s$ belongs to one of the eigensets of $M$. Then the performance of measurement $M$ leaves the state $s$ unaffected.
\end{proposition}
In that sense an ideal measurement minimally impacts states, or produces \textquotedblleft a least perturbation\textquotedblright .\medskip
\emph{Proof.} Suppose that $s$ belongs to the eigenset $F=F(o)$. By the definition of ideality, the new state is in $F\wedge (s\vee F^{\bot })$. Since $s\in F$ we have the dual inclusion $F^{\perp }\subset \{s\}^{\perp }$. By orthomodularity of the lattice, $s^{\perp }=F^{\bot }\vee (s^{\perp }\wedge F)$. Applying $\perp $ we obtain the equality $F\wedge (s\vee F^{\bot })=s^{\perp \perp }$. By Axiom 3, this last set consists of the single element $s$. Therefore the new state coincides with $s$. $\Box$\medskip
Let us mention one more property of ideal measurements. When performing an
ideal measurement the new state $s^{\prime }$ is not orthogonal to the old
state $s$. This follows from the fact that $F\wedge (s\vee F^{\bot })$ and $%
s^{\bot }$ do not intersect. In fact,
\begin{equation*}
s^{\bot }\wedge F\wedge (s\vee F^{\bot })=(s\vee F^{\bot })^{\bot }\wedge
(s\vee F^{\bot })=0.
\end{equation*}
Strengthening Axiom 4, we postulate that there exist sufficiently many ideal measurements. Let $M\in \mathcal{M}$ be a measurement. As we know, different flats $E_{M}(o)$ are orthogonal to each other. Moreover, the join of all $E_{M}(o)$ is equal to $\mathbf{1}$. Indeed, if a state $s$ were orthogonal to every $E_{M}(o)$, then $E_{M}(o)\subset s^{\perp }$ and consequently $s(E_{M}(o))\leq s(s^{\perp })=1-s(s)=0$ for every $o$; on the other hand, $\sum_{o}s(E_{M}(o))=1$, a contradiction. This leads us to the following definition.\medskip
\textbf{Definition.} An \emph{Orthogonal Decomposition of the Unit (ODU)} is
a finite family of flats $(F_{i},\ i\in I)$ such that
a. $F_{i}$ and $F_{j}$ are orthogonal if $i\ne j$;
b. $\vee _{i\in I}F_{i}=\mathbf{1}$.\medskip
Thus, for a measurement $M$, the family of eigensets $(E_{M}(o),\ o\in O(M))$
is an ODU. The next \emph{ideality} axiom asserts that any ODU may be
obtained as the collection of the eigensets of some ideal measurement.
\begin{axiom}
For any ODU $(F_{i},\ i\in I)$, there exists an ideal measurement $M\in
\mathcal{M}$ with the outcome set $O(M)=I$ and the eigensets $E_{M}(i)=F_{i}$%
.
\end{axiom}
Axiom 5 connects ideal measurements with ODUs. This allows us to investigate the central issue of compatibility of measurements, which we do next.
\subsection{Compatible measurements}
In Section 3 we informally discussed the notion of compatible measurements. In order to consider this issue more formally we need to introduce a notion of commutativity in the orthomodular space $\mathbb{P}$. Two (or more) flats \emph{commute} (or are \emph{compatible}) if they possess a common orthobasis (see 5.5). More precisely, a family $(F_{i},\ i\in I)$ of flats \emph{commutes} if there exists an orthobasis $B$ of $\mathbb{P}$ and a family $(A_{i},\ i\in I)$ of subsets of $B$ such that each $A_{i}$ is an orthobasis of the flat $F_{i}$.
For example, flats $F$ and $G$ commute if they are comparable or orthogonal. One can show that a family $(F_{i},\ i\in I)$ of flats commutes if every two members of the family commute.\medskip
\begin{lemma}
Let flats $F$ and $G$ commute. Then $F\wedge (G\vee F^\bot)=F\wedge G$.
\end{lemma}
Indeed, let $B$ be a common orthobasis of $F$ and $G$; that is, $F$ and $G$ are the orthoclosures of some subsets $A$ and $C$ of $B$, respectively. Then $C\cup (B\setminus A)$ is an orthobasis of the flat $G\vee F^{\bot }$, and $A\cap (C\cup (B\setminus A))=A\cap C$ is an orthobasis of the flat $F\wedge (G\vee F^{\bot })$. On the other hand, $A\cap C$ is an orthobasis of the flat $F\wedge G$. $\Box $
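As a concrete instance (our own illustration in the setting of Example 4, with $H=\mathbb{R}^{3}$ and common orthobasis $B=\{e_{1},e_{2},e_{3}\}$): if $F$ is the flat generated by $\{e_{1},e_{2}\}$ and $G$ the flat generated by $\{e_{2},e_{3}\}$, then $F^{\bot }$ is generated by $e_{3}$, so $G\vee F^{\bot }=G$ and both sides of the lemma equal the flat generated by $e_{2}$.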
Let $M$ and $M^{\prime }$ be two ideal measurements with eigensets $E_{M}(o)$, $o\in O(M)$, and $E_{M^{\prime }}(o^{\prime })$, $o^{\prime }\in O(M^{\prime })$.\medskip
\textbf{Definition.} The ideal measurements $M$ and $M^{\prime }$ are \emph{compatible} (or \emph{commute}) if every $E_{M}(o)$ commutes with every $E_{M^{\prime }}(o^{\prime })$.\medskip
We assert that compatible measurements are compatible in the previously mentioned informal sense; that is, performing one of the measurements does not affect the results of the other measurement. Indeed, suppose that a state $s$ is in an eigenset $F:=E_{M}(o)$, so that performing $M$ gives outcome $o$. Suppose further that we perform measurement $M^{\prime }$ and obtain an outcome $o^{\prime }$. Then the new state $s^{\prime }$ is in the flat $G\wedge (s\vee G^{\perp })$, where $G=E_{M^{\prime }}(o^{\prime })$. All the more, the new state $s^{\prime }$ is in the flat $G\wedge (F\vee G^{\perp })=F\wedge G$, according to Lemma 6. Therefore $s^{\prime }$ remains in $F$, and if we perform the measurement $M$ again we obtain the same outcome $o$.
We next show that two ideal measurements are compatible if and only if they are ``coarsenings'' of a third (finer) measurement. First, a definition.\medskip
\textbf{Definition.} A measurement $M^{\prime }$ is \emph{coarser} than a
measurement $M$ (and $M$ is \emph{finer} than $M^{\prime }$) if every
eigenset of $M$ is contained in some eigenset of $M^{\prime }$.\medskip
In other words, outcomes of $M^{\prime }$ can be obtained from outcomes of $%
M $ by means of a mapping $f:O(M)\rightarrow O(M^{\prime })$. In this case
the eigensets $E_{M^{\prime }}(o^{\prime })$ have the form $%
E_{M}(f^{-1}(o^{\prime }))$.
If $M^{\prime }$ and $M^{\prime \prime }$ are both coarsenings of $M$ then they are compatible: it suffices to take an orthobasis common to all eigensets of the measurement $M$.
Conversely, let $M$ and $M^{\prime }$ be two compatible measurements. Then there exists an orthobasis $B$ common to the eigensets of $M$ and $M^{\prime }$. If $B$ is a finite set, then we can take as $M^{\prime \prime }$ the (complete) measurement corresponding to the ODU $B$. In the general case, we have to consider the family of flats $(E_{M}(o)\wedge E_{M^{\prime }}(o^{\prime }))$, where $o$ runs over $O(M)$ and $o^{\prime }$ runs over $O(M^{\prime })$. Because of compatibility, these flats form an ODU, and we take as $M^{\prime \prime }$ the corresponding measurement, which exists by Axiom 5. Thus, we have proved the following.
\begin{theorem}
Ideal measurements $M$ and $M^{\prime }$ are compatible if and only if there
exists an ideal measurement refining both $M$ and $M^{\prime }$.
\end{theorem}
\subsection{Canonical decomposition of a model}
We now show that a model of a measurable system satisfying Axioms 1 to 5 can be written as the direct sum of its irreducible submodels. The argument below holds for any orthospace but is of greatest interest for ortho-separable orthospaces.
Let $(X,\bot )$ be an orthospace. We say that two elements of $X$ are \emph{connected} if they can be linked by a chain of pairwise non-orthogonal (tolerant) elements. This relation is an equivalence relation and therefore divides the set $X$ into classes of connected elements, which we denote $X(\omega ),\ \omega \in \Omega $. Elements from different connected components are orthogonal to each other; therefore the $X(\omega )$ are flats. These flats are called \emph{central} or \emph{classical}.
If $F$ is a flat in $X$ then, for any $\omega $, the set $F\cap X(\omega )$ is a flat in the orthospace $X(\omega )$ equipped with the induced orthogonality relation. Conversely, suppose we have a collection of flats $F(\omega )$ in $X(\omega )$, $\omega \in \Omega $. Then the union $F=\cup _{\omega }F(\omega )$ is a flat in $X$. In other words, the ortholattice $\mathcal{F}(X,\bot )$ is the direct (orthogonal) product of the ortholattices $\mathcal{F}(X(\omega ),\bot )$.
Let us go back to a model of a measurable system with orthospace $(\mathbb{P},\bot )$. As any orthospace, $\mathbb{P}$ decomposes into connected (or irreducible) components $\mathbb{P}(\omega ),\ \omega \in \Omega $. Since these components form an ODU, by Axiom 5 there exists a corresponding ideal measurement $C$. Since any state is in some component, Proposition 4 implies that the measurement $C$ affects no state. It is natural to call the measurement $C$ \textit{classical} and to call the corresponding components $\mathbb{P}(\omega )$ \textit{classical super-states}. The classical measurement $C$ commutes with any ideal measurement. For this reason we can, without loss of generality, consider only irreducible models.
\subsection{Axiom of Purity}
The ideality property significantly narrows down the range of the possible impact of a measurement on the state. We know that when we obtain the result $o$ the system moves from state $s$ to a state $s^{\prime }$ belonging to the convex hull of the flat $E_{M}(o)\wedge (s\vee E_{M}(o)^{\bot })$. If this flat is an atom, the new state is uniquely determined. But if this flat is not an atom (which is entirely possible) the new state may be a probabilistic mixture of states in $E_{M}(o)\wedge (s\vee E_{M}(o)^{\bot })$.
To see this, let us consider the example of a fly in a $3\times 2$ box. There are two measurements: $LCR$ and $FB$. Suppose the state $F$ is realized as a probability measure with $F(L)=F(C)=F(R)=1/3$ (of course, $F(F)=1$ and $F(B)=0$). If, in the state $F$, we perform a measurement with eigensets $\{L\}$ and $\{L\}^{\perp }=\{C,R\}$ and obtain outcome \textquotedblleft not $L$\textquotedblright , we may conclude that the image of $F$ is not a pure state but the equiprobable mixture of states $C$ and $R$. Such a conclusion is in agreement with the ideality of the measurement $\{C,R\}$, which sends the state $F$ into $\{C,R\}\wedge (F\vee \{C,R\}^{\bot })=\{C,R\}\wedge (F\vee L)=\{C,R\}\wedge \mathbf{1}=\{C,R\}$.
We introduce a last axiom guaranteeing that under the impact of a
measurement any pure state jumps into another pure state. Namely, we
consider the following \emph{axiom of purity }
\begin{axiom}
For any pure state $s\in \mathbb{P}$ and any flat $F$ the flat $F\wedge
(s\vee F^{\bot })$ is an atom of the lattice $\mathcal{F}(\mathbb{P},\perp )$%
.
\end{axiom}
We have introduced above a number of non-trivial axioms. We assert that they are all compatible with each other. Indeed, Example 2 (A fly in a box) gives a model satisfying all the axioms. Another example is the so-called Hilbert space model of Quantum Mechanics.\medskip
\textbf{Example 4 }(continued). Let $H$ be a (finite-dimensional) Euclidean space, as in Example 4, and let $\mathbb{P}$ be the set of all one-dimensional vector subspaces of $H$. The orthogonality relation is clear from Example 4. Any flat is given by a vector subspace $V$ and consists of the one-dimensional subspaces of $V$. Measurements are identified with ODUs. Suppose that $(V_{i},i\in I)$ is a family of pairwise orthogonal vector subspaces of $H$ with $\sum_{i}V_{i}=H$, and let $v$ be a (non-zero) vector in $H$ representing some state $s$. Denote by $v_{i}$ the orthogonal projection of $v$ on the subspace $V_{i}$. Then under the impact of the corresponding measurement the state $s$ moves into the state $v_{i}$ with probability $\cos ^{2}(\varphi )$, where $\varphi $ is the angle between the vectors $v$ and $v_{i}$ (the probability is 0 if $v_{i}=0$), and gives the outcome $i$. By construction, this measurement is ideal. It is easy to check that all the other axioms are also fulfilled.\medskip
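For a two-dimensional illustration (our own): let $H=\mathbb{R}^{2}$, $V_{1}=\mathrm{span}(e_{1})$, $V_{2}=\mathrm{span}(e_{2})$, and let the state be represented by $v=(\cos \theta ,\sin \theta )$. Then
\begin{equation*}
v_{1}=(\cos \theta ,0),\qquad v_{2}=(0,\sin \theta ),
\end{equation*}
and the measurement yields outcome $1$ with probability $\cos ^{2}\theta $ and outcome $2$ with probability $\sin ^{2}\theta $; the two probabilities sum to $1$, as ortho-additivity requires.\medskip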
\textbf{Remark.} In some sense Example 4 is more than a special case. If the height of $\mathcal{F}$ is more than 3 then the lattice $\mathcal{F}$ can be realized as an (ortho)lattice of vector subspaces of some Hermitian space over some $\ast $-field $K$.\footnote{The case of height 1 is trivial: $\mathcal{F}=\{0,1\}$ and $\mathbb{P}$ consists of a single state. The case of height 2 is of more interest: the (ortho)lattice is $\mathcal{F}=\{0,1\}\cup \mathbb{P}$, and the mapping $s\mapsto s^{\bot }$ acts on the set $\mathbb{P}$ as an involution without fixed points. The case of height 3 is very intricate and unclear.} The
details can be found in \cite{Belcas81} or in \cite{holland95}. If we
additionally require that the orthospace $\mathbb{P}$ is compact and
connected (as a topological subspace of $\Delta (\mathbb{P} ,\bot )$) then
the field $K$ is the real field $\mathbb{R}$, the complex field $\mathbb{C}$
or the skew field $\mathbb{H}$ of quaternions.
\section{Non-classical models in social sciences: A discussion}
In this last section we discuss some of the key properties of general measurable systems in order to help the reader assess their relevance for the social sciences. We wish to emphasize that this section is highly exploratory and should be viewed as a first step that only aims at opening the discussion.
When applying the theory of measurable systems presented in this paper to behavioral and social sciences, the general idea is to view an individual as a measurable system. She is characterized by her type, which encapsulates information about her preferences, attitudes, beliefs, feelings, etc. A decision situation (a situation in which she must choose an alternative out of a set of alternatives) or a questionnaire is a device that measures her type. Actual behavior, e.g., the choice made in a decision situation, the actions taken in a game, or the answers given to a questionnaire, constitutes the measurement outcomes.
In the Introduction we formulated the question of whether it is possible to build an interesting theory about a system that changes when being measured. We answered in the affirmative, imposing a series of properties on measurements and on their interaction with the system. The state space representing the system is then endowed with the structure of an atomistic orthomodular orthospace, and the states are realized as probability measures on the state space. We next propose a psychological and behavioral interpretation of some of those properties.
\textit{First kindness}
The first key property of a measurable system is that measurements satisfy first-kindness. Classical measurement theory (including revealed preference theory) also relies on such an assumption of repeatability. Some reservations are in order. We do propose that a choice be viewed as a measurement outcome that reveals, or more precisely actualizes, preferences. But in many settings the repeated character of a choice changes the decision situation. Clearly, a repeated interaction in a game situation is not equivalent to a repetition of one and the same decision situation; the prolific theory of repeated games amply illustrates this. So the repeatability we have in mind pertains to elementary situations.
Compared with standard revealed preference theory, the requirement of repeatability is limited in two respects. First, the property applies to a smaller set of choice experiments: as we illustrated in Section 4, not all choice sets can be associated with a first-kind measurement unless we are dealing with a fully classical agent. Second, repeatability is only required of two \textit{consecutive} identical measurements. If another (incompatible) measurement is performed on the system in between, the initial result may not be repeated. Therefore, although the property of first-kindness is somewhat restrictive, it is still far less demanding than the standard classical assumption. Yet it is not an innocuous assumption; in particular, it precludes stochastic preferences.\bigskip
\textit{Invariance}
Axiom 1 is an axiom of invariance. In the context of choice theory, it is related to the principle of procedure invariance assumed in classical rational choice theory. This principle states that a preference relation should not depend on the procedure of elicitation. Numerous experimental studies of choice versus pricing have exhibited examples of violation of procedure invariance. It is beyond the scope of this short comment to systematically compare the classical concept of procedure invariance with Axiom 1. We confine ourselves to remarking that Axiom 1 applies to systems in the same state, and to recalling that it concerns first-kind measurements only. In particular, we do not take for granted that the pricing or matching procedures, which are considered computationally relatively demanding (compared with a choice procedure), can be performed as a single measurement rather than as a sequence of (possibly incompatible) measurements.\bigskip
Axioms 2, 3, 4 and 5 all formulate requirements on the richness of the set of measurements. Another way to look at them, which lends itself to an interpretation in choice theory, is that when the set of primitive measurements is actually limited (in Model 2 there could only be 4 measurements), the axioms imply that choice-measurements may not all be incompatible with each other. Indeed, it is immediate to see that these axioms are fulfilled for compatible measurements, as these can be combined into new measurements satisfying the axioms. For this very reason these axioms seem very natural to a classically minded person. In a non-classical world they do not follow naturally, as demonstrated by the fact that they are violated in Model 2. Hence, in a choice-theoretic context, these axioms put a limit on how ``non-classical'' an agent (a behavioral system) is allowed to be. Model 3 satisfies all these axioms and still allows for so-called ``behavioral anomalies''.\bigskip
\textit{States and types}
The notion of state is closely related to Harsanyi's classical notion of type, which is why we use this term when referring to the state of agents. The Harsanyi type of an agent is a complete description of her preferences, beliefs and private information, such that it allows predicting the agent's behavior. By observing past behavior, we learn about an agent's type and can make finer predictions of her future behavior. In Harsanyi's classical world, information about past behavior is used to predict future behavior relying on Bayesian updating. The same holds for any compatible choice-measurements made on a non-classical agent. But generally (when some measurements are incompatible) learning is not Bayesian. In Model 3 of Section 4, we saw that the performance of the $ab$ measurement on the agent in state $s\in \left\{ \left[ c>b>a\right] ,\ \left[ c>a>b\right] \right\} $ erases information about her preferences concerning the ordering between $b$ and $c$, so the agent can then choose $b$ in $\left\{ b,c,d\right\} $.
As in Harsanyi's model, a non-classical pure type is maximal information about the agent. But in contrast with Harsanyi, a pure type may still be dispersed (cf.\ Section 3.4), so knowing the pure type does not allow one to predict behavior with certainty. This is reflected in the structure of the type space. In Harsanyi's type space, types are orthogonal to each other. In the non-classical model not all (pure) types are orthogonal. Instead, non-orthogonal states are connected with each other in the sense that under the impact of a measurement the system can transit from one state to another. This strongly contrasts with Harsanyi's static model, where the act of choosing (i.e., a measurement of the type) has no impact on the type, only on payoffs. The non-classical type space models a changing agent; we return to this aspect below.\bigskip
\textit{Incompatible measurements}
In the models of rational choice of Section 4, measurements correspond to sets of alternatives from which the agent makes a choice. Whichever model we choose, the existence of incompatible choice-measurements implies that the agent cannot have a preference order on all items \textit{simultaneously}. Our theory gives a precise meaning to this impossibility. It means: i) if the ordering over some subset of items is known (possibly only to the agent) then her preferences over another, incompatible subset are random (dispersed pure states); ii) as the agent makes a choice in a given choice set, her type (preferences) is modified (measurements affect the state). A non-classical agent does not have a fixed type (preferences). The non-classical model is consistent with the hypothesis that an agent's preferences are shaped in the process of elicitation, as proposed by Kahneman and Tversky (see Introduction).
Thus, the distinctive feature of the non-classical model, namely the existence of incompatible measurements (or, alternatively, the existence of dispersed states), delivers a formulation of bounded rationality as the impossibility of comparing and ordering all items simultaneously. We view this formulation as particularly interesting because it is also linked to the idea that an agent's preferences are ``context-dependent'' (see \cite{Katver00}, part 6). Both these themes, the issue of comparability in the universal set of items and the intrinsic contextuality of preferences, are central in behavioral and experimental economics.
As of today there is no consensus in physics about the reasons for the non-classicality of quantum physical phenomena. There exists, however, a huge literature on the subject in epistemology. We do not wish to speculate on the reasons for such phenomena in human behavior. Instead, we note that the (possible) non-classicality of agents invites social scientists to make interactions the central object of their investigation. Clearly, game theory is all about interactions, but it maintains that the type of agents is exogenous. This cannot be upheld if agents are non-classical systems. Instead we ought to make agents' types (partly) endogenous to the interaction.\footnote{Geanakoplos et al. (1989) pioneered an approach in psychological games where an agent's motivation (utility) depends on others' beliefs and is therefore endogenous to the interaction.} We are currently investigating simpler game situations along these lines, and we trust that this is a promising avenue of research.\bigskip
\textit{The stability of the state}
The minimal perturbation principle (ideality) means that a coarse measurement leaves unperturbed the uncertainty not sorted out by its set of outcomes. Axiom 5 demands that there exist sufficiently many such ideal measurements.
In applications to behavioral sciences, an interpretation is that when an individual is asked to choose out of an initial ``state of hesitation'' (a dispersed pure state), this hesitation is resolved only so far as is needed to produce an answer, but no more. The remaining uncertainty is left \textquotedblleft untouched\textquotedblright . One way to understand this is that when we assume that a choice measurement is ideal, it is as if we assumed that the individual proceeds by taking the ``shortest way'' to resolve uncertainty. For instance, suppose as in Model 3 that we ask an individual to make a choice out of $\{ a,b,c\}$ and that her initial state is a superposition (see Section 5.3) of all six orders. The minimal perturbation principle entails that the individual will proceed so as to find the most preferred item without fully ordering the 3 items. Uncertainty not resolved by an ideal choice-measurement is left untouched. Therefore we say that ideality implies a certain stability of the type of a non-classical agent. This can be contrasted with the classical assumption of complete stability, i.e., that revealing preferences does not affect them at all. Nevertheless, ideality may turn out to be a rather demanding property, which should be viewed as an approximation.
Axiom 6 further specifies the impact of a measurement: it tells us exactly where a measurement takes the state. We focus on an informational interpretation. In a social science context it implies that whatever choice the individual makes that changes her (pure) type, the new behavioral type encapsulates maximal information about behavior, as did the initial type, and by ideality we know the new state.\footnote{Information in a pure state is maximal in the sense that no new information can be obtained from any measurement without losing some other information, i.e., information that was true in the initial state but is no longer true in the new state.} In a classical context such a pure state corresponds to a state of complete information. In a non-classical context we know that there exist dispersed pure states, and so a maximal-information type (a pure type) does not uniquely predict behavior in all circumstances.\bigskip
\textit{Caveat}
In applications to behavioral sciences, a less attractive feature of our framework is that a measurement erases information about the previous type. We should however recall that a person is expected to be composed of a number of irreducible systems. The loss of memory only applies locally, within one (irreducible) sub-system. Even within such a system, when assuming ideality, memory is fully lost only in the case where the measurement is complete (not coarse). Yet our approach implies that, to some extent, an individual's previous choices are not relevant to her current type. She may recall them, but she experiences that she has changed. Implicitly, we assume that at a higher cognitive level the individual accepts changes in, e.g., her tastes that are not motivated by new cognition.
\section{Concluding remarks}
In this paper, we have described the basic structure of non-classical
measurement theory. The objective has been to investigate, from a
theoretical point of view, whether this framework could be suitable for
describing, explaining and predicting human behavior.
As a non-classical measurable system, an agent is characterized by her type (preferences, attitudes, beliefs, etc.), which changes when she makes a choice actualizing her type. As a consequence, behavior exhibits an irreducible uncertainty. Yet, as we impose some axioms on the interaction between measurements and the system, behavior is characterized by sufficient regularity to allow for predictions. We have argued that some of the basic axioms and properties that underlie the theory can be given a meaningful interpretation consistent with central themes addressed in psychology and the behavioral and social sciences. We also argued that the distinctive feature of non-classical measurement theory, i.e., the existence of incompatible measurements, provides an appealing formulation of bounded rationality.
\newpage
\section{Introduction}
Interior methods for nonlinear optimization~\cite{wachter2006implementation,Byrd2006,Vanderbei1999,GONDZIO1995} are essential in many areas of science and engineering. They are commonly used in model predictive control with applications to robotics \cite{sleiman2019contact}, autonomous cars \cite{wang2020non}, aerial vehicles \cite{jerez2017forces}, combustion engines \cite{keller2020model}, and heating, ventilation and air-conditioning systems \cite{ma2014stochastic}, to name a few. Interior methods are also used in public health policy strategy development \cite{acemoglu2020optimal,silva2013optimal}, data fitting in physics \cite{reinert2018semilocal}, genome science \cite{andronescu2010computational,varoquaux2014statistical}, and many other areas.
Most of the computational cost within interior methods is in solving linear systems of Karush-Kuhn-Tucker (KKT) type~\cite{NocW2006}. The linear systems are sparse, symmetric indefinite, and usually
ill-conditioned and difficult to solve. Furthermore, implementations of interior methods for nonlinear optimization, such as the filter-line-search approach in Ipopt~\cite{wachter2006implementation} and HiOp~\cite{hiopuserguide}, typically expect the linear solver to provide the matrix inertia (number of positive, negative and zero eigenvalues)
to determine if the system should be regularized. (Otherwise, interior methods perform curvature tests to ensure descent in a certain merit function~\cite{Chiang2016}.) Relatively few linear solvers are equipped to solve KKT systems, and even fewer to run those computations on hardware accelerators such as graphic processing units (GPUs)~\cite{swirydowicz2020linear}.
At the time of writing, six out of the ten most powerful computers in the world have more than 90\% of their compute power in hardware accelerators \cite{top500}. Hardware accelerator technologies are becoming ubiquitous in off-the-shelf products, as well. In order to take advantage of these emerging technologies, it is necessary to develop fine-grain parallelization techniques tailored for high-throughput devices such as GPUs.
For such sparse problems, pivoting becomes extremely expensive, as data management takes a large fraction of the total time compared to computation~\cite{LDLpivot}. Unfortunately, \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization is unstable without pivoting. This is why \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace approaches, typically used by interior methods for nonlinear problems on CPU-based platforms \cite{tasseff2019}, have not performed as well on hardware accelerators \cite{swirydowicz2020linear}. Some iterative methods such as MINRES~\cite{MINRES} for general symmetric matrices can make efficient (albeit memory-bandwidth-limited) use of GPUs because they only require matrix-vector multiplications at each iteration, which can be highly optimized, but they have limited efficiency when the number of iterations becomes large. Another approach for better-conditioned KKT systems is to use a modified version of the preconditioned conjugate gradient (\textbf{PCG}) method with implicit-factorization preconditioning~\cite{Dollar2006}. In our case, the ill-conditioned nature of our linear problems means that iterative methods alone are not practical~\cite{MINRES,Pyzara2011}.
We propose a hybrid direct-iterative method for solving KKT systems that is suitable for execution on hardware accelerators. The method only requires direct solves using a Cholesky factorization, as opposed to \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace, which means it avoids pivoting. We provide preliminary test results that show the practicality of our approach. Our test cases are generated by optimal power flow analysis \cite{chakrabarti2014security, kourounis2018toward, molzahn2017survey} applied to realistic power grid models that resemble actual grids, but do not contain any proprietary data~\cite{ACOPF}. These systems are extracted from optimal power flow analysis using Ipopt~\cite{wachter2006implementation} with MA57 as its linear solver. Solving such sequences of linear problems gives us insight into how our linear solver behaves within an interior method. Using these test cases allows us to assess the practicality of our hybrid approach without interfacing the linear solver with an optimization solver.
Power grids are representative of very sparse and irregular systems commonly found in engineering disciplines.
The paper is organized as follows. \Cref{tab:notation} defines our notations. \cref{sec:nonlinear} describes the optimization problem being solved. \cref{sec:KKT} defines the linear systems that arise when an interior method is applied to the optimization problem. In \cref{sec:block2x2}, we derive a novel hybrid direct-iterative algorithm to utilize the block structure of the linear systems, and prove convergence properties for the algorithm. Numerical tests in \cref{sec:testing} show the accuracy of our algorithm on realistic systems, using a range of algorithmic parameter values.
\Cref{sec:compare} compares our C\raisebox{1.5pt}{\small $++$}\xspace and CUDA implementation to MA57~\cite{Duff2004}. \Cref{sec:itdirect} explains our
decision to use a direct solver in the inner loop of our algorithm. In \cref{sec:summary}, we summarize our main contributions and results. \Cref{appA} provides supplemental figures for \cref{sec:testing}.
\label{sec:notation}
\begin{table}[t]
\caption{\label{tab:notation} Notation. SP(S)D stands for symmetric positive (semi)definite}
\centering
\smallskip
\footnotesize
\begin{tabular}{lrrr} \toprule
Variable & Properties & Functions & Meaning
\\ \midrule
$M$ & Symmetric matrix & $\lambda_{\max}(M)$, $\lambda_{\min}(M)$ & largest, smallest (most negative) eigenvalues
\\$M$& SPSD matrix & $\lambda_{\min *}(M)$& the smallest nonzero eigenvalue
\\$M$& SPD matrix & $\kappa(M)=\lambda_{\max}(M)/\lambda_{\min}(M)$& condition number
\\ $J$& rectangular matrix & null($J$)& nullspace
\\ $x$ & vector, $x>0$ & $X\equiv \text{diag}(x)$ & A diagonal matrix $X$, $X_{ii}=x_i$
\\ $e_{p}$ & vector & &a $p$-vector of $1$s
\\ \bottomrule
\end{tabular}
\end{table}
\section{Nonlinear optimization problem}
\label{sec:nonlinear}
We consider constrained nonlinear optimization problems of the form
\begin{subequations}\label{problemstatement}
\begin{align}
&& \min_{x\in\mathbb{R}^\nx}\ \ & f(x) \label{problemstatement_a}
\\ && \text{s.t.} \ \ & c(x) = 0, && \label{problemstatement_b}
\\ && & d(x) \ge 0, && \label{problemstatement_c}
\\ && & x \ge 0, && \label{problemstatement_d}
\end{align}
\end{subequations}
where $x$ is an $\nx$-vector of optimization parameters, $f:~\mathbb{R}^\nx\rightarrow \mathbb{R}$ is a possibly nonconvex objective function, $c:~\mathbb{R}^\nx\rightarrow \mathbb{R}^{m_c}$ defines $m_c$ equality constraints, and $d:~\mathbb{R}^\nx\rightarrow \mathbb{R}^{m_d}$ defines $m_d$ inequality constraints.
(Problems with more general inequalities can be treated in the same way.)
Functions $f(x)$, $c(x)$ and $d(x)$ are assumed to be twice continuously differentiable.
Interior methods enforce bound constraints \eqref{problemstatement_d} by adding barrier functions to the objective \eqref{problemstatement_a}:
$$
\min_{x\in\mathbb{R}^\nx,\,s\in\mathbb{R}^{m_d}}
\ f(x) - \mu\sum_{j=1}^\nx \ln{x_j} - \mu\sum_{i=1}^{m_d} \ln{s_i},
$$
where the inequality constraints \eqref{problemstatement_c}
are treated as equality constraints $d(x)-s=0$ with slack variables $s\ge 0$. The barrier parameter $\mu>0$ is reduced toward zero using a continuation method to obtain a solution that is close to the solution of \eqref{problemstatement} to within a solver tolerance.
Interior methods are most effective when exact first and second derivatives are available, as we assume for $f(x)$, $c(x)$, and $d(x)$. We define
$J(x) = \nabla c(x)$ and $J_d(x)=\nabla d(x)$ as the sparse Jacobians for the constraints.
The solution of a barrier subproblem satisfies the nonlinear equations
\begin{subequations}\label{nonlinearequations}
\begin{align}
\nabla f(x) + J^T y + J_d^T y_d - z_x &= 0
\label{nonlinearequations_a}
\\ y_d - z_s &= 0
\label{nonlinearequations_b}
\\ c(x)&=0
\label{nonlinearequations_c}
\\ d(x)-s&=0
\label{nonlinearequations_d}
\\ Xz_x- \mu e_\nx&=0
\label{nonlinearequations_e}
\\ Sz_s - \mu e_{m_d}&= 0 \label{nonlinearequations_f},
\end{align}
\end{subequations}
where $x$ and $s$ are primal variables, $y$ and $y_d$ are Lagrange multipliers (dual variables) for constraints \eqref{nonlinearequations_c}--\eqref{nonlinearequations_d}, and $z_x$ and $z_s$ are Lagrange multipliers for the bounds $x\ge 0$ and $s \ge 0$. The conditions $x>0$, $s>0$, $z_x>0$, and $z_s>0$ are maintained throughout, and the matrices $X\equiv \text{diag}(x)$ and $S\equiv \text{diag}(s)$ are SPD.
\begin{comment}
\begin{align} \label{linsys6by6}
\begin{bmatrix}
H & 0 & J^T & J_d^T & -I & 0
\\ 0 & 0 & 0 & -I & 0 & -I
\\ J & 0 & 0 & 0 & 0 & 0
\\ J_d & -I & 0 & 0 & 0 &0
\\Z_x & 0 & 0 &0 &X & 0
\\0 & Z_s & 0 &0 & 0 & S
\end{bmatrix}
\begin{bmatrix}
\Delta x \\ \Delta s \\ \Delta y \\ \Delta y_d \\ \Delta z_x \\ \Delta z_s
\end{bmatrix}=
\begin{bmatrix}
\tilde{r}_{x} \\ r_s \\ r_{y} \\ r_{yd} \\ r_{z_x} \\ r_{z_c}
\end{bmatrix},
\end{align}
where the $r$s correspond respectively to $-$ the left hand sides (\textbf{LHS}) of \cref{nonlinearequations}.
\end{comment}
Analogously to \cite{wachter2006implementation}, at each continuation step in $\mu$ we solve nonlinear equations \eqref{nonlinearequations} using a variant of Newton's method. Typically $z_x$ and $z_s$ are eliminated from the linearized version of \eqref{nonlinearequations} by substituting the linearized versions of \eqref{nonlinearequations_e} and \eqref{nonlinearequations_f} into the linearized versions of \eqref{nonlinearequations_a} and \eqref{nonlinearequations_b}, respectively, to obtain a smaller \textit{symmetric} problem. Newton's method then calls the linear solver to solve a series of linearized systems
$K_k \Delta x_k = r_k$, $k=1,2,\dots$, of block $4 \times 4$ form
\begin{align} \label{linsys4x4}
\overbrace{\begin{bmatrix}
H + D_x & 0 & J^T & J_d^T
\\ 0 & D_s & 0 & -I
\\ J & 0 & 0 & 0
\\ J_d & -I & 0 & 0
\end{bmatrix}}^{K_k}
\overbrace{\begin{bmatrix}
\Delta x \\ \Delta s \\ \Delta y \\ \Delta y_d
\end{bmatrix}}^{\Delta x_k}=
\overbrace{\begin{bmatrix}
\tilde{r}_{x} \\ r_s \\ r_{y} \\ r_{yd}
\end{bmatrix}}^{r_k},
\end{align}
where index $k$ denotes optimization solver iteration (including continuation step in $\mu$ and Newton iterations), each $K_k$ is a KKT-type matrix (with saddle-point structure), vector $\Delta x_k$ is a search direction\footnote{Search directions are defined such that $x_{k+1}=x_{k}+\alpha \Delta x$ for some linesearch steplength $\alpha>0$.} for the primal and dual variables, and
$r_k$ is derived from the residual vector for \eqref{nonlinearequations}
evaluated at the current value of the primal and dual variables (with $\norm{r_k} \rightarrow 0$ as the method converges):
\begin{align*}
\tilde{r}_{x} &= -(\nabla f(x) + J^T y + J_d^T y_d + \mu X^{-1}e_\nx),
\\ r_s &= - (y_d +\mu S^{-1}e_{m_d}),
\quad r_{y}= - c(x),
\quad r_{yd}=s-d(x).
\end{align*}
With $Z_x\equiv \text{diag}(z_x)$ and $Z_s\equiv \text{diag}(z_s)$, the sparse Hessian
\begin{equation*}
H \equiv \nabla^2 f(x) + \sum_{i=1}^{{m_c}} y_{c,i} \nabla^2 c_i(x) + \sum_{i=1}^{{m_d}} y_{d,i} \nabla^2 d_i(x)
\end{equation*}
and diagonal $D_x \equiv X^{-1} Z_x$ are $\nx \times \nx$ matrices, $D_s \equiv S^{-1} Z_s$ is a diagonal ${m_d}\times {m_d}$ matrix, $J$ is a sparse ${m_c}\times\nx$ matrix, and $J_d$ is a sparse ${m_d} \times \nx$ matrix. We define $m\equiv {m_c}+{m_d}$, $n\equiv \nx+{m_d}$, and $N\equiv m+n$.
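To make the elimination explicit (a sketch of the standard reduction, consistent with the blocks above): linearizing \eqref{nonlinearequations_e} gives
\begin{equation*}
Z_x \Delta x + X \Delta z_x = \mu e_\nx - X z_x,
\qquad\text{so}\qquad
\Delta z_x = X^{-1}(\mu e_\nx - X z_x) - D_x \Delta x,
\end{equation*}
and substituting $\Delta z_x$ into the linearized \eqref{nonlinearequations_a} yields the $(1,1)$ block $H + D_x$ in \eqref{linsys4x4}; the block $D_s$ arises analogously from \eqref{nonlinearequations_f} and \eqref{nonlinearequations_b}.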
Interior methods may take hundreds of iterations (but typically not thousands) before they converge to a solution. All $K_k$ matrices have the same sparsity pattern, and their nonzero entries change slowly with~$k$. An interior method can exploit this
by reusing output from linear solver functions across multiple iterations $k$:
\begin{itemize}
\item Ordering and symbolic factorization are needed only once because the sparsity pattern is the same for all $K_k$.
\item Numerical factorizations can be reused over several adjacent Newton's iterations, e.g., when an inexact Newton solver is used within the optimization algorithm.
\end{itemize}
Operations such as triangular solves have to be executed at each iteration $k$.
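As a minimal illustration of this reuse pattern (our own sketch, using Eigen's \texttt{SimplicialLDLT} purely for exposition; this is not the solver proposed in this paper, and an \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization without pivoting may fail on indefinite $K_k$):
\begin{verbatim}
// Illustrative only: amortizing the symbolic analysis over a sequence
// of systems K_k dx_k = r_k that share one sparsity pattern (Eigen 3.x).
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <vector>

void solve_sequence(const std::vector<Eigen::SparseMatrix<double>>& K,
                    const std::vector<Eigen::VectorXd>& r,
                    std::vector<Eigen::VectorXd>& dx) {
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
  solver.analyzePattern(K[0]); // ordering + symbolic factorization: once
  dx.resize(K.size());
  for (std::size_t k = 0; k < K.size(); ++k) {
    solver.factorize(K[k]);    // numeric factorization: values change,
                               // pattern does not
    dx[k] = solver.solve(r[k]); // triangular solves: every iteration
  }
}
\end{verbatim}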
The workflow of the optimization solver with calls to different linear solver functions is shown in \cref{fig:workflow}
(where $K_k \Delta x_k = r_k$ denotes the linear system to be solved at each iteration).
The main \emph{optimization} solver loop is the top feedback loop in \cref{fig:workflow}. It is repeated until the solution is optimal or a limit on optimization iterations is reached. At each iteration, the residual vector $r_k$ is updated. Advanced implementations
have control features to ensure stability and convergence of the optimization solver. The lower feedback loop in \cref{fig:workflow} shows linear system regularization by adding a diagonal perturbation to the KKT matrix. One such perturbation removes the singularity that arises when there are redundant constraints \cite[Sec.~3.1]{wachter2006implementation}. The linear solver could take advantage of algorithm control like this and request matrix perturbation when beneficial.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{figs_combined/workflow.jpg}
\caption{Optimization solver workflow showing invocation of key linear solver functions. The top feedback loop represents the main optimization solver iteration loop. The bottom feedback loop is the optimization solver control mechanism to regularize the underlying problem when necessary.}
\label{fig:workflow}
\end{figure*}
\section{Solving KKT linear systems}
\label{sec:KKT}
\hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization via MA57 \cite{Duff2004} has been used effectively for extremely sparse problems on traditional CPU-based platforms, but is not suitable for the fine-grain parallelization required for GPU acceleration.
Parallel and GPU accelerated direct solve implementations such as SuperLU~\cite{Li2005,SuperLUweb}, STRUMPACK~\cite{Rouet2016, SSTRUMPACKweb}, and PaStiX~\cite{PaStiX,pastixweb} exist for general symmetric indefinite systems (although the first two are designed for general systems), but these software packages are designed to take advantage of dense blocks of the matrices in denser problems and do not perform well on our systems of interest, which do not yield these dense blocks~\cite{swirydowicz2020linear,he2015gpu}.
The fundamental issue with using GPUs for $\hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace$ is that this factorization is not stable without pivoting~\cite{GV4}. Pivoting requires considerable data movement and, as a result, a substantial part of the run time is devoted to memory access and communication. Any gains from the hardware acceleration of floating point operations are usually outweighed by the overhead associated with permuting the system matrix during the pivoting. This is especially burdensome because both rows and columns need to be permuted in order to preserve symmetry~\cite{LDLpivot}. While either of the two permutations can be performed efficiently on its own with an appropriate data structure for the system's sparse matrix (\textit{i.e.}, compressed sparse row storage for row permutations and, analogously, compressed sparse column storage for column permutations), swapping both rows and columns simultaneously is necessarily costly.
Here we propose a method that uses sparse Cholesky factorization (in particular, a GPU implementation). Cholesky factorization is advantageous for a GPU implementation because it is stable without pivoting and can use GPUs efficiently compared to \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace~\cite{RENNICH2016140}. Furthermore, the fill-reducing ordering of the unknowns and the associated symbolic factorization can be established without considering the numerical values, and only once at the beginning of the optimization process; hence, their considerable computational cost is amortized over the optimization iterations.
\begin{comment}
Each system $K_k \Delta x_k = r_k$ has the block $4 \times 4$ form \cite[eq.~(11)]{wachter2006implementation}:
\begin{align} \label{linsys4x4}
\bmat{H + D_x & 0 & J^T & J_d^T
\\ 0 & D_s & 0 & -I
\\ J & 0 & 0 & 0
\\ J_d & -I & 0 & 0 }
\bmat{\Delta x \\ \Delta s \\ \Delta y \\ \Delta y_d}
=
\bmat{\tilde{r}_{x} \\ r_s \\ r_{y} \\ r_{yd}},
\end{align}
where $J(x) = \nabla c(x)$ and $J_d(x)=\nabla d(x)$ are sparse Jacobians for the constraints, and sparse matrix
\begin{align*}
H& = \nabla^2 f(x) + \sum_{i} y_{c,i} \nabla^2 c_i(x) + \sum_{i} y_{d,i} \nabla^2 d_i(x
\end{align*}
is defined from the problem Hessians. $D_x$ and $D_s$ are nonnegative diagonal matrices arising from derivatives of barrier functions for inequality constraints. We assume $J$ has {\em full row rank}
as required by the optimization solver.
\end{comment}
To make the problem smaller, we can eliminate $\Delta s = J_d \Delta x - r_{yd}$
and $\Delta y_d = D_s \Delta s - r_s $ from \eqref{linsys4x4} to obtain the $2 \times 2$ system~\cite[Sec.~3.1]{petra2009computational}
\begin{align} \label{linsys2x2}
\bmat{\widetilde H & J^T
\\ J & 0}
\bmat{\Delta x \\ \Delta y}
= \bmat{r_x \\ r_{y}},
\quad
\widetilde H \equiv H + D_x + J_d^T D_s J_d,
\end{align}
where $r_x = \tilde{r}_x + J_d^T (D_s r_{yd} + r_s )$.
This reduction requires block-wise Gaussian elimination with
block pivot {\footnotesize$\bmat{D_s & -I \\ -I}$},
which is ill-conditioned when $D_s$ has large elements,
as it ultimately does. Thus, system \eqref{linsys2x2} is smaller but more ill-conditioned.
After solving \eqref{linsys2x2}, we compute $\Delta s$ and $\Delta y_d$ in turn to obtain the solution of \eqref{linsys4x4}.
\section{A block $2\times 2$ system solution method}
\label{sec:block2x2}
Let $Q$ be any SPD matrix. Multiplying the second row of \eqref{linsys2x2} by $J^TQ$ and adding it to the first row gives a system of the form
\begin{align} \label{linsys2x2mod}
\bmat{H_{\gamma} & J^T \\ J & 0}
\bmat{\Delta x \\ \Delta y}
=
\bmat{\skew3\hat r_x \\ r_{y}},
\qquad H_{\gamma}= \widetilde H + J^TQJ,
\end{align}
where $\skew3\hat r_{x}=r_x+J^TQr_{y}$. The simplest choice is $Q=\gamma I$ with $\gamma>0$:
\begin{equation}\label{eq:gamma}
H_{\gamma} = \widetilde H + \gamma \, J^T J.
\end{equation}
When $H_{\gamma}$ is SPD, its Schur complement $S\equiv JH_{\gamma}^{-1} J^T$ is well defined, and \eqref{linsys2x2mod} is equivalent to
\begin{align}
S\Delta y &= JH_{\gamma}^{-1}\skew3\hat r_x-r_{y}, \label{Schur1} \\
H_{\gamma}\Delta x &= \skew3\hat r_x-J^T\Delta y. \label{Schur2}
\end{align}
This is the approach of Golub and Greif \cite{GG03} for saddle-point systems (which have the structure of the $2\times 2$ system in \eqref{linsys2x2}). Golub and Greif found experimentally that $\gamma=\norm{\widetilde H}/\norm{J}^2$ made $H_{\gamma}$ SPD and better conditioned than smaller or larger values.
We show in Theorems \ref{thm:2} and \ref{thm:4}
that for large $\gamma$, the condition number $\kappa(H_{\gamma})$ increases as $\gamma\rightarrow\infty$, but $\kappa(S)$ converges to $1$ as $\gamma\rightarrow\infty$. \Cref{cor:1} shows there is a finite value of $\gamma$ that minimizes $\kappa(H_{\gamma})$. This value is probably close to $\gamma=\norm{\widetilde H}/\norm{J}^2$.
Our contribution is to combine the system reduction in~\cite{petra2009computational} (from \eqref{linsys4x4} to \eqref{linsys2x2}) with the method of~\cite{GG03} for changing \eqref{linsys2x2} to \eqref{linsys2x2mod} to solve an optimization problem consisting of a series of block $4\times 4$ systems using a GPU implementation of sparse Cholesky factorization applied to $H_{\gamma}$. A new regularization method is added, practical parameter choices are given based on scaled systems, and important convergence properties of the method are proven in Theorems \ref{thm:1}--\ref{thm:4}.
If \eqref{linsys2x2mod} or $H_{\gamma}$ are poorly conditioned, the only viable option may be to ignore the block structure in \eqref{linsys2x2mod} and solve \eqref{linsys4x4} with an \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization such as MA57 (likely without help from GPUs). This is the fail-safe option. Otherwise, we require $H_{\gamma}$ to be SPD (for large enough $\gamma$) and use its Cholesky factorization to apply the conjugate gradient method (CG) \cite{CG} or MINRES \cite{MINRES} to \eqref{Schur1}, with or without a preconditioner.
If the $R$ part of QR factors of $J^T$ is not too dense,
we could use
$$
M \equiv (JJ^T)^{-1} JH_{\gamma} J^T(JJ^T)^{-1}
$$
as a multiplicative preconditioner. This gives the exact solution if the RHS is orthogonal to null($J$) or $J$ is square, neither of which occurs in our case. In our experiments, solutions with the preconditioner $M$ lagged slightly behind those without it, but both take $O(1)$ iterations. We proceed without a preconditioner.
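A minimal dense sketch of this approach (Python/NumPy/SciPy; a toy stand-in for the sparse GPU implementation) that factorizes $H_{\gamma}$ once and applies the Schur complement as an operator inside unpreconditioned CG:
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

def solve_2x2(Htilde, J, rx, ry, gamma):
    """Solve the block 2x2 system via (Schur1)-(Schur2) with Q = gamma*I."""
    m = J.shape[0]
    Hg = Htilde + gamma * (J.T @ J)       # H_gamma, assumed SPD here
    L = cho_factor(Hg)                    # Cholesky factorization: done once
    rhat = rx + gamma * (J.T @ ry)        # modified right-hand side
    w = cho_solve(L, rhat)
    S = LinearOperator((m, m), matvec=lambda v: J @ cho_solve(L, J.T @ v))
    dy, info = cg(S, J @ w - ry)          # (Schur1), unpreconditioned CG
    assert info == 0, 'CG did not converge'
    dx = cho_solve(L, rhat - J.T @ dy)    # (Schur2), two triangular solves
    return dx, dy
\end{verbatim}
In the sparse GPU setting, \texttt{cho\_factor}/\texttt{cho\_solve} are replaced by a sparse Cholesky factorization and triangular solves, while the Schur complement is never formed explicitly.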
\subsection{A hybrid solver with minimal regularization}
\label{sec:algorithm}
Typically, \eqref{linsys2x2} starts with an SPD $\widetilde H$ and full-rank $J$. As the optimization solver iterations progress, $\widetilde H$ may become indefinite and the rank of $J$ may shrink (at least numerically). This means the system becomes singular and must be regularized. We seek a small regularization to avoid changing the solution of the equivalent system \eqref{linsys2x2mod} too much.
An SPD $H_{\gamma}$ guarantees that $\widetilde H$ is SPD on null($J$), a requirement at the solution of the optimization problem.
\begin{theorem}
\label{thm:1}
For large enough $\gamma$ ($\gamma>\gamma_{\min}$) and full row rank $J$, $H_{\gamma}=\widetilde H + \gamma J^TJ$ is SPD iff $\widetilde H$ is positive definite on null($J$).
\end{theorem}
\begin{proof}
Assume $H_{\gamma}$ is SPD. For any nonzero vector $v_0$ in null($J$), we have
$v_0^T\widetilde H v_0 = v_0^TH_{\gamma} v_0 > 0$ (this direction of the proof does not require $J$ to have full row rank).
Conversely, assume $\widetilde H$ is positive definite on null($J$) and $J$ has full row rank. For any nonzero vector $v=v_0+v_1$ with $v_0$ in null($J$) and $v_1$ orthogonal to null($J$),
\begin{align*}
v^T(\widetilde H + \gamma J^TJ)v=
v_0^T\widetilde H v_0 + v_1^T(\widetilde H + \gamma J^TJ)v_1.
\end{align*}
We have $v_0^T\widetilde H v_0\ge 0$ by assumption. Further,
\begin{align*}
v_1^T(\widetilde H + \gamma J^TJ)v_1 \ge \big(\lambda_{\min}(\widetilde H)
+ \gamma \lambda_{\min*}(J^TJ) \big) v_1^Tv_1>0
\end{align*}
if $\gamma \geq\gamma_{\min}\equiv -\lambda_{\min}(\widetilde H)/\lambda_{\min*}(J^TJ)$.
\end{proof}
We work with $\gamma \geq 0$. We use $H_{\gamma}$ SPD
as a proxy for $\widetilde H$ being SPD on null($J$), keeping in mind that if this does not hold even for a very large $\gamma$ in \eqref{eq:gamma}, $H_{\gamma}$ is singular or indefinite and needs to be modified. However, $\gamma$ cannot be made arbitrarily large without increasing $\kappa(H_{\gamma})$ when $J$ is rectangular, as in our case. There must be an ideal intermediate value of $\gamma$.
\begin{theorem}
\label{thm:2}
If $J$ has full row rank with more columns than rows, $\widetilde H$ is symmetric and positive definite on null($J$), and $H_{\gamma}=\widetilde H+\gamma J^TJ$, there exists $\gamma_{\max}\geq \max(\gamma_{\min},0)$ such that for $\gamma \geq \gamma_{\max}$, $\kappa(H_{\gamma})$ increases linearly with $\gamma$.
\end{theorem}
\begin{proof}
\begin{gather*}
\lambda_{\max}(H_{\gamma})\equiv \max_{\norm{v}_2=1} v^TH_{\gamma} v\leq \max_{\norm{v}_2=1} v^T\widetilde H v + \max_{\norm{v}_2=1} v^T(\gamma J^TJ) v \\
= \lambda_{\max}(\widetilde H)+ \gamma\lambda_{\max}(J^TJ),
\\
\lambda_{\max}(H_{\gamma})\geq \min_{\norm{v}_2=1} v^T\widetilde H v + \max_{\norm{v}_2=1} v^T(\gamma J^TJ) v=\lambda_{\min}(\widetilde H)+ \gamma\lambda_{\max}(J^TJ).
\end{gather*}
Hence $\lambda_{\min}(\widetilde H)+ \gamma \lambda_{\max}(J^TJ)\leq \lambda_{\max}(H_{\gamma})\leq \lambda_{\max}(\widetilde H)+ \gamma \lambda_{\max}(J^TJ)$, meaning $\lambda_{\max}(H_{\gamma}) \propto \gamma$ for large enough $\gamma$ (defined as $\gamma \geq \gamma_{\max}\geq \max(\gamma_{\min},0)$).
Similarly,
\begin{gather*}
\lambda_{\min}(H_{\gamma})\equiv \min_{\norm{v}_2=1} v^TH_{\gamma} v\geq \min_{\norm{v}_2=1} v^T\widetilde H v + \min_{\norm{v}_2=1} v^T(\gamma J^TJ)v =\lambda_{\min}(\widetilde H),
\\
\lambda_{\min}(H_{\gamma})\leq \max_{\norm{v}_2=1} v^T\widetilde H v + \min_{\norm{v}_2=1} v^T(\gamma J^TJ) v=\lambda_{\max}(\widetilde H).
\end{gather*}
Thus $\lambda_{\min}(\widetilde H)\leq \lambda_{\min}(H_{\gamma})\leq \lambda_{\max}(\widetilde H)$. From \cref{thm:1}, $\gamma\geq \gamma_{\min}\Rightarrow\lambda_{\min}(H_{\gamma})>0$, so that $\kappa(H_{\gamma})=\lambda_{\max}(H_{\gamma})/\lambda_{\min}(H_{\gamma})\propto \gamma$ for $\gamma\geq \gamma_{\max}$.
\end{proof}
\begin{comment}
\begin{proof} $H_{\gamma}=\widetilde H +\gamma J^TJ$ is real symmetric.
For large enough $\gamma_{\max}$, the approximation $
\lambda_{\max}(H_{\gamma})\approx \lambda_{\max}( \gamma \, J^T J)=\gamma \lambda_{\max}( J^T J)
$ holds for any $\gamma\geq\gamma_{\max}$. If there is an eigenvector of $\widetilde H$ completely in null($J^TJ$) then $\lambda_{\min}(H_{\gamma})$ stays constant. In this case, let $\lambda_z(\widetilde H)$ be the minimum eigenvalue on null($J$). Then, $\lambda_{\min}(H_{\gamma}(\gamma))=\lambda_{z}(\widetilde H)$ and $\kappa(H_{\gamma})=\lambda_{\max}\left(H_{\gamma}(\gamma)\right)/\lambda_{z}(\widetilde H)\approx \gamma \lambda_{\max}( J^T J)/\lambda_{z}(\widetilde H))$.
Otherwise, $\lambda_{\min}(H_{\gamma})$ increases with $\gamma$ but may be bounded by some value, in this case $\kappa(H_{\gamma})\propto \gamma$ as before. The remaining option is $\lambda_{\min}(H_{\gamma})$ increasing sublinearly with $\gamma$, which corresponds to a sublinear increase of $\lambda_{\min}(H_{\gamma})$ with $\gamma$. Since we know that $\kappa(H_{\gamma})\rightarrow \infty$ for $\gamma \rightarrow \infty$, by continuity $\lambda_{\min}(H_{\gamma})$ cannot increase linearly (or quicker).
\end{proof}
\end{comment}
\begin{corollary}
\label{cor:1}
Among $\gamma$s such that $H_{\gamma}$ is SPD ($\gamma\ge \gamma_{\min}$), $\kappa(H_{\gamma})$ is minimized for some $\gamma \in [\gamma_{\min},\gamma_{\max}]$.
\end{corollary}
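A toy numerical illustration of Theorems \ref{thm:1} and \ref{thm:2} and this corollary (Python/NumPy; the construction of $\widetilde H$, positive definite on null($J$) but indefinite overall, is our own choice):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 120, 30
J = rng.standard_normal((m, n))            # full row rank (almost surely)
A = rng.standard_normal((n, n))
A = A @ A.T + np.eye(n)                    # SPD
Htilde = A - 50.0 * (J.T @ J)              # PD on null(J), indefinite overall
for gamma in [1e1, 1e2, 1e3, 1e4, 1e5]:
    ev = np.linalg.eigvalsh(Htilde + gamma * (J.T @ J))
    if ev.min() > 0:
        print(f'gamma={gamma:8.0e}  SPD, cond={ev.max() / ev.min():.2e}')
    else:
        print(f'gamma={gamma:8.0e}  indefinite')
\end{verbatim}
The sweep shows $H_{\gamma}$ turning SPD once $\gamma$ passes a finite threshold, after which the condition number eventually grows linearly with $\gamma$.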
In practice, the optimizer may provide systems where $\widetilde H$ is not SPD on null($J$). In this case we can regularize $H_{\gamma}$ by using $H_{\delta}=H_{\gamma} + \delta_1 I$ instead.
Unlike $\gamma$, the introduction of $\delta_1$ changes the solution of the system, so it is essential to keep $\delta_1$ as small as possible. If $H_{\gamma}$ is not SPD, we set $\delta_1=\delta_{\min}$, a parameter for some minimum value of regularization. If $H_{\delta}$ is still not SPD, we double $\delta_1$ until it is. This ensures we use the minimal value of regularization (to within a factor of 2) needed to make $H_{\delta}$ SPD.
If $\delta_1$ proves to be large, which can happen before we are close to a solution, it is essential for the optimizer to be informed to allow it to modify the next linear system.
When the optimizer nears a solution, $\delta_1$ will not need to be large.
In our tests, $\delta_1$ starts at $0$
for the next matrix in the sequence, but is set back to its previous value if the factorization fails.
We also set $\delta_{\max}$, the maximum allowed $\delta_1$ before we resort to \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization of $K_k$ or return to the optimization solver.
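A minimal sketch of this doubling strategy (Python/NumPy; here a failed \texttt{numpy.linalg.cholesky} call plays the role of the failed factorization):
\begin{verbatim}
import numpy as np

def factorize_min_reg(Hgamma, delta_min=1e-9, delta_max=1e-6):
    """Return (L, delta1) with L L^T = Hgamma + delta1*I and delta1 minimal
    to within a factor of 2, or (None, delta1) if delta_max is exceeded."""
    delta1 = 0.0
    while True:
        try:
            L = np.linalg.cholesky(Hgamma + delta1 * np.eye(Hgamma.shape[0]))
            return L, delta1
        except np.linalg.LinAlgError:      # factorization failed: not SPD
            delta1 = delta_min if delta1 == 0.0 else 2.0 * delta1
            if delta1 > delta_max:
                return None, delta1        # fall back to LDL^T of (4x4)
\end{verbatim}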
If $J$ is rank-deficient, $S$ in \eqref{Schur1} is SPSD. In this case, CG will succeed if \eqref{Schur1} is consistent. Otherwise, it will encounter a near-zero quadratic form. We then restart CG on the regularized system $(J H_{\delta}^{-1} J^T+\delta_2 I)\Delta
y=Jw-r_{y}$, where $w$ solves $H_{\delta}w=\skew3\hat r_x$. In this way, $\delta_1$ regularizes the $(1,1)$ block and $\delta_2$ regularizes the $(2,2)$ block.
To ensure we can judge the size of parameters $\gamma$ and $\delta_{\min}$ relative to system \eqref{linsys2x2mod}, we first scale \eqref{linsys2x2mod} with a symmetrized version of Ruiz scaling \cite{Ruiz01ascaling}.
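A hedged sketch of the symmetrized Ruiz iteration (Python/NumPy, dense for brevity; the sweep count is our choice, and \cite{Ruiz01ascaling} should be consulted for convergence details):
\begin{verbatim}
import numpy as np

def ruiz_symmetric(A, sweeps=5):
    """Equilibrate symmetric A: returns (As, d) with As = D A D, D = diag(d)."""
    n = A.shape[0]
    d = np.ones(n)
    As = A.copy()
    for _ in range(sweeps):
        r = np.sqrt(np.abs(As).max(axis=1))   # sqrt of row inf-norms
        r[r == 0.0] = 1.0                     # leave empty rows alone
        As = As / np.outer(r, r)              # symmetric two-sided scaling
        d /= r
    return As, d
\end{verbatim}
The solve then proceeds on $DAD$ with right-hand side $Dr$, and the solution of the original system is recovered by multiplying the scaled solution by $D$.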
\Cref{alg:CGschur} is a generalization of the Uzawa iteration~\cite{Uzawa} for KKT systems with a (1,1) block that is not necessarily positive definite. It gives a method for solving a sequence of systems \{\eqref{Schur1}--\eqref{Schur2}\} with $Q=\gamma I$ used in the calculation of $H_{\gamma}$. The workflow is similar to~\cref{fig:workflow} except only $H_{\delta}$ is factorized. On lines 16--19, $H_{\delta}^{-1}$ is applied by direct triangular solves with the Cholesky factors $L$, $L^T$ of $H_{\delta}$. Each iteration of the CG solve on line 17 requires multiplication by $J^T$, applying $H_{\delta}^{-1}$, multiplying by $J$, and adding a scalar multiple of the original vector.
\Cref{sec:itdirect} shows why complete Cholesky factorization of $H_{\delta}$ was chosen.
The rest of the section discusses other important bounds that $\gamma$ must obey. However, selecting an ideal $\gamma$ in practice is difficult and requires problem heuristics (like $\gamma=\norm{\widetilde H}/\norm{J}^2$ in \cite{GG03}) or trial and error.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\FOR {each matrix in the sequence such as in \eqref{linsys2x2mod}}
\STATE $\delta_1\gets 0$
\STATE $H_{\delta}= H_{\gamma}$ ($\gamma$ used in the calculation of $H_{\gamma}$)
\STATE Try $LL^T=\texttt{chol}(H_{\delta})$ (Fail $\gets$ False if factorized, True otherwise)
\WHILE {Fail and $\delta_1\leq\delta_{\max}/2$}
\IF {$\delta_1==0$}
\STATE$\delta_1\gets \delta_{\min}$
\ELSE
\STATE $\delta_{\min}\gets 2\delta_{\min}$, $\delta_1\gets \delta_{\min}$
\ENDIF
\STATE $H_{\delta}= H_{\gamma}+\delta_1 I$
\STATE Try $LL^T=\texttt{chol}(H_{\delta})$ (Fail $\gets$ False if factorized, True otherwise)
\ENDWHILE
\IF{Fail==False}
\STATE Direct solve $H_{\delta} w=\skew3\hat r_x$
\IF {CG on $(J H_{\delta}^{-1} J^T)\Delta
y=Jw-r_{y}$ encounters a near-zero quadratic form}
\STATE CG solve $(J H_{\delta}^{-1} J^T+\delta_2 I)\Delta
y=Jw-r_{y}$ (perturbed \eqref{Schur1})
\ENDIF
\STATE Direct solve \hbox{$H_{\delta} \Delta x = \skew3\hat r_x-J^T\Delta y$\hspace*{65pt}} (perturbed \eqref{Schur2})
\ELSE
\STATE Use \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace to solve \eqref{linsys4x4} or return problem to optimization solver
\ENDIF
\ENDFOR
\end{algorithmic}
\caption{Using CG on Schur complement to solve the block system \eqref{linsys2x2mod} by solving \eqref{Schur1}--\eqref{Schur2}. $H_{\delta}$ is a nonsingular perturbation of $H_{\gamma}=\widetilde H + \gamma J^TJ$.
Typical parameters: $\gamma = 10^{4}$, $\delta_{\min}=\delta_{2}= 10^{-9}$, $\delta_{\max}=10^{-6}$.}
\label{alg:CGschur}
\end{algorithm}
\subsection{Guaranteed descent direction}
Optimization solvers typically use an \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization to solve the $N \times N$ system \eqref{linsys4x4} at each step because (with minimal extra computation) it supplies the inertia of the matrix. A typical optimization approach treats each of the four $2\times 2$ blocks of~\eqref{linsys4x4} as one block, accounting for the possible regularization applied to $H_{\gamma}$.
We mean that the (1,1) block {$H_K\equiv$\footnotesize$\bmat{ H+D_x+\delta_1 I& \\ & D_s}$} is a Hessian of dimension $n$, and the (2,1) block {$J_K\equiv$\footnotesize$\bmat{ J& \\ J_d& -I}$} represents the constraint Jacobian and has dimensions $m \times n$. The inertia being $(n,m,0)$ implies that (a) $H_K$ is uniformly SPD on null($J_K$) for all $k$, meaning $v^TH_Kv\geq \epsilon >0$ for all vectors $v$ satisfying $J_Kv=0$, all iterations $k$, and some positive constant~$\epsilon$; (b) the KKT matrix in \eqref{linsys4x4} is nonsingular. Together these conditions are sufficient to ensure a descent direction and fast convergence in the optimization algorithm~\cite{wachter2006implementation,NocW2006}. We show that \cref{alg:CGschur} ensures properties (a) and (b) without computing the inertia of the KKT matrix. %
\begin{theorem}
\label{thm:3}
If \cref{alg:CGschur} succeeds with $\delta_2=0$, it provides a descent direction for the interior method for large enough $\gamma$.
\end{theorem}
\begin{proof}
By construction, $H_{\delta}$ is uniformly SPD (because the Cholesky factorization was successful for all iterations of the solver until this point) and the KKT system in \eqref{linsys2x2mod} is nonsingular (because $J$ has full row rank, or $\delta_2 I$ was added to $S$ to shift it from SPSD to SPD).
Therefore, \cref{alg:CGschur} provides a descent direction for \eqref{linsys2x2mod} even if regularization is added. The rest of the proof assumes $\delta_2=0$, and we return to the other case later. Since $J$ has full row rank, $H+D_x+ \delta_1 I$ is nonsingular by assumption, all block rows in~\eqref{linsys4x4} have full rank internally, and Gaussian elimination cannot cause cancellation of an entire block row, we conclude that \eqref{linsys4x4} is nonsingular. Let $H_{K\gamma}\equiv H_K+\gamma_K J_K^TJ_K$ (for $\gamma_K\geq \gamma$ used in \cref{alg:CGschur}). For any nonzero vector $u^T=(u_1^T, u_2^T)$ with $u_1$ and $u_2$ of dimensions $\nx$ and ${m_d}$,
$$
u^T H_{K\gamma} u = u_1^T (H+D_x+\gamma_K J^TJ)u_1 + u_2^TD_su_2+ \gamma_K w^Tw,
$$
where $w=J_du_1-u_2$. If $w=0$, then $u_2=J_du_1$, which means $u^TH_{K\gamma} u\geq u_1^TH_{\gamma} u_1\geq \epsilon>0$ and the proof is complete (with $\gamma_K=\gamma$). Otherwise, $\gamma_w\equiv \gamma_K w^Tw>0$. So for large enough $\gamma_K$, $u^TH_{K\gamma} u\geq \epsilon>0$. Applying \cref{thm:1} to \eqref{linsys4x4} with $H_K$ and $J_K$ replacing $\widetilde H$ and $J$ shows that $H_K$ is positive definite on null($J_K$).
\end{proof}
We note that $\delta_1$ corresponds to the so-called primal regularization of the filter line-search algorithm~\cite{wachter2006implementation}. Under this algorithm, whenever $\delta_1$ becomes too large, one can invoke the feasibility restoration phase of the filter line-search algorithm~\cite{wachter2006implementation} as an alternative to performing the $\hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace$ factorization on the CPU. Feasibility restoration ``restarts'' the optimization at a point with more favorable numerical properties. We also note that when $\delta_1$ is sufficiently large, the curvature test used in \cite{Chiang2016} should be satisfied. Hence, inertia-free interior methods have the global convergence property without introducing other regularization from the outer loop.
The $\delta_2$ regularization is a numerical remedy for low-rank $J$, caused by redundant equality constraints. This regularization is similar to the so-called dual regularization used in Ipopt~\cite{wachter2006implementation} and specifically addresses the issue of rank deficiency; however, there is no direct analogue from a $\delta_2$ regularization in \eqref{linsys2x2mod} to Ipopt's dual regularization in~\eqref{linsys4x4} and neither of the two heuristics guarantees a descent direction for~\eqref{linsys4x4}. Given the similarity between the two heuristics, we believe that the $\delta_2$ regularization can be effective within the filter line-search algorithm.
When \cref{alg:CGschur} is integrated with a nonlinear optimization solver such as \cite{waecther_05_ipopt} and \cite{HiOp}, the while loop (Line 5) in \cref{alg:CGschur} can be removed and the computation of $\delta_1$ and $\delta_2$ can be decided by the optimization algorithm. The development of robust heuristics that allow \cref{alg:CGschur}'s $\delta_1$ and $\delta_2$ regularization within the filter line-search algorithm will be the subject of future work.
\subsection{Convergence for large $\gamma$}
In \cref{sec:algorithm} we showed that $H_{\gamma}$ is SPD for large enough $\gamma$, and that $\gamma$ should not be so large that the low-rank term $\gamma (J^TJ)$ makes $H_{\gamma}$ ill-conditioned. Here we show that in order to decrease the number of CG iterations, it is beneficial to increase $\gamma$ beyond what is needed for $H_{\gamma}$ to be SPD.
\begin{theorem}
\label{thm:4}
In exact arithmetic, for $\gamma \gg 1$, the eigenvalues of $S_\gamma \equiv \gamma S$ converge to $1$ with an error term that decays as $1/\gamma$.
\end{theorem}
\begin{proof}
By definition,
\begin{align}
S_\gamma = \gamma J(\widetilde H +\gamma J^TJ)^{-1} J^T=J\left (\frac{\widetilde H}{\gamma}+J^TJ\right)^{-1} J^T.
\label{eq:S1}
\end{align}
Since $\widetilde H$ is nonsingular and $J$ has full row rank by assumption,
the Searle identity $(A+BB^T)^{-1} B=A^{-1} B(I+B^TA^{-1} B)^{-1}$~\cite{Searle}
with $A\equiv \widetilde H/\gamma$ and $B\equiv J^T$
gives
\begin{align*}
S_\gamma=\gamma J\widetilde H^{-1} J^T(I+\gamma J\widetilde H^{-1} J^T)^{-1}=\left(I+C\right)^{-1},
\quad
C \equiv \frac{1}{\gamma} \left(J\widetilde H^{-1} J^T\right)^{-1}.
\end{align*}
For $\gamma \rightarrow \infty$, $\left|\lambda_{i} (C)\right| \ll 1 \ \forall i$.
The identity $(I+C)^{-1} =\Sigma_{k=0}^{\infty} (-1)^k C^k$, which applies when $\left|\lambda_{i} (C)\right| < 1 \ \forall i$, gives
\begin{align}
S_\gamma=\sum_{k=0}^{\infty}(-1)^k C^k=I - C + O\left(\frac{1}{\gamma^2}\right).
\label{eq:S3}
\end{align}
Since $\norm{C}_2=O(1/\gamma)$, the eigenvalues of $S_\gamma$ are $1+O(1/\gamma)$.
\end{proof}
A different proof of an equivalent result is given by Benzi and Liu \cite[Lemma 3.1]{Benzi07}.
\begin{corollary}
The eigenvalues of $S$ are well clustered for $\gamma \gg 1$, and the iterative solve in \cref{alg:CGschur} line 16 converges quickly.
\end{corollary}
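The clustering is easy to observe numerically; a toy check of \cref{thm:4} (Python/NumPy, with random data of our own choosing):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, gamma = 200, 40, 1e4
A = rng.standard_normal((n, n))
Htilde = A @ A.T + np.eye(n)              # SPD for simplicity
J = rng.standard_normal((m, n))           # full row rank (almost surely)
Hg = Htilde + gamma * (J.T @ J)
S_gamma = gamma * (J @ np.linalg.solve(Hg, J.T))
ev = np.linalg.eigvalsh(0.5 * (S_gamma + S_gamma.T))  # symmetrize roundoff
print(ev.min(), ev.max())                 # both approach 1 as gamma grows
\end{verbatim}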
In the next section, we show that our choices of $\gamma=10^4$--$10^6$ are large enough for the arguments to hold. This explains the rapid convergence of CG. We also show that our transformation from \eqref{linsys4x4} to \eqref{linsys2x2mod} and our scaling choice are stable on our test cases, by measuring the error for the original system \eqref{linsys4x4}.
\section{Practicality demonstration}
\label{sec:testing}
We demonstrate the practicality of \cref{alg:CGschur} using five series of linear problems \cite{ACOPF} generated by the Ipopt solver performing optimal power flow analysis
on the power grid models
summarized in \cref{tab:descr}. We compare our results with a direct solve via MA57's \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization~\cite{Duff2004}.
\begin{table}[b]
\caption{\label{tab:descr} Characteristics of the five tested optimization problems, each generating sequences of linear systems $K_k \Delta x_k = r_k$ \eqref{linsys4x4} of dimension $N$. Numbers are rounded to 3 digits. K and M signify $10^3$ and $10^6$.}
\centering
\footnotesize
\begin{tabular}{lrr} \toprule
Name & $N(K_k)$~~~ & \textrm{nnz}($K_k$)
\\ \midrule
South Carolina grid & 56K & 411K
\\ Illinois grid & 4.64K & 21.6K
\\ Texas grid & 55.7K & 268K
\\ US Western Interconnection grid & 238K & 1.11M
\\ US Eastern Interconnection grid & 1.64M & 7.67M
\\ \bottomrule
\end{tabular}
\end{table}
\subsection{$\gamma$ selection}
\begin{figure}[t]
\centering
\includegraphics[width=\Width\textwidth]{figs_combined/ACT200.jpg}
\caption{Illinois grid: (a) CG iterations on \cref{Schur1} with varying $\gamma$. $\gamma \geq 10^3$ gives good convergence. The mean number of iterations for $\gamma=10^4$ is $9.4$. (b) Sorted eigenvalues of $S_\gamma \equiv \gamma JH_{\gamma}^{-1} J^T$ in \eqref{eq:S1} for matrix 22, with $\gamma=10^4$. The eigenvalues are clustered close to $1$.}
\label{fig:alphas200}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=\Width\textwidth]{figs_combined/ACT200e1.jpg}
\caption{Illinois grid \eqref{linsys4x4}, with varying $\gamma$ in \eqref{eq:gamma}: (a) Backward error (\textbf{BE}) and (b) relative residual (\textbf{RR}). (a) $\gamma \leq 10^4$ gives results close to machine precision. (b) $\gamma \leq 10^4$ has $\text{RR} \le 10^{-8}$. }
\label{fig:alphas200back44low}
\end{figure}
We use \cref{alg:CGschur} with CG as the iterative method on line 16. A larger $\gamma$ in $H_{\gamma}=\widetilde H + \gamma J^TJ$ may improve CG convergence but make $H_{\gamma}$ more ill-conditioned and increase the error in the solution of \eqref{linsys4x4}. We therefore run some preliminary tests to find a suitable $\gamma$ before selecting $\delta_{\min}$ and testing other problems,
and we require CG to solve accurately (stopping tolerance $10^{-12}$ for the relative residual norm). We start with one of the smaller problems, the Illinois grid, to eliminate some values of $\gamma$ and see if the condition number of $K_k$ for each matrix in the sequence is useful in determining the needed iterations or regularization. We run these tests with $\delta_{\min}=10^{-10}$.
\Cref{fig:alphas200}(a) shows that for the Illinois grid sequence, values of $\gamma \ge 10^4$ give CG convergence in approximately 10 iterations for every matrix in the sequence. For the last matrix we found that eigenvalues of $S$ in \eqref{Schur1} are very well clustered and the condition number of $S$ is $\kappa \approx 1.04$, as shown in \cref{fig:alphas200}(b). This guarantees and explains the rapid convergence because for CG applied to a general SPD system $Ax=b$,
\begin{align}
\frac{\norm{e_k}_A}{\norm{e_0}_A} \le 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k, \label{CGconv}
\end{align}
where $e_k=x-x_k$ is the error in an approximate solution $x_k$ at iteration~$k$ \cite{GV4}.
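Plugging the observed $\kappa \approx 1.04$ into \eqref{CGconv} gives a contraction factor of
\begin{equation*}
\frac{\sqrt{1.04}-1}{\sqrt{1.04}+1}\approx\frac{0.0198}{2.0198}\approx 9.8\cdot 10^{-3},
\end{equation*}
so each CG iteration reduces the $A$-norm error by roughly two orders of magnitude, and a relative error of $10^{-12}$ is reached within about six or seven iterations by this bound. This is consistent with the observed mean of $9.4$ iterations, keeping in mind that the bound controls the $A$-norm error rather than the residual used in the stopping test.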
For the last few matrices, $\gamma=10^8$ is the only value requiring $\delta_1>0$. The final value of $\delta_1$ was $16\delta_{\min} = 1.6 \cdot 10^{-9}$.
No important information was gleaned from $\texttt{cond}(K_k)$. For all other values of $\gamma$, $\delta_1=0$ for the whole sequence. For a system $Ax=b$ and an approximate solution $\skew3\tilde x \approx x$, we define the backward error \textbf{BE} as $\norm{A\skew3\tilde x-b}_2/(\norm{A}_2 \, \norm{\skew3\tilde x}_2+\norm{b}_2)$ and the relative residual \textbf{RR} as $\norm{A\skew3\tilde x-b}_2/\norm{b}_2$. As is common practice, we use $\norm{A}_\infty$ to estimate $\norm{A}_2$, which is too expensive to calculate directly. Because our matrices are symmetric, $\norm{A}_\infty$ always provides an upper bound for $\norm{A}_2$, and in practice it is quite close to the actual value.
Note that MA57 always has a BE of order machine precision.
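These two measures are cheap to evaluate; a small helper matching the definitions above (Python/NumPy, with $\norm{A}_\infty$ standing in for $\norm{A}_2$ as described):
\begin{verbatim}
import numpy as np

def backward_error(A, x, b):
    r = np.linalg.norm(A @ x - b)
    normA = np.linalg.norm(A, np.inf)   # upper bound on ||A||_2 (A symmetric)
    return r / (normA * np.linalg.norm(x) + np.linalg.norm(b))

def relative_residual(A, x, b):
    return np.linalg.norm(A @ x - b) / np.linalg.norm(b)
\end{verbatim}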
\begin{comment}
\Cref{fig:hist} shows the eigenvalue distribution for $S \equiv JH_{\gamma}^{-1} J^T$ of matrix 22 in the Illinois grid. The eigenvalues are well clustered.
\end{comment}
\Cref{fig:alphas200back44low}
shows the (a) BE and (b) RR for system \eqref{linsys4x4} for varying $\gamma$. Results for the BE and RR of system \eqref{linsys2x2} are not qualitatively different and are given in \cref{appA}. One conclusion is that increasing $\gamma$ to reduce CG iterations can be costly for the accuracy of the solution of the full system. Based on the results of this section, $\gamma$ in the range $10^2$--$10^6$ gives reasonable CG iterations and final accuracy. For other matrices, we present a selected $\gamma$ in this range that produced the best results.
\subsection{Results for larger matrices}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figs_combined/TAMU500.png}
\caption{South Carolina grid: $\delta_1$ for $\gamma=10^4$. For other values of $\gamma$ the graph was similar. Except for the first few and last few matrices, $\delta_1\approx 1$, meaning the required regularization would make the solution too inaccurate. The value of $0$ is omitted on the log-scale. }
\label{fig:South Carolina grid}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\Width\textwidth]{figs_combined/ACT10k.jpg}
\caption{US Western Interconnection grid with $\gamma=10^6$ in \eqref{eq:gamma}: (a) CG iterations on \cref{Schur1}. The mean number of iterations is $17$. (b) BE and RR for the sequence. The BE for \eqref{linsys4x4} is less than $10^{-10}$, except for matrix 4. }
\label{fig:WesternUS}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\Width\textwidth]{figs_combined/ACT70k.jpg}
\caption{US Eastern Interconnection grid with $\gamma=10^6$ in \eqref{eq:gamma}: (a) CG iterations on \cref{Schur1}. The mean number of iterations is $13.1$. (b) BE and RR for \eqref{linsys4x4} and \eqref{linsys2x2}. The BE for \eqref{linsys4x4} is less than $10^{-10}$. }
\label{fig:EasternUS}
\end{figure}
Solving larger (and perhaps more poorly conditioned) problems brings about new computational challenges and limits the amount of time any particular task can take. We wish to set $\delta_{\min}$ small enough to avoid over-regularizing the problem, and large enough to eliminate wasteful iterations and numerical issues. We want $\delta_{\max}$ small enough to recognize that we have over-regularized (and should try a different method), but large enough to allow for reasonable regularization. In our numerical tests, we use $\delta_{\min}=10^{-10}$ and $\delta_{\max}$ large enough so that $\delta_1$ can increase until $H_{\delta}=H_{\gamma}+\delta_1 I$ is SPD.
This informs the parameter selection for the next system.
\Cref{fig:South Carolina grid} shows that the South Carolina grid matrices as currently constructed cannot benefit from this method. They need $\delta_1>1$ to make $H_{\gamma}+\delta_1 I$ SPD, which on a scaled problem means as much weight is given to regularization as to the actual problem. \Cref{alg:CGschur} succeeds on the other matrix sequences, at least for certain $\gamma$s, and needs no regularization ($\delta_1=\delta_2=0$).
For the US Western Interconnection grid, \cref{fig:WesternUS}(a) shows a CG convergence graph and (b) shows several types of error. For the US Eastern Interconnection grid, \cref{fig:EasternUS}(a) shows a CG convergence graph and (b) shows several types of error. Figures for the Texas grid are given in \cref{appA}, as they do not provide more qualitative insight. Convergence occurs for all matrix sequences in less than $20$ iterations on average.
The BE for \eqref{linsys4x4} is consistently less than $10^{-8}$ and, with two exceptions in the US Western Interconnection grid, is close to machine precision. Results for the US Eastern Interconnection grid show that the method does not deteriorate with problem size, but rather there are some irregular matrices in the US Western Interconnection grid.
The results in this section suggest that $\delta_{\min}$ in the range $10^{-8}$ down to $10^{-10}$ is reasonable for any $\gamma \leq 10^8$. There is no clear choice for $\delta_{\max}$, but a plausible value would be $\delta_{\max}=2^{10}\delta_{\min}\approx 1000 \delta_{\min}$. This way we are guaranteed that the regularization does not take over the problem, and the number of failed factorizations is limited to $10$, which should be negligible in the total solution times for a series of $\approx 100$ problems.
\subsection{Reordering $H_{\gamma}$}
\label{sec:reorder}
\begin{figure}[h]
\centering
\includegraphics[width=\Width\textwidth]{figs_combined/ACT200ord.jpg}
\caption{Illinois grid matrix 22: (a) The Cholesky factor of $H_{\gamma}$ under approximate minimum degree ordering is sparser than (b) the factor under nested dissection ordering. Both orderings are calculated in {\sc Matlab}\xspace.}
\label{fig:cholamd}
\end{figure}
\begin{comment}
The ordering of unknowns can substantially affect the sparsity of the Cholesky factorization, which is important to cut down on computation and memory costs. In this section we show that approximate minimum degree (\textbf{AMD}) works best for our problems and we use it implicitly in all other sections. The data in this section are generated via {\sc Matlab}\xspace.
\end{comment}
\begin{comment}
\Cref{fig:Hamd} and \cref{fig:Hnd} show the structure of $H_{\gamma}$ of matrix 22 in the Illinois grid sequence (though it is the same for other matrices in the sequence, and qualitatively similar for other ACTIVs matrices) for AMD and nested dissection (\textbf{ND}) respectively.
\end{comment}
The efficiency of sparse Cholesky factorization $P H_{\gamma} P^T = LL^T$
depends greatly on the row/column ordering defined by permutation $P$.
\Cref{fig:cholamd} compares the sparsity of $L$, corresponding to $H_{\gamma}$ of matrix 22 in the Illinois grid sequence, obtained from two choices
of $P$: approximate minimum degree (\textbf{AMD}) and nested dissection.
(The data in this section are generated via {\sc Matlab}\xspace.)
We see that AMD produces a sparser $L$ ($17,413$ nonzeros vs.\ $20,064$).
Reverse Cuthill-McKee and no ordering gave $46,527$
and $759,805$ nonzeros respectively. Recall that the sparsity structure is identical for matrices from the same family and similar for other matrix families. As expected, AMD was the sparsest ordering tested for other matrix families.
Thus, AMD is our reordering of choice.
It is a one-time cost performed during the optimization problem setup.
\section{Comparison with \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace}
\label{sec:compare}
We compare our method with a direct solve of \eqref{linsys4x4} using MA57's \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization \cite{Duff2004} with default settings. All testing in this section is done using a prototype C\raisebox{1.5pt}{\small $++$}\xspace/CUDA code on a single GPU device. Reference MA57 solutions were computed on a CPU. Further details on computational platforms used are given in~\cref{tab:machines}.
\begin{table}[t]
\caption{\label{tab:machines} Accelerator devices and compilers used. }
\centering
\smallskip
\footnotesize
\begin{tabular}{lllll} \toprule
Machine name & Host processor & Accelerator device & Host compiler & Device compiler
\\ \midrule
Newell & IBM Power9 & NVIDIA Volta 100 & GCC 7.4.0 & CUDA 10.2
\\ Deception & AMD EPYC 7502 & NVIDIA Ampere 100 & GCC 7.5.0 & CUDA 11.1
\\ \bottomrule
\end{tabular}
\end{table}
The factorization density is $\rho_L=\left(2\,\textrm{nnz}(L)+\textrm{nnz}(D)\right)/N$. We define $\rho_C$ analogously for the Cholesky factors of $H_{\gamma}$ in \eqref{linsys2x2mod}, with $\textrm{nnz}(D)=0$ and dimension $\nx$. Note that $\rho$ gives the average number of nonzeros in the factorization per row or column.
\Cref{tab:ldlcomp} shows that the Cholesky factor of $H_{\gamma}$ is usually less dense than the \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factor for \eqref{linsys4x4}, even though \eqref{linsys4x4} is sparser than \eqref{linsys2x2mod}.
\Cref{tab:solve} shows the solve times on the Newell computing cluster \cite{PNNLmachines}. The main trend is that as the problem size increases, GPUs using cuSolver~\cite{cuSolverweb} increasingly outperform equivalent computations on a CPU.
\begin{table}[t]
\caption{\label{tab:ldlcomp} The dimensions, number of nonzeros, and factorization densities (the number of nonzeros in the factors per row) for solving \eqref{linsys4x4} directly with \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace ($N$, $\textrm{nnz}_L$, $\rho_L$ respectively) and for solving systems with $H_{\gamma}$ in \eqref{eq:gamma} via Cholesky ($\nx$, \textrm{nnz}{}$_C$, $\rho_C$ respectively). Numbers are rounded to 3 digits. K and M signify $10^3$ and $10^6$. For all cases, $\rho_C<\rho_L$ and $\nx<N/2$. }
\centering
\smallskip
\footnotesize
\begin{tabular}{lrrrrrr} \toprule
Abbreviation & $N$ & $\textrm{nnz}{}_L$& $\rho_L$& $\nx$ &$\textrm{nnz}{}_C$&$\rho_C$
\\ \midrule
Illinois & 4.64K & 94.7K & 20.4& 2.28K&34.9K &15.3
\\ Texas & 55.7K & 2.95M& 52.9 & 25.9K&645K &24.9
\\ Western US & 238K &10.7M& 44.8 & 116K&2.23M& 19.2
\\ Eastern US & 1.64M & 85.4M& 52.1 & 794K&17.7M& 22.3
\\ \bottomrule
\end{tabular}
\end{table}
\begin{comment}
\begin{table}[t]
\caption{\label{tab:fact} {\sc Matlab}\xspace factorization times (in seconds) for solving \eqref{linsys4x4} directly with \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace or for solving systems with $H_{\delta}$ with Cholesky, and the ratio between the former and the latter. Cholesky is quicker than \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace by an increasingly large ratio.}
\centering
\smallskip
\footnotesize
\begin{tabular}{lrrr} \toprule
Name & \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace & Cholesky & ratio
\\ \midrule
Illinois & $1.34\cdot 10^{-2}$ & $2.24\cdot 10^{-3}$ & 5.99
\\ Texas & $5.34\cdot 10^{-1}$ & $4.78\cdot 10^{-2}$ & 11.2
\\ Western US & $2.53\cdot 10^{0\phantom{-}}$ & $1.34\cdot 10^{-1}$ & 18.9
\\ Eastern US& $2.71\cdot 10^{1\phantom{+}}$ & $9.84\cdot 10^{-1}$ & 27.6
\\ \bottomrule
\end{tabular}
\end{table}
\end{comment}
\begin{table}[t]
\newcommand{\tenp}[1]{\cdot 10^{#1}}
\newcommand{\cdot 10^0\phantom{-}}{\cdot 10^0\phantom{-}}
\caption{\label{tab:solve} Average times (in seconds) for solving \eqref{linsys4x4} directly on a CPU with
\hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace (via MA57~\cite{Duff2004}) or for solving one $H_{\delta}$ linear system with supernodal Cholesky via Cholmod (CM) in Suite\-Sparse~\cite{Cholmod}, or Cholesky via cuSolver~\cite{cuSolverweb} (CS), each on a CPU and on a GPU. Cholesky on a GPU is quicker than \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace on a CPU by an increasingly large ratio. CM GPU does not work for small problems. All runs are on Newell~\cite{PNNLmachines}.}
\centering
\smallskip
\footnotesize
\hspace*{-0pt}%
\begin{tabular}{l@{\qquad}l@{\qquad}l@{\qquad}l@{\qquad}l@{\qquad}l} \toprule
Name & MA57 & CM CPU & CM GPU & CS CPU & CS GPU
\\ \midrule
Illinois & $7.35\tenp{-3}$ & $1.74\tenp{-3}$ & & $2.25\tenp{-3}$ & $5.80\tenp{-3}$
\\ Texas & $1.24\tenp{-1}$ & $3.42\tenp{-2}$ & & $5.67\tenp{-2}$ & $4.79\tenp{-2}$
\\ Western US & $4.30\tenp{-1}$ & $1.02\tenp{-1}$ & & $1.89\tenp{-1}$ & $1.59\tenp{-1}$
\\ Eastern US & $4.34\cdot 10^0\phantom{-}$ & $1.08\cdot 10^0\phantom{-}$ & $3.65\cdot 10^0\phantom{-}$ & $2.52\cdot 10^0\phantom{-}$ & $6.12\tenp{-1}$
\\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\caption{\label{tab:solvecg} Average times (in seconds) for solving sequences of systems \eqref{linsys4x4} directly on a CPU with \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace (via MA57~\cite{Duff2004}) or for solving sequences of systems \eqref{Schur1}--\eqref{Schur2} on a GPU. The latter is split into analysis and factorization phases, and multiple solves. Symbolic analysis is needed only once for the whole sequence. Factorization happens once for each matrix. The solve phase is the total time for Lines 15--17 in \cref{alg:CGschur} with a CG tolerance of $10^{-12}$ on Line 16. The results show that our method, without optimization of the code and kernels, outperforms \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace on the largest series (US Eastern Interconnection grid) by a factor of more than $2$ on a single matrix, and more than $3$ on a whole series, because the cost of symbolic analysis can be amortized over the series. All runs are on Deception~\cite{PNNLmachines}.}
\centering
\smallskip
\footnotesize
\hspace*{-0pt}%
\begin{tabular}{lclll} \toprule
Name & MA57 &\multicolumn{3}{c}{Hybrid Direct-Iterative Solver}
\\ \midrule &
& Analysis
&Factorization
&Total solves
\\ \midrule
Illinois & $6.24\cdot 10^{-3}$ & $3.87\cdot 10^{-3}$ & $5.07\cdot 10^{-3}$ &$8.55\cdot 10^{-3}$
\\ Texas & $1.00\cdot 10^{-1}$& $2.58\cdot 10^{-2}$ & $3.54\cdot 10^{-2}$ & $1.02\cdot 10^{-1}$
\\ Western US & $3.38\cdot 10^{-1}$ & $1.54\cdot 10^{-1}$ & $1.74\cdot 10^{-1}$ & $1.43\cdot 10^{-1}$
\\ Eastern US & $3.48\cdot 10^{0\phantom{-}}$ & $5.81\cdot 10^{-1}$ & $6.94\cdot 10^{-1}$ & $3.25\cdot 10^{-1}$
\\ \bottomrule
\end{tabular}
\end{table}
Supernodal Cholesky via Cholmod in Suite\-Sparse~\cite{Cholmod} does not perform well on GPUs for these test cases, but performs better than cuSolver on CPUs.
This matches literature showing that multi\-frontal or supernodal approaches are not suitable for very sparse and irregular systems, where the dense blocks become too small, leading to an unfavorable ratio of communication versus computation~\cite{davis2010algorithm,booth2016basker,he2015gpu}. This issue is exacerbated when supernodal or multifrontal approaches are used for fine-grain parallelization on GPUs~\cite{swirydowicz2020linear}.
Our method becomes better when the ratio of \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace to Cholesky factorization time grows, because factorization is the most costly part of linear solvers and our method has more (but smaller and less costly) system solves.
\Cref{tab:solvecg} compares a direct solve of \eqref{linsys4x4} using MA57's \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization~\cite{Duff2004} and the full CG solve and direct solves on \{\eqref{Schur1}--\eqref{Schur2}\}, broken down into symbolic analysis of $H_{\delta}$, factorization of $H_{\delta}$, and CG on \eqref{Schur1} on Deception \cite{PNNLmachines}.
When Cholesky is used, symbolic analysis is needed only for the first matrix in the sequence because pivoting is not a concern. As problems grow larger, the solve phase becomes a smaller part of the total run time.
Also, our method increasingly outperforms MA57. The run time is reduced by a factor of more than $2$ on one matrix from the US Eastern Interconnection grid and more than $3$ on the whole series, because the analysis cost can be amortized over the entire optimization problem.
This motivates using our method for even larger problems.
Another advantage of our method is that it solves systems that are less than half the size of the original one, though it does have to solve
more of them. Notably, the \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization may require pivoting during the factorization, whereas Cholesky does not. With MA57, all our test cases required substantial permutations
even with a lax pivot tolerance of 0.01
(and a tolerance as large as 0.5 may be required to keep the factorization stable). This means that for our systems, \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace factorization requires considerable communication and presents a major barrier for GPU programming. On the other hand, the direct solve is generally more accurate by 2--3 orders of magnitude.
\section{Iterative vs.~direct solve with $H_{\delta}$ in \cref{alg:CGschur}}
\label{sec:itdirect}
We may ask if forming $H_{\delta} \equiv H + D_x + J_d^T D_s J_d + J^T Q J + \delta_1 I$ and its Cholesky factorization $H_{\delta} = LL^T$ is worthwhile when it could be avoided by iterative solves with $H_{\delta}$.
Systems \eqref{Schur1}--\eqref{Schur2} require two solves with $H_{\delta}$ and an iterative solve with $S = JH_{\delta}^{-1} J^T + \delta_2 I$, which includes ``inner'' solves with $H_{\delta}$ until CG converges. As we see in \cref{sec:testing},
between 6 and 92 solves with $H_{\delta}$ are needed in our cases (and possibly more in other cases). Further, the inner iterative solves with $H_{\delta}$ would degrade the accuracy compared to inner direct solves or would require an excessive number of iterations. Therefore, for iterative solves with $H_{\delta}$ to be viable, the direct solve would have to cause substantial densification of the problem (i.e., the Cholesky factor $L$ would have to be very dense).
Let $\textrm{nnz}_\textrm{op}(H_{\delta}) = \textrm{nnz}(\widetilde H) + 2\,\textrm{nnz}(J) + \nx$ be the number of multiplications when $H_{\delta}$ is applied as an operator, and $\textrm{nnz}_\textrm{fac}(H_{\delta})=2\,\textrm{nnz}(L)$ be the number of multiplications for solving systems with $H_{\delta}$. These values (generated in {\sc Matlab}\xspace) and their ratio are given in \cref{tab:dense}. The ratio is always small and does not grow with problem size, meaning $L$ remains very sparse and the factorization is efficient. As the factorization dominates the total time of a direct solve with multiple right-hand sides, this suggests that performing multiple inner iterative solves is not worthwhile.
\begin{table}[t]
\caption{\label{tab:dense} Densification of the problem for cases where the direct-iterative method is viable. Numbers are rounded to 3 digits.
K and M signify $10^3$ and $10^6$. $\textrm{nnz}_\textrm{op}(H_{\delta}) = \textrm{nnz}(\widetilde H) + 2\,\textrm{nnz}(J) + n$ is the number of multiplications when $H_{\delta}$ is applied as an operator, and $\textrm{nnz}_\textrm{fac}(H_{\delta})=2\,\textrm{nnz}(L)$ is the number of multiplications for solving systems with $H_{\delta}$.
The ratio $\textrm{nnz}_\textrm{fac}(H_{\delta})/\textrm{nnz}_\textrm{op}(H_{\delta})$
is only about 2 in all cases.}
\centering
\smallskip
\footnotesize
\begin{tabular}{lrrr} \toprule
Name & $\textrm{nnz}_\textrm{op}(H_{\delta})$ & $\textrm{nnz}_\textrm{fac}(H_{\delta})$ & ratio
\\ \midrule
Illinois & 20.5K & 34.9K & 1.70
\\ Texas & 249K & 646K & 2.59
\\ Western US & 1.05M & 2.23M & 2.11
\\ Eastern US & 7.23M & 17.7M & 2.45
\\ \bottomrule
\end{tabular}
\end{table}
\section{Summary}
\label{sec:summary}
Following the approach of Golub and Greif \cite{GG03}, we have developed a novel direct-iterative method for solving saddle point systems, and shown that it scales better with problem size than \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace on systems arising from optimal power flow~\cite{ACOPF}. The method is tailored for execution on hardware accelerators where pivoting is difficult to implement and degrades solver performance dramatically.
To solve KKT systems of the form \eqref{linsys4x4}, \cref{alg:CGschur} presents a method with an outer iterative solve and inner direct solve. The method assumes
$H_{\gamma} = H + D_x + J_d^T D_s J_d + \gamma J^T J$
is SPD (or almost) for some $\gamma\ge0$, and if necessary,
uses the minimal amount of regularization $\delta_1 \ge 0$
(to within a factor of 2)
to ensure $H_{\delta} = H_{\gamma} + \delta_1 I$ is SPD. We proved that as $\gamma$ grows large, the condition number of $H_{\gamma}$ grows linearly with $\gamma$, and the eigenvalues of the iteration matrix $S$ cluster at $1/\gamma$ (see \eqref{eq:S3}). These results provide some heuristics for choosing $\gamma$,
and explain why CG on the Schur complement system \eqref{Schur1} converges rapidly.
A future direction of research is developing a better method to select $\gamma$.
On several sequences of systems arising from applying an interior method to OPF problems, the number of CG iterations for solving \eqref{Schur1} was less than 20 on average, even though no preconditioning was used.
Four of the five matrix series were solved with $\delta_1=\delta_2=0$, and the BE for the original system \eqref{linsys4x4} was always less than $10^{-8}$. The efficiency gained by using a Cholesky factorization (instead of \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace) and avoiding pivoting is demonstrated in \Cref{tab:ldlcomp}. Even though $H_{\gamma}$ in \eqref{linsys2x2mod} is denser than $K_k$ in \eqref{linsys4x4}, its factors are sparser.
\Cref{tab:solvecg} shows that our method, when it succeeds, has better scalability than \hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace and is able to utilize GPUs. This is the most substantial result of our paper. For the fifth series (smaller than two of the others) $\delta_2=0$ worked, but $\delta_1$ had to be of order 1 and no accurate solution could be obtained. The development of robust heuristics to select $\delta_1$ and $\delta_2$, and to integrate with the filter line-search algorithm, will be the subject of future work.
The fact that the Cholesky factors are scarcely denser than the original matrix suggests that not much could be gained by using nullspace methods~\cite{Rozloznik2018} for the four sequences we were able to solve, as those require sparse LU or QR factorization of $J^T$, which is typically less efficient than sparse Cholesky factorization of $H_{\delta}$.
For the fifth sequence, and sequences similar to it, an efficient nullspace method may be better than the current fail-safe $\hbox{LDL$^{\hbox{\scriptsize \raisebox{-1pt}{\!T}}}$}\xspace$ factorization of the $4 \times 4$
system \eqref{linsys4x4}.
\section*{Acknowledgements}
This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations (the Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem---including software, applications, hardware, advanced system engineering, and early testbed platforms---to support the nation's exascale computing imperative.
We thank Research Computing at Pacific Northwest National Laboratory (PNNL) for computing support. We are also grateful to Christopher Oehmen and Lori Ross O'Neil of PNNL for critical reading of the manuscript and for providing helpful feedback. Finally, we would like to express our gratitude to Stephen Thomas of National Renewable Energy Laboratory for initiating discussion that motivated Section \ref{sec:itdirect}.
\footnotesize
\section{Introduction}\label{sec:1}
\setcounter{equation}{0}
Let $(M,g)$ be a closed connected Riemannian manifold with cotangent bundle $\pi:T^*M\to M$. Denote by $\omega_0=-d\lambda$ the canonical symplectic form on $T^*M$, where $\lambda=pdq$ is the Liouville $1$-form in local coordinates $(q,p)$ of $T^*M$. Let $(\tilde{M},\tilde{g})$ be the universal cover of $(M,g)$. For a closed $2$-form $\sigma\in \Omega^2(M)$ we denote by $\tilde{\sigma}$ the lift of $\sigma$ to $\tilde{M}$, and say that $\sigma$ is \emph{weakly exact} if $\tilde{\sigma}$ is exact. Let $D_RT^*M$ denote the open disc cotangent bundle of finite radius $R$ with respect to the metric $g$. Denote by $\Omega_w^2(M)$ the set of all weakly exact $2$-forms on $M$. For each $\sigma\in \Omega_w^2(M)$ we denote
$$\omega_\sigma:=\omega_0+\pi^*\sigma$$
the \emph{twisted symplectic form}, and call the symplectic manifold $(T^*M,\omega_\sigma)$ the \emph{twisted cotangent bundle} and $(D_RT^*M,\omega_\sigma)$ the \emph{twisted disc bundle}.
For a smooth function $H:S^1\times T^*M\to \mathbb{R}$ we are interested in the existence of periodic solutions of the associated Hamiltonian system
$$\dot{x}(t)=X_{H,\sigma}(t,x(t))\quad\forall t\in S^1,$$
where the Hamiltonian vector field $X_{H,\sigma}$ on $T^*M$ is determined by $dH_t=\iota(X_{H,\sigma})\omega_{\sigma}$. The existence problem of contractible Hamiltonian periodic solutions on the twisted cotangent bundle has been studied in a number of papers; see, e.g., \cite{CGK,FS,GG0,GG1,Lu1,Lu2,Lu3,Us}. On the other hand, non-contractible periodic orbits of the Hamiltonian systems on the ordinary cotangent bundle have been previously investigated in several papers; see, e.g., \cite{BPS,GL,Ir,We0,Xu}.
The aim of the present paper is to study certain existence results for
$1$-periodic orbits in given free homotopy classes of loops for compactly supported Hamiltonians on twisted cotangent bundles. The proof relies on the machinery of Floer homology for non-contractible periodic orbits.
In the course of the last two decades, this version of Floer homology has been studied and used among others
in 2003 by Biran, Polterovich and Salamon~\cite{BPS} on $T^*M$ for
$M=\mathbb{T}^n$ or $M$ closed and
negatively curved, in 2006 by Weber~\cite{We0} on $T^*M$ for all closed Riemannian manifolds $M$, in 2013 by G\"{u}rel~\cite{Gu} on closed symplectic manifolds for simple non-contractible periodic orbits of Hamiltonian diffeomorphisms with arbitrarily large periods, and in 2016 by Ginzburg and G\"{u}rel~\cite{GG1} on closed symplectic manifolds for infinitely many non-contractible periodic orbits. For further references
concerning the existence of non-contractible orbits we refer to \cite{Ba,Ci,GG2,GL,KO,Ni,Xu}.
The main difficulty in applying this technique to detect non-contractible periodic orbits of the Hamiltonian flow of $H$ is to compute the filtered Floer homology ${\rm HF}_\alpha^{(a,b)}(H)$. It is well known that the method adopted by
Biran, Polterovich and Salamon in~\cite{BPS} has been applied successfully to many other situations~\cite{CGK,GG0,GG1,Us,Xu,KO}. The basic strategy is to examine the commutative diagram
\begin{equation}\notag\label{e:ch}
\xymatrix{{\rm HF}_\alpha^{(a,b)}(H_0)
\ar[rr]^{\Psi_{H_1H_0}}\ar[dr]_{\Psi_{HH_0}}& & {\rm HF}_\alpha^{(a,b)}(H_1)\\ & {\rm HF}_\alpha^{(a,b)}(H)\ar[ur]_{\Psi_{H_1H}} & }
\end{equation}
where $H_0\leq H\leq H_1$ and $H_0$ and $H_1$ are time-independent Hamiltonians with ${\rm HF}_\alpha^{(a,b)}(H_0)$ and ${\rm HF}_\alpha^{(a,b)}(H_1)$
being not hard to compute. However, in our cases, one cannot directly use the method mentioned above, which is based on Pozniak's Theorem~\cite{Poz}, since the sets of critical points of the symplectic action functionals of the squeezing functions $H_0$ and $H_1$, in general, are not Morse-Bott in the sense of~\cite{BPS}. Instead, we interpret the Floer homology in the twisted cotangent bundle as a ``small perturbation'' of the Floer homology in the ordinary cotangent bundle and show the isomorphism between them. Of course we shall have to make suitable assumptions about the twisted symplectic forms. For a nice introduction to the theories of deformations of Floer homology on a closed symplectic manifold under symplectic perturbations we refer to the papers by Ritter~\cite{Ri}, Zhang~\cite{Zh} and Bae and Frauenfelder~\cite{BF}. It is worth mentioning that the monotone homomorphisms (see Section~\ref{subsec:MH}) are preserved under the isomorphisms which we obtain in this paper. This observation, together with the computation of Floer homology given by Weber~\cite{We0}, helps us to circumvent the difficulty of computing the Floer homology on the twisted disc bundle directly. Our results can be seen as a twisted version of Weber's results~\cite{We0} which, in particular, recover a result of Niche~\cite{Ni}.
\subsection{Main results}\label{sec:1.1}
\setcounter{equation}{0}
\begin{definition}
{ \rm
A $1$-form $\tau\in \Omega^1(\tilde{M})$ is said to be a primitive of $\sigma$ with \emph{at most linear growth} on the universal cover $\tilde{M}$ if $d\tau=\tilde{\sigma}$ and, for any $x\in \tilde{M}$, there exists a constant $\varrho_x>0$ such that for all $s\geq 0$,
}
\begin{equation}\label{e:LG}
\sup\limits_{z\in B_x(s)}
\|\tau_z\|_{\tilde{g}}\leq \varrho_x(s+1).
\end{equation}
\end{definition}
\noindent Here $B_x(s)$ denotes the geodesic ball in $\tilde{M}$ of radius $s>0$ centered at $x$. Since $M$ is compact, the above definition is independent of the choice of the metric $g$ on $M$. Put
\begin{equation}
u_{\sigma,g,x}(s)=\inf\limits_{\tau\in \mathcal{F}_\sigma}\sup\limits_{z\in B_x(s)}
\|\tau_z\|_{\tilde{g}}\quad \forall s>0,
\end{equation}
where $\mathcal{F}_\sigma=\{\tau\in\Omega^1(\tilde{M})|d\tau=\tilde{\sigma}\}$.
We call $u_{\sigma}=u_{\sigma,g,x}:\mathbb{R}_+\to\mathbb{R}_+$ the \emph{cofilling function} of $\sigma\in\Omega_w^2(M)$; see \cite{BF,Gr,Po}. If $g^\prime$ is another Riemannian metric on $M$ and $x^\prime\in\tilde{M}$ is a different base point then it is easy to check that $L^{-1}u_{\sigma,g,x}\leq u_{\sigma,g^\prime,x^\prime}\leq L u_{\sigma,g,x}$ for some constant $L>0$.
We define the set $\mathcal{P}(M)$ as
\begin{equation}\label{e:CW2F}
\mathcal{P}(M):=\{\sigma\in\Omega_w^2(M)\,|\,\hbox{there exists a constant}\;\epsilon>0\;\hbox{such that}\; u_\sigma(s)\leq \epsilon(s+1)\;\forall s>0\}.
\end{equation}
\noindent Notice that
each $\sigma \in \mathcal{P}(M)$, by definition, admits a primitive with at most linear growth on the universal cover $\tilde{M}$, and hence $\sigma|_{\pi_2(M)}=0$. So $\omega_\sigma$ is \emph{symplectically aspherical}, which means that the symplectic form $\omega_\sigma$ vanishes on $\pi_2(T^*M)$.
\begin{example}\label{ex:3exa}
{\rm
Let us list some manifolds on which there exists a weakly exact $2$-form admitting a primitive with at most linear growth on the universal cover; see, e.g., Gromov \cite{Gr}, Polterovich \cite{Po}, Sikorav \cite{Si} and references therein.
\begin{enumerate}
\item If a Riemannian manifold $M$ is closed and of non-positive curvature then every closed 2-form $\sigma$ on $M$ admits a primitive with at most linear growth on $\tilde{M}$. Moreover, whenever $M$ admits a metric of negative curvature, $\tilde{\sigma}$ admits a bounded primitive, that is, there exists a $1$-form $\tau\in \Omega^1 (\tilde{M})$ such that $d\tau=\tilde{\sigma}$ and
\begin{equation}\label{e:bp}
\sup\limits_{x\in \tilde{M}}\|\tau_x\|<\infty.
\end{equation}
In particular, from (1) we have the following examples:
\item Assume that $M$ is a closed oriented Riemannian surface with infinite fundamental group. Then the volume form on $M$ admits a primitive with at most linear growth on $\tilde{M}$.
\item For the standard symplectic torus $(\mathbb{T}^{2n},\sum_{i=1}^{n}dx_i\wedge dy_i)$ it holds that $\sum_{i=1}^{n}dx_i\wedge dy_i\in \mathcal{P}(\mathbb{T}^{2n})$. In fact, for $n$-dimensional tori $\mathbb{T}^n$ any closed non-exact 2-form $\sigma$ admits a primitive with at most linear growth on the universal cover $\mathbb{R}^n$ but does not satisfy~(\ref{e:bp}); see \cite{FMP0}.
\end{enumerate}
}
\end{example}
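For concreteness, here is an explicit primitive in case (3), included as a direct verification for illustration. Lift $\sigma=\sum_{i=1}^{n}dx_i\wedge dy_i$ on $\mathbb{T}^{2n}$ to $\tilde{\sigma}$ on the universal cover $\mathbb{R}^{2n}$ and set
$$\tau=\sum_{i=1}^{n}x_i\,dy_i\in\Omega^1(\mathbb{R}^{2n}),\qquad d\tau=\tilde{\sigma}.$$
With respect to the flat metric, $\|\tau_z\|=\big(\sum_{i=1}^{n}x_i(z)^2\big)^{1/2}\leq \|z\|$, so $\sup_{z\in B_0(s)}\|\tau_z\|\leq s\leq s+1$, and (\ref{e:LG}) holds at the base point $0$ with $\varrho_0=1$ (for an arbitrary base point $x$ one may take $\varrho_x=\max\{1,\|x\|\}$). Thus $\tau$ has at most linear growth; note that this particular $\tau$ is unbounded, in accordance with the failure of (\ref{e:bp}) recorded above.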
\begin{remark}\label{rem:LGC}
{\rm
When $(M,g)$ is a closed Riemannian manifold of negative curvature, the constant $\epsilon=\epsilon(\sigma)$ in $\mathcal{P}(M)$ can be chosen to converge to zero as $|\sigma|_g\to0$. Hereafter we denote $|\sigma|_g\overset{\rm def}{=}\sup_{x\in M}\|\sigma(x)\|_g$. Indeed, there exists a universal constant $\rho>0$ such that every closed $2$-form $\beta\in\Omega^2(M)$ with $|\beta|_g\leq 1$ satisfies $u_\beta(s)\leq \rho$ for all $s\in[0,\infty)$; see Gromov~\cite[$5. {\rm B}_5$]{Gr}. Then for any closed 2-form $\sigma\in\Omega^2(M)$ the rescaled form $\hat{\sigma}=\sigma/|\sigma|_g$ satisfies $|\hat{\sigma}|_g=1$, $\hat{\sigma}\in\mathcal{P}(M)$ and $u_{\hat{\sigma}}(s)\leq \rho$. This implies that $u_{\sigma}(s)\leq \epsilon(\sigma):=\rho|\sigma|_g$ and $\epsilon(\sigma)\to 0$ as $|\sigma|_g\to 0$.
}
\end{remark}
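Let us also record the elementary homogeneity behind the rescaling step: if $d\tau=\tilde{\beta}$ then $d(c\tau)=\widetilde{c\beta}$ for every $c\in\mathbb{R}$, so $\mathcal{F}_{c\beta}=c\,\mathcal{F}_{\beta}$ for $c\neq0$ and hence
$$u_{c\beta,g,x}(s)=|c|\,u_{\beta,g,x}(s)\qquad\forall\,s>0.$$
Applying this with $\beta=\hat{\sigma}$ and $c=|\sigma|_g$ gives precisely $u_{\sigma}(s)=|\sigma|_g\,u_{\hat{\sigma}}(s)\leq\rho|\sigma|_g$.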
Let $S^1=\mathbb{R}/\mathbb{Z}$, and denote the free loop space of $M$ by $\mathfrak{L} M:=C^\infty(S^1,M)$. Given a free homotopy class $\alpha\in[S^1,M]$ we define
$$\mathfrak{L}_\alpha M:=\{\gamma\in \mathfrak{L} M\big|[\gamma]=\alpha \}.$$
We use the symbol $\Lambda_\alpha$ for the set of lengths of all periodic geodesics in $M$ with respect to $g$ which represent $\alpha$. It is a closed and nowhere dense subset of $\mathbb{R}$ (see~\cite[Lemma 3.3]{We0}), and hence there exists a periodic geodesic $q$ such that
$$l_\alpha:=\int_{S^1}\|\dot{q}(t)\|_gdt=\inf\Lambda_\alpha.$$
\begin{definition}
{\rm Let $\sigma$ be a two-form on $M$. A class $\alpha\in [S^1,M]$ is said to be a \emph{$\sigma$-atoroidal class} if any map $f:S^1\to \mathfrak{L}_\alpha M$, thought of as a map $f:\mathbb{T}^2\to M$, satisfies $\int_{\mathbb{T}^2}f^*\sigma=0$.
}
\end{definition}
\noindent Note that if $\sigma$ is weakly exact, then the class $0$ of nullhomotopic loops is atoroidal, since both statements are equivalent to the statement that $\sigma\big|_{\pi_2(M)}=0$.
\begin{remark}\label{rem:NC}
{\rm If $M$ admits a metric of negative curvature, then every smooth map $f:\mathbb{T}^2\to M$ induces the zero map
$f^*:H^2_{dR}(M,\mathbb{R})\to H_{dR}^2(\mathbb{T}^2,\mathbb{R})$; see~\cite[Lemma~2.3]{Me} or \cite{Ni}. Therefore, for any closed $2$-form $\sigma\in \Omega^2 (M)$, every free homotopy class $\alpha\in [S^1,M]$ is a $\sigma$-atoroidal class. Another example is the $3$-dimensional torus $\mathbb{T}^3=\{(x,y,z)|x,y,z\in\mathbb{R}/\mathbb{Z}\}$ with $\sigma=dx\wedge dy$. It is easy to check that the homotopy class $\alpha=(0,0,n)$ for any integer $n$ is $\sigma$-atoroidal, see also~\cite{FMP0}.
}
\end{remark}
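Here is the direct check for the $\mathbb{T}^3$ example, included for the reader's convenience (compare~\cite{FMP0}). A smooth map $f:\mathbb{T}^2\to\mathbb{T}^3$ whose restriction $f(s,\cdot)$ represents $\alpha=(0,0,n)$ is homotopic to a linear map
$$L(s,t)=(ps,\,qs,\,rs+nt)\quad\hbox{with}\;p,q,r\in\mathbb{Z},$$
because tori are Eilenberg-MacLane spaces. Since $\sigma=dx\wedge dy$ is closed, $\int_{\mathbb{T}^2}f^*\sigma=\int_{\mathbb{T}^2}L^*\sigma$, and $L^*(dx\wedge dy)=(p\,ds)\wedge(q\,ds)=0$. Hence every torus of loops in the class $\alpha$ has zero $\sigma$-area, i.e.\ $\alpha$ is $\sigma$-atoroidal.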
Throughout this paper, for the sake of brevity, we always assume that $\sigma \in \mathcal{P}(M)$ and that $\alpha\in[S^1,M]$ is a $\sigma$-atoroidal class, unless explicitly stated otherwise. Given a homotopy class $\alpha$ of free loops in $M$, denote by $\mathscr{P}_{\alpha}(H,\sigma)$ the set of periodic orbits of $X_{H,\sigma}$:
$$\mathscr{P}_{\alpha}(H,\sigma)=\{x\in C^\infty(S^1,T^*M)\big|\dot{x}(t)=X_{H,\sigma}(t,x(t))\quad\forall t\in S^1,\;[\pi(x)]=\alpha \}.$$
Our first main result establishes the following existence of non-contractible periodic orbits for compactly supported Hamiltonians on twisted cotangent bundles.
\begin{theorem}\label{thm:1}
Let $(M,g)$ be a closed connected Riemannian manifold, and let $D_RT^*M$ be the open disc cotangent bundle of finite radius $R$ with respect to the metric $g$. Assume that $\sigma \in \mathcal{P}(M)$ and $\alpha\in[S^1,M]$ is any $\sigma$-atoroidal class. For every compactly supported Hamiltonian $H\in C^\infty(S^1\times D_RT^*M)$ with
$$\sup\limits_{S^1\times M}H< -Rl_{\alpha}$$
there exists $\delta_0(H,g,\sigma,\alpha)>0$ such that if $|\delta|<\delta_0(H,g,\sigma,\alpha)$ then $\mathscr{P}_{\alpha}(H,\delta\sigma)\neq \emptyset$.
\end{theorem}
Moreover, if $M$ is a closed and negatively curved manifold we have the following.
\begin{theorem}[Niche~\cite{Ni}]\label{thm:2}
Assume that $(M,g)$ is a closed Riemannian manifold of negative curvature, and that $\alpha\in[S^1,M]$ is any free homotopy class. Denote by $\sigma$ a closed $2$-form on $M$. Let $H\in C^\infty(S^1\times D_RT^*M)$ be a compactly supported function.
Then there exist some positive constants $C=C(\alpha)$ and $\delta_0=\delta_0(H,g,\alpha)$ such that if
$$\inf\limits_{S^1\times M}H>C$$ then $\mathscr{P}_{-\alpha}(H,\sigma)\neq \emptyset$ whenever $|\sigma|_g<\delta_0$.
\end{theorem}
\begin{remark}
{\rm In \cite{Ni} the author used Pozniak's theorem to compute Floer homology of the squeezing functions $K^{\pm}$ for a twisted cotangent bundle $T^*M$ of a negatively curved manifold $M$, see~\cite[Proposition~3]{Ni}. He proved that the set $\mathcal{P}_\omega(\rho)$ of $1$-periodic orbits of Hamiltonian $K^+$ with respect to the twisted symplectic form $\omega$ is homeomorphic to $S^1$, see lines 20-25, page 627 in \cite{Ni}. However, it seems very difficult for us to verify the condition that $\mathcal{P}_\omega(\rho)$ is a Morse-Bott manifold, meaning that $C_0=\{x(0):x\in \mathcal{P}_\omega(\rho)\}$ is a compact submanifold $M$ and $T_{x_0}C_0={\rm Ker}(d\phi_1(x_0)-Id)$ for every $x_0\in C_0$, where $\phi_1$ is the time one flow of $K^+$ with respect to $\omega$ (not to $\omega_0$!). This condition is the key to make use of Pozniak's theorem. For this reason, we devise a different method to show Niche's result, which is a partial motivation of our work.
}
\end{remark}
Theorem~\ref{thm:1} (resp.\ Theorem~\ref{thm:2}) is a soft consequence of the invariance of Floer homology under symplectic deformations (see Theorem~\ref{thm:Invariance} (resp.\ Theorem~\ref{thm:Inv})).
As a result, to obtain periodic orbits one needs to bound the magnitude of the weakly exact $2$-form in terms of $H$. To get rid of the dependence on $H$, we introduce a class of Hamiltonian functions compactly supported in $D_RT^*M$, and show the finiteness of a symplectic capacity defined by it. To state this result, we need to put more restrictive assumptions on $H$, and the capacity defined here is slightly different from the \emph{Biran-Polterovich-Salamon (BPS) capacity} (for the original definition see~\cite{BPS,We0}). These additional constraints on the class of Hamiltonians defining the capacity are natural because we will apply Theorem~\ref{thm:Invariance} (resp.\ Theorem~\ref{thm:Inv}) to two \textit{fixed} Hamiltonians which simultaneously sandwich the whole class of functions used to estimate the capacity, see~Figure~\ref{fig:9}.
Let us denote by $\mathscr{P}_{\alpha}(H,\sigma;\tau)$ the set of $\tau$-periodic orbits of $X_{H,\sigma}$ representing $\alpha\in[S^1,M]$ (identifying $S^1$ with $\mathbb{R}/(\tau\mathbb{Z})$). We call a periodic orbit $x\in \mathscr{P}_{\alpha}(H,\sigma;\tau)$ \emph{fast} if $0<\tau\leq 1$; otherwise, we call it \emph{slow}.
Let $W$ be an open subset of $T^*M$ containing $M$, and let $U$ be an open subset of $T^*M$ with compact closure $\bar{U}\subset W$.
Given a number $A>0$, denote by $\mathscr{H}(W,U,A)$ the class of smooth functions $H:W\to \mathbb{R}$ such that
\begin{itemize}
\item[(H0)] $H(x)\leq 0$, for all $x\in W$;
\item[(H1)] $H$ is compactly supported in $U$; and
\item[(H2)] $\inf_{W}H>-A$.
\end{itemize}
Let $M\subset V\subset U$, where $V$ is an open subset of $T^*M$. For $c\in(0,A]$, denote
$$\mathscr{H}_c(W,U,V,A):=\big\{H\in\mathscr{H}(W,U,A) \big| \sup_{V}H\leq-c \big\}.
$$
Then the \emph{restricted BPS capacity $\hat{c}_{\rm BPS}$} is defined as
$$\hat{c}_{\rm BPS}(W,U,V,A;\sigma,\alpha)=\inf\big\{c>0\big|\forall H\in \mathscr{H}_c(W,U,V,A),\;\hbox{there is a fast periodic orbit of $H$}\big\}.$$
Here we use the convention that $\inf\emptyset=\infty$.
\begin{remark}
{\rm It is easy to check that if $\hat{c}_{\rm BPS}(W,U,V,A;\sigma,\alpha)<\infty$, then every $H\in\mathscr{H}(W,U,A)$ satisfying $\sup_{V}H\leq-\hat{c}_{\rm BPS}(W,U,V,A;\sigma,\alpha)$ has at least one fast periodic orbit (with respect to the twisted symplectic form $\omega_{\sigma}=\omega_0+\pi^*\sigma$) whose projection to $M$ represents $\alpha$.
}
\end{remark}
In what follows, let us denote $U_R:=D_RT^*M$,
and let $V$ be an open subset of $U_{R-\rho}$ containing $M$, where $0<\rho<R$.
\begin{theorem}\label{thm:3}
Let $(M,g)$ be a closed connected Riemannian manifold. Denote $V,A,\rho$ as above. Assume that $\sigma \in \mathcal{P}(M)$ and $\alpha\in[S^1,M]$ is any $\sigma$-atoroidal class. If $A>Rl_\alpha$, then for any sufficiently small number $\epsilon>0$ there exists a constant $\delta_0=\delta_0(g,\sigma,\alpha, V,A,\rho,\epsilon)>0$ such that for every $\delta\in(-\delta_0,\delta_0)$ it holds that
$$\hat{c}_{\rm BPS}(U_R,U_{R-\rho},V,A;\delta\sigma,\alpha)\leq Rl_\alpha+\epsilon.$$
\end{theorem}
\begin{theorem}\label{thm:4}
Let $(M,g)$ be a closed Riemannian manifold of negative curvature, and let $\alpha\in[S^1,M]$ be a free homotopy class. Denote $V,A,\rho$ as above, and denote by $\sigma$ any closed $2$-form on $M$. If $A>Rl_\alpha$, then for any sufficiently small number $\epsilon>0$ there exists a constant $\delta_0=\delta_0(g,\alpha, V,A,\rho,\epsilon)>0$ such that if $|\sigma|_g<\delta_0$ then we have
$$\hat{c}_{\rm BPS}(U_R,U_{R-\rho},V,A;\sigma,\alpha)\leq Rl_\alpha+\epsilon.$$
\end{theorem}
Fix $\rho_0\in (0,R-\rho)$ and take $V=U_{\rho_0}$. By the definition of $\hat{c}_{\rm BPS}$ and a scaling argument as in the proof of Proposition~\ref{pro:nonexistence}, one can obtain
$$\delta\cdot \hat{c}_{\rm BPS}(U_R,U_{R-\rho},U_{
\rho_0},A;\sigma,\alpha)=\hat{c}_{\rm BPS}(U_{\delta R},U_{\delta(R-\rho)},U_{
\delta\rho_0},\delta A;\delta\sigma,\alpha)$$
for any positive number $\delta$. Combining this with Theorem~\ref{thm:3} and Theorem~\ref{thm:4} we have the following two theorems.
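Before stating them, here is a minimal sketch of the scaling identity above, assuming that the rescaling is realized, as in the proof of Proposition~\ref{pro:nonexistence}, by the fiberwise dilation $\Phi_\delta(q,p)=(q,\delta p)$. Since $\Phi_\delta^*\lambda=\delta\lambda$ and $\pi\circ\Phi_\delta=\pi$, we have
$$\Phi_\delta^*\omega_{\delta\sigma}=\delta\omega_0+\delta\pi^*\sigma=\delta\,\omega_{\sigma}.$$
Hence, if $K:=\delta\,(H\circ\Phi_\delta^{-1})$, then $\iota\big((\Phi_\delta)_*X_{H,\sigma}\big)\omega_{\delta\sigma}=dK$, that is, $X_{K,\delta\sigma}=(\Phi_\delta)_*X_{H,\sigma}$, so $\Phi_\delta$ maps fast periodic orbits of $X_{H,\sigma}$ representing $\alpha$ bijectively onto those of $X_{K,\delta\sigma}$. Moreover, $H\mapsto K$ takes $\mathscr{H}_c(U_R,U_{R-\rho},U_{\rho_0},A)$ onto $\mathscr{H}_{\delta c}(U_{\delta R},U_{\delta(R-\rho)},U_{\delta\rho_0},\delta A)$, which yields the displayed identity.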
\begin{theorem}\label{thm:3'}
Let $(M,g)$ be a closed connected Riemannian manifold. Denote $A,\rho_0, \rho$ as above. Assume that $\sigma \in \mathcal{P}(M)$ and $\alpha\in[S^1,M]$ is any $\sigma$-atoroidal class. If $A>Rl_\alpha$, then for any sufficiently small number $\epsilon>0$ there exists a constant $\delta_0=\delta_0(g,\sigma,\alpha, A,\rho,\rho_0,\epsilon)>0$ such that for any $\lambda\in (0,+\infty)$ it holds that
$$\hat{c}_{\rm BPS}(U_{\lambda R},U_{\lambda(R-\rho)},U_{\lambda\rho_0},\lambda A;\delta_0\lambda\sigma,\alpha)\leq\lambda(Rl_\alpha+\epsilon).$$
\end{theorem}
\begin{theorem}\label{thm:4'}
Let $(M,g)$ be a closed Riemannian manifold of negative curvature, and let $\alpha\in[S^1,M]$ be a free homotopy class. Denote $A,\rho,\rho_0$ as above, and denote by $\sigma$ any closed $2$-form on $M$. If $A>Rl_\alpha$, then for any sufficiently small number $\epsilon>0$ there exists a constant $\delta_0=\delta_0(g,\alpha, A,\rho,\rho_0,\epsilon)>0$ such that
$$\hat{c}_{\rm BPS}(U_{\delta_0R|\sigma|_g},U_{\delta_0(R-\rho)|\sigma|_g},U_{\delta_0\rho_0|\sigma|_g},\delta_0A|\sigma|_g;\sigma,\alpha)\leq\delta_0(Rl_\alpha+\epsilon)|\sigma|_g.$$
\end{theorem}
\begin{remark}
{\rm
In the definition of the restricted BPS capacity, one can also allow $H$ to be time-dependent. In this case, we require $H_t$, $t\in[0,1]$, to be periodic in time, that conditions (H0)-(H2) hold uniformly for $t\in S^1$, and that $\sup_{S^1\times V}H\leq -c$ in the definition of $\mathscr{H}_c(W,U,V,A)$. Then one can show that Theorem~\ref{thm:3} -- Theorem~\ref{thm:4'} still hold.
}
\end{remark}
As with the Hofer-Zehnder capacity, the finiteness of the restricted BPS capacity implies an almost existence result for periodic orbits.
Consider a proper\footnotemark[1]\footnotetext[1]{A map is called proper if preimages of compact sets are compact.} smooth function
$$H:T^*M\to \mathbb{R}$$
which is bounded from below. Then $\{H\leq s\}$ is compact, and hence $\{H<s\}$ is contained in $D_rT^*M$ for some sufficiently large radius $r=r(s)$. Suppose that $M\subset \bar{V}\subset\{H< d\}$, where $V$ is an open neighborhood of $M$ in $T^*M$. Given $\rho>0$ and $T>d$, let $A>(r(T+\rho)+\rho)l_\alpha$. Then
the finiteness of $\hat{c}_{\rm BPS}(U_{r(T+\rho)+\rho},U_{r(T+\rho)},V,A;\delta\sigma,\alpha)$ implies the following.
\begin{theorem}[Almost existence theorem]\label{thm:AET1}
Let $(M,g)$ be a closed connected Riemannian manifold. Assume $\sigma \in \mathcal{P}(M)$ and $\alpha\in[S^1,M]$ is a non-trivial $\sigma$-atoroidal class. Let $H:T^*M\to \mathbb{R}$ be a proper smooth function bounded from below, and denote $d,V,A,T,\rho$ as above. Then there exists a constant $\delta_0=\delta_0(g,\sigma,\alpha, V, T, A,$ $\rho)>0$ such that
if $\delta\in(-\delta_0,\delta_0)$ then for almost all $s\in[d,T]$ the level set $\{H=s\}$ carries a periodic Hamiltonian orbit with respect to $\omega_{\delta\sigma}=\omega_0+\delta\pi^*\sigma$ whose projection to $M$ represents $\alpha$.
\end{theorem}
Similarly, for manifolds of negative curvature the finiteness of $\hat{c}_{\rm BPS}(U_{r(T+\rho)+\rho},$ $U_{r(T+\rho)},V,A;\sigma,\alpha)$ implies the following.
\begin{theorem}[Almost existence theorem]\label{thm:AET2}
Let $(M,g)$ be a closed Riemannian manifold of negative curvature, and let $\alpha\in[S^1,M]$ be any non-trivial free homotopy class. Let $H:T^*M\to \mathbb{R}$ be a proper smooth function bounded from below. Denote by $\sigma$ any closed $2$-form on $M$, and denote $d,V,A,T,\rho$ as above. Then there exists a constant $\delta_0=\delta_0(g,\alpha, V, T, A,\rho)>0$ such that
if $|\sigma|_g<\delta_0$ then for almost every $s\in[d,T]$ the level set $\{H=s\}$ carries a periodic Hamiltonian orbit with respect to $\omega_{\sigma}=\omega_0+\pi^*\sigma$ whose projection to $M$ represents $\alpha$.
\end{theorem}
\subsection{Applications}\label{sec:1.2}
\setcounter{equation}{0}
Recall that the Hamiltonian function $H_g(q,p)=\|p\|^2_g/2$ is called the \emph{standard kinetic energy} on $T^*M$; the Hamiltonian flow of $X_{H_g,\sigma}$ on $T^*M$, called a \emph{twisted geodesic flow}, describes the motion of a charge on $M$ in the \emph{magnetic field} $\sigma$. For related results on the existence of periodic orbits of twisted geodesic flows, we refer to the papers by Ginzburg and G\"{u}rel~\cite{GG0,GG1} using the Floer theory for contractible periodic orbits, by Lu~\cite{Lu3} applying pseudo symplectic capacities for magnetic fields given by symplectic forms, and by Frauenfelder and Schlenk~\cite{FS} using the spectral metric when the two-form $\sigma$ is exact. Recently, Asselle and Benedetti~\cite{AB} showed, based on variational methods, that for almost all energy levels above the maximum critical value of an autonomous Tonelli Hamiltonian there exists a periodic magnetic geodesic. By the same method, Abbondandolo, Macarini, Mazzucchelli and Paternain~\cite{AMMP} proved the existence of infinitely many periodic orbits of exact magnetic flows on surfaces for almost every subcritical energy level. It is worth mentioning that Ginzburg~\cite{Gi1} also gave a counterexample and showed the nonexistence of closed trajectories of $H_g$; see~\cite{Gi0} for a more comprehensive survey. Numerous results about periodic orbits of the Hamiltonian flow in twisted cotangent bundles can be found in \cite{FMP0,FMP1,GK,Lu1,Lu2,Us,Xu}. For $H_g$ the function $r(s)$ can be taken as $\sqrt{2s}$ since $\{H_g\leq s\}=\{\|p\|_g\leq\sqrt{2s}\}$. So by using Theorem~\ref{thm:AET1} we obtain the following.
\begin{corollary}\label{cor:AET1}
Let $(M,g)$ be a closed connected Riemannian manifold. Assume $\sigma \in \mathcal{P}(M)$ and $\alpha\in[S^1,M]$ is a non-trivial $\sigma$-atoroidal class. Then for $0<d<T$,
there exists a positive constant $\delta_0=\delta_0(g,\sigma,\alpha,d,T)$ such that if $\delta\in(-\delta_0,\delta_0)$, then for almost all $s\in[d,T]$ the energy level $\{H_g=s\}$ carries a periodic orbit of $X_{H_g,\delta\sigma}$ whose projection to $M$ represents $\alpha$.
\end{corollary}
Similarly, Theorem~\ref{thm:AET2} implies the following.
\begin{corollary}\label{cor:AET2}
Let $(M,g)$ be a closed Riemannian manifold of negative curvature, and let $\alpha\in[S^1,M]$ be any non-trivial free homotopy class. Denote by $\sigma$ any closed $2$-form on $M$. Then for $0<d<T$, there exists a positive constant $\delta_0=\delta_0(g,\alpha,d,T)$ such that if $|\sigma|_g<\delta_0$, then for almost every $s\in[d,T]$ the energy level $\{H_g=s\}$ carries a periodic orbit of $X_{H_g,\sigma}$ whose projection to $M$ represents $\alpha$.
\end{corollary}
\begin{remark}
{\rm
The assertions of the above corollaries overlap with Merry's results~\cite{Me0}. Indeed, Corollary~\ref{cor:AET1} complements~\cite[Theorem 1.1]{Me0} in the case that the magnetic field $\sigma$ has no bounded primitive in the universal cover. On the other hand, \cite[Theorem 1.1]{Me0} implies Corollary~\ref{cor:AET2}. Merry actually showed a stronger result whenever $M$ admits a metric $g$ of negative curvature:
for each $d>0$ and each nontrivial homotopy class $\alpha\in[S^1,M]$, there is a constant $c=c(g,d)>0$ such that for every $s\in(d,\infty)$ and every closed $2$-form $\sigma$ with $|\sigma|_g<c$ the energy level $\{H_g=s\}$ carries a periodic orbit of $X_{H_g,\sigma}$ whose projection to $M$ represents $\alpha$.
}
\end{remark}
\begin{remark}
{\rm When $M$ is simply connected and $\sigma$ is a non-exact closed $2$-form on $M$, our method in this paper does not apply to the twisted cotangent bundle $(T^*M,\omega_\sigma)$ since $\omega_\sigma$ is not weakly exact and hence the Floer action functional is no longer single-valued in this case. Fortunately, very recently this weak exactness assumption has been removed in the context of a general symplectic cohomology for magnetic cotangent bundles, see~\cite{BR,GM}. In particular, the case $M=S^2$ is also considered in~\cite{BR} by Benedetti and Ritter.
}
\end{remark}
The paper is organized as follows. In Section~\ref{sec:2} we recall some background results and definitions of the filtered Floer homology on the twisted cotangent bundle. Section~\ref{sec:3} is mainly devoted to proving the invariance of Floer homology under symplectic deformations.
Before the proof,
the action spectrum properties (see Lemma~\ref{lem:NAS1} and Lemma~\ref{lem:NAS2}) are established in this section. The goal of Section~\ref{sec:4} is to compute the Floer homology in $T^*M$ with its canonical symplectic form $\omega_0$.
In Section~\ref{sec:5} we prove the main theorems and discuss the flows without non-contractible periodic orbits.
\section*{Acknowledgments}
I am deeply grateful to my Ph.D. advisor Guangcun Lu for introducing me to symplectic geometry and for valuable suggestions. My special thanks go to my colleague Xingpeng Dong for teaching me to draw beautiful figures. I thank Rongrong Jin and Kun Shi for helpful discussions. I am also grateful to Viktor L. Ginzburg for useful comments and especially for his ideas in the proof of Proposition~\ref{pro:nonexistence}. Finally,
I am greatly indebted to the anonymous referee for the very careful reading, for helpful suggestions to improve the paper, and for pointing out how to obtain Theorem~\ref{thm:3'} and Theorem~\ref{thm:4'}.
\section{Floer homology}\label{sec:2}
\setcounter{equation}{0}
\subsection{Preliminaries}
The Riemannian metric $g$ on $M$ induces a metric $\langle\cdot,\cdot\rangle$ on $TM$ and a horizontal-vertical splitting of $TT^*M$, together with isomorphisms
$$T_zT^*M=T^h_zT^*M\oplus T_z^vT^*M\cong T_qM\oplus T_q^*M\cong T_qM\oplus T_qM,\quad z=(q,p)\in T^*M.$$
The above splitting gives rise to the almost complex structure $J_g$ on $T^*M$ represented by
\begin{eqnarray}\label{e:Jg}
J_g=\begin{pmatrix}
0 & -I \\ I & 0
\end{pmatrix}.
\end{eqnarray}
Recall that an almost complex structure $J$ on a symplectic manifold $(W,\omega)$ is \emph{$\omega$-compatible} if the bilinear form $\omega(\cdot,J\cdot)$ defines a Riemannian metric on $W$.
It is easy to check that $J_g$ is $\omega_0$-compatible for every Riemannian metric $g$ on $M$. Denote $G_g(\cdot,\cdot):=\omega_0(\cdot,J_g\cdot)$. When $W$ is a manifold with a Riemannian metric $G$, various $L^\infty$-norms $|\cdot|_G$ are defined by
$$|v|_{G}:=\sup\limits_{x\in W}\|v(x)\|_G\quad \forall \;v\in \Gamma(TW),\quad |\theta|_G:=\sup_{x\in W}\|\theta(x)\|_G\quad \forall \;\theta\in\Omega^k(W),$$
$$|X|_{G}:=\sup\limits_{x\in W}\sup\{\|X(x)v\|_G\big|v\in T_xW,\;\|v\|_G=1\}\quad \forall \;X\in\Gamma({\rm End} (TW)).$$
Let $\mathscr{J}(T^*M)$ be the set of one-periodic almost complex structures on $T^*M$ with finite $|\cdot|_{G_g}$-norm, and denote $$\mathscr{J}(\omega_\sigma):=\{J\in\mathscr{J}(T^*M):J\;\hbox{is $\omega_\sigma$-compatible on $(T^*M,\omega_\sigma)$}\}.$$
Floer homology is the main tool utilized in this paper to prove the existence of periodic orbits of the Hamiltonian flow in twisted cotangent bundles. To define the Floer homology of compactly supported functions on a non-compact symplectically aspherical manifold we need to impose certain conditions on the manifold at infinity.
\begin{definition}
{\rm
We say that a symplectic manifold $(W,\omega)$ without boundary is \emph{geometrically bounded} if there exists an almost complex structure $J$ and a complete Riemannian metric $g$ on $W$ such that
\begin{enumerate}
\item $J$ is uniformly $\omega$-tame, which means that for all tangent vectors $X$ and $Y$ to $W$
$$\omega(X,JX)\geq \kappa_1 \|X\|_g^2\quad\hbox{and}\quad|\omega(X,Y)|\leq \kappa_2\|X\|_g\|Y\|_g$$
for some positive constants $\kappa_1$ and $\kappa_2$.
\item The injectivity radius of $(W,g)$ is bounded away from zero and the sectional curvature of $(W,g)$ is bounded from above.
\end{enumerate}
}
\end{definition}
\noindent Obviously, closed symplectic manifolds are geometrically bounded; a product of two geometrically bounded symplectic manifolds is again such a manifold. For a more detailed discussion of this concept we refer to Chapters V (by J.-C. Sikorav) and X (by M. Audin, F. Lalonde and L. Polterovich) in \cite{AL}. One can easily check that manifolds convex at infinity, e.g., $(\mathbb{R}^{2m},\sum_{i=1}^{m}dx_i\wedge dy_i)$ and the cotangent bundle $(T^*M,\omega_0)$, are geometrically bounded. It is also well known that every twisted cotangent bundle $(T^*M,\omega_\sigma)$ admits abundant almost complex structures such that it is geometrically bounded (see \cite[Proposition~2.2]{CGK}). In general $J_g\notin \mathscr{J}(\omega_\sigma)$. However, Proposition~4.1 in \cite{Lu} implies that there exists a constant $\varepsilon_0=\varepsilon_0(g)>0$ such that for all $r\geq\varepsilon_0|\sigma|_{g}$,
$$\mathscr{J}(\omega_\sigma)\cap B_{J_g}(r)\neq\emptyset,$$
where $B_{J_g}(r)$ denotes the open ball of radius $r$ about $J_g$ in $\mathscr{J}(T^*M)$. In fact, Lu in~\cite{Lu} shows that for $r\geq\varepsilon_0|\sigma|_{g}$ we can find an almost complex structure in $\mathscr{J}(\omega_\sigma)\cap B_{J_g}(r)$ such that $(T^*M,\omega_\sigma)$ is geometrically bounded with respect to the natural metric $G_g$; see also \cite{Me}. This will be very useful in the proofs of Theorem~\ref{thm:Invariance} and Theorem~\ref{thm:Inv}, since we will use it to choose suitable almost complex structures to estimate the energy of Floer cylinders and to ensure the necessary compactness of moduli spaces of solutions of Floer equations on twisted cotangent bundles.
Finally let us note that the first Chern class satisfies $c_1(T^*M,J)=0$ for each $J\in\mathscr{J}(\omega_\sigma)$ since the twisted cotangent bundle $(T^*M,\omega_\sigma)$ admits a Lagrangian distribution $T^vT^*M$ (see \cite{Se,Me}).
\subsection{The definition of filtered Floer homology}\label{subsec:FFH}
Let $\sigma \in \mathcal{P}(M)$, and let $\alpha\in [S^1,M]$ be a $\sigma$-atoroidal class. We denote by $\mathscr{H}$ the space of smooth compactly supported Hamiltonian functions on $S^1\times D_RT^*M$. For $c>0$ we denote by $\mathscr{H}_c$ the subspace of all Hamiltonian functions $H\in\mathscr{H}$ satisfying $\sup_{S^1\times M}H\leq -c$. Let $\mathfrak{L}_\alpha T^*M$ be the set of all $1$-periodic loops $x$ whose projections to $M$ belong to $\mathfrak{L}_\alpha M$. Fix a reference loop $q_\alpha\in\mathfrak{L}_\alpha M$.
We define the action functional $\mathscr{A}_{H,\sigma}:\mathfrak{L}_\alpha T^*M\to \mathbb{R}$ by
\begin{equation}\label{e:AF}
\mathscr{A}_{H,\sigma}(x)=\int_{S^1}x^*\lambda-\int_{[0,1]\times S^1} w^*\sigma-\int^1_0H(t,x(t))dt,
\end{equation}
where $w:[0,1]\times S^1\to M$ is any smooth map such that
$$w(0,t)=q_\alpha(t)\quad\hbox{and}\quad w(1,t)=\pi(x(t)).$$
\noindent Since $\alpha$ is a $\sigma$-atoroidal class,
$$\mathscr{A}_{\sigma}(q):=\int_{[0,1]\times S^1} w^*\sigma\quad
\forall\;q\in\mathfrak{L}_\alpha M$$
is independent of the choice of $w$, and therefore $\mathscr{A}_{H,\sigma}$ is well defined. It is easy to check that the set ${\rm Crit}\mathscr{A}_{H,\sigma}$ of critical points of $\mathscr{A}_{H,\sigma}$ equals $\mathscr{P}_{\alpha}(H,\sigma)$. The set of values of $\mathscr{A}_{H,\sigma}$ on $\mathscr{P}_{\alpha}(H,\sigma)$ is called the \emph{action spectrum} with respect to $\alpha$, and we denote it by
$$\mathscr{S}_\alpha(H,\sigma)=\{\mathscr{A}_{H,\sigma}(x)\big|
x\in \mathscr{P}_{\alpha}(H,\sigma)\}.$$
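For the reader's convenience we record the first variation of $\mathscr{A}_{H,\sigma}$, which justifies the identification ${\rm Crit}\,\mathscr{A}_{H,\sigma}=\mathscr{P}_{\alpha}(H,\sigma)$. The overall sign depends on the convention relating $\omega_0$ and $\lambda$; the formula below corresponds to $\omega_0=-d\lambda$, which is the choice compatible with the energy identity (\ref{e:EI}) below. For $x\in\mathfrak{L}_\alpha T^*M$ and a vector field $\xi$ along $x$,
$$d\mathscr{A}_{H,\sigma}(x)[\xi]=\int_0^1\omega_\sigma\big(\dot{x}(t)-X_{H,\sigma}(t,x(t)),\,\xi(t)\big)\,dt,$$
and the nondegeneracy of $\omega_\sigma$ shows that $d\mathscr{A}_{H,\sigma}(x)=0$ if and only if $\dot{x}=X_{H,\sigma}(t,x)$.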
Consider the space $\mathscr{H}$ with the strong Whitney $C^\infty$-topology. Note that the action spectrum $\mathscr{S}_\alpha(H,\sigma)$ is a compact and measure zero subset of $\mathbb{R}$ for any $H\in \mathscr{H}$, and is lower semicontinuous as a multivalued function of $H\in\mathscr{H}$ (see \cite[Section~4.4]{BPS}).
For $a,b\in \mathbb{R}\cup\{\pm\infty\}$, if $a,b\notin \mathscr{S}_\alpha(H,\sigma)$, we set
$$\mathscr{P}^{(a,b)}_{\alpha}(H,\sigma)=\{x\in \mathscr{P}_{\alpha}(H,\sigma)\big|a<\mathscr{A}_{H,\sigma}(x)<b\}.$$
\noindent In order to define the filtered Floer homology associated to $H,\sigma$ and $\alpha$ we need the following nondegeneracy condition:
\begin{description}
\item {(C)}\quad Every element $x\in\mathscr{P}^{(a,b)}_{\alpha}(H,\sigma)$ is non-degenerate, that is, the linear map $d\phi_1^{H,\sigma}(x(0))$ does not have $1$ as an eigenvalue, where $\phi_1^{H,\sigma}$ is the time-one map of the flow of $X_{H,\sigma}$.
\end{description}
For any pair $a<b$ we consider the class of \emph{admissible Hamiltonians} by
$$\mathscr{H}_{\sigma;\alpha}^{a,b}=\{H\in\mathscr{H}\big|a,b\notin \mathscr{S}_\alpha(H,\sigma)\}.$$
If $\alpha=0$, then every point in the complement of $\bigcup_{t\in S^1}{\rm supp}(H_t)$ is a degenerate $1$-periodic orbit of $X_{H,\sigma}$ with zero action of $\mathscr{A}_{H,\sigma}$. To avoid these trivial periodic orbits, we require that $0\notin[a,b]$ whenever $\alpha=0$.
Given $H\in\mathscr{H}_{\sigma;\alpha}^{a,b}$ satisfying the nondegeneracy condition (C), for every $x=(q(t),p(t))\in\mathscr{P}^{(a,b)}_{\alpha}(H,\sigma)$ we define the index $\mu(x)=-\mu_{CZ}(x)+\nu(x)$ following the paper by Weber \cite{We1}. Here $\mu_{CZ}(x)$ denotes the Conley-Zehnder index of $x$ (see \cite{Sa,SZ}), and $\nu(x):=0$ if the pullback bundle $q^*TM$ over $S^1$ is trivial and $\nu(x):=1$ otherwise. Consider the $\mathbb{Z}_2$-vector space ${\rm CF}^{(a,b)}_{\alpha}(H,\sigma)$\footnotemark[2]\footnotetext[2]{We use the convention that the complex generated by the empty set is zero.} defined by
$${\rm CF}^{(a,b)}_{\alpha}(H,\sigma):={\rm CF}^b_{\alpha}(H,\sigma)/{\rm CF}^a_{\alpha}(H,\sigma),\quad {\rm CF}^a_{\alpha}(H,\sigma):=\bigoplus\limits_{x\in\mathscr{P}^{(-\infty,a)}
_{\alpha}(H,\sigma)}\mathbb{Z}_2x$$
graded by the index $\mu$.
The Floer boundary operator is defined as follows. Let $J_{gb}$ be an almost complex structure such that $(T^*M,\omega_\sigma)$ is geometrically bounded. Denote by $\mathcal{J}$ the set of smooth time-dependent $\omega_\sigma$-tame almost complex structures on $T^*M$ that are compatible with $\omega_\sigma$ near supp$(H)$ and equal to $J_{gb}$ outside some compact set. Every $J_t\in\mathcal{J}$ gives rise to a positive-definite bilinear form on $\mathfrak{L}_\alpha T^*M$.
Given $x_{\pm}\in {\rm CF}^{(a,b)}_{\alpha}(H,\sigma)$ we denote by $\mathcal{M}^\alpha(x_-,x_+,H,J,\sigma)$ the \emph{moduli space} of smooth solutions $u:\mathbb{R}\times S^1\to T^*M$ of the Floer differential equation
\begin{equation}\label{e:FDE}
\partial_su+J_t(u)(\partial_tu-X_{H,\sigma}(u))=0
\end{equation}
with the asymptotic boundary conditions
\begin{equation}\label{e:ABC}
\lim\limits_{s\to\pm\infty}u(s,t)=x_{\pm}\quad\hbox{and}\quad
\lim\limits_{s\to\pm\infty}\partial_su(s,t)=0
\end{equation}
uniformly in $t\in S^1$. For every solution of (\ref{e:FDE}) and
(\ref{e:ABC}) we have the energy identity
\begin{equation}\label{e:EI}
E(u):=\int^\infty_{-\infty}\int_0^1\omega_\sigma(\partial_su,
J\partial_su)dsdt=\mathscr{A}_{H,\sigma}(x_-)-\mathscr{A}_{H,\sigma}(x_+).
\end{equation}
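For completeness, here is the short computation behind (\ref{e:EI}), using the first variation formula recorded above: equation (\ref{e:FDE}) together with $J_t^2=-{\rm id}$ gives $\partial_tu-X_{H,\sigma}(u)=J_t(u)\partial_su$, whence
$$\mathscr{A}_{H,\sigma}(x_+)-\mathscr{A}_{H,\sigma}(x_-)
=\int^\infty_{-\infty}\int_0^1\omega_\sigma\big(J_t(u)\partial_su,\,\partial_su\big)\,dt\,ds
=-E(u).$$
In particular, the action is nonincreasing along Floer cylinders.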
Now we observe:
\begin{itemize}
\item[(i)] The moduli space $\mathcal{M}^\alpha(x_-,x_+,H,J,\sigma)$ is uniformly $C^0$-bounded. This is because $H$ is compactly supported, and $(T^*M,\omega_\sigma)$ with $J_{gb}$ is geometrically bounded; see Chapter V in \cite{AL} or \cite{CGK,Lu}.
\item[(ii)] Since $\omega_\sigma$ is symplectically aspherical, no bubbling off of holomorphic spheres can occur in $T^*M$. From this fact, the energy identity (\ref{e:EI}) and (i) we deduce that the moduli space $\mathcal{M}^\alpha(x_-,x_+,H,J,\sigma)$ is compact with respect to $C^\infty$-convergence on compact sets.
\item[(iii)] For a dense subset $\mathcal{J}_{reg}(H,\sigma)\subset\mathcal{J}$, the linearized operator for equation~(\ref{e:FDE}) is surjective for each finite-energy solution of (\ref{e:FDE}) in the homotopy class $\alpha$ (see \cite{FHS}).
\end{itemize}
For each $H$ satisfying (C), each $J\in\mathcal{J}_{reg}(H,\sigma)$ and each pair $x_{\pm}\in {\rm CF}^{(a,b)}_{\alpha}(H,\sigma)$, the moduli space $\mathcal{M}^\alpha(x_-,x_+,H,J,\sigma)$ is either empty or a smooth manifold of dimension $\mu(x_-)-\mu(x_+)=\mu_{CZ}(x_+)-\mu_{CZ}(x_-)$; see \cite{Sa}. As usual, the Floer boundary operator $\partial=\partial^{H,J}_{\sigma;\alpha}$ on ${\rm CF}^{(a,b)}_{\alpha}(H,\sigma)$ is defined by
\begin{equation}\label{e:FBO}
\partial x_-=\sum\limits_{\substack{x_+\in {\rm CF}^{(a,b)}_{\alpha}(H,\sigma)
\\ \mu(x_-)-\mu(x_+)=1}}n(x_-,x_+)x_+.
\end{equation}
Here $n(x_-,x_+)$ stands for the number (mod $2$) of elements in the set $\mathcal{M}^\alpha(x_-,x_+,H,J,\sigma)/\mathbb{R}$ (modulo time shift). The operator $\partial$ satisfies $\partial\circ\partial=0$, and the resulting Floer homology groups
\begin{equation}\label{e:FHG}
{\rm HF}^{(a,b)}_{\alpha}(H,\sigma)=\frac{\rm ker\partial}{\rm im\partial}
\end{equation}
are independent of the choice of $J\in\mathcal{J}_{reg}(H,\sigma)$.
\begin{remark}
{\rm
It is unclear whether or not the Floer homology ${\rm HF}^{(a,b)}_{\alpha}(H,\sigma)$ is always independent of the choice of $J_{gb}$. It is independent of the choice of $J_{gb}$ whenever the set of almost complex structures for which $(T^*M,\omega_\sigma)$ is geometrically bounded is connected.
}
\end{remark}
Assume that $a<b<c$, $0\notin [a,c]$ and $a,b,c\notin \mathscr{S}_\alpha(H,\sigma)$. Then we have the exact sequence of complexes
\begin{equation}\label{e:esc}
0\to {\rm CF}^{(a,b)}_{\alpha}(H,\sigma)\to{\rm CF}^{(a,c)}_{\alpha}(H,\sigma)\to {\rm CF}^{(b,c)}_{\alpha}(H,\sigma)\to 0.
\end{equation}
This induces the exact sequence at the homology level
\begin{equation}\label{e:esFH}
\cdots\to {\rm HF}^{(a,b)}_{\alpha*}(H,\sigma)\to {\rm HF}^{(a,c)}_{\alpha*}(H,\sigma)\to {\rm HF}^{(b,c)}_{\alpha*}(H,\sigma)\to {\rm HF}^{(a,b)}_{\alpha(*-1)}(H,\sigma)\to\cdots.
\end{equation}
\subsection{Homotopic invariance}
Suppose that $H^{\pm}\in \mathscr{H}_{\sigma;\alpha}^{a,b}$ satisfy (C) and $x_{\pm}\in \mathscr{P}_{\alpha}(H^{\pm},\sigma)$. Let $H^s:\mathbb{R}\to \mathscr{H}$ be a smooth homotopy connecting $H^-$ and $H^+$ such that $H^s=H^-$ for $s\leq 0$ and $H^s=H^+$ for $s\geq 1$. Consider the parameter-dependent Floer equation
\begin{equation}\label{e:PFE}
\partial_su+J^s_t(u)(\partial_tu-X_{H^s_t,\sigma}(u))=0
\end{equation}
which satisfies uniformly in $t\in S^1$ the asymptotic boundary conditions
\begin{equation}\label{e:SABC}
\lim\limits_{s\to\pm\infty}u(s,t)=x_{\pm}\quad\hbox{and}\quad
\lim\limits_{s\to\pm\infty}\partial_su(s,t)=0.
\end{equation}
Here $J^s_t:\mathbb{R}\to \mathcal{J}$ is a \emph{regular homotopy} of smooth families of almost complex structures satisfying
\begin{itemize}
\item $J^s_t=J^-_t\in\mathcal{J}_{reg}(H^-,\sigma)$ for $s\leq0$.
\item $J^s_t=J^+_t\in\mathcal{J}_{reg}(H^+,\sigma)$ for $s\geq1$.
\item $J^s_t$ is constant in $s$ and equal to $J_{gb}$ outside some compact set of $D_RT^*M$.
\item The linearized operator for equation~(\ref{e:PFE}) is surjective for each finite-energy solution of (\ref{e:PFE}) in the homotopy class $\alpha$.
\end{itemize}
The moduli space $\mathcal{M}^\alpha(x_-,x_+,H^s,J^s,\sigma)$ of smooth solutions of (\ref{e:PFE}) satisfying the boundary conditions (\ref{e:SABC}) is $C_{loc}^\infty$-compact. A crucial ingredient for the proof of the compactness is the following energy identity:
\begin{eqnarray}\label{e:PEI}
E(u):&=&\int^\infty_{-\infty}\int_0^1\omega_\sigma(\partial_su,
J^s_t\partial_su)dsdt\nonumber\\
&=&\mathscr{A}_{H^-,\sigma}(x_-)-\mathscr{A}_{H^+,\sigma}(x_+)
-\int^\infty_{-\infty}\int^1_0(\partial_s H^s)(t,u(s,t))dsdt.
\end{eqnarray}
For $|H^+-H^-|$ small enough we can define a chain map (similar to the argument in~\cite[Section~4.4]{BPS})
$$\widetilde{\Psi}^{\sigma}_{H^+H^-}:{\rm CF}^{(a,b)}_{\alpha}(H^-,\sigma)\to {\rm CF}^{(a,b)}_{\alpha}(H^+,\sigma)$$
which induces an isomorphism
$$\Psi^\sigma_{H^+H^-}:{\rm HF}^{(a,b)}_{\alpha}(H^-,\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(H^+,\sigma).$$
The isomorphism $\Psi^\sigma_{H^+H^-}$ is independent of the choice of the homotopy $H^s$ and $J^s$ by a homotopy of homotopies argument; see \cite{Sa,SZ}. As a result, we can define the Floer homology groups
${\rm HF}^{(a,b)}_{\alpha}(H,\sigma)$ for any $H\in \mathscr{H}_{\sigma;\alpha}^{a,b}$ by a small perturbation since the Hamiltonians satisfying (C) for $a<b$ are dense in $\mathscr{H}_{\sigma;\alpha}^{a,b}$.
\subsection{Monotone homotopies}\label{subsec:MH}
Let $H,K\in \mathscr{H}_{\sigma;\alpha}^{a,b}$ be two functions with $H(t,x)\leq K(t,x)$ for all $(t,x)\in S^1\times D_RT^*M$. Choose a monotone homotopy $s\to H^s\in \mathscr{H}$ from $H$ to $K$ such that $\partial_s H^s\geq0$ everywhere (here we do not require $H^s$ to be in $\mathscr{H}_{\sigma;\alpha}^{a,b}$ for every $s\in [0,1]$). From the energy identity (\ref{e:PEI}) we deduce that such a homotopy induces a natural homomorphism, called a \emph{monotone homomorphism},
\begin{equation}\label{e:MH}
\Psi_{KH}^\sigma:{\rm HF}^{(a,b)}_{\alpha}(H,\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(K,\sigma).
\end{equation}
It is well known that these monotone homomorphisms are independent of the choice of the monotone homotopy of Hamiltonians and satisfy the following properties
(see, e.g., \cite{BPS,FH,CFH,SZ,Vi}):
\begin{lemma}\label{lem:mh}
\begin{equation}
\begin{split}
\Psi^\sigma_{HH}={\rm id}\quad\forall\;H\in \mathscr{H}_{\sigma;\alpha}^{a,b},\\
\Psi^\sigma_{KH}\circ\Psi^\sigma_{HG}=\Psi^\sigma_{KG}
\end{split}
\end{equation}
whenever $ G,H,K\in \mathscr{H}_{\sigma;\alpha}^{a,b}$ satisfy $G\leq H\leq K$.
\end{lemma}
\begin{lemma}[{\rm see~\cite{Vi} or \cite[Section~4.5]{BPS}}]\label{lem:iso}
If $K^s$ is a monotone homotopy from $H$ to $K$ such that
$K^s\in \mathscr{H}_{\sigma;\alpha}^{a,b}$ for every $s\in[0,1]$, then $\Psi^\sigma_{KH}$ is an isomorphism.
\end{lemma}
\begin{remark}\label{rem:HFfcsf}
{\rm
To simplify the notation, we omit the mark $0$ whenever $\sigma=0$, and abbreviate, for example, $\mathscr{A}_{H,0}$, $\mathscr{H}_{0;\alpha}^{a,b}$,
${\rm HF}^{(a,b)}_{\alpha}(H,0)$ and $\Psi^0_{H^{+}H^{-}}$ by $\mathscr{A}_{H}$, $\mathscr{H}_{\alpha}^{a,b}$, ${\rm HF}^{(a,b)}_{\alpha}(H)$ and $\Psi_{H^{+}H^{-}}$ respectively.
All the above arguments in this section go through word for word whenever the closed two-form $\sigma$ vanishes identically. In this case, there is no restriction on the free homotopy class $\alpha\in[S^1,M]$, and the Floer homology for $H\in\mathscr{H}_{\alpha}^{a,b}$ defined here agrees with that in Weber's paper~\cite{We0}. Indeed, $J_g\in\mathscr{J}(\omega_0)$ is an almost complex structure such that $(T^*M,\omega_0)$ is convex at infinity, and hence geometrically bounded for $G_g$. Using \cite[Proposition~2.3]{We0}, Weber defines the filtered Floer homology of a broader class of admissible Hamiltonians:
\begin{eqnarray}
\mathscr{K}_{R;\alpha}^{a,b}:=&\big\{&H\in C^\infty(S^1\times T^*M)\big|\exists\tau\geq0\;\exists c\in\mathbb{R}\;\hbox{such that}\;H_t(q,p)=\tau\|p\|_g+c\;\hbox{if}\;\notag\\
&&\|p\|_g\geq R\;\hbox{with}\;\{a,b\}\cap \mathscr{S}_\alpha(H)=\emptyset,\;\hbox{and}\;\tau\notin\Lambda_\alpha\;
\hbox{or}\;c\notin[a,b]\big\}\notag
\end{eqnarray}
which satisfies $\mathscr{K}_{R;\alpha}^{a,b}\supseteq\mathscr{H}_{\alpha}^{a,b}$.
Here we emphasize that Lemma~\ref{lem:iso} also holds for monotone homotopies in $\mathscr{K}_{R;\alpha}^{a,b}$; see \cite{We0}. This will be very useful to compute Floer homology in Section~\ref{sec:4}.
}
\end{remark}
\section{Symplectic deformations of Floer homology}\label{sec:3}
\setcounter{equation}{0}
Floer's work~\cite{Fl0,Fl1,Fl2} tells us that Floer homology is a topological invariant of closed symplectically aspherical manifolds: different symplectic forms on such a manifold give rise to the same Floer homology (up to isomorphism). A direct proof of this fact can be found in the paper by Viterbo~\cite{Vi}. In this section, following the idea of Bae and Frauenfelder~\cite{BF}, we discuss the continuation homomorphisms for symplectic deformations under additional hypotheses concerning the symplectic forms. For related results about Floer homology under symplectic perturbations, we refer to the paper by Ritter~\cite{Ri}.
\subsection{Quadratic isoperimetric inequality}
\begin{lemma}\label{lem:QII}
Assume that $\sigma \in \mathcal{P}(M)$ and $\alpha\in[S^1,M]$ is a $\sigma$-atoroidal class. Then for every $q\in\mathfrak{L}_\alpha M$ we have
$$\big|\mathscr{A}_{\sigma}(q)\big|\leq \epsilon_0\bigg(
\int^1_0\|\dot{q}(t)\|_gdt\bigg)^2+\epsilon_1,$$
where $\epsilon_0=\epsilon_0(M,g,\sigma)>0$ and $\epsilon_1=\epsilon_1(M,g,\sigma,\alpha)>0$ are some constants.
\end{lemma}
This lemma is based on Lemma~2.7 in \cite{BF}; for a proof of it we refer to~\cite[Lemma~3]{FMP0}.
\begin{remark}\label{rem:lgc}
{\rm It is easy to check that $\mathcal{P}(M)$ is a linear space. Moreover, if $\sigma\in\mathcal{P}(M)$, for any $\delta\in\mathbb{R}$ it holds that
$$\big|\mathscr{A}_{\delta\sigma}(q)\big|\leq |\delta|\epsilon_0\bigg(
\int^1_0\|\dot{q}(t)\|_gdt\bigg)^2+|\delta|\epsilon_1,$$
where the constants $\epsilon_0$ and $\epsilon_1$ are given as in Lemma~\ref{lem:QII}. By Remark~\ref{rem:LGC}, for every closed Riemannian manifold $(M,g)$ of negative curvature, the constants $\epsilon_0(M,g,\sigma)$ and $\epsilon_1(M,g,\sigma,\alpha)$ in Lemma~\ref{lem:QII} converge to zero as $|\sigma|_g\to 0$.
}
\end{remark}
\subsection{Gap estimates for action spectrum}
\begin{lemma}\label{lem:NAS1}
Let $H\in\mathscr{H}$.
Suppose that $a\in\mathbb{R}$ is not in the action spectrum of $\mathscr{A}_{H}$. Then there exist some constants $\varepsilon_0=\varepsilon_0(H,g,\sigma,a,\alpha)>0$ and $\delta_0=\delta_0(H,g,\sigma,a,\alpha)>0$ such that if $|\delta|<\delta_0$, then
$[a-\varepsilon_0,a+\varepsilon_0]\cap\mathscr{S}_\alpha(H,\delta\sigma)
=\emptyset$.
\end{lemma}
\begin{proof}
Arguing by contradiction, suppose that there is a sequence of numbers $\{\delta_k\}_{k\in \mathbb{N}}\subseteq \mathbb{R}$ such that $|\delta_k|<1/(k|\sigma|_g)$ (if $\sigma=0$ let $\delta_k=0$ for all $k$), and for every $\sigma_k:=\delta_k\sigma$ there exists some $x_k\in\mathscr{P}_{\alpha}(H,\sigma_k)$, that is,
\begin{equation}\label{e:hpo}
\dot{x}_k=X_{H,\sigma_k}(t,x_k)\quad \hbox{with}\quad x_k(0)=x_k(1),
\end{equation}
such that $\mathscr{A}_{H,\sigma_k}(x_k)=a_k\in(a-1/k,a+1/k)$.
Since $$\iota(X_{H})\omega_0=dH_t=\iota(X_{H,\sigma_k})\omega_{\sigma_k}
=\iota(X_{H,\sigma_k})\omega_0+\iota(X_{H,\sigma_k})\pi^*\sigma_k,$$
we deduce that
\begin{equation}\notag
\omega_0\big(X_{H}-X_{H,\sigma_k},J_g(X_{H}-X_{H,\sigma_k})\big)
=\pi^*\sigma_k\big(X_{H,\sigma_k},J_g(X_{H}-X_{H,\sigma_k})\big),
\end{equation}
where $J_g$ is defined as in (\ref{e:Jg}). Then we have
\begin{equation}\notag
\|X_{H}-X_{H,\sigma_k}\|^2_{G_g}\leq\|\sigma_k\|_g
\|X_{H,\sigma_k}\|_{G_g}\|X_{H}-X_{H,\sigma_k}\|_{G_g}
\end{equation}
which implies that
\begin{equation}\label{e:ub}
\|X_{H}-X_{H,\sigma_k}\|_{G_g}\leq\|\sigma_k\|_g\|X_{H,\sigma_k}\|_{G_g}\quad
\hbox{and}\quad
\|X_{H,\sigma_k}\|_{G_g}\leq\frac{\|X_{H}\|_{G_g}}{1-\|\sigma_k\|_g}
\end{equation}
for $|\sigma_k|_g<1$. Combining this with the fact that $H$ is compactly supported in $S^1\times D_RT^*M$, we deduce that $X_{H,\sigma_k}$ is uniformly bounded, and hence $x_k(t)$ is equicontinuous. Then, by the Arzel\`a-Ascoli theorem, after passing to a subsequence, $x_k(t)$ converges to some $x_0(t)$ in $D_RT^*M$. We claim that $x_0(t)$ is a Hamiltonian periodic orbit of $H$ for $\omega_0$. For $k$ large enough, we may assume without loss of generality that $H$ is defined on $\mathbb{R}^{2n}$ (by using some local coordinates near $x_0(t)$). Now we only need to prove
\begin{equation}\notag
x_0(t)-x_0(0)=\int^t_0X_H(s,x_0(s))ds.
\end{equation}
Note that $$x_0(t)-x_0(0)=\lim_{k\to\infty}(x_k(t)-x_k(0))
=\lim_{k\to\infty}\int^t_0\dot{x}_k(s)ds.$$ We compute
\begin{eqnarray}\notag
x_0(t)-x_0(0)-\int^t_0X_H(s,x_0(s))ds&=&\lim\limits_{k\to\infty}
\int^t_0\big(\dot{x}_k(s)-X_H(s,x_0(s))\big)ds\nonumber\\
&=&\lim\limits_{k\to\infty}
\int^t_0\big(\dot{x}_k(s)-X_{H,\sigma_k}(s,x_k(s))\big)ds\nonumber\\
&&+\lim\limits_{k\to\infty}
\int^t_0\big(X_{H,\sigma_k}(s,x_k(s))-X_{H}(s,x_k(s))\big)ds\nonumber\\
&&+\lim\limits_{k\to\infty}
\int^t_0\big(X_{H}(s,x_k(s))-X_{H}(s,x_0(s))\big)ds.\nonumber
\end{eqnarray}
In the last equality, the first term is zero due to (\ref{e:hpo}), the second term is zero since $X_{H,\sigma_k}$ uniformly converges to $X_H$ by (\ref{e:ub}) and the compactness of supp$(H)$, and the third term is zero since $x_k(t)$ uniformly tends to $x_0(t)$. Let $q_k(t)=\pi(x_k(t))$. By Lemma~\ref{lem:QII} and Remark~\ref{rem:lgc}, we have
\begin{eqnarray}\notag
\big|\mathscr{A}_{\sigma_k}(q_k)\big|&\leq& |\delta_k|\epsilon_0\bigg(
\int^1_0\|\dot{q}_k(t)\|_gdt\bigg)^2+|\delta_k|\epsilon_1\nonumber\\
&\leq&|\delta_k|\epsilon_0\bigg(
\int^1_0\|X_{H,\sigma_k}\|_{G_g}dt\bigg)^2+|\delta_k|\epsilon_1\to0\quad
\hbox{as} \;k\to\infty.\nonumber
\end{eqnarray}
It follows that
$$a=\lim\limits_{k\to\infty}\mathscr{A}_{H,\sigma_k}(x_k)=\lim\limits_{k\to\infty}\mathscr{A}_{H}(x_k)
-\lim\limits_{k\to\infty}\mathscr{A}_{\sigma_k}(q_k)=\mathscr{A}_{H}(x_0)$$
which contradicts our assumption that $a$ is not in the action spectrum of $\mathscr{A}_{H}$.
\end{proof}
By Remark~\ref{rem:lgc}, the proof of Lemma~\ref{lem:NAS1} implies
\begin{lemma}\label{lem:NAS2}
Let $(M,g)$ be a closed Riemannian manifold of negative curvature, and let $H\in\mathscr{H}$.
Suppose that $a\in\mathbb{R}$ is not in the action spectrum of $\mathscr{A}_{H}$, and that $\sigma$ is any closed $2$-form on $M$. Then there exist constants $\delta_0=\delta_0(H,g,a,\alpha)>0$ and $\varepsilon_0=\varepsilon_0(H,g,a,\alpha)>0$ such that if $|\sigma|_g<\delta_0$, then $[a-\varepsilon_0,a+\varepsilon_0]\cap \mathscr{S}_\alpha(H,\sigma)=\emptyset$.
\end{lemma}
\begin{remark}\label{rem:ppo}
{\rm Under the hypotheses of Lemma~\ref{lem:NAS1}, if, moreover, $\{H_k\}_{k\in\mathbb{N}}\subseteq\mathscr{H}$ converges to $H$ in the $C^\infty$-topology, then we conclude that there exists a positive integer $k_0>0$ such that $[a-\varepsilon_0,a+\varepsilon_0]\cap\mathscr{S}_\alpha(H_k,\delta\sigma)
=\emptyset$ for every $\delta$ satisfying $|\delta|<\delta_0(H,g,\sigma,a,\alpha)$ and every $k\geq k_0$.
Similarly, under the hypotheses of Lemma~\ref{lem:NAS2}, if $\mathscr{H}\ni H_k \stackrel{C^\infty}{\longrightarrow}H$, then there exists a positive integer $k_0>0$ such that if $k\geq k_0$ and $|\sigma|_g<\delta_0(H,g,a,\alpha)$, then $[a-\varepsilon_0,a+\varepsilon_0]\cap \mathscr{S}_\alpha(H_k,\sigma)=\emptyset$.
}
\end{remark}
\subsection{Invariance of Floer homology for symplectic deformations}
\begin{theorem}\label{thm:Invariance}
Assume that $H\in \mathscr{H}_{\alpha}^{a,b}$. Then the following holds.
\begin{description}
\item[(1)] There exists a constant $\delta_0=\delta_0(H,g,\sigma,a,b,\alpha)>0$ such that if $|\delta|<\delta_0$, then there is a continuation chain map
$$\widetilde{\Psi_{\omega_0}^{\omega_{\delta\sigma}}}:{\rm CF}^{(a,b)}_{\alpha}(H)\to {\rm CF}^{(a,b)}_{\alpha}(H,\delta
\sigma)$$
which induces an isomorphism
\begin{equation}
\Psi_{\omega_0}^{\omega_{\delta\sigma}}:{\rm HF}^{(a,b)}_{\alpha}(H)\to {\rm HF}^{(a,b)}_{\alpha}(H,\delta
\sigma).
\end{equation}
\item[(2)] If $K\in \mathscr{H}_{\alpha}^{a,b}$ is another Hamiltonian function satisfying $$H(t,x)\leq K(t,x)\quad \forall\;(t,x)\in [0,1]\times D_RT^*M,$$
then for $|\delta|<\min\{\delta_0(H,g,\sigma,a,b,\alpha),
\delta_0(K,g,\sigma,a,b,\alpha)\}$ the following diagram commutes:
\begin{eqnarray}
\begin{CD}\label{diag:dc0}
{\rm HF}^{(a,b)}_{\alpha}(H) @>{\Psi_{KH}}>> {\rm HF}^{(a,b)}_{\alpha}(K)\\
@V{\Psi_{\omega_0}^{\omega_{\delta\sigma}}}VV @VV{\Psi_{\omega_0}^{\omega_{\delta\sigma}}}V \\
{\rm HF}^{(a,b)}_{\alpha}(H,\delta\sigma) @>{\Psi^{\delta\sigma}_{KH}}>> {\rm HF}^{(a,b)}_{\alpha}(K,\delta\sigma)
\end{CD}
\end{eqnarray}
\end{description}
\end{theorem}
\begin{proof}
In what follows, we always assume that $|\sigma|_g\neq0$ (nothing needs to be proved if $\sigma=0$).
Consider $H^i\in\mathscr{H}_{\alpha}^{a,b}$, $i=0,1$, which satisfy
$$H^0(t,x)\leq H^1(t,x)\quad \forall\;(t,x)\in [0,1]\times D_RT^*M.$$
By Lemma~\ref{lem:NAS1}, there exist constants $\varepsilon^i=\varepsilon^i(H^i,g,\sigma,a,b,\alpha)>0$ and $\hat{\delta}^i=\hat{\delta}^i(H^i,g,\sigma,a,b,\alpha)>0$, $
i=0,1$, such that for every $\delta^i\in(-\hat{\delta}^i,\hat{\delta}^i)$ it holds that
\begin{equation}
\begin{split}
\mathscr{S}_\alpha(H^i,\delta^i\sigma)\cap[a-2\varepsilon^i,a+2\varepsilon^i]=\emptyset
\quad \hbox{and}\quad \mathscr{S}_\alpha(H^i,\delta^i\sigma)\cap[b-2\varepsilon^i,b+2\varepsilon^i]=\emptyset.
\end{split}
\end{equation}
By a perturbation argument, we may assume without loss of generality that $H^0$ and $H^1$ satisfy the nondegeneracy condition (C) for $\omega_{\delta^0\sigma}$ and $\omega_{\delta^1\sigma}$ respectively. We will show that there is a Floer chain map from
${\rm CF}^{(a,b)}_{\alpha}(H^0,\delta^0\sigma)$ to ${\rm CF}^{(a,b)}_{\alpha} (H^1,\delta^1\sigma)$ which induces a homomorphism
$$\Psi^{\delta^1\delta^0}_{H^1H^0}:{\rm HF}^{(a,b)}_{\alpha}(H^0,\delta^0\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(H^1,\delta^1\sigma)$$
whenever $|\delta^i|$ $(i=0,1)$ is small enough.
Let $\beta:\mathbb{R}\to [0,1]$ be a smooth cut-off function such that
$\beta=0$ for $s\leq0$, $\beta(s)=1$ for $s\geq 1$ and $0\leq\beta'(s)\leq1$.
Set $\delta^s=(1-\beta(s))\delta^0+\beta(s)\delta^1$ and
$H^s=(1-\beta(s))H^0+\beta(s)H^1$. Let $\omega^s=\omega_0+\delta^s\pi^*\sigma$, and let $X^{\omega^s}_{H_t^s}$ be the Hamiltonian vector field such that $$dH_t^s=\iota_{X^{\omega^s}_{H_t^s}}\omega^s.$$
Then by~(\ref{e:ub}) we have
\begin{equation}\label{e:ubHv}
|X^{\omega^s}_{H_t^s}|_{G_g}\leq\frac{|X^{\omega_0}_{H_t^s}|_{G_g}}
{1-|\delta^s||\sigma|_g}
\leq2\max\big\{|X^{\omega_0}_{H_t^0}|_{G_g},|X^{\omega_0}_{H_t^1}|_{G_g}
\big\}
\end{equation}
for $|\delta^0|,|\delta^1|\leq1/(2|\sigma|_g)$ ($\sigma\neq 0$), since $X^{\omega_0}_{H^s_t}=(1-\beta(s))X^{\omega_0}_{H^0_t}+\beta(s)X^{\omega_0}_{H^1_t}$.
Let $s\to J^s\in\mathscr{J}(\omega_{\delta^s\sigma})\cap
B_{J_g}(\varepsilon_0|\delta^s\sigma|_{g})$ (recall that $\varepsilon_0=\varepsilon_0(g)$ is the constant from \cite[Proposition~4.1]{Lu} recalled in Section~\ref{sec:2})
be a homotopy of one-periodic almost complex structures such that $J_t^s=J_t^-\in\mathcal{J}_{reg}(H^0,\delta^0\sigma)$ for $s\leq0$, and $J_t^s=J_t^+\in\mathcal{J}_{reg}(H^1,\delta^1\sigma)$ for $s\geq1$. Such a choice of $J^s$ is possible because $\delta^s$ is constant outside of $[0,1]$, and because Proposition 4.1 in~\cite{Lu} can actually be extended to a parametric version for a family of twisted symplectic forms $\omega_{\delta^s\sigma}$ with $s$ belonging to some compact interval, since the almost complex structures constructed there are canonical with respect to a choice of metric. For every $(s,t)\in \mathbb{R}\times S^1$ and every $X,Y\in TT^*M$, we have
\begin{eqnarray}\label{e:utJs0}
\omega^s(X,J^s_tX)&=&\omega_0(X,J_gX)+\omega_0(X,(J^s_t-J_g)X)
+\delta^s\pi^*\sigma(X,J_t^sX)\nonumber\\
&\geq&\|X\|^2_{G_g}-|J^s_t-J_g|_{G_g}\|X\|^2_{G_g}-
|\delta^s||\sigma|_g|J^s_t|_{G_g}\|X\|^2_{G_g}\notag\\
&\geq&\big(1-\varepsilon_0|\delta^s||\sigma|_g-(1+
\varepsilon_0|\delta^s||\sigma|_g)|\delta^s||\sigma|_g\big)
\|X\|^2_{G_g}\notag\\
&\geq&\frac{1}{2}\|X\|^2_{G_g}
\end{eqnarray}
provided $|\delta^0|,|\delta^1|\leq (6\varepsilon_0+4)^{-1}|\sigma|_g^{-1}$, and it holds that
\begin{eqnarray}\label{e:utJs1}
|\omega^s(X,Y)|\leq\frac{5}{3}\|X\|_{G_g}\|Y\|_{G_g}.
\end{eqnarray}
Therefore, for every $s\in \mathbb{R}$ and every $t\in S^1$, $J^s$ is a $1$-periodic almost complex structure for which $(T^*M,\omega^s)$ is geometrically bounded with respect to the natural metric $G_g$.
Given $x\in \mathscr{P}_{\alpha}(H^0,\delta^0\sigma)$ and $y\in \mathscr{P}_{\alpha}(H^1,\delta^1\sigma)$,
consider $u:\mathbb{R}\times S^1\to T^*M$ satisfying the following equation
\begin{equation}\label{e:sfe}
\partial_su+J^s_t(u)(\partial_tu-X^{\omega^s}_{H_t^s}(u))=0
\end{equation}
with the asymptotic boundary conditions
\begin{equation}\label{e:abc}
\lim\limits_{s\to-\infty}u(s,t)=x,\quad
\lim\limits_{s\to+\infty}u(s,t)=y\quad\hbox{and}\quad
\lim\limits_{s\to\pm\infty}\partial_su(s,t)=0.
\end{equation}
Here we emphasize that $\{J^s_t\}$ is also chosen such that solutions of (\ref{e:sfe}) and (\ref{e:abc}) are transverse (the associated linearized operators are surjective) by a perturbation argument; see~\cite{FHS}.
Now we can define a map
$$\widetilde{\Psi^{\delta^1\delta^0}_{H^1H^0}}:{\rm CF}^{(a,b)}_{\alpha}(H^0,\delta^0
\sigma)\to {\rm CF}^{(a,b)}_{\alpha}(H^1,\delta^1
\sigma)$$
given by
$$\widetilde{\Psi^{\delta^1\delta^0}_{H^1H^0}}(x)
=\sum\limits_{\substack{\mu(x)=\mu(y)}}\#_2
\mathcal{M}^\alpha(x,y,H^s,J^s,\omega^s)y,$$
where the space $\mathcal{M}^\alpha(x,y,H^s,J^s,\omega^s)$ consists of solutions of (\ref{e:sfe}) and (\ref{e:abc}), and $\#_2$ denotes the number of elements modulo two. Since $\omega^s$ is symplectically aspherical for every $s\in \mathbb{R}$, there is no bubbling. Then, in order to obtain the compactness of $\mathcal{M}^\alpha(x,y,H^s,J^s,\omega^s)$, we need certain energy estimates. Write
$$\mathscr{A}_{H^s,\delta^s\sigma}(u(s,\cdot))=\int_{S^1}u(s,\cdot)^*
\lambda-\mathscr{A}_{\delta^s\sigma}(q(s,\cdot))-\int^1_0H^s(t,u(s,t))dt,$$
where $q(s,t)$ is the projection of $u(s,t)\in T^*M$ to $M$ for every $(s,t)\in \mathbb{R}\times S^1$. Then we compute
\begin{eqnarray}\label{e:EIE}
\mathscr{A}_{H^1,\delta^1\sigma}(y)-\mathscr{A}_{H^0,\delta^0\sigma}(x)&=&
\int^\infty_{-\infty}\frac{d}{ds}\mathscr{A}_{H^s,\delta^s\sigma}
(u(s,\cdot)){ds}\nonumber\\
&=&-\int^\infty_{-\infty}\int_0^1\omega^s(\partial_s u,J_t^s\partial_s u)ds dt-\int^\infty_{-\infty}\int_0^1\partial_sH^s_t(u(s,t))dsdt\nonumber\\
&&-\int^\infty_{-\infty}(\delta^1-\delta^0)\beta^\prime(s)\mathscr{A}
_{\sigma}(q(s,\cdot))ds.
\end{eqnarray}
By Lemma~\ref{lem:QII} we estimate
\begin{eqnarray}\label{e:EIE0}
\big|\mathscr{A}
_{\sigma}(q(s,\cdot))\big|&\leq&\epsilon_0\bigg(\int^1_0
\|\partial_tq(s,t)\|_gdt\bigg)^2+\epsilon_1\nonumber\\
&\leq&\epsilon_0\int^1_0\|\partial_tq(s,t)\|^2_gdt+\epsilon_1\nonumber\\
&\leq&\epsilon_0\int^1_0\|\partial_tu(s,t)\|^2_{G_g}dt+\epsilon_1
\end{eqnarray}
for some positive constants $\epsilon_0=\epsilon_0(g,\sigma)$ and $\epsilon_1=\epsilon_1(g,\sigma,\alpha)$. By (\ref{e:ubHv}) and (\ref{e:sfe}) we have
\begin{eqnarray}\label{e:EIE1}
\|\partial_tu\|^2_{G_g}&=&\|J^s_t(u)\partial_su+X_{H_t^s}^{\omega^s}(u)
\|^2_{G_g}\nonumber\\
&\leq&2\big(\|J^s_t(u)\partial_su\|^2_{G_g}+\|X_{H_t^s}^{\omega^s}(u)
\|^2_{G_g}\big)\nonumber\\
&\leq&2\big(|J^s_t|^2_{G_g}\|\partial_su\|^2_{G_g}+C_0\big)\nonumber\\
&\leq&2\bigg(\frac{3}{2}
\|\partial_su\|^2_{G_g}+C_0\bigg)
\end{eqnarray}
for $|\delta^0|,|\delta^1|\leq (6\varepsilon_0+4)^{-1}|\sigma|_g^{-1}$.
Here $C_0:=C_0(H^0,H^1,g)=4\max\{|X^{\omega_0}_{H^0}
|^2_{G_g},|X^{\omega_0}_{H^1}|^2_{G_g}\}$.
Plugging (\ref{e:utJs0}) into (\ref{e:EIE1}), we arrive at
\begin{eqnarray}\label{e:EIE2}
\|\partial_tu\|^2_{G_g}
&\leq&6\,\omega^s
(\partial_su,J^s_t\partial_su)+2C_0.
\end{eqnarray}
Combining (\ref{e:EIE0}) and (\ref{e:EIE2}) leads to
\begin{eqnarray}\label{e:EIE3}
\big|\mathscr{A}
_{\sigma}(q(s,\cdot))\big|\leq 6\epsilon_0
\int^1_0\omega^s(\partial_su,J^s_t\partial_su)dt+2\epsilon_0C_0+\epsilon_1.
\end{eqnarray}
Hence,
\begin{eqnarray}\label{e:EIE4}
\bigg|\int^\infty_{-\infty}\beta^\prime(s)\mathscr{A}
_{\sigma}(q(s,\cdot))ds\bigg|&=&\bigg|\int^1_{0}\beta^\prime(s)\mathscr{A}
_{\sigma}(q(s,\cdot))ds\bigg|\notag\\
&\leq&C_1\int^\infty_{-\infty}\int^1_0\omega^s(\partial_su,J^s_t\partial_su)dsdt
+C_2,
\end{eqnarray}
where $C_1=6\epsilon_0$ and
$C_2=2\epsilon_0C_0+\epsilon_1$. Then by (\ref{e:EIE}) we obtain
\begin{eqnarray}\label{e:EIE5}
(1-\delta C_1)\int^\infty_{-\infty}\int^1_0\omega^s
(\partial_su,J^s_t\partial_su)dsdt&\leq&
\mathscr{A}_{H^0,\delta^0\sigma}(x)-\mathscr{A}_{H^1,\delta^1\sigma}(y)
+\delta C_2\nonumber\\
&&-\int^\infty_{-\infty}\int_0^1\partial_sH^s_t(u(s,t))dsdt,
\end{eqnarray}
where $\delta:=|\delta^0|+|\delta^1|$.
Denote $\varepsilon:=\min\{\varepsilon^0,\varepsilon^1\}$, and set
$$\delta_0:=\delta_0(H^0,H^1,g,\sigma,a,b,\alpha):=\min\bigg\{\hat{\delta}^0, \hat{\delta}^1,\frac{1}{(6\varepsilon_0+4)|\sigma|_g},
\frac{1}{2C_1},\frac{\varepsilon}{2C_2}\bigg\}.$$
For brevity, hereafter we drop the explicit dependence on $a,b$ in the notation $\delta_0$.
Then for $|\delta^i|<\delta_0$ the Floer map from ${\rm CF}_{\alpha}(H^0,\delta^0\sigma)$ to ${\rm CF}_{\alpha}(H^1,\delta^1\sigma)$ defined by the solutions of (\ref{e:sfe}) preserves the subcomplexes ${\rm CF}^a_{\alpha}$ and ${\rm CF}^b_{\alpha}$ (since $\partial_sH^s=\beta^\prime(s)(H^1-H^0)\geq0$).
Therefore, the solutions of (\ref{e:sfe}) give rise to the continuation map $\widetilde{\Psi^{\delta^1\delta^0}_{H^1H^0}}$. By a standard gluing argument in Floer homology theory, $\widetilde{\Psi^{\delta^1\delta^0}_{H^1H^0}}$ commutes with the boundary operators. Hence $\widetilde{\Psi^{\delta^1\delta^0}_{H^1H^0}}$ is a chain map which induces a homomorphism
$$\Psi^{\delta^1\delta^0}_{H^1H^0}:{\rm HF}^{(a,b)}_{\alpha}(H^0,\delta^0\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(H^1,\delta^1\sigma).$$
Suppose that
$$\Psi^{\delta^2\delta^1}_{H^2H^1}:{\rm HF}^{(a,b)}_{\alpha}(H^1,\delta^1\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(H^2,\delta^2\sigma)$$
is another homomorphism defined as above, provided that $H^1\leq H^2$ and
$|\delta^1|,|\delta^2|<\delta_0(H^1,H^2,g,\sigma,\alpha)$. By a homotopy-of-homotopies argument, we have the following commutative diagram
\begin{equation}\label{e:ch'}
\xymatrix{{\rm HF}^{(a,b)}_{\alpha}(H^0,\delta^0\sigma)
\ar[rr]^{\Psi^{\delta^2\delta^0}_{H^2H^0}}\ar[dr]_{\Psi^{\delta^1\delta^0}_{H^1H^0}}& & {\rm HF}^{(a,b)}_{\alpha}(H^2,\delta^2\sigma)\\ & {\rm HF}^{(a,b)}_{\alpha}(H^1,\delta^1\sigma)\ar[ur]_{\Psi^{\delta^2\delta^1}_{H^2H^1}} & }
\end{equation}
for $|\delta^0|,|\delta^1|,|\delta^2|<\min\{\delta_0(H^0,H^1,g,\sigma,\alpha),
\delta_0(H^1,H^2,g,\sigma,\alpha),\delta_0(H^0,H^2,g,\sigma,\alpha)\}$.
If $H^s\equiv H\in\mathscr{H}_{\alpha}^{a,b}$, then for $|\delta^0|,|\delta^1|<\delta_0(H,H,g,\sigma,\alpha)$
we can define the map
$$\Psi^{\delta^0\delta^1}_{HH}:{\rm HF}^{(a,b)}_{\alpha}(H,\delta^1\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(H,\delta^0\sigma).$$
Since $\Psi^{\delta\delta}_{HH}$ is an isomorphism for every $\delta\in\mathbb{R}$ and every
$H\in\mathscr{H}_{\delta\sigma;\alpha}^{a,b}$, we deduce from (\ref{e:ch'}) with $\delta^2=\delta^0$ that the homomorphism
\begin{equation}\label{e:iso}
\Psi^{\delta^1\delta^0}_{HH}:{\rm HF}^{(a,b)}_{\alpha}(H,\delta^0\sigma)\to {\rm HF}^{(a,b)}_{\alpha}(H,\delta^1\sigma)
\end{equation}
is an isomorphism with inverse $\Psi^{\delta^0\delta^1}_{HH}$ for $|\delta^0|,|\delta^1|<\delta_0(H,H,g,\sigma,\alpha)$.
Now we are in a position to prove Theorem~\ref{thm:Invariance}. Denote $$\delta_0(H,g,\sigma,\alpha):=\delta_0(H,H,g,\sigma,\alpha).$$
Let $\delta^0=0$ and $\delta^1=\delta$ with $|\delta|<\delta_0(H,g,\sigma,\alpha)$ for every $H\in \mathscr{H}_{\alpha}^{a,b}$, and denote $$\Psi_{\omega_0}^{\omega_{\delta\sigma}}:=\Psi^{\delta^1\delta^0}_{HH}
:{\rm HF}^{(a,b)}_{\alpha}(H)\to {\rm HF}^{(a,b)}_{\alpha}(H,\delta\sigma).$$
Then statement~(1) follows from the isomorphism (\ref{e:iso}) immediately. Notice that
$$\min\{\delta_0(H,g,\sigma,\alpha),\;\delta_0(K,g,\sigma,\alpha)\}
\leq\delta_0(H,K,g,\sigma,\alpha).$$
For $|\delta|<\min\{\delta_0(H,g,\sigma,\alpha),
\delta_0(K,g,\sigma,\alpha)\}$ we deduce from (\ref{e:ch'}) that the homomorphism
$$\Psi^{\delta 0}_{KH}:{\rm HF}^{(a,b)}_{\alpha}(H)\to {\rm HF}^{(a,b)}_{\alpha}(K,\delta\sigma)$$
satisfies $\Psi^{\delta 0}_{KH}=\Psi^{\delta\sigma}_{KH}
\circ\Psi_{\omega_0}^{\omega_{\delta\sigma}}$ with $(H^0,\delta^0)=(H,0), (H^1,\delta^1)=(H,\delta)$ and $(H^2,\delta^2)=(K,\delta)$, and
$\Psi^{\delta 0}_{KH}=\Psi_{\omega_0}
^{\omega_{\delta\sigma}}\circ\Psi_{KH}$ with $(H^0,\delta^0)=(H,0), (H^1,\delta^1)=(K,0)$ and $(H^2,\delta^2)=(K,\delta)$. So $\Psi^{\delta\sigma}_{KH}
\circ\Psi_{\omega_0}^{\omega_{\delta\sigma}}=\Psi_{\omega_0}
^{\omega_{\delta\sigma}}\circ\Psi_{KH}$. The proof of statement~(2) is completed.
\end{proof}
\begin{remark}\label{rem:inv}
{\rm
The quadratic isoperimetric inequality in Lemma~\ref{lem:QII} plays an essential role in the proof of the above theorem. A careful inspection of the proof of Theorem~\ref{thm:Invariance} shows that one could obtain the desired estimates whenever the term $\mathscr{A}_{\sigma}$ associated to $\sigma \in \mathcal{P}(M)$ is well controlled. In fact, when the underlying closed manifold $M$ admits a metric $g$ of negative curvature, Theorem~\ref{thm:Invariance} can be upgraded to the following theorem.
}
\end{remark}
\begin{theorem}\label{thm:Inv}
Let $(M,g)$ be a closed Riemannian manifold of negative curvature, and let $\alpha\in[S^1,M]$ be a free homotopy class.
Assume that $H\in \mathscr{H}_{\alpha}^{a,b}$ and that $\sigma$ is any closed $2$-form on $M$. Then there exists a constant $\delta_0=\delta_0(H,g,a,b,\alpha)>0$ such that if $|\sigma|_g<\delta_0$, then there is a continuation chain map
$$\widetilde{\Psi_{\omega_0}^{\omega_{\sigma}}}:{\rm CF}^{(a,b)}_{\alpha}(H)\to {\rm CF}^{(a,b)}_{\alpha}(H,\sigma)$$
which induces an isomorphism
\begin{equation}
\Psi_{\omega_0}^{\omega_{\sigma}}:{\rm HF}^{(a,b)}_{\alpha}(H)\to {\rm HF}^{(a,b)}_{\alpha}(H,\sigma).
\end{equation}
Moreover, if $K\in \mathscr{H}_{\alpha}^{a,b}$ is another Hamiltonian function satisfying $$H(t,x)\leq K(t,x)\quad \forall\;(t,x)\in [0,1]\times D_RT^*M,$$ then for $|\sigma|_g<\min\{\delta_0(H,g,a,b,\alpha),
\delta_0(K,g,a,b,\alpha)\}$ the following diagram commutes:
\begin{eqnarray}
\begin{CD}\label{diag:dc1}
{\rm HF}^{(a,b)}_{\alpha}(H) @>{\Psi_{KH}}>> {\rm HF}^{(a,b)}_{\alpha}(K)\\
@V{\Psi_{\omega_0}^{\omega_{\sigma}}}VV @VV{\Psi_{\omega_0}^{\omega_{\sigma}}}V \\
{\rm HF}^{(a,b)}_{\alpha}(H,\sigma) @>{\Psi^\sigma_{KH}}>> {\rm HF}^{(a,b)}_{\alpha}(K,\sigma)
\end{CD}
\end{eqnarray}
\end{theorem}
\begin{proof}
Firstly, Remark~\ref{rem:NC} implies that under the hypotheses of Theorem~\ref{thm:Inv} the filtered Floer homology ${\rm HF}^{(a,b)}_{\alpha}(H,\sigma)$ can be defined as in Subsection~\ref{subsec:FFH}. As Remark~\ref{rem:inv} points out, the proof of Theorem~\ref{thm:Invariance} for a $\sigma$-atoroidal class $\alpha$ goes through with only minor modifications. Here we just mention that in order to obtain the corresponding energy estimates we need to replace Lemma~\ref{lem:NAS1} by Lemma~\ref{lem:NAS2} and use the fact that the constants $\epsilon_0$ and $\epsilon_1$ in Lemma~\ref{lem:QII} converge to zero as $|\sigma|_g\to 0$ (see Remark~\ref{rem:lgc}).
\end{proof}
\section{Computations of Floer homology}\label{sec:4}
\setcounter{equation}{0}
In this section we will closely follow the paper by Weber~\cite{We0} to construct two sequences of Hamiltonian functions compactly supported in $D_RT^*M$ and compute their Floer homologies for $T^*M$ endowed with the canonical symplectic form $\omega_0=-d\lambda$.
\subsection{Radial Hamiltonians}
Let $H^f:T^*M\to \mathbb{R}$ be an autonomous function of the form
$$H^f(q,p)=f(\|p\|_g)\quad \forall (q,p)\in T^*M,$$
where $f:\mathbb{R}\to\mathbb{R}$ is a smooth function such that $f(r)=f(-r)$. Then the set of critical points of $\mathscr{A}_{H^f}$ is given by
\begin{eqnarray}\notag
\mathscr{P}_{\alpha}(H^f):=&\big\{& z=(q,p)\in C^\infty(S^1, T^*M)\big|
q(t)\;\hbox{is a geodesic in the class}\;\alpha,\;\notag\\
&&l:=\|\dot{q}\|_g, \;p(t)=\pm\frac{r}{l}\dot{q}(t),\;\hbox{where}\;r>0\; \hbox{satisfies}\; f^\prime(r)=\pm l\big\}.\notag
\end{eqnarray}
Moreover, for each $z\in \mathscr{P}_{\alpha}(H^f)$ it holds that $\mathscr{A}_{H^f}(z)=f^\prime(r)r-f(r)$; that is, the value of the action functional $\mathscr{A}_{H^f}$ at $z$ equals minus the $y$-intercept of the tangent line to the graph $y=f(x)$ at $x=r$.
\subsection{The action functional on free loop space}
The action functional $\mathscr{E}$ on $\mathcal{L}_\alpha M$ is defined by
$$\mathscr{E}(q)=\frac{1}{2}\int^1_0\|\dot{q}(t)\|_g^2dt.$$
It is not hard to check that a loop $q\in\mathcal{L}_\alpha M$ is a critical point of $\mathscr{E}$ if and only if $q$ is a $1$-periodic geodesic representing $\alpha$. Given $a\in\mathbb{R}$, denote
$$\mathcal{L}_\alpha^aM:=\{q\in\mathcal{L}_\alpha M\big|\mathscr{E}(q)\leq a\}.$$
For $a\leq b$ the natural inclusion
$$\iota_a^b:\mathcal{L}_\alpha^aM\hookrightarrow \mathcal{L}_\alpha^bM$$
induces the homomorphism
$$[\iota_a^b]:{\rm H}_*(\mathcal{L}_\alpha^aM)\to {\rm H}_*(\mathcal{L}_\alpha^bM).$$
Here ${\rm H}_*(\mathcal{L}_\alpha^aM)$ denotes the singular homology with $\mathbb{Z}_2$-coefficients of the sublevel set $\mathcal{L}_\alpha^aM$.
\begin{remark}\label{rem:nontriviality}
{\rm
The homomorphism $[\iota_a^b]$ is nonzero whenever $l_\alpha^2/2\leq a\leq b$.}
\end{remark}
\subsection{Two families of profile functions}\label{subsec:pf}
Fix $c>0$ and pick $a\in(0,c]$ with $a/R\notin\Lambda_\alpha$.
Since the marked length spectrum $\Lambda_\alpha\subseteq\mathbb{R}$ is a closed and nowhere dense subset, there is a dense subset $\Delta$ of $(0,a/c)$ such that for every $\eta\in \Delta$ it holds that
$$\quad \nu_\eta:=\frac{1}{R}\bigg[\frac{a}{\eta}-(c-a)\bigg]\in (a/R,\infty)\setminus\Lambda_\alpha.$$
Fix $\eta\in \Delta$. Using the conventions $\sup\emptyset=0$ and $\inf\emptyset=\infty$, we define
\begin{equation}\notag
\begin{array}{ll}
l_0:=\inf(\Lambda_\alpha\cap(a/R,\infty)), & l_-=l_-(\eta):=\sup((0,\nu_\eta)\cap\Lambda_\alpha),\\
l_+=l_+(\eta):=\inf((\nu_\eta,\infty)\cap\Lambda_\alpha), & l_1=l_1(\eta):=\frac{\nu_\eta+l_-}{2}.
\end{array}
\end{equation}
Clearly, we have
$$(a/R,l_0)\cap\Lambda_\alpha=\emptyset,\quad
(l_-,l_+)\cap\Lambda_\alpha=\emptyset\quad \hbox{and}\quad 0\leq l_-<l_1<\nu_\eta<l_+\leq\infty.$$
Denote
\begin{equation}\notag
\begin{array}{ll}
r_{k1}:=\frac{(k-1)R}{k}+\frac{3}{16k}\big(R-\frac{a}{l_0}\big), &
r_{k2}:=R-\frac{3}{16k}\big(R-\frac{a}{l_0}\big),\\
\lambda_k(x):=ak\,\frac{x-R+(3/(16k))(R-a/l_0)}{R-(3/8)(R-a/l_0)}, & k\in\mathbb{N}.
\end{array}
\end{equation}
Here let us remark that $\nu_\eta,l_+(\eta),l_1(\eta)\to+\infty$ as $\eta\to0$, and $r_{k1},r_{k2}\to R$ as $k\to\infty$.
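For concreteness, these constants can be evaluated numerically. The following Python sketch is purely illustrative: the marked length spectrum $\Lambda_\alpha$ and the values of $a$, $c$, $R$, $\eta$, $k$ below are hypothetical toy data, chosen only so that $a/R$ and $\nu_\eta$ avoid the spectrum; the sketch merely makes the conventions $\sup\emptyset=0$, $\inf\emptyset=\infty$ explicit.
\begin{verbatim}
import math

# Toy data (hypothetical): a discrete marked length spectrum and parameters.
Lambda_alpha = [0.7, 1.3, 2.9, 4.1]        # lengths of closed geodesics in alpha
R, c, a, eta, k = 1.0, 2.0, 1.0, 0.25, 3   # a/R and nu_eta avoid the spectrum

nu_eta = (a / eta - (c - a)) / R

# sup/inf over intersections with the spectrum (sup(empty)=0, inf(empty)=inf)
l0      = min([l for l in Lambda_alpha if l > a / R], default=math.inf)
l_minus = max([l for l in Lambda_alpha if l < nu_eta], default=0.0)
l_plus  = min([l for l in Lambda_alpha if l > nu_eta], default=math.inf)
l1      = (nu_eta + l_minus) / 2

r_k1 = (k - 1) * R / k + 3 / (16 * k) * (R - a / l0)
r_k2 = R - 3 / (16 * k) * (R - a / l0)

def lam_k(x):
    # the linear function lambda_k entering f_k; note lam_k(r_k2) == 0
    return a * k * (x - R + 3 / (16 * k) * (R - a / l0)) \
           / (R - 3 / 8 * (R - a / l0))

assert 0.0 <= l_minus < l1 < nu_eta < l_plus
\end{verbatim}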
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{fk.png}
\caption{Sequence A of profile functions}\label{fig:1}
\end{figure}
\noindent\textbf{Sequence A of profile functions}: Choose a sequence of smooth functions $f_k$ (see Figure~\ref{fig:1}) by smoothing out a sequence of piecewise linear functions $\hat{f}_k$, which are given by
\begin{equation}\notag
\hat{f}_k(x):=\left\{
\begin{array}{ll}
\lambda_k(r_{k1}/2)&\hbox{if}\;x\in[0,r_{k1}/2), \\
\lambda_k(x) & \hbox{if}\;x\in[r_{k1}/2,r_{k2}),\\
0 & \hbox{if}\;x\in[r_{k2},+\infty).
\end{array}
\right.
\end{equation}
Each $f_k$ is required to coincide with $\hat{f}_k$ away from sufficiently small neighbourhoods of $r_{k1}/2$ and $r_{k2}$. In particular, $\hbox{graph}\,f_k$ equals $\hbox{graph}\,\hat{f}_k$ in the region that lies below the line $x\mapsto ax/R-a$ and above the line $x\mapsto -a$ (grey region in Figure~\ref{fig:1}). Besides, we require that $f_k^\prime\geq 0$ everywhere, $f_k^{\prime\prime}\geq 0$ near $r_{k1}/2$, $f_k^{\prime\prime}\leq 0$ near $r_{k2}$, and $f_k^{\prime\prime}=0$ elsewhere. Since $a/R<a/r_{k2}<l_0$, the slope of the unique tangent line of the graph of $f_k$ through the point $(0,-a)$ lies in the interval $(a/R,l_0)$. It follows that $a\notin \mathscr{S}_{\alpha}(H^{f_k})$.
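As a minimal numerical sketch (reusing the hypothetical constants above; the smoothing step is omitted), the piecewise linear profile $\hat{f}_k$ and the slope condition behind $a\notin \mathscr{S}_{\alpha}(H^{f_k})$ read:
\begin{verbatim}
def f_hat_k(x):
    # piecewise linear profile of Sequence A (before smoothing)
    if x < r_k1 / 2:
        return lam_k(r_k1 / 2)
    elif x < r_k2:
        return lam_k(x)
    return 0.0

# The line through (0, -a) and the corner (r_k2, 0) of the graph has slope
# a / r_k2, which lies in (a/R, l0); this is what keeps a out of the action
# spectrum of H^{f_k}.
assert a / R < a / r_k2 < l0
\end{verbatim}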
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{hk.png}
\caption{Sequence B of profile functions}\label{fig:2}
\end{figure}
\noindent \textbf{Sequence B of profile functions}: Let $\eta_k\in\Delta$ be a sequence of numbers such that $\eta_k\to 0$ as $k\to\infty$. Denote $$T_k:=\max\{0,l_1(\eta_k)R-a\},\quad \tau_k:=\frac{a}{\eta_kR}
\quad\hbox{and}\quad r_k:=\frac{\eta_kR(T_k+c)}{a}.$$
Note that $T_k\to+\infty$ and $\tau_k\to+\infty$ as $k\to\infty$, and that $0\leq r_k\leq R$. Consider the piecewise linear curve $\Gamma$ in $\mathbb{R}^2$ given by
\begin{equation}\notag
\Gamma:=\big\{(x,\tau_kx-c)\;\big|\;x\in[0,r_k]\big\}\cup
\big\{(x,T_k)\;\big|\;x\in[r_k,R]\big\}\cup
\big\{(R,y)\;\big|\;y\in[0,T_k]\big\}.
\end{equation}
Smoothing out this piecewise linear curve near its corners we obtain a sequence of smooth functions $h_k$ (see Figure~\ref{fig:2}). Here every $h_k$ is also required to satisfy $h_k^{\prime\prime}\geq 0$ near the point $(0,-c)$ and $h_k^\prime(0)=h_k^\prime(1)=0$. We claim that $a\notin \mathscr{S}_{\alpha}(H^{h_k})$. In fact, the action of $\mathscr{A}_{H^{h_k}}$ at a one-periodic orbit is positive if and only if the $y$-intercept of the tangent line of $\hbox{graph}\,h_k$ at the corresponding point is negative. This happens in two clusters. In the first cluster, the slope of any tangent line passing through the point $(0,-a)$ lies in the interval $(l_-,l_+)$. The other cluster is located near the point $(0,-c)$, where the $y$-intercepts of the tangent lines of $\hbox{graph}\,h_k$ are less than $-c$. So $(l_-,l_+)\cap\Lambda_\alpha=\emptyset$ implies that $a\notin \mathscr{S}_{\alpha}(H^{h_k})$.
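A similar sketch (with the same hypothetical data, taking $\eta_k=\eta$) checks that the first segment of $\Gamma$ indeed ends at the point $(r_k,T_k)$ where the horizontal segment begins:
\begin{verbatim}
eta_k = eta
T_k   = max(0.0, l1 * R - a)
tau_k = a / (eta_k * R)
r_k   = eta_k * R * (T_k + c) / a

assert 0.0 <= r_k <= R
assert abs((tau_k * r_k - c) - T_k) < 1e-12  # the segments meet at (r_k, T_k)
\end{verbatim}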
\begin{proposition}\label{prop:pf}
Fix $0<a\leq c$ with $a/R\notin\Lambda_\alpha$, and choose $\eta_k\in \Delta$ satisfying $\eta_k\to 0$ as $k\to \infty$. Let $\{f_k\}_{k\in\mathbb{N}}$ and $\{h_k\}_{k\in\mathbb{N}}$ be the two sequences of functions constructed above. Choose $k\in\mathbb{N}$ sufficiently large so that
$f_k(0)\leq-c$ and $f_k\leq h_k$. Set $\mu_k:=\nu_{\eta_k}$. Then there exist natural isomorphisms
\begin{equation}\label{e:Ipf1}
\Theta_{f_k}:{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k})\to
{\rm H}_*(\mathcal{L}_\alpha^{a^2/(2R^2)}M)
\end{equation}
and
\begin{equation}\label{e:Ipf2}
\Theta_{h_k}:{\rm HF}^{(a,+\infty)}_{\alpha}(H^{h_k})\to
{\rm H}_*(\mathcal{L}_\alpha^{\mu_k^2/2}M)
\end{equation}
such that the following diagram commutes:
\begin{eqnarray}
\begin{CD}\label{diag:dc3}
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k}) @>{\Psi_{h_k f_k}}>> {\rm HF}^{(a,+\infty)}_{\alpha}(H^{h_k})\\
@V{\Theta_{f_k}}V\simeq V @V\simeq V{\Theta_{h_k}}V \\
{\rm H}_*(\mathcal{L}_\alpha^{a^2/(2R^2)}M) @>{[\iota^{\mu_k^2/2}_{a^2/(2R^2)}]}>> {\rm H}_*(\mathcal{L}_\alpha^{\mu_k^2/2}M)
\end{CD}
\end{eqnarray}
\end{proposition}
The proof of Proposition~\ref{prop:pf} is standard and can be completed by mimicking that of \cite[Theorem~3.1]{We0}. Indeed, Weber~\cite{We0} gives the above
result for $R=1$, and a rescaling argument yields the result for general $R>0$.
The basic idea is to deform $f_k$ and $h_k$ by monotone homotopies to convex radial functions so that \cite[Theorem~2.9]{We0} can be applied. Full details can be found in Appendix~\ref{app:{prop:pf}}.
\section{Proofs of the main theorems and remarks}\label{sec:5}
\setcounter{equation}{0}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{functionH.png}
\caption{The function $H$}\label{fig:8}
\end{figure}
\subsection{Proof of Theorem~\ref{thm:1}}
\begin{proof}
Let $c=-\sup_{S^1\times M}H$. Then $c> Rl_{\alpha}$.
Choose $a\in[Rl_\alpha,c]$ so that $a/R\notin \Lambda_\alpha$. Then we can construct two sequences of functions $\{f_k\}_{k\in\mathbb{N}}$ and $\{h_k\}_{k\in\mathbb{N}}$ as in Section~\ref{subsec:pf}. For sufficiently large $k$ it follows that
$$H^{f_k}\leq H\leq H^{h_k}$$
as illustrated in Figure~\ref{fig:8}.
Fix such an integer $k=k(H)$. Then by Proposition~\ref{prop:pf} we have the commutative diagram (\ref{diag:dc3}).
So the monotone homomorphism $\Psi_{h_k f_k}$ is nonzero due to our assumption that $\mu_k\geq a/R\geq l_\alpha$; see Remark~\ref{rem:nontriviality}. Applying Theorem~\ref{thm:Invariance} to the Hamiltonians $H^{f_k}$ and $H^{h_k}$, there exist positive constants $\delta_1=\delta_1(H^{f_k},g,\sigma,a,\alpha)$ and $\delta_2=\delta_2(H^{h_k},g,\sigma,a,\alpha)$ such that if $|\delta|<\delta_0:=\min\{\delta_1, \delta_2\}$, then we have the following commutative diagram
\begin{eqnarray}
\begin{CD}\label{diag:dc5}
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k}) @>{\Psi_{h_kf_k}}>> {\rm HF}^{(a,+\infty)}_{\alpha}(H^{h_k})\\
@V{\Psi_{\omega_0}^{\omega_{\delta\sigma}}}VV @VV{\Psi_{\omega_0}^{\omega_{\delta\sigma}}}V \\
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k},\delta\sigma) @>{\Psi^{\delta\sigma}_{h_kf_k}}>> {\rm HF}^{(a,+\infty)}_{\alpha}(H^{h_k},\delta\sigma)
\end{CD}
\end{eqnarray}
Combining (\ref{diag:dc3}) and (\ref{diag:dc5}) we deduce that for $|\delta|<\delta_0$ the monotone homomorphism
$\Psi^{\delta\sigma}_{h_kf_k}$ is also nonzero. Given $\delta\in(-\delta_0,\delta_0)$, choose a sequence of Hamiltonian functions $H_i \in \mathscr{H}$ such that each $H_i$ satisfies the non-degeneracy condition (C), the $H_i$ converge to $H$ in the $C^\infty$ topology, $H^{f_k}\leq H_i\leq H^{h_k}$, $a\notin\mathscr{S}_\alpha(H_i,\delta\sigma)$ and $\sup_{S^1\times M}H_i<- Rl_{\alpha}$. Then, by Lemma~\ref{lem:mh}, the non-trivial homomorphism
$\Psi^{\delta\sigma}_{h_kf_k}$ factors through the Floer homology group ${\rm HF}^{(a,+\infty)}_{\alpha}(H_i,\delta\sigma)$. Hence there exists a sequence of periodic orbits $x_i\in\mathscr{P}_{\alpha}(H_i,\delta\sigma)$ such that
$\mathscr{A}_{H_i,\delta\sigma}(x_i)> a$. Passing to a convergent subsequence, we get a periodic orbit $x\in\mathscr{P}_{\alpha}(H,\delta\sigma)$ with $\mathscr{A}_{H,\delta\sigma}(x)\geq a$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:2}}
\begin{proof}
Consider the Hamiltonian function defined by
$$\bar{H}(t,x):=-H(-t,x)\quad \forall\;(t,x)\in S^1\times D_RT^*M.$$
Let $C(\alpha)=Rl_\alpha$. Obviously, $x(t)$ is a periodic orbit of $H$ representing $-\alpha$ if and only if $x(-t)$ is a periodic orbit of $\bar{H}$ representing $\alpha$, and
it holds that
$$\sup_{S^1\times M}\bar{H}\leq -C(\alpha).$$
Imitating the proof of Theorem~\ref{thm:1}, with Theorem~\ref{thm:Invariance} replaced by Theorem~\ref{thm:Inv}, concludes the proof of Theorem~\ref{thm:2}.
\end{proof}
\subsection{Proofs of Theorem~\ref{thm:3} and Theorem~\ref{thm:4}}
The idea of the proof of Theorem~\ref{thm:3} is the same as that of Theorem~\ref{thm:1}; the difference between the two proofs is that in the proof of Theorem~\ref{thm:3} one can uniformly squeeze a whole class of functions (whose graphs lie in the grey region in Figure~\ref{fig:9} and on the line $y=0$, $x\geq R-\rho$) from above and below. The proof of Theorem~\ref{thm:4} is similar to that of Theorem~\ref{thm:3}. Here we only prove the latter.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{BPS.png}
\caption{A class of functions}\label{fig:9}
\end{figure}
\begin{proof}
Let $\epsilon>0$ be an arbitrary number in $(0,A-Rl_\alpha)$ and set $c=Rl_\alpha+\epsilon$. Pick $a=a(\epsilon)\in[Rl_\alpha,c]$ so that $a/R\notin \Lambda_\alpha$. Construct two sequences of functions $\{f_k\}_{k\in\mathbb{N}}$ and $\{h_k\}_{k\in\mathbb{N}}$ as in Section~\ref{subsec:pf}. Then there exists an integer $k_0=k_0(g,\sigma,\alpha, V,A,\rho,\epsilon)>0$ such that
$$H^{f_{k_0}}\leq H\leq H^{h_{k_0}}$$
for every $H\in\mathscr{H}_c(U_{R},U_{R-\rho},V,A)$.
Following the proof of Theorem~\ref{thm:1}, one shows that
there exists a constant
$$\delta_0:=\delta_0(g,\sigma,\alpha, V,A,\rho,\epsilon)>0$$ such that if $|\delta|<\delta_0$ then every $H\in\mathscr{H}_c(U_{R},U_{R-\rho},V,A)$ admits a $1$-periodic Hamiltonian orbit with respect to the twisted symplectic form $\omega_{\delta\sigma}$ whose projection to $M$ represents $\alpha$. Therefore, by definition, for each $\delta\in(-\delta_0,\delta_0)$ we have $\hat{c}_{\rm BPS}(U_R,U_{R-\rho},V,A;\delta\sigma,\alpha)\leq Rl_\alpha+\epsilon$.
\end{proof}
\subsection{Proofs of Theorem~\ref{thm:AET1} and Theorem~\ref{thm:AET2}}
The key to the proofs of Theorem~\ref{thm:AET1} and Theorem~\ref{thm:AET2} is the following monotonicity of the restricted BPS capacity.
\begin{proposition}\label{prop:monotonicity}
Let $W_1,W_2$ be open subsets of $T^*M$ containing $M$, $V_2\subset V_1\subset U_1\subset U_2$, $W_1\subset W_2$ and $0<A_1\leq A_2$, where $W_i,U_i,V_i$, $i=1,2$ are open subsets of $T^*M$ and $U_i$ have compact closure $\bar{U}_i\subset W_i$. Then it holds that
$$\hat{c}_{\rm BPS}(W_1,U_1,V_1,A_1;\sigma,\alpha)\leq\hat{c}_{\rm BPS}(W_2,U_2,V_2,A_2;\sigma,\alpha).$$
\end{proposition}
Our proof of Theorem~\ref{thm:AET1} is an adaptation of the standard almost existence theorem (see~\cite[Section~4.2]{HZ} or~\cite{SW}).
For the sake of completeness we shall give a proof of Theorem~\ref{thm:AET1}. The proof of Theorem~\ref{thm:AET2} is nearly identical to that of Theorem~\ref{thm:AET1}; we omit it here.
\begin{proof}
Along the lines of~\cite[Section~4.2]{HZ}, we proceed in three steps.
\noindent \textbf{Step 1.}
By our assumption, the sublevel $\{H<s\}$ is contained in $U_{r(s)}$, where $r:\mathbb{R}\to (0,\infty)$ is a nondecreasing function. Consider the monotone functions $\hat{c}_{\delta}:[d,T]\to[0,\infty]$ defined by
$$\hat{c}_{\delta}(s):=\hat{c}_{\rm BPS}(U_{r(T+\rho)+\rho},\{H<s\},V,A;\delta\sigma,\alpha).$$
Theorem~\ref{thm:3} implies that there exists a constant
$\delta_0:=\delta_0(g,\sigma,\alpha, V,A,T,\rho)>0$ such that if $|\delta|<\delta_0$ then
$$\hat{c}_{\rm BPS}(U_{r(T+\rho)+\rho},U_{r(T+\rho)},V,A;\delta\sigma,\alpha)< A$$
provided $A>(r(T+\rho)+\rho)l_\alpha$.
Then we deduce from Proposition~\ref{prop:monotonicity} that
$$\hat{c}_{\delta}(s)\leq \hat{c}_{\delta}(T+\rho)\leq \hat{c}_{\rm BPS}(U_{r(T+\rho)+\rho},U_{r(T+\rho)},V,A;\delta\sigma,\alpha)< A\quad \forall s\in[d,T+\rho],$$
where $|\delta|<\delta_0(g,\sigma,\alpha, V,A,T,\rho)$. Fix $\delta\in(-\delta_0,\delta_0)$.
So Lebesgue's last theorem implies that the monotone function $\hat{c}_{\delta}$ is
differentiable almost everywhere. Suppose that $s_0\in [d,T]$ is a regular value of $H$ and that $\hat{c}_{\delta}$ is Lipschitz continuous at $s_0$. Then, by the
implicit function theorem, $S_0:=H^{-1}(s_0)$ is a hypersurface in $T^*M$, which is compact and bounds
the sublevel set $B_0:=\{H<s_0\}$ since $H$ is proper and bounded below. By the implicit function theorem again and compactness of
$S_0$, one can find a parameterized family of hypersurfaces $S_\varepsilon$ in $T^*M$ with $S_\varepsilon=H^{-1}(s_0+\varepsilon)$ such that
$$c(\varepsilon)\leq c(0)+L\varepsilon,\quad c(\varepsilon):=\hat{c}_{\delta}(s_0+\varepsilon)$$
for $0\leq\varepsilon\leq\eta$, where $L$ and $\eta$ are positive constants. By choosing $\eta$ smaller, let us require that $\eta<\min\{\rho,(A-c(0))/(2L)\}$ ($A>c(0)$ by our assumption). For $0<\varepsilon\leq \eta$, let us denote $B_\varepsilon:=\{H<s_0+\varepsilon\}$, then
$B_\varepsilon\subset \{H<T+\rho\}\subset U_{r(T+\rho)}$.
Fixing $\varsigma\in(0,\eta]$, we define a smooth function $f:\mathbb{R}\to [-2L\varsigma,0]$ by
\begin{equation}\notag
\begin{array}{ll}
f(s)=-2L\varsigma & \hbox{if}\;s\leq 0 \\
f(s)=0 & \hbox{if}\;s\geq\frac{\varsigma}{2}\\
0<f^\prime(s)\leq 8L & \hbox{if}\;0<s<\frac{\varsigma}{2}.
\end{array}
\end{equation}
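A concrete $C^1$ model of such a cutoff function (a hypothetical choice, for illustration only; a genuinely smooth $f$ would in addition mollify the two joints) is the following Python sketch, whose derivative is bounded by $6L\leq 8L$:
\begin{verbatim}
def f_cutoff(s, L, varsigma):
    # C^1 model of f: equals -2*L*varsigma for s <= 0, equals 0 for
    # s >= varsigma/2, and has 0 < f' <= 6L on (0, varsigma/2).
    u = min(max(2.0 * s / varsigma, 0.0), 1.0)
    return -2.0 * L * varsigma * (1.0 - (3.0 * u**2 - 2.0 * u**3))
\end{verbatim}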
By the definition of the restricted BPS capacity $\hat{c}_\delta(s_0)=c(0)$, there exists a Hamiltonian function $G\in\mathscr{H}(U_{r(T+\rho)+\rho},B_0,A)$ such that
\begin{equation}\label{e:funG}
-c(0)<\sup_{V}G\leq -(c(0)-L\varsigma)
\quad \hbox{and}\quad \mathscr{P}_{\alpha}(G,\delta\sigma;\tau)=\emptyset\quad\forall\; 0<\tau\leq1.
\end{equation}
Otherwise, $c(0)=\hat{c}_\delta(s_0)\leq c(0)-L\varsigma$, which contradicts the fact that $L\varsigma> 0$.
Now we construct a new function $\widetilde{G}$ by cutting off $G$ without creating new fast periodic orbits.
More precisely, let $\chi:[-c(0)-\epsilon,-c(0)+\epsilon]\to \mathbb{R}$ be a function with $0\leq \chi^\prime\leq 1$ such that $\chi(t)=-c(0)$ for $t$ near the left endpoint of the interval and $\chi(t)=t$ for $t$ near the right endpoint of the interval, where $\epsilon>0$ is sufficiently small
so that $-c(0)+\epsilon<\sup_{V}G$. Set
\begin{equation}\notag
\widetilde{G}(x):=\left\{
\begin{array}{ll}
-c(0)&\hbox{if}\;G(x)\leq -c(0)-\epsilon, \\
\chi(G(x)) & \hbox{if}\;-c(0)-\epsilon\leq G(x)\leq-c(0)+\epsilon,\\
G(x) & \hbox{if}\;-c(0)+\epsilon\leq G(x).
\end{array}
\right.
\end{equation}
The new non-positive function $\widetilde{G}$ compactly supported in $B_0$ satisfies $$\mathscr{P}_{\alpha}(\widetilde{G},\delta\sigma;\tau)
=\emptyset\quad\forall\; 0<\tau\leq1$$ since $|\chi^\prime|\leq 1$. Furthermore, $\sup_{V}\widetilde{G}\leq -(c(0)-L\varsigma)$ and $\inf_{B_0}\widetilde{G}\geq -c(0)$.
Consider the compactly supported Hamiltonian function $K\in C^\infty(B_\varsigma)$ defined by
\begin{equation}\notag
\begin{array}{ll}
K(x)=\widetilde{G}(x)-2L\varsigma & \hbox{if}\;x\in B_0 \\
K(x)=f(\varepsilon) & \hbox{if}\;x\in S_\varepsilon,\; 0\leq \varepsilon<\varsigma\\
K(x)=0 & \hbox{if}\;x\notin B_\varsigma.
\end{array}
\end{equation}
The function $K$ satisfies
$$\sup_{V}K=\sup_{V}\widetilde{G}-2L\varsigma \leq -(c(0)-L\varsigma)-2L\varsigma\leq -c(\varsigma)\quad\hbox{and}$$
$$\inf_{B_\varsigma}K= \inf_{B_0}\widetilde{G}-2L\varsigma
\geq -c(0)-2L\eta> -A.$$
The definition of the restricted BPS capacity $c(\varsigma)$ shows that
$K$ has a fast periodic orbit $x$ with respect to $\omega_{\delta\sigma}$ whose projection to $M$ represents $\alpha$. We claim that $x$ cannot intersect $B_0$. Indeed, if $x$ intersects $B_0$ then it stays completely inside $B_0$ since $B_0$ is invariant under the flow of $K$. This is impossible because the flows of $K$ and $\widetilde{G}$ on $B_0$ coincide and
$\widetilde{G}$ does not have fast periodic orbits. Since $\alpha\neq 0$, the fast periodic orbit $x$ whose projection represents $\alpha$ is nontrivial.
As a consequence, $x$ must be contained in $B_\varsigma\setminus \overline{B_0}$, and hence it lies on
$S_\varepsilon$ for some $0< \varepsilon<\varsigma$.
\noindent \textbf{Step 2.} Step 1 works for every $\varsigma\in(0,\eta]$. Choosing a sequence $\varsigma_j\to 0$, one can find sequences $K_j$ and $\varepsilon_j$, and a corresponding sequence $x_j(t)$ of periodic orbits of $X_{K_j,\delta\sigma}$ having periods $0<\tau_j\leq 1$ and lying on
$S_{\varepsilon_j}$ with $\varepsilon_j\to 0$. Consider the Hamiltonian $H$ on the set
$$U=\bigcup\limits_{\varepsilon\in(-\eta,\eta)}S_\varepsilon.$$
Obviously, if $x\in S_\varepsilon$ then $H(x)=s_0+\varepsilon$ and $K_j(x)=f_j(\varepsilon)=f_j(H(x)-s_0)$. By construction, the periodic orbits $x_j$ solve the equations
\begin{equation}\notag
\begin{array}{ll}
\dot{x}_j(t)=f^\prime_j(\varepsilon_j)X_{H,\delta\sigma}\big(x_j(t)\big)\\
x_j(0)=x_j(\tau_j)
\end{array}
\end{equation}
with the periods $0<\tau_j\leq1$. Normalizing the periods to $1$ we define the functions
$$y_j(t)=x_j\big(\tau_jt\big)\quad \forall t\in[0,1]$$
which solve the Hamiltonian equations
$$\dot{y}_j(t)=f^\prime_j(\varepsilon_j)\tau_jX_{H,\delta\sigma}(y_j(t))\quad \hbox{and}\quad H\big(y_j(t)\big)=s_0+\varepsilon_j.$$
\noindent \textbf{Step 3.} By construction, $f^\prime_j\leq 8L$ and hence $f^\prime_j(\varepsilon_j)\tau_j$ are bounded. This observation is crucial for obtaining a periodic orbit on $S_0$. Indeed, we first note that the hypersurfaces $S_{\varepsilon_j}$ are contained in the compact set $\overline{B_\eta}$, hence the functions $x_j$ are uniformly bounded. For all $t\in S^1$ and all $j\in \mathbb{N}$ we estimate
$$\|\dot{y}_j(t)\|_{G_g}=|f^\prime_j(\varepsilon_j)\tau_j|\cdot \|X_{H,\delta\sigma}\big(x_j(t)\big)\|_{G_g}\leq 8L \sup_{x\in \overline{B_\eta}}\|X_{H,\delta\sigma}\|_{G_g}.
$$
Then, by the Arzel\`a-Ascoli theorem, passing to subsequences, $f^\prime_j(\varepsilon_j)\tau_j$ converges to some $\tau\geq0$ and $y_j(t)$ converges in the $C^0$-topology, and, by making use of the equations, even in the $C^\infty$-topology, to a smooth $1$-periodic solution $y$ of the equation
$$\dot{y}(t)=\tau X_{H,\delta\sigma}(y(t)),\quad y(t)\in S_0.$$
We claim that $\tau\neq 0$. Otherwise, $y(t)=y^*$ for some point $y^*\in S_0$. This contradicts the fact that
the projection of $y$ to $M$ represents the non-trivial class $\alpha$, since $[\pi (y_j)]=\alpha$ and $\pi (y_j)$ converges to $ \pi(y)$ in the $C^\infty$-topology. Reparametrizing time we obtain
the $\tau$-periodic solution $x(t):=y(t/\tau)$ of the equation
$$\dot{x}(t)=X_{H,\delta\sigma}(x(t))$$
whose projection on $M$ represents $\alpha$. The proof of Theorem~\ref{thm:AET1} is completed.
\end{proof}
\subsection{Concluding remarks}
\subsubsection{Hamiltonian flows without non-contractible closed trajectories}
The following proposition shows that the constant $\delta_0(H,g,\sigma,a,b,\alpha)$ in Theorem~\ref{thm:1} depends on $H$. As a consequence, one cannot extend Theorem~\ref{thm:1} (resp. Theorem~\ref{thm:2}) to the case that $\delta_0(H,g,\sigma,a,b,\alpha)$ (resp. $\delta_0(H,g,a,b,\alpha)$) is arbitrarily large.
\begin{proposition}\label{pro:nonexistence}
Let $M$ be a closed Riemannian surface endowed with a metric $g$ of constant curvature $K=-1$. Then there exists a sequence of compactly supported Hamiltonians $\{H_n\}_{n\in \mathbb{N}}\subset C^\infty(D_1T^*M)$ with $\inf_{M}H_n>n$ and a sequence of numbers $\{\delta_n\}_{n\in \mathbb{N}}$ converging to $0$ such that the periodic orbits of the Hamiltonian flow of $H_n$ with respect to $\omega_n=\omega_0+\delta_n\pi^*\sigma$ are all contractible.
\end{proposition}
This proposition is due to Niche~\cite{Ni}. For the sake of completeness, we outline a proof of Proposition~\ref{pro:nonexistence} below. The following proof is different from Niche's proof and is based on a theorem of Ginzburg~\cite{Gi1}.
\begin{proof}
We start with $\omega=\omega_0+\pi^*dA$ where $dA$ is the area form on $(M,g)$. Further, let $F$ be the standard kinetic energy Hamiltonian.
Then \cite[Theorem~2.5]{Gi1} shows that on every level $\{F=c\}$ with $c<1/2$ all integral curves of $F$ are closed and contractible, and there is no closed orbit on $\{F=1/2\}$ (on which the Hamiltonian flow is the horocycle flow, cf.~\cite{He}). Let $V=\{F<1/2\}$. Let $\chi:[0,1/2]\to [0,C]$ be a ``one-sided bump'' function with $\chi^\prime\leq 0$ such that $\chi(t)=C$ near $t=0$ and $\chi(t)=0$ near $t=1/2$. Here $C$ is a constant and can be made arbitrarily large. Let $H=\chi\circ F$. Then $H$ is supported in $V$. Moreover, the flow of $H$ is essentially a reparametrization of the flow of $F$ on $V$, and hence all of its orbits are closed and contractible.
Let us now replace the area form $dA$ by a new magnetic field $\delta dA$, where $0<\delta \leq 1$. The region $V$ shrinks to $V_\delta$; in other words, the threshold level $\{F=1/2\}$ gets closer to $M$, being replaced by $\{F=\delta^2 /2\}$. But everything else remains the same. Indeed, by a rescaling argument, the existence of closed trajectories of $\omega=\omega_0+\delta \pi^*dA$ on the energy level $\{F=\delta^2 /2\}$ is equivalent to the existence of closed trajectories of $\omega=\delta \omega_0+\delta \pi^*dA$ on the energy level $\{F=1/2\}$. The flow of the latter is a reparametrization of the flow of $F$ with respect to $\omega=\omega_0+\pi^*dA$ on $\{F=1/2\}$, and hence has no non-contractible periodic orbits.
Finally, by taking some sequence $\{\delta _n\}_{n=1}^{\infty}$ satisfying $\delta _n\to 0$ and a sequence of Hamiltonians $H_n$ obtained by composing the same $F$ with more and more narrow bump functions, we obtain the desired result.
\end{proof}
\subsubsection{Counterexample}
The condition $\{H<d\}\supset M$ in Theorem~\ref{thm:AET1} and Theorem~\ref{thm:AET2} cannot be dropped. The following example is given by Salom\~{a}o and Weber~\cite{SW}.
\begin{example}
{\rm
Let $M=S^1=\mathbb{R}/\mathbb{Z}$. Consider the function $H:T^*M=S^1\times \mathbb{R}\to \mathbb{R}$ given by
$$H(q,p)=\frac{1}{2}\|p\|_g^2+V(q),$$
where $V(q)=1+\cos 2\pi q$. Then $\{H<1\}$ does not contain $M$, and for any energy $s\in[1, 2)$ the level set $\{H=s\}$ does not carry non-contractible periodic orbits.
}
\end{example}
\section{Appendix. The proof of Proposition~\ref{prop:pf}}\label{app:{prop:pf}}
To prove Proposition~\ref{prop:pf}, we need a theorem of Weber in~\cite{We0} that computes Floer homology of convex radial Hamiltonians.
\begin{theorem}[{\cite[Theorem~2.9]{We0}}]\label{thm:crH}
Let $f:\mathbb{R}\to\mathbb{R}$ be a smooth symmetric function with $f^{\prime\prime}\geq0$. If $\tau\in\mathbb{R}^+\setminus\Lambda_\alpha$ and $f^{\prime}(r)=\tau$ for some $r>0$, then there is a natural isomorphism
\begin{equation}\label{e:B}
\Phi_f^\tau:{\rm HF}^{(-\infty,b_{f,\tau})}_{\alpha}(H^f)\to
{\rm H}_*(\mathcal{L}_\alpha^{\tau^2/2}M),\quad b_{f,\tau}:=rf^{\prime}(r)-f(r).
\end{equation}
If $H^h$ is another such Hamiltonian, then there exists an isomorphism $\Psi_{hf}^\tau$ such that the following diagram commutes:
\begin{eqnarray}\label{e:crHdc}
\xymatrix{{\rm HF}^{(-\infty,b_{f,\tau})}_{\alpha}(H^f)
\ar[rr]_{\simeq}^{\Psi_{hf}^\tau}\ar[dr]^{\simeq}_{\Phi_f^\tau}& &{\rm HF}^{(-\infty,b_{h,\tau})}_{\alpha}(H^h)\ar[dl]_{\simeq}^{\Phi_h^\tau}\\ & {\rm H}_*(\mathcal{L}_\alpha^{\tau^2/2}M)& }
\end{eqnarray}
If $\rho\in(0,\tau]\setminus\Lambda_\alpha$ and $f^{\prime}(s)=\rho$ for some $s>0$, then we have the following commutative diagram whose top horizontal row is the natural inclusion $\iota^F$:
\begin{eqnarray}
\begin{CD}\label{diag:dc2}
{\rm HF}^{(-\infty,b_{f,\rho})}_{\alpha}(H^f) @>{\iota^F}>> {\rm HF}^{(-\infty,b_{f,\tau})}_{\alpha}(H^f)\\
@V{\Phi_{f}^\rho}V\simeq V @V\simeq V{\Phi_{f}^\tau}V \\
{\rm H}_*(\mathcal{L}_\alpha^{\rho^2/2}M) @>{[\iota^{\tau^2/2}_{\rho^2/2}]}>> {\rm H}_*(\mathcal{L}_\alpha^{\tau^2/2}M)
\end{CD}
\end{eqnarray}
\end{theorem}
\noindent Here the Floer homology ${\rm HF}^{(a,b)}_{\alpha}(H^f)$ is well defined for every $H^f\in \mathscr{K}_{R;\alpha}^{a,b}$; see Remark~\ref{rem:HFfcsf}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{fk1.png}
\caption{The monotone homotopy between $f_k$ and $f_k^\prime$}\label{fig:3}
\end{figure}
\begin{proof}[Proof of Proposition~\ref{prop:pf}]
The basic idea is to deform $f_k$ and $h_k$ by monotone homotopies to convex functions so that Theorem~\ref{thm:crH} can be applied. To show the isomorphism~(\ref{e:Ipf1}), we first follow the graph of $f_k$ until its slope becomes $a$ for the second time, at a point, say $p$, near $r_{k2}$, and then continue linearly with slope $a$; this yields a function of class $C^1$ at $p$.
Smoothing out this function near $p$ yields a smooth function $f_k^\prime\in \mathscr{K}_{R;\alpha}^{-\infty,a}$ (see Figure~\ref{fig:3}). The monotone homotopy between $f_k$ and $f_k^\prime$, as shown in Figure~\ref{fig:3}, provides the monotone isomorphism
\begin{equation}\label{e:hmfk1}
\Psi_{f_k^\prime f_k}:{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k})\to
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f^\prime_k}).
\end{equation}
This is a consequence of the fact that the $y$-intercepts of the tangent lines at all points that do not remain fixed during the homotopy are strictly larger than $-a$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{fk2.png}
\caption{The monotone homotopy between $f_k^\prime$ and $f_k^{\prime\prime}$}\label{fig:4}
\end{figure}
Second, we define the function $f_k^{\prime\prime}$ obtained by following the graph of $f_k^\prime$ until its slope becomes $a$ for the first time at some point $q$, and then continuing linearly with slope $a$ (see Figure~\ref{fig:4}). The resulting function is of class $C^1$ at $q$. Smoothing near $q$ we obtain a smooth function $f_k^{\prime\prime}\in\mathscr{K}_{R;\alpha}^{-\infty,a}$.
Note that all tangent lines of the graphs during the monotone homotopy, as indicated in Figure~\ref{fig:4}, which pass through the point $(0,-a)$ lie strictly between the lines $x\mapsto ax/R-a$ and $x\mapsto l_0x-a$. It follows that the homotopy between $f_k^\prime$ and $f_k^{\prime\prime}$ induces the monotone isomorphism
\begin{equation}\label{e:hmfk2}
\Psi_{f_k^{\prime\prime}f_k^\prime}:{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f^\prime_k})\to
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k^{\prime\prime}}).
\end{equation}
Since the $y$-intercept of the tangent line of $\hbox{graph}\,f_k^{\prime\prime}$ at any point is less than $-a$, the value of
$\mathscr{A}_{H^{f_k^{\prime\prime}}}$ at each $1$-periodic orbit
is larger than $a$. Then the exact sequence~(\ref{e:esFH}) yields isomorphisms
\begin{equation}\label{e:hmfk3}
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{f_k^{\prime\prime}})\xrightarrow
{}
{\rm HF}^{(-\infty,+\infty)}_{\alpha}(H^{f_k^{\prime\prime}})
\to {\rm HF}^{(-\infty,B)}_{\alpha}(H^{f_k^{\prime\prime}}).
\end{equation}
Here $B=b_{f_k^{\prime\prime},a/R}$ is defined in Theorem~\ref{thm:crH} (the fact that $a/R\notin\Lambda_\alpha$ is used). Then by Theorem~\ref{thm:crH} we have the natural isomorphism
\begin{equation}\label{e:hmfk4}
\Phi_{f_k^{\prime\prime}}^{a/R}:{\rm HF}^{(-\infty,B)}_{\alpha}(H^{f_k^{\prime\prime}})\to
{\rm H}_*(\mathcal{L}_\alpha^{a^2/(2R^2)}M).
\end{equation}
By composing~(\ref{e:hmfk1})--(\ref{e:hmfk4}) we arrive at the desired isomorphism~(\ref{e:Ipf1}).
\begin{figure}[H]
\centering
\subfigure[The monotone homotopy between $h_k$ and $h_k^{\prime}$]{
\label{fig:6a}
\includegraphics[scale=0.38]{hk1.png}}
\subfigure[The monotone homotopy between $h_k^\prime$ and $h_k^{\prime\prime}$]{
\label{fig:6b}
\includegraphics[scale=0.38]{hk2.png}}
\caption{Monotone homotopies}
\label{fig:6}
\end{figure}
To obtain the isomorphism~(\ref{e:Ipf2}), we deform $h_k$ initially by the monotone homotopy shown in Figure~\ref{fig:6}(a) to a smooth function $h_k^\prime$, and then keep deforming $h_k^\prime$ by the monotone homotopy shown in Figure~\ref{fig:6}(b) to a smooth function $h_k^{\prime\prime}$. Here, for $x\geq R+\varepsilon$ with some sufficiently small positive constant $\varepsilon$, the graph of $h_k^{\prime\prime}$ turns into a ray with slope $\mu_k$ which is very close to the line $x\mapsto \mu_kx-a$. Note that all points on members of the homotopy whose tangent lines pass through the point $(0,-a)$ lie strictly between the lines $x\mapsto l_-x-a$ and $x\mapsto l_+x-a$; see Figure~\ref{fig:6}. Therefore we obtain the monotone isomorphisms
\begin{equation}\label{e:hmhk1}
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{h_k})\xrightarrow{\Psi_{h_k^{\prime} h_k}} {\rm HF}^{(a,+\infty)}_{\alpha}(H^{h^{\prime}_k})
\xrightarrow{\Psi_{h_k^{\prime\prime} h_k^{\prime} }}
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{h^{\prime\prime}_k}).
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{hk3.png}
\caption{The monotone homotopy between $h_k^{\prime\prime}$ and $\tilde{h}_k$}\label{fig:7}
\end{figure}
Next, consider the function $\tilde{h}_k$ given by following the graph of $h_k^{\prime\prime}$ until it takes on slope $\mu_k$ for the first time (near the point $(0,-c)$), and then continuing linearly with slope $\mu_k$ (see~Figure~\ref{fig:7}). Smoothing near $(0,-c)$ yields $\tilde{h}_k$. Since the $y$-intercepts of the tangent lines
of the graphs during the homotopy shown in Figure~\ref{fig:7} are less than $-a$, we obtain the monotone isomorphism
\begin{equation}\label{e:hmhk2}
\Psi_{\tilde{h}_kh_k^{\prime\prime}}:{\rm HF}^{(a,+\infty)}_{\alpha}(H^{h_k^{\prime\prime}})\to
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{\tilde{h}_k}).
\end{equation}
Again, using the fact that the $y$-intercepts of the tangent lines of $\hbox{graph}\,\tilde{h}_k$ are less than $-a$,
the exact sequence~(\ref{e:esFH}) implies the following isomorphisms:
\begin{equation}\label{e:hmhk3}
{\rm HF}^{(a,+\infty)}_{\alpha}(H^{\tilde{h}_k})\to {\rm HF}^{(-\infty,+\infty)}_{\alpha}(H^{\tilde{h}_k})
\to{\rm HF}^{(-\infty,B^\prime)}_{\alpha}(H^{\tilde{h}_k})
\end{equation}
where $B^\prime=b_{\tilde{h}_k,\mu_k}$ is as in~(\ref{e:B}). Then Theorem~\ref{thm:crH} gives the isomorphism
\begin{equation}\label{e:hmhk4}
\Phi_{\tilde{h}_k}^{\mu_k}:{\rm HF}^{(-\infty,B^\prime)}_{\alpha}(H^{\tilde{h}_k})\to
{\rm H}_*(\mathcal{L}_\alpha^{\mu_k^2/2}M).
\end{equation}
Composing (\ref{e:hmhk1})--(\ref{e:hmhk4}) yields the desired isomorphism (\ref{e:Ipf2}).
The proof of (\ref{diag:dc3})
is identical to that of the commutative diagram~(53) in~\cite{We0} (see pp.~563--564), which is an easy application of Theorem~\ref{thm:crH}; we omit it here.
\end{proof}
\section*{Acknowledgments}
This work was partially supported by the Ministry of Science and Technology of Taiwan,
under grants MOST-106-2112-M-006-010-MY2 (CHC).
\section{Introduction}
\label{introduction}
Shear thickening is a rheological process in which the viscosity increases as the shear rate increases.
There are two types of shear thickening: continuous shear thickening (CST) and
discontinuous shear thickening (DST).
In particular, DST is used in industrial applications such as body armor and traction control.
The DST has attracted the attention of physicists~\cite{Barnes89,Mewis11,Brown14,Lootens05,Cwalina14} as a typical nonequilibrium discontinuous phase transition between a liquid-like phase and a solid-like phase.
Apart from other important factors~\cite{Brown09,Brown10,Waitukaitis12}, it has been recognized recently that the mutual friction between grains plays an important role in the DST for dense suspensions~\cite{Otsuki11,Seto13,Mari14,Bi11,Pica11,Haussinger13,Kawasaki14}.
In addition, the normal stress difference becomes large when the shear thickening takes place~\cite{Lootens05,Cwalina14}.
The mechanism of the DST can also be understood by introducing an order parameter which exhibits an S-shape in
the stress--strain-rate plane (flow curve)~\cite{Wyart14,Nakanishi11,Nakanishi12,Nagahiro13,Grob14}.
Although most of the previous studies on shear thickening are oriented to dense suspensions,
it is convenient to consider relatively low density systems, where kinetic theory tools~\cite{Brilliantov04,Brey98,Garzo99,Lutsko05,Garzo13} can provide
a deeper understanding of the microscopic mechanisms involved in the DST.
Indeed, some papers have reported that a DST-like process for the kinetic temperature can take place as a result of a saddle-node bifurcation~\cite{Tsao95,BGK2016,DST16,Saha17}.
Thus, Tsao and Koch~\cite{Tsao95}
demonstrated the existence of a non-equilibrium discontinuous phase transition for the kinetic temperature between
a quenched state (a low temperature state) and an ignited state (a high temperature state) in a simple shear flow of a
(granular) gas-solid suspension described by the Boltzmann kinetic equation.
Recently, other works~\cite{BGK2016,DST16,Saha17} have identified the discontinuous quenched-ignited transition with the DST if the system is agitated by thermal fluctuations. The validity of these studies has been verified by the event-driven Langevin simulation for hard spheres (EDLSHS)~\cite{Scala12} and the direct simulation Monte Carlo method~\cite{B94}.
Such gas-solid suspensions are usually discussed in the context of fluidized beds~\cite{Gidaspow94,Jackson00}, which
might be categorized as one of the typical inertial suspensions~\cite{Koch01}.
In particular, the target of our study is the homogeneous phase achieved by the balance between the gas flow injected from the bottom of a container and gravity in fluidized beds.
It is remarkable that the previous studies on dilute gas-solid suspensions suggested that the DST
(or the discontinuous quenched-ignited transition) tends towards the CST (or the continuous quenched-ignited transition)
as the density increases~\cite{BGK2016,Saha17}.
Notice that the Newtonian branch for low shear rates disappears if the thermal agitation is absent.
As a result, one can only observe the CST in the rheology of such systems~\cite{Tsao95,Chamorro15},
though the discontinuous quenched-ignited transition can still be observed for the kinetic temperature.
These results are consistent with the analysis made by Santos {\it et al.} \cite{Santos98} which found the existence of a CST in moderately dense hard-core gases by using the revised Enskog theory.
It is worth noting that most of the previous theoretical studies of the above solid-gas suspensions~\cite{Tsao95,BGK2016,DST16,Chamorro15} are based on the application of
Grad's moment method~\cite{Grad49} to the Boltzmann~\cite{Garzo02,Santos04} and Enskog~\cite{Garzo13} kinetic equations.
A slightly different method has been recently adopted by Refs.~\cite{Saha17,Lutsko04,Saha14,Saha16} since they consider an anisotropic Maxwellian distribution which reduces to the Maxwellian in the isotropic limit.
Although the latter solution can be more appropriate for highly dissipative sheared suspensions,
it is quite intricate and requires some additional approximations to get explicit results.
In this context, the conventional Grad's moment method (which is based on the assumption that the distribution function is a local Maxwellian times a sum over Hermite polynomials) is simple enough to reproduce for instance the normal stress differences~\cite{Lutsko04,Saha14,Saha16}. Therefore, the conventional Grad's moment method can be still considered as a powerful method to describe the rheology of gas-solid suspensions.
Although the previous achievements of Refs.~\cite{Tsao95,DST16,Saha17} are remarkable,
they are limited to the low-density regime and hence their predictions are far from typical experimental situations.
One of the few works devoted to dense gases was carried out by Sangani {\it et al.} \cite{Sangani96} two decades ago.
In this paper, the authors extended the analysis of Ref.~\cite{Tsao95} to moderate densities by considering the Enskog equation.
Their analysis showed that the discontinuous transition of the kinetic temperature for dilute suspensions becomes continuous at relatively low density~\cite{Sangani96}.
This conclusion agrees with the previous theories ~\cite{BGK2016,Saha17} for dilute suspensions.
However, the treatment of Sangani {\it et al.}~\cite{Sangani96} is not completely systematic since they ignore
the effects of thermal fluctuations.
The purpose of this paper is to extend the previous dilute results to moderately dense systems by solving the Enskog kinetic equation~\cite{Garzo99,Lutsko05,Garzo13,Resibois77} by two complementary and independent routes:
(i) Grad's moment method and (ii) event-driven simulations (EDLSHS).
The influence of the background fluid on particles is modeled via an external force constituted by two terms:
(i) a viscous drag force which mimics the friction of solid particles with the interstitial fluid and
(ii) a stochastic Langevin-like term accounting for thermal fluctuations. To assess the finite-density effects on rheology, a set of coupled equations for the stress tensor, the kinetic temperature, and the anisotropic temperatures corresponding to the normal stress differences is derived from Grad's approximation.
The validity of our simple theory is also examined through a comparison with computer simulations.
The motivation of the present work is twofold. First, since there is some evidence~\cite{Chialvo13} that the Enskog theory is accurate for solid volume fractions smaller than 0.5, our results will allow us to analyze the behavior of rheology for moderately dense suspensions corresponding to typical experiments.
As a second point, our results will allow us to clarify whether the scenario proposed by Sangani {\it et al.}~\cite{Sangani96} is universal.
The organization of this paper is as follows.
The outline of the Enskog kinetic theory of moderately dense suspensions under a simple shear flow is briefly summarized in Sec.\ \ref{Enskogbis}. Section \ref{rheology} discusses the rheology of the suspension model where the details of the calculations appear in a series of Appendices.
Theoretical results are compared against computer simulations in Sec.\ \ref{simulation} for two values of the restitution coefficient $e$ ($e=1$ and $0.9$) and several values of the solid volume fraction $\varphi$ in the main text.
As a complement, to assess the influence of inelasticity on rheology, theory and simulation results are also displayed in Appendix G for the density $\varphi=0.3$ and several values of the restitution coefficient ($e=1,0.9,0.7,0.5$, and $0.3$).
Section \ref{DST} deals with the transition from DST to CST. The paper is closed in Sec.\ \ref{discussion} where the results reported here are briefly discussed.
\section{Enskog kinetic equation for suspensions under simple shear flow}
\label{Enskogbis}
\subsection{Enskog kinetic equation for sheared granular suspensions}
Let us consider a collection of monodisperse smooth spherical grains of diameter
$\sigma$, mass $m$, and restitution coefficient $e$ satisfying $0< e \le 1$.
Because we are interested in the homogeneous state of fluidized beds,
the solid particles are distributed in a $d$-dimensional space, influenced only by the background fluid under a uniform shear flow.
This state is macroscopically characterized by a constant number density $n$, a uniform kinetic temperature $T$,
and macroscopic velocity field $\bm{u}=(u_x,\bm{u}_\perp)$, where the constant shear rate $\dot\gamma$ is given by
\begin{equation}
\label{plane_shear}
u_x=\dot\gamma y, \quad \bm{u}_\perp=\bm{0} .
\end{equation}
Let us introduce the peculiar momentum of the $i$-th particle as $\bm{p}_i\equiv m(\bm{v}_{i}-\dot\gamma y \bm{e}_x)$, where $\bm{v}_i$ is the instantaneous velocity of the $i$-th particle, and $\bm{e}_x$ is the unit vector parallel to the $x$ direction. For low Reynolds numbers, a reliable model for describing solid particles immersed in a fluid (suspensions) is the Langevin equation
\begin{equation}
\label{Langevin_eq}
\frac{d{\bm{p}}_i}{dt}=-\zeta \bm{p}_i + \bm{F}_i^{({\rm imp})}+ m\bm{\xi}_i,
\end{equation}
where we have assumed that the solid particles are suspended by the gas flow and that gravity does not play any role.
We have also introduced the impulsive force $\bm{F}_i^{(\rm imp)}$ to express collisions between grains and the noise $\bm{\xi}_i(t)=\xi_{i,\alpha}(t)\bm{e}_\alpha$ has the average properties
\begin{equation}
\label{noise}
\langle \bm{\xi}_i(t)\rangle=0, \quad
\langle \xi_{i,\alpha}(t)\xi_{j,\beta}(t')\rangle = 2\zeta T_{\rm ex} \delta_{ij}\delta_{\alpha\beta}\delta(t-t').
\end{equation}
Here, the parameters $\zeta$ and $T_{\rm ex}$ characterize the drag from the background fluid and the
environmental temperature, respectively. Actually, the drag coefficient $\zeta$ should be a resistance matrix that strongly depends on the configuration of grains, as a result of the hydrodynamic interactions between them. For simplicity, however, we regard $\zeta$ as a scalar function of the average volume fraction $\varphi$ defined as
\begin{equation}
\label{volume_fraction}
\varphi=\frac{\pi^{d/2}}{2^{d-1} d \Gamma\left(\frac{d}{2}\right)}
n\sigma^d,
\end{equation}
where $\Gamma(x)=\int_0^\infty dt e^{-t}t^{x-1}$ is the Gamma function. This is a mean field approximation where the drag coefficient $\zeta$ is independent of the configuration of grains.
This simple model might be applicable to the description of inertial suspensions in which the mean diameter of suspended particles ranges approximately from 1~$\mu$m to 70~$\mu$m~\cite{Koch01}.
In this paper, we assume that $\zeta \propto \eta_0 \propto \sqrt{T_{\rm ex}}$, where $\eta_0$ is the viscosity of the solvent or fluid phase.
If we ignore the density dependence of $\zeta$ and the grains are bidisperse soft spheres,
the Langevin model~\eqref{Langevin_eq} is equivalent to that used by Kawasaki {\it et al}.~\cite{Kawasaki14}.
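To illustrate the drag and noise terms of Eq.~\eqref{Langevin_eq}, the following minimal Euler--Maruyama sketch integrates the peculiar momenta of non-interacting particles (the impulsive force and the shear-induced coupling are omitted here, so the sketch only captures the relaxation of the kinetic temperature towards $T_{\rm ex}$):
\begin{verbatim}
import numpy as np

def evolve_peculiar_momenta(p, zeta, T_ex, m, dt, n_steps, rng):
    # Euler-Maruyama steps of dp/dt = -zeta*p + m*xi, with
    # <xi_a(t) xi_b(t')> = 2*zeta*T_ex*delta_ab*delta(t-t').
    amp = m * np.sqrt(2.0 * zeta * T_ex * dt)   # noise increment amplitude
    for _ in range(n_steps):
        p += -zeta * p * dt + amp * rng.standard_normal(p.shape)
    return p

rng = np.random.default_rng(0)
p = np.zeros((1000, 3))               # N = 1000 particles, d = 3, m = 1 units
p = evolve_peculiar_momenta(p, zeta=1.0, T_ex=1.0, m=1.0, dt=1.0e-3,
                            n_steps=20000, rng=rng)
print((p**2).sum(axis=1).mean() / 3.0)  # kinetic temperature, close to T_ex
\end{verbatim}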
So far, we have not specified the explicit dependence of $\zeta$ on $\varphi$ and $T_{\rm ex}$.
Let us rewrite $\zeta$ as
\begin{equation}
\label{zeta=zeta_0*R}
\zeta=\zeta_0 R(\varphi) ,
\end{equation}
where $\zeta_0=3\pi \eta_0 \sigma/m\propto \sqrt{T_{\rm ex}/m}/\sigma$ and the solvent viscosity $\eta_0\propto \sqrt{m T_{\rm ex}}/\sigma^2$ for $d=3$. We adopt the following empirical expressions for the dimensionless resistance $R(\varphi)$:
\begin{equation}
R(\varphi)=
1+3\displaystyle\sqrt{\frac{\varphi}{2}}
\end{equation}
for $\varphi\le 0.1$~\cite{Tsao95,Koch90},
and
\begin{equation}
\label{Sangani3_18}
R(\varphi)= k_1(\varphi)-\varphi g_0(\varphi) \ln \epsilon_m
\end{equation}
for $\varphi>0.1$~\cite{Sangani96}.
Here, $g_0(|\bm{r}|=\sigma,\varphi)$ is the radial distribution at contact,
which is believed to be uniform in the simple shear flow problem.
For hard spheres ($d=3$) and $\varphi<0.49$, a good approximation for the radial distribution is~\cite{CS}
\begin{equation}\label{radial_fn}
g_0(|\bm{r}|=\sigma,\varphi)=\frac{1-\varphi/2}{(1-\varphi)^3}.
\end{equation}
Hereafter, we will use $g_0\equiv g_0(|\bm{r}|=\sigma,\varphi)$ as the abbreviation.
In addition, in Eq.\ \eqref{Sangani3_18},
$\epsilon_m$ is the gap parameter characterizing the lubrication force between rough spheres, and $k_1(\varphi)$ for $d=3$ is the empirical function given by
\begin{equation}
k_1(\varphi)=1+\frac{3}{\sqrt{2}}\varphi^{1/2}+\frac{135}{64}\varphi \ln\varphi+
11.26\varphi (1-5.1\varphi+16.57\varphi^2-21.77\varphi^3).
\end{equation}
Because $\epsilon_m$ is related to the limitation of the continuum description of suspensions, it is difficult to give its microscopic expression. Nevertheless, it is known that typical values of $\epsilon_m$ lie in the range 0.01--0.05.
In this paper we will take $\epsilon_m=0.01$ in the explicit calculations below, following Ref.~\cite{Garzo2012}.
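For reference, the empirical expressions \eqref{Sangani3_18} and \eqref{radial_fn} can be transcribed directly (with $\epsilon_m=0.01$, as stated above); the following sketch simply encodes the formulas given in this subsection:
\begin{verbatim}
import numpy as np

def g0_contact(phi):
    # radial distribution at contact for hard spheres (d = 3, phi < 0.49)
    return (1.0 - 0.5 * phi) / (1.0 - phi)**3

def k1(phi):
    return (1.0 + 3.0 / np.sqrt(2.0) * np.sqrt(phi)
            + 135.0 / 64.0 * phi * np.log(phi)
            + 11.26 * phi * (1.0 - 5.1 * phi
                             + 16.57 * phi**2 - 21.77 * phi**3))

def R_drag(phi, eps_m=0.01):
    # dimensionless resistance R(phi): dilute form for phi <= 0.1,
    # the expression of Sangani et al. for phi > 0.1
    if phi <= 0.1:
        return 1.0 + 3.0 * np.sqrt(phi / 2.0)
    return k1(phi) - phi * g0_contact(phi) * np.log(eps_m)
\end{verbatim}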
Let us assume now that the suspension is under simple shear flow.
At a microscopic level, the simple shear flow state is generated by Lees-Edwards boundary conditions~\cite{LE72} which are simply periodic boundary conditions in the local Lagrangian frame $\bm{V}=(v_x-\dot\gamma y)\bm{e}_x+\bm{v}_\perp$.
In this frame, the velocity distribution function $f(\bm{r},\bm{v},t)$ is uniform
\beq
\label{vic1}
f(\bm{r},\bm{v},t)=f(\bm{V},t),
\eeq
and the Enskog equation for the granular suspension becomes~\cite{Hayakawa03,Chamorro15}
\begin{equation}
\label{Enskog}
\left(\frac{\partial}{\partial t}-\dot\gamma
V_{y}\frac{\partial}{\partial V_{x}}
\right)f(\bm{V},t)
=
\zeta\frac{\partial}{\partial \bm{V}} \cdot \left(
\left\{ \bm{V}+ \frac{T_{\rm ex}}{m} \frac{\partial}{\partial \bm{V}} \right\}
f(\bm{V},t) \right)+
J_\text{E}(\bm{V}|f,f).
\end{equation}
The Enskog collision operator $J_\text{E}[\bm{V}|f,f]$ is given by (see Appendix~\ref{Enskog_base})
\begin{equation}
\label{J(V|f)}
J_{\text{E}}\left[{\bf V}_{1}|f,f\right] =\sigma^{d-1}g_0 \int d{\bf V}
_{2}\int d\widehat{\boldsymbol{\sigma}}\,\Theta (\widehat{{\boldsymbol {\sigma}}}
\cdot \bm{v}_{12})(\widehat{\boldsymbol {\sigma }}\cdot \bm{v}_{12})\left[
\frac{
f(\bm{V}_1'',t)
f(\bm{V}_2''+\dot\gamma\sigma \widehat{\sigma}_y \bm{e}_x,t)}{e^2}
-f(\bm{V}_1,t)f(\bm{V}_2-\dot\gamma\sigma \widehat{\sigma}_y \bm{e}_x,t)
\right].
\end{equation}
In Eq.\ \eqref{J(V|f)}, the Heaviside step function is defined as $\Theta(x)=1$ for $x\ge 0$ and $\Theta(x)=0$ otherwise, $\bm{v}_{12}=\bm{V}_1-\bm{V}_2=\bm{v}_1-\bm{v}_2$ is the relative velocity at contact, and $\bm{\sigma}=\bm{r}_{12}$ where $\bm{r}_{12}\equiv \bm{r}_1-\bm{r}_2$.
In addition, the double primes in Eq.\ \eqref{J(V|f)} denote the pre-collisional velocities $\left\{\bm{V}_1^{''}, \bm{V}_2''\right\}$ that lead to $\left\{\bm{V}_1, \bm{V}_2\right\}$ following a binary collision:
\begin{equation}
\label{collision_rule}
\bm{V}_1^{''}=\bm{V}_1-\frac{1+e}{2e}(\bm{v}_{12}\cdot\widehat{\bm{\sigma}})\widehat{\bm{\sigma}}, \quad
\bm{V}_2^{''}=\bm{V}_2+\frac{1+e}{2e}(\bm{v}_{12}\cdot\widehat{\bm{\sigma}})\widehat{\bm{\sigma}}.
\end{equation}
In this paper we do not consider the effects of tangential friction and rotation induced by each binary collision.
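As a simple consistency check (a numerical sketch, not part of the kinetic-theory derivation), the direct collision rule obtained by inverting Eq.~\eqref{collision_rule} satisfies the restitution law for the normal relative velocity and conserves momentum:
\begin{verbatim}
import numpy as np

def direct_collision(v1, v2, sig_hat, e):
    # post-collisional velocities of a smooth inelastic binary collision
    g_n = np.dot(v1 - v2, sig_hat)          # normal relative velocity
    dv = 0.5 * (1.0 + e) * g_n * sig_hat
    return v1 - dv, v2 + dv

rng = np.random.default_rng(1)
v1, v2 = rng.normal(size=3), rng.normal(size=3)
sig_hat = rng.normal(size=3); sig_hat /= np.linalg.norm(sig_hat)
e = 0.9
w1, w2 = direct_collision(v1, v2, sig_hat, e)
assert np.isclose(np.dot(w1 - w2, sig_hat), -e * np.dot(v1 - v2, sig_hat))
assert np.allclose(v1 + v2, w1 + w2)        # momentum conservation (equal masses)
\end{verbatim}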
The most important quantity in a shear flow problem is the pressure tensor $\mathsf{P}$. It has kinetic and collisional transfer contributions, i.e., $\mathsf{P}=\mathsf{P}^k+\mathsf{P}^c$. The kinetic contribution is
\begin{equation}
\label{pressure_tensor:kinetic}
P^k_{\alpha\beta}=m \int d\bm{V} V_\alpha V_\beta f(\bm{V}),
\end{equation}
while its collisional contribution is given by (see Appendix B for the derivation)
\begin{equation}
\label{pressure:collisonal}
P^c_{\alpha\beta}=\frac{(1+e)}{4} m\sigma^d g_0 \int d\bm{V}_1 \int d\bm{V}_2\int d\widehat{\bm{\sigma}} \Theta(\bm{v}_{12}\cdot\widehat{\bm{\sigma}})
(\bm{v}_{12}\cdot\widehat{\bm{\sigma}})^2\widehat{\sigma}_\alpha\widehat{\sigma}_\beta
f\left(\bm{V}_1+\frac{1}{2}\dot\gamma \sigma \widehat{\sigma}_y \bm{e}_x\right)f\left(\bm{V}_2-\frac{1}{2}\dot\gamma \sigma \widehat{\sigma}_y \bm{e}_x\right).
\end{equation}
As usual, the hydrostatic pressure $P$ is defined as $P\equiv P_{\alpha\alpha}/d$,
where we adopt Einstein's summation rule, i.e., $P_{\alpha\alpha}=\sum_{\alpha=1}^d P_{\alpha\alpha}$. The kinetic part of the pressure tensor satisfies the equation of state of ideal gases, namely, $P^k\equiv P^k_{\alpha\alpha}/d =n T$, where
\begin{equation}
\label{vic3}
n=\int d\bm{V} f(\bm{V})
\end{equation}
is the number density and
\begin{equation}
\label{kinetic_T}
T =\frac{m}{dn}\int d\bm{V} \bm{V}^2f(\bm{V})
\end{equation}
is the kinetic granular temperature.
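In particle simulations the kinetic quantities above are evaluated as averages over the $N$ particles in a given volume; the following sketch (with $f$ replaced by the empirical one-particle distribution) computes them from the peculiar velocities:
\begin{verbatim}
import numpy as np

def kinetic_observables(V, m, volume):
    # V: peculiar velocities, array of shape (N, d)
    N, d = V.shape
    n = N / volume                                  # number density
    P_k = m * n * np.einsum('ia,ib->ab', V, V) / N  # kinetic pressure tensor
    T = m * (V**2).sum() / (d * N)                  # kinetic granular temperature
    return P_k, n, T
\end{verbatim}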
\subsection{Grad's moment method}
The kinetic contribution $P^k_{\alpha\beta}$ to the pressure tensor can be obtained by multiplying both sides of Eq.\ \eqref{Enskog} by $m V_\alpha V_\beta$ and integrating over $\bm{V}$. The result is
\begin{equation}
\frac{\partial}{\partial t}P^k_{\alpha\beta}
+\dot\gamma (\delta_{\alpha x}P_{y \beta}^k+\delta_{\beta x} P_{y \alpha}^k)
=-2\zeta ( P_{\alpha\beta}^k- n T_{\rm ex} \delta_{\alpha\beta} )
-\Lambda_{\alpha\beta}^E ,
\label{Garzo31}
\end{equation}
where
\begin{equation}
\label{Garzo32}
\Lambda_{\alpha\beta}^E\equiv -m \int d\bm{V} V_\alpha V_\beta J_E(\bm{V}|f,f) .
\end{equation}
The collisional moment \eqref{Garzo32} can be rewritten as (see Appendix \ref{details_collision_transfer} for technical details)
\beq
\label{vic4}
\Lambda_{\alpha\beta}^E=\overline{\Lambda}_{\alpha\beta}^E+\dot\gamma (\delta_{\alpha x}P_{y \beta}^c+\delta_{\beta x}
P_{y \alpha}^c),
\eeq
where $\overline{\Lambda}_{\alpha\beta}^E$ is defined by Eq.\ \eqref{over_Lambda} and we have omitted the last term on the right hand side of Eq.\ \eqref{total_Lambda_E} because the heat flux vanishes in the simple shear flow problem for symmetry reasons [this can easily be deduced from Grad's distribution \eqref{Grad}, as shown in the Appendix C.2].
Taking into account Eq.\ \eqref{vic4}, Eq.\ \eqref{Garzo31} reads
\begin{equation}
\label{Garzo31b}
\frac{\partial}{\partial t}P^k_{\alpha\beta}
+\dot\gamma (\delta_{\alpha x}P_{y \beta}+\delta_{\beta x} P_{y \alpha})
=-2\zeta ( P_{\alpha\beta}^k- n T_{\rm ex} \delta_{\alpha\beta} )
-\overline{\Lambda}_{\alpha\beta}^E.
\end{equation}
The simple shear flow state is in general non-Newtonian.
This can be characterized for instance by the anisotropic temperatures $\Delta T$ and $\delta T$ which are, respectively, defined as
\beq
\label{DT}
\Delta T \equiv \frac{P_{xx}^k-P_{yy}^k}{n},
\eeq
\beq
\label{dT}
\delta T \equiv \frac{P_{xx}^k-P_{zz}^k}{n}.
\eeq
Apart from the normal stresses, one can define a non-Newtonian shear viscosity coefficient $\eta(\dot\gamma,e)$ by
\beq
\label{shear_viscosity}
\eta(\dot\gamma,e)\equiv -\frac{P_{xy}}{\dot\gamma}.
\eeq
The time-dependent equations for $T$, $\Delta T$, $\delta T$, and $P_{xy}^k$ can be easily derived from Eq.\
\eqref{Garzo31b}.
They are given by
\begin{eqnarray}
\label{partT}
\frac{\partial}{\partial t} T&=&
-\frac{2\dot\gamma}{d n}P_{xy}+2\zeta (T_{\rm ex}-T) -
\frac{\overline{\Lambda}^E_{\alpha\alpha}}{d n},
\\
\label{part_DT}
\frac{\partial}{\partial t} \Delta T&=&
-\frac{2}{n}\dot\gamma P_{xy}-2\zeta \Delta T-\frac{\overline{\Lambda}_{xx}^E-\overline{\Lambda}_{yy}^E}{n},
\\
\label{part_dT}
\frac{\partial}{\partial t}\delta T&=&
-\frac{2}{n}\dot\gamma P_{xy}-2\zeta \delta T-\frac{\overline{\Lambda}_{xx}^E-\overline{\Lambda}_{zz}^E}{n},
\\
\label{part_P_{xy}}
\frac{\partial}{\partial t}P_{xy}^k&=&- \dot\gamma P_{yy}
-2\zeta P_{xy}^k-\overline{\Lambda}_{xy}^E.
\end{eqnarray}
The moment equations \eqref{partT}--\eqref{part_P_{xy}} are still exact and have been obtained without the explicit knowledge of the velocity distribution function $f$.
On the other hand, the exact expression of the collision integral $\overline{\Lambda}_{\alpha\beta}^E$ is not known, even in the elastic case. A good estimate of this collisional moment can be expected by using Grad's
approximation~\cite{Garzo13,BGK2016,Chamorro15,Grad49,Garzo02,Santos04}
\begin{equation}
\label{Grad}
f(\bm{V})=f_{\rm M}(\bm{V})\left(1+\frac{m}{2T}\Pi^k_{\alpha\beta}V_\alpha V_\beta
\right),
\end{equation}
where
\beq
\label{Maxwell}
f_{\rm M}(\bm{V})=
n\left(\frac{m}{2\pi T}\right)^{d/2}\exp\left(-\frac{m V^2}{2T} \right)
\eeq
is the Maxwellian distribution and
\beq
\label{vic5}
\Pi^k_{\alpha\beta}\equiv \frac{P^k_{\alpha\beta}}{nT}-\delta_{\alpha\beta}
\eeq
is the traceless part of the (dimensionless) kinetic pressure tensor $P^k_{\alpha\beta}$.
The collisional moment $\overline{\Lambda}_{\alpha\beta}^E$ can be determined by inserting the trial distribution \eqref{Grad} into Eq.\ \eqref{over_Lambda}. After lengthy algebra (see the Appendices \ref{details_collision_transfer} and \ref{derivation_Lambda} for details), one obtains the expression
\beqa
\label{Lambda_E:result}
\overline{\Lambda}^E_{\alpha\beta}&=&
g_0 nT \left\{
\nu \Pi^k_{\alpha\beta}+\lambda \delta_{\alpha\beta}
-\frac{2^{d-2}}{(d+2)(d+4)} \varphi (1+e)\dot\gamma \left[(d+4)(1-3e)
(\delta_{\alpha x}\delta_{\beta y}+\delta_{\alpha y}\delta_{\beta x})\right.\right.
\nonumber\\
& & \left.\left.
+
2(d+1-3e)\left(\Pi^k_{\alpha x}\delta_{\beta y}+\Pi^k_{\alpha y}\delta_{\beta x}
+\Pi^k_{\beta x}\delta_{\alpha y}+\Pi^k_{\beta y}\delta_{\alpha x}\right)
-6(1+e)\delta_{\alpha \beta} \Pi^k_{xy}
\right]\right\}.
\end{eqnarray}
Here, the quantities $\nu$ and $\lambda$ are given, respectively, by~\cite{Garzo02,Santos04,DST16}
\beq
\label{nu}
\nu= \frac{\sqrt{2}\pi^{(d-1)/2}}{d(d+2)\Gamma\left(d/2\right)} (1+e)(2d+3-3e) n \sigma^{d-1} v_T,
\eeq
\beq
\label{nu'}
\lambda=\frac{\sqrt{2}\pi^{(d-1)/2}}{d\Gamma\left( d/2\right)} (1-e^2) n \sigma^{d-1}v_T ,
\eeq
where
$v_T=\sqrt{2T/m}$ is the thermal velocity.
Notice that, upon deriving the expression \eqref{Lambda_E:result} for $\overline{\Lambda}^E_{\alpha\beta}$, nonlinear terms in $\Pi_{\alpha\beta}^k$ have been neglected. As will be shown below, for \emph{dilute} gases ($\varphi\to 0$),
this approximation yields $P_{xx}^k\neq P_{yy}^k$ but $P_{yy}^k=P_{zz}^k$.
The latter equality disagrees with computer simulation results \cite{Tsao95,Chamorro15}.
The evaluation of $\overline{\Lambda}^E_{\alpha\beta}$ for dilute gases by retaining all the quadratic terms in the pressure tensor has been carried out in Ref.\ \cite{Chamorro15}.
The inclusion of these nonlinear corrections allows us to determine the normal stress differences in the plane orthogonal to the shear flow (e.g., $P_{yy}-P_{zz}$).
Nevertheless, since this difference is small, the expression \eqref{Lambda_E:result} can be considered accurate, even in the limit of dilute gases, as demonstrated in Ref.\ \cite{DST16}.
The set of coupled differential equations \eqref{partT}--\eqref{part_P_{xy}} can be written more explicitly when one takes into account the result \eqref{Lambda_E:result}:
\begin{eqnarray}
\label{d_tT}
\frac{\partial}{\partial t} T
&=& -\frac{2\dot\gamma}{d n}{\cal C}_d P^k_{xy}-\frac{2\dot\gamma}{d n}P_{xy}^c
+2\zeta (T_{\rm ex}-T) -g_0 \lambda T, \\
\label{d_tDT}
\frac{\partial}{\partial t} \Delta T&=&-\frac{2}{n}\dot\gamma \left(P_{xy}^k+P_{xy}^c\right)-(\nu g_0+2\zeta) \Delta T
, \\
\label{d_tdT}
\frac{\partial}{\partial t}\delta T&=&
-\frac{2}{n}\dot\gamma \left({\cal E}_d P_{xy}^k+P_{xy}^c\right)-(\nu g_0+2\zeta)\delta T
,\\
\label{d_tP_{xy}}
\frac{\partial}{\partial t} P_{xy}^k
&=& \dot\gamma n\left(\frac{d-1}{d} {\cal D}_d \Delta T
-\frac{d-2}{d}{\cal E}_d \delta T-{\cal C}_d T\right)
-\dot\gamma P_{yy}^c
-(\nu g_0+2\zeta) P^k_{xy}.
\end{eqnarray}
Here, we have introduced the (dimensionless) quantities
\beq
\label{c_C}
{\cal C}_d(e,\varphi)=1-\frac{2^{d-2}}{d+2}(1+e)(1-3e)\varphi g_0,
\eeq
\beq
\label{c_E}
{\cal E}_d(e,\varphi)= 1-\frac{2^{d}}{(d+2)(d+4)}(1+e)(d+1-3e)\varphi g_0,
\eeq
\beq
\label{c_D}
{\cal D}_d(e,\varphi)= 1-\frac{2^{d-1}(d-2)}{(d-1)(d+2)(d+4)}(1+e)(d+1-3e)\varphi g_0.
\eeq
In addition, upon deriving Eqs.\ \eqref{d_tT}--\eqref{d_tP_{xy}} we have used the relations
\beq
\label{vic6}
\Pi_{xx}^k=\frac{\Delta T}{d T}+\frac{d-2}{d}\frac{\delta T}{T}, \quad
\Pi_{yy}^k=\frac{1-d}{d}\frac{\Delta T}{T}+\frac{d-2}{d}\frac{\delta T}{T}, \quad
\Pi_{zz}^k=\frac{\Delta T}{d T}-\frac{2}{d}\frac{\delta T}{T}.
\eeq
To close the problem, one still needs to compute the collisional transfer contributions $P_{\alpha\beta}^c$ to the pressure tensor.
This can be achieved by inserting Grad's distribution \eqref{Grad} into Eq.\ \eqref{pressure:collisonal}.
However, this computation yields an intricate expression for $P_{\alpha\beta}^c$ that must be evaluated numerically.
Thus, in order to obtain simple and accurate results, only terms up to first order in the shear rate are retained
in the above calculation. The final result is (see the Appendix \ref{collision_stress})
\beq
\label{P_c:main_text}
P^c_{\alpha\beta}\approx
2^{d-2}(1+e) \varphi g_0 n T
\left[
\delta_{\alpha\beta}+\frac{2}{d+2}\Pi^k_{\alpha\beta}
-\dot\gamma^* \tau_T \frac{2\sqrt{2}}{\sqrt{\pi}(d+2)}
\left(
\delta_{\alpha x}\delta_{\beta y}+\delta_{\alpha y}\delta_{\beta x}\right)
\right],
\eeq
where
\beq
\label{dimless_time}
\dot\gamma^*\equiv \frac{\dot\gamma}{\zeta_0}, \quad \tau_T=\frac{\zeta_0 \sigma}{v_T}.
\eeq
The quantity $\zeta_0$ is defined in Eq.\ \eqref{zeta=zeta_0*R}.
Since $\zeta_0 \propto \sqrt{T_\text{ex}}$ and $v_T \propto \sqrt{T}$, the parameter $\tau_T$ measures the competing effect between
the environmental temperature $T_\text{ex}$ and the kinetic temperature $T$.
When the environmental temperature $T_{\rm ex}$ is much lower than the kinetic temperature, $\tau_T$ can be considered a small parameter and can be neglected in the expression \eqref{P_c:main_text} of the collisional contribution to the pressure tensor. In fact, as we will show below, the theoretical predictions compare better with simulations when we neglect this term ($\tau_T=0$) in Eq.\ \eqref{P_c:main_text}. In this context, one could argue that the results derived here could be relevant for situations where the stresses applied by the background fluid on the solid particles have a weak influence on the dynamics of grains.
It is important to remark that the use of the expression \eqref{P_c:main_text} is mainly motivated by the desire for analytic expressions for the rheological properties that cleanly unveil the impact of both the restitution coefficient $e$ and the (scaled) shear rate $\dot\gamma^*$ on momentum transport.
Of course, since the collisional transfer contribution $P_{\alpha\beta}^c$ is expected to depend strongly
on $\dot\gamma^*$ in the steady state~\cite{Santos04}, the truncation made in Eq.\ \eqref{P_c:main_text} is likely justified only for nearly elastic systems.
On the other hand, as we will show in Sec.\ IV, the good agreement found between theory and simulations for moderately strong dissipation
(i.e., $e=0.9$) justifies the use of the expression \eqref{P_c:main_text} beyond the elastic limit ($e \to 1$).
After a transient period one expects that the system achieves a steady state. In this steady state, the viscous heating term ($-\dot\gamma P_{xy}>0$) is exactly balanced by the cooling terms arising from the collisional dissipation and the friction between the background fluid and the solid particles.
One of the main goals of this paper is to determine the rheological properties of the gas-solid suspension in the steady state.
This will be carried out analytically in the next section by solving the set of coupled equations \eqref{partT}--\eqref{part_P_{xy}} when $\partial_t\to 0$.
\section{Rheology for steady simple shear flow}
\label{rheology}
As mentioned before, the rheology of gas-solid suspensions is determined in this section by solving the constitutive
equations \eqref{d_tT}--\eqref{d_tP_{xy}} in the steady state. First, in order to solve this set of equations, it is convenient to write it in dimensionless form. To do that, since $\zeta\propto \sqrt{T_{\rm ex}}R(\varphi)$, we introduce here the reduced quantities
\begin{equation}
\label{nu**}
\nu^{*}=\frac{\nu}{\sqrt{\theta}\zeta_0R(\varphi)}, \quad
{\lambda}^{*}=\frac{{\lambda}}{\sqrt{\theta}\zeta_0R(\varphi)},
\end{equation}
where $\theta\equiv T/T_{\rm ex}$.
In terms of the above quantities, in the steady state, Eqs.\ \eqref{d_tT}--\eqref{d_tP_{xy}} read
\beq
\label{steady1}
-\frac{2\dot \gamma^*}{d R}\left(\mathcal{C}_d \Pi_{xy}^k+P_{xy}^{c*}\right)=g_0\sqrt{\theta}\lambda^*+2(1-\theta^{-1}),
\eeq
\beq
\label{steady2}
-\frac{2\dot \gamma^*}{R}\left(\Pi_{xy}^k+P_{xy}^{c*}\right)=\left(2+g_0\sqrt{\theta}\nu^*\right)\frac{\Delta \theta}{\theta},
\eeq
\beq
\label{steady3}
-\frac{2\dot \gamma^*}{R}\left(\mathcal{E}_d\Pi_{xy}^k+P_{xy}^{c*}\right)=\left(2+g_0\sqrt{\theta}\nu^*\right)\frac{\delta \theta}{\theta},
\eeq
\beq
\label{steady4}
\frac{\dot\gamma^*}{R}P_{yy}^{c*}
+\left(2+\nu^*g_0 \sqrt{\theta}\right)\Pi_{xy}^k=-
\frac{\dot\gamma^*}{R} \left(\frac{d-1}{d}
{\cal D}_d\frac{\Delta \theta}{\theta}-\frac{d-2}{d}{\cal E}_d\frac{\delta \theta}{\theta}
-{\cal C}_d\right),
\eeq
where $P_{ij}^{c*}\equiv P_{ij}^c/nT$, $\Delta \theta\equiv \Delta T/T_\text{ex}$, and $\delta \theta\equiv \delta T/T_\text{ex}$.
The solution to Eqs.\ \eqref{steady1}--\eqref{steady3} can be written as
\beq
\label{steady5}
\Pi_{xy}^k=\frac{d R\left[2(1-\theta)-g_0 \theta^{3/2}\lambda^*\right]+2\sqrt{\frac{2}{\pi}}{\cal F}_d \tau_T\theta \dot\gamma^{*2}}
{2\left({\cal C}_d+{\cal F}_d\right) \theta \dot\gamma^*},
\eeq
\beq
\label{steady6}
\frac{\Delta \theta}{\theta}=\frac{2\sqrt{\frac{2}{\pi}}{\cal F}_d \left({\cal C}_d-1\right)\tau_T\theta \dot\gamma^{*2}+d\left(1+{\cal F}_d\right)R\left[2(\theta-1)+g_0\theta^{3/2}\lambda^*\right]}{R\left({\cal C}_d+{\cal F}_d\right)\theta\left(2+g_0 \sqrt{\theta}\nu^*\right)},
\eeq
\beq
\label{steady7}
\frac{\delta \theta}{\theta}=\frac{2\sqrt{\frac{2}{\pi}}{\cal F}_d\left({\cal C}_d-{\cal E}_d\right)\tau_T\theta \dot\gamma^{*2}+d\left({\cal E}_d+{\cal F}_d\right)R\left[2(\theta-1)+g_0\theta^{3/2}\lambda^*\right]}{R\left({\cal C}_d+{\cal F}_d\right)\theta\left(2+g_0 \sqrt{\theta}\nu^*\right)},
\eeq
where
\beq
\label{Fd}
{\cal F}_d=\frac{2^{d-1}(1+e)}{d+2}g_0 \varphi.
\eeq
Upon deriving Eqs.\ \eqref{steady5}--\eqref{steady7}, use has been made of Eq.\ \eqref{P_c:main_text} for the collisional transfer contribution to the pressure tensor. Finally, when Eqs.\ \eqref{steady5}--\eqref{steady7} are substituted into Eq.\ \eqref{steady4}, one obtains the following quartic equation in $\dot\gamma^*$:
\begin{equation}
\label{quartic}
R^4\mathscr{C}_4(e,\varphi,\theta) \dot\gamma^{*4}+R^2
\mathscr{C}_2(e,\varphi,\theta) \dot\gamma^{*2}+
\mathscr{C}_0(e,\varphi,\theta)=0.
\end{equation}
The coefficients $\mathscr{C}_4$, $\mathscr{C}_2$, and $\mathscr{C}_0$ are nonlinear functions of the restitution coefficient $e$, the volume fraction $\varphi$, and the (scaled) kinetic temperature $\theta$. Their explicit forms are given in the Appendix \ref{vicente}.
Although an explicit expression of $\theta$ in terms of $e$, $\varphi$, and $\dot\gamma^*$ is not known, the dependence of $\theta$ on the latter parameters can be implicitly obtained from the physical solution to Eq.\ \eqref{quartic} as $\dot\gamma^{*2}(\theta,e,\varphi)$. Once $\theta$ is known, the remaining rheological functions can be determined from Eqs.\ \eqref{steady5}--\eqref{steady7} in terms of $e$, $\varphi$, and $\dot\gamma^*$. In the low-density limit ($\varphi\to 0$), previous results \cite{DST16} obtained for dilute granular suspensions are recovered.
On the other hand, given that the collisional stress has been obtained by retaining terms up to the first order in the shear rate, for practical purposes it is more convenient to consider the limit $\tau_T \to 0$ but finite $e$ and $\varphi$ in the quartic equation \eqref{quartic}. In this case, we can write
\beq
\label{quartic.2}
\dot\gamma^*=\dot\gamma_0+\dot\gamma_1 \tau_T+\cdots,
\eeq
where the coefficients $\dot\gamma_0$ and $\dot\gamma_1$ can be easily obtained from the quartic equation \eqref{quartic} as
\beq
\label{quartic.3}
\dot\gamma_0
=\frac{1}{R}\sqrt{-\frac{\mathscr{C}_0}{\mathscr{C}_2^{(0)}}},
\eeq
\beq
\label{quartic.4}
\dot{\gamma}_1=-\frac{\mathscr{C}_2^{(1)}\dot\gamma_0+\mathscr{C}_4^{(0)}R^2 \dot\gamma_0^3}{2\mathscr{C}_2^{(0)}}.
\eeq
The quantities $\mathscr{C}_4^{(0)}$, $\mathscr{C}_2^{(0)}$, and $\mathscr{C}_2^{(1)}$ are defined in the Appendix \ref{vicente}. As mentioned before, an accurate and simple estimate of $\dot \gamma^*$ is provided by its zeroth-order form $\dot\gamma_0$.
In summary, for given values of the restitution coefficient and density, Eq.\ \eqref{quartic.2} gives the shear-rate dependence of the (scaled) kinetic temperature $\theta$. The (scaled) shear stress $P_{xy}^*\equiv P_{xy}/nT$ and the first ($\Delta T$) and second ($\delta T$) normal stress differences are then obtained by substituting $\theta(\dot\gamma^*)$ into Eqs.\ \eqref{steady1}--\eqref{steady3}, respectively.
The reliability of these theoretical results will be assessed in Sec.\ \ref{simulation} via a comparison against computer simulations.
\section{Comparison between theory and simulation}
\label{simulation}
\begin{figure}[htbp]
\includegraphics[width=140mm]{fig1.eps}
\caption{
(Color online)
Plots of the configuration of particles and
the displacement vectors (black arrows) during the interval $1.0/\zeta_0$
in cross sections for the shear rates (a) $\dot\gamma^*=1.0$, (b) $3.0$, and (c) $10.0$. The restitution coefficient is $e = 0.90$ while the density is $\varphi=0.3$. Notice that the uniform shear term is subtracted in the displacement vector.
We also show the temperature of the $i$-th particle, $T_i \equiv m(\bm{v}_i - \bm{u})^2/d$.
The color indicates the magnitude of $T_i/T - 1$.
}
\label{fig_snapshot}
\end{figure}
The goal of this section is to validate our theoretical results by using the EDLSHS.
We consider Lees-Edwards boundary conditions in a three-dimensional ($d=3$) periodic box~\cite{LE72,Scala12}.
Under these conditions, the Langevin equation \eqref{Langevin_eq} is equivalent to
Eqs.~\eqref{Enskog} and \eqref{J(V|f)_app}, when the molecular chaos ansatz is assumed.
Therefore, if we can approximate Eq.~\eqref{J(V|f)_app} by the Enskog collision operator \eqref{J(V|f)}, our theory gives a good approximation of Eq.~\eqref{Langevin_eq}.
Notice that it is difficult to adopt either the conventional event-driven simulation or the soft-core simulation for our problem. The existence of both
the inertia term $d\bm{p}/dt$ and the drag term proportional to $\zeta$ in Eq.~\eqref{Langevin_eq} makes the use of the conventional event-driven simulation difficult. In addition, a sudden increment of the viscosity in the vicinity of a DST gives rise to numerical difficulties in soft-core simulations.
Thus, to avoid the above difficulties, we adopt in this paper the EDLSHS. This is in fact a powerful simulator for hard spheres under the influence of the drag and the inertia terms with the aid of Trotter decomposition~\cite{Scala12}
(some details of the EDLSHS method are provided in the Appendix \ref{EDLSHS}).
In our simulations, we fix the number of grains $N=1000$ as well as the background fluid temperature $T_{\rm ex}^*\equiv T_{\rm ex}/(m\sigma^2\zeta_0^2)=0.01$.
Several volume fractions $\varphi$ are considered: $\varphi=0.01$, $0.05$, $0.10$, $0.20$, $0.30$, $0.40$, and $0.50$. The first density corresponds to a dilute suspension while the last can be considered a relatively dense suspension.
Notice that previous works \cite{LBD02,DHGD02,MGAL06,MDCPH11,MGH14} have shown that the results derived from
the Enskog equation are quite accurate for moderately dense systems (for instance, $\varphi \lesssim 0.2$ for $d=3$).
Two different values of the restitution coefficient $e$ are considered in the main text: $e=1$ (elastic grains) and $e=0.9$ (granular grains with moderate inelasticity).
More inelastic systems are considered in the Appendix \ref{echange} for the density $\varphi=0.3$.
All the rheological variables presented in this paper are measured after the system reaches a steady state (for $t>400/\zeta_0$). In addition, all the variables are averaged over 10 ensembles with different initial conditions and over 10 time intervals of duration $10/\zeta_0$ for each initial condition. We have confirmed that the fluctuations of the observables are sufficiently small.
Before considering the rheological properties of the gas-solid suspension, Fig.\ \ref{fig_snapshot} displays a snapshot of the configurations and displacements of particles in a cross section for each given set of parameters. In particular, the panels (a), (b) and (c) of Fig.~\ref{fig_snapshot} represent the quenched, intermediate and ignited states, respectively, for $e=0.9$ and $\varphi=0.3$.
Here, the intermediate state refers to a state between the quenched and ignited states.
Each panel of Fig.~\ref{fig_snapshot} displays only the particle configuration in a cross section.
Because the motion and configuration of the moderately dense gas seem to be uniform,
the use of the (homogeneous) Enskog kinetic equation \eqref{Enskog} for describing the simple shear flow is justified.
Figures \ref{fig1}--\ref{fig7} show the shear-rate dependence of the (scaled) kinetic temperature $\theta$
and the (dimensionless) nonlinear shear viscosity $\eta^*$ for $\varphi=0.01$ (Fig.\ \ref{fig1}), $\varphi=0.05$ (Fig.\ \ref{fig2}), $\varphi=0.10$ (Fig.\ \ref{fig3}), $\varphi=0.20$ (Fig.\ \ref{fig4}), $\varphi=0.30$ (Fig.\ \ref{fig5}), $\varphi=0.40$ (Fig.\ \ref{fig6}), and $\varphi=0.50$ (Fig.\ \ref{fig7}). According to Eq.\ \eqref{shear_viscosity}, the (scaled) viscosity $\eta^*$ is defined as
\beq
\label{visc_vic}
\eta^*\equiv \frac{\zeta_0 \eta}{n T_\text{ex}}=-\frac{\theta P_{xy}^*}{\dot\gamma^*},
\eeq
where $P_{ij}^*\equiv P_{ij}/nT$. The dashed lines in those plots correspond to the (perturbative) theoretical results obtained by retaining the first-order terms in $\tau_T$ [namely, when the (scaled) shear rate is approximated by $\dot\gamma^*=\dot\gamma_0+\dot\gamma_1 \tau_T$].
These results will be referred to here as the first-order theory. Analogously, the solid lines refer to the theoretical results obtained by assuming $\tau_T=0$ (zeroth-order theory).
We recall that the term proportional to $\dot\gamma^*\tau_T$ is the last term appearing in the expression \eqref{P_c:main_text} for $P_{\alpha\beta}^c$.
Moreover, the symbols in Figs.\ \ref{fig1}-\ref{fig7} correspond to the simulation results. Surprisingly, we observe that in general the zeroth-order results compare better with simulations than the first-order results.
On the other hand, as expected, both theories (zeroth- and first-order theories) are practically indistinguishable for dilute suspensions (see Figs.\ \ref{fig1} and \ref{fig2}).
Regarding the comparison with simulations, it is quite apparent that the zeroth-order theoretical results for the kinetic temperature $\theta$ agree well with simulations in the complete range of densities studied. This shows the accuracy of Grad's approximation in capturing the shear-rate dependence of $\theta$, even for high densities. On the other hand, although the agreement between theory and simulation for $\eta^*$ is still good for $\varphi \lesssim 0.4$, some quantitative discrepancies are observed for the highest density $\varphi=0.5$. It is interesting to note that the simulation data for the viscosity in the low shear (Newtonian) regime at the highest densities ($\varphi=0.50$ and $0.40$) seem to deviate from the theoretical predictions. We believe that this deviation originates from the crystallization that takes place at $\varphi_c=0.49$.
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig2.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.01$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig1}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig3.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.05$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig2}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig4.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.10$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig3}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig5.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.20$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig4}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig6.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.30$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig5}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig7.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.40$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig6}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig8.eps}
\caption{
(Color online)
Plots of $\theta$ (panel (a)) and $\eta^*$ (panel (b)) versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.50$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig7}
\end{figure}
As advanced in Sec. II, the evaluation of $P^c_{\alpha\beta}$ by including the complete nonlinear dependence on the shear rate yields a quite intricate expression that must be numerically integrated (see Eq.\ (3.14) of Ref.\ \cite{MGSB99}).
For this reason, a more simplified expression of $P^c_{\alpha\beta}$ has been obtained in Eq.\ \eqref{P_c:main_text} by considering only the linear contributions in the (scaled) shear rate $\dot\gamma^*$.
On the other hand, as the panel (a) of Fig.\ \ref{fig_tauT} shows, the term $\dot\gamma^*\tau_T \propto \dot\gamma/\sqrt{T}$
becomes small in the limit of large shear rates for perfectly elastic collisions ($e=1$). This means that the contribution to the
collisional shear stress coming from the term proportional to $\dot\gamma^*\tau_T$ in Eq.\ \eqref{P_c:main_text}
can be neglected in the case of dense gas-solid elastic suspensions.
Note that the parameter $\dot\gamma^*\tau_T$ first increases with increasing shear rate, reaches a maximum value, and then decreases as $\dot\gamma^*$ increases further. In fact, $\dot\gamma^*\tau_T$ tends asymptotically towards a constant value in the limit of large shear rates ($\dot\gamma^*\to \infty$) for inelastic collisions [see Fig.~\ref{fig_tauT} (a) for $\varphi=0.3$].
The maximum value of $\dot\gamma^*\tau_T$ (which occurs at the (scaled) shear rate $\dot\gamma^*=\dot\gamma_\tau$) is obtained from the condition
\begin{equation}
\left(\frac{\partial (\dot\gamma^*\tau_T)}{\partial \dot\gamma^*}\right)_{\dot\gamma^*=\dot\gamma_\tau}=0.
\end{equation}
The dependence of $\text{max}(\dot\gamma^*\tau_T)$ on the solid volume fraction $\varphi$ is plotted in the panel (b) of Fig.\ \ref{fig_tauT} for $e=1$ and $e=0.9$. It is quite apparent that $\text{max}(\dot\gamma^*\tau_T)$ decreases as $\varphi$ increases. Since the collisional contribution $\mathsf{P}^c$ to the shear stress decreases with increasing density, one can conclude that $\mathsf{P}^c$ displays a weak dependence on the parameter $\dot\gamma^*\tau_T$ in the complete range of $\varphi$, at least for not too strong inelasticity. This is likely the main reason why the approximation $\tau_T=0$ in the collisional stress gives good results for $\theta$ and $\eta^*$.
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig9.eps}
\caption{
(Color online) Plot of $\dot\gamma^*\tau_T$ versus the (scaled) shear rate $\dot\gamma^{*}$ (panel (a)) for $\varphi=0.30$ and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
Plot of the maximum value of $\dot\gamma^*\tau_T$ against the solid volume fraction (panel (b)) for $e=1$ and $e=0.9$ with the condition $\varphi\ge 0.02$.
}
\label{fig_tauT}
\end{figure}
Figures \ref{fig1}--\ref{fig7} clearly highlight that, for densities $\varphi\gtrsim 0.05$, both theory and simulation predict that $\theta$ and $\eta^*$ monotonically increase with $\dot\gamma^*$, from the Newtonian branch in the low shear regime to the Bagnoldian branch (for $e<1$) or to the branch in which the viscosity is proportional to $\dot\gamma^2$ (for $e=1$) in the high shear regime.
A similar CST for dense suspensions with $e=1$ has been observed in Ref.~\cite{Kawasaki14}.
On the other hand, these monotonic tendencies disagree with the shear thinning effect observed in dense disordered suspensions in the low shear regime.
This might suggest that the shear thinning could be suppressed if one used a monodisperse suspension.
On the other hand, the flow curves have S-shapes for the dilute suspension $\varphi=0.01$.
More precisely, the shear thickening is continuous (CST) above the critical volume fraction $\varphi_c\approx 0.0176$, while it is discontinuous (DST) for $\varphi<\varphi_c$.
This is an interesting finding that contrasts with typical experimental observations for dense suspensions.
Notice that a similar change from a discontinuous transition to a continuous transition for the kinetic temperature has already been reported in Refs.~\cite{BGK2016,Saha17,Sangani96}.
The detailed theoretical explanation of this discontinuous-continuous transition will be presented in the next section.
As occurs in driven granular fluids \cite{Garzo13b}, we also observe a weak influence of inelasticity on $\theta$ and $\eta^*$ for small shear rates. This is because the influence of the interstitial fluid (accounted for by the thermostat and the viscous damping term) on the dynamics of grains is more important than the effect of collisions in the low shear regime. On the other hand, the impact of inelasticity on the rheology increases with increasing shear rate.
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig10.eps}
\caption{
(Color online)
Plots of the stress ratio $\mu\equiv -P_{xy}/P$ versus the (scaled) shear rate $\dot\gamma^{*}$ for $\varphi=0.01$ (panel (a)) and $\varphi=0.30$ (panel (b)) and two different values of the restitution coefficient $e$: $e=1$ and $e=0.9$. The solid and dashed lines correspond to the (perturbative) theoretical results obtained in the zeroth-order (denoted by 0th in the legend) and first-order (denoted by 1st in the legend) in $\tau_T$, respectively. Symbols refer to computer simulation results.
}
\label{fig_mu1}
\end{figure}
Now, the results for the shear-rate dependence of the stress ratio $\mu\equiv -P_{xy}/P$ are presented in Fig.~\ref{fig_mu1}. The panel (a) of Fig.\ \ref{fig_mu1} shows the theoretical results for the dilute case ($\varphi=0.01$), where the theory gives almost perfect agreement with simulations.
The asymptotic expression of $\mu$ for large $\dot\gamma^*$ strongly depends on whether collisions are elastic or inelastic.
In particular, while the stress ratio reaches a plateau when $e<1$, $\mu$ tends to zero in the limit $\dot\gamma^*\to \infty$ when $e=1$ as explained in Ref.~\cite{DST16}.
The results for $\mu$ in denser situations are interesting (see the panel (b) of Fig.~\ref{fig_mu1} for $\varphi=0.30$)
because the first-order theory compares better with simulations than the simpler results with $\tau_T=0$.
This result contrasts with the findings of Fig.~\ref{fig5}, where the zeroth-order theory provides the best performance.
This change of behavior can be understood as follows: although the zeroth-order theory for both $P$ and $P_{xy}$ deviates
from the simulation data less than the first-order one, the opposite happens for the ratio $\mu=-P_{xy}/P$
due to a cancellation of errors. See the Appendix \ref{app:mu} for details on this point.
We consider now the normal stress differences $N_1$ and $N_2$. They are defined as
\beq
\label{normal.1}
N_1\equiv \frac{P_{xx}-P_{yy}}{P}, \quad N_2\equiv \frac{P_{yy}-P_{zz}}{P}.
\eeq
In terms of $\Delta \theta$ and $\delta \theta$, the expressions of $N_1$ and $N_2$ are
\beq
\label{normal.2}
N_1=\displaystyle\frac{1+\frac{2^{d-1}}{d+2}(1+e)\varphi g_0}{1+2^{d-2}(1+e)\varphi g_0}\frac{\Delta \theta}{\theta},
\eeq
\beq
\label{normal.3}
N_2 =\displaystyle\frac{1+\frac{2^{d-1}}{d+2}(1+e)\varphi g_0}{1+2^{d-2}(1+e)\varphi g_0}\frac{\delta \theta-\Delta \theta}{\theta}.
\eeq
Figure \ref{N1N2_comp} shows $N_1$ and $N_2$ versus $\dot\gamma^*$ for $e=0.9$ and two different solid volume fractions $\varphi$: $\varphi=0.01$ (dilute suspensions) and 0.1 (moderately dense suspension).
Only the theoretical results of the zeroth-order approximation are plotted. It is seen that the theory agrees well with simulations in this range of densities.
On the other hand, the deviations between theory and simulations become larger for higher densities.
Moreover, it must be stressed that the normal stress differences become large when the shear thickening takes place. In particular, such a tendency is clearly observed if we focus on $N_1$ in the vicinity of the critical shear rate of the DST for dilute suspensions.
\begin{figure}[htbp]
\begin{tabular}{cc}
\centering
\includegraphics[width=90mm]{fig11.eps}
\end{tabular}
\caption{
(Color online)
Plots of the scaled normal stress differences $N_1$ and $N_2$ versus the (scaled) shear rate $\dot\gamma^*$ from the kinetic theory and from the simulation for $e=0.9$ and two different values of the solid volume fraction: $\varphi=0.01$ and $0.10$. The solid and dashed lines are the theoretical results obtained by assuming $\tau_T=0$ while the symbols correspond to the simulation results.
}
\label{N1N2_comp}
\end{figure}
\section{Transition from discontinuous shear thickening (DST) to continuous shear thickening (CST)}
\label{DST}
The results discussed in Sec.\ \ref{simulation} have clearly provided evidence that the DST observed for dilute suspensions tends towards the CST as the density increases.
This transition can be analyzed as follows. For simplicity, we focus in this section on the discontinuous-continuous transition for the kinetic temperature between an ignited state and a quenched state. This transition is almost equivalent to the one found between the DST and the CST.
Because we are interested in a constant volume system, the condition for obtaining the critical point is given by
\begin{equation}
\label{critical_cd}
\left(\frac{\partial \dot\gamma^*}{\partial \theta} \right)_{e,\varphi}=0,
\quad {\rm and} \quad
\left(\frac{\partial^2 \dot\gamma^*}{\partial \theta^2} \right)_{e,\varphi}=0.
\end{equation}
This condition is analogous to that for the critical point of a second-order phase transition at equilibrium.
Let us now determine the critical point.
To this end, we consider the zeroth-order theory, for which
\beq
\label{zeroth}
\dot \gamma^{*2}=-\frac{1}{R(\varphi)^2}\frac{\mathscr{C}_0(e,\varphi,\theta)}{\mathscr{C}_2^{(0)}(e,\varphi,\theta)}.
\eeq
From Eq.~\eqref{zeroth}, the conditions \eqref{critical_cd} can be rewritten as
\begin{eqnarray}
\label{critical_cd2}
\left(\frac{\partial \mathscr{C}_0}{\partial \theta}\right)_{e,\varphi} \mathscr{C}_2^{(0)}
-\mathscr{C}_0\left(\frac{\partial \mathscr{C}_2^{(0)}}{\partial \theta}\right)_{e,\varphi} &=&0,
\\
\left(\frac{\partial^2 \mathscr{C}_0}{\partial \theta^2}\right)_{e,\varphi} \mathscr{C}_2^{(0)}
-\mathscr{C}_0\left(\frac{\partial^2 \mathscr{C}_2^{(0)}}{\partial \theta^2}\right)_{e,\varphi} &=&0.
\label{critical_cd2_2}
\end{eqnarray}
For a given value of the restitution coefficient $e$, the numerical solution to Eqs.~\eqref{critical_cd2} and \eqref{critical_cd2_2} provides the critical point.
In particular, for elastic collisions ($e=1$), the critical point is given by $\varphi_{\rm c}\simeq 0.0176$, $\theta_{\rm c}\simeq38.4$, and $\dot\gamma_{\rm c}\simeq4.39$.
As the panel (a) of Fig.\ \ref{critical} shows, Eqs.\ \eqref{critical_cd2} and \eqref{critical_cd2_2} can be seen as analogous to the phase coexistence and spinodal lines at equilibrium phase transitions, respectively, in the phase space of $(\theta, \varphi,\dot\gamma^*)$. Because of this analogy, we will employ the above terminology for the later discussion.
To confirm the validity of our analysis, we have also performed the EDLSHS simulations in the vicinity of the critical point for the case $e=1$.
We have gradually changed the shear rate from $\dot\gamma^*_0=0.400 (0.826)$ to sequentially increasing (decreasing) values as $\dot\gamma^*=\dot\gamma^*_0, a\dot\gamma^*_0, a^2\dot\gamma^*_0, \cdots, a^{63}\dot\gamma^*_0=0.826 (0.400)$ with the rate $a=10^{0.005}\simeq1.0116$.
We have verified that the coexistence of an ignited state and a quenched state in our simulations lies on the phase coexistence line, as shown in the panel (a) of Fig.~\ref{critical}.
The intersection of the two lines corresponds to the critical point.
Notice that the spinodal line is located outside the phase coexistence line in our case, which is different from equilibrium situations.
This difference might be a universal feature of non-equilibrium bifurcations because models of traffic flows have similar structures~\cite{Komatsu95,Hayakawa98}.
Near the critical point, the coexistence curve relating $\theta-\theta_c$ and $\varphi_c-\varphi$ for $\varphi<\varphi_c$ is given by
\begin{equation}
\label{theory}
\theta-\theta_c=\pm C \sqrt{\varphi_c-\varphi} ,
\end{equation}
where $C=\left\{6(\partial^2 \dot\gamma^*/\partial\theta\partial\varphi)/(\partial^3\dot\gamma^* /
\partial \theta^3)\right\}_{\varphi_{\rm c},\theta_{\rm c}}^{1/2}\simeq 750$ for $e=1$.
The theoretical curve of Eq.~\eqref{theory} is drawn as the solid (red) line in the panel (b) of Fig.~\ref{critical}.
This analytical prediction captures qualitatively well the numerical result obtained from Eqs.~\eqref{critical_cd2}
and \eqref{critical_cd2_2} (the dotted line in Fig.~\ref{critical}).
\begin{figure}[htbp]
\includegraphics[width=150mm]{fig12.eps}
\caption{
(Color online) Panel (a): Plots of the phase coexistence line $\partial\dot\gamma^*/\partial \theta=0$ (solid lines) and the spinodal line $\partial^2\dot\gamma^*/\partial \theta^2=0$ (dashed line).
We also plot the results of our simulation (open circles), where the temperature discontinuously increases (decreases) when we gradually increase (decrease) the shear rate.
Notice that the phase coexistence curve does not exist for $\varphi>\varphi_c$.
Panel (b): Plots of the projection of the phase coexistence line and the spinodal line onto the $(\varphi,\theta)$-plane.
}
\label{critical}
\end{figure}
\section{Discussion and conclusion}
\label{discussion}
The Enskog kinetic equation for inelastic hard spheres has been considered in this paper as the starting point to study the rheology of gas-solid suspensions under simple shear flow.
The effect of the interstitial fluid on the dynamics of solid particles has been modeled through an external force composed of a viscous drag force plus a stochastic Langevin-like term.
While the first term models the friction of grains on the gas phase, the latter accounts for thermal fluctuations.
Two independent but complementary routes have been employed to determine the non-Newtonian transport properties.
First, the Enskog equation has been approximately solved by means of Grad's moment method.
Given that the heat flux vanishes in the simple shear flow state, only the kinetic pressure tensor has been retained in the trial distribution function.
Then, the analytical results for the kinetic temperature, the viscosity, the stress ratio, and the normal stress differences have been compared against computer simulations based on the event-driven Langevin simulation method.
The main goal of the paper has been to determine how the flow curve (stress-strain rate relation) depends on the density (or volume fraction) of the confined gases.
One of the limitations of the theory is that the collisional moment $\overline{\Lambda}_{\alpha\beta}^E$ [defined by Eq.\ \eqref{over_Lambda}] has been evaluated by neglecting nonlinear terms in the kinetic pressure tensor $\Pi_{\alpha\beta}^k$.
For dilute gases ($\varphi\to 0$), this simplification leads to the absence of normal stress differences in the shear flow plane ($P_{xx}^k= P_{yy}^k$). However, although this equality differs from the results found in computer simulations \cite{Tsao95,Chamorro15}, the difference $P_{xx}^k-P_{yy}^k$ observed in simulations is in general very small.
As a consequence, the importance of this approximation seems to be not relevant for the calculations carried out in the present paper. Another simplification of our theory is that one of the contributions to the collisional stress $P_{xy}^c$ has been determined by neglecting nonlinear terms in the shear rate [see the third term on the right hand side of Eq.\ \eqref{P_c:main_text}].
On the other hand, the comparison with simulations has shown that the reliability of the theory is clearly improved when this term is neglected (zeroth-order theory).
The theoretical results derived in this paper from Grad's method indicate that in general the Enskog theory describes well the rheology of sheared suspensions.
In particular, the agreement found between theory and simulations for the shear viscosity clearly shows that the shear thickening effect is well captured by the Enskog kinetic equation.
Moreover, in contrast to typical experimental observations for dense suspensions, both theory and simulations have confirmed that there is a transition from the DST in dilute suspensions to the CST for dense suspensions at relatively low density.
This finding is consistent with the results reported in previous works~\cite{DST16,Tsao95,BGK2016,Sangani96,Santos04,Saha17} where only the transition between the quenched state and the ignited state for the kinetic temperature was analyzed.
As advanced before, in spite of the fact that our theoretical results rely on some approximations, it must be stressed that the theoretical predictions for the shear-rate dependence of the shear viscosity compare well with simulations for moderately dense suspensions (for instance, densities $\varphi$ smaller than or equal to 0.3).
This is the expected result since several previous works \cite{LBD02,DHGD02,MGAL06,MDCPH11,MGH14} have confirmed the reliability of the Enskog equation in this range of densities.
The disagreement between theory and simulation for denser cases could partly originate from the incomplete treatment of the collisional stress ${\sf P}^c$, where our expression is the same as the one obtained by Garz\'{o} and Dufty~\cite{Garzo99} from the first-order Chapman-Enskog solution. Given that the latter theory is not applicable in the high shear-rate regime, it is obvious that the present results could be refined by considering higher-order terms in the shear rate in the expression of the collisional stress.
This point is one of the important tasks for the near future.
Typical DSTs observed in experiments and simulations for dense suspensions ($\varphi>0.5$) should be the result of mutual friction between grains.
Although the Enskog kinetic equation is not applicable to such dense suspensions, an extension of Grad's moment method to dense systems might be applicable for the explanation of the DST of frictional grains~\cite{Suzuki17},
which might be better than the previous theory of dense granular liquids~\cite{Suzuki15}.
This study will be reported elsewhere~\cite{Saitoh17} (see also Ref.~\cite{Saitoh16}).
The Langevin equation \eqref{Langevin_eq} employed in our study assumes that the gravity force is perfectly balanced by the drag force exerted by the air flow. This assumption is only true if the homogeneous state is stable. On the other hand, the simple shear flow state becomes unstable above the critical shear rate.
If the homogeneous state is unstable, one would need to consider the time evolution of the local structure as well as an inhomogeneous drag.
The fact that the restitution coefficient $e$ is assumed to be constant has allowed us to obtain quite explicit results.
However, this hypothesis disagrees with experimental observations \cite{BHL84} and with the mechanics of particle collisions \cite{RPBS99}; in reality, the coefficient $e$ depends on the impact velocity. The simplest model that takes into account dissipative material deformation is the model of viscoelastic particles \cite{BP00,BP03,DBPB13}.
In spite of the mathematical difficulties involved in this viscoelastic model, some progress has been made in the past few years \cite{BP00,BP03,DBPB13} in the limit of small inelasticity for dilute granular gases. The extension of the present results to a velocity-dependent restitution coefficient is beyond the scope of this paper. In addition,
since the transition from DST to CST for elastic suspensions is qualitatively similar to that of inelastic suspensions (except in the high shear asymptotic region), we think that the impact of the velocity dependence of $e$ on the above transition will not be relevant for such a problem.
As shown in the Appendix \ref{echange}, since the theoretical predictions deviate from simulation results for strong inelasticity, the reliability of our theory is essentially limited to moderate inelasticities.
Thus, as a future task, we plan to improve our theoretical treatment for highly inelastic cases.
Finally, it is important to note that the monodisperse system analyzed here crystallizes, at least in the region of low shear rates, for densities $\varphi>0.49$. Therefore, one should study a sheared polydisperse system to prevent crystallization. This is also an interesting problem to be carried out in the future.
\acknowledgements
We thank Satoshi Hayakawa, Koshiro Suzuki, Takeshi Kawasaki, Michio Otsuki, and Kuniyasu Saitoh for their useful comments.
The research of HH and ST has been partially supported by the Grant-in-Aid of MEXT for Scientific Research (Grant No. 16H04025) and the YITP activity (YITP-W-16-14).
The research of VG has been supported by the Spanish Government through Grant No. FIS2016-76359-P, partially financed by FEDER funds and by the Junta de Extremadura (Spain) through Grant No. GR15104.
\section{Charm production at the \lhcb experiment}
\label{sec:LHCb}
\lhcb\cite{Alves:2008zz}, the dedicated flavor experiment at CERN\xspace's Large
Hadron Collider (LHC\xspace), is the only LHC\xspace experiment with a broad charm
physics program including measurements of charm \ensuremath{C\!P}\xspace violation (\ensuremath{C\!PV}\xspace) and
$\ensuremath{\D^0}\xspace$-$\ensuremath{\Dbar^0}\xspace$ mixing.
The cross-section to produce charm hadrons into the \lhcb acceptance
in the LHC's $\sqrt{s} = 7\,\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$ proton-proton collisions is
\mbox{$1.23 \pm 0.19\,\ensuremath{\rm \,mb}\xspace$}, creating a huge potential data
set\@.\cite{LHCb-CONF-2010-013}
The \lhcb trigger system has a flexible design that includes
charm triggers so that this prolific production can be exploited.
\lhcb recorded a total integrated luminosity of \ensuremath{37.7\,\ensuremath{\mbox{\,pb}^{-1}}\xspace}\xspace in 2010.
The charm samples collected in 2010 are already large enough for \lhcb to
be competitive in several measurements.
With an expectation of more than $1\,\ensuremath{\mbox{\,fb}^{-1}}\xspace$, the 2011-12 run will yield
even larger samples.
Because the LHC\xspace collides protons, there may be asymmetries in the production
of charm and anti-charm hadrons.
\lhcb has measured the production asymmetry of \ensuremath{\D^0}\xspace/\ensuremath{\Dbar^0}\xspace using \ensuremath{37\,\ensuremath{\mbox{\,pb}^{-1}}\xspace}\xspace of
2010 data\@.\cite{LHCb-CONF-2011-023}
The analysis uses both untagged samples of reconstructed \ensuremath{\D^0}\xspace decays
and tagged samples that are reconstructed as the product of a
$\ensuremath{\D^{*+}}\xspace \to \ensuremath{\D^0}\xspace \ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^+$ decay.
In the tagged sample, the initial flavor of the \ensuremath{\PD}\xspace is
identified (tagged) as \ensuremath{\D^0}\xspace or \ensuremath{\Dbar^0}\xspace by the charge of the tagging slow
pion, $\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^\pm$.
In both samples, \ensuremath{\D^0}\xspace is reconstructed in the final states
$\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace$, $\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$, and $\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace$.
For a final state $f$, the raw observed untagged asymmetry, $\ensuremath{{\cal A}_{\mathrm{Raw}}}\xspace(f)$, and
the raw observed \ensuremath{\D^*}\xspace-tagged asymmetry, $\ensuremath{\ARaw^{\ast}}\xspace(f)$, can be factored
into components:
\begin{eqnarray}
\ensuremath{{\cal A}_{\mathrm{Raw}}}\xspace(f) & \equiv & \frac{N(\ensuremath{\D^0}\xspace \to f) - N(\ensuremath{\Dbar^0}\xspace \to \bar{f})}{N(\ensuremath{\D^0}\xspace \to f) + N(\ensuremath{\Dbar^0}\xspace \to \bar{f})} = \ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(f) + \ensuremath{{\cal A}_{\mathrm{D}}}\xspace(f) + \ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace),\label{eq:cpv:comp:untag} \\
\ensuremath{\ARaw^{\ast}}\xspace(f) & \equiv & \frac{N(\ensuremath{\D^{*+}}\xspace \to \ensuremath{\D^0}\xspace(f)\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^+) - N(\ensuremath{\D^{*-}}\xspace \to \ensuremath{\Dbar^0}\xspace(\bar{f})\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^-)}{N(\ensuremath{\D^{*+}}\xspace \to \ensuremath{\D^0}\xspace(f)\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^+) + N(\ensuremath{\D^{*-}}\xspace \to \ensuremath{\Dbar^0}\xspace(\bar{f})\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^-)}\nonumber \\
& = & \ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(f) + \ensuremath{{\cal A}_{\mathrm{D}}}\xspace(f) + \ensuremath{{\cal A}_{\mathrm{D}}}\xspace(\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace) + \ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^{*+}}\xspace),\label{eq:cpv:comp:tag}
\end{eqnarray}
where the $N(\mbox{decay})$ are the numbers of reconstructed decays,
$\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(f)$ is the \ensuremath{C\!P}\xspace asymmetry of the \ensuremath{\D^0}\xspace decay (further studied in
Section~\ref{sec:cpv}), $\ensuremath{{\cal A}_{\mathrm{D}}}\xspace(f)$ and
$\ensuremath{{\cal A}_{\mathrm{D}}}\xspace(\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace)$ are the detection asymmetries of $f$ and $\ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^{\pm}$, and
$\ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace)$ and $\ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^{*+}}\xspace)$ are the production asymmetries.
For the self-conjugate final states $\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$ and $\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace$,
$\ensuremath{{\cal A}_{\mathrm{D}}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace) = \ensuremath{{\cal A}_{\mathrm{D}}}\xspace(\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace) = 0$.
Therefore, the remaining detection asymmetries can be canceled by
considering combinations of raw asymmetries,
\begin{eqnarray}
\ensuremath{{\cal A}_{\mathrm{Raw}}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace) - \ensuremath{\ARaw^{\ast}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace) + \ensuremath{\ARaw^{\ast}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace) & = & \ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace) + \ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace),\label{eq:prod:kk} \\
\ensuremath{{\cal A}_{\mathrm{Raw}}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace) - \ensuremath{\ARaw^{\ast}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace) + \ensuremath{\ARaw^{\ast}}\xspace(\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace) & = & \ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace) + \ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace).\label{eq:prod:pp}
\end{eqnarray}
Using the \hfag world averages of $\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace)$ and
$\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace)$\cite{Asner:2010qjmod} and a Bayesian minimizer to optimally
solve this over-constrained system for $\ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace)$, we measure a mean value of
$\ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace) = \left[-1.08 \pm 0.32\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.12\,\ensuremath{\mathrm{(syst)}}\xspace \right]\%$ in
\lhcb's acceptance.
\section{Time-integrated \ensuremath{C\!PV}\xspace in \ensuremath{\PD}\xspace mesons}
\label{sec:cpv}
\lhcb is searching for evidence of new sources of \ensuremath{C\!P}\xspace asymmetry in the
time-integrated decay rates of \ensuremath{\PD}\xspace mesons.
The time-integrated \ensuremath{C\!P}\xspace asymmetry, $\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(f)$, is conventionally defined as
\begin{equation}
\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(f) = \frac{\Gamma(\ensuremath{\PD}\xspace \to f) - \Gamma(\ensuremath{\Dbar}\xspace \to \bar{f})}{\Gamma(\ensuremath{\PD}\xspace \to f) + \Gamma(\ensuremath{\Dbar}\xspace \to \bar{f})}
\label{eq:cpv:acp}
\end{equation}
for a given final state $f$.
For \ensuremath{\D^0}\xspace decays, $\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace$ may have contributions from both indirect and direct
\ensuremath{C\!PV}\xspace\@.
In the Standard Model, \ensuremath{C\!PV}\xspace in the charm system is highly suppressed.
Indirect \ensuremath{C\!PV}\xspace is negligibly small and should be common for all decay modes.
Direct \ensuremath{C\!PV}\xspace is expected to be $\mathcal{O}(10^{-3})$ or less and to vary
among decay modes\@.\cite{Bianco:2003vb}
In \ensuremath{C\!PV}\xspace searches in singly Cabibbo suppressed decays,
such as $\ensuremath{\D^0}\xspace \to \ensuremath{\kaon^-}\xspace \ensuremath{\kaon^+}\xspace$, participation of well-motivated new physics
(NP) particles in the interfering penguin amplitude
could enhance direct \ensuremath{C\!PV}\xspace up to $\mathcal{O}(10^{-2})$\@.\cite{Grossman:2006jg}
\lhcb recently presented its first time-integrated \ensuremath{C\!PV}\xspace
measurement with decays $\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace \ensuremath{\kaon^+}\xspace$ and
$\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\pion^-}\xspace \ensuremath{\pion^+}\xspace$\@.\cite{LHCb-CONF-2011-023}
The analysis uses the tagged samples of $\ensuremath{\D^{*+}}\xspace \rightarrow \ensuremath{\D^0}\xspace \ensuremath{\ensuremath{\Ppi}\xspace_{\mathrm{slow}}}\xspace^+$
decays also used in the measurement of $\ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace)$ (Section~\ref{sec:LHCb}).
Using Equation~\ref{eq:cpv:comp:tag}, the difference in $\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(f)$ for
$f = \ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$ and $\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace$ can be measured precisely with the
production and detection asymmetries canceling exactly:
\begin{eqnarray}
\Delta\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace & \equiv & \ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace) - \ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace(\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace),\label{eq:cpv:delacp:def} \\
& = & \ensuremath{\ARaw^{\ast}}\xspace(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace) - \ensuremath{\ARaw^{\ast}}\xspace(\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace).\label{eq:cpv:delacp:raw}
\end{eqnarray}
In \ensuremath{37\,\ensuremath{\mbox{\,pb}^{-1}}\xspace}\xspace of \lhcb 2010 data, we measure
$\Delta\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace = \left[-0.28 \pm 0.70\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.25\,\ensuremath{\mathrm{(syst)}}\xspace \right]\%$,
consistent with zero.
This result is approaching the sensitivity of \ensuremath{C\!PV}\xspace measurements performed by
the \ensuremath{\PB}\xspace-factories in these decay modes,\cite{Aubert:2007if,Staric:2008rx}
but not yet at the level of CDF's recent measurement\@.\cite{CDF-10296}
Due to differential proper-time acceptance between the $\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$ and
$\ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace$ samples, the measured value of $\Delta\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace$ includes a residual
$10\%$ of the mode-independent indirect \ensuremath{C\!P}\xspace asymmetry.
No limiting systematic bias has been identified in the method, so
future iterations of the measurement with the much larger data set
anticipated for 2011--2012 will be significantly more precise.
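As a worked illustration of the statistical ingredients (a toy sketch with hypothetical yields, not the values entering the published result), each raw asymmetry carries a binomial uncertainty, and in the difference the production and detection asymmetries cancel while the statistical errors add in quadrature:
\begin{verbatim}
import math

def raw_asymmetry(n_d0, n_d0bar):
    """Raw asymmetry and its binomial uncertainty from signal yields."""
    n = n_d0 + n_d0bar
    a = (n_d0 - n_d0bar) / n
    return a, math.sqrt((1.0 - a * a) / n)

# Hypothetical tagged signal yields for the two modes.
a_kk, s_kk = raw_asymmetry(58000, 59000)
a_pp, s_pp = raw_asymmetry(21000, 21300)

delta_acp = a_kk - a_pp         # Eq. (delacp:raw)
sigma = math.hypot(s_kk, s_pp)  # errors add in quadrature
print(f"Delta ACP = {100 * delta_acp:.2f} +/- {100 * sigma:.2f} %")
\end{verbatim}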
\section{Time-dependent \ensuremath{C\!PV}\xspace and mixing measurements in \ensuremath{\D^0}\xspace}
\label{sec:mix}
The conventional parameterization of charm mixing is fully explained
elsewhere\@.\cite{Nakamura:2010zzi:D0mix}
Briefly, the mass eigenstates of the neutral \ensuremath{\PD}\xspace system $\ensuremath{\PD}\xspace_1$ and $\ensuremath{\PD}\xspace_2$
are expressed as normalized superpositions of the flavor eigenstates \ensuremath{\D^0}\xspace
and \ensuremath{\Dbar^0}\xspace, $\ensuremath{\PD}\xspace_{1,2} = p \ensuremath{\D^0}\xspace \pm q \ensuremath{\Dbar^0}\xspace$, where $p$ and $q$ are complex
scalars, $|p|^2 + |q|^2 = 1$.
The relative argument of $q$ and $p$ is conventionally chosen equal to
the phase that parameterizes \ensuremath{C\!PV}\xspace in the interference between mixing and
direct decays, $\arg\frac{q}{p} = \phi$.
\ensuremath{C\!P}\xspace is violated in the mixing if $|\frac{q}{p}| \ne 1$ and in
the interference between mixing and decay if $\phi \ne 0$.
Letting $m_{1,2}$ and $\Gamma_{1,2}$ represent respectively the masses and
widths of $\ensuremath{\PD}\xspace_{1,2}$, mixing is parameterized by
the mass difference $x \equiv \frac{m_1 - m_2}{\Gamma}$
and the width difference $y \equiv \frac{\Gamma_1 - \Gamma_2}{2 \Gamma}$
where $\Gamma \equiv \frac{1}{2}\left(\Gamma_1 + \Gamma_2 \right)$.
\lhcb is working towards its first measurements of \ensuremath{C\!PV}\xspace and mixing in
\ensuremath{\D^0}\xspace-\ensuremath{\Dbar^0}\xspace with lifetime ratios of \mbox{$\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace \ensuremath{\pion^+}\xspace$} and
\mbox{$\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace \ensuremath{\kaon^+}\xspace$} decays.
The lifetime of decays to the \ensuremath{C\!P}\xspace-even eigenstate $\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$, $\tau(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace)$,
is related to the lifetime of the flavor-specific final state $\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace$,
$\tau(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace)$, by the mixing parameters:
\begin{equation}
\ycp \equiv \frac{\tau(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace)}{\tau(\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace)} - 1 = y \cos\phi
- \frac{1}{2}\left(\left|\frac{q}{p}\right| - \left|\frac{p}{q}\right|\right) x \sin\phi.
\label{eq:mix:ycp}
\end{equation}
If \ensuremath{C\!P}\xspace is conserved, $\ycp = y$.
The asymmetry in the lifetimes of \ensuremath{\D^0}\xspace and \ensuremath{\Dbar^0}\xspace decays to the \ensuremath{C\!P}\xspace eigenstate
$\ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$ is related to the \ensuremath{C\!PV}\xspace and mixing parameters by
\begin{equation}
\agamma \equiv \frac{\tau(\ensuremath{\Dbar^0}\xspace \to \ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace) - \tau(\ensuremath{\D^0}\xspace \to \ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace)}{\tau(\ensuremath{\Dbar^0}\xspace \to \ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace) + \tau(\ensuremath{\D^0}\xspace \to \ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace)}
= \frac{1}{2}\left(\left|\frac{q}{p}\right| - \left|\frac{p}{q}\right|\right) y \cos\phi - x \sin\phi.
\label{eq:mix:Agamma}
\end{equation}
\ensuremath{\D^*}\xspace-tagged candidates are used in the measurement of \agamma, while \ycp
can be measured with the larger untagged sample.
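The following small sketch (our illustration; the numerical inputs are arbitrary) evaluates Equations~\ref{eq:mix:ycp} and~\ref{eq:mix:Agamma} and checks the \ensuremath{C\!P}\xspace-conserving limit, in which $\ycp = y$ and $\agamma = 0$:
\begin{verbatim}
import math

def ycp_agamma(x, y, qp_mag, phi):
    """Evaluate y_CP and A_Gamma from the mixing/CPV parameters."""
    amp = 0.5 * (qp_mag - 1.0 / qp_mag)
    ycp = y * math.cos(phi) - amp * x * math.sin(phi)
    agamma = amp * y * math.cos(phi) - x * math.sin(phi)
    return ycp, agamma

# CP conservation (|q/p| = 1, phi = 0): y_CP = y and A_Gamma = 0.
print(ycp_agamma(x=0.005, y=0.007, qp_mag=1.0, phi=0.0))
\end{verbatim}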
\begin{figure}[hpb]
\subfloat[$\ensuremath{\D^{*+}}\xspace \to \ensuremath{\D^0}\xspace \ensuremath{\pion^+}\xspace$]{\label{fig:mix:deltam:Dz}\psfig{file=Spradlin_MoriondQCD2011Proc-deltam_D0.pdf,width=0.5\textwidth}}%
\subfloat[$\ensuremath{\D^{*-}}\xspace \to \ensuremath{\Dbar^0}\xspace \ensuremath{\pion^-}\xspace$]{\label{fig:mix:deltam:Dzb}\psfig{file=Spradlin_MoriondQCD2011Proc-deltam_D0bar.pdf,width=0.5\textwidth}}
\caption{Distributions of the mass difference, $\Delta m$, between
\ensuremath{\D^0}\xspace(\ensuremath{\Dbar^0}\xspace) candidates and their parent \ensuremath{\D^{*+}}\xspace(\ensuremath{\D^{*-}}\xspace) candidates for
decays $\ensuremath{\D^{*+}}\xspace \rightarrow \ensuremath{\D^0}\xspace \ensuremath{\pion^+}\xspace$,
$\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace \ensuremath{\pion^+}\xspace$ (c.c.).
\label{fig:mix:deltam}}
\end{figure}
In the 2010 run, we collected a sample of untagged
\mbox{$\ensuremath{\D^0}\xspace \to \ensuremath{\kaon^-}\xspace \ensuremath{\kaon^+}\xspace$} decays comparable in size to those of recent
Belle and BaBar measurements\@.\cite{Staric:2007dt,Auber:2009ck}
In 2011--2012, \lhcb expects to have the world's largest charm sample in this
mode.
The measurements of \ycp and \agamma are currently blinded.
As a test, the \agamma analysis was applied to a subset of the
2010 data in the right-sign (RS) control channel $\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace \ensuremath{\pion^+}\xspace$.
Figure~\ref{fig:mix:deltam} shows the distributions of the differences
\mbox{$\Delta m $}\xspace between the masses of the reconstructed \ensuremath{\D^0}\xspace candidates and their
parent \ensuremath{\D^{*+}}\xspace candidates for the RS validation sample.
The purity of the sample is better than $90\%$.
Since the most powerful signal/background discriminants in hadronic
collisions exploit the relatively long lifetime of \ensuremath{\PD}\xspace mesons,
the trigger and selection criteria introduce a proper-time acceptance for
the reconstructed \ensuremath{\D^0}\xspace decays.
Unbiased time-dependent measurements require careful treatment of the
acceptance effects of these discriminants.
\lhcb can precisely evaluate the proper-time acceptance on an event-by-event
basis with the swimming method\@.\cite{Gersabeck:1217589,Aaltonen:2010ta}
Statistical separation of \ensuremath{\D^0}\xspace mesons produced at the primary interaction
vertex from those produced in the decays of $b$-hadrons is accomplished
using the impact parameter (IP) $\chi^2$ of the \ensuremath{\D^0}\xspace.
The event-by-event acceptance and the IP $\chi^2$ are incorporated into an
unbinned multi-dimensional likelihood fit to measure the lifetimes.
Figure~\ref{fig:mix:t} shows the proper-time distributions for the tagged
RS validation sample.
The lines on the plots are the fitted distributions from the unbinned
multi-dimensional likelihood fit.
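To illustrate the idea of a per-event acceptance in the simplest case, here is a toy sketch (ours, not the analysis code: the swimming method in general yields a full acceptance function per event, possibly with several turn-on and turn-off points, whereas the toy assumes a single turn-on time per event). For an exponential decay truncated, event by event, at its own turn-on time, the maximum-likelihood lifetime has a closed form:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
tau_true = 0.41  # toy lifetime in ps
t = rng.exponential(tau_true, size=200_000)
t_min = rng.uniform(0.0, 0.4, size=t.size)  # per-event turn-on time
acc = t > t_min                             # trigger/selection acceptance
t, t_min = t[acc], t_min[acc]

# For an exponential pdf truncated per event at t_min, the negative
# log-likelihood is sum((t - t_min)/tau + log(tau)), so the MLE is:
tau_hat = np.mean(t - t_min)
print(f"fitted lifetime: {tau_hat:.4f} ps (true: {tau_true} ps)")
\end{verbatim}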
\begin{figure}[ht]
\subfloat[\ensuremath{\D^0}\xspace]{\label{fig:mix:t:Dz}\psfig{file=Spradlin_MoriondQCD2011Proc-propertime_D0.pdf,width=0.5\textwidth}}%
\subfloat[\ensuremath{\Dbar^0}\xspace]{\label{fig:mix:t:Dzb}\psfig{file=Spradlin_MoriondQCD2011Proc-propertime_D0bar.pdf,width=0.5\textwidth}}
\caption{Distributions of the reconstructed proper time of \ensuremath{\D^0}\xspace(\ensuremath{\Dbar^0}\xspace)
candidates for decays $\ensuremath{\D^{*+}}\xspace \rightarrow \ensuremath{\D^0}\xspace \ensuremath{\pion^+}\xspace$,
$\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace \ensuremath{\pion^+}\xspace$ (c.c.).
The line on each plot is the result of a likelihood fit incorporating
per-event acceptance distributions computed with the swimming method.
\label{fig:mix:t}}
\end{figure}
\section{Summary}
\label{sec:sum}
\lhcb had a successful year of data taking in 2010, collecting \ensuremath{37.7\,\ensuremath{\mbox{\,pb}^{-1}}\xspace}\xspace of
$\ensuremath{\Pp}\xspace\proton$ collisions at $\sqrt{s} = 7\,\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$.
We observe an asymmetry in \ensuremath{\D^0}\xspace production of
$\ensuremath{{\cal A}_{\mathrm{P}}}\xspace(\ensuremath{\D^0}\xspace) = [-1.08 \pm 0.32\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.12\,\ensuremath{\mathrm{(syst)}}\xspace]\%$, which is
the first evidence for an asymmetry in heavy flavor production at the LHC\xspace.
In our first precision charm \ensuremath{C\!PV}\xspace measurement with this
data, the difference between the time-integrated \ensuremath{C\!P}\xspace asymmetries of
$\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\kaon^-}\xspace\ensuremath{\kaon^+}\xspace$ and $\ensuremath{\D^0}\xspace \rightarrow \ensuremath{\pion^-}\xspace\ensuremath{\pion^+}\xspace$ decays is measured
to be $\Delta\ensuremath{{\cal A}_{\ensuremath{C\!P}\xspace}}\xspace = [-0.28 \pm 0.70\,\ensuremath{\mathrm{(stat)}}\xspace \pm 0.25\,\ensuremath{\mathrm{(syst)}}\xspace]\%$.
A broad program of charm physics is underway and
further results in more channels are soon to follow.
With the large data set expected in 2011--2012, \lhcb is poised to become a
leader in charm physics.
\section{Introduction}
\label{sec:introduction}
\vspace{-2mm}
Deep neural networks have dominated computer vision and machine learning in recent years, and this has led to their widespread deployment in real-world systems \citep{Cao2018,Chen2018,Kamilaris2018,Ker2018,Wang2018}. However, many current multi-class classification networks in particular are poorly calibrated, in the sense that the probability values that they associate with the class labels they predict overestimate the likelihoods of those class labels being correct in the real world. This is a major problem, since if networks are routinely overconfident, then downstream components cannot trust their predictions. The underlying cause is hypothesised to be that these networks' high capacity leaves them vulnerable to overfitting on the negative log-likelihood (NLL) loss they conventionally use during training \citep{Guo2017}.
Given the importance of this problem, numerous suggestions for how to address it have been proposed. Much work has been inspired by approaches that were not originally formulated in a deep learning context, such as Platt scaling~\citep{Platt1999}, histogram binning~\citep{Zadrozny2001}, isotonic regression~\citep{Zadrozny2002}, and Bayesian binning and averaging~\citep{Naeini2015, Naeini2016}. As deep learning has become more dominant, however, various works have begun to directly target the calibration of deep networks. For example, \cite{Guo2017} have popularised a modern variant of Platt scaling known as \emph{temperature scaling}, which works by dividing a network's logits by a scalar $T > 0$ (learnt on a validation subset) prior to performing softmax. Temperature scaling has the desirable property that it can improve the calibration of a network without in any way affecting its accuracy. However, whilst its simplicity and effectiveness have made it a popular network calibration method, it does have downsides. For example, whilst it scales the logits to reduce the network's confidence in incorrect predictions, this also slightly reduces the network's confidence in predictions that were correct \citep{Kumar2018}. Moreover, it is known that temperature scaling does not calibrate a model under data distribution shift \citep{snoek2019can}.
By contrast, \cite{Kumar2018} initially eschew temperature scaling in favour of minimising a differentiable proxy for calibration error at training time, called Maximum Mean Calibration Error (MMCE), although they do later also use temperature scaling as a post-processing step to obtain better results than cross-entropy followed by temperature scaling~\citep{Guo2017}. Separately, \cite{muller2019does} propose training models on cross-entropy loss with label smoothing instead of one-hot labels, and show that label smoothing has a very favourable effect on model calibration.
In this paper, we propose a technique for improving network calibration that works by replacing the cross-entropy loss conventionally used when training classification networks with the focal loss proposed by \cite{Lin2017}. We observe that unlike cross-entropy, which minimises the KL divergence between the predicted (softmax) distribution and the target distribution (one-hot encoding in classification tasks) over classes, focal loss minimises a regularised KL divergence between these two distributions, which ensures minimisation of the KL divergence whilst \emph{increasing the entropy} of the predicted distribution, thereby preventing the model from becoming overconfident. Since focal loss, as shown in \S\ref{sec:focalloss}, is dependent on a hyperparameter, $\gamma$, that needs to be cross-validated, we also provide a method for choosing $\gamma$ automatically for each sample, and show that it outperforms all the baseline models.
The intuition behind using focal loss is to direct the network's attention during training towards samples for which it is currently predicting a low probability for the correct class, since trying to reduce the NLL on samples for which it is already predicting a high probability for the correct class is liable to lead to NLL overfitting, and thereby miscalibration \citep{Guo2017}. More formally, we show in \S\ref{sec:focalloss} that focal loss can be seen as \emph{implicitly} regularising the weights of the network during training by causing the gradient norms for confident samples to be lower than they would have been with cross-entropy, which we would expect to reduce overfitting and improve the network's calibration.
Overall, we make the following contributions:
\begin{enumerate}[leftmargin=*,topsep=0pt,itemsep=0pt,partopsep=0pt,parsep=0pt]
\item In \S\ref{sec:cause_cali}, we study the link that \cite{Guo2017} observed between miscalibration and NLL overfitting in detail, and show that the overfitting is associated with the predicted distributions for misclassified test samples becoming peakier as the optimiser tries to increase the magnitude of the network's weights to reduce the training NLL.
\item In \S\ref{sec:focalloss}, we propose the use of focal loss for training better-calibrated networks, and provide both theoretical and empirical justifications for this approach. In addition, we provide a principled method for automatically choosing $\gamma$ for each sample during training.
\item In \S\ref{sec:experiments}, we show, via experiments on a variety of classification datasets and network architectures, that DNNs trained with focal loss are more calibrated than those trained with cross-entropy loss (both with and without label smoothing), MMCE or Brier loss~\citep{brier1950verification}. Finally, we make the interesting observation that whilst temperature scaling may not work for detecting out-of-distribution (OoD) samples, our approach can: taking CIFAR-10 as the in-distribution dataset and SVHN and CIFAR-10-C as OoD datasets, we show that models trained with focal loss are better at detecting OoD samples.
\end{enumerate}
\section{Problem Formulation}
\vspace{-2.5mm}
Let $D = \langle(\bm{\mathrm{x}}_i, y_i)\rangle_{i=1}^N$ denote a dataset consisting of $N$ samples from a joint distribution $\mathcal{D}(\mathcal{X}, \mathcal{Y})$, where for each sample $i$, $\bm{\mathrm{x}}_i \in \mathcal{X}$ is the input and $y_i \in \mathcal{Y} = \{1, 2, ..., K\}$ is the ground-truth class label. Let $\hat{p}_{i,y} = f_\theta(y|\bm{\mathrm{x}}_i)$ be the probability that a neural network $f$ with model parameters $\theta$ predicts for a class $y$ on a given input $\bm{\mathrm{x}}_i$. The class that $f$ predicts for $\bm{\mathrm{x}}_i$ is computed as $\hat{y}_i = \mathrm{argmax}_{y \in \mathcal{Y}} \; \hat{p}_{i,y}$, and the predicted confidence as $\hat{p}_i = \mathrm{max}_{y \in \mathcal{Y}} \; \hat{p}_{i,y}$. The network is said to be \emph{perfectly calibrated} when, for each sample $(\bm{\mathrm{x}}, y) \in D$, the confidence $\hat{p}$ is equal to the model accuracy $\mathbb{P}(\hat{y} = y | \hat{p})$, i.e.\ the probability that the predicted class is correct. For instance, of all the samples to which a perfectly calibrated neural network assigns a confidence of $0.8$, $80\%$ should be correctly predicted.
A popular metric used to measure model calibration is the \textit{expected calibration error} (ECE) \citep{Naeini2015}, defined as the expected absolute difference between the model's confidence and its accuracy, i.e.\ \( \mathbb{E}_{\hat{p}} \big[ \left| \mathbb{P}(\hat{y} = y | \hat{p}) - \hat{p} \right| \big] \). Since we only have finite samples, the ECE cannot in practice be computed using this definition. Instead, we divide the interval $[0,1]$ into $M$ equispaced bins, where the $i^{\mathrm{th}}$ bin is the interval $\left(\frac{i-1}{M}, \frac{i}{M} \right]$. Let $B_i$ denote the set of samples with confidences belonging to the $i^{\mathrm{th}}$ bin. The accuracy $A_i$ of this bin is computed as \(A_i = \frac{1}{|B_i|} \sum_{j \in B_i} \mathbbm{1} \left(\hat{y}_j = y_j\right) \), where $\mathbbm{1}$ is the indicator function, and $\hat{y}_j$ and $y_j$ are the predicted and ground-truth labels for the $j^{\mathrm{th}}$ sample. Similarly, the confidence $C_i$ of the $i^{\mathrm{th}}$ bin is computed as \(C_i = \frac{1}{|B_i|} \sum_{j \in B_i} \hat{p}_j \), i.e.\ $C_i$ is the average confidence of all samples in the bin. The ECE can be approximated as a weighted average of the absolute difference between the accuracy and confidence of each bin: $\mathrm{ECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} \left| A_i - C_i \right|$.
A similar metric, the \textit{maximum calibration error} (MCE) \citep{Naeini2015}, is defined as the maximum absolute difference between the accuracy and confidence of each bin: $\mathrm{MCE} = \mathrm{max}_{i \in \{1, ..., M\}}\left|A_i - C_i\right|$.
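For concreteness, here is a minimal NumPy implementation of the binned ECE and MCE estimators defined above (our sketch, not the evaluation code used in the experiments; the two input arrays are assumed given):
\begin{verbatim}
import numpy as np

def ece_mce(confidences, correct, n_bins=15):
    """ECE and MCE with equal-width bins ((i-1)/M, i/M], as above.

    confidences: array of max softmax probabilities, shape (N,).
    correct: array of 0/1 prediction-correctness flags, shape (N,).
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.sum() / n * gap  # weight |B_i| / N
            mce = max(mce, gap)
    return ece, mce
\end{verbatim}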
\textbf{AdaECE:} One disadvantage of ECE is the uniform bin width. For a trained model, most of the samples lie within the highest confidence bins, and hence these bins dominate the value of the ECE. We thus also consider another metric, AdaECE (Adaptive ECE), for which bin sizes are calculated so as to evenly distribute samples between bins (similar to the adaptive binning procedure in \cite{Nguyen2015posterior}): $\mathrm{AdaECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} \left| A_i - C_i \right| \text{ s.t.\ } \forall i, j \cdot |B_i| = |B_j|$.
\textbf{Classwise-ECE:} The ECE metric only considers the probability of the predicted class, without considering the other scores in the softmax distribution. A stronger definition of calibration would require the probabilities of all the classes in the softmax distribution to be calibrated \citep{Kull2019beyond, Vaicenavicius2019, Widmann2019calibration, Kumar2019verified}. This can be achieved with a simple classwise extension of the ECE metric: $\mathrm{Classwise ECE} = \frac{1}{K} \sum_{i=1}^{M}\sum_{j=1}^{K} \frac{|B_{i,j}|}{N} \left| A_{i,j} - C_{i,j} \right|$, where $K$ is the number of classes, \(B_{i,j}\) denotes the set of samples from the $j^{th}$ class in the $i^{th}$ bin, \(A_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \mathbbm{1} \left(j = y_k\right) \) and \(C_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \hat{p}_{k,j} \).
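The two variants differ from ECE only in the binning: AdaECE sorts the confidences and uses equal-mass bins, while classwise-ECE applies the binning to every class's probability and averages over classes. A minimal sketch (ours, following the definitions above):
\begin{verbatim}
import numpy as np

def ada_ece(confidences, correct, n_bins=15):
    """AdaECE: bins chosen so each holds (roughly) equally many samples."""
    order = np.argsort(confidences)
    ece, n = 0.0, len(confidences)
    for idx in np.array_split(order, n_bins):
        gap = abs(correct[idx].mean() - confidences[idx].mean())
        ece += len(idx) / n * gap
    return ece

def classwise_ece(probs, labels, n_bins=15):
    """Classwise-ECE: per-class ECE over all K scores, averaged over K."""
    n, k = probs.shape
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for j in range(k):
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (probs[:, j] > lo) & (probs[:, j] <= hi)
            if in_bin.any():
                gap = abs((labels[in_bin] == j).mean()
                          - probs[in_bin, j].mean())
                total += in_bin.sum() / n * gap
    return total / k
\end{verbatim}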
\noindent A common way of visualising calibration is to use a \emph{reliability plot} \citep{Niculescu2005}, which plots the accuracies of the confidence bins as a bar chart (see Appendix Figure~\ref{fig:rel_conf_bin_plot}). For a perfectly calibrated model, the accuracy for each bin matches the confidence, and hence all of the bars lie on the diagonal. By contrast, if most of the bars lie above the diagonal, the model is more accurate than it expects, and is under-confident, and if most of the bars lie below the diagonal, then it is over-confident.
\vspace{-2.5mm}
\section{What Causes Miscalibration?}
\label{sec:cause_cali}
\vspace{-2.5mm}
We now discuss why high-capacity neural networks, despite achieving low classification errors on well-known datasets, tend to be miscalibrated. A key empirical observation made by \cite{Guo2017} was that poor calibration of such networks appears to be linked to overfitting on the negative log-likelihood (NLL) during training. In this section, we further inspect this observation to provide new insights.
For the analysis, we train a ResNet-50 network on CIFAR-10 with state-of-the-art performance settings~\citep{PyTorchCIFAR}. We use Stochastic Gradient Descent (SGD) with a mini-batch of size 128, momentum of 0.9, and learning rate schedule of $\{0.1, 0.01, 0.001\}$ for the first 150, next 100, and last 100 epochs, respectively. We minimise cross-entropy loss (a.k.a.\ NLL) $\mathcal{L}_c$, which, in a standard classification context, is $-\log \hat{p}_{i,y_i}$, where $\hat{p}_{i,y_i}$ is the probability assigned by the network to the correct class $y_i$ for the i$^{th}$ sample. Note that the NLL is minimised when for each training sample $i$, $\hat{p}_{i,y_i} = 1$, whereas the classification error is minimised when $\hat{p}_{i,y_i} > \hat{p}_{i,y}$ for all $y \neq y_i$. This indicates that even when the classification error is $0$, the NLL can be positive, and the optimisation algorithm can still try to reduce it to $0$ by further increasing the value of $\hat{p}_{i,y_i}$ for each sample (see Appendix~\ref{rel_plots_appendix}).
To study how miscalibration occurs during training, we plot the average NLL for the train and test sets at each training epoch in Figures~\ref{fig:nll_entropy_ece}(a) and \ref{fig:nll_entropy_ece}(b). We also plot the average NLL and the entropy of the softmax distribution produced by the network for the correctly and incorrectly classified samples. In Figure \ref{fig:nll_entropy_ece}(c), we plot the classification errors on the train and test sets, along with the test set ECE.
\begin{figure*}[!t]
\centering
\subfigure[]{\includegraphics[width=0.32\linewidth]{./train_nll_entropy_correct_incorrect.pdf}}
\subfigure[]{\includegraphics[width=0.32\linewidth]{./test_nll_entropy_correct_incorrect.pdf}}
\subfigure[]{\includegraphics[width=0.32\linewidth]{./train_test_error_ece.pdf}}
\vspace{-4mm}
\caption{Metrics related to calibration plotted whilst training a ResNet-50 network on CIFAR-10.}
\label{fig:nll_entropy_ece}
\vspace{-6mm}
\end{figure*}
\textbf{Curse of misclassified samples:} Figures \ref{fig:nll_entropy_ece}(a) and \ref{fig:nll_entropy_ece}(b) show that although the average train NLL (for both correctly and incorrectly classified training samples) broadly decreases throughout training, after the $150^{th}$ epoch (where the learning rate drops by a factor of $10$), there is a marked rise in the average test NLL, indicating that the network starts to overfit on average NLL. This increase in average test NLL is caused only by the incorrectly classified samples, as the average NLL for the correctly classified samples continues to decrease even after the $150^{th}$ epoch. We also observe that after epoch $150$, the test set ECE rises, indicating that the network is becoming miscalibrated. This corroborates the observation in \cite{Guo2017} that miscalibration and NLL overfitting are linked.
\textbf{Peak at the wrong place:} We further observe that the entropies of the softmax distributions for both the correctly and incorrectly classified {\em test} samples decrease throughout training (in other words, the distributions get peakier). This observation, coupled with the one we made above, indicates that {\em for the wrongly classified test samples, the network gradually becomes more and more confident about its incorrect predictions}.
\textbf{Weight magnification:} The increase in confidence of the network's predictions can happen if the network increases the norm of its weights $W$ to increase the magnitudes of the logits. In fact, cross-entropy loss is minimised when for each training sample $i$, $\hat{p}_{i,y_i} = 1$, which is possible only when $||W|| \to \infty$. Cross-entropy loss thus inherently induces this tendency of weight magnification in neural network optimisation. This may also explain why weight decay \citep{Guo2017}, which regulates the norm of the weights, improves the calibration of neural networks. This increase in the network's confidence during training is one of the key causes of miscalibration.
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathbf{w}}{\mathbf{w}}
\vspace{-2.5mm}
\section{Improving Calibration using Focal Loss}
\label{sec:focalloss}
\vspace{-2.5mm}
As discussed in \S\ref{sec:cause_cali}, overfitting on NLL, which is observed as the network grows more confident on all of its predictions irrespective of their correctness, is strongly related to poor calibration. One cause of this is that the cross-entropy objective minimises the difference between the softmax distribution and the ground-truth one-hot encoding over an entire mini-batch, irrespective of how well a network classifies individual samples in the mini-batch. In this work, we study an alternative loss function, popularly known as \textit{focal loss} \citep{Lin2017}, that tackles this by weighting loss components generated from individual samples in a mini-batch by how well the model classifies them. For classification tasks where the target distribution is a one-hot encoding, it is defined as $\mathcal{L}_f = -(1 - \hat{p}_{i,y_i})^\gamma \log \hat{p}_{i,y_i}$, where $\gamma$ is a user-defined hyperparameter\footnote{We note in passing that unlike cross-entropy loss, focal loss in its general form is not a proper loss function, as minimising it does not always lead to the predicted distribution $\hat{p}$ being equal to the target distribution $q$ (see Appendix~\ref{reg_bregman} for the relevant definition and a longer discussion). However, when $q$ is a one-hot encoding (as in our case, and for most classification tasks), minimising focal loss does lead to $\hat{p}$ being equal to $q$.}.
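A minimal PyTorch implementation of this loss for one-hot targets (our sketch; setting $\gamma = 0$ recovers standard cross-entropy) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """L_f = -(1 - p_y)^gamma * log p_y, averaged over the mini-batch.

    logits: (N, K) unnormalised scores; targets: (N,) class indices.
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_p_y = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_y = log_p_y.exp()
    return (-(1.0 - p_y) ** gamma * log_p_y).mean()
\end{verbatim}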
\textbf{Why might focal loss improve calibration?} We know that cross-entropy forms an upper bound on the KL-divergence between the target distribution $q$ and the predicted distribution $\hat{p}$, i.e.\ $\mathcal{L}_c \geq \mathrm{KL}(q||\hat{p})$, so minimising cross-entropy results in minimising $\mathrm{KL}(q||\hat{p})$. Interestingly, a general form of focal loss can be shown to be an upper bound on the regularised KL-divergence, where the regulariser is the negative entropy of the predicted distribution $\hat{p}$, and the regularisation parameter is $\gamma$, the hyperparameter of focal loss (a proof of this can be found in Appendix~\ref{reg_bregman}):
\begin{equation}
\label{eq:reg_bregman}
\mathcal{L}_f \geq \mathrm{KL}(q||\hat{p})- \gamma\mathbb{H}[\hat{p}].
\end{equation}
The most interesting property of this upper bound is that it shows that replacing cross-entropy with focal loss has the effect of adding a maximum-entropy regulariser \citep{Pereyra2017} to the implicit minimisation that was previously being performed.
In other words, trying to minimise focal loss minimises the KL divergence between $\hat{p}$ and $q$, whilst simultaneously increasing the entropy of the predicted distribution $\hat{p}$.
Note that, in the case of a one-hot ground-truth encoding, only the component of the entropy of $\hat{p}$ corresponding to the ground-truth index, $\gamma (-\hat{p}_{i,y_i} \log \hat{p}_{i,y_i})$, will be maximised (see Appendix~\ref{reg_bregman}).
Encouraging the predicted distribution to have higher entropy can help avoid the overconfident predictions produced by DNNs (see the `Peak at the wrong place' paragraph of \S\ref{sec:cause_cali}), and thereby improve calibration.
\begin{figure*}[!t]
\centering
\subfigure[\vspace{-2mm}]{\includegraphics[width=0.19\linewidth]{./test_nll.pdf}}
\subfigure[\vspace{-2mm}]{\includegraphics[width=0.19\linewidth]{./test_nll_correct.png}}
\subfigure[\vspace{-2mm}]{\includegraphics[width=0.19\linewidth]{./test_nll_incorrect.pdf}}
\subfigure[\vspace{-2mm}]{\includegraphics[width=0.19\linewidth]{./test_entropy_incorrect.pdf}}
\subfigure[\vspace{-2mm}]{\includegraphics[width=0.19\linewidth]{./weight_norm.pdf}}
\vspace{-\baselineskip}
\caption{How metrics related to model calibration change whilst training several ResNet-50 networks on CIFAR-10, using either cross-entropy loss, or focal loss with $\gamma$ set to 1, 2 or 3.}
\vspace{-\baselineskip}
\label{fig:nll_corr_incorr_entropy}
\end{figure*}
\textbf{Empirical observations:} To analyse the behaviour of neural networks trained on focal loss, we use the same framework as mentioned above, and train four ResNet-50 networks on CIFAR-10, one using cross-entropy loss, and three using focal loss with $\gamma = 1, 2$ and $3$. Figure \ref{fig:nll_corr_incorr_entropy}(a) shows that the test NLL for the cross-entropy model significantly increases towards the end of training (before saturating), whereas the NLLs for the focal loss models remain low. To better understand this, we analyse the behaviour of these models for correctly and incorrectly classified samples. Figure~\ref{fig:nll_corr_incorr_entropy}(b) shows that even though the NLLs for the correctly classified samples, broadly speaking, decrease over the course of training for all the models, the NLLs for the focal loss models remain consistently higher than that for the cross-entropy model throughout training, implying that the focal loss models are relatively less confident than the cross-entropy model for samples that they predict correctly. This is important, as we have already discussed that it is overconfidence that normally makes deep neural networks miscalibrated. Figure \ref{fig:nll_corr_incorr_entropy}(c) shows that in contrast to the cross-entropy model, for which the NLL for misclassified test samples increases significantly after epoch $150$, the rise in this value for the focal loss models is much less severe. Additionally, in Figure \ref{fig:nll_corr_incorr_entropy}(d), we notice that the entropy of the softmax distribution for misclassified test samples is consistently (if marginally) higher for focal loss than for cross-entropy (consistent with Equation~\ref{eq:reg_bregman}).
Note that from Figure \ref{fig:nll_corr_incorr_entropy}(a), one might think that applying early stopping when training a model on cross-entropy can provide better calibration scores. However, there is no ideal early-stopping criterion that provides both the best calibration error and the best test set accuracy. For a fair comparison, we chose $3$ intermediate models for each loss function with the best val set ECE, NLL and accuracy, and observed that: a) for every stopping criterion, focal loss outperforms cross-entropy in both test set accuracy and ECE, b) when using val set ECE as a stopping criterion, the intermediate model for cross-entropy indeed improves its test set ECE, but at the cost of a significantly higher test error. Please refer to Appendix~\ref{sec:early_stopping} for more details.
As per \S\ref{sec:cause_cali}, an increase in the test NLL and a decrease in the test entropy for misclassified samples, along with no corresponding increase in the test NLL for the correctly classified samples, can be interpreted as the network starting to predict softmax distributions for the misclassified samples that are ever more peaky in the wrong place. Notably, our results in Figures~\ref{fig:nll_corr_incorr_entropy}(b), \ref{fig:nll_corr_incorr_entropy}(c) and \ref{fig:nll_corr_incorr_entropy}(d) clearly show that this effect is significantly reduced when training with focal loss rather than cross-entropy, leading to a better-calibrated network whose predictions are less peaky in the wrong place.
\textbf{Theoretical justification:} As mentioned previously, once a model trained using cross-entropy reaches high training accuracy, the optimiser may try to further reduce the training NLL by increasing the confidences for the correctly classified samples. It may achieve this by magnifying the network weights to increase the magnitudes of the logits. To verify this hypothesis, we plot the $L_2$ norm of the weights of the last linear layer for all four networks as a function of the training epoch (see Figure \ref{fig:nll_corr_incorr_entropy}(e)). Notably, although the norms of the weights for the models trained on focal loss are initially higher than that for the cross-entropy model, \textit{a complete reversal} in the ordering of the weight norms occurs between epochs $150$ and $250$. In other words, as the networks start to become miscalibrated, the weight norm for the cross-entropy model also starts to become greater than those for the focal loss models. In practice, this is because focal loss, by design, starts to act as a regulariser on the network's weights once the model has gained a certain amount of confidence in its predictions. This behaviour of focal loss can be observed even on a much simpler setup like a linear model (see Appendix~\ref{linear_model}). To better understand this, we start by considering the following proposition (proof in Appendix~\ref{sec:proof}):
\begin{pro}
\label{pro1}
For focal loss $\mathcal{L}_f$ and cross-entropy $\mathcal{L}_c$, the gradients $\frac{\partial \mathcal{L}_f}{\partial \mathbf{w}} = \frac{\partial \mathcal{L}_c}{\partial \mathbf{w}} g(\hat{p}_{i,y_i}, \gamma)$, where $g(p, \gamma) = (1-p)^\gamma - \gamma p (1-p)^{\gamma - 1} \log(p)$, $\gamma \in \mathbb{R}^+$ is the focal loss hyperparameter, and $\mathbf{w}$ denotes the parameters of the last linear layer. Thus $\norm{\frac{\partial \mathcal{L}_f}{\partial \mathbf{w}}} \leq \norm{\frac{\partial \mathcal{L}_c}{\partial \mathbf{w}}}$ if $g(\hat{p}_{i,y_i}, \gamma) \in [0, 1]$.
\end{pro}
\vspace{-.5\baselineskip}
Proposition~\ref{pro1} shows the relationship between the norms of the gradients of the last linear layer for focal loss and cross-entropy loss, for the same network architecture. Note that this relation depends on a function $g(p, \gamma)$, which we plot in Figure~\ref{fig:g_pt_grad_norms}(a) to understand its behaviour. It is clear that for every $\gamma$, there exists a (different) threshold $p_0$ such that for all $p \in [0,p_0]$, $g(p,\gamma) \ge 1$, and for all $p \in (p_0, 1]$, $g(p,\gamma) < 1$. (For example, for $\gamma = 1$, $p_0 \approx 0.4$.) We use this insight to further explain why focal loss provides implicit weight regularisation.
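The threshold behaviour is easy to verify numerically. The sketch below (ours) implements $g(p, \gamma)$ from Proposition~\ref{pro1} and locates, for each $\gamma$, the first $p$ at which $g$ drops below $1$:
\begin{verbatim}
import numpy as np

def g(p, gamma):
    """g(p, gamma) from Proposition 1."""
    return (1 - p) ** gamma - gamma * p * (1 - p) ** (gamma - 1) * np.log(p)

p = np.linspace(1e-4, 1 - 1e-4, 100_000)
for gamma in (1.0, 2.0, 3.0):
    p0 = p[np.argmax(g(p, gamma) < 1.0)]  # first p with g(p, gamma) < 1
    print(f"gamma = {gamma}: g < 1 for p > ~{p0:.2f}")
\end{verbatim}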
\begin{figure*}[!t]
\centering
\subfigure[]{\includegraphics[width=0.24\linewidth]{./g_p_t_gamma.png}}
\subfigure[Epoch 10]{\includegraphics[width=0.24\linewidth]{./grad_norm_10_epochs.png}}
\subfigure[Epoch 100]{\includegraphics[width=0.24\linewidth]{./grad_norm_100_epochs.png}}
\subfigure[Epoch 200]{\includegraphics[width=0.24\linewidth]{./grad_norm_200_epochs.png}}
\vspace{-\baselineskip}
\caption{(a): $g(p, \gamma)$ vs.\ $p$ and (b-d): histograms of the gradient norms of the last linear layer for both cross-entropy and focal loss.}
\label{fig:g_pt_grad_norms}
\vspace{-5mm}
\end{figure*}
\textbf{Implicit weight regularisation:} For a network trained using focal loss with a fixed $\gamma$, during the initial stages of the training, when $\hat{p}_{i,y_i} \in (0,p_0)$, $g(\hat{p}_{i,y_i}, \gamma) > 1$. This implies that the confidences of the focal loss model's predictions will initially increase faster than they would for cross-entropy. However, as soon as $\hat{p}_{i,y_i}$ crosses the threshold $p_0$, $g(\hat{p}_{i,y_i}, \gamma)$ falls below $1$ and reduces the size of the gradient updates made to the network weights, thereby having a regularising effect on the weights. This is why, in Figure \ref{fig:nll_corr_incorr_entropy}(e), we find that the weight norms of the models trained with focal loss are initially higher than that for the model trained using cross-entropy. However, as training progresses, we find that the ordering of the weight norms reverses, as focal loss starts regularising the network weights. Moreover, we can draw similar insights from Figures~\ref{fig:g_pt_grad_norms}(b), \ref{fig:g_pt_grad_norms}(c) and \ref{fig:g_pt_grad_norms}(d), in which we plot histograms of the gradient norms of the last linear layer (over all samples in the training set) at epochs $10$, $100$ and $200$, respectively. At epoch $10$, the gradient norms for cross-entropy and focal loss are similar, but as training progresses, those for cross-entropy decrease less rapidly than those for focal loss, indicating that the gradient norms for focal loss are consistently lower than those for cross-entropy throughout training.
Finally, observe in Figure~\ref{fig:g_pt_grad_norms}(a) that for higher $\gamma$ values, the fall in $g(p,\gamma)$ is steeper. We would thus expect a greater weight regularisation effect for models that use higher values of $\gamma$. This explains why, of the three models that we trained using focal loss, the one with $\gamma = 3$ outperforms (in terms of calibration) the one with $\gamma = 2$, which in turn outperforms the model with $\gamma = 1$. Based on this observation, one might think that, in general, a higher value of $\gamma$ would lead to a more calibrated model. However, this is not the case, as we notice from Figure~\ref{fig:g_pt_grad_norms}(a) that for $\gamma \ge 7$, $g(p,\gamma)$ reduces to nearly $0$ for a relatively low value of $p$ (around $0.5$). As a result, using values of $\gamma$ that are too high will cause the gradients to die (i.e.\ reduce to nearly $0$) early, at a point at which the network's predictions remain ambiguous, thereby causing the training process to fail.
\textbf{How to choose $\gamma$:} As discussed, focal loss provides implicit entropy and weight regularisation, which heavily depend on the value of $\gamma$. Finding an appropriate $\gamma$ is normally done using cross-validation. Also, traditionally, $\gamma$ is fixed for all samples in the dataset. However, as shown, the regularisation effect for a sample $i$ depends on $\hat{p}_{i,y_i}$, i.e.\ the predicted probability for the ground truth label for the sample. It thus makes sense to choose $\gamma$ for each sample based on the value of $\hat{p}_{i,y_i}$. To this end, we provide Proposition~\ref{pro:gamma} (proof in Appendix~\ref{sec:proof}), which we use to find a solution to this problem:
\begin{pro}
\label{pro:gamma}
Given a $p_0$, for $1 \geq p \geq p_0 > 0$, $g(p, \gamma) \leq 1$ for all $\gamma \geq \gamma^* = \frac{a}{b} + \frac{1}{\log a}W_{-1} \big(-\frac{a^{(1-a/b)}}{b} \log a \big)$, where $a = 1-p_0$, $b = p_0 \log p_0$, and $W_{-1}$ is the Lambert-W function~\citep{corless1996lambertw}. Moreover, for $p \geq p_0 > 0$ and $\gamma \geq \gamma^*$, the equality $g(p, \gamma) = 1$ holds only for $p = p_0$ and $\gamma = \gamma^*$.
\end{pro}
It is worth noting that there exist multiple values of $\gamma$ where $g(p, \gamma) \leq 1$ for all $p \geq p_0$. For a given $p_0$, Proposition~\ref{pro:gamma} allows us to compute $\gamma$ s.t.\ (i) $g(p_0,\gamma) = 1$; (ii) $g(p, \gamma) \ge 1$ for $p \in [0,p_0)$; and (iii) $g(p, \gamma) < 1$ for $p \in (p_0,1]$. This allows us to control the magnitude of the gradients for a particular sample $i$ based on the current value of $\hat{p}_{i,y_i}$, and gives us a way of obtaining an informed value of $\gamma$ for each sample. For instance, a reasonable policy might be to choose $\gamma$ s.t.\ $g(\hat{p}_{i,y_i}, \gamma) > 1$ if $\hat{p}_{i,y_i}$ is small (say less than $0.25$), and \ $g(\hat{p}_{i,y_i}, \gamma) < 1$ otherwise. Such a policy will have the effect of making the weight updates larger for samples having a low predicted probability for the correct class and smaller for samples with a relatively higher predicted probability for the correct class.
Following the aforementioned arguments, we choose a threshold $p_0$ of $0.25$, and use Proposition~\ref{pro:gamma} to obtain a $\gamma$ policy such that $g(p, \gamma)$ is observably greater than $1$ for $p \in [0, 0.25)$ and $g(p, \gamma) < 1$ for $p \in (0.25, 1]$. In particular, we use the following schedule: if $\hat{p}_{i,y_i} \in [0,0.2)$, then $\gamma = 5$, otherwise $\gamma = 3$ (note that $g(0.2, 5) \approx 1$ and $g(0.25, 3) \approx 1$: see Figure~\ref{fig:g_pt_grad_norms}(a)).
We find this $\gamma$ policy to perform consistently well across multiple classification datasets and network architectures. Having said that, one can calculate multiple such schedules for $\gamma$ following Proposition~\ref{pro:gamma}, using the intuition of having a relatively high $\gamma$ for low values of $\hat{p}_{i, y_i}$ and a relatively low $\gamma$ for high values of $\hat{p}_{i, y_i}$.
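Proposition~\ref{pro:gamma} is straightforward to evaluate with the $W_{-1}$ branch available in SciPy. The sketch below (ours) computes $\gamma^*$ for a given $p_0$ and encodes the FLSD-53 schedule; it returns $\gamma^* \approx 3$ for $p_0 = 0.25$ and $\gamma^* \approx 5$ for $p_0 = 0.2$, consistent with the values quoted above:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def gamma_star(p0):
    """Smallest gamma with g(p, gamma) <= 1 for all p >= p0 (Prop. 2)."""
    a, b = 1.0 - p0, p0 * np.log(p0)
    arg = -(a ** (1.0 - a / b) / b) * np.log(a)
    return float(np.real(a / b + lambertw(arg, k=-1) / np.log(a)))

print(gamma_star(0.25), gamma_star(0.2))  # approximately 3 and 5

def flsd53_gamma(p_y):
    """Sample-dependent schedule FLSD-53 used in the experiments."""
    return 5.0 if p_y < 0.2 else 3.0
\end{verbatim}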
\vspace{-2.5mm}
\section{Experiments}
\label{sec:experiments}
\vspace{-3mm}
We conduct image and document classification experiments to test the performance of focal loss. For the former, we use CIFAR-10/100 \citep{Krizhevsky2009} and Tiny-ImageNet \citep{deng2009imagenet}, and train ResNet-50, ResNet-110 \citep{He2016}, Wide-ResNet-26-10 \citep{Zagoruyko2016} and DenseNet-121 \citep{Huang2017} models, and for the latter, we use the 20 Newsgroups \citep{Lang1995} and Stanford Sentiment Treebank (SST) \citep{Socher2013} datasets and train Global Pooling CNN \citep{Lin2013} and Tree-LSTM~\citep{Tai2015} models. Further details on the datasets and training can be found in Appendix~\ref{dataset}.
\textbf{Baselines:} Along with cross-entropy loss, we compare our method against the following baselines: a) \emph{MMCE} (Maximum Mean Calibration Error) \citep{Kumar2018}, a continuous and differentiable proxy for calibration error that is normally used as a regulariser alongside cross-entropy, b) \emph{Brier loss}~\citep{brier1950verification}, the squared error between the predicted softmax vector and the one-hot ground truth encoding (Brier loss is an important baseline as it can be decomposed into calibration and refinement~\citep{degroot1983comparison}), and c) \emph{Label smoothing}~\citep{muller2019does} (LS): given a one-hot ground-truth distribution $\bm{\mathrm{q}}$ and a smoothing factor $\alpha$ (hyperparameter), the smoothed vector $\bm{\mathrm{s}}$ is obtained as $\bm{\mathrm{s}}_i = (1-\alpha)\bm{\mathrm{q}}_i + \alpha(1-\bm{\mathrm{q}}_i)/(K-1)$, where $\bm{\mathrm{s}}_i$ and $\bm{\mathrm{q}}_i$ denote the $i^{th}$ elements of $\bm{\mathrm{s}}$ and $\bm{\mathrm{q}}$ respectively, and $K$ is the number of classes. Instead of $\bm{\mathrm{q}}$, $\bm{\mathrm{s}}$ is treated as the ground truth. We train models using $\alpha = 0.05$ and $\alpha = 0.1$, but find $\alpha = 0.05$ to perform better. We thus report results for label smoothing with $\alpha = 0.05$, denoted LS-$0.05$.
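For reference, the smoothing operation of baseline (c) is a one-liner (our sketch, with $\bm{\mathrm{q}}$ given as a one-hot matrix):
\begin{verbatim}
import numpy as np

def smooth_labels(q, alpha=0.05):
    """s = (1 - alpha) q + alpha (1 - q) / (K - 1), rowwise over (N, K)."""
    k = q.shape[1]
    return (1.0 - alpha) * q + alpha * (1.0 - q) / (k - 1)
\end{verbatim}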
\textbf{Focal Loss}: As mentioned in \S\ref{sec:focalloss}, our proposed approach is the sample-dependent schedule FLSD-$53$ ($\gamma=5$ for $\hat{p}_y \in [0, 0.2)$, and $\gamma=3$ for $\hat{p}_y \in [0.2, 1]$), which we find to perform well across most classification datasets and network architectures. In addition, we also train other focal loss baselines, including ones with $\gamma$ fixed to $1$, $2$ and $3$, and also ones that have a training epoch-dependent schedule for $\gamma$. Among the focal loss models trained with a fixed $\gamma$, using the validation set we find $\gamma = 3$ (FL-3) to perform best. Details of all these approaches can be found in Appendix~\ref{results}.
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{./cifar10_resnet.pdf}
\vspace{-\baselineskip}
\caption{Bar plots with confidence intervals for ECE, AdaECE and Classwise-ECE, computed for ResNet-50 (first $3$ figures) and ResNet-110 (last $3$ figures) on CIFAR-10.}
\label{fig:error_ba}
\vspace{-1\baselineskip}
\end{figure*}
\begin{table*}[!t]
\centering
\scriptsize
\resizebox{\linewidth}{!}{%
\begin{tabular}{cccccccccccccc}
\toprule
\textbf{Dataset} & \textbf{Model} & \multicolumn{2}{c}{\textbf{Cross-Entropy}} &
\multicolumn{2}{c}{\textbf{Brier Loss}} & \multicolumn{2}{c}{\textbf{MMCE}} &
\multicolumn{2}{c}{\textbf{LS-0.05}} & \multicolumn{2}{c}{\textbf{FL-3 (Ours)}} &
\multicolumn{2}{c}{\textbf{FLSD-53 (Ours)}} \\
&& Pre T & Post T & Pre T & Post T & Pre T & Post T & Pre T & Post T & Pre T & Post T & Pre T & Post T \\
\midrule
\multirow{4}{*}{CIFAR-100} & ResNet-50&17.52&3.42(2.1)&6.52&3.64(1.1)&15.32&2.38(1.8)&7.81&4.01(1.1)&\tikzmark{top left}5.13&\textbf{1.97(1.1)}&\textbf{4.5}&2.0(1.1)\\
& ResNet-110&19.05&4.43(2.3)&\textbf{7.88}&4.65(1.2)&19.14&\textbf{3.86(2.3)}&11.02&5.89(1.1)&8.64&3.95(1.2)&8.56&4.12(1.2)\\
& Wide-ResNet-26-10&15.33&2.88(2.2)&4.31&2.7(1.1)&13.17&4.37(1.9)&4.84&4.84(1)&\textbf{2.13}&2.13(1)&3.03&\textbf{1.64(1.1)}\\
& DenseNet-121&20.98&4.27(2.3)&5.17&2.29(1.1)&19.13&3.06(2.1)&12.89&7.52(1.2)&4.15&\textbf{1.25(1.1)}&\textbf{3.73}&1.31(1.1)\\
\midrule
\multirow{4}{*}{CIFAR-10} & ResNet-50&4.35&1.35(2.5)&1.82&1.08(1.1)&4.56&1.19(2.6)&2.96&1.67(0.9)&\textbf{1.48}&1.42(1.1)&1.55&\textbf{0.95(1.1)}\\
& ResNet-110&4.41&1.09(2.8)&2.56&1.25(1.2)&5.08&1.42(2.8)&2.09&2.09(1)&\textbf{1.55}&\textbf{1.02(1.1)}&1.87&1.07(1.1)\\
& Wide-ResNet-26-10&3.23&0.92(2.2)&\textbf{1.25}&1.25(1)&3.29&0.86(2.2)&4.26&1.84(0.8)&1.69&0.97(0.9)&1.56&\textbf{0.84(0.9)}\\
& DenseNet-121&4.52&1.31(2.4)&1.53&1.53(1)&5.1&1.61(2.5)&1.88&1.82(0.9)&1.32&1.26(0.9)&\textbf{1.22}&\textbf{1.22(1)}\\
\midrule
Tiny-ImageNet & ResNet-50&15.32&5.48(1.4)&4.44&4.13(0.9)&13.01&5.55(1.3)&15.23&6.51(0.7)&1.87&1.87(1)&\textbf{1.76}&\textbf{1.76(1)}\\
\midrule
20 Newsgroups & Global Pooling CNN&17.92&2.39(3.4)&13.58&3.22(2.3)&15.48&6.78(2.2)&\textbf{4.79}&2.54(1.1)&8.67&3.51(1.5)&6.92&\textbf{2.19(1.5)}\\
\midrule
SST Binary & Tree-LSTM&7.37&2.62(1.8)&9.01&2.79(2.5)&5.03&4.02(1.5)&\textbf{4.84}&4.11(1.2)&16.05&1.78(0.5)&9.19&\textbf{1.83(0.7)}\tikzmark{bottom right}\\
\bottomrule
\end{tabular}%
}
\caption{ECE $(\%)$ computed for different approaches both pre and post temperature scaling (cross-validating T on ECE). Optimal temperature for each method is indicated in brackets. $T\approx 1$ indicates innately calibrated model. \vspace{-3mm}}
\label{table:ece_tab1}
\end{table*}
\begin{table*}[!t]
\centering
\scriptsize
\resizebox{\linewidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\textbf{Dataset} & \textbf{Model} & \textbf{Cross-Entropy} &
\textbf{Brier Loss} & \textbf{MMCE} & \textbf{LS-0.05} & \textbf{FL-3 (Ours)} & \textbf{FLSD-53 (Ours)} \\
\midrule
\multirow{4}{*}{CIFAR-100} & ResNet-50&23.3&23.39&23.2&23.43&22.75&23.22\\
& ResNet-110&22.73&25.1&23.07&23.43&22.92&22.51\\
& Wide-ResNet-26-10&20.7&20.59&20.73&21.19&19.69&20.11\\
& DenseNet-121&24.52&23.75&24.0&24.05&23.25&22.67\\
\midrule
\multirow{4}{*}{CIFAR-10} & ResNet-50&4.95&5.0&4.99&5.29&5.25&4.98\\
& ResNet-110&4.89&5.48&5.4&5.52&5.08&5.42\\
& Wide-ResNet-26-10&3.86&4.08&3.91&4.2&4.13&4.01\\
& DenseNet-121&5.0&5.11&5.41&5.09&5.33&5.46\\
\midrule
Tiny-ImageNet & ResNet-50&49.81&53.2&51.31&47.12&49.69&49.06\\
\midrule
20 Newsgroups & Global Pooling CNN&26.68&27.06&27.23&26.03&29.26&27.98\\
\midrule
SST Binary & Tree-LSTM&12.85&12.85&11.86&13.23&12.19&12.8\\
\bottomrule
\end{tabular}}
\caption{Test set error $(\%)$ computed for different approaches. \vspace{-3mm}}
\label{table:error_tab1}
\end{table*}
\textbf{Temperature Scaling:} In order to compute the optimal temperature, we use two different methods: (a) learning the temperature by minimising val set NLL, and (b) performing grid search over temperatures between 0 and 10, with a step of 0.1, and finding the one that minimises val set ECE. We find the second approach to produce {\em stronger baselines} and report results obtained using this approach.
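Method (b) amounts to the following grid search (a self-contained sketch of ours, re-implementing the ECE estimator of \S2 for completeness; the validation-set logits and labels are assumed given as NumPy arrays):
\begin{verbatim}
import numpy as np

def ece(conf, correct, n_bins=15):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            total += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return total

def fit_temperature(val_logits, val_labels):
    """Grid-search T in (0, 10] with step 0.1, minimising val-set ECE."""
    best_t, best_ece = 1.0, np.inf
    for t in np.arange(0.1, 10.0 + 1e-9, 0.1):
        z = val_logits / t
        p = np.exp(z - z.max(axis=1, keepdims=True))  # stable softmax
        p /= p.sum(axis=1, keepdims=True)
        conf, pred = p.max(axis=1), p.argmax(axis=1)
        e = ece(conf, (pred == val_labels).astype(float))
        if e < best_ece:
            best_t, best_ece = t, e
    return best_t
\end{verbatim}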
\textbf{Performance Gains:} We report ECE$\%$ (computed using 15 bins) along with optimal temperatures in Table \ref{table:ece_tab1}, and test set error in Table~\ref{table:error_tab1}. We report the other calibration scores (AdaECE, Classwise-ECE, MCE and NLL) in Appendix~\ref{results}. Firstly, for all dataset-network pairs, we obtain very competitive classification accuracies (shown in Table~\ref{table:error_tab1}). Secondly, {\em it is clear from Table~\ref{table:ece_tab1}, and Tables~\ref{table:ada_ece_tab1} and~\ref{table:sce_tab1} in the appendix, that focal loss with sample-dependent $\gamma$ and with $\gamma = 3$ outperform all the baselines: cross-entropy, label smoothing, Brier loss and MMCE.} They broadly produce the lowest calibration errors {\em both before and after temperature scaling}. This observation is particularly encouraging, as it also indicates that a principled method of obtaining values of $\gamma$ for focal loss can produce a very calibrated model, with no need to use validation set for tuning $\gamma$. As shown in Figure~\ref{fig:error_ba}, we also compute $90\%$ confidence intervals for ECE, AdaECE and Classwise-ECE using $1000$ bootstrap samples following \cite{Kumar2019verified}, and using ResNet-50/110 trained on CIFAR-10 (see Appendix~\ref{sec:bar_plots} for more results). Note that FLSD-53 produces the lowest calibration errors in general, and the difference in the metric values between FLSD-53 and other approaches (except Brier loss) is mostly statistically significant (i.e., confidence intervals don't overlap), especially before temperature scaling. In addition to the lower calibration errors, there are other advantages of focal loss as well, which we explore next.
\textbf{More advantages of focal loss:} \textit{Behaviour on Out-of-Distribution (OoD) data:} A perfectly calibrated model should have low confidence whenever it misclassifies, including when it encounters data which is OoD \citep{Thulasidasan2019mixup}. Although temperature scaling calibrates a model under the i.i.d.\ assumption, it is known to fail under distributional shift \citep{snoek2019can}. Since focal loss has implicit regularisation effects on the network (see \S\ref{sec:focalloss}), we investigate if it helps to learn representations that are more robust to OoD data. To do this, we use ResNet-110 and Wide-ResNet-26-10 trained on CIFAR-10 and consider the SVHN \citep{Netzer2011} test set and CIFAR-10-C \citep{Hendrycks2019benchmarking} with Gaussian noise corruption at severity 5 as OoD data. We use the entropy of the softmax distribution as the measure of confidence or uncertainty, and report the corresponding AUROC scores both before and after temperature scaling in Table \ref{table:auroc_tab1}. For both SVHN and CIFAR-10-C (using Gaussian noise), models trained on focal loss clearly obtain the highest AUROC scores. \textit{Note that Focal loss even without temperature scaling performs better than other methods with temperature scaling.} We also present the ROC plots pre and post temperature scaling for models trained on CIFAR-10 and tested on SVHN in Figure~\ref{fig:ood_roc_plots}. Thus, it is quite encouraging to note that models trained on focal loss are not only better calibrated under the i.i.d.\ assumption, but also seem to perform better than other competitive loss functions when we try shifting the distribution from CIFAR-10 to SVHN or CIFAR-10-C (pre and post temperature scaling).
\textit{Confident and Calibrated Models:} It is worth noting that focal loss with sample-dependent $\gamma$ has optimal temperatures that are very close to 1, mostly lying between 0.9 and 1.1 (see Table~\ref{table:ece_tab1}). This property is shown by the Brier loss and label smoothing models as well, albeit with worse calibration errors. By contrast, the temperatures for cross-entropy and MMCE models are significantly higher, with values lying between 2.0 and 2.8. An optimal temperature close to 1 indicates that the model is innately calibrated, and cannot be made significantly more calibrated by temperature scaling. In fact, a temperature much greater than 1 can make a model underconfident in general, as it is applied irrespective of the correctness of model outputs. We observe this empirically for ResNet-50 and ResNet-110 trained on CIFAR-10. Although models trained with cross-entropy have much higher confidence before temperature scaling than those trained with focal loss, after temperature scaling, focal loss models are significantly more confident in their predictions. We provide quantitative and qualitative empirical results to support this claim in Appendix~\ref{conf_and_cal}.
\begin{table*}[!t]
\centering
\scriptsize
\resizebox{\linewidth}{!}{%
\begin{tabular}{cccccccccccccc}
\toprule
\textbf{Dataset} & \textbf{Model} & \multicolumn{2}{c}{\textbf{Cross-Entropy}} &
\multicolumn{2}{c}{\textbf{Brier Loss}} & \multicolumn{2}{c}{\textbf{MMCE}} &
\multicolumn{2}{c}{\textbf{LS-0.05}} & \multicolumn{2}{c}{\textbf{FL-3 (Ours)}} &
\multicolumn{2}{c}{\textbf{FLSD-53 (Ours)}} \\
&& Pre T & Post T & Pre T & Post T & Pre T & Post T & Pre T & Post T & Pre T & Post T & Pre T & Post T \\
\midrule
\multirow{2}{*}{CIFAR-10/SVHN} & ResNet-110&61.71&59.66&94.80&95.13&85.31&85.39&68.68&68.68&\textbf{96.74}&\textbf{96.92}&90.83&90.97\\
& Wide-ResNet-26-10&96.82&97.62&94.51&94.51&97.35&97.95&84.63&84.66&98.19&98.05&\textbf{98.29}&\textbf{98.20}\\
\midrule
\multirow{2}{*}{CIFAR-10/CIFAR-10-C} & ResNet-110&77.53&75.16&84.09&83.86&71.96&70.02&72.17&72.18&82.27&82.18&\textbf{85.05}&\textbf{84.70}\\
& Wide-ResNet-26-10&81.06&80.68&85.03&85.03&82.17&81.72&71.10&71.16&82.17&81.86&\textbf{87.05}&\textbf{87.30}\\
\bottomrule
\end{tabular}%
}
\caption{AUROC $(\%)$ computed for models trained on CIFAR-10 (in-distribution), and using SVHN and CIFAR-10-C (Gaussian Noise corruption with severity level 5) respectively as the OoD datasets. \vspace{-3mm}}
\label{table:auroc_tab1}
\end{table*}
\begin{figure*}[!t]
\centering
\subfigure[ResNet-110 (pre-T)]{\includegraphics[width=0.24\linewidth]{cifar10_resnet110_entropy_pretemp_roc.pdf}}
\subfigure[ResNet-110 (post-T)]{\includegraphics[width=0.24\linewidth]{cifar10_resnet110_entropy_posttemp_roc.pdf}}
\subfigure[Wide-ResNet (pre-T)]{\includegraphics[width=0.24\linewidth]{cifar10_wide_resnet_entropy_pretemp_roc.pdf}}
\subfigure[Wide-ResNet (post-T)]{\includegraphics[width=0.24\linewidth]{cifar10_wide_resnet_entropy_posttemp_roc.pdf}}
\caption{ROC plots obtained from ResNet-110 and Wide-ResNet-26-10 architectures trained on CIFAR-10 (in-distribution) and tested on SVHN (OoD), both pre and post temperature scaling.}
\label{fig:ood_roc_plots}
\vspace{-1.2\baselineskip}
\end{figure*}
\vspace{-2.5mm}
\section{Conclusion}
\vspace{-2mm}
In this paper, we have studied the properties of focal loss, an alternative loss function that can yield classification networks that are more naturally calibrated than those trained using the conventional cross-entropy loss, while maintaining accuracy. In particular, we show in \S\ref{sec:focalloss} that focal loss implicitly maximises entropy while minimising the KL divergence between the predicted and the target distributions. We also show that, because of its design, it naturally regularises the weights of a network during training, reducing NLL overfitting and thereby improving calibration. Furthermore, we empirically observe that models trained using focal loss are not only better calibrated under i.i.d.\ assumptions, but can also be better at detecting OoD samples which we show by taking CIFAR-10 as the in-distribution dataset and SVHN and CIFAR-10-C as out-of-distribution datasets, something which temperature scaling fails to achieve.
\newpage
\section{Broader Impact}
Our work shows that using the right kind of loss function can lead to a calibrated model. This helps in improving the reliability of these models when used in real-world applications. It can help in the deployment of such models by alerting users when a model's predictions may not be trustworthy. We do not directly see a situation where calibrated neural networks can have a negative impact on society, but we do believe that research on making models more calibrated will help improve fairness and trust in AI.
\begin{ack}
This work was started whilst J Mukhoti was at FiveAI, and completed after he moved to the University of Oxford. V Kulharia is wholly funded by a Toyota Research Institute grant. A Sanyal acknowledges support from The Alan Turing Institute under the Turing Doctoral Studentship grant TU/C/000023. This work was also supported by the Royal Academy of Engineering under the Research Chair and Senior Research Fellowships scheme, EPSRC/MURI grant EP/N019474/1 and FiveAI.
\end{ack}
\bibliographystyle{plainnat}
|
1,108,101,563,186 | arxiv | \section{Introduction}\label{sec:intro}
Let $M$ be an $n$-dimensional compact oriented manifold and
$S$ be a subset of $\partial M$.
We denote the group of orientation-preserving diffeomorphisms
of $M$ whose restrictions to $S$ are the identity by $\mathrm{Diff} (M, S)$,
its subgroup consisting of the elements that are isotopic to
the identity by $\mathrm{Diff}_0 (M, S)$,
and the quotient group $\mathrm{Diff} (M, S)/\mathrm{Diff}_0 (M, S)$
by $\mathcal{M}(M,S)$.
For an element $f$ of $\mathrm{Diff} (M, S)$, let $[f]$ be the element of
$\mathcal{M}(M,S)$ represented by $f$.
The homomorphism $\pi_{M,S}$ from $\mathrm{Diff} (M, S)$ to $\mathcal{M}(M,S)$
defined by $\pi_{M,S} (h) = [h]$ is a surjection.
Let $\Gamma$ be a subgroup of $\mathcal{M}(M,S)$.
We call a homomorphism $s$ from $\Gamma$ to $\mathrm{Diff} (M, S)$
which satisfies $\pi_{M,S} \circ s = id_{\Gamma}$ a {\em section\/} for
$\pi_{M,S}$ over $\Gamma$.
Morita \cite{Morita} showed that the natural surjection from
$\mathrm{Diff}^2 (\Sigma_g)$ to the mapping class group $\mathcal{M}(\Sigma_g)$ of
$\Sigma_g$ has no section over $\mathcal{M}(\Sigma_g)$ when $g \geq 5$.
Markovic \cite{Markovic} (when $g \geq 6$) and
Markovic and Saric \cite{MS} (when $g \geq 2$)
showed that the natural surjection from
$\mathrm{Homeo} (\Sigma_g)$ to $\mathcal{M}(\Sigma_g)$ has no section
over $\mathcal{M}(\Sigma_g)$.
By using a method different from theirs,
Franks and Handel \cite{FK} showed that the natural surjection
from $\mathrm{Diff} (\Sigma_g)$ to $\mathcal{M}(\Sigma_g)$ has no section
over $\mathcal{M}(\Sigma_g)$ when $g \geq 3$.
Let $H_g$ be an oriented 3-dimensional handlebody of genus $g$,
which is an oriented 3-manifold constructed from a $3$-ball by attaching
$g$ 1-handles.
Let $\Sigma_g$ be an oriented closed surface of genus $g$,
so that $\partial H_g = \Sigma_g$.
The restriction to the boundary defines a homomorphism
$\rho_{\partial} : \mathrm{Diff}(H_g) \to \mathrm{Diff}(\Sigma_g)$,
and $\rho_{\partial}$ induces an injection
$\mathcal{M}(H_g) \hookrightarrow \mathcal{M}(\Sigma_g)$
since $H_g$ is an irreducible 3-manifold.
We will show:
\begin{thm}\label{thm:handle-lift}
If $g \geq 6$,
there is no section for
$\pi_{H_g} : \mathrm{Diff} (H_g) \to \mathcal{M}(H_g)$
over $\mathcal{M}(H_g)$.
\end{thm}
For contradiction, we assume that there is a section
$s : \mathcal{M}(H_g) \to \mathrm{Diff} (H_g)$.
Let $\Gamma$ be a subgroup of $\mathcal{M}(H_g)$, and
$i_{\Gamma}$ be the inclusion from $\Gamma$ to $\mathcal{M}(H_g)$.
Then $\Gamma$ is a subgroup of $\mathcal{M}(\Sigma_g)$,
and the composition $\rho_{\partial} \circ s \circ i_{\Gamma}$ is
a section for $\pi_{\Sigma_g} : \mathrm{Diff}(\Sigma_g) \to \mathcal{M}(\Sigma_g)$
over $\Gamma$.
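Explicitly, spelling out the verification: for any $[f] \in \Gamma$ we have
\[
\pi_{\Sigma_g}\bigl( (\rho_{\partial} \circ s \circ i_{\Gamma})([f]) \bigr)
= [\rho_{\partial}(s([f]))] = [f],
\]
since $s([f])$ represents $[f]$ in $\mathcal{M}(H_g)$ and the injection
$\mathcal{M}(H_g) \hookrightarrow \mathcal{M}(\Sigma_g)$ is induced by
$\rho_{\partial}$.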
Therefore, if we can find a subgroup $\Gamma$ of $\mathcal{M}(H_g)$,
over which there is no section for $\pi_{\Sigma_g}$,
then Theorem \ref{thm:handle-lift} follows.
Let $D$ be a 2-disk in $\Sigma_g$, and $\Sigma_{g,1}$ be $\Sigma_g \setminus int \, D$.
Let $c$ be an essential simple closed curve on $\Sigma_g$ such that
$\Sigma_g \setminus c$ is not connected;
then the closure of one component of $\Sigma_g \setminus c$ is
diffeomorphic to $\Sigma_{g_1, 1}$ and
the closure of the other component of $\Sigma_g \setminus c$
is diffeomorphic to $\Sigma_{g_2, 1}$.
We remark that $g = g_1 + g_2$ and $g_1, g_2 \geq 1$.
These diffeomorphisms induce injections
$\mathcal{M}(\Sigma_{g_1, 1}, \partial \Sigma_{g_1, 1}) \to \mathcal{M}(\Sigma_g)$
and $\mathcal{M}(\Sigma_{g_2, 1}, \partial \Sigma_{g_2, 1}) \to \mathcal{M}(\Sigma_g)$
(see \cite{PR}).
By these injections,
we consider $\mathcal{M}(\Sigma_{g_1, 1}, \partial \Sigma_{g_1, 1})$ and
$\mathcal{M}(\Sigma_{g_2, 1}, \partial \Sigma_{g_2, 1})$ as subgroups of
$\mathcal{M}(\Sigma_g)$.
From Theorem 1.6 in \cite{FK} proved by Franks and Handel, we see:
\begin{thm}\cite{FK}\label{thm:FK}
Let $\Gamma_1$ be a nontrivial finitely generated subgroup of
$\mathcal{M}(\Sigma_{g_1,1}, \partial \Sigma_{g_1,1})$
such that $H^1 (\Gamma_1, \mathbb{R}) = 0$,
and $\mu$ be an element of $\mathcal{M}(\Sigma_{g_2,1}, \partial \Sigma_{g_2,1})$
which is represented by a pseudo-Anosov homeomorphism on
$int\, \Sigma_{g_2,1}$.
Then there is no section for $\pi_{\Sigma_g} : \mathrm{Diff}(\Sigma_g) \to \mathcal{M}(\Sigma_g)$ over $\langle \Gamma_1, \mu \rangle$.
\end{thm}
We assume $g \geq 6$.
The 3-manifold $\Sigma_{2,1} \times [0,1]$ is diffeomorphic to $H_{4}$.
Let $D_1$ be a 2-disk in $int\, \partial \Sigma_{2,1} \times [0,1] \subset
\partial(\Sigma_{2,1} \times [0,1])$, $D_2$ and $D_3$ be
disjoint 2-disks on $ \partial H_{g-6}$, and $D_4$ be a 2-disk on $\partial H_2$.
Along these 2-disks, we glue $\Sigma_{2,1} \times [0,1]$, $H_{g-6}$ and $H_2$
such that $D_1 = D_2$ and $D_3=D_4$;
the $3$-manifold obtained as a result is diffeomorphic to $H_g$.
By the above construction, we get two natural inclusions
$\Sigma_{2,1} \times [0,1] \hookrightarrow H_g$ and
$H_2 \hookrightarrow H_g$.
These inclusions induce natural homomorphisms
$i_1 : \mathcal{M}(\Sigma_{2,1} \times [0,1], \partial \Sigma_{2,1} \times [0,1])
\to \mathcal{M}(H_g) $
and $i_2 : \mathcal{M}(H_2, D_4) \to \mathcal{M}(H_g)$.
If $[h]$ is in
$\mathcal{M}(\Sigma_{2,1} \times [0,1], \partial \Sigma_{2,1} \times [0,1])$
(resp. $\mathcal{M}(H_2, D_4)$)
represented by $h \in
\mathrm{Diff}(\Sigma_{2,1} \times [0,1], \partial \Sigma_{2,1} \times [0,1])$
(resp. $\mathrm{Diff}(H_2,D_4)$),
then $i_1([h])$ (resp. $i_2([h])$) is represented by extending $h$ to
$H_g$ using the identity mapping
on $H_g \setminus \Sigma_{2,1} \times [0,1]$
(resp. $H_g \setminus H_2$).
We define homomorphisms
$\Pi : \mathrm{Diff}(\Sigma_{2,1}, \partial \Sigma_{2,1}) \to
\mathrm{Diff}(\Sigma_{2,1} \times [0,1], \partial \Sigma_{2,1} \times [0,1])$
by $\Pi(h) = h \times id_{[0,1]}$,
and $I_1 :
\mathrm{Diff}(\Sigma_{2,1} \times [0,1], \partial \Sigma_{2,1} \times [0,1])
\to \mathrm{Diff}(H_g)$ by extending with the identity on
$H_g \setminus \Sigma_{2,1} \times [0,1]$,
then the composition $I_1 \circ \Pi$ induces an injection
$P : \mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1}) \to
\mathcal{M}(H_g)$.
By applying Corollary 4.2 of \cite{PR} to the subsurface
$\Sigma_{2,1} \times \{ 0,1 \} \subset \partial H_g$, the injectivity of $P$ is shown.
Korkmaz \cite{Korkmaz} showed that
$H_1(\mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1}), \mathbb{Z})
= \mathbb{Z}/10 \, \mathbb{Z}$, hence
$H^1 (\mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1}), \mathbb{R})
=0$. Therefore,
$\Gamma_1 = P(\mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1}))$
satisfies the assumption of Theorem \ref{thm:FK} when $g_1=g-2$, $g_2=2$.
Fathi and Laudenbach \cite{FL} constructed a
pseudo-Anosov homeomorphism $\phi$ on $\partial(H_2)$ which
is a restriction of a homeomorphism on $H_2$.
The definition of pseudo-Anosov homeomorphisms and
the terminology (e.g., singular foliations) related to them
can be found in \cite{CB}.
Any pseudo-Anosov homeomorphism preserves the set of singular points
of the singular foliation which is preserved by this homeomorphism.
Since the number of singular points of the singular foliation is finite,
a proper power of $\phi$, say $\phi^n$, fixes some points.
Let $p$ be a point fixed by $\phi^n$, then $\phi^n$ defines a
pseudo-Anosov homeomorphism on
$\partial(H_2) \setminus p = int\, \Sigma_{2,1}$.
Let $\mu$ be an element of $\mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1})
\subset \mathcal{M}(\Sigma_g)$
represented by this homeomorphism, then
$\mu$ is an element of $\mathcal{M}(H_g)$ and satisfies the assumption of
Theorem \ref{thm:FK} when $g_1=g-2$, $g_2=2$.
Then $\langle P(\mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1})), \mu \rangle$
is a subgroup of $\mathcal{M}(H_g)$ and,
by Theorem \ref{thm:FK}, there is no section for
$\pi_{\Sigma_g} : \mathrm{Diff}(\Sigma_g) \to \mathcal{M}(\Sigma_g)$ over
$\langle P(\mathcal{M}(\Sigma_{2,1}, \partial \Sigma_{2,1})), \mu \rangle$.
Therefore, there is no section
for $\pi_{H_g} : \mathrm{Diff} (H_g) \to \mathcal{M}(H_g)$
over $\mathcal{M}(H_g)$.
|
1,108,101,563,187 | arxiv | \section{Introduction}
Just as a 4-manifold can have many inequivalent smooth structures, there can be many different smooth embeddings of surfaces into a 4-manifold which are topologically isotopic, but smoothly distinct. Two embeddings of the same surface that have this property are called \emph{exotic embeddings}.
In this paper we will show that null-homologous tori first discovered by Fintushel and Stern in their knot surgery construction, in fact provide examples of exotic tori. Specifically,
\begin{theorem}\label{t:main}
Let $X$ be a smooth 4-manifold with $b_2 \geq |\sigma | + 6$, non-trivial Seiberg-Witten invariant, and an embedded torus $T$ of self intersection 0 such that $\pi_1(X \setminus T) = 1$. Then $X$ contains an infinite family of distinct tori that are topologically isotopic to the unknotted torus (a torus that bounds a solid handlebody in $X$), but not smoothly isotopic to it.
\end{theorem}
The first examples of exotic embeddings come from Fintushel and Stern's ``rim surgery'' technique \cite{FSsurf}. Their surfaces all have simply connected complement. A variation on rim surgery was given by Kim and by Kim-Ruberman, which works in the case that the complement has non-trivial fundamental group (\cite{Kim, RK, RK2}). Tom Mark has used Heegaard-Floer homology to show that these constructions are also effective for constructing exotic embeddings of surfaces with negative self intersection (\cite{T}). On the other hand, all of these constructions involve surfaces whose complement has finite first homology, and moreover all of these constructions essentially begin with symplectically embedded surfaces in a symplectic 4-manifold. Such surfaces can never be null-homologous. The significance of our examples is that they are null-homologous.
It is not difficult to satisfy the hypotheses of the theorem. For example, any elliptic surface contains such a torus and has non-trivial Seiberg-Witten invariant by virtue of being a symplectic manifold.
The strategy of proof is as follows: The knot surgery construction of Fintushel and Stern produces an infinite family of exotic smooth structures on a 4-manifold through a series of log-transforms on null-homologous tori. These are the tori we will focus on. We will define a gauge theoretic invariant to distinguish the tori smoothly. Finally, we will show that all such tori are topologically isotopic by a theorem of the second author:
\begin{theorem}[{\cite[Theorem 7.2]{N}}] \label{t:iso} Let $\Sigma_0$ and $\Sigma_1$ be locally flat embedded surfaces of the same genus in a simply connected 4-manifold $X$. The surfaces are topologically isotopic when $\pi_1(X \setminus \Sigma_i) = \mathbb{Z}$ and $b_2 \geq |\sigma| + 6$.
\end{theorem}
Presumably if the surfaces are smooth, then the surfaces are topologically isotopic without the condition that $b_2 \geq |\sigma| + 6$. If one could show that, then one could similarly drop that condition from Theorem \ref{t:main}. On the other hand, for locally flat surfaces this condition is necessary. Examples can be derived from \cite{HT} and \cite{HT2} wherein topological 4-manifolds are constructed with infinite cyclic fundamental group that are not connected sums with $S^1\times S^3$. In their examples, surgery on a loop generating $\pi_1$ will result in a surface in $n\mathbb{CP}^2$ that has cyclic fundamental group. However, this surface cannot be topologically isotopic to the trivial one, because surgery on the trivial surface results in a manifold which is a connect sum with $S^1\times S^3$. Furthermore, such a surface cannot be smoothly embedded, otherwise it would be possible to smooth the original topological 4-manifold it was derived from.
Also, one might wonder how robust these exotic embeddings are. That is, what does it take to make any of the exotically embedded topologically trivial surfaces constructed here smoothly equivalent again? In \cite{BS}, Inanc Baykur and the second author show that these tori become smoothly equivalent once one increases the genus of each of these surfaces in the most trivial possible way. Namely, tubing any one of the topologically trivial surfaces of Theorem 1.1 with a smoothly trivial surface results in a smoothly trivial surface.
We conclude this introduction with an open question.
\begin{question}
Do there exist exotically embedded surfaces in $S^4$? In particular, is there an embedded $S^2$ that is topologically isotopic to the unknot but not smoothly isotopic to the unknot?
\end{question}
If one could produce such an exotic unknot, its complement would be an exotic $S^1 \times D^3$, and surgery along it would be an exotic $S^1\times S^3$, two of the most elusive exotic creatures. The examples in this paper can be seen as prototypes for answering this sort of question: Since the tori we construct bound solid handlebodies in $X$, they are close to being exotic surfaces in $S^4$ in the sense that they are topologically isotopic to a surface lying in a ball in $X$.
\textbf{Acknowledgements:} Both authors would like to thank the Max Planck Institute for Mathematics for hosting them while they worked on this project, and Danny Ruberman and Tom Mark for their comments on an early draft of this paper.
\section{Constructing the tori}\label{s:constructing}
Let $T$ be an embedded torus with self intersection zero in a 4-manifold $X$ such that $\pi_1(X \setminus T) = 1$. Such a torus is necessarily homologically essential. We will not construct exotic embeddings of $T$, but rather we will find exotic embeddings of nearby null-homologous tori which arise in the ``knot surgery'' construction of Fintushel and Stern (\cite{FSknot} and \cite{Fknot}). Knot surgery along torus $T$ using a knot $K\subset S^3$ is most straightforwardly defined as $X_K = (X \setminus \nu (T)) \cup (S^1\times S^3\setminus \nu (K))$ where the union is formed by taking the longitude of $K$ to the meridian of $T$ (apart from this requirement, the gluing is not, strictly speaking, well defined, but this is in general irrelevant). Fintushel and Stern proved that $X$ is homeomorphic to $X_K$ under the assumption that the complement of $T$ is simply connected, and they further proved that their Seiberg-Witten invariants are related by $SW_{X_K} = SW_X \cdotp \Delta_K(2[T])$ where $\Delta_K$ is the Alexander polynomial for $K$. Therefore, by varying $K$, one can construct infinitely many smooth structures on $X$. The Seiberg-Witten formula is proved by viewing knot surgery as a series of log-transforms on null-homologous tori. That is, rather than cutting out $\nu(T) = S^1 \times (S^1 \times D^2)$ and replacing it with $S^1\times S^3\setminus \nu (K)$, we can view knot surgery as a series of log-transforms in $S^1 \times (S^1 \times D^2)$ which eventually lead to $S^1\times S^3\setminus \nu (K)$. Forgetting the extra $S^1$ direction for the moment, one can go from $S^3\setminus \nu (K)$ to $(S^1 \times D^2)$, the complement of the unknot, by doing $\pm 1$ surgery along crossings of $K$ to unknot it. See Figure \ref{f:knotsurgery}. Crossing this whole picture with $S^1$ gives the log-transforms needed for knot surgery.
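For concreteness, a standard example (well known, and not specific to the tori constructed here): take $X$ to be the K3 surface, whose Seiberg-Witten invariant consists of the single basic class $0$ with value $1$, and let $K$ be the trefoil, with $\Delta_K(t) = t - 1 + t^{-1}$. In the usual Laurent-polynomial shorthand with $t = \exp([T])$, the formula above gives
\[
SW_{X_K} = \Delta_K(t^2) = t^{2} - 1 + t^{-2},
\]
so $X_K$ has basic classes $\pm 2[T]$ (and $0$ with value $-1$), and is therefore not diffeomorphic to K3, although it is homeomorphic to it.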
\begin{figure}
\labellist
\small\hair 2pt
\pinlabel {$\mathcal{T}_K$} at 168 140
\pinlabel {\LARGE{$\times S^1$}} at 260 82
\pinlabel {$K$} at 18 29
\endlabellist
\includegraphics{knotsurgery}
\caption{\label{f:knotsurgery} +1 surgery on $\mathcal{T}_K$ unknots $K$.}
\end{figure}
Suppose for the moment, that $K$ is a knot of unknotting number 1. Then knot surgery is equivalent to a single log-transform on a null homologous torus $\mathcal{T}_K$. As long as the complement of $T$ is simply connected, then $\mathcal{T}_K$ will have $\pi_1(X\setminus \mathcal{T}_K) = \mathbb{Z}$. This is because \[ \pi_1(X_K \setminus \mathcal{T}_K) = \frac{\pi_1(S^1 \times S^3\setminus (\nu K \cup \mathcal{T}_K))}{\langle S^1 \times pt, \mu_K,\lambda_K \rangle} \]
where $\mu_K$ and $\lambda_K$ are respectively the meridian and longitude of $K$. This implies that all loops are homotopic to a multiple of the meridian of $\mathcal{T}_K$.
Already we see that this gives at least one exotically embedded torus. Specifically, $\mathcal{T}_K$ is topologically standard by Theorem \ref{t:iso}, and moreover, performing a log-transform on $\mathcal{T}_K$ will give an exotic smooth structure on $X$, whereas performing a log-transform on the standardly embedded torus, (i.e. the one that bounds a solid handlebody), will not. Therefore these tori are smoothly distinct, but by Theorem \ref{t:iso} they must be topologically isotopic.
To construct infinite families of exotic surfaces, we need to be more careful. For instance, supposing that $K_i$ is the $i$-th twist knot, it might be possible to construct $X_{K_i}$ for any $i$ via some log-transform on $\mathcal{T}_K$. (The effect of $\frac{1}{n}$-log transforms on $\mathcal{T}_K$ is explored in \cite{FStori}.) In other words, it is not always straightforward to distinguish the exotic tori that arise from different knots. To resolve this issue, we have to look more deeply at how the Seiberg-Witten invariant changes under log-transforms on $\mathcal{T}_K$, and restrict ourselves to certain classes of knots.
\section{Smooth invariants of null-homologous tori}
The Seiberg-Witten invariant of a 4-manifold $X$ is a map $SW_X : \mathcal{S}\longrightarrow \mathbb{Z}$, where $\mathcal{S}$ is the set of isomorphism classes of $spin^{\text{c}}$ structures on $X$. The \emph{basic classes} of $X$ are defined to be the $spin^{\text{c}}$ structures that map to non-zero integers. It is a well known property of the Seiberg-Witten invariant that a closed 4-manifold has only a finite number of basic classes. Below, we will often not distinguish between a $spin^{\text{c}}$ structure and its first Chern class or even the Poincar\'e dual of its first Chern class.
We will distinguish our null-homologous tori by computing an invariant that is, in a technical sense clarified below, related to the Seiberg-Witten basic classes of the complement of the tori. To do this we will need to understand how the Seiberg-Witten invariant of a 4-manifold is affected by log-transforms. Suppose we are given a 4-manifold with $T^3$ boundary, e.g. $X\setminus \nu T$, and suppose $H_1(T^3) = \mathbb{Z}[a,b,c]$. Denote the log-transformed 4-manifold constructed by gluing on a $D^2\times T^2$, where $[D^2]$ is glued to $[pa+qb+rc]$ as $X_T{(p,q,r)}$, and denote the core torus in the $D^2\times T^2$ part of this manifold as $T_{(p,q,r)}$.
A formula of Morgan, Mrowka, and Szab\'o from \cite{MMS} relates the Seiberg-Witten invariants of the various log-transforms:
\begin{align*} \sum_i & SW_{X_{T}(p,q,r)}(k_{(p,q,r)} + i[T_{(p,q,r)}]) = p\sum_i SW_{X_{T}(1,0,0)}(k_{(1,0,0)} + i[T_{(1,0,0)}]) \\
& +q\sum_i SW_{X_{T}(0,1,0)}(k_{(0,1,0)} + i[T_{(0,1,0)}]) + r\sum_i SW_{X_{T}(0,0,1)}(k_{(0,0,1)} + i[T_{(0,0,1)}])
\label{e:eq1}
\end{align*}
\noindent where the $k_{(a,b,c)}$ are $spin^{\text{c}}$ structures that are equivalent on $X \setminus T$ and are trivial on the log-transformed torus $T_{(a,b,c)}$. The sums here are intended to indicate summing over all $spin^{\text{c}}$ structures on $X_{(a,b,c)}$ which restrict to $k_{(a,b,c)}$ on $X\setminus T$. In particular, if $T_{(p,q,r)}$ is null-homologous, then the left hand side of the equation has only one term.
Moreover, since there are a finite number (not depending on $(p,q,r)$) of $k_{(1,0,0)}$ such that the sum $\sum_i SW_{X_T(1,0,0)}(k_{(1,0,0)} + i[T_{(1,0,0)}])$ is not zero (respectively for $k_{(0,1,0)}$ and $k_{(0,0,1)}$), according to the MMS formula there is a fixed, finite number of homology classes that can be basic classes for $X_T(p,q,r)$ in the case that $T_{(p,q,r)}$ is null-homologous. To put this another way, there are only a finite number of $spin^{\text{c}}$ structures on $X\setminus \nu T$ that can be extended to basic classes on $X_T(p,q,r)$ when $[T_{(p,q,r)}]=0$. Therefore, the following invariant is well defined:
\begin{definition}
Let $T$ be a null-homologous torus in $X$. Define $B(X,T)$ to be the maximum divisibility of the difference between any two basic classes of $X_{T}(p,q,r)$ for any $(p,q,r)$ such that $[T_{(p,q,r)}] = 0$.
\end{definition}
\section{Families of unknotting number one knots, and the proof of Theorem 1}
Now that we have a better understanding of the smooth invariants needed to distinguish potential infinite families of smooth tori, we can describe an explicit family of knots that will give rise to smoothly distinct $\mathcal{T}_K$.
For the purposes of this paper, it will be sufficient to focus on a nice family of two-bridge knots. All two-bridge knots can be given in the form of Figure \ref{f:twobridge} where $a_i$ is the number of right half-twists when $i$ is odd, and left half-twists when $i$ is even. We refer to two-bridge knots using Conway's notation, $C(a_0,\ldots,a_m)$, and we note that it is well known (see \cite{BZ} for instance) that the two-bridge knots $C(a_0,\ldots,a_m)$ and $C(a'_0,\ldots,a'_{m'})$ are equivalent if and only if $[a_0,\ldots,a_m]$ and $[a'_0,\ldots,a'_{m'}]$ are continued fraction expansions of the same rational number.
\begin{proposition}[Kanenobu-Murakami \cite{KM}]\label{p:unknotprop}
A two-bridge knot has unknotting number one if and only if it can be expressed as
$$C(b,b_1,b_2,\ldots ,b_k,\pm 2, -b_k,\ldots, -b_2,-b_1).$$
\end{proposition}
\begin{figure}
\labellist
\small\hair 2pt
\pinlabel {$a_0$} at 43 18
\pinlabel {$a_1$} at 92 53
\pinlabel {$a_2$} at 146 18
\pinlabel {$a_3$} at 192 53
\pinlabel {$a_{2n-1}$} at 286 53
\pinlabel {$\ldots$} at 240 32
\endlabellist
\includegraphics{twobridge}
\caption{The two-bridge knot $C(a_0,\ldots,a_{2n-1})$.}
\label{f:twobridge}
\end{figure}
The following proposition of Burde-Zieschang is stated in terms of our convention for presenting two-bridge knots as in the Figure \ref{f:twobridge}. The two-bridge knot diagram in Figure \ref{f:twobridge} can be converted to a 4-plat diagram in Burde-Zieschang by pulling the inner strand on the right hand side of the figure over the outer strand. This has the effect of adding a new crossing (i.e. $a_{2n}=+1$) and adjusting $a_{2n-1}$ by $+1$.
\begin{proposition}[Burde-Zieschang {\cite[Proposition 12.23]{BZ}}]\label{p:conwayprop}
The Conway polynomial of a two-bridge knot expressed as $C(a_0,\ldots , a_{2n-1})$ has degree $\sum_{i=0}^{n-1} |a_{2i}|$.
\end{proposition}
\begin{lemma}
There exists an infinite family of unknotting number one knots whose Alexander polynomials have arbitrarily high degree.
\end{lemma}
\begin{proof}
Combining Propositions \ref{p:unknotprop} and \ref{p:conwayprop} shows that there exists an infinite family of two-bridge knots of unknotting number one such that the Conway polynomial has arbitrarily high degree. The lemma is thus immediate from the fact that the Conway polynomial is related to the Alexander polynomial by the formula $\nabla (t-t^{-1}) = \Delta (t^2)$.
\end{proof}
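For the reader tracking degrees in the last step: if $\nabla_K(z) = \sum_{j=0}^{d} c_j z^j$ with $c_d \neq 0$, then
\[
\Delta_K(t^2) = \nabla_K(t - t^{-1}) = c_d t^{d} + (\textrm{terms of lower degree in } t),
\]
so the breadth of the Alexander polynomial grows linearly with the degree $d$ of the Conway polynomial, and an unbounded Conway degree forces an unbounded Alexander degree.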
\begin{proof}[Proof of Theorem \ref{t:main}]
Let $\{ K_i \}$ be a sequence of knots of unknotting number 1 such that the degree of their Alexander polynomials goes to infinity, and let $\mathcal{T}_{K_i}$ be the associated (topologically trivial) tori from Section \ref{s:constructing}.
Since there is a log-transform on $\mathcal{T}_{K_i}$ that gives $X_{K_i}$, we have that
\begin{align*}
\lim_{i\to \infty} B(X,\mathcal{T}_{K_i}) \geq& \lim_{i\to \infty} \left( \begin{array}{ll}
\text{max divisibility of the difference} \\
\text{between any two basic classes of } X_{K_i}
\end{array} \right) \\
\geq& \lim_{i\to \infty} deg(\Delta_{K_i}) = \infty.
\end{align*}
Therefore, there are an infinite number of the $\mathcal{T}_{K_i}$ that are smoothly distinguished by their $B$ invariant.
\end{proof}
|
1,108,101,563,188 | arxiv | \section{Introduction}
Speculations about the composite nature of leptons and quarks have
been developed over a long period of time \cite{general}. With the
help of compositeness of fundamental fermions one could hope to
understand a number of principal features of the Standard Model
scheme, such as the structure of fermion generations, the mass spectrum
of fermions, and the symmetry breaking scenario.
A large number of phenomenological studies of the possibility of
observing the signatures of compositeness at the new generation of
$e^+ e^-$, $ep$ and $p \bar p$ colliders exist (see
\cite{ee,ep,pp}
and references therein). One can imagine a simplified picture
in which leptons and quarks consist of some pointlike particles (preons)
bound by some new interaction (the metacolor force) which is presumably confining
and
becomes strong at some energy scale $\Lambda$. If at the new colliders the
momentum transfer exceeds $\Lambda$, leptons and quarks would interact
in a manner completely different from their pointlike low energy
structure, showing directly the hard
scattering processes of the constituents. At energies less
than $\Lambda$ one could observe some indications of the constituent
dynamics (residual effective interactions) and describe this regime
in the framework of some effective lagrangian approach. This effective
lagrangian is given by a Standard Model lagrangian and some
operators of higher dimension involving the fields of the SM. For
instance, the simplest effective term of this type is given by the
dimension-six four-fermion operator $(\bar \psi \gamma_{\mu} \psi)(\bar \psi \gamma^{\mu}
\psi)$ multiplied by $g^2/\Lambda^2$, giving an effective term
of the correct dimension four. The strength of such nonrenormalisable
effective interactions is determined by a dimensionless coupling $g$
and powers of the compositeness scale $\Lambda$.
\section{Distributions in the Standard Model with $LL-$ contact term
for the process $e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$}
\subsection{Parametrization of contact interactions}
We are using helicity conserving contact interactions of the form
\cite{Eichten}
\begin{equation}
L_c=\frac{g^2}{2\Lambda^2}(\eta_{LL} \bar \psi_L \gamma_{\mu} \psi_L
\bar \psi_L \gamma^{\mu} \psi_L
+ \eta_{RR} \bar \psi_R \gamma_{\mu} \psi_R
\bar \psi_R \gamma^{\mu} \psi_R
+ 2 \eta_{LR} \bar \psi_R \gamma_{\mu} \psi_R
\bar \psi_L \gamma^{\mu} \psi_L)
\end{equation}
where $g^2/4\pi=1$, $|\eta|=1$ and $\psi_{L,R}=(1 \mp \gamma_5)\psi/2$.
In the case of positive $\eta$ the
first and second terms are denoted by $LL+$ and $RR+$; if $\eta$ is
negative, they are denoted by $LL-$ and $RR-$, respectively \cite{ee}.
A particular choice of the $\eta_i$ gives the $VV$ and $AA$ (vector-vector and
axial-axial) current interactions.
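For reference, the identification can be checked directly from the chiral decomposition $\bar \psi \gamma_{\mu} \psi = \bar \psi_L \gamma_{\mu} \psi_L + \bar \psi_R \gamma_{\mu} \psi_R$ and $\bar \psi \gamma_{\mu} \gamma_5 \psi = \bar \psi_R \gamma_{\mu} \psi_R - \bar \psi_L \gamma_{\mu} \psi_L$: expanding the products, the choice $\eta_{LL}=\eta_{RR}=\eta_{LR}=\pm 1$ reproduces the $VV$ structure, while $\eta_{LL}=\eta_{RR}=-\eta_{LR}=\pm 1$ reproduces the $AA$ one.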
In the following we choose the $LL-$ contact term. No qualitative
difference in the results appears if we choose any other of the
six possible variants.
for the reactions $e^+ e^- \rightarrow e^+ e^-,\, e^+ e^- \rightarrow
\mu^+ \mu^-$ showed that the effect of $LL$ and $RR$ terms is typically
several times smaller than the effect of $VV$, $AA$ terms.
\begin{figure}[t]
\begin{center}
\input{fd_1.tex} \\
\input{fd_3.tex} \\
\input{fd_2.tex}
\end{center}
\caption{Subset of 10 t-channel diagrams for the process
$e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$. SM diagrams are in
the first row; in the second and third rows the X-particle exchange
corresponds to some contact interaction.}
\end{figure}
\subsection{Search strategies and kinematical cuts}
Careful analysis is necessary for the definition of the signal versus
the background in the four-fermion final state in Standard Model
$W$ and Higgs boson production \cite{4f}.
Usually it is more difficult to separate the small signal of new physics,
strongly restricted by the data from independent experiments, in the exclusive
multiparticle final state.
One can propose two search strategies for contact terms. In the framework of
the first strategy we impose loose kinematical cuts on the four-fermion
final state; the number of identifiable events is large, the
contribution
of the contact term in addition to the SM distribution is small, but the
statistical error is also small, and one can hope to observe a deviation
from the SM cross section in a high-statistics experiment. Especially
interesting is the case in which the interference of the SM and $LL-$
amplitudes is large. In the framework
of the second strategy we impose stringent kinematical cuts; the number
of events is very small, but the contribution of the contact term in addition
to the SM distribution can be large, and
one can hope to observe a large deviation in an experiment with
a small number of events. Generally speaking, it is difficult to say in
advance which strategy would be better.
For the first and second strategy we are using the following
cuts:
\noindent {\bf Set I (Loose cuts):}\\
muon pair mass cut $M(\mu^+ \mu^-) \geq$ 30, 60, 85 GeV (three cases)\\
final muon energy cut $E \geq$ 10 GeV\\
final muon angle with the beams $\vartheta \geq$ 10 degrees
\noindent {\bf Set II (Strong cuts):}\\
muon pair mass cut $M(\mu^+ \mu^-) \geq$ 30, 60, 85 GeV (three cases)\\
electron pair mass cut $M(e^+ e^-) \geq$ 3.16 GeV \\
electrons angular cut with the beam $\vartheta \geq$ 10 degrees \\
final lepton energy cut $E \geq$ 10 GeV
Set I corresponds to a "no-tag" experiment, in which the forward and backward
electrons at very small angles (less than 0.1 degree) in the dominant
final state configuration are not detected.
\subsection{Total cross sections and distributions}
In the SM with $LL-$ contact term 110 tree level diagrams for the process
$e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$ can be generated. In order to
optimize the calculation procedure we separate them into subsets. Each
subset contains a subgraph corresponding (with in- and out-particles taken
on-shell) to some gauge-invariant process of lower order. A detailed
description of this procedure can be found in \cite{subsets}.
We select the subset of two diagrams with t-channel photons (multiperipheral
diagrams) in the SM case and for the case of contact terms we add to them
eight diagrams with one t-channel photon and one contact interaction
vertex (10 diagrams, see Fig.1).
The contributions from these subsets are, generally speaking, not
always dominant in the
complete tree-level set (under some conditions
single-resonant diagrams with a $Z$ boson in the s-channel are not small),
but they are usually about one order of magnitude larger than the others.
The calculation of multiperipheral amplitudes containing t-channel
photons is known to be highly nontrivial \cite{vermaseren}, especially in
the case when no cuts are imposed on the final electrons (a "no-tag"
experiment; the total rate is finite because $m_e \neq 0$) and the gauge
cancellations between diagrams are extremely strong. We
used CompHEP 3.2 \cite{comphep} and tested the results by means of
EXCALIBUR \cite{excalibur}.
In CompHEP the numerical stability of the cancellation of the double poles
$1/t^2$ down to single ones is preserved by using quadruple
precision and special algorithms of phase space generation
\cite{ilyin}.
At the compositeness scale $\Lambda=$1 TeV and the energy $\sqrt{s}=$
200 GeV total cross sections
in $pb$ for the process $e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$ are
shown in Table 1.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{$\sigma(e^+e^- \rightarrow e^+e^-\mu^+\mu^-)$, pb,
{\bf set I} (loose cuts) } \\ \hline
$M(\mu \mu)$ cut (GeV) & 30 & 60 & 85 \\ \hline
SM & 4.165 & 0.527 & 0.135 \\
SM+$LL-$ & 4.180 & 0.535 & 0.142 \\
deviation in \% & 0.4 & 1.5 & 4.9 \\
$N$, see (2) & 62500 & 4400 & 400 \\
\hline
\hline
\multicolumn{4}{|c|}{$\sigma(e^+e^- \rightarrow e^+e^-\mu^+\mu^-)$, pb,
{\bf set II} (strong cuts) } \\ \hline
$M(\mu \mu)$ cut (GeV) & 30 & 60 & 85 \\ \hline
SM & 1.4$*10^{-2}$ & 0.56$*10^{-2}$ &
0.24$*10^{-2}$ \\
SM+$LL-$ & 1.6$*10^{-2}$ & 0.66$*10^{-2}$ &
0.29$*10^{-2}$ \\
deviation in \% & 14 & 18 & 21 \\
$N$, see (2) & 50 & 30 & 20 \\
\hline
\end{tabular}
\end{center}
\caption{}
\end{table}
We used a very rough criterion (similar to the one adopted in
\cite{Schrempp}) that the number of events $N$ needed to observe
the $\delta \sigma/\sigma$ fractional deviation from the SM cross section
can be estimated by using the relation
\begin{equation}
\frac{\delta \sigma}{\sigma} \sim \frac{1}{\sqrt{N}}
\end{equation}
i.e. $N$ is of the order of the inverse fractional deviation squared. It follows from
the Table that at the optimistic LEP2 integrated luminosity of 500 $pb^{-1}$
the effect of a $\Lambda=$ 1 TeV $LL-$ contact term cannot be observed in
the total rate. For instance, the 21\% effect in the case $M(\mu^+ \mu^-) \geq$
85 GeV, set II, requires the identification of 20 events, while at LEP2
luminosity we have only one event per year.
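The bookkeeping behind these statements is easily reproduced; the following short script (an illustrative sketch, with cross sections copied from Table 1 and the luminosity value quoted above) compares the number of events required by criterion (2) with the number actually expected:
\begin{verbatim}
# Sketch: required vs. expected events for the LL- term (numbers from Table 1).
luminosity = 500.0  # optimistic LEP2 integrated luminosity, pb^-1

cases = {  # label: (sigma_SM in pb, fractional deviation)
    "set I,  M > 85 GeV": (0.135, 0.049),
    "set II, M > 85 GeV": (0.24e-2, 0.21),
}

for label, (sigma_sm, dev) in cases.items():
    n_needed = dev ** -2                 # criterion (2): N ~ (dev)^-2
    n_expected = sigma_sm * luminosity   # events collected at this luminosity
    print(f"{label}: need ~{n_needed:.0f} events, expect ~{n_expected:.1f}")
\end{verbatim}
For the set II case this gives about twenty required events against roughly one expected event, in agreement with the estimate above.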
As usual in this situation, we inspected the influence of contact terms
on the shape of various distributions, hoping that in the distributions
the effect could be much more pronounced if some phase space region is
controlled strongly by contact interaction dynamics. We calculated
and compared distributions over muon pair invariant mass
$d\sigma/dM_{\mu \bar \mu}$, muon angle
$d\sigma/d\vartheta_{\mu}$, muon transverse momentum $d\sigma/dp_{t \,
\mu}$, and muon energy $d\sigma/dE_{\mu}$. In this set of distributions
for all cases the $LL-$ term effect looks like a rather uniform background
that does not change significantly over the whole physical region of the process.
For instance,
we show the distributions over muon angle in Fig.2. Their forward-
backward structure is of course completely
different from the central structure of the $2 \rightarrow 2$ body reaction $e^+
e^- \rightarrow \mu^+ \mu^-$ \cite{Eichten,Schrempp} (where only the partial
wave with angular
momentum zero contributes), but the shape of the $2 \rightarrow 4$ body
distribution $e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$ with the contact term
is similar to the standard distribution.
\unitlength=0.60pt
\begin{figure}[h]
\begin{minipage}[h]{65mm}
\input{fig2.tex}
\end{minipage}
\hfill
\begin{minipage}[h]{65mm}
\input{fig3.tex}
\end{minipage}
\caption{Left figure - $d\sigma/d\vartheta_{\mu}$, Standard Model, set II;
right figure - $d\sigma/d\vartheta_{\mu}$, SM+$LL-$ contact term
($\Lambda=1$ TeV, set II) }
\end{figure}
We calculated also the fractional deviations $d\sigma_{LL-}/d\sigma_{SM}$
for the cases of loose (set I) and strong (set II) cuts.
The accuracy of our Monte Carlo (MC) calculation of the total rate is
around
0.5\%. The accuracy of distributions is quite satisfactory for the most
important regions of the phase space (several percent in one bin). The
error in the ratio of distributions is of course more sensitive to these
statistical uncertainties. Fig.3 shows that the accuracy of our MC is not
sufficient to show the 0.4\% effect in the ratio $(d\sigma_{LL-}/dM)/(
d\sigma_{SM}/dM)$ for the case of loose cuts. For practical
purposes we do not need such a precise calculation, insofar as at LEP2 only
2000 events could be observed while about 60000 are necessary (see Table 1). The
effect of the contact term
could be clearly separated (Fig. 4) in the same ratio for the case of
strong cuts
(set II), but here we need a luminosity of order $10^4$ $pb^{-1}$ for
experimental observation.
\section{Distributions in the Standard Model with $LL-$ contact term
for the process $e^- p \rightarrow e^- \mu^+ \mu^- X$}
In the case of deep inelastic scattering we are using the MRS
parametrization of proton structure functions \cite{MRS},
developed on the basis of the latest experimental data from HERA.
Available parametrizations of proton structure functions
can be used only at sufficiently large $Q^2$, so
the calculations for the process $e^- q \rightarrow e^- \mu^+ \mu^- q$
were performed applying a $|Q|= 3$ GeV cut for the momentum
transferred from the constituent quark.
The muon energy cut is equal to 10 GeV, and we used 30 GeV for the muon pair
invariant mass cut. For HERA $ep$ collider the energy $\sqrt{s}=314$ GeV,
the electron-positron center of mass system is moving in the laboratory
system with the rapidity $y=1.654$
and the integrated luminosity at present time is several $pb^{-1}$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
initial state&$eu$&$ed$&$e\bar u$&$e\bar d$&$es,e\bar s$&$ec,e\bar c$ &
total
\\ \hline
SM & 35.88& 3.24 & 1.16 & 0.52& 0.46 & 0.50& 41.76 \\
\hline
SM+$LL-$ & 36.19& 3.25 & 1.17 & 0.52& 0.47 & 0.50& 42.10
\\
\hline
\end{tabular}
\end{center}
\caption{Total cross sections ({\it fb}) for partonic species in the
process
$ e^- p \rightarrow e^- \mu^+ \mu^- X$, $q=u,d,s,c$,
$M(\mu^+ \mu^-) \geq 30$ GeV, $\Lambda=1$ TeV. }
\end{table}
\newpage
\unitlength=0.60pt
\begin{figure}[h]
\begin{minipage}[h]{70mm}
\input{fig4.tex}
\end{minipage}
\hfill
\begin{minipage}[h]{70mm}
\input{fig5.tex}
\end{minipage}
\caption{Left figure -
ratio $d\sigma_{LL-}/d\sigma_{SM}$ for the muon pair invariant
mass, $\Lambda=1$ TeV, set I; right figure -
ratio $d\sigma_{LL-}/d\sigma_{SM}$ for the muon pair angle,
$\Lambda=1$ TeV, set I. The error of Monte Carlo calculation
in the ratio is indicated. }
\end{figure}
\begin{figure}[h]
\begin{minipage}[h]{70mm}
\input{fig6.tex}
\end{minipage}
\hfill
\begin{minipage}[h]{70mm}
\input{fig7.tex}
\end{minipage}
\caption{ Left figure -
ratio $d\sigma_{LL-}/d\sigma_{SM}$ for the muon pair invariant
mass, $\Lambda=1$ TeV, set II; right figure -
ratio $d\sigma_{LL-}/d\sigma_{SM}$ for the muon angle,
$\Lambda=1$ TeV, set II. The error of Monte Carlo calculation in the
ratio is indicated. }
\end{figure}
Total cross sections for valence and sea quarks are shown in Table 2.
Similarly to the $e^+ e^-$ case with loose cuts (set I), the contribution of
the contact term in $ep$ scattering is very small. According
to criterion (2), in order to observe a deviation in the total rate of order
1\% it is necessary to identify approximately $10^4$ events, while even
at an upgraded high-luminosity HERA ($L \sim 10^2 \, pb^{-1}$) it would be
possible to observe only of order $10^1$ events. We show the
fractional deviations of the muon pair invariant mass distribution and
muon angle distribution in Fig.5. In the distributions the
effect is also practically unobservable.
\section{Conclusion}
We calculated the effect of the $LL-$ contact term in the four-fermion channel
$e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$ at the energy 200 GeV.
Search strategies with loose and strong cuts imposed on the final state
were considered.
\newpage
\begin{figure}[h]
\begin{minipage}[h]{70mm}
\input{fig8.tex}
\end{minipage}
\hfill
\begin{minipage}[h]{70mm}
\input{fig9.tex}
\end{minipage}
\caption{Left figure -
ratio $d\sigma_{LL-}/d\sigma_{SM}$ for the muon pair
invariant mass, $\Lambda=1$ TeV; right figure -
ratio $d\sigma_{LL-}/d\sigma_{SM}$ for the muon angle,
$\Lambda=1$ TeV.}
\end{figure}
In the case of loose cuts (set I)
at the compositeness scale 1 TeV
the difference in the total rates is around 1\%. It would hardly be
possible to observe the deviations from the SM in the distributions.
In the case of strong cuts (set II) the effect of contact terms is much
more
pronounced: it is of order 20\% in the total rate and could be clearly
observed in the distributions,
but the number of events at the LEP2 luminosity of several hundred $pb^{-1}$
is
too small. Separation of the contact
term is possible at the integrated luminosity of order 10 $fb^{-1}$.
The deviation from the SM distributions caused by contact terms is
rather uniform, and in all cases considered it looks like a
constant-level bias over the whole physical region.
At the compositeness scale 4 TeV the difference between the SM and SM+$LL-$
distributions in the same four-fermion channel decreases approximately by
one order of magnitude.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|c|} \hline
& $\sigma_{tot}$ (pb) & deviation in & deviation in \\
& SM & $\sigma_{tot}$ & $d\sigma
/dcos\vartheta_{\mu}$ \\
\hline
$e^+ e^- \rightarrow \mu^+ \mu^-$ & 3.0 & about 300\% & up to 300\% \\
$e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$, set I & 4.2 & 0.4\%& negligible \\
$e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$, set II& 2*$10^{-2}$& 15\% & up
to 50\% \\ \hline
\end{tabular}
\end{center}
\caption{Typical deviations in the total rate and muon angular distribution
of the reactions $e^+ e^- \rightarrow \mu^+ \mu^-$ and
$e^+ e^- \rightarrow e^+ e^- \mu^+ \mu^-$ caused by $LL-$
contact term at the energy ${\protect \sqrt{s}=200}$ GeV and
compositeness scale $\Lambda=$ 1 TeV. Muon pair mass $M_{\mu
\mu} \geq$ 30 GeV in the cases of sets I and II.}
\end{table}
We calculated also the "four-fermion" channel $e^- q \rightarrow e^-
\mu^+ \mu^- q$ at the energy of HERA $\sqrt{s}=314$ GeV. The effect
of the contact term in the total rate at the compositeness scale 1 TeV is
about 1\%. Again, it would hardly be possible to observe any deviations
from the SM distributions.
The four-fermion channel considered by us does not show critical new
advantages over the possibilities of the compositeness search
considered earlier \cite{ee}. We compare the magnitude of the effect for the
reactions $e^+ e^- \rightarrow \mu^+ \mu^-$ and $e^+ e^- \rightarrow
e^+ e^- \mu^+ \mu^-$ in Table 3. The discovery potential of four-fermion
reactions is critically dependent on the collider luminosity.
\begin{center}
{\bf Acknowledgement}
\end{center}
The research of M.D. was partially supported by RFBR 96-02-19773a,
St-Pb. grant 95-0-6.4-38 and INTAS 93-1180ext.
|
1,108,101,563,189 | arxiv | \section{Introduction}
The charging of an object in a plasma is one of the basic problems in plasma physics. The understanding of this process is important in studies of interactions between the plasma and the object, or between many objects in a plasma. The question is particularly important in studies of dusty plasmas, where a number of charged dust grains can form ordered structures, such as dust clusters, strings, and crystals \cite{Shukla_Mamun_2002,Vladimirov_Ostrikov_2005, Ishihara_2007}.
In dusty plasma experiments, grains are usually levitated in the sheath region above the electrode, and they are charged negatively due to the high mobility of electrons. Such grains can be exposed to an ion flow. A wake and characteristic regions of enhanced ion density (ion focus) are observed behind dust grains immersed in flowing plasmas \cite{Vladimirov_Nambu_1995,Melzer_Schweigert_1996, Ivlev_Morfill_1999, Miloch_Pecseli_Trulsen_2008}. The ion focusing and the corresponding potential enhancement are more conspicuous for supersonic flows \cite{Miloch_Pecseli_Trulsen_2008}, and can lead to the alignment of grains in the direction of the flow \cite{Melzer_schweigert_1999, Maiorov_Vladimirov_2000,Vladimirov_Maiorov_2003a, Hebner_Riley_2004, Samarian_Vladimirov_2005}. Analogous problems can also be formulated for larger objects moving with respect to a plasma, such as spacecraft or meteoroids \cite{Svenes_Troim_1994,Melandso_Goree_1995}.
In a space environment, dust is often exposed to electromagnetic radiation \cite{Horanyi_1996}. Radiation can be directional or isotropic, either due to background radiation or scattering of directed light \cite{Hayakawa_Yamashita_1969}. The situation is relevant not only for dust in space, but also for dusty surfaces of larger lunar bodies. In the latter case, the dust on the surface is charged by the plasma and directed solar radiation. It has been argued that the shadowing of light can lead to strong electric fields and transport the dust above the lunar surface \cite{Wang_Horanyi_2007}. In laboratory plasmas, the radiation is either due to the plasma glow or an external light source, thus it is either isotropic or directed, similarly to the space environment \cite{Sickafoose_Colwell_2000}.
If the energy of incoming photons is larger than the work function of the dust surface material, the photo\-electron current contributes to the net current to the grain and should be included in the charging analysis \cite{Ishihara_2007, Weingartner_Draine_2001, Klumov_Vladimirov_2005, Klumov_Vladimirov_2007}.
In several respects the physics of this process resembles that for electron emissive probes \cite{Schrittwieser_Ionita_2008}.
The differences between the two physical processes are found in the mechanisms for the electron emissions and to some extent also in the velocity distributions of the emitted electrons.
Photo\-emission will change the total charge on the dust grain and the surface charge distributions, and it can lead to new types of interactions between dust grains. Structures comprising positively charged dust grains in a plasma in the presence of UV radiation have been discussed theoretically \cite{Rosenberg_Mendis_1995, Rosenberg_Mendis_1996}, and observed in experiments \cite{Fortov_Nefedov_1998, Samarian_Vaulina_2000}.
A theory describing the charging of dust grains with photo\-emission in a self-consistent way is difficult to develop. In particular, photo\-electrons can modify the plasma in the vicinity of dust grains. Several theoretical studies consider simplified models \cite{Rosenberg_Mendis_1996, Khrapak_Nefedov_1999, Ostrikov_Yu_2001}. To study more realistic problems, one should employ numerical simulations, which can account for non-linear and other possible phenomena, and model the charging of dust in a plasma in a self-consistent way. By numerical simulations, it was demonstrated that the charge of the dust cloud in a plasma discharge can be modified by UV radiation \cite{Land_Goedheer_2007}. The potential structures around a positively biased spacecraft were also studied numerically \cite{Engwall_Eriksson_2006}. In these works the electrons were treated as a Boltzmann distributed background. Neither of these studies considered the self-consistent charging of isolated objects.
One of our intentions here is to provide a more realistic model by including electrons (photo\-electrons in particular) in the analysis. In a recent communication, we have demonstrated that UV radiation allows for an accurate control of the charge on an isolated conducting grain \cite{Miloch_Vladimirov_2008}. We have also shown that photo\-electrons can modify and polarize the surrounding plasma, enabling stronger interactions between positively charged conducting dust grains.
In the present paper, we study numerically the charging of conducting or alternatively insulating dust grains in a supersonic plasma flow with a directed photon flux. We analyze the charge, density and potential distributions for different fluxes and energies of photons, and for different angles between the incoming unidirectional photons and the plasma flow velocity vector. Continuous as well as pulsed radiation is considered. Using a particle-in-cell (PIC) method, we simulate the entire charging process in a collision\-less plasma. The simulations are carried out in two spatial dimensions, treating ions and electrons as individual particles. We consider unidirectional radiation, which is relevant for dust in space exposed to solar radiation, for example lunar dust, or for a laboratory experiment with an external radiation source.
\section{Numerical code}
We have modified the numerical particle-in-cell (PIC) code used in our previous studies \cite{Miloch_Pecseli_Trulsen_2008, Miloch_Pecseli_Trulsen_2007, Miloch_Vladimirov_2008b}, by including a photon flux and the photo\-electric effect \cite{Miloch_Vladimirov_2008}.
We consider collisionless plasmas in a two-dimensional system in Cartesian coordinates.
Both electrons and ions are treated as individual particles, with the electron to ion temperature ratio $T_e/T_i=100$, and $T_e=0.18~\mathrm{eV}$. The ion to electron mass ratio is $m_i/m_e=120$ in most of the simulations. As a control case we analyze also results for a conducting grain with $m_i/m_e=36720$ (to represent Neon). The plasma density is $n=10^{10}~\mathrm{m^{-2}}$, and the plasma flow velocity is $v_d=1.5~C_s$, with $C_s$ denoting the speed of sound. Because of the large thermal velocity of electrons, the plasma flow is represented solely by the ion drift.
A circular dust grain of radius $R=0.375$ in units of the electron Debye length $\lambda_{De}$ is placed inside a simulation box of size $50 \times 50$ $\lambda_{De}$. The grain is assumed to be massive and immobile, except for the simulations of the spinning insulator. Initially, the grain is charged only by the collection of electrons and ions. For a perfectly insulating grain, a plasma particle hitting the dust grain surface remains at this position at all later times and contributes to the surface charge distribution. To model a small conductor in this work, the charge is redistributed equally on the dust grain surface at each time step. Such an algorithm is simple to use and is also found in other numerical studies \cite{Lapenta_1999}, but it does not account for the electric dipole moment on the conducting dust grain as induced by the anisotropic potential distribution in flowing plasmas. The equally distributed surface charge will not necessarily cancel electric fields inside the grain, and thus the algorithm is not adequate for grains larger than the Debye length or for grains of shapes other than spherical (or circular).
This algorithm is different from the one used in our previous studies of the charge distribution on larger dust grains, which enforced a constant potential within the dust grain \cite{Miloch_Pecseli_Trulsen_2008, Miloch_Pecseli_Trulsen_2007}. The computational expense of that algorithm was lengthy simulations and strict constraints on the shapes and sizes of the simulated dust grains.
A directed photon flux is switched on after approximately 40 ion plasma periods $\tau_i$. At this time, we can assume that the surface charge distribution on a grain has reached a stationary level. The code is typically run up to 50 ion plasma periods.
Three different angles between the incoming photons and the direction of the ion drift are considered: $\alpha=\{ 0^{\circ}, 90^{\circ}, 180^{\circ} \}$. For conducting grains, the simulated photon flux is $\Phi_{h \nu} \in (0.25, 2.5) \times 10^{19}~\mathrm{m^{-2}s^{-1}}$. This together with photon energies $E_{h\nu}$ of $4.8$, $5.5$ and $7.2~\mathrm{eV}$, gives a photon power density $H \in (1.9, 28.8) ~\mathrm{Wm^{-2}}$. The work function $W$ of the conducting dust grains is taken to be $W=4.5~\mathrm{eV}$, which is close to work functions of many metallic materials \cite{Rosenberg_Mendis_1996}. For insulating grains, the photon energies $E_{h\nu}$ are $10.3$, $11.0$ and $12.7~\mathrm{eV}$. This, together with the photon fluxes as for the case of conducting grains, gives a photon power density of $H \in (4.5, 50.8) ~\mathrm{Wm^{-2}}$. The work function of the insulating grain is taken as $W=10~\mathrm{eV}$, which implies that photo\-electrons will have the same energies as for the conducting case.
When a photon hits the surface of the dust grain, a photo\-electron of energy $E=E_{h\nu}-W$ is produced at distance $l=sv\Delta t$ from the dust grain surface, where $s$ is a uniform random number $s \in (0,1]$, $\Delta t$ is the computational time step and $v$ is the photo\-electron speed. Photo\-electron velocity vectors are uniformly distributed over the hemicircle and directed away from the dust grain surface, in accordance with Lambert's law.
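A minimal sketch of this emission step is given below (an illustrative pseudo-implementation, not the actual simulation code; the placement of the new particle along the outward normal is our assumption, and units are SI):
\begin{verbatim}
# Sketch of the photoelectron emission step (2D, illustrative only).
import numpy as np

def emit_photoelectron(theta, E_ph, W, R, dt,
                       m_e=9.109e-31, e=1.602e-19):
    """Emit a photoelectron where a photon hits the grain at angle theta."""
    E = (E_ph - W) * e                   # kinetic energy E = E_ph - W, in J
    v = np.sqrt(2.0 * E / m_e)           # photoelectron speed
    s = np.random.uniform(0.0, 1.0)      # uniform random number (text: s in (0,1])
    n_hat = np.array([np.cos(theta), np.sin(theta)])  # outward surface normal
    # Velocity uniformly distributed over the hemicircle directed away
    # from the surface, as described in the text:
    phi = theta + np.random.uniform(-np.pi / 2.0, np.pi / 2.0)
    velocity = v * np.array([np.cos(phi), np.sin(phi)])
    position = (R + s * v * dt) * n_hat  # born at distance l = s*v*dt
    return position, velocity
\end{verbatim}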
To investigate the stability of the surface charge distribution on insulating grains, we also simulate spinning grains. Instantaneous rotations by angles $\beta=\{1^{\circ},5^{\circ},10^{\circ} \}$, as well as continuous rotation with the angular velocities $\Omega = \{\pi/180, \pi/36, \pi/18, \pi/2, \pi, 2\pi \}$ in units of $\mathrm{rad}/\tau_i$ (corresponding to the grain rotating by angles of $1,5,10,90,180,$ and 360 degrees within $\tau_i$, respectively) are considered. The rotation starts at approximately one ion plasma period after the onset of radiation. As a control case we also rotate the grain throughout the whole simulation.
\section{Numerical results}
The present section is in two parts. First we consider the charging of a conducting or alternatively insulating grain in the presence of continuous radiation. This problem is followed by the results from the simulations with pulsed radiation.
\subsection{Continuous radiation}
The charge on a conducting dust grain exposed to a continuous photon flux becomes more positive with the onset of radiation and saturates within one ion plasma period. Some of the results for a conducting grain have been presented before \cite{Miloch_Vladimirov_2008}, but we include them also in the present work for completeness. The saturation charge on a conducting grain, which is summarized in Table \ref{tab:uv_charging_c}, depends on the flux density and photon energy. For a sufficiently high photon flux, the grain becomes positively charged. For low fluxes, the saturation charge does not depend significantly on the photon energy. For higher fluxes, high energy photo\-electrons lead to a more positive dust grain. The relative fluctuations of the charge are largest for the grain with the smallest charge. The absolute and relative charge fluctuations are smallest for the case without photo\-emission.
The results for the total charge for $E_{h\nu}=7.2~\mathrm{eV}$ are very similar to the case of $E_{h\nu}=5.5~\mathrm{eV}$, and therefore they are not presented in Table~\ref{tab:uv_charging_c}.
\Table{The total charge $q_t$ on a conducting dust grain for different photon energies $E_{h\nu}$ and different photon fluxes $\Phi_{h\nu}$ for $\alpha=0^{\circ}$, averaged over $10\tau_i$. The relative charge fluctuations $\Delta q_t$ are also shown. The total charge $q_t$ is normalized with the unitary two-dimensional charge $q_{0}=e\left[ n_{0(3D)}\right] ^{1/3}$, where $e$ is an elementary charge, and $n_{0(3D)}$ is the plasma density in the corresponding three-dimensional system. The unit of $q_{0}$ is $[q_{0}]=\mathrm{C/m}$.
\label{tab:uv_charging_c}
\lineup}
\br
&\centre{2}{$E_{h\nu}$=4.8~{eV}}&\centre{2}{$E_{h\nu}=5.5~\mathrm{eV}$}\\
&\crule{2} & \crule{2}\\
$\Phi_{h\nu}$ & $q_t $ & $\Delta q_t $ & $q_t $ & $\Delta ~q_t $\\
$(10^{19}\mathrm{m^{-2}s^{-1}})$ & ${(q_0)}$ & $(\%)$ & $ {(q_0)}$ & $(\%)$\\
\mr
0.0 & \-755 & \0\04 & \0\-755 & \0\04 \\
0.25 & \-163 & \019 & \0\-168 & \017\\
0.50 & \019 & 173 & \0\012 & 258 \\
1.25 & 251& \018 & \0273 & \018\\
2.50 & 795 & \0\08 & 1330& \0\07 \\
\br
\end{tabular}
\end{indented}
\end{table}
The floating potential on a positively charged grain for the two highest photon fluxes is shown in Table~\ref{tab:flpot} together with the corresponding results from analytical calculations. The analytical results for the floating potential in Table~\ref{tab:flpot} are calculated for a balance of the photo\-emission $i_{h\nu}$, ion $i_i$, and electron $i_e$ currents to the grain: $i_e=i_i+i_{h\nu}$. For consistency, we restrict our analysis to a two-dimensional case. The photo\-emission current can for this case be expressed by \cite{Shukla_Mamun_2002}:
\begin{equation}
i_{h\nu}=A_{h\nu}\Phi_{h\nu(2D)}e \exp \left( -\frac{e\Psi}{kT_{h \nu}} \right),
\end{equation}
where $e>0$, and it is assumed that the photo\-electric yield and photo\-emission efficiency equal unity. In the present two-dimensional model with unidirectional photons, $A_{h\nu}=2R$, and $\Phi_{h\nu (2D)}=c (\Phi_{h\nu(3D)}/c)^{2/3}$, with the physical dimension of $[\Phi_{h\nu(2D)}]=m^{-1}s^{-1}$. Subscripts $(2D)$ and $(3D)$ stand for two-dimensional and three-dimensional cases, respectively. The ion current to a plane surface segment with area $A_i$ due to singly charged ions drifting at supersonic speed $v_d$ can be approximated by
\begin{equation}
i_{i}= A_{i} n_{0(2D)} v_d e \exp \left( -\frac{e\Psi} {kT_{i}} \right),
\end{equation}
where we define the ion cross section for supersonic ion flow as $A_i=2R$. The ion current is consistent with the current to a probe for retarding fields \cite{Schott_1968}, but we replaced the ion thermal velocity by $v_d$ and neglected the numerical constant by assuming that ion velocities are unidirectional and normal to the probe surface at the sheath edge. We note that the ion current to the positively charged grain is negligible due to the small thermal velocity of the ions, but nevertheless we include it in the calculations for completeness. Since the grain radius $R$ is comparable to the electron Debye length, we use a general expression for the orbit-motion-limited (OML) current to a conducting cylinder to calculate the electron current to the grain \cite{Schott_1968}:
\begin{eqnarray}
\fl i_{e}=-\frac{1}{4}A_{e} n_{0 (2D)} e v_{the} \frac{r_s}{R} \left[ \mathrm{erf}\left( \sqrt{\frac{-\gamma}{r^2_s/R^2-1}} \right) + \right. \nonumber\\
\left. \frac{R}{r_s}\exp(-\gamma) \left( 1- \mathrm{erf}\left( \sqrt{\frac{-\gamma r^2_s}{r^2_s-R^2}}\right) \right)\right],
\end{eqnarray}
where $\gamma=-e\Psi/kT_e$, $A_e=2\pi R$, and $r_s$ is the sheath radius, which in our calculations is set to $r_s=3R$. We introduced the error function as $\mathrm{erf}(x)=2/\sqrt{\pi} \int_{0}^{x} \exp (-y^2) dy$.
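As an illustration, the balance $i_e=i_i+i_{h\nu}$ can be solved numerically for $\Psi$. The following Python sketch works in dimensionless units with placeholder parameters (these are illustrative values, not the simulation parameters of this paper); for physical units the two-dimensional flux $\Phi_{h\nu(2D)}=c(\Phi_{h\nu(3D)}/c)^{2/3}$ would be computed first.
\begin{verbatim}
# Solve the 2D current balance i_e(Psi) = i_i(Psi) + i_hnu(Psi)
# for the floating potential. All quantities are dimensionless
# placeholders chosen so that photoemission dominates at Psi = 0.
import math
from scipy.optimize import brentq

R, rs = 1.0, 3.0            # grain and sheath radii, r_s = 3R as in the text
n0, vthe, vd = 1.0, 1.0, 0.05
Te, Ti, Thnu = 1.0, 0.01, 2.0
phi2d = 5.0                 # hypothetical 2D photon flux

def i_hnu(psi):             # photoemission current
    return 2*R * phi2d * math.exp(-psi/Thnu)

def i_i(psi):               # ion current (retarding field)
    return 2*R * n0 * vd * math.exp(-psi/Ti)

def i_e(psi):               # OML electron current to a cylinder
    g = -psi/Te             # gamma = -e Psi / k T_e
    a = math.erf(math.sqrt(-g/(rs**2/R**2 - 1.0)))
    b = (R/rs)*math.exp(-g)*(1.0 - math.erf(math.sqrt(-g*rs**2/(rs**2 - R**2))))
    return 0.25*(2*math.pi*R)*n0*vthe*(rs/R)*(a + b)

psi_fl = brentq(lambda p: i_e(p) - i_i(p) - i_hnu(p), 1e-9, 5.0)
print("floating potential:", psi_fl)
\end{verbatim}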
\Table{The floating potential on a grain for different photon energies $E_{h\nu}$ and different photon fluxes $\Phi_{h\nu}$ for $\alpha=0^{\circ}$. The results from the simulations $\Psi_{fl,~\mathrm{sim}}$ as well as from analytical calculations $\Psi_{fl,~\mathrm{calc}}$ are shown. \label{tab:flpot} \lineup}
\br
&\centre{2}{$E_{h\nu}=4.8~\mathrm{eV}$}&\centre{2}{$E_{h\nu}=5.5~\mathrm{eV}$}\\
&\crule{2} & \crule{2}\\
$\Phi_{h\nu}$ & $\Psi_{fl,~\mathrm{sim}}$ & $\Psi_{fl,~\mathrm{calc}}$ & $\Psi_{fl,~\mathrm{sim}}$ & $\Psi_{fl,~\mathrm{calc}}$ \\
$(10^{19}\mathrm{m^{-2}s^{-1}})$ & $\mathrm{(V)}$ & $\mathrm{(V)}$ & $\mathrm{(V)}$ & $\mathrm{(V)}$ \\
\mr
1.25 & 0.17 & 0.17 & 0.28 & 0.27 \\
2.50 & 0.16 & 0.36 & 0.56 & 0.66 \\
\br
\end{tabular}
\end{indented}
\end{table}
The density and potential distributions around a conducting dust grain depend on the flux and energy of the photons. For a photon flux of $\Phi_{h\nu}=0.25\times 10^{19}~\mathrm{m^{-2}s^{-1}}$, when the grain is negatively charged, we observe ion focusing in the wake \cite{Miloch_Pecseli_Trulsen_2008}. The ion density in the focusing region is $n_i \approx 1.2n_{0i}$, where $n_{0i}$ is the undisturbed ion density far from the grain. This is lower than in the corresponding case without photo\-emission, where $n_i \approx 2.2n_{0i}$. The ion focusing is destroyed for positively charged grains. In this case, ions are slowed down and deflected in front of the grain.
Consequently, a region of an enhanced ion density is formed in front of the grain, while downstream from the grain there is a distinct boundary between the wake and the undisturbed plasma, see Fig.~\ref{fig:uv_ionwake_c}.
The shape of the enhanced ion density region depends on $\alpha$: it is more pronounced and located closer to the dust grain surface for $\alpha=0^{\circ}$, and further from the grain for $\alpha=180^{\circ}$. For $\alpha=90^{\circ}$, an asymmetry in the enhanced ion density is observed \cite{Miloch_Vladimirov_2008}.
The wake in the ion density behind a conducting grain (a region where $n_i < 0.5 n_{0i}$) scales with the photon flux and photon energy, increasing for increasing fluxes and energies. The measured spatial extent of the wake is summarized in Table~\ref{tab:uv_wake_c}. The ion wake corresponds to the white region behind the grain in Fig.~\ref{fig:uv_ionwake_c}.
\Table{The width $w$ and length $d$ of the ion wake behind a positively charged conducting dust grain for different photon energies $E_{h\nu}$ and different photon fluxes $\Phi_{h\nu}$ for $\alpha=0^{\circ}$. The unit of $w$ and $d$ is the electron Debye length $\lambda_{De}$. The ion wake was not observed for $\Phi_{h\nu} < 0.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$. \label{tab:uv_wake_c} \lineup}
\br
&\centre{2}{$E_{h\nu}=4.8~\mathrm{eV}$}&\centre{2}{$E_{h\nu}=5.5~\mathrm{eV}$}&\centre{2}{$E_{h\nu}=7.2~\mathrm{eV}$}\\
&\crule{2} & \crule{2} & \crule{2}\\
$\Phi_{h\nu}$ & $w$ & $d$ & $w$ & $d$ & $w$ & $d$ \\
$(10^{19}\mathrm{m^{-2}s^{-1}})$ & $(\lambda_{De})$ & $(\lambda_{De})$ & $(\lambda_{De})$ & $(\lambda_{De})$ & $(\lambda_{De})$ & $(\lambda_{De})$ \\
\mr
0.50 & 0.7 & 3.1 & 0.7 & \03.5 & 0.9 & \03.7\\
1.25 & 2.1 & 6.8 & 2.3 & \07.5 & 2.5 & \07.3 \\
2.50 & 3.6 & 7.0 & 5.9 & 11.1 & 6.3 & 12.7\\
\br
\end{tabular}
\end{indented}
\end{table}
The potential around the positively charged conducting dust grain is polarized for higher photon fluxes. In Fig.~\ref{fig:polarization_c}, the potential distribution around the conducting dust grain is shown for different angles of incidence of photons with the flux $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$. The potential is negative behind, and positive in front of the grain. The polarization of the plasma is most conspicuous for $\alpha=180^{\circ}$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure1}
\caption{The ion density around a conducting dust grain exposed to the photon flux $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ of energy $E_{h\nu}=4.8~\mathrm{eV}$ averaged over nine ion plasma periods $\tau_i$. $\alpha=0^{\circ}$ and the plasma flow is in the positive $x$ direction. The white region corresponds to ion densities below $0.5n_{0i}$.}
\label{fig:uv_ionwake_c}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure2a}
\includegraphics[width=0.9\columnwidth]{figure2b_2c}
\caption{The potential around a conducting dust grain exposed to the photon flux $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ of energy $E_{h\nu}=4.8~\mathrm{eV}$ for $\alpha=0^{\circ}$ (a), $\alpha=90^{\circ}$ (b), and $\alpha=180^{\circ}$ (c). The data were averaged over a time interval of nine ion plasma periods $\tau_i$. The plasma flows in the positive $x$ direction.}
\label{fig:polarization_c}
\end{center}
\end{figure}
The results from simulations with a more realistic ion mass are in accordance with the results obtained with the reduced ion mass. For the realistic ion mass, the total charge $q_t$ on a grain without photo\-emission is $q_t=-1883q_0$. This is more negative than in the simulations with the reduced ion mass. The ratio of the saturation charges for the two ion masses is $q_{t,1}/q_{t,2}=2.5$. It is close to the ratio
\begin{equation}
\frac{Q_{0,1}}{Q_{0,2}}=\frac{ {\ln \left( \gamma_1/2\pi +1 \right)} } { \ln \left( \gamma_2/2\pi +1 \right)}=2.9,
\end{equation}
where the indices $1,2$ refer to the ion to electron mass ratios for the ion masses $m_i=36720m_e$ and $m_i=120m_e$, respectively, and $Q_0$ is the theoretical charge on a grain in a stationary plasma in a two-dimensional system, given by
\begin{equation}
Q_0= 2 \pi \epsilon_0 \Psi_{fl} \frac{R}{\lambda_D}\frac{K_1(R/\lambda_D)}{K_0(R/\lambda_D)}.
\label{q_theory}
\end{equation}
In (\ref{q_theory}), $K_0$ and $K_1$ are modified Bessel functions, $R$ is the radius of the grain, $\gamma=m_i/m_e$, and $\Psi_{fl}$ is the floating potential of the grain, here given by
\begin{equation}
\Psi_{fl}=-\frac{k T_e}{2e}\left[ \ln \left( \frac{\gamma}{2\pi} +1 \right) \right].
\label{psi_theory}
\end{equation}
In (\ref{psi_theory}) it is assumed that cold ions are reaching the surface of the large conducting object at the Bohm speed. A more detailed discussion on Equations (\ref{q_theory}) and (\ref{psi_theory}) is given elsewhere \cite{Miloch_Pecseli_Trulsen_2007}.
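The quoted ratio, together with the square-root mass scaling of the ion plasma period used below, is easily verified numerically, for instance:
\begin{verbatim}
# Check of the ratio ln(g1/2pi + 1)/ln(g2/2pi + 1) for the two
# ion-to-electron mass ratios used in the simulations, and of the
# corresponding scaling of the ion plasma period ~ sqrt(m_i).
import math
g1, g2 = 36720.0, 120.0
print(math.log(g1/(2*math.pi) + 1)/math.log(g2/(2*math.pi) + 1))  # ~2.9
print(math.sqrt(g1/g2))  # ~17.5, i.e. approximately 20 in the text
\end{verbatim}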
Without photo\-emission, ions stream out of the ion focus at a wider angle for the realistic ion mass than for the reduced ion mass. This is due to the different ion drift velocities in the two cases. In both cases the ion drift is $v_d=1.5~C_s$, with the speed of sound given by $C_s=\sqrt{k (T_e+5T_i/3)/m_i}$, in the plasma far from the grain. In the vicinity of the grain, the plasma parameters are modified due to particle trapping and sheath formation. With photo\-emission, the saturation charge and the wake are similar for both ion masses. The length of the wake is the same, while the width is larger by $5\%$ for the larger ion mass. The charge saturates within one ion plasma period in both cases. The ion plasma period for the realistic ion mass is approximately 20 times larger than for the reduced ion mass. From additional simulations for grains with radius $2R$, we find that the saturation charge is approximately twice that for a grain with radius $R$.
The charging of an insulating grain exposed to continuous radiation differs from the conducting case. Saturation of the charging characteristics is observed for photon fluxes $\Phi_{h\nu}=0.25 \times 10^{19}\mathrm{m^{-2}s^{-1}}$ and $\Phi_{h\nu}=0.50 \times 10^{19}\mathrm{m^{-2}s^{-1}}$, when the total charge on the dust grain remains negative. For photon fluxes and energies high enough to change the sign of the total charge on the grain, the charge does not saturate within the time-span of our simulations. In all cases, the charging depends on the angle of incidence, see Fig.~\ref{fig:uv_charging_i}. For lower fluxes the charge becomes less negative with increasing $\alpha$. For higher fluxes, the charge can become positive, and then negative again within a few ion plasma periods. This is not the case for $\alpha=180^{\circ}$, for which the charge increases towards more positive values.
With the onset of the photon flux, we observe the development of an electric dipole moment on the grain which is antiparallel to the direction of the incident photons, see Fig.~\ref{fig:polarization_i}. This electric dipole moment due to the photo\-electrons does not saturate for high photon fluxes, and it is stronger than the electric dipole moment induced by the ion flow.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure3.ps}
\caption{The total charge on an insulating dust grain as a function of time for different photon fluxes and angles of photon incidence $\alpha$. Squares correspond to the photon flux $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, triangles to $\Phi_{h\nu} = 0.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$. The photon energy is $E_{h\nu}=11.0~\mathrm{eV}$. The results are smoothed with a moving box average filter for presentation.}
\label{fig:uv_charging_i}
\end{center}
\end{figure}
The density and potential distributions around an insulating grain evolve in time, see Figs.~\ref{fig:polarization_i} and \ref{fig:uv_ionwake_i}. The potential distribution is influenced by the electric dipole moment due to photo\-emission. This moment becomes smaller when the total charge is negative. For a positively charged grain, the ion focusing region in the wake is destroyed, and the wakes behind the dust grain for $\alpha=0^{\circ}$ and $\alpha=180^{\circ}$ are similar to the conducting case. The wake is strongly asymmetric for $\alpha=90^{\circ}$. For high photon fluxes, when the charge on the grain reaches negative values, the wake behind the insulator becomes smaller, and the ion focus can be retrieved. The asymmetric charge distribution for $\alpha=90^{\circ}$ is present also after the closure of the wake, as shown in Fig.~\ref{fig:uv_ionwake_i}b).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure4a_4b_top}
\includegraphics[width=0.9\columnwidth]{figure4a_4b_bot}
\caption{
The potential around an insulating dust grain exposed to the photon flux $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, $\alpha=0^{\circ}$ (a) and $\alpha=90^{\circ}$ (b) of energy $E_{h\nu}=11.0~\mathrm{eV}$ averaged over two ion plasma periods: $t \in ( 39.5,41.5 ) \tau_i$ (top) and $t \in ( 48.0,50.0 ) \tau_i$ (bottom). The plasma flows in the positive $x$ direction.}
\label{fig:polarization_i}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure5a_5b_top}
\includegraphics[width=0.9\columnwidth]{figure5a_5b_bot}
\caption{The ion density around an insulating dust grain exposed to the photon flux $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, $\alpha=0^{\circ}$ (a) and $\alpha=90^{\circ}$ (b) of energy $E_{h\nu}=11.0~\mathrm{eV}$ averaged over two ion plasma periods: $t \in ( 39.5,41.5 ) \tau_i$ (top) and $t \in ( 48.0,50.0 ) \tau_i$ (bottom). The plasma flows in the positive $x$ direction. White regions correspond to ion density levels below $0.5 n_{0i}$.}
\label{fig:uv_ionwake_i}
\end{center}
\end{figure}
In Fig.~\ref{fig:uv_Boltzmann}, we illustrate the difference $\delta$ between the density of Boltzmann distributed electrons that would correspond to the calculated potential and the actual electron density: $\delta=n_{e0}\exp[e\Psi/kT_e]-n_e$, where $e>0$ is the magnitude of the electron charge. Results for both conducting and insulating stationary grains are shown in Fig.~\ref{fig:uv_Boltzmann}. Before the onset of the photon flux the electrons can be well approximated by the Boltzmann distribution. With photo\-emission, the electrons are no longer Boltzmann distributed. The largest discrepancies for conductors are associated with a surplus of electrons due to the photo\-electron emission, and to a region of an enhanced ion density in front of the dust grain, where electrons are underrepresented. For insulators the electric dipole governs the potential in vicinity of the grain.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.6\columnwidth]{figure6a}
\includegraphics[width=0.6\columnwidth]{figure6b}
\caption{The difference $\delta$ between the density of Boltzmann distributed electrons that would correspond to the calculated potential and the actual electron density is shown for the case with (solid line) and without (dashed line) photo\-emission. Both conducting (a) and insulating (b) dust grains are considered for $\alpha=0^{\circ}$.}
\label{fig:uv_Boltzmann}
\end{center}
\end{figure}
Instantaneous rotation of the insulating dust grain by an angle $\beta$ has little effect on the grain charging characteristics. For $\alpha=180^{\circ}$, no significant change is observed in the potential and density distributions, while for $\alpha=0^{\circ}$ the rotation leads to weak asymmetries there. The asymmetries are more pronounced for larger $\beta$. For $\alpha=0^{\circ}$ the charging characteristics are similar to the case without rotation, but the charge becomes more negative at a slightly slower rate with increasing $\beta$.
Continuous rotation by an angle of $\pi$ within one ion plasma period significantly modifies the charging of the grain, see Fig.~\ref{fig:rotation} (a). The rotation of a grain redistributes the charge on the dust grain surface, and lowers the total charge on the grain. After arresting the grain rotation, the charge becomes more negative for $\alpha=0^{\circ}$ and $\alpha=90^{\circ}$, while for $\alpha=180^{\circ}$ it can saturate with a quadrupole moment in the surface charge distribution.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.59\columnwidth]{figure7a.ps}
\includegraphics[width=0.59\columnwidth]{figure7b.ps}
\caption{The total charge on an insulating dust grain rotating by an angle of $\pi$ over one ion plasma period $\tau_i$ (a) and continuously rotating after the start of photo\-emission (b). Continuous radiation with $\Phi_{h\nu} = 1.25 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ (squares correspond to $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$) is switched on at $t=39\tau_i$. Triangles correspond to the dust grain spinning throughout the whole simulation. $\alpha=0^{\circ}$ for $\Omega=\pi$, $\alpha=90^{\circ}$ for $\Omega=0.5\pi$, and $\alpha=180^{\circ}$ for $\Omega=2\pi$ in units of $\tau_i^{-1}$. The results are smoothed with a moving box average filter for presentation.}
\label{fig:rotation}
\end{center}
\end{figure}
With steady state rotation, the total charge oscillates in time, with the mean charge value lower than on a conducting grain at the corresponding photon flux. The period of the oscillations depends on the angular velocity of the grain, see Fig.~\ref{fig:rotation}b). The oscillations are not observed for very slow angular velocities. There is little difference in the grain charging characteristics for different starting times of the grain rotation. For a grain spinning throughout the whole simulation, the total charge before the onset of photo\-emission is less negative than on a stationary grain. This is due to the charge redistribution, which prevents the development of a strong electric dipole moment. However, with photo\-emission, the charging characteristics are similar to the case when the dust grain starts spinning after the radiation onset.
The charge redistribution on spinning grains tilts the electric dipole moment on insulating grains. The strength of the electric dipole moment oscillates together with the total charge on the grain. Simultaneously, the wake becomes asymmetric and its size oscillates in time.
\subsection{Pulsed radiation}
The charge on the conducting grain exposed to a radiation pulse is more positive during the illumination. After the pulse, the charge recovers to the previous value (before the pulse) within approximately one ion plasma period. The charge recovery is initially fast and then continues at a slower rate. Initially, the charge recovery can be well approximated by an exponential function of the form $q=q_0\exp[-t/\tau]$, with the time constant $\tau=3.45 \times 10^{-9} \mathrm{s}$ for $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, and $\tau=4.56 \times 10^{-9} \mathrm{s}$ for $\Phi_{h\nu} = 0.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$. These time constants are comparable with the electron plasma period $\tau_e=3.53 \times 10^{-9} \mathrm{s}$, which suggests that initially the charge recovery is primarily due to electrons. The time constant for $\Phi_{h\nu} = 0.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ is larger than $\tau_e$ because the maximum charge is close to zero in this case, and the ions contribute initially to the charge recovery. After a time interval of $2 \tau_e$, the time constant $\tau$ is larger and reaches $\tau \approx 0.5 \tau_i$ at the end of the recovery for both cases.
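The fitting procedure behind these time constants can be sketched as follows; the Python snippet below fits $q=q_{0}\exp[-t/\tau]$ to synthetic placeholder data (not actual simulation output) using a standard least-squares routine:
\begin{verbatim}
# Sketch of the charge-recovery fit: q(t) = q_init * exp(-t/tau).
# The data below are synthetic placeholders, not simulation output.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 1.0e-8, 50)                  # time [s]
q = 500.0*np.exp(-t/3.45e-9) + np.random.normal(0.0, 5.0, t.size)

def model(t, q_init, tau):
    return q_init*np.exp(-t/tau)

(q_init, tau), _ = curve_fit(model, t, q, p0=(400.0, 1.0e-9))
print("fitted time constant tau =", tau)          # ~3.45e-9 s
\end{verbatim}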
The charging is shown in Fig.~\ref{fig:pulse_cond}a) together with points corresponding to the exponential fits. For clarity of presentation, we do not show continuous exponential fits, but only regularly spaced points corresponding to the locally fitted curves. Approximately one ion plasma period after the switch off, a small overshoot in the charging characteristic is observed for higher photon energies. The charging at given photon fluxes depends only weakly on the photon incidence angle $\alpha$.
The charging after a series of three pulses is similar to what is found for a single pulse with the relevant photon flux and energy. Each pulse corresponds to a peak in the charging characteristics in Fig.~\ref{fig:pulse_cond}b). The height of each peak does not change much with the time interval between the pulses. The trough is less negative for time intervals between the pulses that are shorter than the charge recovery time.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.59\columnwidth]{figure8a.ps}
\includegraphics[width=0.59\columnwidth]{figure8b.ps}
\caption{The total charge on a conducting dust grain exposed to a single radiation pulse (a) and to a pulse series with different time intervals between pulses $\Delta t_p$ (b). In plot (a) different symbols represent regularly spaced data points corresponding to local exponential fits with different time constants $\tau$. In plot (b) triangles correspond to $\Phi_{h\nu} = 0.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ and squares to $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$. In both cases $\alpha=0^{\circ}$, and $E_{h\nu}=5.5~\mathrm{eV}$. The results are smoothed with a moving box average filter for presentation.}
\label{fig:pulse_cond}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.59\columnwidth]{figure9a.ps}
\includegraphics[width=0.59\columnwidth]{figure9b.ps}
\caption{The total charge on an insulating dust grain exposed to the pulsed radiation as a function of time for different time intervals between pulses $\Delta t_p$ and different photon fluxes: $\Phi_{h\nu} = 0.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ (a) and $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ (b). Squares correspond to $\alpha=180^{\circ}$, triangles to $\alpha=0^{\circ}$. The photon energy is $E_{h\nu}=11.0~\mathrm{eV}$. The results are smoothed with a moving box average filter for presentation.}
\label{fig:pulse_ins}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure10.ps}
\caption{The potential around an insulating dust grain after a series of pulses with $\Delta t_p=1.0\tau_i$. $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, $E_{h\nu}=11.0~\mathrm{eV}$, and $\alpha=180^{\circ}$. The minimum in the potential is $\Psi=-12.2$ in units of $kT_e/e$ on the surface of the dust grain. Potentials lower than $\Psi=-5~kT_e/e$ are coloured black.}
\label{fig:quadrupole}
\end{center}
\end{figure}
The electrostatic potential around the conducting dust grain exposed to radiation pulses is polarized as in the case of continuous radiation. During the pulses, the potential behind the dust grain is negative, and resembles the case of the conducting grain with continuous radiation, see Fig.~\ref{fig:polarization_c}. This region remains negatively charged also between the pulses. Within a time interval of approximately $1.5\tau_i$ after the last pulse, the positive potential region in the grain wake is rebuilt: first in the vicinity, and then further away from the grain. At the same time, the region with net negative charge becomes less pronounced and moves further downstream from the dust grain, slower than the ion drift speed. During the pulses, the wake potential in the vicinity of the grain oscillates with the frequency of the pulses, see Fig.~\ref{fig:oscillations}. These oscillations propagate into the wake, but are heavily damped further away from the dust grain, and diminish after the last pulse.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figure11.ps}
\caption{The potential variations as a function of time at different distances $\Delta x$ from the rear of a conducting grain exposed to radiation pulses, for $y=25.8$ in units of $\lambda_{De}$. $\Phi_{h\nu} = 2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, $E_{h\nu}=5.0~\mathrm{eV}$, and $\alpha=180^{\circ}$.}
\label{fig:oscillations}
\end{center}
\end{figure}
The ion density behind a conducting dust grain is enhanced between the pulses. This process is terminated by the onset of the next pulse, and the enhanced ion density regions move away from the dust grain in the ion drift direction, but at a speed lower than the ion drift.
The full recovery of the ion focus occurs after approximately one ion plasma period from the last pulse. The ion density wake in front of the grain is located closer to the grain for $\alpha=180^{\circ}$ than for $\alpha=0^{\circ}$. There is also a depletion in the electron density in the region corresponding to the ion wake originating from a positively charged grain. After the switch-off, photo\-electrons are rapidly redistributed, but the depletion in the region corresponding to the wake remains until the wake is filled with ions.
An insulating dust grain exposed to a single radiation pulse has charging characteristics similar to the conductor only for low photon fluxes and $\alpha=0^{\circ}$, when the total grain charge remains negative, see Fig.~\ref{fig:pulse_ins}a). For the low photon flux and $\alpha=180^{\circ}$, the total charge on the dust grain is low during the pulses. In this case, the charge does not recover within one ion plasma period after the pulse, reaching only half of its value from before the onset of radiation. Longer simulations show that the recovery time is approximately 20 ion plasma periods in this case.
For high photon fluxes, the charge during the pulse is positive, see Fig.~\ref{fig:pulse_ins}b). After a single pulse with $\alpha=0^{\circ}$ the charge is more negative than before the pulse, and less negative when $\alpha=180^{\circ}$. The total charge in the trough between subsequent pulses becomes progressively more negative for $\alpha=0^{\circ}$, while for $\alpha=180^{\circ}$ it is more negative for longer time intervals between pulses, and less negative for shorter intervals. In all insulating cases, the total charge after a series of pulses can be more negative than after a corresponding single pulse, and the full charge recovery usually takes several ion plasma periods.
The radiation pulses modify the potential and density patterns in the vicinity of the grain. For photons with $\alpha=0^{\circ}$, we observe an enhancement of the electric dipole moment on the dust grain. For photons with $\alpha=180^{\circ}$, positive surface charge accumulates on the front and rear sides of the dust grain. During such pulses, the electric dipole moment is parallel to the ion flow (antiparallel to the photon direction), and after the pulses a quadrupole moment in the surface charge distribution develops, see Fig.~\ref{fig:quadrupole}. The quadrupole moment diminishes in time, faster for low photon fluxes, and the dipole moment in the surface charge distribution as well as the electrostatic potential distribution are recovered.
In the wake, the ion focusing is rebuilt behind the dust grain after the radiation pulses. For $\alpha=0^{\circ}$, this region is similar to the one before the pulses, while for $\alpha=180^{\circ}$, at the same time instants it is wider and weaker for low fluxes, and spatially narrower with stronger focusing for high fluxes. Electrons are Boltzmann distributed after the pulses.
\section{Discussion}
Photo\-emission provides an electron source on the irradiated side of the grain and modifies the dust grain charge. For sufficiently high photon fluxes, the charge on the conducting grain becomes positive and saturates within one ion plasma period. Positively charged conducting grains slow down and deflect flowing ions. As a result, a region of enhanced ion density forms in front of the grain, while behind the grain a substantial wake in the ion density is formed, see again Fig.~\ref{fig:uv_ionwake_c}. The wake in the ion density scales with the photon energy and flux, being larger for higher fluxes and energies. Hence, the wake size is proportional to the charge on the grain. Photo\-electrons with energies lower than or comparable to the thermal energy of the background electrons can easily be recollected by the grain surface, while photo\-electrons with higher energies are more likely to escape the trapping potential of the grain. This, together with the photo\-emission rate, which is proportional to the flux, explains the development of a more positive charge on the grain for highly energetic photons and high fluxes \cite{Miloch_Vladimirov_2008}. The angle between the incoming photons and the plasma flow direction has little effect on the potential distribution around conducting grains. Photo\-electrons contribute to neutralizing the enhanced ion density regions. For this reason, the region of enhanced ion density is located closer to the front of the grain for $\alpha=0^{\circ}$, when the grain charge can be effectively shielded by the photo\-electrons, and further away for $\alpha=180^{\circ}$, when the photo\-electrons are produced on the shadow side. The electrons penetrate into the ion wake due to their high mobility. The resulting imbalance between ion and electron densities in the vicinity of the grain leads to polarization of the plasma, see again Fig.~\ref{fig:polarization_c}. This allows for strong interactions between many positively charged grains in flowing plasmas. The electrons are no longer Boltzmann distributed.
To calculate the charge on positively charged conducting grains, we use a two-dimensional capacitance model \cite{Miloch_Pecseli_Trulsen_2007}. In (\ref{q_theory}) we use the simulation results for the floating potential $\Psi_{fl}$. For photon fluxes of $\Phi_{h \nu}=1.25 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, the calculated charge is $q_t \approx 250 q_0$ for both photon energies. This result is close to the data shown in Table \ref{tab:uv_charging_c}. For $\Phi_{h \nu}=2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$ we have $q_t \approx 410 q_0$ for $E_{h \nu}= 4.8~\mathrm{eV}$, and $q_t \approx 732 q_0$ for $E_{h \nu}= 5.5~\mathrm{eV}$, which is lower than the simulation results. However, if in equation (\ref{q_theory}) we formally substitute $\lambda_{D}$ by $\lambda_{De}$ for $\Phi_{h \nu}=2.5 \times 10^{19}~\mathrm{m^{-2}s^{-1}}$, then the results are close to the simulation results: $q_t \approx 734 q_0$ for $E_{h \nu}= 4.8~\mathrm{eV}$, and $q_t \approx 1312 q_0$ for $E_{h \nu}= 5.5~\mathrm{eV}$. This suggests that for lower photon fluxes, and low positive potentials of the grain, the ions can effectively contribute to the shielding of the grain, while for more positive potentials, the grain potential is predominantly shielded by electrons.
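For completeness, a minimal Python sketch evaluating (\ref{q_theory}) with the modified Bessel functions $K_0$ and $K_1$ is given below; the values of $\Psi_{fl}$ and $R/\lambda_D$ are placeholders and not the simulation parameters.
\begin{verbatim}
# Sketch: evaluate the 2D capacitance model Q_0 of (q_theory).
# kn(0, x) and kn(1, x) are the modified Bessel functions K_0, K_1.
# Psi_fl and R/lambda_D below are placeholders, not simulation values.
import math
from scipy.special import kn

eps0 = 8.854e-12                      # vacuum permittivity [F/m]

def Q0(psi_fl, r_over_ld):
    return 2*math.pi*eps0*psi_fl * r_over_ld * kn(1, r_over_ld)/kn(0, r_over_ld)

print(Q0(0.17, 1.0))                  # charge per unit length [C/m]
\end{verbatim}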
The analytical solution for the floating potential is in good agreement with the simulation results for low energy photons. For high energy photons the analytical calculations give more positive potentials than obtained from the simulations. This is due to thermalization of the photo\-electrons. The temperature of low energy photo\-electrons is higher than the mean temperature of the background electrons, but the corresponding velocities are still within the thermal spread of the background electrons. High energy photo\-electrons are effectively slowed down by the grain and interact with background electrons. We find that the high energy photo\-electrons lose on average $25\%$ of their initial energy through this deceleration. With this correction for high energy photons, the analytical calculations for the floating potential give values close to the numerical results.
An electric dipole moment develops on insulating grains due to the photo\-emission. It is oriented antiparallel to the photon direction. Neither the electric dipole moment nor the charge saturates on such grains within the simulation time. For $\alpha=180^{\circ}$ the charge becomes more positive with time, while for other angles of incidence it recovers to negative values. For $\alpha=180^{\circ}$ both the rear and the front of the grain are positively charged. The depletion in the ion density behind the grain does not allow electrons to neutralize the rear charge. With increasing charge on the rear of the grain, a weak quadrupole moment develops (with negatively charged grain sides tangential to the flow), and the total charge increases towards positive values. For other angles, the non-irradiated side of the grain becomes more negative when the photo\-electric current exceeds the electron current.
Therefore, a depletion of the wake and a recovery of the ion focusing region are observed, see again Figs.~\ref{fig:polarization_i} and \ref{fig:uv_ionwake_i}. The surface charge distribution on insulating grains for $\alpha=90^{\circ}$ leads to more pronounced asymmetries in the ion density than on conducting grains. The electrons are no longer Boltzmann distributed also in the case of insulators, but the dominant contribution here is due to the development of the electric dipole. Photo\-electrons are absorbed by the high positive charge on the dust grain, and they neutralize the enhanced ion density regions in front of the grain.
Rotation of the insulating dust grain redistributes the charge on the grain surface. Without photo\-emission but retaining the directed ion flow, the total charge on fast spinning grains becomes less negative. The electric dipole moment on the surface diminishes and the value of the total charge is similar to the conducting case. With photo\-emission, the electric dipole moment on the spinning dust grain is still present for the photon fluxes considered in this work. It is tilted by an angle with respect to the direction of radiation, and the positive charge on the irradiated side of the dust grain is neutralized when it reaches the shadow side. Due to the depletion of the wake in the ion density, the total charge oscillates on fast spinning grains, see Fig.~\ref{fig:rotation}b).
With the spinning grain, the symmetry in the ion wake is destroyed near the grain surface. For sufficiently fast rotation, the redistribution of the negative charge bends ion trajectories and leads to wake erosion. This process continues until the ion density is rebuilt in the vicinity of the grain and the region of reduced density is detached from the grain. The electron density in the wake increases, and so does the total electron current to the grain. When the charge becomes less positive and the electron current to the grain decreases, photo\-emission leads to the formation of a new wake in the ion density.
The closing of the wake and the resulting oscillating total charge will occur only if the erosion of the ion wake is substantial. If the rotation of the grain is slow and the photo\-emission rate is high, the wake will not close and detach.
The present discussion considers grains with spherical, cylindrical and oblate shapes.
Irregularities on the dust grain surface can lead to variations in the surface charge distribution, and the wake can be perturbed also for smaller angular velocities of the grain. It is also noted that spherical grains with inhomogeneous surface properties will spin due to angular momentum transfer from ions \cite{Tsytovich_Vladimirov_2004}. Such inhomogeneities can also lead to complicated surface charge distributions, as illustrated elsewhere \cite{Miloch_Pecseli_Trulsen_2007}.
Due to the high inertia of the dust grains, the rotation of the grains will be slow in most experiments
\cite{Piel_Melzer_2002}. Hence, irregularities in the dust grain surface that give rise to a complicated surface charge distribution will be the main factor for the charge saturation on the insulating grain in the presence of directed radiation. This will be valid also for slowly spinning grains.
On the other hand, the surface charge distribution on the insulating dust grain exposed to directed radiation will lead to strong electric fields within the grain. This effect will be more pronounced on grains with surface irregularities. The irregularities will eventually be destroyed by strong electric fields. This will be valid also for stationary plasmas and is similar to the sterilization and destruction of bacteria by means of plasma used as a source of UV radiation \cite{McDonald_Curry_2002, Laroussi_Mendis_2003}.
Photo\-emission provides a method for controlling the charge on conducting grains both in vacuum \cite{Sickafoose_Colwell_2000} and in plasma \cite{Miloch_Vladimirov_2008}. For perfectly insulating grains, the total charge saturates only when the total grain charge remains negative, but it depends on the photon incidence angle. The charge should also saturate for slow spinning insulators due to surface irregularities. It oscillates for fast spinning insulating grains, however.
After pulses of radiation, the charge recovers to the value from before the onset of radiation. The recovery is initially mainly due to electrons, and then both electrons and ions. The charging can be approximated by an exponential function with the time constant $\tau$ that is initially comparable with the electron plasma period and then larger. This is in agreement with the previous results from dust grain charging simulations \cite{Miloch_Pecseli_Trulsen_2007}. The charge on conducting grains recovers within one ion plasma period. The small overshoot in the charging characteristics at approximately one ion plasma period after the switch off can be attributed to the ion response due to the reduced ion mass \cite{Miloch_Pecseli_Trulsen_2007}, and the formation of the ion focus. For insulators, the charge recovery may take up to 20 ion plasma periods. This is due to a complicated surface charge distribution on the grain.
For $\alpha=180^{\circ}$, the positive charge on the rear of the grain is reduced slowly after the pulse because of the reduced density in the wake, while the front side of the grain is positively charged by the ion flow. A quadrupole moment is present in the surface charge distribution for several ion plasma periods after a pulse. Consistently, the charge between and after the pulses is less negative for $\alpha=180^{\circ}$ than for other angles. For other angles, the surface charge distribution leads to a more negative total charge after the switch off as compared to the dust grain charge previous to photo\-emission.
The photon fluxes considered for conductors in this work can be achieved by commercially produced sources of UV radiation (e.g., low pressure mercury lamps) \cite{McDonald_Curry_2002}. In the case of lamps it would be necessary to collimate the light, but in the case of UV lasers the energies of $E_{h\nu} \in (4.8,7.2)~\mathrm{eV}$, corresponding to $\lambda \in (172,258)~\mathrm{nm}$, can be achieved for instance by excimer lasers used in photolithography \cite{Ewing_2000}. The photon energies in the range $E_{h\nu} \in (10.3,12.7)~\mathrm{eV}$, corresponding to $\lambda \in (97,120)~\mathrm{nm}$, are more difficult to obtain, but can still be achieved for instance by free electron lasers \cite{Neil_Meriminga_2002}. In our work we assumed the work function for the insulator to be $W=10~\mathrm{eV}$, which is similar to values from experiments with ice \cite{Westley_Baragiola_1995, Baragiola_Vidal_2003}. However, insulating grains can have lower work functions. The work function of pure ice is $W \approx 8.7~\mathrm{eV}$, and can be significantly lower if the ice contains impurities, as is expected in the atmosphere \cite{Klumov_Morfill_2005}. Sodium silicate glass can have a work function as low as $W=6~\mathrm{eV}$ \cite{Vishnakov_Trukhin_1991}. Therefore, on such insulators effects similar to those demonstrated in this study should be achievable with lower photon energies.
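The correspondence between the quoted photon energies and wavelengths follows from $\lambda=hc/E_{h\nu}\approx1239.84~\mathrm{nm\,eV}/E_{h\nu}$, as the short Python check below illustrates:
\begin{verbatim}
# Energy-wavelength conversion used above: lambda[nm] ~ 1239.84/E[eV].
for E in (4.8, 5.5, 7.2, 10.3, 11.0, 12.7):
    print(E, "eV ->", round(1239.84/E), "nm")
\end{verbatim}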
By illuminating a grain with short pulsed lasers, it is possible to modify the charge on the grain and excite potential oscillations in the wake. Another dust grain, if located in the wake, will experience these oscillations. Since the charge on the illuminated grain is determined by photo\-emission, the motion of the particle would provide a non-invasive diagnostic for measuring the charge on the grain located in the wake of the illuminated one. On the other hand, continuous illumination of a grain placed in the wake will fix the grain charge and allow for an accurate study of the wake of the other grain.
The results presented here do not depend on the ion to electron mass ratio, except for the saturation charge on the dust grain without photo\-emission and for the charging rate. The charge on the grain with photo\-emission does not change with the ion to electron mass ratio, because the photo\-emission current is independent of this ratio and the ion current to positively charged grains is negligible.
There are certain limitations of our model. The radiation is assumed to be unidirectional and the photo\-electrons to be monoenergetic. In many problems in laboratory plasmas, UV radiation will be more isotropic due to the plasma glow and light scattering, while photo\-electron energies will be statistically distributed. Isotropic radiation will cause more homogeneous distributions of the photo\-electrons and of the grain surface charge. We considered perfectly insulating and perfectly conducting dust grains. Finite conductivity due to impurities and resistivity can modify the results, especially for insulators. These issues were not considered in this work.
\section{Conclusions}
The results from numerical simulations of charging of isolated dust grains in flowing plasmas with photo\-emission were presented for perfectly insulating and perfectly conducting grains. By means of photo\-emission, the total charge on a conducting grain can be effectively controlled. The charge control on insulating grains is more difficult since no charge saturation is observed for stationary grains, and the charging characteristics depend on the angle between the incident photons and the plasma flow. For insulating grains, surface charge irregularities and rotation of the dust grain can redistribute the surface charge on the grain. Fast spinning of the grain results in oscillations of the value of the total grain charge and the density wake behind it.
During photo\-emission, the electrons are non-Boltzmann distributed in the vicinity of the grain. This makes a theoretical analysis of the problem difficult. The plasma is polarized in the vicinity of the grain, which can give rise to strong interactions between many dust grains. For insulators the interactions are controlled by a strong electric dipole moment on the surface, antiparallel to the direction of radiation. After pulses of radiation, the charge, density and potential distributions recover to the conditions from before the photo\-emission. The recovery takes approximately one ion plasma period for conducting grains, and several times longer for insulating grains, due to the complicated charge distributions on the dust grain surface.
By a fine adjustment of the charge with the use of photo\-emission, the coagulation of the dust grains can be induced due to large relative fluctuations of the charge when the total charge on a grain is small. Both continuous and pulsed radiation should allow for non-invasive diagnostics of the charge and wake structure in dusty plasma experiments.
\ack
This work was in part supported by the Norwegian Research Council, NFR, and by the Australian Research Council, ARC. Two of the authors (WJM and HLP) wish to thank Dr. J{\o}rgen Schou for useful discussions on the photo\-emission from insulating materials.
\section*{References}
\section{Introduction}
In this paper we establish relations between the arc space and the Lipschitz geometry of a singular real algebraic variety.
The interest in the Lipschitz geometry of real analytic and algebraic spaces emerged in the 1970s with a conjecture of Siebenmann and Sullivan: there are only countably many local Lipschitz structures on real analytic spaces. Subsequently the Lipschitz geometry of real and complex algebraic singularities attracted much attention and various methods have been developed to study it: stratification theory \cite{Mos85,Par93}, $L$-regular decompositions \cite{Kur92,Par94-L,KP06,Paw08}, Lipschitz triangulations \cite{Val05}, non-archimedean geometry \cite{HY}, and recently, in the complex case, resolution and low dimensional topology \cite{BNP}. In the algebraic case Siebenmann and Sullivan's conjecture was proved in \cite{Par88}. The general analytic case was solved in \cite{Val08}.
In this paper we study various versions of Lipschitz inverse mapping theorems, with respect to the inner distance, for homeomorphisms $f:X\rightarrow Y$ between (possibly singular) real algebraic set germs. Recall that a connected real algebraic, and more generally a connected semialgebraic, subset $X\subset\mathbb R^N$ is path-connected (by rectifiable curves), so we have an \emph{inner} distance on $X$, defined by the infimum over the length of rectifiable curves joining two given points in $X$.
We assume that the homeomorphism $f$ is semialgebraic and generically arc-analytic. For instance the recently studied continuous rational maps \cite{Kuc09,KN,KKK} are of this type.
Arc-analytic mappings were introduced to real algebraic geometry in \cite{Kur88}. Those are the mappings sending by composition real analytic arcs to real analytic arcs. It was shown in \cite{BM90,Par94} that the semialgebraic arc-analytic mappings coincide with the blow-Nash mappings. Moreover, by \cite{PP}, real algebraic sets admit algebraic stratifications with local semialgebraic arc-analytic triviality along each stratum.
What we prove can be stated informally as follows: if $f^{-1}$ is Lipschitz, then so is $f$ itself. The problem is non-trivial even when the germs $(X,x)$ and $(Y,y)$ are non-singular \cite{FKP}. When these germs are singular, then the problem is much more delicate. In fact we have to assume that the motivic measures of the real analytic arcs drawn on $(X,x)$ and $(Y,y)$ are equal.
Developing a rigorous theory of motivic measure on the space of real analytic arcs for real algebraic sets is another main goal of this paper.
We state below a concise version of our main results. For more precise and more general statements see Theorems \ref{thm:IFT} and \ref{thm:mainLip}.
\begin{nthm}
Let $f:(X,x)\rightarrow(Y,y)$ be the germ of a semialgebraic generically arc-analytic homeomorphism between two real algebraic set germs, that are of pure dimension\footnote{For ease of reading, in the introduction we avoid varieties admitting points which have a structure of smooth submanifold of smaller dimension as in the handle of the Whitney umbrella $\{x^2=zy^2\}\subset\mathbb R^3$.} $d$.
Assume that the motivic measures of the real analytic arcs centered at $x$ in $X$ and of the real analytic arcs centered at $y$ in $Y$ are equal (see Section \ref{sec:motivic} for the definition of the motivic measure). Then
\begin{enumerate}
\item If the Jacobian determinant of $f$ is bounded from below then it is bounded from above and $f^{-1}$ is generically arc-analytic.
\item If the inverse $f^{-1}$ of $f$ is Lipschitz with respect to the inner distance then so is $f$.
\end{enumerate}
\end{nthm}
The proof of this theorem is based on motivic integration. Recall that in the case of complex algebraic varieties, motivic integration was introduced by M. Kontsevich for non-singular varieties in order to avoid the use of $p$-adic integrals. The theory was then developed and extended to the singular case in \cite{DL99,Bat98,DL02,Loo02}. The motivic measure is defined on the space of formal arcs drawn on an algebraic variety and takes values in a Grothendieck ring which encodes all the additive invariants of the underlying category. One main ingredient consists in reducing the study to truncated arcs in order to work with finite dimensional spaces. Notice that since the seminal paper of Nash \cite{Nash}, it has been established that the arc space of a variety encodes a great deal of information about its singularities.
In the real algebraic set-up, arguments coming from motivic integration were used in \cite{KP03,Fic05,Cam16,Cam17} to study and classify the singularities of real algebraic function germs.
In the present paper we construct a motivic measure and a motivic integral for possibly singular real algebraic varieties. Similarly to the complex case, the motivic integral comes together with a change of variables formula which is convenient to do actual computations in terms of resolution of singularities. In our real algebraic set-up this formula holds for generically one-to-one Nash maps and not merely for the birational ones.
A first difference of the present construction compared to the complex one is that we work with real analytic arcs and not with all formal arcs. However, thanks to the Artin approximation theorem, this difference is minor. More importantly, it is not possible to follow exactly the construction of the motivic measure in the complex case because of several additional difficulties arising from the absence in the real set-up of the Nullstellensatz and of the theorem of Chevalley (the image of a Zariski-constructible set by a regular mapping is Zariski-constructible).
The real motivic measure and the real motivic integral are constructed and studied in Section \ref{sec:motivic}.
\section{Geometric framework}
Throughout this paper, we say that a subset $X\subset\mathbb R^N$ is an algebraic set if it is closed for the Zariski topology, i.e. $X$ may be described as the intersection of the zero sets of polynomials with real coefficients. We denote by $I(X)$ the ideal of $\mathbb R[x_1,\ldots,x_N]$ consisting of the polynomials vanishing on $X$. By noetherianity, we may always assume that the above intersection is indexed by a finite set\footnote{Actually, noticing that $f_1=\cdots=f_s=0\Leftrightarrow f_1^2+\cdots+f_s^2=0$, we may always describe a real algebraic set as the zero-set of only one polynomial.} and that $I(X)=(f_1,\ldots,f_s)$ is finitely generated. The dimension $\dim X$ of $X$ is the dimension of the ring $\mathcal{P}(X)=\quotient{\mathbb R[x_1,\ldots,x_N]}{I(X)}$ of polynomial functions on $X$.
The ring $\mathcal{R}(X)$ of regular functions on $X$ is given by the localization of $\mathcal{P}(X)$ with respect to the multiplicative set $\{h\in\mathcal{P}(X),\,h^{-1}(0)=\varnothing\}$. Regular maps are the morphisms of real algebraic sets.
Unless otherwise stated, we will always use the Euclidean topology and not the Zariski one (for instance for the notions of homeomorphism, map germ or closure).
We say that a $d$-dimensional algebraic set $X$ is non-singular at $x\in X$ if there exist $g_1,\ldots,g_{N-d}\in I(X)$ and an Euclidean open neighborhood $U$ of $x$ in $\mathbb R^N$ such that $U\cap X=U\cap V(g_1,\ldots,g_{N-d})$ and $\operatorname{rank}\left(\pd{g_i}{x_j}(x)\right)=N-d$. Then there exists an open semialgebraic neighborhood of $x$ in $V$ which is a $d$-dimensional Nash submanifold. Notice that the converse doesn't hold \cite[Example 3.3.12.b.]{BCR}. We denote by $\operatorname{Reg}(X)$ the set of non-singular points of $X$. We denote by $X_\mathrm{sing}=X\setminus\operatorname{Reg}(X)$ the set of singular points of $X$, it is an algebraic subset of strictly smaller dimension, see \cite[Proposition 3.3.14]{BCR}.
A semialgebraic subset of $\mathbb R^N$ is the projection of an algebraic subset of $\mathbb R^{N+m}$ for some $m\in\mathbb N_{\ge0}$. Actually, by a result of Motzkin \cite{Mot70}, we may always assume that $m=1$. Equivalently, a subset $S\subset\mathbb R^N$ is semialgebraic if and only if there exist polynomials $f_i,g_{i,1},\ldots,g_{i,s_i}\in\mathbb R[x_1,\ldots,x_N]$ such that $$S=\bigcup_{i=1}^r\left\{x\in\mathbb R^N,\,f_i(x)=0,\,g_{i,1}(x)>0,\ldots,g_{i,s_i}(x)>0\right\}.$$
Notice that semialgebraic sets are closed under union, intersection and cartesian product. They are also closed under projection by the Tarski--Seidenberg Theorem. A function is semialgebraic if so is its graph.
We refer the reader to \cite{BCR} for more details on real algebraic geometry.
Let $X$ be a non-singular real algebraic set and $f:X\rightarrow\mathbb R$. We say that $f$ is a Nash function if it is $C^\infty$ and semialgebraic. Since a semialgebraic function satisfies a non-trivial polynomial equation and since a smooth function satisfying a non-trivial real analytic equation is real analytic \cite{Mal67,Sic70,Boc70}, we obtain that $f$ is Nash if and only if $f$ is real analytic and satisfies a non-trivial polynomial equation.
A subset of a real analytic variety is said to be arc-symmetric in the sense of \cite{Kur88} if, given a real analytic arc, either the arc is entirely included in the set or it meets the set at isolated points only. We are going to work with a slightly different notion defined in \cite{Par04}. We define $\mathcal{AS}^N$ as the boolean algebra generated by semialgebraic\footnote{A subset of $\P_\mathbb R^N$ is semialgebraic if it is for $\P_\mathbb R^N$ seen as an algebraic subset of some $\mathbb R^M$, or, equivalently, if the intersection of the set with each canonical affine chart is semialgebraic.} arc-symmetric subsets of $\P_\mathbb R^N$. We set $$\mathcal{AS}=\bigcup_{N\in\mathbb N_{\ge0}}\mathcal{AS}^N.$$
Formally, a subset $A\subset\P_\mathbb R^N$ is an $\mathcal{AS}$-set if it is semialgebraic and if, given a real analytic arc $\gamma:(-1,1)\rightarrow\P_\mathbb R^N$ such that $\gamma(-1,0)\subset A$, there exists $\varepsilon>0$ such that $\gamma(0,\varepsilon)\subset A$.
Notice that closed $\mathcal{AS}$-subsets of $\P_\mathbb R^N$ are exactly the closed sets of a noetherian topology.
For more on arc-symmetric and $\mathcal{AS}$ sets we refer the reader to \cite{KP07}.
One important property of the $\mathcal{AS}$ sets that we rely on throughout this paper is that this class admits an additive invariant richer than the Euler characteristic with compact support, namely the virtual Poincaré polynomial presented later in Section \ref{sec:GR-AS}. This is in contrast to the semialgebraic sets, for which, by a theorem of R. Quarez \cite{Qua01}, every additive homeomorphism invariant factorises through the Euler characteristic with compact support.
Let $E,B,F$ be three $\mathcal{AS}$-sets. We say that $p:E\rightarrow B$ is an $\mathcal{AS}$ piecewise trivial fibration with fiber $F$ if there exists a finite partition $B=\sqcup B_i$ into $\mathcal{AS}$-sets such that $p^{-1}(B_i)\simeq B_i\times F$ where $\simeq$ means bijection with $\mathcal{AS}$-graph.
Notice that, thanks to the noetherianity of the $\mathcal{AS}$-topology, if $p:E\rightarrow B$ is locally trivial with fiber $F$ for the $\mathcal{AS}$-topology\footnote{i.e. for every $x\in B$ there is $U\subset B$ an $\mathcal{AS}$-open subset containing $x$ such that $p^{-1}(U)\simeq U\times F$.}, then it is an $\mathcal{AS}$ piecewise trivial fibration.
\section{Real motivic integration}\label{sec:motivic}
This section is devoted to the construction of a real motivic measure. Notice that a first step in this direction was done by R. Quarez in \cite{Qua01} using the Euler characteristic with compact support for semialgebraic sets. The measure constructed in this section takes advantage of the $\mathcal{AS}$-machinery in order to use the virtual Poincaré polynomial which is a real analogue of the Hodge--Deligne polynomial in real algebraic geometry. This additive invariant is richer than the Euler characteristic since it encodes, for example, the dimension.
Since real algebraic geometry is quite different from complex algebraic geometry as there is, for example, no Nullstellensatz or Chevalley's theorem, the classical construction of the motivic measure does not work as it is in this real context and it is necessary to carefully handle these differences.
\subsection{Real arcs and jets}
We follow the notations of \cite[\S2.4]{Cam16}.
\begin{defn}
The space of real analytic arcs on $\mathbb R^N$ is defined as $$\L(\mathbb R^N)=\left\{\gamma:(\mathbb R,0)\rightarrow\mathbb R^N,\,\gamma\text{ real analytic}\right\}$$
\end{defn}
\begin{defn}
For $n\in\mathbb N_{\ge0}$, the space of $n$-jets on $\mathbb R^N$ is defined as $$\L_n(\mathbb R^N)=\quotient{\L(\mathbb R^N)}{\sim_n}$$ where $\gamma_1\sim_n\gamma_2\Leftrightarrow\gamma_1\equiv\gamma_2\mod t^{n+1}$.
\end{defn}
\begin{notation}
For $m>n$, we consider the following \emph{truncation maps}: $$\pi_n:\L(\mathbb R^N)\rightarrow\L_n(\mathbb R^N)$$ and $$\pi^m_n:\L_m(\mathbb R^N)\rightarrow\L_n(\mathbb R^N).$$
\end{notation}
\begin{defn}
For an algebraic set $X\subset\mathbb R^N$, we define the space of real analytic arcs on $X$ as $$\L(X)=\left\{\gamma\in\L(\mathbb R^N),\,\forall f\in I(X),\,f(\gamma(t))=0\right\}$$ and the space of $n$-jets on $X$ as $$\L_n(X)=\left\{\gamma\in\L_n(\mathbb R^N),\,\forall f\in I(X),\,f(\gamma(t))\equiv0\mod t^{n+1}\right\}.$$
The truncation maps induce the maps $$\pi_n:\L(X)\rightarrow\L_n(X)$$ and $$\pi^m_n:\L_m(X)\rightarrow\L_n(X).$$
\end{defn}
\begin{rem}
Notice that $\L_n(X)$ is a real algebraic variety. Indeed, let $f\in I(X)$ and $a_0,\ldots,a_n\in\mathbb R^N$, then we have the following expansion $$f(a_0+a_1t+\cdots+a_nt^n)=P_0^f(a_0,\ldots,a_n)+P_1^f(a_0,\ldots,a_n)t+\cdots+P_n^f(a_0,\ldots,a_n)t^n+\cdots$$ where the coefficients $P_i^f$ are polynomials. Hence $\L_n(X)$ is the algebraic subset of $\mathbb R^{N(n+1)}$ defined as the zero-set of the polynomials $P_i^f$ for $f\in I(X)$ and $i\in\{0,\ldots,n\}$. \\
In the same way, we may think of $\L(X)$ as an infinite-dimensional algebraic variety.
\end{rem}
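For illustration, the polynomials $P^f_i$ can be computed symbolically; the Python sketch below does this for the hypothetical example of the cone $X=\{x^2+y^2-z^2=0\}$ and $n=2$ (this variety is used here only as an illustration).
\begin{verbatim}
# Sketch: compute the defining polynomials P_i^f of L_2(X) for the
# cone X = {x^2 + y^2 - z^2 = 0}.
import sympy as sp

t = sp.symbols('t')
a = sp.symbols('a0:3')   # coefficients of x(t)
b = sp.symbols('b0:3')   # coefficients of y(t)
c = sp.symbols('c0:3')   # coefficients of z(t)

x = sum(ai*t**i for i, ai in enumerate(a))
y = sum(bi*t**i for i, bi in enumerate(b))
z = sum(ci*t**i for i, ci in enumerate(c))

poly = sp.Poly(sp.expand(x**2 + y**2 - z**2), t)
for i in range(3):                    # P_0^f, P_1^f, P_2^f
    print("P_%d =" % i, poly.coeff_monomial(t**i))
\end{verbatim}
The common zero-set of the printed polynomials in $\mathbb R^{9}$ is then $\L_2(X)$.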
\begin{rem}
When $X$ is non-singular the following equality holds: $$\L_n(X)=\pi_n(\L(X))$$
Indeed, using Hensel's lemma, we may always lift an $n$-jet to a formal arc on $X$ and then use the Artin approximation theorem to find an analytic arc whose expansion coincides with it modulo $t^{n+1}$. However this equality doesn't hold anymore when $X$ is singular, as highlighted in \cite[Example 2.30]{Cam16}. Hence it is necessary to distinguish the space $\L_n(X)$ of $n$-jets on $X$ and the space $\pi_n(\L(X))\subset\L_n(X)$ of $n$-jets on $X$ which may be lifted to real analytic arcs on $X$. We have the following exact statement.
\end{rem}
\begin{prop}[{\cite[Proposition 2.31]{Cam16}}]
Let $X$ be an algebraic subset of $\mathbb R^N$. Then the following are equivalent :
\begin{enumerate}[label=(\roman*),nosep]
\item $X$ is non-singular.
\item $\forall n\in\mathbb N_{\ge0}$, $\pi_n:\L(X)\rightarrow\L_n(X)$ is surjective.
\item $\forall n\in\mathbb N_{\ge0}$, $\pi^{n+1}_n:\L_{n+1}(X)\rightarrow\L_n(X)$ is surjective.
\end{enumerate}
\end{prop}
\begin{prop}[{\cite[Proposition 2.33]{Cam16}}]\label{prop:dimfibers}
Let $X$ be a $d$-dimensional algebraic subset of $\mathbb R^N$. Then
\begin{enumerate}[label=(\arabic*),ref=\ref{prop:dimfibers}.(\arabic*),nosep]
\item\label{item:truncfibers} For $m\ge n$, the dimensions of the fibers of ${\pi^{m}_n}_{|\pi_{m}(\L(X))}:\pi_m\left(\L(X)\right)\rightarrow\pi_n\left(\L(X)\right)$ are smaller than or equal to $(m-n)d$.
\item The fiber $\left(\pi^{n+1}_n\right)^{-1}(\gamma)$ of $\pi^{n+1}_n:\L_{n+1}(X)\rightarrow\L_n(X)$ is either empty or isomorphic to $T^{\mathrm{Zar}}_{\gamma(0)}X$.
\end{enumerate}
\end{prop}
\begin{thm}[A motivic corollary of Greenberg's theorem]\label{thm:greenberg}
Let $X\subset\mathbb R^N$ be an algebraic subset. There exists $c\in\mathbb N_{>0}$ (depending only on $I(X)$) such that $$\forall n\in\mathbb N_{\ge0},\,\pi_n(\L(X))=\pi^{cn}_n(\L_{cn}(X))$$
\end{thm}
\begin{proof}
Assume that $I(X)=(f_1,\ldots,f_s)$.
By the main theorem of \cite{Gre66}, there exist $N_0\in\mathbb N_{>0}$, $l\in\mathbb N_{>0}$ and $\sigma\in\mathbb N_{\ge0}$ (depending only on the ideal of $\mathbb R\{t\}[x_1,\ldots,x_N]$ generated by the $f_i\in\mathbb R[x_1,\ldots,x_N]\subset\mathbb R\{t\}[x_1,\ldots,x_N]$) such that $\forall\nu\ge N_0,\,\forall\gamma\in\mathbb R\{t\}^N$, if $f_1(\gamma(t))\equiv\cdots\equiv f_s(\gamma(t))\equiv0\mod t^\nu$, then there exists $\eta\in\mathbb R\{t\}^N$ such that $\eta(t)\equiv\gamma(t)\mod t^{\left\lfloor\frac{\nu}{l}\right\rfloor-\sigma}$ and $f_1(\eta(t))=\cdots=f_s(\eta(t))=0$.
Fix $c=\max\left(l(\sigma+2),N_0\right)$. We are going to prove that $$\forall n\in\mathbb N_{\ge0},\,\pi_n(\L(X))=\pi^{cn}_n(\L_{cn}(X))$$
The inclusion $\pi_n(\L(X))\subset\pi^{cn}_n(\L_{cn}(X))$ always holds since $\pi_n=\pi^{cn}_n\circ\pi_{cn}$, and the case $n=0$ is trivial since $\L_0(X)=X$ consists of the constant arcs. Hence it is enough to prove that $\pi^{cn}_n(\L_{cn}(X))\subset\pi_n(\L(X))$ for $n\ge1$.
Let $n\ge1$. Let $\tilde\gamma\in\L_{cn}(X)$. Then there exists $\gamma\in\mathbb R\{t\}^N$ such that $\gamma(t)\equiv\tilde\gamma(t)\mod t^{cn+1}$ and $$f_1(\gamma(t))\equiv\cdots\equiv f_s(\gamma(t))\equiv0\mod t^{cn+1}$$
Notice that $cn+1\ge N_0$ so that there exists $\eta\in\mathbb R\{t\}^N$ such that $\eta(t)\equiv\gamma(t)\mod t^{\left\lfloor\frac{cn+1}{l}\right\rfloor-\sigma}$ and $f_1(\eta(t))=\cdots=f_s(\eta(t))=0$.
Since $$\left\lfloor\frac{cn+1}{l}\right\rfloor-\sigma>n$$ we have that $\pi^{cn}_n(\tilde\gamma)=\pi_n(\eta)\in\pi_n(\L(X))$.
\end{proof}
\begin{rem}
By Tarski--Seidenberg theorem, $\pi_n(\L(X))=\pi^{cn}_n(\L_{cn}(X))$ is semialgebraic as the projection of an algebraic set. However, $\pi_n(\L(X))$ may not be $\mathcal{AS}$ (and thus not Zariski-constructible) as shown in \cite[Example 2.32]{Cam16}.
This is a major difference with the complex case, where $\pi_n(\L(X))$ is Zariski-constructible by Chevalley's theorem as the projection of a complex algebraic variety.
\end{rem}
\begin{defn}
Let $X$ be an algebraic subset of $\mathbb R^N$. We define the ideal $H_X$ of $\mathbb R[x_1,\ldots,x_N]$ by $$H_X=\sum_{f_1,\ldots,f_{N-d}\in I(X)}\Delta(f_1,\ldots,f_{N-d})((f_1,\ldots,f_{N-d}):I(X))$$ where \begin{itemize}[nosep]
\item $d=\dim X$
\item $\Delta(f_1,\ldots,f_{N-d})$ is the ideal generated by the $N-d$ minors of the Jacobian matrix $$\left(\frac{\partial f_i}{\partial x_j}\right)_{\substack{i=1,\ldots,N-d\\j=1,\ldots,N}}$$
\item $((f_1,\ldots,f_{N-d}):I(X))=\left\{g\in\mathbb R[x_1,\ldots,x_N],\,gI(X)\subset(f_1,\ldots,f_{N-d})\right\}$ is the ideal quotient of the ideal $(f_1,\ldots,f_{N-d})$ by the ideal $I(X)$
\end{itemize}
\end{defn}
\begin{rem}
By \cite[Lemma 4.1]{Cam16}, $V(H_X)=X_\mathrm{sing}$.
\end{rem}
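For instance (a direct computation, not taken from \cite{Cam16}), let $X=V(y^2-x^3)\subset\mathbb R^2$ be the cuspidal cubic, so that $d=1$ and $I(X)=(y^2-x^3)$. Taking $f_1=y^2-x^3$, we get $\Delta(f_1)=\left(-3x^2,2y\right)=(x^2,y)$ and $((f_1):I(X))=\mathbb R[x,y]$; since every $f\in I(X)$ is a multiple of $f_1$, the other summands contribute nothing new, whence $H_X=(x^2,y)$ and indeed $V(H_X)=\{(0,0)\}=X_\mathrm{sing}$.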
\begin{defn}
Let $X\subset\mathbb R^N$ be an algebraic subset and $e\in\mathbb N_{\ge0}$. We set $$\L^{(e)}(X)=\left\{\gamma\in\L(X),\,\exists h\in H_X,\,h(\gamma(t))\nequiv0\mod t^{e+1}\right\}$$
\end{defn}
\begin{rem}\label{rem:singarcs}
From now on, we set $$\L(X_\mathrm{sing})=\left\{\gamma\in\L(\mathbb R^N),\,\forall h\in H_X,\,h(\gamma(t))=0\right\}$$ and $$\L_n(X_\mathrm{sing})=\left\{\gamma\in\L_n(\mathbb R^N),\,\forall h\in H_X,\,h(\gamma(t))\equiv0\mod t^{n+1}\right\}.$$
Notice that $$\left\{\gamma\in\L(\mathbb R^N),\,\forall h\in H_X,\,h(\gamma(t))=0\right\}=\left\{\gamma\in\L(\mathbb R^N),\,\forall f\in I(X_\mathrm{sing}),\,f(\gamma(t))=0\right\}$$ but be careful that
\begin{align*}
&\left\{\gamma\in\L_n(\mathbb R^N),\,\forall h\in H_X,\,h(\gamma(t))\equiv0\mod t^{n+1}\right\}\\
&\quad\quad\quad\quad\neq\left\{\gamma\in\L_n(\mathbb R^N),\,\forall f\in I(X_\mathrm{sing}),\,f(\gamma(t))\equiv0\mod t^{n+1}\right\}
\end{align*}
Notice also that, since the proof of Greenberg's theorem (Theorem \ref{thm:greenberg}) is purely algebraic, it also holds for $\L(X_\mathrm{sing})$ (it suffices to use the ideal $H_X$ in the proof).
\end{rem}
\begin{rem}
$\L(X)=\left(\displaystyle\bigcup_{e\in\mathbb N_{\ge0}}\L^{(e)}(X)\right)\bigsqcup\L(X_\mathrm{sing})$
\end{rem}
The following proposition is a real version of \cite[Lemma 4.1]{DL99}. Its proof is quite similar to the one of \cite[Lemma 4.5]{Cam16}.
\begin{prop}\label{prop:ptf}
Let $X$ be a $d$-dimensional algebraic subset of $\mathbb R^N$ and $e\in\mathbb N_{\ge0}$. Then, for $n\ge e$,
\begin{enumerate}[nosep,label=(\roman*)]
\item $\pi_n\left(\L^{(e)}(X)\right)\in\mathcal{AS}$
\item $\pi^{n+1}_n:\pi_{n+1}\left(\L^{(e)}(X)\right)\rightarrow\pi_{n}\left(\L^{(e)}(X)\right)$ is an $\mathcal{AS}$ piecewise trivial fibration with fiber $\mathbb R^d$.
\end{enumerate}
\end{prop}
\begin{proof}
By \cite[Lemma 4.7]{Cam16}, $\L^{(e)}(X)$ is covered by finitely many sets of the form $$A_{\mathbf f,h,\delta}=\left\{\gamma\in\L(\mathbb R^N),\,(h\delta)(\gamma(t))\nequiv0\mod t^{e+1}\right\}$$ where $\mathbf f=(f_1,\ldots,f_{N-d})\in I(X)^{N-d}$, $\delta$ is an $(N-d)$-minor of the Jacobian matrix $\left(\frac{\partial f_i}{\partial x_j}\right)_{\substack{i=1,\ldots,N-d\\j=1,\ldots,N}}$ and $h\in((f_1,\ldots,f_{N-d}):I(X))$.
Moreover, $$\L(X)\cap A_{\mathbf f,h,\delta}=\left\{\gamma\in\L(\mathbb R^N),\,f_1(\gamma(t))=\cdots=f_{N-d}(\gamma(t))=0,\,(h\delta)(\gamma(t))\nequiv0\mod t^{e+1}\right\},$$
so that $\displaystyle\L^{(e)}(X)=\L(X)\cap\bigcup_{\mathrm{finite}}A_{\mathbf f,h,\delta}=\bigcup_{\mathrm{finite}}\left(\L(X)\cap A_{\mathbf f,h,\delta}\right)$. \\
For $e'\le e$, we set $$A_{\mathbf f,h,\delta,e'}=\left\{\gamma\in A_{\mathbf f,h,\delta},\,\operatorname{ord}_t\delta(\gamma(t))=e',\,\operatorname{ord}_t\delta'(\gamma(t))\ge e'\text{ for every $(N-d)$-minor $\delta'$ of }\left(\frac{\partial f_i}{\partial x_j}\right)\right\}$$
in order to refine the above cover: $\displaystyle\L^{(e)}(X)=\bigcup_{\mathrm{finite}}\left(\L(X)\cap A_{\mathbf f,h,\delta,e'}\right)$. \\
Fix some set $A=A_{\mathbf f,h,\delta,e'}\cap\L(X)$. Notice that if $\pi_n(\gamma)\in\pi_n(A)$ and if $\pi_{n+1}(\eta)\in\pi_{n+1}(\L^{(e)}(X))$ is in the preimage of $\pi_n(\gamma)$ by $\pi^{n+1}_n$ then $\pi_{n+1}(\eta)\in\pi_{n+1}(A)$. \\
Indeed, $\eta\in\L(X)$ so $f_1(\eta)=\cdots=f_{N-d}(\eta)=0$ and since $\pi_n(\eta)=\pi_n(\gamma)$, we also get that $(h\delta)(\eta(t))\nequiv0\mod t^{e+1}$, $\operatorname{ord}_t\delta(\eta(t))=e'$ and $\operatorname{ord}_t\delta'(\eta(t))\ge e'$. \\
Hence it is enough to prove the proposition for $\pi^{n+1}_n:\pi_{n+1}(A)\rightarrow\pi_n(A)$. \\
We are first going to prove that the fibers of $\pi^{n+1}_n:\pi_{n+1}(A)\rightarrow\pi_n(A)$ are $d$-dimensional affine subspaces of $\mathbb R^N$. We can reorder the coordinates so that $\delta$ is the determinant of the first $N-d$ columns of $\Delta=\left(\frac{\partial f_i}{\partial x_j}\right)$. Then, similarly to the proof of \cite[Lemma 4.5]{Cam16}, there is a matrix $P$ such that $P\Delta=(\delta I_{N-d},W)$ and $\forall\gamma\in A,\,W(\gamma(t))\equiv0\mod t^{e'}$.
Fix $\gamma\in A$. The elements of the fiber of $\pi_{n+1}(A)\rightarrow\pi_n(A)$ over $\pi_{n}(\gamma)$ are exactly the $$\pi_{n+1}\big(\gamma(t)+t^{n+1}\nu(t)\big)$$ for $\nu\in\mathbb R\{t\}^N$ such that $\mathbf f(\gamma(t)+t^{n+1}\nu(t))=0$.
\noindent Using Taylor expansion, this last condition becomes $$\mathbf f(\gamma(t))+t^{n+1}\Delta(\gamma(t))\nu(t)+t^{2(n+1)}(\cdots)=0$$
\noindent Or equivalently, since $\gamma\in A$, $$t^{n+1}\Delta(\gamma(t))\nu(t)+t^{2(n+1)}(\cdots)=0$$
\noindent Multiplying by $t^{-n-1-e'}P$, we get
$$t^{-e'}\big(\delta(\gamma(t))I_{N-d},W(\gamma(t))\big)\nu(t)+t^{n+1-e'}(\cdots)=0$$
\noindent Notice that $\operatorname{ord}_t\delta(\gamma(t))=e'$. Hence, by Hensel's lemma and the Artin approximation theorem, the sought fiber is the set of the $$\pi_{n+1}\big(\gamma(t)\big)+t^{n+1}\nu_0$$ with $\nu_0\in\mathbb R^N$ satisfying the linear system induced by
$$t^{-e'}\big(\delta(\gamma(t))I_{N-d},W(\gamma(t))\big)\nu_0\equiv0\mod t$$
Let $\nu_0$ be a solution; then its first $N-d$ coordinates are expressed as linear combinations of the last $d$ ones. Therefore each fiber of $\pi^{n+1}_n:\pi_{n+1}(A)\rightarrow\pi_n(A)$ is a $d$-dimensional affine subspace of $\mathbb R^N$. \\
By Greenberg's theorem (Theorem \ref{thm:greenberg}), there is a $c\in\mathbb N_{>0}$ such that $\pi_{cn}(A)$ is an $\mathcal{AS}$-set. Then $\pi_n(A)$ is an $\mathcal{AS}$-set as the image of $\pi^{cn}_n:\pi_{cn}(A)\rightarrow\pi_n(A)$ whose fibers have odd Euler characteristic with compact support, see \cite[Theorem 4.3]{Par04}. \\
Finally, notice that $\pi_{n+1}(A)\subset\pi_{n}(A)\times\mathbb R^N$ and that $\pi^{n+1}_n:\pi_{n+1}(A)\rightarrow\pi_n(A)$ is simply the first projection. Then, according to the following lemma, $\pi^{n+1}_n:\pi_{n+1}(A)\rightarrow\pi_n(A)$ is an $\mathcal{AS}$ piecewise trivial fibration.
\end{proof}
\begin{lemma}
Let $A$ be an $\mathcal{AS}$-set, $\Omega\subset A\times\mathbb R^N$ be an $\mathcal{AS}$-set and $\pi:\Omega\rightarrow A$ be the natural projection. \\
Assume that for all $x\in A$, the fiber $\Omega_x=\pi^{-1}(x)$ is a $d$-dimensional affine subspace of $\mathbb R^N$. \\
Then $\pi:\Omega\rightarrow A$ is an $\mathcal{AS}$ piecewise trivial fibration.
\end{lemma}
\begin{proof}
Up to embedding the space of $d$-dimensional affine subspaces of $\mathbb R^N$ into the space of $(d+1)$-dimensional vector subspaces of $\mathbb R^{N+1}$, we may assume that the fibers are linear subspaces. \\
Denote by $G=\mathbb G_{N,d}$ the Grassmannian of $d$-dimensional linear subspaces of $\mathbb R^N$ and let $E\to G$ be the tautological bundle; i.e. for $g\in G$, the fiber $E_g$ is the subspace given by $g$. \\
We are first going to prove that the following set is $\mathcal{AS}$, $$\tilde A=\left\{(x,g)\in A\times G,\,\Omega_x=E_g\right\}.$$
Identifying $G$ with the set of symmetric idempotent $(N\times N)$-matrices of trace $d$, see \cite[Proof of Theorem 3.4.4]{BCR}, for $i=1,\ldots,N$ we define the regular map $w_i:G\to\mathbb R^N$ as the projection to the coordinates corresponding to the $i$-th column of such matrices. Then $E_g$ is linearly spanned by $\left(w_i(g)\right)_{i=1,\ldots,N}$.
Hence $L_i=\left\{(v,g)\in\mathbb R^N\times G,\,v=w_i(g)\right\}$ is $\mathcal{AS}$. Thus
$$\left\{(x,v,g)\in A\times\mathbb R^N\times G,\,v=w_i(g)\in\Omega_x\right\}=(\Omega\times G)\cap(A\times L_i)$$
is $\mathcal{AS}$ and its projection $$X_i=\left\{(x,g)\in A\times G,\,w_i(g)\in\Omega_x\right\}$$ is also $\mathcal{AS}$ as the image of an $\mathcal{AS}$-set by an injective $\mathcal{AS}$-map, see \cite[Theorem 4.5]{Par04}. \\
Then $\tilde A = \bigcap_i X_i$ is $\mathcal{AS}$ as claimed. \\
Let $x_0\in A$. Fix a coordinate system on $\mathbb R^N$ such that $\Omega_{x_0}=\left\{x_{d+1}=\cdots=x_N=0\right\}$ and fix the projection $\Lambda:\mathbb R^N\rightarrow\mathbb R^d$ defined by $\Lambda(x_1,\ldots,x_N)=(x_1,\ldots,x_d)$.
Let $\omega:\tilde A\rightarrow\mathbb R^{N\choose d}$ be such that the coordinates of $\omega(x,g)$ are the $d$-minors of $\left(\Lambda(w_i(g))\right)_{i=1,\ldots,N}$.
Then $$\tilde{A_0}=\left\{(x,g)\in\tilde A,\,\Lambda:\Omega_x\rightarrow\mathbb R^d \text{ is of rank }d\right\}$$ is an $\mathcal{AS}$-set as the complement of $\omega^{-1}(0)$.
Therefore $$A_0=\left\{x\in A,\,\Lambda:\Omega_x\rightarrow\mathbb R^d \text{ is of rank }d\right\}$$ is $\mathcal{AS}$ as the image of the $\mathcal{AS}$-set $\tilde{A_0}$ by the projection to the first factor which is an injective $\mathcal{AS}$-map. \\
Thus $\Phi:\pi^{-1}(A_0)\rightarrow A_0\times\mathbb R^d$ defined by $\Phi(x,v)=(x,\Lambda(v))$ is a bijection whose graph is $\mathcal{AS}$ and which makes the following diagram commute.
$$\xymatrix{\pi^{-1}(A_0) \ar[rr]^{\Phi} \ar[rd]_{\pi} & & A_0\times\mathbb R^d \ar[ld]^{\mathrm{pr}_{A_0}} \\ & A_0 & }$$
Consequently $\pi:\Omega\rightarrow A$ is trivial over $A_0$. Since every $d$-dimensional linear subspace of $\mathbb R^N$ projects isomorphically onto at least one of the finitely many coordinate $d$-planes, finitely many sets of the form $A_0$ cover $A$, and hence $\pi:\Omega\rightarrow A$ is an $\mathcal{AS}$ piecewise trivial fibration.
\end{proof}
\subsection{The Grothendieck ring of $\mathcal{AS}$-sets}\label{sec:GR-AS}
\begin{defn}
Let $K_0(\mathcal{AS})$ be the free abelian group generated by $[X]$, $X\in\mathcal{AS}$, modulo
\begin{enumerate}[label=(\roman*),nosep]
\item Let $X,Y\in\mathcal{AS}$. If there is a bijection $X\rightarrow Y$ with $\mathcal{AS}$-graph, then $[X]=[Y]$;
\item If $Y\subset X$ are two $\mathcal{AS}$-sets, then $[X]=[X\setminus Y]+[Y]$.
\end{enumerate}
We put a ring structure on $K_0(\mathcal{AS})$ by adding the following relation:
\begin{enumerate}[label=(\roman*),nosep,resume]
\item If $X,Y\in\mathcal{AS}$, then $[X\times Y]=[X][Y]$.
\end{enumerate}
\end{defn}
\begin{notation}
We set $0=[\varnothing]$, $1=[\mathrm{pt}]$ and $\mathbb L=[\mathbb R]$.
\end{notation}
\begin{rem}
Notice that $0$ is the unit of the addition and $1$ the unit of the multiplication.
\end{rem}
\begin{rem}
If $p:E\rightarrow B$ is an $\mathcal{AS}$ piecewise trivial fibration with fiber $F$, then $$[E]=[B][F]$$
\end{rem}
\begin{defn}
We set $\mathcal M=K_0(\mathcal{AS})\left[\mathbb L^{-1}\right]$.
\end{defn}
The authors of \cite{MP03} proved that there exists a unique additive (and multiplicative) invariant of real algebraic varieties, up to biregular morphisms, which coincides with the Poincaré polynomial for compact non-singular varieties. This construction relies on the weak factorization theorem. Then G. Fichou \cite{Fic05} extended this construction to $\mathcal{AS}$-sets up to Nash isomorphisms.
Next, in \cite{MP11}, they gave a new construction of the virtual Poincaré polynomial, related to the weight filtration they introduced in real algebraic geometry. They proved it is an invariant of $\mathcal{AS}$-sets up to homeomorphism with $\mathcal{AS}$-graph. Actually, using the additivity, they proved it is an invariant of $\mathcal{AS}$-sets up to $\mathcal{AS}$-bijections (see \cite[Remark 4.15]{Cam17}).
\begin{thm}[{\cite{MP03,Fic05,MP11}}]
There is a unique ring morphism $\beta:K_0(\mathcal{AS})\rightarrow\mathbb Z[u]$ such that if $X$ is compact and non-singular then $$\beta([X])=\sum_{i\ge0}\dim H_i(X,\mathbb Z_2)u^i.$$
We say that $\beta([X])$ is the \emph{virtual Poincaré polynomial} of $X$. \\
Moreover, if $X\neq\varnothing$, then $\deg\beta([X])=\dim X$ and the leading coefficient of $\beta([X])$ is positive.
\end{thm}
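As an illustration of how the normalization determines $\beta$ (a standard computation, which follows from the statement above and is not specific to \cite{MP03,Fic05,MP11}): $\mathbb P^1(\mathbb R)$ is compact, non-singular and homeomorphic to the circle, so $\beta([\mathbb P^1(\mathbb R)])=1+u$; by additivity $[\mathbb P^1(\mathbb R)]=\mathbb L+1$, hence $$\beta(\mathbb L)=u,\qquad\beta([\mathbb R^n])=u^n,\qquad\beta([X])=\#X\ \text{ for $X$ finite}.$$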
\begin{thm}[{\cite[Theorem 1.16]{Fic17}}]
The virtual Poincaré polynomial is a ring isomorphism $$\beta:K_0(\mathcal{AS})\xrightarrow{\sim}\mathbb Z[u].$$
\end{thm}
\begin{rem}
The virtual Poincaré polynomial induces a ring isomorphism
$$\beta:\mathcal M\rightarrow\mathbb Z[u,u^{-1}].$$
\end{rem}
\begin{defn}
We define the ring $\widehat{\mathcal M}$ as the completion of $\mathcal M$ with respect to the ring filtration\footnote{i.e. $\mathcal F^{m+1}\mathcal M\subset\mathcal F^m\mathcal M$ and $\mathcal F^m\mathcal M\cdot\mathcal F^n\mathcal M\subset\mathcal F^{m+n}\mathcal M$. The last condition induces a ring structure on the group $\widehat{\mathcal M}$.} defined by the following subgroups induced by the dimension $$\mathcal F^m\mathcal M=\left<[S]\mathbb L^{-i}, i-\dim S \ge m\right>$$ i.e. $$\widehat{\mathcal M}=\varprojlim_m\quotient{\mathcal M}{\mathcal F^m\mathcal M}.$$
\end{defn}
\begin{prop}
The virtual Poincaré polynomial induces a ring isomorphism $$\beta:\widehat{\mathcal M}\rightarrow\mathbb Z[u]\llbracket u^{-1}\rrbracket.$$
\end{prop}
\begin{proof}
We have to prove that $$\varprojlim_m\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^m\mathbb Z[u,u^{-1}]}=\mathbb Z[u]\llbracket u^{-1}\rrbracket$$ where $\mathcal F^m\mathbb Z[u,u^{-1}]=\left<f\in\mathbb Z[u,u^{-1}], \deg f\le-m\right>$.
For $n<m$, we define $$\rho_{m,n}:\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^m\mathbb Z[u,u^{-1}]}\rightarrow\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^n\mathbb Z[u,u^{-1}]}$$ by $$\sum_{k=-m+1}^ra_ku^k\mapsto\sum_{k=-n+1}^ra_ku^k$$ and $$\rho_{m}:\mathbb Z[u]\llbracket u^{-1}\rrbracket\rightarrow\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^m\mathbb Z[u,u^{-1}]}$$ by $$\sum_{k=-\infty}^ra_ku^k\mapsto\sum_{k=-m+1}^ra_ku^k$$
By construction, $$\varprojlim_m\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^m\mathbb Z[u,u^{-1}]}=\left\{(f_m)\in\prod_{m\in\mathbb Z}\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^m\mathbb Z[u,u^{-1}]},\,n<m\Rightarrow\rho_{m,n}(f_m)=f_n\right\}$$
The morphism $$\varphi:\mathbb Z[u]\llbracket u^{-1}\rrbracket\rightarrow\varprojlim_m\quotient{\mathbb Z[u,u^{-1}]}{\mathcal F^m\mathbb Z[u,u^{-1}]}$$ defined by $f\mapsto(\rho_m(f))_{m\in\mathbb Z}$ is an isomorphism.
\end{proof}
\begin{defn}\label{defn:vdim}
For $\alpha\in\mathcal M$, we define the virtual dimension of $\alpha$ by $\dim\alpha=m$ where $m$ is the only integer such that $\alpha\in\mathcal F^{-m}\mathcal M\setminus\mathcal F^{-m+1}\mathcal M$, with the convention that $\dim0=-\infty$.
\end{defn}
\begin{prop}
$\dim\alpha=\deg(\beta(\alpha))$
\end{prop}
\begin{rem}
Notice that for $x\in\mathcal M$, $\left(x+\mathcal F^m\mathcal M\right)_m$ defines a basis of open neighborhoods. This topology coincides with the one induced by the non-archimedean norm $\|\cdot\|:\mathcal M\rightarrow\mathbb R$ defined by $\|\alpha\|=e^{\dim(\alpha)}$. The completion $\widehat{\mathcal M}$ is exactly the topological completion with respect to this non-archimedean norm. In particular,
\begin{itemize}[nosep]
\item Let $(\alpha_n)$ be a sequence in $\mathcal M$; then $\alpha_n\rightarrow0$ in $\widehat{\mathcal M}$ if and only if $\dim(\alpha_n)\rightarrow-\infty$.
\item Let $(\alpha_n)$ be a sequence in $\mathcal M$; then $\sum_n\alpha_n$ converges in $\widehat{\mathcal M}$ if and only if $\alpha_n\rightarrow0$ in $\widehat{\mathcal M}$.
\item For every $p\in\mathbb N_{>0}$, the following equality holds in $\widehat{\mathcal M}$: $$(1-\mathbb L^{-p})\sum_{i=0}^\infty\mathbb L^{-pi}=1$$
\end{itemize}
\end{rem}
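For instance, applying $\beta$ to the last equality yields the usual identity $(1-u^{-p})\sum_{i\ge0}u^{-pi}=1$ in $\mathbb Z[u]\llbracket u^{-1}\rrbracket$; note that the series $\sum_i\mathbb L^{-pi}$ converges precisely because $\dim(\mathbb L^{-pi})=-pi\rightarrow-\infty$.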
\begin{defn}\label{defn:order}
We define an order on $\widehat{\mathcal M}$ as follows. For $a,b\in\widehat{\mathcal M}$, we set $a\preceq b$ if and only if either $b=a$ or the leading coefficient of the virtual Poincaré series $\beta(b-a)\in\mathbb Z[u]\llbracket u^{-1}\rrbracket$ is positive.
\end{defn}
\begin{rem}
Notice that this real setting has good algebraic properties compared to its complex counterpart:
\begin{itemize}[nosep]
\item $K_0(\mathcal{AS})$ is an integral domain whereas $K_0(\mathrm{Var}_\mathbb C)$ is not \cite{Poo02}. Indeed, there is no zero divisor in $K_0(\mathcal{AS})$ whereas the class of the affine line is a zero divisor of $K_0(\mathrm{Var}_\mathbb C)$ \cite{Bor14,Mar16}. Notice that in particular $K_0(\mathrm{Var}_\mathbb C)\rightarrow\mathcal M_\mathbb C= K_0(\mathrm{Var}_\mathbb C)\left[\mathbb L_\mathbb C^{-1}\right]$ is not injective.
\item The natural map $\mathcal M\rightarrow\widehat{\mathcal M}$ is injective. Indeed its kernel is $\cap_m\mathcal F^m\mathcal M$ and the virtual Poincaré polynomial allows us to conclude: if $\alpha\in\cap_m\mathcal F^m\mathcal M$, then, for all $m\in\mathbb Z$, $\deg\beta(\alpha)\le-m$ and hence $\alpha=0$. In the complex case, it is not known whether $\mathcal M_\mathbb C\rightarrow\widehat{\mathcal M}_\mathbb C$ is injective.
\end{itemize}
\end{rem}
\subsection{Real motivic measure}
M. Kontsevich introduced motivic integration in the non-singular case, where the measurable sets are the cylinders, using the fact that cylinders are then stable. Still in the non-singular case, V. Batyrev \cite[\S6]{Bat98} enlarged the collection of measurable sets: a subset of the arc space is measurable if it may be approximated by stable sets.
Concerning the singular case, J. Denef and F. Loeser \cite{DL99} defined a measure and a first family of measurable sets including the cylinders. Then, in \cite[Appendix]{DL02}, they used the tools they had developed to adapt the definition of V. Batyrev to the singular case. See also \cite{Loo02}.
From now on we assume that $X$ is a $d$-dimensional algebraic subset of $\mathbb R^N$.
\begin{defn}
A subset $A\subset\L(X)$ is said to be \emph{stable} at level $n$ if:
\begin{itemize}[nosep]
\item For $m\ge n$, $\pi_m(A)$ is an $\mathcal{AS}$-subset of $\L_m(X)$;
\item For $m\ge n$, $A=\pi_m^{-1}(\pi_m(A))$;
\item For $m\ge n$, $\pi^{m+1}_m:\pi_{m+1}(A)\rightarrow\pi_m(A)$ is an $\mathcal{AS}$ piecewise trivial fibration with fiber $\mathbb R^d$.
\end{itemize}
\end{defn}
\begin{rem}
Notice that, for the first two points, it is enough to verify that $\pi_n(A)\in\mathcal{AS}$ and that $A=\pi_n^{-1}(\pi_n(A))$ at the level $n$ only. Indeed, then, for $m\ge n$, $\pi_m(A)=(\pi^m_n)^{-1}(\pi_n(A))$ is an $\mathcal{AS}$-set as the inverse image of an $\mathcal{AS}$-set by a projection.
\end{rem}
Then the following proposition holds (notice that the condition $A=\pi_m^{-1}(\pi_m(A))$ is quite important).
\begin{prop}
If $A,B$ are stable subsets of $\L(X)$, then $A\cup B$, $A\cap B$ and $A\setminus B$ are stable too.
\end{prop}
\begin{rem}
Notice that $\L(X)$ may not be stable when $X$ is singular.
\end{rem}
\begin{defn}
For $A\subset\L(X)$ a stable set, we define its measure by $$\mu(A)=\frac{[\pi_n(A)]}{\mathbb L^{(n+1)d}}\in\mathcal M,\,n\gg1.$$
\end{defn}
\begin{defn}
The virtual dimension of a stable set $A$ is $$\dim(A)=\dim(\pi_n(A))-(n+1)d,\,n\gg1.$$
\end{defn}
\begin{rem}
Notice that the previous definitions don't depend on $n$ for $n$ big enough: indeed, since $\pi^{n+1}_n:\pi_{n+1}(A)\rightarrow\pi_n(A)$ is an $\mathcal{AS}$ piecewise trivial fibration with fiber $\mathbb R^d$, we have $[\pi_{n+1}(A)]=[\pi_n(A)]\mathbb L^d$.
\end{rem}
\begin{rem}
Notice that $\dim(A)=\dim(\mu(A))$ where the second dimension is the one introduced in Definition \ref{defn:vdim}.
\end{rem}
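As a sanity check (an elementary computation, not taken from the references), take $X=\mathbb R^d$. Then $\L_n(\mathbb R^d)\simeq\mathbb R^{(n+1)d}$, the set $\L(\mathbb R^d)$ is stable directly from the definition and $$\mu\left(\L(\mathbb R^d)\right)=\frac{\mathbb L^{(n+1)d}}{\mathbb L^{(n+1)d}}=1,$$ of virtual dimension $0$. Similarly, the set $\pi_0^{-1}(0)$ of arcs centered at the origin satisfies $\pi_n\left(\pi_0^{-1}(0)\right)\simeq\mathbb R^{nd}$, so that $\mu\left(\pi_0^{-1}(0)\right)=\mathbb L^{-d}$, of virtual dimension $-d$.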
\begin{defn}\label{defn:measurable}
A subset $A\subset\L(X)$ is measurable if, for every $m\in\mathbb Z_{<0}$, there exist
\begin{itemize}[nosep]
\item a stable set $A_m\subset\L(X)$;
\item a sequence of stable sets $(C_{m,i}\subset\L(X))_{i\ge0}$
\end{itemize}
such that
\begin{itemize}[nosep]
\item $\forall i$, $\dim C_{m,i}<m$;
\item $A\Delta A_m\subset\displaystyle\bigcup_{i\ge0}C_{m,i}$.
\end{itemize}
Then we define the measure of $A$ by $\displaystyle\mu(A)=\lim_{m\to-\infty}\mu(A_m)$.
\end{defn}
\begin{prop}\label{prop:Mlimit}
The previous limit is well defined in $\widehat{\mathcal M}$ and doesn't depend on the choices.
\end{prop}
The proof of the above Proposition, presented below, relies on the following two lemmas.
\begin{lemma}\label{lem:ASBaire}
Let $(A_i)_{i\in\mathbb N_{>0}}$ be a decreasing sequence of non-empty $\mathcal{AS}$-sets $$A_1\supset A_2\supset\cdots$$
Then $$\bigcap_{i\in\mathbb N_{>0}}A_i\neq\varnothing.$$
\end{lemma}
\begin{proof}
Recall that $\clos[\mathcal{AS}]{A}$ denotes the smallest closed $\mathcal{AS}$-set containing $A$. We have the following sequence which stabilizes by noetherianity of the $\mathcal{AS}$-topology:
$$\clos[\mathcal{AS}]{A_1}\supset \clos[\mathcal{AS}]{A_2}\supset\cdots\supset\clos[\mathcal{AS}]{A_k}=\clos[\mathcal{AS}]{A_{k+1}}=\cdots$$
Recall that $\mathcal{AS}$-sets are exactly the constructible subsets of projective spaces for the $\mathcal{AS}$-topology whose closed sets are the semialgebraic arc-symmetric sets in the sense of \cite{Kur88}. Hence, for $l\ge k$, $A_l=\bigcup_{\mathrm{finite}}(U_i\cap V_i)$ where $U_i$ is $\mathcal{AS}$-open, $V_i$ is $\mathcal{AS}$-closed and $U_i\cap V_i\neq\varnothing$. We may assume that the $V_i$'s are irreducible (up to splitting them) and that $\clos[\mathcal{AS}]{U_i\cap V_i}=V_i$ (up to replacing $V_i$ by $\clos[\mathcal{AS}]{U_i\cap V_i}$). Hence we obtain a decomposition $\clos[\mathcal{AS}]{A_k}=\clos[\mathcal{AS}]{A_l}=\cup V_i$ as a union of finitely many irreducible closed subsets (it is not necessarily the irreducible decomposition since we may have $V_i\subset V_j$).
Fix an irreducible component $Z$ of $\clos[\mathcal{AS}]{A_k}$. By the previous discussion, for every $l\ge k$, there exists an open dense $\mathcal{AS}$-subset $U_l$ of $Z$ such that $U_l\subset A_l$.
By \cite[Remark 2.7]{Par04}, $\dim(Z\setminus U_l)<\dim U_l$ so that $Z\setminus U_l$ is a closed subset of $Z$ with empty interior for the Euclidean topology. From the Baire category theorem, we deduce that the Euclidean interior of $\bigcup_{l\ge k}(Z\setminus U_l)$ in $Z$ is empty. Hence $\bigcap_{l\ge k}U_l$, and a fortiori $\bigcap_{i\in\mathbb N_{>0}}A_i$, is non-empty.
\end{proof}
The following lemma is an adaptation to the real context of \cite[Theorem 6.6]{Bat98}.
\begin{lemma}\label{lem:finitesubcov}
Let $A\subset\L(X)$ be a stable set and $(C_i)_{i\in\mathbb N_{\ge0}}$ be a family of stable sets such that $$A\subset\bigcup_{i\in\mathbb N_{\ge0}}C_i$$
Then there exists $l\in\mathbb N_{\ge0}$ such that $$A\subset\bigcup_{i=0}^lC_i$$
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that $C_i\subset A$ (up to replacing $C_i$ by $C_i\cap A$).
Set $D_i=A\setminus\left(C_1\cup\cdots\cup C_i\right)$ so that we get a decreasing sequence of stable sets $$D_1\supset D_2\supset D_3\supset\cdots$$ satisfying $$\bigcap_{i\in\mathbb N_{\ge0}} D_i=\varnothing$$
Assume by contradiction that $A$ may not be covered by finitely many $C_i$, then $$\forall i\in\mathbb N_{\ge0}, D_i\neq\varnothing$$
Now assume that $A$ is stable at level $n$ and that $D_i$ is stable at level $n_i\ge n$. Then $\pi_n(D_i)=\pi^{n_i}_n(\pi_{n_i}(D_i))\in\mathcal{AS}$ as the image of an $\mathcal{AS}$-set by a regular map whose fibers have odd Euler characteristic with compact support, see \cite[Theorem 4.3]{Par04}. Hence, by Lemma \ref{lem:ASBaire}, $$B_n=\bigcap_{i\in\mathbb N_{\ge0}}\pi_n(D_i)\neq\varnothing$$
Choose $u_n\in B_n$.
Now set $$B_{n+1}=\bigcap_{i\in\mathbb N_{\ge0}}\pi_{n+1}(D_i)\neq\varnothing$$ As before each $\pi_{n+1}(D_i)$ is a non-empty $\mathcal{AS}$-set. Notice that $(\pi^{n+1}_n)^{-1}(u_n)$ is a non-empty $\mathcal{AS}$-subset of $\L_{n+1}(X)$. Then, by Lemma \ref{lem:ASBaire}, $B_{n+1}\cap(\pi^{n+1}_n)^{-1}(u_n)\neq\varnothing$. This way, there exists $u_{n+1}\in B_{n+1}$ such that $\pi^{n+1}_{n}(u_{n+1})=u_n$.
Therefore, we may inductively construct a sequence $\left(u_m\in\L_m(X)\right)_{m\ge n}$ such that:
\begin{itemize}
\item $\displaystyle u_m\in B_m=\bigcap_{i\in\mathbb N_{\ge0}}\pi_{m}(D_i)\neq\varnothing$;
\item $\pi^{m+1}_m(u_{m+1})=u_m$.
\end{itemize}
This defines an element $u\in\L(X)$ such that for all $m\ge n$, $\pi_m(u)\in B_m$. Hence for $i\in\mathbb N_{\ge0}$, $\pi_{n_i}(u)\in B_{n_i}\subset\pi_{n_i}(D_i)$. Since $D_i$ is stable at level $n_i$, $u\in\pi_{n_i}^{-1}(\pi_{n_i}(D_i))=D_i$.
Therefore $u\in\bigcap D_i$ which is a contradiction.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:Mlimit}]
We first prove that the limit is well defined. Let $A_m,C_{m,i}$ be as in the definition. Then for $m_1,m_2\in\mathbb Z_{<0}$, $$A_{m_1}\Delta A_{m_2}\subset\bigcup_{i\in\mathbb N_{\ge0}}(C_{m_1,i}\cup C_{m_2,i})$$
By Lemma \ref{lem:finitesubcov}, there exists $l\in\mathbb N_{\ge0}$ such that $$A_{m_1}\Delta A_{m_2}\subset\bigcup_{i=0}^l(C_{m_1,i}\cup C_{m_2,i})$$ hence $\dim(A_{m_1}\Delta A_{m_2})<\max(m_1,m_2)$. Thus $\left(\mu(A_m)\right)_m$ is a Cauchy sequence and its limit is well defined in the completion $\widehat{\mathcal M}$.
\ \\
We now check that the limit doesn't depend on the choices. Let $A_m',C_{m,i}'$ be another choice of data for the measurability of $A$. Fix $m\in\mathbb Z_{<0}$ then $$A_m\Delta A_m'\subset\bigcup_{i\in\mathbb N_{\ge0}}(C_{m,i}\cup C_{m,i}')$$
By Lemma \ref{lem:finitesubcov}, there exists $l\in\mathbb N_{\ge0}$ such that $$A_m\Delta A_m'\subset\bigcup_{i=0}^l(C_{m,i}\cup C_{m,i}')$$
Hence $\dim(A_m\Delta A_m')<m$ and $\displaystyle\lim_{m\to-\infty}\mu(A_m)=\lim_{m\to-\infty}\mu(A_m')$.
\end{proof}
\begin{prop}
If $A,B$ are measurable subsets of $\L(X)$, then $A\cup B$, $A\cap B$ and $A\setminus B$ are measurable too.
\end{prop}
\begin{proof}
Assume that $A$ and $B$ are measurable, respectively with the data $A_m,C_{m,i}$ and $B_m,D_{m,i}$.
\begin{itemize}
\item $A\cup B$ is measurable since $$(A\cup B)\Delta(A_m\cup B_m)\subset\bigcup (C_{m,i}\cup D_{m,i})$$
\item In order to prove that $A\setminus B$ is measurable, we may use the previous point and assume that $B\subset A$ up to replacing $A$ by $A\cup B$. Similarly, we may assume that $B_m\subset A_m$. Then $$(A\setminus B)\Delta(A_m\setminus B_m)\subset\bigcup C_{m,i}\cup D_{m,i}$$
\item Using both previous points, we obtain that $A\cap B=(A\cup B)\setminus\left(((A\cup B)\setminus A)\cup((A\cup B)\setminus B)\right)$ is measurable.
\end{itemize}
\end{proof}
\begin{prop}
The measure is additive on disjoint unions: $$\mu(A\sqcup B)=\mu(A)+\mu(B)$$
\end{prop}
\begin{proof}
According to the previous proof we have
$$\mu(A\sqcup B)=\lim_{m\to-\infty}\left(\mu(A_m)+\mu(B_m)-\mu(A_m\cap B_m)\right)$$
and $$0=\mu(A\cap B)=\lim_{m\to-\infty}\mu(A_m\cap B_m)$$
Hence $$\mu(A\sqcup B)=\lim_{m\to-\infty}\mu(A_m)+\lim_{m\to-\infty}\mu(B_m)=\mu(A)+\mu(B)$$
\end{proof}
\begin{prop}\label{prop:measurableseries}
Let $(B_n)_{n\in\mathbb N_{\ge0}}$ be a sequence of measurable sets with $\dim B_n\rightarrow-\infty$. \\
Then $B=\cup B_n$ is measurable and $$\mu(B)=\lim_{n\to+\infty}\mu\left(\bigcup_{k\le n}B_k\right).$$
Furthermore, if the sets $B_n$ are pairwise disjoint, then $$\mu(B)=\sum_{n=0}^{\infty}\mu\left(B_n\right).$$
\end{prop}
\begin{proof}
By Definition \ref{defn:measurable} for each $n\in \mathbb N_{\ge0}$ and $m\in \mathbb Z_{< 0}$ there are stable sets $A_{n,m}$ and $C_{n,m,i}$, $\dim C_{n,m,i} < m$ such that $$B_n \Delta A_{n,m} \subset \bigcup_{i} C_{n,m,i}. $$
For $m\in \mathbb Z_{< 0}$ choose $n_1\in\mathbb N_{\ge0}$ such that if $n\ge n_1$ then $\dim B_n <m$. \\
Note that then $\dim A_{n,m}<m$ for $n\ge n_1$. Let us set $A_m=\displaystyle\bigcup_{k<n_1}A_{k,m}$. Then $$\Big(\bigcup_n B_n\Big)\Delta A_m\subset\bigcup_{n,i}C_{n,m,i}\cup\bigcup_{n\ge n_1}A_{n,m}.$$
This shows that $B$ is measurable. The other properties follow easily.
\end{proof}
\subsection{Measurability of the cylinders}
\begin{lemma}\label{lem:dimLS}
Let $X$ be a $d$-dimensional algebraic subset of $\mathbb R^N$. Let $S\subset X$ be an algebraic subset of $X$ with $\dim S<d$. For every $e\in\mathbb N_{\ge0}$, there exists $n_0\in\mathbb N_{\ge0}$ such that $$\forall i,n\in\mathbb N_{\ge0},\,n\ge i\ge n_0\Rightarrow \dim\left(\pi_n\left(\pi_i^{-1}\left(\L_i(S)\right)\right)\right)\le(n+1)d-e-1$$ where $\pi_n$ denotes the $n$-th truncation map for $X$ $$\forall n\in\mathbb N_{\ge0},\,\pi_n:\L(X)\rightarrow\L_n(X)$$ and where $\L(S)\subset\L(X)$ and $\forall i\in\mathbb N_{\ge0},\,\L_i(S)\subset\L_i(X)$.
\end{lemma}
\begin{proof}
By Greenberg Theorem \ref{thm:greenberg} applied to $S$, there exists $c\in\mathbb N_{\ge0}$ such that $$\pi_e\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)=\pi_e\left(\L(S)\right)$$
Let $n_0=ce$ and $n\ge n_0$. By \ref{item:truncfibers} applied to $$\pi_n\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)\rightarrow\pi_e\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)$$ we get that $$\dim\left(\pi_n\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)\right)\le\dim\left(\pi_e\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)\right)+(n-e)d$$
But $$\pi_e\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)=\pi_e\left(\L(S)\right)$$ so that (see \cite[Proposition 2.33.(i)]{Cam16}) $$\dim\left(\pi_n\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)\right)\le(e+1)(d-1)+(n-e)d=(n+1)d-e-1$$
Now if $n\ge i\ge n_0=ce$, the result derives from the inclusion $$\pi_n\left(\pi_i^{-1}\left(\L_i(S)\right)\right)\subset\pi_n\left(\pi_{ce}^{-1}\left(\L_{ce}(S)\right)\right)$$
\end{proof}
\begin{defn}
Let $X\subset\mathbb R^N$ be an algebraic subset. For $i\in\mathbb N_{>0}$, we set $$C_i(X)=\L^{(i)}(X)\setminus\L^{(i-1)}(X).$$
\end{defn}
\begin{rem}
$\displaystyle C_i(X)=\left\{\gamma\in\L(X),\,\forall h\in H_X,\,\operatorname{ord}_t(h\circ\gamma)\ge i,\,\exists\tilde h\in H_X,\,\operatorname{ord}_t(\tilde h\circ\gamma)=i\right\}$
\end{rem}
\begin{prop}\label{prop:dimCi}
For $i\in\mathbb N_{>0}$, $C_i(X)$ is stable and $$\lim_{i\to+\infty}\dim C_i(X)=-\infty$$
\end{prop}
\begin{proof}
Fix some $i\in\mathbb N_{>0}$. First, $C_i(X)$ is stable at level $i$ since the $\L^{(e)}(X)$ are stable by Proposition \ref{prop:ptf}.
Notice that $\pi_{i-1}(C_i(X))\subset\L_{i-1}(X_\mathrm{sing})$. Hence $$C_i(X)\subset\pi_{i-1}^{-1}(\pi_{i-1}(C_i(X)))\subset\pi_{i-1}^{-1}\left(\L_{i-1}(X_\mathrm{sing})\right)$$ and then $$\pi_i(C_i(X))\subset\pi_i\left(\pi_{i-1}^{-1}\left(\L_{i-1}(X_\mathrm{sing})\right)\right).$$
As explained in Remark \ref{rem:singarcs}, we may apply Greenberg Theorem \ref{thm:greenberg} to $H_X$ so that Lemma \ref{lem:dimLS} holds for $X_\mathrm{sing}$.
Hence, for all $e\in\mathbb N_{\ge0}$, there exists $N\in\mathbb N_{\ge0}$ so that for $i\ge N$ we have $$\dim\left(\pi_i(C_i(X))\right)-(i+1)d\le\dim\left(\pi_i\left(\pi_{i-1}^{-1}\left(\L_{i-1}(X_\mathrm{sing})\right)\right)\right)-(i+1)d\le -e$$
\end{proof}
\begin{cor}\label{cor:measALe}
A subset $A\subset\L(X)$ is measurable if and only if $\forall e\gg0,\, A\cap\L^{(e)}(X)$
is measurable.
\end{cor}
\begin{proof}
By Proposition \ref{prop:ptf} every $\L^{(e)}(X)$ is stable and therefore if $A$ is
measurable so is every $A\cap\L^{(e)}(X)$.
Suppose now that, for some $e_0\in\mathbb N_{\ge0}$, $A\cap\L^{(e)}(X)$ is measurable for every $e\ge e_0$.
Then so are the $A\cap C_i(X)$ for $i>e_0$. Hence $$A=\displaystyle\big(A\cap\L^{(e_0)}(X)\big)\cup\bigcup_{i>e_0}\big(A\cap C_i(X)\big)$$ is measurable by Proposition \ref{prop:measurableseries}.
\end{proof}
\begin{defn}
A cylinder at level $n$ is a subset $A\subset\L(X)$ of the form $$A=\pi_n^{-1}(C)$$ for $C$ an $\mathcal{AS}$-subset of $\L_n(X)$.
\end{defn}
\begin{rem}
A cylinder at level $n$ is a cylinder at level $m$ for $m\ge n$. Indeed $\pi_n=\pi^m_n\circ\pi_m$ so that $\pi_n^{-1}(C)=\pi_m^{-1}\left((\pi^m_n)^{-1}(C)\right)$ where $(\pi^m_n)^{-1}(C)\in\mathcal{AS}$ as the inverse image of an $\mathcal{AS}$-set by a projection.
\end{rem}
The following result derives from Proposition \ref{prop:ptf}.
\begin{prop}\label{prop:stabcyl}
If $X$ is non-singular, a cylinder of $\L(X)$ is stable.
\end{prop}
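In particular, when $X$ is non-singular, the measure of a cylinder may be computed directly from its basis: if $A=\pi_n^{-1}(C)$ with $C\in\mathcal{AS}$, then $\pi_n(A)=C$ by surjectivity of $\pi_n:\L(X)\rightarrow\L_n(X)$, and therefore $$\mu(A)=\frac{[C]}{\mathbb L^{(n+1)d}}.$$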
\begin{prop}
A cylinder $A\subset\L(X)$ is measurable and $$\mu(A)=\lim_{m\to+\infty}\mu\left(A\cap\L^{(m)}(X)\right)$$
\end{prop}
\begin{proof}
By Proposition \ref{prop:dimCi}, we may construct by induction an increasing map $\varphi:\mathbb N_{>0}\rightarrow\mathbb N_{>0}$ such that $$i\ge\varphi(s)\Rightarrow\dim C_i(X)<-s$$
Let $m\in\mathbb Z_{<0}$. Set $A_m=A\cap\L^{(\varphi(-m))}(X)$. Then $A_m$ is stable by Proposition \ref{prop:ptf} and $$A\Delta A_m=A\setminus\L^{(\varphi(-m))}(X)=A\cap\pi_{\varphi(-m)}^{-1}\left(\L_{\varphi(-m)}(X_\mathrm{sing})\right)\subset\bigcup_{i\ge\varphi(-m)}C_i(X)$$ where $C_i(X)$ is stable with $\dim C_i(X)<m$. Hence $A$ is measurable and $$\mu(A)=\lim_{m\to+\infty}\mu\left(A\cap\L^{(\varphi(m))}(X)\right)$$
The second part of the statement derives from the fact that $\left(\mu\left(A\cap\L^{(m)}(X)\right)\right)_{m\in\mathbb N_{>0}}$ is already a Cauchy sequence. Indeed, assume that $A$ is a cylinder at level $s$; then $A\cap\L^{(m)}(X)$ is stable at level $\max(m,s)$. Fix $k\in\mathbb N_{\ge0}$. Then, for $n\ge m'\ge m\ge\max(\varphi(k),s)$, we get
\begin{align*}
\mu\left(A\cap\L^{(m')}(X)\right)-\mu\left(A\cap\L^{(m)}(X)\right)&=\frac{\left[\pi_n\left(A\cap\L^{(m')}(X)\right)\right]}{\mathbb L^{(n+1)d}}-\frac{\left[\pi_n\left(A\cap\L^{(m)}(X)\right)\right]}{\mathbb L^{(n+1)d}}\\
&=\frac{\left[\pi_n\left(A\cap\L^{(m')}(X)\right)\setminus\pi_n\left(A\cap\L^{(m)}(X)\right)\right]}{\mathbb L^{(n+1)d}}\in\mathcal F^k\mathcal M
\end{align*}
\end{proof}
\begin{cor}
For $Y\subset X$ an algebraic subset, set $$\L(X,Y)=\left\{\gamma\in\L(X),\,\gamma(0)\in Y\right\}$$ then
\begin{itemize}[nosep]
\item $\L(X,Y)$ is a measurable subset of $\L(X)$;
\item in particular, $\L(X)$ is measurable.
\end{itemize}
\end{cor}
\begin{proof}
Indeed, $\L(X)=\pi_0^{-1}(X)$ and $\L(X,Y)=\pi_0^{-1}(Y)$ are cylinders.
\end{proof}
\begin{cor}
If $Y\subset X$ is an algebraic subset with $\dim Y<\dim X$, then $\L(Y)\subset\L(X)$ is measurable of measure $0$: $$\mu_{\L(X)}\left(\L(Y)\right)=0$$
\end{cor}
\begin{proof}
Notice that $\L(Y)$ is a countable intersection of cylinders: $$\L(Y)=\bigcap_{n\in\mathbb N_{\ge0}}\pi_n^{-1}(\L_n(Y))$$
Then $\pi_n^{-1}(\L_n(Y))$ is measurable as a cylinder and $$\dim\mu\left(\pi_n^{-1}(\L_n(Y))\right)\le(n+1)(\dim Y-\dim X)\xrightarrow[n\to\infty]{}-\infty$$
\end{proof}
\subsection{Motivic integral and the change of variables formula}
\begin{defn}
Let $X\subset\mathbb R^N$ be an algebraic subset. Let $A\subset\L(X)$ be a measurable set. Let $\alpha:A\rightarrow\mathbb N_{\ge0}\cup\{\infty\}$ be such that each fiber is measurable and $\mu(\alpha^{-1}(\infty))=0$. We say that $\mathbb L^{-\alpha}$ is integrable if the following series converges in $\widehat{\mathcal M}$: $$\int_A\mathbb L^{-\alpha}\d\mu=\sum_{n\ge0}\mu\left(\alpha^{-1}(n)\right)\mathbb L^{-n}$$
\end{defn}
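To illustrate the definition on the simplest example (an elementary computation, not taken from the references), take $X=\mathbb R$, $A=\L(\mathbb R)$ and $\alpha(\gamma)=\operatorname{ord}_t\gamma$. Then $\alpha^{-1}(n)$ is stable at level $n$ with $\pi_n\left(\alpha^{-1}(n)\right)=\{0\}^n\times(\mathbb R\setminus\{0\})$, so $\mu\left(\alpha^{-1}(n)\right)=(\mathbb L-1)\mathbb L^{-(n+1)}$, while $\alpha^{-1}(\infty)=\L(\{0\})$ has measure $0$. Hence $$\int_{\L(\mathbb R)}\mathbb L^{-\operatorname{ord}_t\gamma}\d\mu=\sum_{n\ge0}(\mathbb L-1)\mathbb L^{-(n+1)}\mathbb L^{-n}=(\mathbb L-1)\mathbb L^{-1}\sum_{n\ge0}\mathbb L^{-2n}=\sum_{i\ge0}(-1)^i\mathbb L^{-i}\in\widehat{\mathcal M},$$ where the last equality may be checked by multiplying both sides by $1-\mathbb L^{-2}$.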
\begin{defn}\label{defn:gen1to1}
We say that a semialgebraic map $\sigma:M\rightarrow X$ between semialgebraic sets is \emph{generically one-to-one} if there exists a semialgebraic set $S\subset X$ satisfying $\dim(S)<\dim(X)$, $\dim\left(\sigma^{-1}(S)\right)<\dim(M)$ and $\forall p\in X\setminus S,\,\#\sigma^{-1}(p)=1$.
\end{defn}
\begin{defn}\label{defn:ordjac1}
Let $\sigma:M\rightarrow X$ be a Nash map between a $d$-dimensional non-singular algebraic set $M$ and an algebraic subset $X\subset\mathbb R^N$. For a real analytic arc $\gamma:(\mathbb R,0)\rightarrow M$, we set $$\operatorname{ord}_t\operatorname{jac}_\sigma(\gamma(t))=\min\left\{\operatorname{ord}_t\delta(\gamma(t)),\,\delta\text{ a $d$-minor of }\operatorname{Jac}_\sigma\right\},$$ where the Jacobian matrix $\operatorname{Jac}_\sigma$ is defined using a local system of coordinates around $\gamma(0)$ in $M$.
\end{defn}
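For instance (an elementary illustration), let $\sigma:\mathbb R^2\rightarrow\mathbb R^2$ be the blowing-up chart $\sigma(u,v)=(u,uv)$. The only $2$-minor of $\operatorname{Jac}_\sigma=\begin{pmatrix}1&0\\v&u\end{pmatrix}$ is its determinant $u$, so that $\operatorname{ord}_t\operatorname{jac}_\sigma(\gamma(t))=1$ along $\gamma(t)=(t,0)$, whereas $\operatorname{ord}_t\operatorname{jac}_\sigma(\gamma(t))=0$ along any arc with $\gamma(0)\notin\{u=0\}$.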
The following lemma is a generalization of Denef--Loeser change of variables key lemma \cite[Lemma 3.4]{DL99} to generically one-to-one Nash maps in the real context.
\begin{lemma}[{\cite[Lemma 4.5]{Cam16}}]\label{lem:CoV}
Let $\sigma:M\rightarrow X$ be a proper generically one-to-one Nash map where $M$ is a non-singular $d$-dimensional algebraic subset of $\mathbb R^p$ and $X$ a $d$-dimensional algebraic subset of $\mathbb R^N$.
For $e,e'\in\mathbb N_{\ge0}$ and $n\in\mathbb N_{\ge0}$, set $$\Delta_{e,e'}=\left\{\gamma\in\L(M),\,\operatorname{ord}_t\operatorname{jac}_\sigma(\gamma(t))=e,\,\sigma_*(\gamma)\in\L^{(e')}(X)\right\}, \quad \Delta_{e,e',n}=\pi_n\left(\Delta_{e,e'}\right),$$
where $\sigma_*:\L(M)\rightarrow\L(X)$ is induced by $\sigma$. \\
Then for $n\ge\max(2e,e')$ the following holds:
\begin{enumerate}[label=(\roman*)]
\item Given $\gamma\in\Delta_{e,e'}$ and $\delta\in\L(X)$ with $\sigma_*(\gamma)\equiv\delta\mod t^{n+1}$ there exists a unique $\eta\in\L(M)$ such that $\sigma_*(\eta)=\delta$ and $\eta\equiv\gamma\mod t^{n-e+1}$.
\item Let $\gamma,\eta\in\L(M)$. If $\gamma\in\Delta_{e,e'}$ and $\sigma_*(\gamma)\equiv\sigma_*(\eta)\mod t^{n+1}$ then $\gamma\equiv\eta\mod t^{n-e+1}$ and $\eta\in\Delta_{e,e'}$.
\item The set $\Delta_{e,e',n}$ is a union of fibers of $\sigma_{*n}$.
\item $\sigma_{*n}(\Delta_{e,e',n})$ is an $\mathcal{AS}$-set and $\sigma_{*n|\Delta_{e,e',n}}:\Delta_{e,e',n}\rightarrow \sigma_{*n}(\Delta_{e,e',n})$ is an $\mathcal{AS}$ piecewise trivial fibration with fiber $\mathbb R^e$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem:invcyl}
Let $\sigma:X\rightarrow Y$ be a Nash map between algebraic sets. If $A\subset\L(Y)$ is a cylinder then $\sigma_*^{-1}(A)\subset\L(X)$ is also a cylinder.
\end{lemma}
\begin{proof}
Assume that $A=\pi_n^{-1}(C)$ where $C$ is an $\mathcal{AS}$-subset of $\L_n(Y)$. Then we have the following commutative diagram:
$$
\xymatrix{
\L(X) \ar[r]^{\sigma_*} \ar[d]_{\pi_n} & \L(Y) \ar[d]^{\pi_n} \\
\L_n(X) \ar[r]_{\sigma_{*n}} & \L_n(Y)
}
$$
Notice that $\sigma_{*n}$ is a Nash map, hence its graph is $\mathcal{AS}$, so that the inverse image of an $\mathcal{AS}$-set by $\sigma_{*n}$ is also an $\mathcal{AS}$-set. Hence $\sigma_*^{-1}(A)=\pi_n^{-1}(\sigma_{*n}^{-1}(C))$ where $\sigma_{*n}^{-1}(C)$ is $\mathcal{AS}$.
\end{proof}
\begin{prop}\label{prop:preim}
Let $\sigma:M\rightarrow X$ be a proper generically one-to-one Nash map where $M$ is a non-singular $d$-dimensional algebraic subset of $\mathbb R^p$ and $X$ a $d$-dimensional algebraic subset of $\mathbb R^N$. \\
If $A\subset\L(X)$ is a measurable subset, then the inverse image $\sigma_*^{-1}(A)$ is also measurable.
\end{prop}
\begin{proof}
Let $$S'=\clos[\mathrm{Zar}]{\sigma^{-1}(X_\mathrm{sing}\cup S)\cup\Sigma_\sigma}$$ where $S\subset X$ is as in Definition \ref{defn:gen1to1} and $\Sigma_\sigma$ is the critical set of $\sigma$. Notice that the Zariski-closure of a semialgebraic set doesn't change its dimension. Therefore $\L(S')$ is a measurable subset of $\L(M)$ with measure $0$.
Hence $\sigma_*^{-1}(A)$ is measurable if and only if $\sigma_*^{-1}(A)\setminus\L(S')$ is measurable and then $$\mu\left(\sigma_*^{-1}(A)\right)=\mu\left(\sigma_*^{-1}(A)\setminus\L(S')\right)$$
Since $A$ is measurable, there exists $A_m$ and $C_{m,i}$ as in Definition \ref{defn:measurable}. Hence for all $m\in\mathbb Z_{<0}$, $$\sigma_*^{-1}(A)\Delta\sigma_*^{-1}(A_m)\subset\bigcup_i\sigma_*^{-1}(C_{m,i})$$ and
\begin{equation}\label{eqn:meas}
\left(\sigma_*^{-1}(A)\setminus\L(S')\right)\Delta\left(\sigma_*^{-1}(A_m)\setminus\L(S')\right)\subset\bigcup_i\left(\sigma_*^{-1}(C_{m,i})\setminus\L(S')\right)
\end{equation}
By Lemma \ref{lem:invcyl} the sets $\sigma_*^{-1}(A_m)$ and $\sigma_*^{-1}(C_{m,i})$ are cylinders, therefore they are stable sets by Proposition \ref{prop:stabcyl} since $M$ is non-singular.
By definition of $S'$, $$\L(M)\setminus\L(S')\subset\bigcup_{e,e'}\Delta_{e,e'}$$
By Lemma \ref{lem:finitesubcov}, there exists $k$ such that $$\L(M)\setminus\L(S')\subset\bigcup_{e,e'\le k}\Delta_{e,e'}$$
Thus, by Lemma \ref{lem:CoV}, $\dim\left(\sigma_*^{-1}(C_{m,i})\setminus\L(S')\right)<k+m$.
This allows one to prove that $\sigma_*^{-1}(A)\setminus\L(S')$ is measurable by shifting the index $m$ in \eqref{eqn:meas}.
\end{proof}
\begin{prop}\label{prop:measIm}
Let $\sigma:M\rightarrow X$ be a proper generically one-to-one Nash map where $M$ is a non-singular $d$-dimensional algebraic subset of $\mathbb R^p$ and $X$ a $d$-dimensional algebraic subset of $\mathbb R^N$. \\
If $A\subset\L(M)$ is a measurable subset, then the image $\sigma_*(A)$ is also measurable.
\end{prop}
\begin{proof}
We use the same $S'$ as in the proof of Proposition \ref{prop:preim}. Then $\L(S')$ and $\sigma_*\left(\L(S')\right)$ have measure $0$ so that it is enough to prove that $\sigma_*\left(A\setminus\L(S')\right)$ is measurable.
\begin{lemma}\label{lem:stableIm}
There exists $k$ such that for every stable set $B\subset\L(M)\setminus\L(S')$, $\sigma_*(B)$ is stable and $\dim(B)-k\le\dim\left(\sigma_*(B)\right)\le\dim(B)$.
\end{lemma}
\begin{proof}
By definition of $S'$ and Lemma \ref{lem:finitesubcov}, there exists $k$ such that $$B\subset\L(M)\setminus\L(S')\subset\bigcup_{e,e'\le k}\Delta_{e,e'}$$
Then the lemma derives from Lemma \ref{lem:CoV}.
\end{proof}
Assume that $A$ is measurable with the data $A_m,C_{m,i}$; then $$A\Delta A_m\subset\bigcup C_{m,i}$$ so that $$(A\setminus\L(S'))\Delta(A_m\setminus\L(S'))\subset\bigcup C_{m,i}\setminus\L(S')$$ and $$\sigma_*(A\setminus\L(S'))\Delta\sigma_*(A_m\setminus\L(S'))\subset\sigma_*\left((A\setminus\L(S'))\Delta(A_m\setminus\L(S'))\right)\subset\bigcup\sigma_*\left(C_{m,i}\setminus\L(S')\right)$$
Then we may conclude using Lemma \ref{lem:stableIm}.
\end{proof}
\begin{thm}\label{thm:CoV}
Let $\sigma:M\rightarrow X$ be a proper generically one-to-one Nash map where $M$ is a non-singular $d$-dimensional algebraic subset of $\mathbb R^p$ and $X$ a $d$-dimensional algebraic subset of $\mathbb R^N$. \\
Let $A\subset\L(X)$ be a measurable set. Let $\alpha:A\rightarrow\mathbb N_{\ge0}\cup\{\infty\}$ be such that $\mathbb L^{-\alpha}$ is integrable. \\
Then $\mathbb L^{-(\alpha\circ\sigma_*+\operatorname{ord}_t\operatorname{jac}_\sigma)}$ is integrable on $\sigma_*^{-1}(A)$ and
$$\int_{A\cap\Im(\sigma_*)}\mathbb L^{-\alpha}\d\mu_{\L(X)}=\int_{\sigma^{-1}_*(A)}\mathbb L^{-(\alpha\circ\sigma_*+\operatorname{ord}_t\operatorname{jac}_\sigma)}\d\mu_{\L(M)}$$
where $\sigma_*:\L(M)\rightarrow\L(X)$ is induced by $\sigma$.
\end{thm}
\begin{proof}
Set $\tilde\alpha=\alpha\circ\sigma_*+\operatorname{ord}_t\operatorname{jac}_\sigma$ (we avoid the letter $\beta$, already used for the virtual Poincaré polynomial). By Proposition \ref{prop:preim}, $\sigma^{-1}_*(A)$ and the fibers of $\alpha\circ\sigma_*$ are measurable.
Notice that $$\tilde\alpha^{-1}(n)=\bigsqcup_{e=0}^n\left((\alpha\circ\sigma_*)^{-1}(n-e)\cap(\operatorname{ord}_t\operatorname{jac}_\sigma)^{-1}(e)\cap\sigma_*^{-1}(A)\right)$$ so that the fibers of $\tilde\alpha$ are measurable.
As in the proof of Proposition \ref{prop:preim}, up to replacing $\sigma_*^{-1}(A)$ by $\sigma_*^{-1}(A)\setminus\L(S')$, we may assume that $$\sigma_*^{-1}(A)\subset\bigcup_{e,e'\le k}\Delta_{e,e'}$$
Using Lemma \ref{lem:CoV}, we obtain
\begin{align*}
\int_{\sigma_*^{-1}(A)}\mathbb L^{-(\alpha\circ\sigma_*+\operatorname{ord}_t\operatorname{jac}_\sigma)}\d\mu_{\L(M)} &= \sum_{e,e'\le k}\int_{\sigma_*^{-1}(A)\cap\Delta_{e,e'}}\mathbb L^{-(\alpha\circ\sigma_*+\operatorname{ord}_t\operatorname{jac}_\sigma)}\d\mu_{\L(M)} \\
&=\sum_{e,e'\le k}\sum_{n\ge e}\mu\left(\gamma\in\sigma_*^{-1}(A)\cap\Delta_{e,e'},\,\alpha\circ\sigma_*(\gamma)=n-e\right)\mathbb L^{-n} \\
&=\sum_{e,e'\le k}\sum_{n\ge e}\mu\left(\gamma\in A\cap\sigma_*(\Delta_{e,e'}),\,\alpha(\gamma)=n-e\right)\mathbb L^{-(n-e)} \\
&=\sum_{e,e'\le k}\sum_{n\ge 0}\mu\left(\gamma\in A\cap\sigma_*(\Delta_{e,e'}),\,\alpha(\gamma)=n\right)\mathbb L^{-n} \\
&=\sum_{n\ge 0}\sum_{e,e'\le k}\mu\left(\gamma\in A\cap\sigma_*(\Delta_{e,e'}),\,\alpha(\gamma)=n\right)\mathbb L^{-n} \\
&=\sum_{n\ge 0}\mu\left(\gamma\in A\cap\Im(\sigma_*),\,\alpha(\gamma)=n\right)\mathbb L^{-n} \\
&=\int_{A\cap\Im(\sigma_*)}\mathbb L^{-\alpha}\d\mu_{\L(X)}
\end{align*}
Notice that $\Im(\sigma_*)$ is measurable by Proposition \ref{prop:measIm}.
\end{proof}
\section{An inverse mapping theorem for blow-Nash maps}
\subsection{Blow-Nash and generically arc-analytic maps}
\begin{defn}[{\cite[Définition 4.1]{Kur88}}]
Let $X$ and $Y$ be two real algebraic sets. We say that $f:X\rightarrow Y$ is arc-analytic if for every real analytic arc $\gamma:(-1,1)\rightarrow X$ the composition $f\circ\gamma:(-1,1)\rightarrow Y$ is also real analytic.
\end{defn}
\begin{defn}[{\cite[Definition 2.22]{Cam16}}]
Let $X$ and $Y$ be two algebraic sets. We say that the map $f:X\rightarrow Y$ is generically arc-analytic if there exists an algebraic subset $S\subset X$ satisfying $\dim S<\dim X$ and such that if $\gamma:(-1,1)\rightarrow X$ is a real analytic arc not entirely included in $S$, then the composition $f\circ\gamma:(-1,1)\rightarrow Y$ is also real analytic.
\end{defn}
\begin{defn}
Let $X$ and $Y$ be two algebraic sets. We say that $f:X\rightarrow Y$ is blow-Nash if $f$ is semialgebraic and if there exists a finite sequence of algebraic blowings-up with non-singular centers $\sigma:M\rightarrow X$ such that $f\circ\sigma:M\rightarrow Y$ is real analytic (and hence Nash).
\end{defn}
\begin{lemma}[{\cite[Lemma 2.27]{Cam16}}]\label{lem:BNisGenAA}
Let $f:X\rightarrow Y$ be a semialgebraic map between two real algebraic sets. Then $f:X\rightarrow Y$ is blow-Nash if and only if $f$ is generically arc-analytic. \\
\end{lemma}
\begin{rem}
In the non-singular case, the previous lemma derives from \cite{BM90} or \cite{Par94}.
\end{rem}
\begin{assumption}\label{hyp:S}
For the rest of this section we assume that $X\subset\mathbb R^N$ and $Y\subset\mathbb R^M$ are two $d$-dimensional algebraic sets and that $f:X\rightarrow Y$ is blow-Nash. Since $f$ is, in particular, semialgebraic, it is real analytic in the complement of an algebraic subset $S$ of $X$ of dimension $<d$. We may choose $S$ sufficiently big so that $S$ contains the singular set of $X$ and the non-analyticity set of $f$.
Because $f$ is blow-Nash we may suppose, moreover, that $f\circ\gamma$ is real analytic for every real analytic arc $\gamma$ not entirely included in $S$. Then for every $\gamma\in\L(X)\setminus\L(S)$, $f\circ\gamma\in\L(Y)$.
\end{assumption}
We say that such $f$ \emph{is generically of maximal rank} if the Jacobian matrix of $f$ is of rank $d$ on a dense semialgebraic subset of $X\setminus S$.
Let $\gamma\in\L(X)\setminus\L(S)$. Then the limit as $t\to0$ of the tangent spaces $T_{\gamma(t)} X$ exists in the Grassmannian $\mathbb G_{N,d}$ of $d$-dimensional linear subspaces of $\mathbb R^N$. After a linear change of coordinates we may assume that this limit is equal to $\mathbb R^d\subset\mathbb R^N$. Then $(x_1,\ldots,x_d)$ is a local system of coordinates at every $\gamma(t)$, $t\ne 0$. Fix $J=\{j_1,\ldots,j_d\}$ with $1\le j_1<\cdots<j_d\le M$. Then, for $t\ne 0$,
$$\d f_{j_1}\wedge\cdots\wedge\d f_{j_d}(\gamma(t))=\eta_J(t)\,\d x_{1}\wedge\cdots\wedge\d x_{d},$$
where $\eta_J(t)$ is a semialgebraic function, well-defined for $t\ne 0$.
Indeed, let $\Gamma_f\subset\mathbb R^{N+M}$ denote the graph of $f$ and let $\tau_{\Gamma_f}:\operatorname{Reg}(\Gamma_f)\to\mathbb G_{N+M,d}$ be the Gauss map. It is semialgebraic, see e.g. \cite[Proposition 3.4.7]{BCR}, \cite{Kur92}.
Denote by $\widetilde{\Gamma_f}$ the closure of its image and by $\pi_f:\widetilde{\Gamma_f}\to\Gamma_f$ the induced projection.
Then $\gamma$ lifts to a semialgebraic arc $\overline\gamma$ in $\widetilde{\Gamma_f}$. The limits $\lim_{t\to 0^+}\overline\gamma(t)$ and $\lim_{t\to 0^-}\overline\gamma(t)$ exist, and as follows from Proposition \ref{prop:orders} they coincide.
Denote by $E\to\mathbb G_{N+M,d}$ the tautological bundle. Thus each fiber of $E\to\mathbb G_{N+M,d}$ is a $d$-dimensional vector subspace of $\mathbb R^{N+M}$.
We denote by $(x_1,\ldots,x_N,f_1,\ldots f_M)$ the linear coordinates in $\mathbb R^{N+M}$.
Then the restriction of alternating $d$-forms to each $V^d\in\mathbb G_{N+M,d}$ gives an identity $$\d f_{j_1}\wedge\cdots\wedge\d f_{j_d} = \eta_J(V^d)\,\d x_{1}\wedge\cdots\wedge\d x_{d}$$ that defines a semialgebraic function $\eta_J(V^d)$ on $\mathbb G_{N+M,d}$ with values in $\mathbb R\cup\{\pm \infty\}$.
Then $\eta_J (t) = \eta_J (\overline \gamma(t))$.
As follows from Proposition \ref{prop:orders}, $\eta_J(t)$ is meromorphic and $\operatorname{ord}_t\eta_J\in \mathbb Z \cup \{\infty\}$.
The following notion generalizes the order defined in Definition \ref{defn:ordjac1}.
\begin{defn}\label{defn:ordjac}
\emph{The order of the Jacobian determinant of $f$ along $\gamma$} is defined as $$\operatorname{ord}_t\operatorname{jac}_f(\gamma)=\min_J\{\operatorname{ord}_t\eta_J(t)\}.$$
If $\eta_J(t)\equiv0$ then we define its order as $+\infty$.
\end{defn}
\begin{defn}\label{defn:jacbounded}
We say that \emph{the Jacobian determinant of $f$ is bounded from above (resp. below)} if there exists $S\subset X$ as in \ref{hyp:S} such that for every $\gamma\in\L(X)\setminus\L(S)$, $\operatorname{ord}_t\operatorname{jac}_f(\gamma)\ge0$ (resp. $\operatorname{ord}_t\operatorname{jac}_f(\gamma)\le0$).
\end{defn}
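To see what these conditions exclude, consider the elementary example $f:\mathbb R\rightarrow\mathbb R$, $f(x)=x^3$ (not taken from the references). It is an arc-analytic semialgebraic homeomorphism and $\operatorname{ord}_t\operatorname{jac}_f(\gamma)=\operatorname{ord}_t\left(3\gamma(t)^2\right)\ge0$ for every arc $\gamma$, so the Jacobian determinant of $f$ is bounded from above; however, along $\gamma(t)=t$ we get $\operatorname{ord}_t\operatorname{jac}_f(\gamma)=2>0$, so it is not bounded from below. Accordingly, the inverse $f^{-1}(y)=y^{\frac13}$ is not arc-analytic: $f^{-1}(t)=t^{\frac13}$ is not real analytic at the origin.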
\subsection{Resolution diagram of $f$}
Let $g:M\rightarrow X$ be a Nash map where $M$ is a non-singular algebraic set and $X$ is an algebraic subset of $\mathbb R^N$. Denote by $\mathcal O_M$ the sheaf of Nash functions on $M$. \\ Assume that $\dim M=\dim X=d$. Then the Jacobian sheaf $\mathcal J_g$ of $g$ is the sheaf of $\mathcal O_M$-ideals generated, in a local system of coordinates $z_1,\ldots,z_d$ on $M$, by
$$
\mathcal J_g =
\left\langle\pd{\left(g_{i_1},\ldots,g_{i_d}\right)}{(z_1,\ldots,z_d)},\,
1\le i_1<\cdots<i_d\le N\right\rangle.
$$
Let $D=\cup D_i \subset M$ be a divisor with normal crossings. We say that a local system of coordinates $z_1,\ldots,z_d$ at $p\in M$ is compatible with $D$ if either $p\notin D$ or, in a neighborhood of $p$, $D$ is the zero set of a monomial in $z_1,\ldots,z_d$.
\begin{prop}\label{prop:JsInvertible}
Let $g:M\rightarrow X$ be as in the previous definition. Then there exists $\sigma:\tilde M\rightarrow M$ the composition of a sequence of blowings-up with smooth algebraic centers and an algebraic divisor with simple normal crossings $D=\cup D_i\subset \tilde M$ such that in any local Nash system of coordinates compatible with $D$,
$\mathcal J_{g\circ\sigma}$ is generated by a monomial.
\end{prop}
\begin{proof}
First we fix a regular (in the algebraic sense) differential form $\omega_M$ of degree $d$ on $M$ whose restriction to each connected component of $M$ is not identically zero.
There exists a sequence of blowings-up whose Jacobian determinant is a normal crossing divisor and such that the compositions with the coefficients of $\omega_M$ are also normal crossings, see for instance \cite[Theorem 1.10]{BM97}. Then the zero set of the pullback of $\omega_M$ is a divisor with simple normal crossings.
Up to composing with blowings-up, this allows us to assume that the zero set of $\omega_M$, denoted by $Z(\omega_M)$, is a divisor with simple normal crossings.
Since $M$, and hence $Z(\omega_M)$, is affine there is a regular function $\varphi$ on $M$ such that $Z(\omega_M)\subset \div \varphi$. By performing additional blowings-up we may assume that $\div (\varphi)$ is a divisor with normal crossings.
For $I=\{i_1,\ldots,i_d\}\subset\{1,\ldots,N\}$, let $\pi_I:X\rightarrow\mathbb R^d$ be defined by $\pi_I(x_1,\ldots,x_N)=(x_{i_1},\ldots,x_{i_d})$. We consider the algebraic differential form $\omega_I=\pi_I^*(\d x_{i_1}\wedge\cdots\wedge\d x_{i_d})$.
Then
$$
\varphi g^*\omega_I= h_I \omega_M,
$$
where $h_I$ is a Nash function on $M$. By \cite[Proposition 2.11]{Cam17}, we may find a finite composition of blowings-up $\sigma:\tilde M\rightarrow M$, with smooth algebraic centers, such that $h_I\circ\sigma$ is locally a monomial times a Nash unit.
More precisely, let $D \subset \tilde M$ be the union of $\sigma ^{-1} (\div \varphi)$ and the exceptional divisor of $\sigma$. We may suppose that $D$ is with simple normal crossings and hence $h_I\circ\sigma$ equals a monomial times a Nash unit, in any local system of coordinates compatible with $D$.
Let $z_1,\ldots,z_d$ be such a local system of coordinates and let $\tilde g= g\circ \sigma$. Then
$$\tilde g^*\omega_I=\pd{\left(\tilde g_{i_1},\ldots,\tilde g_{i_d}\right)}{(z_1,\ldots,z_d)}\d z=\sigma^*\left(\varphi^{-1}h_I\,\omega_M\right)=z^{\alpha_I}u(z)\d z,$$
where $u$ is a unit.
We may apply the above procedure to all $\omega_I$ and their differences.
Then, by \cite[Beginning of the proof of Proposition 2.1]{Zar67}, see also \cite[Lemma 6.5]{BM90}, we conclude that the ideal generated by such
$\tilde g^*\omega_I$ is, locally, principal and generated by a monomial.
\end{proof}
Let $p:\Gamma\rightarrow X$ be a composition of finitely many algebraic blowings-up such that $q=f\circ p:\Gamma\rightarrow Y$ is Nash, and let $\sigma:M\rightarrow\Gamma$ be an algebraic resolution of $\Gamma$ such that $\mathcal J_{p\circ\sigma}$ (resp. $\mathcal J_{q\circ\sigma}$) is locally generated by a monomial.
Notice that $M$ is a non-singular real algebraic variety and that $f\circ p\circ\sigma$ is Nash.
Note that $\mathcal J_{q\circ\sigma}$ vanishes identically on some connected component of $M$ if and only if $f$ is not generically of maximal rank.
We call $p:\Gamma\rightarrow X$ and $\sigma:M\rightarrow\Gamma$ satisfying the above properties \emph{a resolution diagram of $f$}. By Hironaka's desingularisation theorem and Proposition \ref{prop:JsInvertible}, such a diagram always exists but is not unique.
\begin{equation}\label{eqn:resolution}
\begin{gathered}
\xymatrix{
& M \ar[d]^\sigma & \\
& \Gamma \ar[dl]_p \ar[dr]^q & \\
X \ar[rr]_f & & Y
}
\end{gathered}
\end{equation}
By choosing the algebraic subset $S\subset X$ bigger (but still with $\dim S<d$) we may assume that $(p\circ\sigma)_*$ induces a bijection $\L(M)\setminus\L(S')\rightarrow\L(X)\setminus\L(S) $, where $S'=(p\circ\sigma)^{-1}(S)$. Note that $\dim S'<d$.
Thus the diagram \eqref{eqn:resolution} induces a diagram
$$\xymatrix{
& \L(M)\setminus\L(S') \ar@{^{(}->>}[dl]_{(p\circ\sigma)_*} \ar[dr]^{(q\circ\sigma)_*} & \\
\L(X)\setminus\L(S) \ar[rr]_{f_*} & & \L(Y)
}$$
where we denote $f_*=(q\circ\sigma)_*\circ(p\circ \sigma)_*^{-1}$.
Now we show how to compute the order of the Jacobian determinant of $f$ along $\gamma$ using a resolution diagram.
\begin{prop}\label{prop:orders}
Let $\gamma\in\L(X)\setminus\L(S)$ and let $\tilde\gamma=(p\circ \sigma)_*^{-1}(\gamma)$. Then
\begin{equation}
\operatorname{ord}_t\operatorname{jac}_f(\gamma)=\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\tilde\gamma(t))-\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\tilde\gamma(t)).
\end{equation}
\end{prop}
\begin{proof}
The result derives from the chain rule, which holds along $\gamma(t)$ for $t\neq0$ since $\gamma\notin\L(S)$.
\end{proof}
\begin{cor}\label{cor:jacbounded}
Suppose that $f$ is generically of maximal rank. Then the Jacobian determinant of $f$ is bounded from above, resp. from below, if and only if at every point of $M$ a local generator of $\mathcal J_{p\circ\sigma}$ divides a local generator of $\mathcal J_{q\circ\sigma}$, resp. a local generator of $\mathcal J_{q\circ\sigma}$ divides a local generator of $\mathcal J_{p\circ\sigma}$.
\end{cor}
\begin{rem}
We deduce from the previous corollary that if one of the conditions of Definition \ref{defn:jacbounded} is satisfied for one $S$, then it holds for every $S$.
\end{rem}
\subsection{An inverse mapping theorem}
\begin{thm}\label{thm:IFT}
Let $f:(X,x)\rightarrow(Y,y)$ be a germ of semialgebraic homeomorphism between real algebraic sets.
Assume that $\mu_{\L(X)}(\L(X,x))=\mu_{\L(Y)}(\L(Y,y))$. \\
If $f$ is generically arc-analytic and if the Jacobian determinant of $f$ is bounded from below, then the inverse map $f^{-1}:Y\rightarrow X$ is also generically arc-analytic and the Jacobian determinant of $f$ is bounded from above.
\end{thm}
\begin{rem}
Notice that arc-analyticity is an open condition for semialgebraic continuous maps (see \cite[Theorem 3.1]{KP05}, where it is not necessary to assume that $f$ is bounded, up to composing $f$ with a real analytic diffeomorphism $\varphi:\mathbb R\rightarrow(-1,1)$). Hence, since the above statement is local, it is enough to use real analytic arcs centered at $x$ for the arc-analyticity condition.
The same holds for the boundedness of the Jacobian determinant of $f$: we assume that the arcs of Definition \ref{defn:jacbounded} or Corollary \ref{cor:jacbounded} are centered at $x$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{thm:IFT}]
We have the commutative diagram \eqref{eqn:resolution}. Notice that $E=(p\circ\sigma)^{-1}(0)$ is algebraic since $p\circ\sigma$ is regular. By Theorem \ref{thm:CoV},
\begin{align*}
\mu_{\L(X)}\left((p\circ\sigma)_*(\L(M,E))\right) &= \int_{(p\circ\sigma)_*(\L(M,E))}\mathbb L^{-0}\d\mu_{\L(X)} \\
&= \int_{\L(M,E)}\mathbb L^{-\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}}\d\mu_{\L(M)} \\
&= \sum_{n\ge0}\mu_{\L(M)}\left(\L(M,E)\cap\left(\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}\right)^{-1}(n)\right)\mathbb L^{-n}
\end{align*}
Thus
\begin{align*}
\mu_{\L(X)}\left((p\circ\sigma)_*(\L(M,E))\right)\sum_{i\ge0}\mathbb L^{-i} &= \sum_{i\ge0}\sum_{n\ge0}\mu_{\L(M)}\left(\L(M,E)\cap\left(\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}\right)^{-1}(n)\right)\mathbb L^{-(i+n)}\\
&=\sum_{n\ge0}\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right)\mathbb L^{-n}
\end{align*}
Similarly
$$\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)\sum_{i\ge0}\mathbb L^{-i}=\sum_{n\ge0}\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right)\mathbb L^{-n}$$
Hence
\begin{align*}
&\left(\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)-\mu_{\L(X)}\left((p\circ\sigma)_*(\L(M,E))\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad=\sum_{n\ge0}\Big(\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right)-\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right)\Big)\mathbb L^{-n}
\end{align*}
Since any real analytic arc not entirely included in the exceptional locus may be lifted by $p\circ\sigma$, we have $$\mu_{\L(X)}\left((p\circ\sigma)_*(\L(M,E))\right)=\mu_{\L(X)}\left(\L(X,x)\right)$$ so that
\begin{align*}
&\left(\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)-\mu_{\L(X)}\left(\L(X,x)\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad=\sum_{n\ge0}\Big(\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right)-\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right)\Big)\mathbb L^{-n}
\end{align*}
Since $\mu_{\L(Y)}(\L(Y,y))=\mu_{\L(X)}(\L(X,x))$, we obtain
\begin{align*}
&\left(\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)-\mu_{\L(Y)}\left(\L(Y,y)\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad=\sum_{n\ge0}\Big(\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right)-\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right)\Big)\mathbb L^{-n}
\end{align*}
Since $M$ is non-singular, we may simply write
\begin{align*}
&\left(\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)-\mu_{\L(Y)}\left(\L(Y,y)\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad=\sum_{n\ge0}\Big(\left[\gamma\in\L_n(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right]-\left[\gamma\in\L_n(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right]\Big)\mathbb L^{-(n+2)d}
\end{align*}
Since the Jacobian determinant of $f$ is bounded from below, using Proposition \ref{prop:orders}, we get that each summand of the RHS is positive or zero (in the sense of Definition \ref{defn:order}) because the leading coefficient of the virtual Poincaré polynomial of a non-empty $\mathcal{AS}$-set is positive:
\begin{align*}
&\left(\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)-\mu_{\L(Y)}\left(\L(Y,y)\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad=\sum_{n\ge0}\Big(\left[\left\{\gamma\in\L_n(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right\}\setminus\left\{\gamma\in\L_n(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right\}\right]\Big)\mathbb L^{-(n+2)d}
\end{align*}
Moreover, the LHS is negative or zero since $(q\circ\sigma)_*(\L(M,E))\subset\L(Y,y)$. \\
Assume that the Jacobian determinant of $f$ is not bounded from above; then at least one of the summands of the RHS is positive, so that we obtain a contradiction. This proves that the Jacobian determinant of $f$ is bounded from above. \\
Furthermore, since the RHS is zero, we obtain that
\begin{equation}\label{eqn:Im}
\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)=\mu_{\L(Y)}\left(\L(Y,y)\right)
\end{equation}
We are now going to prove that $f^{-1}$ is generically arc-analytic so that it is blow-Nash.
Assume by contradiction that there exists $\gamma\in\L(Y,y)$, not entirely included in $f(S)\cup Y_\mathrm{sing}$, which cannot be lifted by $q\circ\sigma$. Nevertheless, by \cite[Proposition 2.21]{Cam16},
$$(q\circ\sigma)^{-1}(\gamma(t))=\sum_{i\ge0}c_it^{\frac i a},\,t\ge0$$
and
$$(q\circ\sigma)^{-1}(\gamma(t))=\sum_{i\ge0}d_i(-t)^{\frac i b},\,t\le0.$$
By assumption $(q\circ\sigma)^{-1}(\gamma(t))$ is not analytic, so that either these expansions do not coincide or they have a non-integer exponent.
\begin{enumerate}
\item We first treat the latter case. Assume that $$(q\circ\sigma)^{-1}(\gamma(t))=\sum_{i=0}^mc_it^{i}+ct^{\frac{a}{b}}+\cdots,\,t\ge0,\,m<\frac a b<m+1,\,c\neq0.$$
Since $(q\circ\sigma)^{-1}:Y\setminus f(S)\rightarrow M$ is continuous and subanalytic, it is locally Hölder, so that there exists $N\in\mathbb N_{\ge0}$ such that, for every real analytic arc $\eta(t)$ not entirely included in $f(S)\cup Y_\mathrm{sing}$, $$\eta(t)\equiv\gamma(t)\mod t^N\Rightarrow(q\circ\sigma)^{-1}(\eta(t))\equiv(q\circ\sigma)^{-1}(\gamma(t))\mod t^{m+1}.$$
Thus $\pi_N^{-1}\left(\pi_N(\gamma)\right)\subset\L(Y,y)\setminus(q\circ\sigma)_*(\L(M,E))$. \\
Notice that $\pi_N^{-1}\left(\pi_N(\gamma)\right)$ is measurable as a cylinder. Let $\rho:\tilde Y\rightarrow Y$ be a resolution of $Y$. Since $\gamma$ is not entirely included in the singular set of $Y$, there exists a unique real analytic arc $\tilde\gamma$ on $\tilde Y$ such that $\gamma=\rho\circ\tilde\gamma$. Let $e=\operatorname{ord}_t\operatorname{jac}_\rho(\tilde\gamma(t))$ and let $e'$ be such that $\gamma\in\L^{(e')}(Y)$. We may assume that $N\ge\max(e',2e)$. Then, by Lemma \ref{lem:CoV} and since $\tilde Y$ is non-singular,
\begin{align*}
\mu_{\L(Y)}\left(\pi_N^{-1}\left(\pi_N(\gamma)\right)\right)&=\mu_{\L(\tilde Y)}\left(\pi_N^{-1}\left(\pi_N(\tilde\gamma)\right)\right)\mathbb L^{-e} \\
&=\left[\pi_N\left(\pi_N^{-1}\left(\pi_N(\tilde\gamma)\right)\right)\right]\mathbb L^{-(N+1)d-e} \\
&\neq 0
\end{align*}
Since $\pi_N^{-1}\left(\pi_N(\gamma)\right)\subset\L(Y,y)\setminus(q\circ\sigma)_*(\L(M,E))$, we obtain that $$\mu\left(\L(Y,y)\setminus(q\circ\sigma)_*(\L(M,E))\right)\neq0$$ which contradicts \eqref{eqn:Im}.
\item We now assume that
$$\tilde\gamma^+(t)=(q\circ\sigma)^{-1}(\gamma(t))=\sum_{i=0}^{m-1}c_it^{i}+ct^m+\cdots,\,t\ge0$$
and
$$\tilde\gamma^-(t)=(q\circ\sigma)^{-1}(\gamma(t))=\sum_{i=0}^{m-1}c_it^{i}+dt^m+\cdots,\,t\le0$$
with $c\neq d$.
Notice that $(p\circ\sigma)(\tilde\gamma^\pm(t))$ are analytic so that $\gamma(t)=(f\circ p\circ\sigma)(\tilde\gamma^+(t))=(f\circ p\circ\sigma)(\tilde\gamma^-(t))$. Since $f$ is a homeomorphism, we get $(p\circ\sigma)(\tilde\gamma^+(t))=(p\circ\sigma)(\tilde\gamma^-(t))$. Since this real analytic arc is not entirely included in $S$, it may be uniquely lifted by $p\circ\sigma$, so that $\tilde\gamma^+(t)=\tilde\gamma^-(t)$. Hence $c=d$ and we obtain a contradiction.
\end{enumerate}
Thus, for all $\gamma\in\L(Y,y)\setminus\L(f(S)\cup Y_\mathrm{sing})$ there exists $\tilde\gamma\in\L(M,E)$ such that $(q\circ\sigma)(\tilde\gamma(t))=\gamma(t)$. Then $f^{-1}(\gamma(t))=(p\circ\sigma)(\tilde\gamma(t))$ which is real analytic. Therefore $f^{-1}$ is generically arc-analytic and so blow-Nash.
\end{proof}
\begin{rem}
Notice that, in the above proof, we do not need a homeomorphism $f:X\rightarrow Y$ but only a homeomorphism $f:\overline{\operatorname{Reg}(X)}\rightarrow\overline{\operatorname{Reg}(Y)}$.
\end{rem}
Under the assumptions of the previous theorem, we derive the following corollary from Lemma \ref{lem:BNisGenAA}.
\begin{cor}\label{cor:IFTcor1}
Let $f:(X,x)\rightarrow(Y,y)$ be a semialgebraic homeomorphism germ between real algebraic sets with $\dim X=\dim Y$.
Assume moreover that $\mu_{\L(X)}(\L(X,x))=\mu_{\L(Y)}(\L(Y,y))$. \\
If $f$ is blow-Nash and if the Jacobian determinant of $f$ is bounded from below, then the inverse $f^{-1}$ is also blow-Nash and the Jacobian determinant of $f$ is bounded from above.
\end{cor}
\begin{rem}
Notice that in the previous results we do not assume that $X=Y$, contrary to \cite[Main Theorem 3.5]{Cam16}.
\end{rem}
\begin{thm}\label{thm:CompareMeasures}
Let $f:(X,x)\rightarrow(Y,y)$ be a semialgebraic homeomorphism germ between algebraic sets with $\dim X=\dim Y$.
If $f$ is generically arc-analytic and if the Jacobian determinant of $f$ is bounded from below, then $\mu_{\L(X)}(\L(X,x))\preceq\mu_{\L(Y)}(\L(Y,y))$.
\end{thm}
\begin{proof}
Following the beginning of the proof of Theorem \ref{thm:IFT}, we obtain:
\begin{align*}
&\left(\mu_{\L(Y)}\left(\L(Y,y)\right)-\mu_{\L(X)}\left(\L(X,x)\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad\succeq\left(\mu_{\L(Y)}\left((q\circ\sigma)_*(\L(M,E))\right)-\mu_{\L(X)}\left(\L(X,x)\right)\right)\sum_{i\ge0}\mathbb L^{-i} \\
&\quad=\sum_{n\ge0}\Big(\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{q\circ\sigma}(\gamma(t))\le n\right)-\mu_{\L(M)}\left(\gamma\in\L(M,E),\,\operatorname{ord}_t\operatorname{jac}_{p\circ\sigma}(\gamma(t))\le n\right)\Big)\mathbb L^{-n} \\
&\quad\succeq0
\end{align*}
\end{proof}
\section{An inverse mapping theorem for inner-Lipschitz maps}
\subsection{Inner distance}
Let $X$ be a connected semialgebraic subset of $\mathbb R^N$ equipped with the standard Euclidean distance.
We denote by $d_X$ the {\it inner} (also called \emph{geodesic}) distance in $X$. By definition, for $p,q\in X$, the inner distance $d_X(p,q)$ is the infimum over the length of all
rectifiable curves joining $p$ to $q$ in $X$. By \cite{KO97}, $d_X(p,q)$ is the infimum over the length of continuous semialgebraic curves in $X$ joining $p$ and $q$.
It is proven in \cite{KO97} that $d_X$ can be approximated uniformly by subanalytic distances.
We recall some results from \cite{KO97}, based on \cite{Kur92}. Let $\varepsilon>0$. We say that a connected semialgebraic set $\Gamma\subset\mathbb R^N$ is \emph{$K_\varepsilon$-regular} if for any $p,q\in\Gamma$ we have $$d_\Gamma(p,q)\le(1+\varepsilon)|p-q|.$$
We state now a semialgebraic version of \cite[Proposition 3]{KO97}.
\begin{prop}\label{prop:ko}
Let $X\subset\mathbb R^N$ be a semialgebraic set and $\varepsilon>0$. Then there exists a finite decomposition
$X = \bigcup_{\nu \in V} \Gamma_\nu$ such that:
\begin{enumerate}
\item each $ \Gamma_\nu$ is a semialgebraic connected analytic submanifold of
$\mathbb R^N$,
\item each $\Gamma_\nu$ is $K_\varepsilon$-regular.
\end{enumerate}
\end{prop}
\begin{rem}
Given a finite family of semialgebraic sets $X_i,\,i\in I$, we can find a decomposition satisfying the above conditions and such that for any $i\in I$, $\nu\in V$, we have: either $\Gamma_\nu\subset X_i$ or $\Gamma_\nu\cap X_i=\emptyset$.
\end{rem}
For a $C^1$ map $f:X'\rightarrow\mathbb R^{M}$ defined on a submanifold $X'$ of $\mathbb R^N$ we denote by $D_pf:T_pX'\rightarrow\mathbb R^{M}$ its differential at $p\in X'$. Then the norm of $D_p f$ is defined by $$\Vert D_pf\Vert=\sup\left\{|D_pf(v)|:\,v\in T_pX',\,|v|=1\right\}.$$
\begin{lemma}\label{lem:Lip}
Assume that $f_\nu:\Gamma_\nu\rightarrow\mathbb R^{M}$ is a $C^1$-map such that for any $p\in\Gamma_\nu$ we have $\Vert D_pf_\nu\Vert\le L$. Then $f_\nu$ is $(1+\varepsilon)L$-Lipschitz with respect to the Euclidean distance; hence it extends continuously to $\overline\Gamma_\nu$ as a Lipschitz map with the same constant.
\end{lemma}
\begin{proof}
Let $p,q\in\Gamma_\nu$ and $\varepsilon'>\varepsilon$. Then, by \cite{KO97}, there exists a $C^1$-semialgebraic arc $\lambda:[0,1]\rightarrow\Gamma_\nu$, with $p=\lambda(0)$ and $q=\lambda(1)$, of length $|\lambda|\le(1+\varepsilon')|p-q|$.
It follows that
$$
|f_\nu(p)-f_\nu(q)|\le L|\lambda|\le(1+\varepsilon')L|p-q|.
$$
We obtain the conclusion passing to the limit $\varepsilon'\rightarrow\varepsilon$.
Notice that, on any metric space, a Lipschitz mapping extends continuously to the closure with the same Lipschitz constant.
\end{proof}
Let $X$ and $Y$ be locally closed connected semialgebraic subsets respectively of $\mathbb R^N$ and $\mathbb R^{M}$.
They are equipped with the inner distances $d_X$ and $d_Y$, respectively. Let
$$f:X\rightarrow Y$$ be a continuous semialgebraic map.
Then there exists a semialgebraic set $X'\subset X$, which is open and dense in $X$, such that the connected components of $X'$ are analytic submanifolds of $\mathbb R^N$, possibly of different dimensions.
Moreover $f$ restricted to each connected component of $X'$ is analytic.
\begin{prop}\label{prop:inlip}
The following conditions are equivalent:
\begin{enumerate}[label=(\roman*),ref=\ref{prop:inlip}.(\roman*)]
\item $ d_Y (f(p),f(q))\le Ld_X(p,q)$ for any $p,q\in X$, \label{item:metric}
\item $\Vert D_pf\Vert \le L$ for any $p\in X'$. \label{item:norm}
\end{enumerate}
\end{prop}
\begin{proof}
The implication $\ref{item:metric}\Rightarrow\ref{item:norm}$ is obvious since
at a smooth point $p\in X$, the inner and Euclidean distances are asymptotically equal.
To prove the converse let us fix $p,q\in X$. For any $\varepsilon>0$ there exists a continuous semialgebraic arc $\lambda:[0,1]\rightarrow X$, with $p=\lambda(0)$ and $q=\lambda(1)$, of length $|\lambda|\le(1+\varepsilon)d_X(p,q)$. By Proposition \ref{prop:ko} there exists a finite decomposition $X'=\bigcup_{\nu\in V}\Gamma_\nu$ into $K_\varepsilon$-regular semialgebraic connected analytic submanifolds of $\mathbb R^N$. Let $X''=\bigcup_{\nu \in V'}\Gamma_\nu$ be the union of those $\Gamma_\nu$ which are open in $X'$.
Note that $X''$ is dense in $X'$. It follows that $X\subset\bigcup_{\nu \in V'}\overline{\Gamma_\nu}$. Since the arc $\lambda$ is semialgebraic there exists a finite sequence $0=t_0<\cdots<t _k=1$ such that each $\lambda([t_i,t_{i+1}])\subset\overline{\Gamma_\nu}$ for some $\nu\in V'$.
By Lemma \ref{lem:Lip} the length of $f(\lambda([t_i, t_{i+1}]))$ is bounded by $(1+\varepsilon)L|\lambda([t_i,t_{i+1}])|$. Hence
$$|f(\lambda([0, 1]))| = \sum_{i=0}^{k-1}|f(\lambda([t_i,t_{i+1}]))|\le(1+\varepsilon)L\sum_{i=0}^{k-1}|\lambda([t_i,t_{i+1}])|\le(1+\varepsilon)L|\lambda |. $$
Thus $$d_Y(f(p),f(q))\le|f(\lambda([0,1]))|\le(1+\varepsilon)L|\lambda|\le(1+\varepsilon)^2Ld_X(p,q).$$
We conclude by taking the limit as $\varepsilon\rightarrow0$.
\end{proof}
\subsection{An inverse mapping theorem}
We suppose now that $f:X\rightarrow Y$ satisfies Assumption \ref{hyp:S}. Thus it is a blow-Nash map between two real algebraic sets of dimension $d$. Let $\gamma\in\L(X)\setminus\L(S)$. Let us adapt the notation introduced in the paragraph after Assumption \ref{hyp:S}. In particular we assume that the limit of tangent spaces $T_{\gamma(t)} X$ in the Grassmannian $\mathbb G_{N,d}$ is equal to $\mathbb R^d\subset\mathbb R^N$. Then, for every $i=1,\ldots,d$ and every $j=1,\ldots,M$
$$\eta_{i,j}(t)=\frac{\partial f_{j}}{\partial x_i}(\gamma(t))$$
is semialgebraic. Thus the order of $\eta_{i,j}(t)$, as $t\rightarrow 0^+$, is a well-defined rational number (or $+\infty$ if $\eta_{i,j}$ vanishes identically along $\gamma$).
\begin{defn}
\emph{The order of the Jacobian matrix of $f$ along $\gamma$} is defined as
$$\operatorname{ord}_{t\to 0^+}\operatorname{Jac}_f(\gamma(t))=\min_{i,j}\{\operatorname{ord}_{t\to 0^+}\eta_{i,j}(t)\}.$$
\end{defn}
\begin{rem}
The above notion shouldn't be confused with the order of the Jacobian \emph{determinant} defined in Definition \ref{defn:ordjac}.
\end{rem}
\begin{rem}
It is likely that $\eta_{i,j}(t)$ is actually meromorphic and it is not necessary, in the above definition, to restrict to $t\to 0^+$. We leave it as an open problem.
\end{rem}
\begin{defn}
We say that \emph{the Jacobian matrix of $f$ is bounded from above} if there is an $S$ such that for every $\gamma\in\L(X)\setminus\L(S)$, $\operatorname{ord}_{t\to 0^+} \operatorname{Jac}_f(\gamma(t))\ge 0$.
\end{defn}
One may show again that if the above condition is satisfied for one $S$, then it is satisfied for every $S$.
The following result follows from Proposition \ref{prop:inlip}.
\begin{prop}\label{prop:CforIL}
Let $f:(X,x)\rightarrow(Y,y)$ be a semialgebraic homeomorphism germ between two real algebraic set germs with $\dim(X,x)=\dim(Y,y)$. Then $f:\overline{\operatorname{Reg}(X)}\rightarrow\overline{\operatorname{Reg}(Y)}$ is inner Lipschitz if and only if the Jacobian matrix of $f$ is bounded from above.
\end{prop}
\begin{thm}\label{thm:mainLip}
Let $f:(X,x)\rightarrow(Y,y)$ be a semialgebraic homeomorphism germ between two real algebraic set germs with $\dim (X,x)=\dim (Y,y)$.
Assume that $\mu_{\L(X)}(\L(X,x))=\mu_{\L(Y)}(\L(Y,y))$.
If $f$ is generically arc-analytic and $f^{-1}:\overline{\operatorname{Reg}(Y)}\rightarrow\overline{\operatorname{Reg}(X)}$ is inner Lipschitz, then $f^{-1}:Y\rightarrow X$ is also generically arc-analytic and $f:\overline{\operatorname{Reg}(X)}\rightarrow\overline{\operatorname{Reg}(Y)}$ is inner Lipschitz.
\end{thm}
\begin{rem}
Notice that both previous results involve the closure of the regular parts of the algebraic sets. The obtained sets $\overline{\operatorname{Reg}(X)}$ and $\overline{\operatorname{Reg}(Y)}$ do not contain any part of smaller dimension, but they still may not be smooth submanifolds. \\
For instance, for the Whitney umbrella $X=\{x^2=zy^2\}$, $\overline{\operatorname{Reg}(X)}$ consists of the canopy (i.e. the $z\ge0$ part of $X$). Therefore $\overline{\operatorname{Reg}(X)}$ is singular along the half-axis $\{(0,0,z),\,z\ge0\}$. However, it does not contain the handle of the Whitney umbrella (i.e. $\{(0,0,z),\,z<0\}$), which is a smooth manifold of dimension $1$ whereas $\dim X=2$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{thm:mainLip}.]
To simplify the exposition we suppose that $X$, and hence $Y$ as well, is pure-dimensional.
That is $X = \overline{ \operatorname{Reg}(X)}$ and $Y = \overline{\operatorname{Reg}(Y)}$. The proof in the general case is similar.
First we apply Proposition \ref{prop:inlip} to $f^{-1}$. Hence the Jacobian matrix of $f^{-1}$, and therefore the Jacobian determinant of $f^{-1}$, is bounded from above. It follows that the Jacobian determinant of $f$ is bounded from below and we can apply Theorem \ref{thm:IFT} to $f$. This shows that $f^{-1}$ is generically arc-analytic and that the Jacobian determinant of $f$ is bounded from above and below.
Now we show that the Jacobian matrix of $f$ is bounded from above.
Let $\gamma\in\L(X)\setminus\L(S)$. We may assume, as explained above, that $\mathbb R^d\subset\mathbb R^N$ is the limit of tangent spaces $T_{\gamma(t)} X$.
Similarly by considering the limit of $T_{f(\gamma(t))} Y$ we may assume that it equals $\mathbb R^d\subset\mathbb R^M$.
Then $y_1,\ldots,y_d$ form a local system of coordinates on $Y$ at every $f(\gamma(t))$, $t\ne 0$.
By the assumptions the matrix $\left(\pd{x_i}{y_j}\right)(f(\gamma(t)))$ is bounded and its determinant is a unit. Therefore, by the cofactor formula, its inverse, that is $\left(\pd{y_i}{x_j}\right)(\gamma(t))$, is bounded. This shows that $f$ is inner Lipschitz by Proposition \ref{prop:CforIL}.
\end{proof}
\begin{rem}
Notice that, in the above proof, we do not need a homeomorphism $f:X\rightarrow Y$ but only a homeomorphism $f:\overline{\operatorname{Reg}(X)}\rightarrow\overline{\operatorname{Reg}(Y)}$.
\end{rem}
\section{Introduction}
Despite the many advances in Machine Learning and Artificial Intelligence, there are still areas where learning mechanisms have not yielded performance comparable to humans. Computer chess is a prime example of the difficulties in such fields.
It is well-known that computer games have served as an important testbed for spawning various innovative AI techniques in domains and applications such as search, automated theorem proving, planning, and learning. In addition, the annual World Computer Chess Championship (WCCC) is arguably the longest ongoing performance evaluation of programs in computer science, which has inspired other well-known competitions in robotics, planning, and natural language understanding.
Computer chess, while being one of the most researched fields within AI, has not lent itself to the successful application of conventional learning methods, due to its enormous complexity. Hence, top chess programs still resort to manual tuning of the parameters of their evaluation function, typically through years of trial and error. The evaluation function assigns a score to a given chess position and is thus the most critical component of any chess program.
Currently, the only successful attempt reported on automatic learning of the parameter values of an evaluation function is based on ``mentor-assisted'' evolution \cite{david08a}. This approach evolves the parameter values by mimicking the evaluation function of an available chess program that serves as a ``mentor''. It essentially attempts to reverse engineer the evaluation function of this program by observing the scores it issues for a given set of positions. The approach relies heavily on the availability of the numeric evaluation score of each position, which is provided by the reference program.
In this paper, we deal successfully with a significantly more difficult problem, namely that of evolving the parameter values of an evaluation function by relying solely on the information available from games of human grandmasters, i.e., the moves played. Lacking any numeric information provided typically by a standard chess program, we combine supervised and unsupervised learning. The organisms are first evolved to mimic the behavior of these human grandmasters by observing their games, and the best evolved organisms are then further evolved through coevolution. The results show that our combined approach efficiently evolves the parameters of interest from randomly initialized values to highly tuned ones, yielding a program that outperforms a two-time World Computer Chess Champion.
In Section 2 we review past attempts at applying evolutionary techniques in computer chess. We also compare alternative learning methods to evolutionary methods, and argue why the latter are more appropriate for the task in question. Section 3 presents our new approach, including a detailed description of the framework of the GA as applied to the current problem. Section 4 provides our experimental results, and Section 5 contains concluding remarks and suggestions for future research.
\section{Learning in Computer Chess}
While the first chess programs could not pose a challenge to even a novice player, the current advanced chess programs are on par with the strongest human chess players, as the recent man vs.~machine matches clearly indicate. This improvement is largely a result of deep searches that are possible nowadays, thanks to both hardware speed and improved search techniques. While the search depth of early chess programs was limited to only a few plies, nowadays tournament-playing programs easily search more than a dozen plies in middlegame, and tens of plies in late endgame.
Despite their groundbreaking achievements, a glaring deficiency of today's top chess programs is their severe lack of a learning capability (except in the most negligible ways, e.g., ``learning'' not to play an opening that resulted in a loss, etc.). In other words, despite their seemingly intelligent behavior, those top chess programs are mere brute-force (albeit efficient) searchers.
\subsection{Conventional vs. Evolutionary Learning in Computer Chess}
During more than fifty years of research in the area of computer games, many learning methods such as reinforcement learning \cite{sutton98} have been employed in simpler games. Temporal difference learning has been successfully applied in backgammon and checkers \cite{schaeffer01,tesauro92}. Although temporal difference learning has also been applied to chess \cite{baxter00}, the results showed that after three days of learning, the playing strength of the program was only 2150 Elo (see Appendix B for a description of the Elo rating system), which is a very low rating for a chess program. Wiering \cite{wiering95} provided formal arguments for the failure of these methods in more complicated games such as chess.
The issue of learning in computer chess can be seen as an optimization problem. Each program plays by conducting a search, where the root of the search tree is the current position, and the leaf nodes (at some predefined depth of the tree) are evaluated by some static evaluation function. In other words, sophisticated as the search algorithms may be, most of the knowledge of the program lies in its evaluation function. Even though automatic tuning methods, that are based mostly on reinforcement learning, have been successfully applied to simpler games such as checkers, they have had almost no impact on state-of-the-art chess engines. Currently, all top tournament-playing chess programs use hand-tuned evaluation functions, since conventional learning methods cannot cope with the enormous complexity of the problem. This is underscored by the following four points.
(1) The space to be searched is huge. It is estimated that there are about $10^{46}$ possible positions that can arise in chess \cite{chinchalkar96}. As a result, any method based on exhaustive search of the problem space is infeasible.
(2) The search space is not smooth and unimodal. The evaluation function's parameters of any top chess program are highly co-dependent. For example, in many cases increasing the values of three parameters will result in a worse performance, but if a fourth parameter is also increased, then an improved overall performance would be obtained. Since the search space is not unimodal, i.e., it does not consist of a single smooth ``hill'', any gradient-ascent algorithm such as hill climbing will perform poorly. In contrast, genetic algorithms are known to perform well in large search spaces which are not unimodal.
(3) The problem is not well understood. As will be discussed in detail in the next section, even though all top programs are hand-tuned by their programmers, finding the best value for each parameter is based mostly on educated guessing and intuition. (The fact that all top programs continue to operate in this manner attests to the lack of practical alternatives.) Had the problem been well understood, a domain-specific heuristic would have outperformed a general-purpose method such as GA.
(4) We do not require a global optimum to be found. Our goal in tuning an evaluation function is to adjust its parameters so that the overall performance of the program is enhanced.
In view of the above points it seems appropriate to employ GA for automatic tuning of the parameters of an evaluation function. Indeed, at first glance this appears like an optimization task, well suited for GA. The many parameters of the evaluation function (bonuses and penalties for each property of the position) can be encoded as a bit-string. We can randomly initialize many such ``chromosomes'', each representing one evaluation function. Thereafter, one needs to evolve the population until highly tuned ``fit'' evaluation functions emerge.
However, there is one major obstacle that hinders the above application of GA, namely the fitness function. Given a set of parameters of an evaluation (encoded as a chromosome), how should the fitness value be calculated? For many years, it seemed that the solution was to let the individuals, at each generation, play against each other a series of games, and subsequently record the score of each individual as its fitness value. (Each ``individual'' is a chess program with an appropriate evaluation function.)
The main drawback of this approach is the unacceptably large amount of time needed to evolve each generation. As a result, severe limitations were imposed on the length of the games played after each generation, and also on the size of the population involved. With a population size of 100 and a limit of 10 seconds per game, and assuming that each individual plays each other individual once in every generation, it would take 825 minutes for each generation to evolve. Specifically, reaching the 100th generation would take up to 57 days. As we see in the next section, past attempts at applying this process resulted in weak programs, which were far inferior to state-of-the-art programs.
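For concreteness, these time estimates follow from elementary arithmetic; the short Python sketch below (an illustration only, using the round-robin game count and the 10-second limit stated above) reproduces them:
\begin{verbatim}
from math import comb

population = 100
seconds_per_game = 10

# round-robin: each individual plays each other individual once
games_per_generation = comb(population, 2)            # 4950 games
minutes_per_generation = games_per_generation * seconds_per_game / 60
print(minutes_per_generation)                         # -> 825.0

days_for_100_generations = 100 * minutes_per_generation / (60 * 24)
print(round(days_for_100_generations, 1))             # -> 57.3
\end{verbatim}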
In Section 3 we present our GA-based approach for evolving state-of-the-art chess evaluation functions. Before that, we briefly review previous work on applying evolutionary methods in computer chess.
\subsection{Previous Evolutionary Methods Applied \\to Chess}
Despite the abovementioned problems, there have been some successful applications of evolutionary techniques in computer chess, subject to some restrictions. Genetic programming was successfully employed by Hauptman and Sipper \cite{hauptman05,hauptman07} for evolving programs that can solve Mate-in-N problems and play chess endgames.
Kendall and Whitwell \cite{kendall01} used evolutionary algorithms for tuning the parameters of an evaluation function. Their approach had limited success, due to the very large number of games required (as previously discussed), and the small number of parameters used in their evaluation function. Their evolved program managed to compete with strong programs only if their search depth (lookahead) was severely limited.
Similarly, Aksenov \cite{aksenov04} employed genetic algorithms for evolving the parameters of an evaluation function, using games between the organisms for determining their fitness. Again, since this method required a very large amount of games, it evolved only a few parameters of the evaluation function with limited success. Tunstall-Pedoe \cite{tunstall91} also suggested a similar approach, without providing an implementation.
Gross \emph{et al.}~\cite{gross02} combined genetic programming and evolution strategies to improve the efficiency of a given search algorithm using a distributed computing environment on the Internet.
David, Koppel, and Netanyahu \cite{david08a} used ``mentor-assisted'' evolution for reverse engineering the evaluation function of a reference chess program (the ``mentor''), thereby evolving a new comparable evaluation function. Their approach takes advantage of the evaluation score of each position considered (during the training phase), that is provided by the reference program. In fact, this numeric information is key to simulating the program's evaluation function. In other words, notwithstanding the high-level performance of the evolved program, the learning process is heavily dependent on the availability of the above information.
In this paper, we combine supervised evolution and unsupervised coevolution for evolving the parameter values of the evaluation function to simulate the moves of a human grandmaster, without relying on the availability of evaluation scores of some computer chess program. As will be demonstrated, the evolved program is on par with today's strongest chess programs.
\section{Evolution and Coevolution of Evaluation Functions}
Encoding the parameters of an evaluation function as a chromosome is a straightforward task, and the main impediment for evolving evaluation functions is the difficulty of applying a fitness function (a numerical value representing how well the organism performs). However, as previously noted, establishing the fitness evaluation by means of playing numerous games between the organisms in each generation (i.e., single-population coevolution) is not practical.
As mentioned earlier, the fitness value in mentor-assisted evolution is issued as follows. Both an organism and a grandmaster-level chess program are run on a given set of positions; for each position the difference between the evaluation score computed by the organism and that computed by the reference program is recorded. The fitness value is taken to be inversely proportional to this difference.
In contrast, no evaluation scores of any chess program are assumed available in this paper, and we only make use of (widely available) databases of games of human grandmasters. The task of evolution, in this case, is thus significantly more difficult than that based on an existing chess program, as the only information available here consists of the actual moves played in the positions considered.
The evaluation function is evolved by learning from grandmasters according to the steps shown in Figure \ref{fig:mentor}.
\begin{figure}[htbp]
\begin{center}
\line(1,0){240}
\begin{enumerate}
\item Select a list of positions from games of human grandmasters. For each position store the move played.
\item For each position, let the organism perform a 1-ply search and store the move selected by the organism.
\item Compare the move suggested by the organism with the actual move played by the grandmaster. The fitness of the organism is the square of the total number of ``correct'' moves selected (where the organism's move is the same as the grandmaster's move).
\end{enumerate}
\line(1,0){240}
\caption{Fitness function for supervised evolution of evaluation functions.}
\label{fig:mentor}
\end{center}
\end{figure}
Although performing a search for each position appears to be a costly process, in fact it consumes little time. Conducting a 1-ply search amounts to less than a millisecond for a typical chess program on an average machine, and so one thousand positions can be processed in one second. This allows us to use a large set of positions for the training set.
The abovementioned process, which will be discussed below in greater detail, results in a grandmaster-level evaluation function (see next section). Due to the random initialization of the chromosomes, each time the above process is applied, a different ``best evolved organism'' is obtained. Comparing the best evolved organisms from different runs, we observe that even though they are of similar playing strength, their evolved parameter values differ, and so does their playing style.
After running the supervised evolution process a number of times, we obtain several evolved organisms. Each organism is the best evolved organism from one complete run of the evolutionary process. We next use a coevolution phase for further improving upon the obtained organisms. During this single-population coevolution phase the evolved organisms play against each other, and the fitness function applied is based on their relative performance. Completing this phase for a predetermined number of generations, the best evolved organism is selected as the best overall organism. According to the results in the next section, this ``best of best'' organism improves upon the organisms evolved from the supervised phase. As noted before, previous attempts at applying coevolution have failed to produce grandmaster-level evaluation functions. The difference here is that the population size is small (we used 10), and the initial organisms are already well tuned (in contrast to randomly initialized).
In the following subsections, we describe in detail the chess program, the implementation of the supervised and unsupervised evolution, and the GA parameters used.
\subsection{The Chess Program and the Evaluation Function}
Our chess program uses \textsc{NegaScout}/PVS \cite{campbell83,reinfeld83} search, in conjunction with standard enhancements such as null-move pruning \cite{beal89,david08b,donninger93}, internal iterative deepening \cite{anantharaman91,scott69}, dynamic move ordering (history + killer heuristic) \cite{akl77,gillogly72,schaeffer83,schaeffer89}, multi-cut pruning \cite{bjornsson98,bjornsson01}, selective extensions \cite{anantharaman91,beal95} (consisting of check, one-reply, mate-threat, recapture, and passed pawn extensions), transposition table \cite{nelson85,slate77}, and futility pruning near leaf nodes \cite{heinz98a}.
The evaluation function of the program (which we are interested in tuning automatically) consists of 35 parameters. Even though this is a small number of parameters in comparison to other top programs, the set of parameters used does cover all important aspects of a position, e.g., material, piece mobility and centricity, pawn structure, and king safety.
The parameters of the evaluation function are represented as a binary bit-string (chromosome size: 224 bits), initialized randomly. The value of a pawn is set to a fixed value of 100, which serves as a reference for all other parameter values. Except for the four parameters representing the material values of the pieces, all the other parameters are assigned a fixed length of 6 bits per parameter. Obviously, there are many parameters for which 3 or 4 bits suffice. However, allocating a fixed length of 6 bits to all parameters ensures that \emph{a priori} knowledge does not bias the algorithm in any way.
Note that the program's evaluation function is randomly initialized, i.e., other than knowing the rules of the game, the program has essentially no game skills at all at this point.
\subsection{Supervised Evolution using Human\\ Grandmaster Games}
As indicated, our goal is to evolve the parameters of a program's evaluation function, so as to simulate the moves played by grandmasters for a given set of positions.
For our experiments, we use a database of 10,000 games by grandmasters of rating above 2600 Elo, and randomly pick one position from each game. We pick winning positions only, i.e., positions where the side to move ultimately won the game (e.g., if it is white's turn to move, the game was won eventually by white). Of these 10,000 positions, we select 5,000 positions for training and 5,000 for testing.
In each generation, for each organism we translate its chromosome bit-string to a corresponding evaluation function. For each of the $N$ training positions (in our case, $N=5,000$), the program performs a 1-ply search using the decoded evaluation function, and the best move returned from the search is compared to that of the grandmaster in the actual game. The move is deemed ``correct'' if it is the same as the move played by the grandmaster, and ``incorrect'' otherwise. The fitness of the organism is calculated as the square of the total number of correct moves.
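A minimal sketch of this fitness evaluation in Python is given below; \texttt{decode\_chromosome} and \texttt{search\_best\_move} are hypothetical helpers standing in for the chromosome decoding and the engine's 1-ply search described above, not parts of an existing library:
\begin{verbatim}
def fitness(chromosome, positions, grandmaster_moves):
    # Supervised fitness: squared number of grandmaster moves reproduced.
    # decode_chromosome and search_best_move are hypothetical helpers.
    evaluation = decode_chromosome(chromosome)   # bit-string -> 35 parameters
    correct = 0
    for position, gm_move in zip(positions, grandmaster_moves):
        move = search_best_move(position, evaluation, depth=1)  # 1-ply search
        if move == gm_move:
            correct += 1
    return correct ** 2
\end{verbatim}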
Note that, unlike the mentor-assisted approach for mimicking an existing chess program, which provides numeric values for each position, here we only have a single bit of information for each processed position (correct/incorrect). This underscores why relying on human games is much more difficult than using computers as mentors.
Other than the special fitness function described above, we use a standard GA implementation with Gray coded chromosomes, fitness-proportional selection, uniform crossover, and elitism (the best organism is copied to the next generation). The following parameters are used:
\\
\\
population size = 100\\
crossover rate = 0.75\\
mutation rate = 0.005\\
number of generations = 200
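The following Python listing sketches one generation of the scheme just described, under the parameters above; it illustrates the stated operators (fitness-proportional selection, uniform crossover, bitwise mutation, and elitism) and is not the implementation actually used (in particular, Gray decoding of the chromosomes is assumed to happen inside the fitness function):
\begin{verbatim}
import random

POP_SIZE, CHROM_BITS = 100, 224
CROSSOVER_RATE, MUTATION_RATE = 0.75, 0.005

def select(population, scores):
    # fitness-proportional (roulette-wheel) selection
    return random.choices(population, weights=scores, k=1)[0]

def crossover(a, b):
    # uniform crossover: each bit is taken from either parent
    if random.random() < CROSSOVER_RATE:
        return [random.choice(pair) for pair in zip(a, b)]
    return a[:]

def mutate(chromosome):
    # flip each bit independently with probability MUTATION_RATE
    return [bit ^ (random.random() < MUTATION_RATE) for bit in chromosome]

def next_generation(population, scores):
    # elitism: the best organism is copied to the next generation
    best = population[max(range(len(population)), key=scores.__getitem__)]
    new_population = [best[:]]
    while len(new_population) < POP_SIZE:
        child = crossover(select(population, scores),
                          select(population, scores))
        new_population.append(mutate(child))
    return new_population
\end{verbatim}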
\subsection{Coevolution of the Best Evolved\\ Organisms}
Rerunning the supervised evolution ten times, we obtain ten ``best organisms'' corresponding to the various runs. The evaluation functions of these evolved organisms do not have the same evolved parameter values, since each run produces different results (due to the random initialization). Although the ten programs are of similar playing strength, their playing style is somewhat different. At any rate, the above ten best organisms are used for the coevolution phase described below. Note that selecting, instead, the top ten evolved organisms from one of the supervised runs is not desirable, as it could result in ``inbreeding'', in the sense that the parameter values of these organisms tend to be fairly similar.
Consider, on the other hand, generating multiple evolved organisms using different training sets for each run. Specifically, for each run we might pick games of a specific grandmaster, in the hope of obtaining organisms that mimic the individual styles of the various grandmasters. Preliminary tests suggest, however, that this variant provides no additional insight or improvement. Apparently, the 1-ply searches enable mimicking only a ``generic'' grandmaster style, rather than the style of a specific player.
In the coevolution phase, the ten best organisms selected serve as the initial population, which is then coevolved over 50 generations. In each generation, each organism plays four games against each other organism (to obtain a more reliable result). At the end of each generation, rank-based selection is applied for selecting the organisms for breeding. Elitism is used here as well, which ensures that the best organism survives for the next generation. This is especially critical in light of the small population size. Other GA parameters remain unchanged, that is, uniform crossover with crossover rate of 0.75 and mutation rate of 0.005.
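The round-robin fitness used in this phase can be sketched as follows, where \texttt{play\_game} is a hypothetical helper that plays one 10-second game and returns the points scored by the two sides (1, 0.5, or 0):
\begin{verbatim}
import itertools

def coevolution_scores(organisms, games_per_pair=4):
    # round-robin scores, used afterwards for rank-based selection
    scores = [0.0] * len(organisms)
    for i, j in itertools.combinations(range(len(organisms)), 2):
        for _ in range(games_per_pair):
            points_i, points_j = play_game(organisms[i], organisms[j])
            scores[i] += points_i
            scores[j] += points_j
    return scores
\end{verbatim}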
In the following section we present our experimental results, both in terms of the learning efficiency and the performance gain of the best evolved individual.
\section{Experimental Results}
We now present the results of running the evolutionary process described in the previous section. We also provide the results of several experiments that measure the strength of the evolved program in comparison to \textsc{Crafty}, a former two-time World Computer Chess Champion that is commonly used as a baseline for testing chess programs.
\subsection{Results of Supervised Evolution}
Running the evolution for 200 generations, the average number of solved positions (i.e., the number of correct moves found) increases until stabilizing at around 1,500 (out of 5,000), which corresponds to 30\% of the positions. The best organism at generation 200 solves 1,621 positions, which corresponds to 32.4\% of the positions. Due to the use of elitism, the number of solved positions for the best organism is monotonically increasing, since the best organism is preserved. The entire 200-generation evolution took approximately 2 hours on our machine (see Appendix A).
At first glance, a solution rate of 32\% might not seem too high. However, considering that the evolved organism selects successfully the ``correct'' move in one out of three cases, by applying merely a 1-ply search (as opposed to the careful analysis of a position by the grandmaster), this is quite satisfactory.
With the completion of the learning phase, we used the additional 5,000 positions set aside for testing. We let our best evolved organism perform a 1-ply search on each of these positions. The number of correctly solved positions was 1538 (30.7\%). This indicates that the first 5,000 positions used for training cover most types of positions that can arise, as the success rate for the testing set is close to the success rate for the training set.
To measure the performance of the best evolved organism after the supervised evolution phase (we call this program \textsc{Evol*}), we conducted a series of matches against the chess program \textsc{Crafty} \cite{hyatt90}. \textsc{Crafty} has successfully participated in numerous World Computer Chess Championships (WCCC), and is a direct descendent of \textsc{Cray Blitz}, the WCCC winner of 1983 and 1986. It is frequently used in the literature as a standard reference.
Table~\ref{tab:results1} provides the results of 500 games between \textsc{Evol*} and \textsc{Crafty}. The results show that the evolved organism (\textsc{Evol*}) is on par with \textsc{Crafty}, clearly demonstrating that the supervised evolution has succeeded in evolving a grandmaster-level evaluation function by purely mimicking grandmaster moves.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
Match & Result & W\% & RD\\
\hline
\hline
\textsc{Evol*} - \textsc{Crafty} & 254.5 - 245.5 & 50.9\% & $+6$\\
\hline
\end{tabular}
\end{center}
\vspace*{-6pt}
\caption{Results of the games between \textsc{Evol*} and \textsc{Crafty} (W\% is the winning percentage, and RD is the Elo rating difference (see Appendix B)). Win = 1 point, draw = 0.5 point, and loss = 0 point.}
\label{tab:results1}
\end{table}
\vspace*{-6pt}
\subsection{Results of Coevolution}
Repeating the supervised evolutionary process, we obtained each time a ``best evolved organism'' with a different set of evolved parameter values. That is, each run produced a different grandmaster-level program. Even though the performance of these independently evolved best organisms is fairly similar, our goal was to improve upon these organisms and create an enhanced ``best of best'' organism.
We applied single-population coevolution to enhance the performance of the program. After running the supervised evolution ten times (which ran for about 20 hours), ten different best organisms were obtained. Using these ten organisms as the starting population, we applied GA for 50 generations, where each organism played each other organism four times in every round. Each game was limited to ten seconds (5 seconds per side). In practice, this coevolution phase ran for approximately 20 hours.
We measured the performance of the best evolved organism after coevolution (we call this program \textsc{Coevol*}) by conducting a series of matches against \textsc{Crafty} and also against \textsc{Evol*}. Table~\ref{tab:results2} provides the results of 500 games between \textsc{Coevol*} and \textsc{Evol*}, and between \textsc{Coevol*} and \textsc{Crafty}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
Match & Result & W\% & RD\\
\hline
\hline
\textsc{Coevol*} - \textsc{Crafty} & 304.5 - 195.5 & 60.9\% & $+77$\\
\hline
\textsc{Coevol*} - \textsc{Evol*} & 293.0 - 207.0 & 58.6\% & $+60$\\
\hline
\end{tabular}
\end{center}
\vspace*{-6pt}
\caption{Results of the games of \textsc{Coevol*} against \textsc{Crafty} and \textsc{Evol*}.}
\label{tab:results2}
\end{table}
The results demonstrate that the coevolution phase further improved the performance of the program, resulting in the superiority of \textsc{Coevol*} to both \textsc{Crafty} and \textsc{Evol*}.
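The rating differences (RD) reported in Tables \ref{tab:results1} and \ref{tab:results2} are consistent with the standard Elo formula $\mathrm{RD}=400\log_{10}\left(\frac{W}{1-W}\right)$, where $W$ is the winning fraction; we assume this is the formula detailed in Appendix B:
\begin{verbatim}
from math import log10

def rating_difference(w):
    # Elo rating difference implied by a winning fraction 0 < w < 1
    return 400 * log10(w / (1 - w))

print(round(rating_difference(0.509)))  # -> 6   (Evol* vs. Crafty)
print(round(rating_difference(0.609)))  # -> 77  (Coevol* vs. Crafty)
print(round(rating_difference(0.586)))  # -> 60  (Coevol* vs. Evol*)
\end{verbatim}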
\section{Concluding Remarks and \\Future Research}
In this paper we presented a novel approach for evolving grandmaster-level evaluation functions by combining supervised and unsupervised evolution. In contrast to the previous successful attempt which focused on mimicking the evaluation function of a chess program that served as a mentor, the approach presented in this paper focuses on evolving the parameters of interest solely by observing games of human grandmasters, where the only available information to guide the evolution consists of the moves made in these games.
Learning from games of human grandmasters in the supervised phase of the evolution, we obtained several grandmaster-level evaluation functions. Specifically, running the procedure ten times, we obtained ten such evolved evaluation functions, which served as the initial population for the second coevolution phase.
While previous attempts at using coevolution have failed due to the unacceptably large amount of time needed to evolve each generation, the use of coevolution succeeded in our case because the initial population was not random, but relatively well tuned due to the first phase of supervised evolution.
According to our experiments, organisms evolved from randomly initialized chromosomes to sets of highly tuned parameters. The coevolution phase further improved the performance of the program, resulting in an evolved organism which resoundingly defeats a grandmaster-level program. Note that this performance was achieved despite the fact that the evaluation function of the evolved program consists of a considerably smaller number of parameters than that of \textsc{Crafty}, of which the evaluation function consists of over 100 parameters.
In summary, we demonstrated how our approach can be used for automatic tuning of an evaluation function from scratch. Furthermore, the approach can also be applied for enhancing existing highly tuned evaluation functions. Starting from several sets of tuned parameter values of the evaluation function, the coevolution phase can be applied to refine these values, so as to further improve the evaluation function.
Running the supervised evolution phase ten times, together with coevolution, took a total of about 40 hours. Both the supervised and unsupervised phases can be easily parallelized for obtaining linear scalability. During the supervised evolution each organism can be evaluated independently on a different processor, without having to share any information with the other organisms. Also, during coevolution, multiple games can be run in parallel. In this work we ran the experiments on a single processor machine (see Appendix A). Running these tests on an 8-core processor (which is readily available today) would reduce the overall running time from 40 hours to as little as 5 hours.
Finally, the results presented in this paper point to the vast potential in applying evolutionary methods for learning from human experts. We believe that the approach presented in this paper for parameter tuning could be applied to a wide array of problems for essentially ``reverse engineering'' the knowledge of a human expert.
\section*{Introduction}
Engineering the desired functionality of a material through nanostructuring has proven a powerful approach that is particularly well suited for two-dimensional (2D) materials. Often, the goal of such engineering involves shifting spectral weight to the Fermi level or increasing the coupling of the electrons to other degrees of freedom, such as light for improved optoelectronic properties~\cite{yan:2020} or phonons for superconductivity~\cite{allan:2017}. Nanostructuring can be achieved in a variety of ways: standard cleanroom techniques, such as electron beam lithography~\cite{grigorescu:2009}, photolithography, or focused ion beam lithography~\cite{genet:2007}, allow to realize patterns of a few nanometers in size.
Moir\'e engineering, in other words using the potential landscape from a Moir\'e lattice that emerges due to a finite twist angle between two 2D lattices, has attracted a lot of attention in recent years.
This attention stems in large part from the discovery of extremely flat bands for certain, very small twist angles, referred to as `magic' angles, in twisted bilayer graphene (TBG)~\cite{mcdonald:2011, cao:2018a, herrero:2018}. Ideally, these bands concentrate spectral weight around the Fermi level, are separated by a gap from other bands in the spectrum, and potentially possess non-trivial topology with intriguing implications for the many-body ground states~\cite{zou:2020,bernevig:2020}. Moreover, as a result of the flatness of these bands the electron-electron coupling becomes the dominant interaction. Consequently, this system shows correlated insulator states at integer fillings~\cite{cao:2018a} and superconductivity in between~\cite{herrero:2018,efetov:2019}. Finally, even quantum anomalous Hall phases, driven by the strong interactions, have been observed~\cite{Wang:2021,pierce:2021,serlin:2020}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{fig1_setup.pdf}
\caption{\label{fig:unit_cell} {\bf System setup and electronic structure.} ({\bf a}) The unit cell consisting of a graphene monolayer with adatoms placed in the center of the unit cell as indicated by the red circle. The lattice vectors ($\vec{v_1}$,$\vec{v_2}$) are related to the lattice vectors of pure graphene ($\vec{a_1}$, $\vec{a_2}$) by Eq.~(\ref{eq:relation_lat_vecs}) with $(n,m) = (-1, 2)$. ({\bf b}) The Brillouin zone (BZ) of monolayer graphene shown in red and the reduced BZ of our setup depicted in black. The $K$ and $K'$ points of graphene map to the $\Gamma$ point in the reduced BZ of the superstructure. ({\bf c}) For comparison, the BZ of TBG (in black) tilted with respect to the two BZs of graphene twisted by a small angle (in blue and red). ({\bf d}) The DFT spectrum of graphene with tungsten adatoms. Irreducible representations of the bands at high-symmetry points are indicated in the figure. Right side shows the density of states projected onto the adatom and carbon $p_z$ orbitals, respectively.}
\end{figure*}
More specifically, for a discrete set of angles, the Moir\'e pattern creates a commensurate hexagonal supercell and the electronic structure can again be described in terms of Bloch states. The superstructure significantly affects the tunneling of electrons between the two graphene layers, and the hybridization of electronic states near the two graphene Dirac cones results in a decrease of the Fermi velocity at the charge neutrality point and the formation of nearly flat bands in the spectrum.
The twist angle in bilayer graphene directly controls the size of the Moir\'e supercell and thus acts as the tuning parameter for the band flattening.
Motivated by magic-angle TBG, many more systems were proposed, where a small twist angle between stacked layers, such as other multilayer graphene heterostructures or transition metal dichalcogenides, is used to create novel correlated phases. Furthermore, Moir\'e engineering on a single Dirac cone of a topological-insulator surface state was recently discussed~\cite{cano:2020, wang:2020}.
Besides the restriction on the types of periodic potentials that can be realized with a Moir\'e pattern, producing samples with a predefined twist angle and sufficient homogeneity is a further intricate experimental challenge~\cite{uri:2020, benschop:2020tmp}. There is thus an ongoing search for other systems with electronic properties similar to the ones of magic-angle TBG~\cite{lee:2020}, but with more control over the design. Such systems provide novel platforms to study the physics of correlated electrons in topologically non-trivial bands.
In this work, we propose an alternative approach for creating topologically non-trivial flat bands in a single graphene sheet by the application of a periodic potential. In particular, using first-principle calculations, we investigate a single layer of graphene decorated with a periodic, $C_6$-symmetric distribution of adatoms, see Fig.~\ref{fig:unit_cell}a. While such an approach allows for high control over the applied potential through an adatom superlattice via atom manipulation using scanning tunnelling microscopy~\cite{brar:2011,wang:2013,wyrick:2016}, our principle is also amenable to artificial graphene~\cite{gomes:2012,Drost:2017} or engineered lattices~\cite{Drost:2017,Yan:2019,khajetoorians:2019}, and via nanofabrication to graphene~\cite{dyck:2017}.
Within our first-principles calculations, we indeed find flat bands separated by gaps from other bands in the spectrum of the system. Employing the recently introduced framework of topological quantum chemistry~\cite{TQC:2017}, we further reveal the fragile topological nature of these bands. Bands with this type of topology stand in between strong topological and trivial phases, based on the topological robustness against addition of trivial degrees of freedom. Specifically, the main characteristic of bands with fragile topology is that they can be trivialised by addition of bands permitting an atomic limit below the Fermi level \cite{TQC:2017,watanabe:2017,po:2018}.
Systems with fragile topology protected by $n$-fold rotation symmetry often feature a filling anomaly: In open boundary conditions, the system possesses $n$ degenerate in-gap states, which are only partially occupied at charge neutrality~\cite{huges:2019}. This implies a degeneracy of the many-body ground state in the thermodynamic limit protected by rotation symmetry. We illustrate this bulk-boundary correspondence employing a tight-binding calculation of a $C_6$-symmetric flake, where the filling anomaly manifests itself in an excess charge accumulation in the corners of the flake, suitable for experimental discovery.
\section*{Model system and flat bands}
Conceptually, the electronic structure of both TBG and our approach can be understood starting from that of a single layer of graphene.
The band structure is characterized by two Dirac cones around the $K$ and $K'$ points in the Brillouin zone (BZ). For TBG with discrete commensurate twist angles, the resulting periodic superstructure allows for a momentum-space description in a reduced BZ. As shown in Fig.~\ref{fig:unit_cell}c, this reduced BZ can be geometrically constructed directly in momentum space, by twisting the two original BZs resulting in a reduced BZ with new $K$ and $K'$ points stemming from the $K$ points from the two individual layers, $K_1$ and $K_2$. For small twist angles, the tunneling between the two graphene layers that hybridizes the bands around $K_1$ and $K_2$ becomes comparable to the band width of the respective bands in the reduced BZ, resulting in flat bands over the whole reduced BZs. Note that there are two time-reversal related BZs, one from the $K$ and one from the $K'$ points of the individual graphene layers. For the small angles required for flat bands, the resulting unit cell in real space contains thousands of carbon atoms.
Inspired by the construction in TBG, we build in the following flat bands starting again from a single graphene sheet, but enlarging the unit cell using nanostructuring. We consider a superlattice potential arising from adatoms placed in the hollow sites (H) of the graphene lattice, meaning in the center of the C hexagon, with a periodicity described by the superlattice vectors $\vec{v}_1$ and $\vec{v}_2$, see Fig.~\ref{fig:unit_cell}a. In consequence and contrast to the TBG case, we therefore only have a single original $K$ and $K'$ point. We choose the superlattice vectors in such a way, that both $K$ and $K'$ are mapped to the $\Gamma$ point of the reduced BZ. Crucially, this allows for a strong hybridization of the original with the adatom bands.
The lattice vectors leading to such a configuration are given by
\begin{equation}\label{eq:relation_lat_vecs}
\vec{v}_1 = n \vec{a}_1 + (3m+n) \vec{a}_2\, , \\
\end{equation}
where $n,m \in \mathbb{Z}$, $\vec{a}_1$ and $\vec{a}_2$ are the lattice vectors for graphene, and $\vec{v}_2$ is related to $\vec{v}_1$ by a 60-degrees rotation.
For concreteness, we use in the following $(n,m) = (-1, 2)$.
The BZ resulting from this construction is shown in Fig.~\ref{fig:unit_cell}b.
As a guiding principle, we choose non-magnetic transition-metal adatoms, which, as we will discuss below, generically lead to flat bands that are topologically non-trivial due to their $d$ orbitals. We further require the candidate adatoms to be sufficiently stable at the H site~\cite{nakada:2011}. As a figure of merit, we consider the ratio of spectral gap to the closest C-based bands and the band width. Finally focusing on situations, where the flat bands are separated from other bands related to the adatoms, these guiding principles result in a set of most promising adatoms: W, Ta, and Ru. Figure~\ref{fig:unit_cell}d shows the band structure that results from the above construction with tungsten adatoms obtained using first-principles calculations (see Methods for details). For these calculations, the graphene lattice is oriented in the $x$-$y$ plane and the tungsten atoms are relaxed to their equilibrium position in $z$ direction over the H site.
For completeness, Tab.~\ref{tab:table-elements} summarizes the band width and gaps to the nearest C-based bands of tungsten and all other considered adatoms (see Methods for their corresponding DFT spectra).
\begin{table}[b]
\begin{tabular}{c|cccccccccc}
\hline
$\text{Element}$&$\text{W}$&$\text{Ir}$&$\text{Cr}$&$\text{Mo}$&$\text{Rh}$&$\text{Ta}$&$\text{Ru}$&$\text{Re}$&$\text{Os}$&$\text{Nb}$\\
\hline
$\text{Width [eV]}$&$0.26$&$0.56$&$0.21$&$0.30$&0.39&0.27&0.28&0.24&0.30&0.30\\
$\text{Gap [eV]}$&$0.34$&$0.26$&$0.26$&$0.36$&0.30&0.31&0.38&0.39&0.37&0.33\\
\hline
\end{tabular}
\caption{Summary of adatoms placed in H sites and that form flat bands, featuring fragile topology with band width and spectral gap to the nearest C bands indicated. }
\label{tab:table-elements}
\end{table}
In the following we focus on the flat bands highlighted in Fig.~\ref{fig:unit_cell}d, assuming that the chemical potential through the graphene channel can be directly controlled by back gating~\cite{Novoselov:2004}.
The orbital projected density of states on the right side of Fig.~\ref{fig:unit_cell}d shows that these flat bands are originating from the hybridization of the C $p_z$ orbitals of graphene and the $d_{xz,yz}$ orbitals of tungsten. As we will discuss in the following, this is crucial for the topological properties and needs to be taken into account when constructing a minimal tight-binding model.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig2_tightbinding.pdf}
\caption{\label{fig:pbc_obc_tbmodel}{\bf Tight-binding calculations for open geometries} ({\bf a}) The geometry of $C_6$-symmetric flake with the number of the unit cells $N_{uc} = 61$. ({\bf b}) Band structure of the tight-binding model (in blue) with flat bands highlighted in yellow and $C_2$-symmetry eigenvalues at $\Gamma$ and M points indicating the topological nature of the bands. The spectrum for the flake (in red) shows in-gap states around the energy indicated by the green line. ({\bf c}) The unit cell of the ribbon geometry. ({\bf d}) The spectrum for ribbon geometry (in blue) shows a small gap. In the flake geometry, there are six additional states that lie in this gap (in red).}
\end{figure}
\section*{Topological Properties}
To examine the topological properties of the flat bands constructed by the superlattice, we employ a symmetry analysis within the context of topological quantum chemistry~\cite{TQC:2017}. This analysis requires the transformation properties of the wave functions that make up the flat bands at the $\Gamma$ and the $M$ points. While the decorated graphene lattice retains the $C_6$ symmetry of graphene, all mirror symmetries are broken by the adatom arrangement, such that, including translations, our system reduces to $P_6$ space-group symmetry. For the situation shown in Fig.~\ref{fig:unit_cell}d, the flat bands highlighted with yellow transform as $\Gamma_3$ and $\Gamma_5$ at the $\Gamma$ point and $M_2$ at the $M$ point.
In particular, the $C_2$ eigenvalues associated with these irreducible representations at the $\Gamma$ and $M$ points are $\alpha_{C_2}(\Gamma) = +1$ and $\alpha_{C_2}(M) = -1$. These eigenvalues can be understood by inspecting the orbital content of the bands in Fig.~\ref{fig:unit_cell}d: At the $\Gamma$ point, the bands stem from $p_z$ orbitals, while at the $M$ point, the bands originate from $d_{xz}$ and $d_{yz}$ orbitals.
Such a combination of irreducible representations cannot arise from an atomic limit of exponentially-localized Wannier functions~\cite{cryst1:2011,cryst2:2006,cryst3:2006}, implying that the bands are indeed topological. Following the terminology introduced in this context, these bands do not form an elementary band representation (EBR). The $p_z$-$d_{xz}$/$d_{yz}$ hybridization is thus the crucial ingredient for the formation of bands with non-trivial topology.
While the flat bands cannot be adiabatically connected to an atomic limit, they can be written as a difference between two EBRs (with integer coefficients), which indicates the fragile nature of their topology \cite{bradlyn:2019}. In particular, the bands can be expressed as the difference $\text{FT} = \text{AL}_1 - \text{AL}_2$, where $\text{AL}_1 = [\Gamma_1 \oplus \Gamma_3\Gamma_5,M_1\oplus2M_2]$ and $\text{AL}_2 = [\Gamma_1 \oplus M_1]$ are two sets of band representations forming an EBR.
This feature distinguishes the current case from a strong topological phase, which cannot be trivialised by adding trivial degrees of freedom.
Unlike strong topological phases, which are characterized by topological edge states due to the bulk-boundary correspondence, bands with fragile topology protected by $C_n$-symmetry can exhibit a filling anomaly. This topological feature describes the situation, when a mismatch exists between the number of electrons required to simultaneously satisfy charge neutrality, a unique ground state in open boundary conditions, and the crystalline symmetry. In the spectrum, $n$ degenerate states associated with the filling anomaly appear in the gap with $n/2$ states occupied for charge neutrality. In our case, adding one more electron will lead to a quantized excess charge of $e/6$ in each corner of the flake \cite{huges:2019}.
\begin{figure}[b]
\centering
\includegraphics[width=1.\linewidth]{fig3_cornerstates.pdf}
\caption{\label{fig:corner_states} {\bf Corner-localized in-gap states} ({\bf a}) The spatial distribution of the absolute values squared of the eigenvectors corresponding to the in-gap states for $C_6$-symmetric graphene flake shown in Fig.~\ref{fig:pbc_obc_tbmodel}a with $N_{\text{uc}}=331$. ({\bf b}) Line profile of the local density of states at the bound state, integrated along the blue and red lines given in {\bf a}. Dashed red line indicates exponential fit. ({\bf c}) Finite-size scaling of the in-gap states showing their degeneracy in the thermodynamic limit. An arrow indicates the states with spatial distribution given in {\bf a}.}
\end{figure}
To investigate the appearance of this type of bulk-boundary correspondence associated with the fragile topology of the flat bands, we introduce a tight-binding model starting from the C $p_z$ and the transition-metal $d$ orbitals (see Methods). Such a model allows for the simulation of any open geometry, such as the ones shown in Fig.~\ref{fig:pbc_obc_tbmodel}. Before investigating this finite system further, we note that for appropriate parameters, the tight-binding model indeed yields flat bands with the correct irreducible representations as discussed above and shown in Fig.~\ref{fig:pbc_obc_tbmodel}.
The example of the $C_6$-symmetric flake in Figure~\ref{fig:pbc_obc_tbmodel}a leads to the spectrum in panel b with the spectrum of the translationally invariant system added for comparison. Indeed, we find in-gap states, though, more states than the six anticipated from fragile topology. We can understand the origin of these additional states considering a ribbon geometry as shown in Fig.~\ref{fig:pbc_obc_tbmodel}c. As can be seen in Fig.~\ref{fig:pbc_obc_tbmodel}d, most of the in-gap states are associated with edge states which are, however, not completely gapless. These states are connected to an only weakly-broken mirror symmetry perpendicular to the open direction in the ribbon geometry. In the hybridization gap of these edge states, we find six in-gap states for the flake geometry, which we attribute to the fragile topology of the system.
Figure~\ref{fig:corner_states}a shows the contribution of these six in-gap states to the local density of states of the flake geometry. Their dominant spectral weight is localized at the corners of the flake, as further emphasized in Fig.~\ref{fig:corner_states}b, which shows the unit-cell-averaged weight of the wave functions of the states in the gap along the edge. Relevant to their experimental discovery is an exponentially decaying LDOS towards the center of the structure with a characteristic length of $0.292/N_{\rm uc}$, which would allow for a distinction with respect to other edge-state observations. Finally, while the six states associated with the filling anomaly are not degenerate in a finite geometry, a finite size scaling, Fig.~\ref{fig:corner_states}c, shows that they indeed become degenerate in the thermodynamic limit. As such, panel c in Fig.~\ref{fig:corner_states} serves as a useful guide for how the degeneracy of the corner-localized gap states evolves in the gradual buildup of such a structure.
\section*{Discussion}
The system we propose here is conceptually simple, yet features intriguing topological properties. We demonstrated how the fragile topology manifests itself through a filling anomaly, which can be mapped with a local scanning probe.
In addition to the finite geometries that are required to probe the corner-localized states, our approach allows for more design freedom. In particular, while we focused here on a superstructure with $P_6$ symmetry, any subgroup of the graphene space group $P_6/mmm$ can be realized by choosing the appropriate superlattice vectors. Furthermore, defects in the lattice, which are a distinct way of probing topological bands, can be readily implemented by the deliberate addition or removal of atoms.
Our ideas of engineering topologically non-trivial flat bands through nanostructuring go beyond the periodic decoration of graphene with adatoms. A further promising route towards their realization can be based on artificial graphene, for example using scanning-tunneling-microscopic methods to arrange CO molecules on a Cu(111) surface~\cite{gomes:2012}.
Finally, in view of the required nanometer periodicity, we expect that graphene sheets could even be engineered by lithography techniques.
\section*{Acknowledgements}
A.S., S.S.T. and T.N. were supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (ERC-StG-Neupert-757867-PARATOP). A.S. was also supported by Forschungskredit of the University of Zurich, grant No. FK-20-101. S.S.T. and T.N. were additionally supported by NCRR Marvel. S.S.T. also acknowledges support from the Swiss National Science Foundation (grant number: PP00P2\_176877). F.D.N. thanks SNSF (PP00P2-176866) and ONR (N00014-20-1-2352) for generous support.
\section*{Introduction}
Engineering the desired functionality of a material through nanostructuring has proven a powerful approach that is particularly well suited for two-dimensional (2D) materials. Often, the goal of such engineering involves shifting spectral weight to the Fermi level or increasing the coupling of the electrons to other degrees of freedom, such as light for improved optoelectronic properties~\cite{yan:2020} or phonons for superconductivity~\cite{allan:2017}. Nanostructuring can be achieved in a variety of ways: standard cleanroom techniques, such as electron beam lithography~\cite{grigorescu:2009}, photolithography, or focused ion beam lithography~\cite{genet:2007}, allow to realize patterns of a few nanometers in size.
Moir\'e engineering, in other words using the potential landscape from a Moir\'e lattice that emerges due to a finite twist angle between two 2D lattices, has attracted a lot of attention in recent years.
This attention stems in large part from the discovery of extremely flat bands for certain, very small twist angles, referred to as `magic' angles, in twisted bilayer graphene (TBG)~\cite{mcdonald:2011, cao:2018a, herrero:2018}. Ideally, these bands concentrate spectral weight around the Fermi level, are separated by a gap from other bands in the spectrum, and potentially possess non-trivial topology with intriguing implications for the many-body ground states~\cite{zou:2020,bernevig:2020}. Moreover, as a result of the flatness of these bands the electron-electron coupling becomes the dominant interaction. Consequently, this system shows correlated insulator states at integer fillings~\cite{cao:2018a} and superconductivity in between~\cite{herrero:2018,efetov:2019}. Finally, even quantum anomalous Hall phases, driven by the strong interactions, have been observed~\cite{Wang:2021,pierce:2021,serlin:2020}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{fig1_setup.pdf}
\caption{\label{fig:unit_cell} {\bf System setup and electronic structure.} ({\bf a}) The unit cell consisting of a graphene monolayer with adatoms placed in the center of the unit cell as indicated by the red circle. The lattice vectors ($\vec{v_1}$,$\vec{v_2}$) are related to the lattice vectors of pure graphene ($\vec{a_1}$, $\vec{a_2}$) by Eq.~(\ref{eq:relation_lat_vecs}) with $(n,m) = (-1, 2)$. ({\bf b}) The Brillouin zone (BZ) of monolayer graphene shown in red and the reduced BZ of our setup depicted in black. The $K$ and $K'$ points of graphene map to the $\Gamma$ point in the reduced BZ of the superstructure. ({\bf c}) For comparison, the BZ of TBG (in black) tilted with respect to the two BZs of graphene twisted by a small angle (in blue and red). ({\bf d}) The DFT spectrum of graphene with tungsten adatoms. Irreducible representations of the bands at high-symmetry points are indicated in the figure. Right side shows the density of states projected onto the adatom and carbon $p_z$ orbitals, respectively.}
\end{figure*}
More specifically, for a discrete set of angles, the Moir\'e pattern creates a commensurate hexagonal supercell and the electronic structure can be described again in terms of Bloch states. The superstructure significantly affects the tunneling of electrons between the two graphene layers, and the hybridization of electronic states near the two graphene Dirac cones results in a decrease of the Fermi velocity at the charge neutrality point and the formation of nearly flat bands in the spectrum.
The twist angle in bilayer graphene directly controls the size of the Moir\'e supercell and thus acts as the tuning parameter for the band flattening.
Motivated by magic-angle TBG, many more systems, such as other multilayer graphene heterostructures or transition-metal dichalcogenides, were proposed in which a small twist angle between stacked layers is used to create novel correlated phases. Furthermore, Moir\'e engineering on a single Dirac cone of a topological-insulator surface state was recently discussed~\cite{cano:2020, wang:2020}.
Besides the restriction on the types of periodic potentials that can be realized with a Moir\'e pattern, producing samples with a predefined twist angle and sufficient homogeneity is a further intricate experimental challenge~\cite{uri:2020, benschop:2020tmp}. There is thus an ongoing search for other systems with electronic properties similar to those of magic-angle TBG~\cite{lee:2020}, but with more control over the design. Such systems provide novel platforms to study the physics of correlated electrons in topologically non-trivial bands.
In this work, we propose an alternative approach for creating topologically non-trivial flat bands in a single graphene sheet by the application of a periodic potential. In particular, using first-principles calculations, we investigate a single layer of graphene decorated with a periodic, $C_6$-symmetric distribution of adatoms, see Fig.~\ref{fig:unit_cell}a. While such an approach allows for high control over the applied potential through an adatom superlattice via atom manipulation using scanning tunnelling microscopy~\cite{brar:2011,wang:2013,wyrick:2016}, our principle is also amenable to artificial graphene~\cite{gomes:2012,Drost:2017} or engineered lattices~\cite{Drost:2017,Yan:2019,khajetoorians:2019}, and, via nanofabrication~\cite{dyck:2017}, to graphene itself.
Within our first-principles calculations, we indeed find flat bands separated by gaps from other bands in the spectrum of the system. Employing the recently introduced framework of topological quantum chemistry~\cite{TQC:2017}, we further reveal the fragile topological nature of these bands. Bands with this type of topology stand in between strong topological and trivial phases, based on the topological robustness against addition of trivial degrees of freedom. Specifically, the main characteristic of bands with fragile topology is that they can be trivialised by addition of bands permitting an atomic limit below the Fermi level \cite{TQC:2017,watanabe:2017,po:2018}.
Systems with fragile topology protected by $n$-fold rotation symmetry often feature a filling anomaly: In open boundary conditions, the system possesses $n$ degenerate in-gap states, which are only partially occupied at charge neutrality~\cite{huges:2019}. This implies a degeneracy of the many-body ground state in the thermodynamic limit protected by rotation symmetry. We illustrate this bulk-boundary correspondence employing a tight-binding calculation of a $C_6$-symmetric flake, where the filling anomaly manifests itself in an excess charge accumulation in the corners of the flake, suitable for experimental discovery.
\section*{Model system and flat bands}
Conceptually, the electronic structure of both TBG and our approach can be understood starting from that of a single layer of graphene.
The band structure is characterized by two Dirac cones around the $K$ and $K'$ points in the Brillouin zone (BZ). For TBG with discrete commensurate twist angles, the resulting periodic superstructure allows for a momentum-space description in a reduced BZ. As shown in Fig.~\ref{fig:unit_cell}c, this reduced BZ can be geometrically constructed directly in momentum space, by twisting the two original BZs resulting in a reduced BZ with new $K$ and $K'$ points stemming from the $K$ points from the two individual layers, $K_1$ and $K_2$. For small twist angles, the tunneling between the two graphene layers that hybridizes the bands around $K_1$ and $K_2$ becomes comparable to the band width of the respective bands in the reduced BZ, resulting in flat bands over the whole reduced BZs. Note that there are two time-reversal related BZs, one from the $K$ and one from the $K'$ points of the individual graphene layers. For the small angles required for flat bands, the resulting unit cell in real space contains thousands of carbon atoms.
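As a rough numerical illustration of this last point (a sketch of our own, not reference data; the $1.1^\circ$ value is chosen only as a representative magic-angle scale), one can estimate the Moir\'e period and the number of atoms per Moir\'e cell:
\begin{verbatim}
import numpy as np

theta = np.deg2rad(1.1)                   # illustrative small twist angle
L_over_a = 1 / (2 * np.sin(theta / 2))    # Moire period in units of graphene's a
n_atoms = 4 * L_over_a**2                 # 4 C atoms per graphene cell area (bilayer)
print(f"Moire period ~ {L_over_a:.0f} a, atoms per Moire cell ~ {n_atoms:.0f}")
# prints a period of roughly 52 a and on the order of 1e4 atoms per cell
\end{verbatim}
A twist of about one degree thus already yields a period of roughly fifty graphene lattice constants and of the order of $10^4$ atoms per Moir\'e cell.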
Inspired by the construction in TBG, we build in the following flat bands starting again from a single graphene sheet, but enlarging the unit cell using nanostructuring. We consider a superlattice potential arising from adatoms placed in the hollow sites (H) of the graphene lattice, meaning in the center of the C hexagon, with a periodicity described by the superlattice vectors $\vec{v}_1$ and $\vec{v}_2$, see Fig.~\ref{fig:unit_cell}a. In consequence, and in contrast to the TBG case, we only have a single original $K$ and $K'$ point. We choose the superlattice vectors in such a way that both $K$ and $K'$ are mapped to the $\Gamma$ point of the reduced BZ. Crucially, this allows for a strong hybridization of the original with the adatom bands.
The lattice vectors leading to such a configuration are given by
\begin{equation}\label{eq:relation_lat_vecs}
\vec{v}_1 = n \vec{a}_1 + (3m+n) \vec{a}_2\, ,
\end{equation}
where $n,m \in \mathbb{Z}$, $\vec{a}_1$ and $\vec{a}_2$ are the lattice vectors of graphene, and $\vec{v}_2$ is related to $\vec{v}_1$ by a 60-degree rotation.
For concreteness, we use in the following $(n,m) = (-1, 2)$.
The BZ resulting from this construction is shown in Fig.~\ref{fig:unit_cell}b.
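As a quick numerical sanity check of this construction (a sketch assuming a standard convention for the graphene lattice vectors, with the lattice constant set to one), one can verify that the Dirac momentum $K$ is a reciprocal lattice vector of the superlattice, so that $K$ and $K'=-K$ indeed fold onto $\Gamma$:
\begin{verbatim}
import numpy as np

a1 = np.array([1.0, 0.0])                 # graphene lattice vectors, a = 1
a2 = np.array([0.5, np.sqrt(3) / 2])
K = np.array([4 * np.pi / 3, 0.0])        # Dirac point in this convention

n, m = -1, 2                              # the values used in the text
v1 = n * a1 + (3 * m + n) * a2
c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
v2 = np.array([[c, -s], [s, c]]) @ v1     # 60-degree rotation of v1

# K folds onto Gamma iff K . v_i is an integer multiple of 2 pi
print(np.dot(K, v1) / (2 * np.pi), np.dot(K, v2) / (2 * np.pi))
# -> 1.0  -2.0   (up to floating-point rounding)
\end{verbatim}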
As a guiding principle, we choose non-magnetic transition-metal adatoms, which, as we will discuss below, generically lead to flat bands that are topologically non-trivial due to their $d$ orbitals. We further require the candidate adatoms to be sufficiently stable at the H site~\cite{nakada:2011}. As a figure of merit, we consider the ratio of the spectral gap to the closest C-based bands and the band width. Finally, focusing on situations where the flat bands are separated from other bands related to the adatoms, these guiding principles single out the most promising adatoms: W, Ta, and Ru. Figure~\ref{fig:unit_cell}d shows the band structure that results from the above construction with tungsten adatoms obtained using first-principles calculations (see Methods for details). For these calculations, the graphene lattice is oriented in the $x$-$y$ plane and the tungsten atoms are relaxed to their equilibrium position in the $z$ direction over the H site.
For completeness, Tab.~\ref{tab:table-elements} summarizes the band width and gaps to the nearest C-based bands of tungsten and all other considered adatoms (see Methods for their corresponding DFT spectra).
\begin{table}[b]
\begin{tabular}{c|cccccccccc}
\hline
$\text{Element}$&$\text{W}$&$\text{Ir}$&$\text{Cr}$&$\text{Mo}$&$\text{Rh}$&$\text{Ta}$&$\text{Ru}$&$\text{Re}$&$\text{Os}$&$\text{Nb}$\\
\hline
$\text{Width [eV]}$&$0.26$&$0.56$&$0.21$&$0.30$&$0.39$&$0.27$&$0.28$&$0.24$&$0.30$&$0.30$\\
$\text{Gap [eV]}$&$0.34$&$0.26$&$0.26$&$0.36$&$0.30$&$0.31$&$0.38$&$0.39$&$0.37$&$0.33$\\
\hline
\end{tabular}
\caption{Summary of the adatoms that, when placed in H sites, form flat bands featuring fragile topology; the band width and the spectral gap to the nearest C bands are indicated.}
\label{tab:table-elements}
\end{table}
In the following we focus on the flat bands highlighted in Fig.~\ref{fig:unit_cell}d, assuming that the chemical potential through the graphene channel can be directly controlled by back gating~\cite{Novoselov:2004}.
The orbital-projected density of states on the right side of Fig.~\ref{fig:unit_cell}d shows that these flat bands originate from the hybridization of the C $p_z$ orbitals of graphene and the $d_{xz,yz}$ orbitals of tungsten. As we will discuss in the following, this is crucial for the topological properties and needs to be taken into account when constructing a minimal tight-binding model.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig2_tightbinding.pdf}
\caption{\label{fig:pbc_obc_tbmodel}{\bf Tight-binding calculations for open geometries} ({\bf a}) The geometry of a $C_6$-symmetric flake with the number of unit cells $N_{\text{uc}} = 61$. ({\bf b}) Band structure of the tight-binding model (in blue) with flat bands highlighted in yellow and $C_2$-symmetry eigenvalues at the $\Gamma$ and M points indicating the topological nature of the bands. The spectrum for the flake (in red) shows in-gap states around the energy indicated by the green line. ({\bf c}) The unit cell of the ribbon geometry. ({\bf d}) The spectrum for the ribbon geometry (in blue) shows a small gap. In the flake geometry, there are six additional states that lie in this gap (in red).}
\end{figure}
\section*{Topological Properties}
To examine the topological properties of the flat bands constructed by the superlattice, we employ a symmetry analysis within the context of topological quantum chemistry~\cite{TQC:2017}. This analysis requires the transformation properties of the wave functions that make up the flat bands at the $\Gamma$ and the $M$ points. While the decorated graphene lattice retains the $C_6$ symmetry of graphene, all mirror symmetries are broken by the adatom arrangement, such that, including translations, our system reduces to $P_6$ space-group symmetry. For the situation shown in Fig.~\ref{fig:unit_cell}d, the flat bands highlighted with yellow transform as $\Gamma_3$ and $\Gamma_5$ at the $\Gamma$ point and $M_2$ at the $M$ point.
In particular, the $C_2$ eigenvalues associated with these irreducible representations at the $\Gamma$ and $M$ points are $\alpha_{C_2}(\Gamma) = +1$ and $\alpha_{C_2}(M) = -1$. These eigenvalues can be understood by inspecting the orbital content of the bands in Fig.~\ref{fig:unit_cell}d: At the $\Gamma$ point, the bands stem from $p_z$ orbitals, while at the $M$ point, the bands originate from $d_{xz}$ and $d_{yz}$ orbitals.
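These parities can be checked symbolically by representing the orbitals through the polynomials $z$, $xz$ and $yz$ and applying the $C_2$ rotation $(x,y,z)\mapsto(-x,-y,z)$; a minimal sketch:
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols("x y z")
orbitals = {"p_z": z, "d_xz": x * z, "d_yz": y * z}
for name, f in orbitals.items():
    # C_2 about the out-of-plane axis sends (x, y, z) to (-x, -y, z)
    rotated = f.subs({x: -x, y: -y}, simultaneous=True)
    print(name, sp.simplify(rotated / f))   # p_z: 1, d_xz: -1, d_yz: -1
\end{verbatim}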
Such a combination of irreducible representations cannot arise from an atomic limit of exponentially-localized Wannier functions~\cite{cryst1:2011,cryst2:2006,cryst3:2006}, implying that the bands are indeed topological. Following the terminology introduced in this context, these bands do not form an elementary band representation (EBR). The $p_z$-$d_{xz}$/$d_{yz}$ hybridization is thus the crucial ingredient for the formation of bands with non-trivial topology.
While the flat bands cannot be adiabatically connected to an atomic limit, they can be written as a difference between two EBRs (with integer coefficients), which indicates the fragile nature of their topology \cite{bradlyn:2019}. In particular, the bands can be expressed as the difference $\text{FT} = \text{AL}_1 - \text{AL}_2$, where $\text{AL}_1 = [\Gamma_1 \oplus \Gamma_3\Gamma_5,M_1\oplus2M_2]$ and $\text{AL}_2 = [\Gamma_1, M_1]$ are two sets of band representations, each forming an EBR. Indeed, subtracting the representation content at each high-symmetry point, $(\Gamma_1 \oplus \Gamma_3\Gamma_5) \ominus \Gamma_1 = \Gamma_3\Gamma_5$ and $(M_1\oplus2M_2) \ominus M_1 = 2M_2$, reproduces the irreducible representations of the two flat bands given above.
This feature distinguishes the current case from a strong topological phase, which cannot be trivialised by adding trivial degrees of freedom.
Unlike strong topological phases, which are characterized by topological edge states due to the bulk-boundary correspondence, bands with fragile topology protected by $C_n$ symmetry can exhibit a filling anomaly. This topological feature describes the situation in which a mismatch exists between the number of electrons required to simultaneously satisfy charge neutrality, a unique ground state in open boundary conditions, and the crystalline symmetry. In the spectrum, $n$ degenerate states associated with the filling anomaly appear in the gap, with $n/2$ of them occupied at charge neutrality. In our case, adding one more electron will lead to a quantized excess charge of $e/6$ in each corner of the flake \cite{huges:2019}.
\begin{figure}[b]
\centering
\includegraphics[width=1.\linewidth]{fig3_cornerstates.pdf}
\caption{\label{fig:corner_states} {\bf Corner-localized in-gap states} ({\bf a}) The spatial distribution of the absolute values squared of the eigenvectors corresponding to the in-gap states for $C_6$-symmetric graphene flake shown in Fig.~\ref{fig:pbc_obc_tbmodel}a with $N_{\text{uc}}=331$. ({\bf b}) Line profile of the local density of states at the bound state, integrated along the blue and red lines given in {\bf a}. Dashed red line indicates exponential fit. ({\bf c}) Finite-size scaling of the in-gap states showing their degeneracy in the thermodynamic limit. An arrow indicates the states with spatial distribution given in {\bf a}.}
\end{figure}
To investigate the appearance of this type of bulk-boundary correspondence associated with the fragile topology of the flat bands, we introduce a tight-binding model starting from the C $p_z$ and the transition-metal $d$ orbitals (see Methods). Such a model allows for the simulation of any open geometry, such as the ones shown in Fig.~\ref{fig:pbc_obc_tbmodel}. Before investigating this finite system further, we note that for appropriate parameters, the tight-binding model indeed yields flat bands with the correct irreducible representations as discussed above and shown in Fig.~\ref{fig:pbc_obc_tbmodel}.
The example of the $C_6$-symmetric flake in Fig.~\ref{fig:pbc_obc_tbmodel}a leads to the spectrum in panel b, with the spectrum of the translationally invariant system added for comparison. Indeed, we find in-gap states, though more than the six anticipated from fragile topology. We can understand the origin of these additional states by considering a ribbon geometry as shown in Fig.~\ref{fig:pbc_obc_tbmodel}c. As can be seen in Fig.~\ref{fig:pbc_obc_tbmodel}d, most of the in-gap states are associated with edge states which are, however, not completely gapless. These states are connected to an only weakly-broken mirror symmetry perpendicular to the open direction in the ribbon geometry. In the hybridization gap of these edge states, we find six in-gap states for the flake geometry, which we attribute to the fragile topology of the system.
Figure~\ref{fig:corner_states}a shows the contribution of these six in-gap states to the local density of states (LDOS) of the flake geometry. Their dominant spectral weight is localized at the corners of the flake, as further emphasized in Fig.~\ref{fig:corner_states}b, which shows the unit-cell-averaged weight of the wave functions of the states in the gap along the edge. Relevant to their experimental discovery is an exponentially decaying LDOS towards the center of the structure with a characteristic length of $0.292/N_{\text{uc}}$, which would allow for a distinction with respect to other edge-state observations. Finally, while the six states associated with the filling anomaly are not degenerate in a finite geometry, a finite-size scaling, Fig.~\ref{fig:corner_states}c, shows that they indeed become degenerate in the thermodynamic limit. As such, panel c in Fig.~\ref{fig:corner_states} serves as a useful guide for how the degeneracy of the corner-localized gap states evolves in the gradual buildup of such a structure.
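For completeness, a minimal sketch of how such a decay length can be extracted from an edge profile; the data below are synthetic stand-ins generated for illustration, not the calculated LDOS of Fig.~\ref{fig:corner_states}b:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
d = np.arange(1, 16)                       # distance from a corner, in unit cells
ldos = 0.8 * np.exp(-d / 3.0) + 1e-3 * rng.random(d.size)   # synthetic profile

decay = lambda d, A, xi: A * np.exp(-d / xi)
(A, xi), _ = curve_fit(decay, d, ldos, p0=(1.0, 1.0))
print(f"fitted decay length: {xi:.2f} unit cells")          # ~3 by construction
\end{verbatim}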
\section*{Discussion}
The system we propose here is conceptually simple, yet features intriguing topological properties. We demonstrated how the fragile topology manifests itself through a filling anomaly, which can be mapped with a local scanning probe.
In addition to the finite geometries that are required to probe the corner-localized states, our approach allows for more design freedom. In particular, while we focused here on a superstructure with $P_6$ symmetry, any subgroup of the graphene space group $P_6/mmm$ can be realized by choosing the appropriate superlattice vectors. Furthermore, defects in the lattice, which are a distinct way of probing topological bands, can be readily implemented by the deliberate addition or removal of atoms.
Our ideas of engineering topologically non-trivial flat bands through nanostructuring go beyond the periodic decoration of graphene with adatoms. A further promising route towards their realization can be based on artificial graphene, for example using scanning-tunneling-microscopic methods to arrange CO molecules on a Cu(111) surface~\cite{gomes:2012}.
Finally, in view of the required nanometer periodicity, we expect that graphene sheets could even be engineered by lithography techniques.
\section*{Acknowledgements}
A.S., S.S.T. and T.N. were supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (ERC-StG-Neupert-757867-PARATOP). A.S. was also supported by Forschungskredit of the University of Zurich, grant No. FK-20-101. S.S.T. and T.N. were additionally supported by NCRR Marvel. S.S.T. also acknowledges support from the Swiss National Science Foundation (grant number: PP00P2\_176877). F.D.N. thanks SNSF (PP00P2-176866) and ONR (N00014-20-1-2352) for generous support.
\section{Introduction}
\subsection{Motivation}
Recent conceptual developments in the quantum Internet have allowed researchers to start defining layer models for quantum network architectures \cite{QIavision,IETFlinklayer}. Similar to the OSI model for the classical Internet, these models separate physical and application issues to allow researchers to study experimental problems such as extending coherence times or establishing entangled links between distant nodes using various physical platforms~\cite{loopholefreebell,repeaterNV}, and conversely to start developing high-level applications that could sit on top of any physical implementation. A non-exhaustive overview of protocols for the future quantum Internet can be found on the Quantum Protocol Zoo website~\cite{Zoo}. These layer models shed light on composable security issues that have to be addressed. Roughly speaking, a protocol is said to be composably secure if it can be used multiple times in a row or as a subroutine in any bigger protocol without threatening the overall security. A protocol can thus be seen as a black box that can be composed with other protocols, which is precisely the way we would like to think of applications in such settings.
Expected progress within the next few years will lead to several realistic applications such as quantum money~\cite{QuantumMoney,weakcoinflippin}, voting~\cite{Leaderelection}, or anonymous transmission~\cite{Anonymity} that rely on first securely establishing entanglement between all nodes in a quantum network. More precisely, many applications can be achieved in such a network of $n$ nodes by first sharing an entangled state and then manipulating it locally to get the desired entanglement needed for the rest of the protocol. One such multiparty state is the GHZ state $\frac{\ket{0}^{\bigotimes n}+\ket{1}^{\bigotimes n}}{\sqrt{2}}$ \cite{GHZ}, where each party holds one qubit. In this context, it becomes relevant to have a secure protocol for ensuring that the source shares a state that is at least $\epsilon$-close to the GHZ state. In order to be practical, this protocol should place minimal requirements on the parties, namely the ability to perform classical communication and local single-qubit operations. It should also be composable, as it is meant to be used as a subroutine in bigger quantum network protocols.
To show composability results, one has to prove security in a composable framework. Here we use the Abstract Cryptography (AC) framework~\cite{AbstractCryptography,ConsCryptography} that captures the ideal-world vs. real-world paradigm. We continue this introduction by presenting the abstract cryptography framework and how composable security can be proven. We try to provide a full introduction to the framework so that it is accessible to non-experts in the topic. Then in the second section we present the ideal abstraction of a verification protocol with the desired security properties, as well as an actual multipartite entanglement verification protocol that achieves this functionality. This protocol, designed as a subroutine for bigger protocols in a distributed setting, presents interesting security properties and is believed to be achievable in the near future. We study its composability properties in the AC framework. Then in the last section we discuss possible implementations in near-term quantum networks and the limitations of such constructions.
\subsection{Composability and Abstract Cryptography}
In order to prove composable security, one needs to prove security in a composable framework. One such framework is Abstract (or Constructive) Cryptography (AC), a top-down approach developed by U. Maurer and R. Renner~\cite{AbstractCryptography,FromToConsCrypto,ConsCryptography} to define a simulation-based cryptography theory. It creates some notion of a module with well-defined interfaces that interacts with the rest of the world in a black box fashion. In the Universal Composability framework of Canetti~\cite{UC}, this is called a functionality. In AC, those modules are called resources, and going from one resource to another is done through converters called protocols. For example, a one-time pad protocol constructs a secure communication channel resource out of a secret key resource and an authenticated classical channel (see Fig.~\ref{fig:OTP}). In their first paper~\cite{AbstractCryptography}, Renner and Maurer defined a complete cryptography algebra of resources with their composition rules. This allowed them to define \textit{equivalence relations} between resources and to infer security notions that inherit composability properties. Moreover, this framework is of interest when modeling multiparty protocols, as it offers a simpler view of what dishonest parties could have access to than the usual game-based cryptography theory, where the strategy for a dishonest group must be given explicitly. The level of abstraction of the different resources can be modulated to highlight the properties that one wants to study about them. Finally, the AC framework is a resource theory with a large power of abstraction that allows us to think of a protocol the same way we would think of an application in the quantum Internet.
Different results have been achieved using this framework such as the study of unfair coin tossing~\cite{UnfairCoinTossing}, remote state preparation~\cite{RemoteState}, oblivious transfer~\cite{ObliTransfer} and composable security of multiparty delegated quantum computing~\cite{MultipartyQC,DelegatedQC,ComposableMultiDQC}. Different extensions have also been proposed such as adding relativistic constraints~\cite{Relativistic} or global event history in the case of ratcheting~\cite{Ratcheting}. Let us give a brief overview of this framework which we will use to study our multipartite entanglement verification protocol.\\
Abstract cryptography uses the concept of abstract systems to express cryptography as a resource theory. A cryptography protocol is viewed as the construction of some \textit{ideal} resource $\mathcal{S}$ out of other \textit{real} resources $\mathcal{R}$. This construction notion is made through converters. Finally, the distance between two resources is formalised through the notion of a distinguisher. Those three objects are the building blocks of the AC theory. \newpage
\begin{figure}[!ht]
\flushleft
\includegraphics[width=15cm]{OneTimePadResource.pdf}
\caption{Concrete One Time Pad resource $\pi_A\pi_B\mathcal{R}\pi_E$: Alice has access to the left interface, Bob to the right interface and Eve to the bottom interface. $\mathcal{R}$ is the resource composed of a secret key resource and an authenticated channel resource in parallel. Protocols are represented in blue, $\pi_E$ being the protocol of an honest Eve that blocks the input $y$ from the authenticated channel resource.}
\label{fig:OTP}
\end{figure}
A \textbf{resource} is an abstract system with interfaces specified by a set $\mathcal{I}$ (e.g. $\mathcal{I}= \{A,B,E\}$ for Alice, Bob and Eve in a tripartite setting). Each interface is accessible to one user and provides them with some abilities. Note that the notion of a party is not explicitly modeled in this framework, but induced by the interfaces they are restricted to have access to. Resources are used to model functionalities that are not done specifically by a party. They can be associated with real physical resources (e.g. a quantum channel) or with abstract functionalities (e.g. bit commitment or quantum random number generation). The level of abstraction of such a functionality is not bounded \textit{per se} but it is usually tailored to the application that one is modeling and the properties one wants to highlight. For example quantum memories can be explicit and represented with resources or abstracted in converters. Classical protocols can also be explicitly shown or abstracted through oracle calls. Moreover any parallel composition of resources is a resource in which the interface set corresponds to the union of the ones from the composed resources.
\textbf{Converters} are also abstract systems with one set of ``inside'' interfaces that are expected to be connected to a resource and one set of ``outside'' interfaces. Their name derives from the fact that a converter attached to a resource converts it into another resource by emulating a certain set of interfaces to the outside world. They typically model the local computation of a party during a protocol and are denoted with Greek letters. For a resource $\mathcal{R}$ with interfaces $A$ and $B$ and a two-party protocol $\pi=\{\pi_A,\pi_B\}$ we denote $\pi_A\mathcal{R}\pi_B$ the resource obtained from connecting $\pi_A$ to interface $A$ and $\pi_B$ to interface $B$ (see Fig.~\ref{fig:OTP}). A dishonest party is then modeled by just unplugging their corresponding converter from the resources, indicating that the party is not following the protocol. This leaves the interface they have been accessing open to the outside world.
Note that the ordering of the converters is not important and that they are usually written in the most readable way.
Converters are also used to model the honest utilisation of an ideal resource. Indeed, a dishonest user might have access to more functionalities than an honest one. To model this we use a converter, a \textit{filter}, to cover these functionalities for an honest player, which we remove in case of a dishonest utilisation of the resource (see an example in Fig.~\ref{fig:PrivClassChan}). Finally, converters are used in the ideal world to simulate the local output to a dishonest party, in which case we use the term \textit{simulator}. Converters and resources can be described with the help of boxes and arrows, as well as in the form of algorithms by specifying where each output goes.
\begin{figure}[!ht]
\centering
\includegraphics[width=9cm]{PrivateClassChan.pdf}
\caption{Filtered one-way private classical channel resource. It takes as input a bitstring $x$ at the left interface, outputs it at the right interface and leaks its size $|x|$ on the bottom interface. $\bot$ is a filter blocking the bottom interface to simulate an honest use of the resource. As we will see in the next section, this resource is equivalent to the one of Fig.~\ref{fig:OTP}.}
\label{fig:PrivClassChan}
\end{figure}
Abstract cryptography is thus the theory of breaking down cryptographic processes into box-shaped resources that can be composed together in series or in parallel. Resources, which represent cryptographic primitives, can be transformed into other resources using converters and composition. A \textit{concrete} resource represents an actual protocol using physical systems and classical and/or quantum operations while an \textit{ideal} resource is the abstraction of the functionality achieved by the protocol. We say that a protocol $\pi = \{\pi_A, \pi_B\}$ constructs the resource $\mathcal{S}$ out of $\mathcal{R}$ and write $\mathcal{R}\xrightarrow{\pi} \mathcal{S}$. Such a construction is \textit{composable} if for all $\mathcal{R},\mathcal{S}$ and $\mathcal{T}$ resources and $\pi,\nu$ converters (protocols) such that $\mathcal{R}\xrightarrow{\pi} \mathcal{S}$ and $\mathcal{S}\xrightarrow{\nu} \mathcal{T}$ we have that
\begin{equation}
\mathcal{R} \xrightarrow{\pi} \mathcal{S} \wedge \mathcal{S} \xrightarrow{\nu} \mathcal{T} \implies \mathcal{R} \xrightarrow{ \nu \circ \pi} \mathcal{T}.
\end{equation}
\subsection{Security definition and assumptions}
\label{subsec:SecDef}
To show that a protocol $\pi$ constructs the ideal $\mathcal{S}$ out of concrete resource $\mathcal{R}$, we have to capture an equivalence notion, with a metric $\approx$ such that $\pi\mathcal{R} \approx \mathcal{S} \overset{def}{\iff}\mathcal{R}\xrightarrow{\pi} \mathcal{S}$. To that end, Abstract Cryptography introduces \textbf{Distinguishers}. They are abstract systems that are used to construct a pseudo-metric between two resources. They replace the notion of an adversary and also encompass any protocol that is run before, after or during the protocol being analyzed. As its name indicates, a distinguisher is used to distinguish between two resources $\mathcal{R}$ and $\mathcal{S}$ by connecting to all their interfaces and outputting a single bit: a guess whether it is interacting with $\mathcal{R}$ or $\mathcal{S}$ (see Fig.~\ref{fig:Distinguisher}). The advantage of a distinguisher $\mathbf{D}$ is given by
\begin{equation*}
d^{\mathbf{D}}(\mathcal{R},\mathcal{S})=|Pr[\mathbf{D}\mathcal{R} = 0] - Pr[\mathbf{D}\mathcal{S} = 0]|,
\end{equation*}
where $\mathbf{D}\mathcal{R}$ is the output of $\mathbf{D}$ when interacting with $\mathcal{R}$. For example in Fig.~\ref{fig:Distinguisher}, replacing $\mathcal{R}$ with $\pi_A\pi_B\mathcal{R}\pi_E$ from Fig.\ref{fig:OTP} and $\mathcal{S}$ with the filtered private authenticated classical channel resource from Fig.~\ref{fig:PrivClassChan}, we see that any distinguisher $\mathbf{D}$ will see the same output $x$ for any given input $x$ on any of the two resources. Hence we have that $d^{\mathbf{D}}(\mathcal{R},\mathcal{S})=0$. For a class of distinguishers $\mathbb{D}$, the distinguishing advantage is defined as
\begin{equation*}
d^{\mathbb{D}}(\mathcal{R},\mathcal{S}) = \underset{\mathbf{D}\in\mathbb{D}}{\sup} d^{\mathbf{D}}(\mathcal{R},\mathcal{S}).
\end{equation*}
\begin{figure}[!ht]
\centering
\includegraphics[width=13cm]{Distinguisher.pdf}
\caption{A distinguisher interacting with $\mathcal{R}$ and $\mathcal{S}$. It has access to a complete description of the two systems and can choose the inputs of all players, receive their outputs and simultaneously fulfill the role of an adversary. After interaction, it must guess which resource is which. Replacing $\mathcal{R}$ by Fig.~\ref{fig:OTP} and $\mathcal{S}$ by Fig.~\ref{fig:PrivClassChan}, no distinguisher is able to guess between the two resources.}
\label{fig:Distinguisher}
\end{figure}
The distinguishing advantage is a pseudo-metric on the space satisfying all properties of a composable distance, namely identity, symmetry and the triangle inequality. This allows one to define \textit{equivalence relations} between resources: for a class of distinguishers $\mathbb{D}$ we say that $\mathcal{R}$ is equivalent to (or $\epsilon$-close to) $\mathcal{S}$ and write $\mathcal{R} \approx \mathcal{S}$ (resp. $\mathcal{R} \approx_{\epsilon} \mathcal{S}$) if $d^{\mathbb{D}}(\mathcal{R},\mathcal{S})=0$ (resp. $d^{\mathbb{D}}(\mathcal{R},\mathcal{S})\leq\epsilon$).
To summarize, converters describe mostly local and non-costly operations while resources can have non local functionalities and extended computational power. Distinguishers are all powerful objects that represent the environment trying to guess between two resources. \\
We now have the necessary ingredients to present the notion of secure construction of a resource in AC. Let $\pi=\{\pi_i\}_{i=1}^n$ be a protocol run by $n$ parties using the concrete resource $\mathcal{R}$ that has interfaces $\mathcal{I}$ and let $\mathcal{S}$ be an ideal resource with all the desired properties expected from the protocol. We say that \textbf{$\pi$ securely constructs $\mathcal{S}$ out of $\mathcal{R}$ within $\epsilon$} and write $\mathcal{R}\xrightarrow{(\pi,\epsilon)} \mathcal{S} $ if there exist converters $\sigma = \{\sigma_{i}\}$ called \textit{simulators} such that
\begin{equation}
\forall \mathcal{P}\subseteq\mathcal{I}, \pi_{\mathcal{P}}\mathcal{R} \approx_{\epsilon} \sigma_{\mathcal{I}\setminus\mathcal{P}}\mathcal{S},
\end{equation}
with $\forall \mathcal{P}\subseteq\mathcal{I}, \pi_{\mathcal{P}}=\{\pi_i\}_{i\in\mathcal{P}}$.
This means that for each party $i$ that does not follow its protocol $\pi_i$, we are able to find a simulator $\sigma_i$ that locally simulates on the ideal resource the interfaces the party has access to on the concrete resource. Simulators don't represent actual concrete operations and should only be seen as a tool in the proof. For example, using a simulator $\sigma$ taking as input a size and producing a random bit string of this size, we have an equivalence relation between the concrete one-time pad resource with a dishonest Eve $\pi_A\pi_B\mathcal{R}$ and the ideal private classical channel resource on which we attach $\sigma$ (see Fig.~\ref{fig:OTPequiv}). This equivalence, together with the equivalence of Fig.~\ref{fig:OTP} and Fig.~\ref{fig:PrivClassChan} (usually denoted as the correctness of the protocol), proves the composably secure construction of the private classical channel resource by the one-time pad protocol. In this case, those two equivalences suffice because we suppose Alice and Bob to be always honest in the one-time pad protocol. In general, one must find simulators for each subset of possible dishonest parties to prove composable construction.
\begin{figure}[!ht]
\centering
\includegraphics[width = 16cm]{OTPequiv.pdf}
\caption{Equivalence between the One-time pad resource with a dishonest Eve and the ideal private classical channel resource with the simulator $\sigma$.}
\label{fig:OTPequiv}
\end{figure}
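To make this equivalence concrete, the following Monte Carlo sketch samples Eve's view at the open interface of both resources, the ciphertext $x\oplus k$ with a fresh uniform key on the concrete side and the simulator's fresh uniform string on the ideal side, and estimates the total-variation distance between the two; the message length and sample count are arbitrary choices, and the residual value reflects sampling noise only:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_BITS, N_SAMPLES = 8, 200_000
x = rng.integers(0, 2, N_BITS)        # the distinguisher's chosen message

def empirical(bit_samples):
    # empirical distribution of Eve's view, one row of bits per sample
    vals = bit_samples @ (1 << np.arange(N_BITS)[::-1])
    return np.bincount(vals, minlength=2**N_BITS) / len(vals)

keys = rng.integers(0, 2, (N_SAMPLES, N_BITS))
real = empirical(x ^ keys)                                  # ciphertexts x XOR k
ideal = empirical(rng.integers(0, 2, (N_SAMPLES, N_BITS)))  # simulator output

tv = 0.5 * np.abs(real - ideal).sum()
print(f"estimated total-variation distance: {tv:.3f}")      # ~0.02: pure noise
\end{verbatim}
Since the two view distributions coincide for every choice of message, no distinguisher, which can only post-process these views, does better than a random guess.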
The power of the class of distinguishers and simulators used to prove a secure construction determines the strength of the security proof. For example considering only classical distinguishers leads to security against classical adversaries while considering all powerful distinguishers leads to information-theoretic security. Ideally we would want the class of simulators to be restricted to a class of easily implementable converters and the set of distinguishers to be as general as possible. This leads to security statements such as ``We can easily construct the ideal resource $\mathcal{S}$ from $\mathcal{R}$ and we can easily simulate any cheating behaviour such that even a very powerful distinguisher cannot tell the two resources apart''.
\vspace{1cm}
\section{An entanglement generation testing resource}
In this section we start with a verification protocol that can be realised with current technology and show its equivalence with an ideal verification resource. Then we build upon it to construct an ideal \textit{verified GHZ sharing resource} that the $n$ parties of a network can securely use to get a verified GHZ state as a subroutine of a bigger protocol, even when the source is noisy or malicious.
\subsection{Multipartite entanglement verification protocol}
\label{subsec:MEVprotocol}
In the following we describe a protocol that securely constructs an ideal multipartite entanglement verification resource using only classical communication between $n$ parties that each receive a single qubit from a source of multipartite entanglement. We believe the following proof can be adapted to any stabilizer state verification where parties first receive a qubit and then perform only local operations and classical communication (LOCC).\\ We first review the original protocol from~\cite{MEVresistant}, then introduce the ideal and concrete resources, and finally prove the secure construction. In this paper we will use ``Source'' to refer interchangeably to the party controlling the entanglement source or to the device itself. We will consider authenticated classical communication and perfect quantum communication, as any imperfection can be modeled as the source perfectly sending noisy states.
Our work is based on the work from~\cite{MEVresistant} where the authors develop and analyze an $n$-party verification protocol consisting only of classical communication and local quantum operations once the state is shared. One of the parties, called the \textit{Verifier}, has a central role in the protocol: it sends instructions to all parties and broadcasts the output of the verification. We recall the protocol of~\cite{MEVresistant}:
\begin{protocol}{Multipartite entanglement verification protocol}
\begin{enumerate}
\item The source creates an $n$-qubit GHZ state and sends each qubit $i$ to party $i$ using a state generation resource and $n$ one-way quantum channels.
\item The Verifier selects for each $i\in[n]$ a random input $x_{i}\in\{0,1\}$ such that $\sum_{i=1}^{n}x_{i}\equiv 0$ (mod 2) and sends it to the corresponding party via an authenticated classical channel resource. The Verifier keeps their own input to themselves.
\item If $x_{i}=0$, party $i$ performs a Hadamard operation on their qubit. If $x_{i}=1$, party $i$ performs a $\sqrt{X}$ operation.
\item Each party $i$ measures their qubit in the $\{\ket{0},\ket{1}\}$ basis and sends their outcome $y_{i}$ to the Verifier via the classical channel.
\item The Verifier accepts and outputs $b_{out}=0$ if and only if
\begin{equation*}
\sum_{i=1}^{n}y_{i}\equiv\frac{1}{2}\sum_{i=1}^{n}x_{i} \text{(mod 2)}
\end{equation*}
\end{enumerate}
\end{protocol}
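Since an honest run only involves single-qubit gates and computational-basis measurements once the state is distributed, it is straightforward to simulate classically for small $n$. The following sketch (our own illustration, with arbitrary choices of $n$ and the number of rounds) checks that a perfect GHZ state always passes the test, while the product state $\ket{0}^{\otimes n}$ is rejected half of the time:
\begin{verbatim}
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
SX = np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]]) / 2   # sqrt(X)

def one_round(psi, n):
    x = rng.integers(0, 2, n)
    x[-1] = x[:-1].sum() % 2                    # enforce even parity of inputs
    U = reduce(np.kron, [SX if xi else H for xi in x])
    probs = np.abs(U @ psi) ** 2
    outcome = rng.choice(len(probs), p=probs / probs.sum())
    y = [(outcome >> (n - 1 - i)) & 1 for i in range(n)]  # measured bits
    return (sum(y) % 2) == ((x.sum() // 2) % 2)           # acceptance test

n = 4
ghz = np.zeros(2**n, dtype=complex); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
bad = np.zeros(2**n, dtype=complex); bad[0] = 1.0         # |0...0>
for name, psi in [("GHZ", ghz), ("|0000>", bad)]:
    acc = np.mean([one_round(psi, n) for _ in range(2000)])
    print(name, "acceptance rate:", acc)                  # -> 1.0 and ~0.5
\end{verbatim}
For the product state, each qubit's measurement outcome is uniformly random after either local gate, so the parity condition of step 5 holds with probability exactly $1/2$.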
\newpage
This protocol has been extensively studied and presents the desirable properties expected from such a protocol: it is correct, and for one round its output depends on the distance between the state actually shared by the source and the GHZ state, as well as on the number of malicious parties. Indeed, for a state $\rho$ shared among the parties, $b_{out}$ is such that:
\begin{equation}
b_{out} = \left\{
\begin{array}{ll}
0 & \mbox{with probability } 1 - \frac{\tau^{2}}{2} \\
1 & \mbox{with probability } \frac{\tau^{2}}{2}
\end{array}
\right.
\end{equation}
with
\begin{equation}
\tau=\min_{U}\mbox{TD}(\ketbra{GHZ}{GHZ}, U\rho U^\dagger)
\end{equation}
where TD is the trace distance and $U$ is an operator acting only on the space of the dishonest parties.
This protocol is made to be repeated for several rounds until some confidence is built that the source shares GHZ states. In order to prevent the source from sending a wrong state in the round where it is supposed to be used for computation, the authors of~\cite{MEVresistant} considered randomizing this round. They also randomize which party plays the role of the Verifier at each verification round to prevent malicious actions from the parties. Thus, all parties have access to a trusted common random source that gives, at each round, a random bit $C\in\{0,1\}$ used as a security parameter and an identifier for one party $i\in[n]$. If $C=0$ (which happens with some probability $P_C$), the state is used for computation. If $C=1$ (which happens with probability $1-P_C$), the parties perform the above verification protocol with $i$ as the Verifier and restart only if the state is accepted. It has been proven that the probability that the protocol has not aborted and that a state $\rho$ with $\mbox{TD}(\ketbra{GHZ}{GHZ}, U\rho U^\dagger)\geq \epsilon$, where $U$ is an operator on the space of the $k$ dishonest parties, is used for computation is less than $P_C=\frac{4n}{k\epsilon^2}$.
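To illustrate the role of $P_C$, the following sketch (our own abstraction; each verification round on a fixed faulty state is modeled as a Bernoulli trial rejecting with the single-round probability $\frac{\tau^{2}}{2}$ quoted above, and the parameter values are arbitrary) estimates how often such a state survives all verification rounds and reaches the computation round:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def run(p_c, p_reject):
    while True:
        if rng.random() < p_c:        # C = 0: the state is used for computation
            return "used"
        if rng.random() < p_reject:   # C = 1: a verification round rejects
            return "abort"

p_c, tau = 1e-3, 0.5                  # illustrative values only
trials = [run(p_c, tau**2 / 2) for _ in range(20_000)]
print("P(bad state used) ~", trials.count("used") / len(trials))  # ~0.008
\end{verbatim}
Decreasing $P_C$ suppresses this probability further, at the cost of a longer sequence of verification rounds; this is the trade-off behind the bound quoted above.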
The security properties in~\cite{MEVresistant} are proven in a game-based framework and hence are not composable. There is for example a strategy where, when performing the protocol multiple times in a row, a malicious coalition of parties and source could increase the probability that the honest parties accept a state that is not a GHZ state. It has indeed been noticed that if we allow for a 50\% loss rate in the quantum communication, there exists a strategy for dishonest players that increases their probability of making the others accept a faulty state. This problem was later solved in~\cite{MEVexperimental}, where a loss-tolerant variation of this protocol with the same security properties, called the $\theta$-protocol, was implemented in a photonic setting. It mainly consists of changing the classical instructions $X=\{x_i\}_{i=1}^n$ sent by the Verifier to angles $\Theta=\{\theta_i\}_{i=1}^n$ indicating the rotated measurement basis for each party. This protocol increases the loss that can be tolerated by the protocol, but the dishonest parties can still increase their cheating probability if the losses are high enough. For simplicity we will consider only the version of the protocol presented above (called the XY-protocol), but the following proof can be straightforwardly extended to match the $\theta$-protocol as well.\newpage
Composability issues are due to somewhat hidden assumptions in the original game-based model, such as the lossless channel assumption. Moreover, it is assumed that the dishonest parties do not disturb the classical communication between the players or the random choice of the Verifier, and that they don't have access to the quantum memories of the honest parties. This may threaten the security of the protocol when used as a subroutine of a bigger one. Finally, the game-based framework assumes a specific strategy from the dishonest parties: they are actively trying to convince the honest parties that they all share a GHZ state when they don't. This of course makes sense when we look at the protocol in a stand-alone setting, but may not be the case when the protocol is part of a bigger, more complicated one. Using the AC framework, we can deal with all possible dishonest strategies with the help of distinguishers. It forces us to make every input and output of each party explicit and to avoid hidden assumptions about dishonest behaviour and physical resources. Additionally, it gives a box-like form for the protocol that corresponds to the way we think about applications in the quantum Internet.
\vspace{-0.2cm}
\subsection{Ideal Resource}
Let us now present the ideal resource for practical multipartite entanglement verification. Consider a source using physical resources to create and share an $n$-qubit quantum state to $n$ parties expecting a qubit from a GHZ state. Our resource, called $\mathcal{MEV}_C$, aims to get a sense of how trustworthy the source is by verifying that it sends a state close to the GHZ state. It also has a built-in parameter $C$ that makes the resource output qubits with some probability known by all $n+1$ parties using the resource.
This black box (see Fig.~\ref{fig:IdealHonestMEV} for a 3-party example) has $n+1$ input interfaces. All $n$ parties wishing to test a source collectively send a start signal to the input interfaces of the resource. The last interface is the source interface, which gets a classical description of the state sent by the source.
Upon reception of the start inputs, $\mathcal{MEV}_C$ will forward the start signals to the source interface then wait for the classical description of an $n$-qubit quantum state $\rho$. After that, it outputs on all interfaces a bit $C=0$ with probability $p$, or $C=1$ with probability $1-p$. This bit indicates if the resource is going to output qubits or a verification bit $b_{out}$. The probability distribution of $C$ can be tuned freely to match any distribution. If $C=0$ it then outputs to each party a qubit of $\rho$ and if $C=1$ it computes a bit $b_{out}$ indicating if the state shared by the source is close to the GHZ state and sends it to all parties. This box is made to be composed with itself in series with a very small $p$ until all parties get a qubit or $b_{out}=1$. \newpage
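For concreteness, one activation of the box can be summarised by the following schematic sketch (our illustration only: the trace-distance computation is abstracted into a supplied function \texttt{distance\_to\_ghz}, and the qubit outputs are represented symbolically):
\begin{verbatim}
# Schematic rendering of one activation of MEV_C (illustration only).
import random

def mev_c(rho, p, distance_to_ghz):
    # rho: classical description of the n-qubit state sent by the source
    # p = Pr[C = 0]; distance_to_ghz(rho) returns the trace distance tau
    if random.random() < p:
        return {"C": 0, "out": "one qubit of rho per party"}
    tau = distance_to_ghz(rho)
    b_out = 1 if random.random() < tau**2 / 2 else 0
    return {"C": 1, "b_out": b_out}
\end{verbatim}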
\begin{figure}[!ht]
\centering
\includegraphics[width=10cm]{MEVresource.pdf}
\caption{The $\mathcal{MEV}_C$ resource for $n=3$ parties. For readability we put the parties' interfaces on the left and the source interface on the right. The left interfaces are ``collective interfaces'', meaning that inputs are sent collectively by all the parties and the output is obtained by all parties.}
\label{fig:IdealHonestMEV}
\end{figure}
The output bit $b_{out}$ should indicate whether the state shared by the source is $\epsilon$-close to the GHZ state for some $\epsilon$. At this level of abstraction, we don't care whether this behaviour comes from a faulty device or an actual adversary trying to manipulate the source. Our $\mathcal{MEV}_C$ resource outputs a $b_{out}$ such that
\begin{equation}
\label{output}
b_{out} = \left\{
\begin{array}{ll}
0 & \mbox{with probability } 1 - \frac{\tau^{2}}{2} \\
1 & \mbox{with probability } \frac{\tau^{2}}{2}
\end{array}
\right.
\end{equation}
with
\begin{equation}
\tau=\mbox{TD}(\ketbra{GHZ}{GHZ}, \rho),
\end{equation}
where TD is the trace distance. The output of the resource is thus probabilistic, and depends on the trace distance between the input state $\rho$ and the GHZ state and on the security parameter $C$. Notice that this $b_{out}$ follows the same distribution as the one of the original protocol (see Sec.~\ref{subsec:MEVprotocol}) in the case where all parties are honest. The security parameter $C$ is added to the verification procedure to make the resource suitable for practical use in larger protocols where one wants to eventually get shared entanglement between the parties when the source is acting correctly.
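As a simple worked example of this distribution (our illustration): if the source prepares the mixture $\rho=(1-\lambda)\ketbra{GHZ}{GHZ}+\lambda\ketbra{\psi^{\perp}}{\psi^{\perp}}$ for some pure state $\ket{\psi^{\perp}}$ orthogonal to the GHZ state, then $\tau=\mbox{TD}(\ketbra{GHZ}{GHZ},\rho)=\lambda$, and a verification round ($C=1$) outputs $b_{out}=1$ with probability $\lambda^{2}/2$.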
Now, in the case of the honest use of the resource, the source interface is given as input a classical description of the GHZ state. Moreover, the output $C$ remains hidden from the outside world. In AC this is modeled by using converters, the so-called \textit{filters}, that block the adversarial interfaces (thus filtering the outputs) and send a specific input. In our case we define one filter $\bot$ that enforces the honest use of $\mathcal{MEV}_C$. It blocks any deviation from the outside world and, upon reception of a start signal, sends a classical description of a GHZ state to $\mathcal{MEV}_C$ (see Fig.~\ref{fig:Bot}). It has its inside interface plugged into the $\mathcal{MEV}_C$ resource and its outside interface open to inputs from any distinguisher (see \cite{Ratcheting} for an extended discussion about filtering and the inclusion of events in AC). \newpage
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm]{Bot.pdf}
\caption{Filter $\bot$. Upon reception of a start input, it outputs a classical description of a GHZ state on its inside interface and blocks any input at its outside or inside interface.}
\label{fig:Bot}
\end{figure}
Composed with $\mathcal{MEV}_C$, this filter forms our ideal resource $\mathcal{MEV}_{C}\bot$ for secure verified GHZ sharing or source testing (see Fig.~\ref{fig:IdealHonestFilteredMEV} for a 3-party example).
\begin{figure}[!ht]
\centering
\includegraphics[width=12cm]{IdealHonestFilteredMEV.pdf}
\caption{The ideal filtered $\mathcal{MEV}_{C}\bot$ resource for $n=3$ parties. On the left are the ``collective interfaces'' that are used by the parties to collectively send the start signal and receive the output. On the right is the source interface filtered by $\bot$, that blocks any input and sends a specific message to the resource.}
\label{fig:IdealHonestFilteredMEV}
\end{figure}
\newpage
\subsection{Concrete Resource}
We will now make the protocol explicit in the AC framework, by defining the resources used and the converters for each party. We first define the concrete resources, which in this case are abstractions of physical resources. Namely, we define the state generator resource, the one-way quantum channel resource, the two-way classical channel resource and two multiparty classical computation oracles.
The state generator resource (see Fig.~\ref{fig:StateGen}) represents a perfect source of quantum states able to create arbitrary quantum states of at most $n$ qubits. Receiving a classical description of an $n$-qubit state $\rho$ on its input interface, it outputs each qubit of $\rho$ on its $n$ output interfaces. This resource can be used to model imperfect sources by including the noise in the classical description of the state given as input. We consider that no information about the state it creates is leaked by this resource, as this is the most restrictive scenario in our security proof. In Sec.~\ref{subsec:PracticalImp} we discuss the realization of such a resource.
\begin{figure}[!ht]
\centering
\includegraphics[width=7cm]{StateGenerator.pdf}
\caption{State generator resource.}
\label{fig:StateGen}
\end{figure}
The $\mathcal{SG}_n$ resource is to be composed with $n$ quantum channel resources which we draw as arrows with a Q (see Fig.~\ref{fig:QChannel}). A quantum channel resource in our case is a perfect private authenticated quantum channel which takes as input a qubit and outputs the same qubit at a different place without any leakage.
\begin{figure}[!ht]
\centering
\includegraphics[width=7cm]{QChannel.pdf}
\caption{Quantum channel resource.}
\label{fig:QChannel}
\end{figure}
Finally, the classical communication between parties is modeled through classical channel resources, which we simply draw as arrows (see Fig.~\ref{fig:CChannel}). They take bits at any of their interfaces and transmit them to the other interface. We suppose these channels to be authenticated: to any other party watching the channel, they also output the transmitted message, without the possibility of altering it. In order not to overload the figures, we don't represent this leaking interface when all parties are honest, but we do when considering a dishonest source watching over the classical communication.
\begin{figure}[!ht]
\centering
\includegraphics[width=7cm]{ClassicalChan.pdf}
\caption{Classical channel resource.}
\label{fig:CChannel}
\end{figure}
\newpage
We will abstract multiparty classical functionalities achieved by the parties by the use of oracle queries. All parties can collectively call two oracles $\mathcal{O}_C$ and $\mathcal{O}_v$ that respectively give a common random bit $C$ and a common random party identifier $v$ to each party. We will draw them as boxes with $n$ input interfaces expecting a collective query from the parties and $n$ output interfaces broadcasting $C$ or $v$. This is a modeling of classical communication protocols that provide random bits and random identifiers to the parties. It is not considered private, and the values of $C$ and $v$ are available to any malicious party watching over the classical communication. We will discuss how these oracles can be replaced by actual classical protocols in Sec.~\ref{subsec:PracticalImp}. Moreover, each party is locally equipped with a quantum register able to perfectly store a qubit for the time required by the protocol, on which they can perform one-qubit operations and measurements. Quantum registers, as well as the leakage interfaces of the classical channels, will not be drawn in the figures for simplicity, but they should not be forgotten as assumptions of our model, particularly when considering the case of a malicious party. Since we consider here all parties to be honest during the verification protocol, we only draw resources and interfaces of interest.
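For concreteness, the two oracles behave like the following trivial stand-ins (our toy code; the actual multiparty classical protocols replacing them are discussed in Sec.~\ref{subsec:PracticalImp}):
\begin{verbatim}
# Toy stand-ins for the two classical oracles (illustration only);
# in the protocol their outputs are broadcast to all n parties.
import random

def oracle_C(p):
    return 0 if random.random() < p else 1   # common random bit

def oracle_v(n):
    return random.randrange(1, n + 1)        # common random identifier
\end{verbatim}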
We call $\mathcal{R}$ the resource constructed by a state generator resource composed in series to a collection of $n$ quantum channel resources and in parallel to $n$ classical channel resources, $\mathcal{O}_C$ and $\mathcal{O}_v$. $\mathcal{R}$ formally defines the creation of a state, common classical randomness generator protocols, the (2-way) classical communication between the Verifier and the parties and the (one-way) quantum communication between the source and the parties.\\
The next step is to present the converters $\pi=\{\pi_i\}_{i=1}^n$ and $\pi_S$ that represent the protocols followed by each party and the source. They model the local computation of each party during an honest round of the protocol and can be represented either as algorithms or as boxes and arrows, that both expect some input from which they produce output to send to the resources. Their quantum abilities are equal to the ones that we give to local parties performing the multipartite entanglement verification protocol \cite{MEVresistant}.
We start with $\pi=\{\pi_i\}_{i=1}^n$ representing the protocol followed by each party $i$. $i$ is a binary identifier for each party, but for simplicity we represent it with $i\in[n]$ and we write $\pi_{[n]}$ for the parallel composition of all $\{\pi_i\}_{i=1}^n$.
\begin{protocol}{Protocol for the $i_{\text{th}}$ party $\pi_{i}$}
\begin{enumerate}
\item Ask the source to send a GHZ state. Wait for the reception of the qubit.
\item After reception, query $\mathcal{O}_C$, get $C$ and output it. If $C=0$, keep the qubit (output the qubit to the party).
\item If $C=1$, \begin{enumerate}
\item Query $\mathcal{O}_v$, get $v$.
\item If $v\neq i$ \begin{enumerate}
\item Wait for the reception of $x_i$.
\item If $x_{i}=0$, perform a Hadamard operation on the qubit. If $x_{i}=1$, perform a $\sqrt{X}$ operation on the qubit.
\item Measure in the $\{\ket{0},\ket{1}\}$ basis.
\item Send the outcome $y_{i}$ to the Verifier via the classical channel resource.
\end{enumerate}
\item If $v=i$ \begin{enumerate}
\item Create a random bit string $X=\{x_{i}\}_{i=1}^{n}$ with $x_{i}\in\{0,1\}$ such that $\sum_{i=1}^{n}x_{i}\equiv 0 \pmod{2}$.
\item Send each $x_{i}$ to party $i$ via a classical channel resource; keep $x_{v}$.
\item Follow steps (iii).b.2 to (iii).b.4 and get $y_v$.
\item Wait for the reception of all the other $y_{i}$.
\item Upon the reception of all the $y_{i}$, output 0 to all if
\begin{equation*}
\sum_{i=1}^{n}y_{i}\equiv\frac{1}{2}\sum_{i=1}^{n}x_{i} \pmod{2}
\end{equation*}
and 1 otherwise.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{protocol}
The actual verification protocol is thus seen here as a subroutine (steps (iii).(a) to (iii).(c)). All parties start by collectively querying a qubit and $C$ and then, depending on the value of $C$, they either keep the qubit or do the verification protocol. During the verification protocol, one party is chosen to be the Verifier and after some classical communication and local quantum operations, the Verifier sends the output $b_{out}$ to all parties.
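The quantum part of one verification round can be checked numerically. The following sketch (our illustration, using numpy; it relies on the standard identifications of $H$ with an $X$-basis measurement and $\sqrt{X}$ with a $Y$-basis measurement) simulates one round of the subroutine on a pure $n$-qubit state and confirms that a perfect GHZ state always passes the test:
\begin{verbatim}
# One verification round of the XY-protocol on a pure state (sketch).
import numpy as np

def ghz(n):
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
SX = 0.5 * np.array([[1 + 1j, 1 - 1j],
                     [1 - 1j, 1 + 1j]])          # sqrt(X)

def verification_round(psi, rng):
    n = int(np.log2(psi.size))
    x = rng.integers(0, 2, size=n)               # Verifier's instructions
    x[-1] = x[:-1].sum() % 2                     # enforce even parity
    U = np.array([[1.0 + 0j]])
    for xi in x:                                 # H if x_i = 0, sqrt(X) if 1
        U = np.kron(U, H if xi == 0 else SX)
    phi = U @ psi
    probs = np.abs(phi)**2
    k = rng.choice(phi.size, p=probs / probs.sum())
    y = [(k >> (n - 1 - i)) & 1 for i in range(n)]
    return sum(y) % 2 == (x.sum() // 2) % 2      # Verifier's test

rng = np.random.default_rng(0)
print(all(verification_round(ghz(4), rng) for _ in range(500)))  # True
\end{verbatim}
Running the same routine on a state far from the GHZ state makes it fail the test with the corresponding probability.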
The last converter, $\pi_S$, represents the local operations that an honest source would perform, using the state generator to create an $n$-qubit GHZ state and send it to the parties. That is, it simply sends, upon receiving a signal from the parties, a classical description of the GHZ state to the $\mathcal{SG}_n$ resource. It implies that the source is not watching the classical communication between the parties at any point. Functioning like a filter, this converter is made to be removed in case the source is noisy or some malicious party takes control of the source, revealing new interfaces to the outside world.
\begin{protocol}{Protocol for the source $\pi_{S}$ }
\begin{enumerate}
\item Upon reception of a query by the parties, send a classical description of the GHZ state to the $\mathcal{SG}_n$ resource.
\end{enumerate}
\end{protocol}
\vspace{1cm}
Together with $\mathcal{R}$, this completes the definition of the concrete multipartite entanglement resource $\pi_{[n]}\mathcal{R}\pi_S$ (see Fig.~\ref{fig:ConcreteProt} for a 3-party example), which takes as input a start signal and outputs a bit $C$ then a qubit from a GHZ state to each party or a bit $b_{out}=0$.
\begin{figure}[!ht]
\centering
\includegraphics[width=16.5cm]{ConcreteResourceHonest.pdf}
\caption{The $\pi_{[n]}\mathcal{R}\pi_S$ resource within the dotted red line for $n=3$ parties wishing to test a source, when party 1 is chosen to be the Verifier. We represent resources in red and converters in blue. We recall the timeline of the protocol: (1) all the $\pi_i$ send a start signal to $\pi_S$, which sends a classical description of a GHZ state to the $\mathcal{SG}_n$ resource. (2) Upon reception of the qubit, they send a query to $\mathcal{O}_C$ and get $C$. (3) If $C=0$ they output a GHZ qubit, and if $C=1$ the parties query $\mathcal{O}_v$ and get $v$ (here party 1). (4) The Verifier sends instructions $X=\{x_i\}_{i=1}^n$ (here $\{x_2,x_3\}$) to the other parties, gets outcomes $Y=\{y_i\}_{i=1}^n$ (here $\{y_2,y_3\}$) and computes and broadcasts $b_{out}$. To avoid overloading the figure we don't represent quantum memories or the classical signals going from $\pi_1$ to $\pi_S$. As $\pi_S$ represents honest behaviour from the source, we also don't represent the leakage of information from the classical channels.}
\label{fig:ConcreteProt}
\end{figure}\newpage
\subsection{Security Analysis}
\label{subsec:SecAnal}
We come now to the proof of the main claim of this paper, namely that the multipartite entanglement verification protocol $\pi$ securely constructs the $\mathcal{MEV}_C$ resource out of $\mathcal{R}$. We proceed as expected from the security definition of Sec.~\ref{subsec:SecDef}, that is, by finding simulators to emulate local dishonest behaviour on the ideal resource. Dishonest behaviour from a party is simply modeled by removing the associated converter and making new free interfaces accessible to a distinguisher. Simulators should render the ideal resource indistinguishable from dishonest concrete resources.
We will only consider cases that are of interest for our security claim, which are when all parties are honest and when the source is noisy or malicious. The case of dishonest parties possibly tampering with the source is discussed in Sec.~\ref{subsec:MaliciousParty}, but it appears that composable security cannot be proven in the AC framework when a party is dishonest. Distinguishers in this section are all-powerful, both classically and quantumly.
\subsubsection{Correctness.}
The first step of the proof corresponds to the correctness of the multipartite entanglement verification protocol, meaning that when all parties and the source are honest, the parties all get either a qubit from a GHZ state or a bit $b_{out}=0$.
\begin{theorem}
The multipartite entanglement verification protocol emulates the filtered ideal resource $\mathcal{MEV}_{C}\bot$.
\end{theorem}
\begin{proof}
Let $\mathbf{D}$ be an all-powerful distinguisher trying to guess between $\mathcal{MEV}_{C}\bot$ and $\pi_{[n]}\mathcal{R}\pi_S$. Let us look at the distribution of outputs that it will get from them.
$\mathbf{D}$ first sends start signals to both resources. When interacting with $\mathcal{MEV}_{C}\bot$, it gets $C=1$ and $b_{out}=0$ with some probability $1-p$, and $C=0$ and $n$ qubits from a GHZ state with probability $p$, by definition of our resource. Throughout this paper, $p$ is tuned to match the probability distribution of $\mathcal{O}_C$. When interacting with $\pi_{[n]}\mathcal{R}\pi_S$, the distinguisher thus performs the concrete multipartite entanglement verification protocol with the same probability $p$. If all parties share a GHZ state, the condition $\sum_{i=1}^{n}y_{i}\equiv\frac{1}{2}\sum_{i=1}^{n}x_{i} \pmod{2}$ is always fulfilled (see~\cite{MEVresistant} for the complete proof), so the Verifier always sends $b_{out}=0$ at the end. Hence, $\mathbf{D}$ gets $C=1$ and $b_{out}=0$ with probability $1-p$ and $C=0$ and $n$ qubits from a GHZ state with probability $p$.
We can conclude that for any distinguisher $\mathbf{D}$, $d^{\mathbf{D}}(\pi_{[n]}\mathcal{R}\pi_S,\mathcal{MEV}_{C}\bot)=0$ hence
\begin{equation}
\pi_{[n]}\mathcal{R}\pi_S\approx\mathcal{MEV}_{C}\bot.
\end{equation}
\end{proof}
\newpage
\subsubsection{Dishonest source.}
Let us now look at the case of a dishonest or noisy source. As is customary in AC, we model this by removing the filter $\bot$ of the ideal resource and the protocol $\pi_S$ of the concrete one (see Fig.~\ref{fig:ConcreteProtDis}). This leaves a new interface free for a distinguisher to send in a classical description of a state $\rho$. Because we use authenticated rather than private classical communication, the distinguisher also receives all the leakage of the classical communication between the parties and of their oracle queries.
\begin{figure}[!ht]
\centering
\includegraphics[width=16cm]{ConcreteResourceDishonest.pdf}
\caption{The $\pi_{[n]}\mathcal{R}$ resource for $n=3$ parties when party 1 is chosen as the Verifier, accessed by a distinguisher (in green). To avoid overloading the figure we join all leakage interfaces from the classical channel resources into two arrows, but each should be considered a different interface the distinguisher has access to.}
\label{fig:ConcreteProtDis}
\end{figure}
In order to prove security, as expected from the security definition of Sec.~\ref{subsec:SecDef}, we need to find a simulator $\sigma_{S}$ such that we can prove $\pi_{[n]}\mathcal{R}\approx\mathcal{MEV}_C\sigma_S$. It should emulate dishonest behaviour and the new interfaces a distinguisher has access to when interacting with the ideal resource. \\
Let $\sigma_{S}$ be the simulator shown in Fig.~\ref{fig:SigmaS}.
It first takes as input a start signal from the $\mathcal{MEV}_C$ resource, then emulates the verification protocol by forwarding this start signal. After receiving a classical description of a state $\rho$, it forwards it to $\mathcal{MEV}_C$, then gets and forwards the bit $C$. If $C=1$, it creates a random $v \in [n]$ and a random bit string $X=\{x_i\}_{i=1}^n$ such that $\sum_{i=1}^{n}x_{i}\equiv 0 \pmod{2}$ and sends them to the outside world, except for $x_v$. Then it computes a table of possible measurement outcomes by calculating all the necessary matrix elements:
\begin{equation}
\label{eq:outputtable}
\begin{aligned}
\Pr[y_1=0,y_2=0,\ldots,y_n=0]&=\bra{00\ldots0} U \rho U^{\dagger}\ket{00\ldots0}\\
\Pr[y_1=0,y_2=0,\ldots,y_n=1]&=\bra{00\ldots1} U \rho U^{\dagger}\ket{00\ldots1}\\
&\;\;\vdots\\
\Pr[y_1=1,y_2=1,\ldots,y_n=1]&=\bra{11\ldots1} U \rho U^{\dagger}\ket{11\ldots1}
\end{aligned}
\end{equation}
with $U=H^{1-x_1}(\sqrt{X})^{x_1}\otimes H^{1-x_2}(\sqrt{X})^{x_2}\otimes \cdots \otimes H^{1-x_n}(\sqrt{X})^{x_n}$ corresponding to the local operations made by each party on their qubit in the verification protocol (Hadamard where $x_i=0$, $\sqrt{X}$ where $x_i=1$). Then it randomly samples $Y=\{y_i\}_{i=1}^n$ from this table and sends the outcomes to the outside world, except for $y_v$.\\
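In code, the whole table of Eq.~(\ref{eq:outputtable}) is simply the diagonal of $U\rho U^{\dagger}$, so the sampling step of $\sigma_S$ can be pictured as follows (our sketch; it reuses \texttt{np}, \texttt{rng} and \texttt{n} from the previous listing and assumes \texttt{rho} is the $2^n\times 2^n$ density matrix received from the distinguisher and \texttt{U} the tensor product fixed by $X$):
\begin{verbatim}
# sigma_S's classical emulation of the measurement statistics (sketch).
probs = np.real(np.diag(U @ rho @ U.conj().T))   # the whole table at once
k = rng.choice(probs.size, p=probs / probs.sum())
y = [(k >> (n - 1 - i)) & 1 for i in range(n)]   # sampled outcomes Y
\end{verbatim}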
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm]{SigmaS.pdf}
\caption{Simulator $\sigma_S$ for a dishonest source.}
\label{fig:SigmaS}
\end{figure}\\
Roughly speaking, $\sigma_S$ classically emulates the whole multiparty protocol by reproducing the classical communication and local quantum operations. Plugged into $\mathcal{MEV}_C$, this defines a new resource $\mathcal{MEV}_C\sigma_S$ (see Fig.~\ref{fig:IdealProtDis}). With this simulator we can state that:
\begin{theorem}
The multipartite entanglement verification protocol with a noisy or malicious source emulates the ideal resource $\mathcal{MEV}_C\sigma_{S}$.
\end{theorem}
\begin{proof}
In this scenario, we have to prove an equivalence between $\mathcal{MEV}_C\sigma_S$ and $\pi_{[n]}\mathcal{R}$ by showing that no distinguisher sending inputs to and receiving outputs from both can guess which resource it is interacting with. In the concrete setting this means that the parties will share a state $\rho$ that is $\tau$-close to the GHZ state, with $\tau= \mbox{TD}(\ketbra{GHZ}{GHZ},\rho)$, and that they will either keep or verify it depending on the value of $C$. In~\cite{MEVresistant}, it is shown that a state $\rho$ passes the verification test with probability $1-\frac{\tau^{2}}{2}$. \newpage
\begin{figure}[!ht]
\centering
\includegraphics[width=15cm]{IdealResourceDishonest.pdf}
\caption{The $\mathcal{MEV}_C\sigma_S$ resource for $n=3$ parties accessed by a distinguisher (in green).}
\label{fig:IdealProtDis}
\end{figure}
Let $\mathbf{D}$ be an all-powerful distinguisher trying to guess between $\pi_{[n]}\mathcal{R}$ and $\mathcal{MEV}_C\sigma_{S}$. In the concrete setting, it sends start signals at the parties' interfaces, then receives them at the source interface and sends a classical description of a state $\rho$ to $\mathcal{SG}_n$. $\mathbf{D}$ then sees all the classical communication leaking out of the authenticated classical channels. More explicitly, it will first see a bit $C$ and, if $C=0$, nothing but the qubits of $\rho$ at each party's interface. If $C=1$, a random identifier $v\in[n]$ leaks, then random bits $X\backslash\{x_v\}$ from the Verifier to each party, then the outcome of each party's measurement except the Verifier's, $Y\backslash\{y_v\}$, and finally the bit $b_{out}$ broadcast by the Verifier.
In the ideal scenario, after $\mathbf{D}$ sends in a start signal, $\mathcal{MEV}_C$ forwards it to $\sigma_{S}$, which then sends a start signal simulating the query of a state by the parties. After that, the distinguisher sends a classical description of a state $\rho$ to $\sigma_S$, which forwards it to $\mathcal{MEV}_C$; the latter outputs $C$ at all its interfaces. $\sigma_S$ gets $C$ and outputs it at its outside interface. If $C=0$, $\mathcal{MEV}_C$ outputs the qubits of $\rho$ at each party's interface. If $C=1$, $\sigma_S$ creates and outputs a random $\hat{v}\in[n]$, then computes a random bit string $\hat{X}=\{\hat{x}_i\}_{i=1}^n$ such that $\sum_{i=1}^{n}\hat{x}_{i}\equiv 0 \pmod{2}$ and sends the bits to the outside world, except for $\hat{x}_{\hat{v}}$. After that, $\sigma_S$ computes the table of Eq.~(\ref{eq:outputtable}), randomly samples $\hat{Y}=\{\hat{y}_i\}_{i=1}^n$ and outputs the outcomes to the outside world, except for $\hat{y}_{\hat{v}}$. Finally $\mathcal{MEV}_C$ outputs $\hat{b}_{out}=0$ with probability $1-\frac{\tau^{2}}{2}$ and $\hat{b}_{out}=1$ otherwise.
The probability distribution of the bit $C$ is designed to match the probability distribution given by the oracle $\mathcal{O}_C$. In the concrete setting $v$ is chosen randomly among the players through a query to the oracle $\mathcal{O}_v$, so we have that for all $i\in [n]$, $\Pr[v=i]=\Pr[\hat{v}=i]$. $X=\{x_i\}_{i=1}^n$ and $\hat{X}=\{\hat{x}_i\}_{i=1}^n$ are both chosen randomly, so their probability distributions are the same. $Y=\{y_i\}_{i=1, i \neq v}^n$ are the outcomes of the measurements of each qubit of $\rho$ by each party in the $\{\ket{0},\ket{1}\}$ basis after doing the operation indicated by each $x_i$. The state after each party applied their operation is $U \rho U^{\dagger}$ with $U=H^{1-x_1}(\sqrt{X})^{x_1}\otimes H^{1-x_2}(\sqrt{X})^{x_2}\otimes \cdots \otimes H^{1-x_n}(\sqrt{X})^{x_n}$, so the outcomes are samples from the table of Eq.~(\ref{eq:outputtable}). Hence for each $i\in[n]$ we have that $\Pr[y_i=0]=\Pr[\hat{y}_i=0]$. Finally, by definition of our $\mathcal{MEV}_C$ resource, the probability distribution of $\hat{b}_{out}$ is the same as that of $b_{out}$.
The probability distribution of the outputs given by the two resources, for any inputs, is thus the same. Hence we have that for any distinguisher $\mathbf{D}$, $d^{\mathbf{D}}(\pi_{[n]}\mathcal{R}, \mathcal{MEV}_C\sigma_{S})=0$ and
\begin{equation}
\pi_{[n]}\mathcal{R}\approx\mathcal{MEV}_C\sigma_{S}.
\end{equation}
\end{proof}
\subsubsection{Conclusion.}
We have proved that $\pi_{[n]}\mathcal{R}\pi_S\approx\mathcal{MEV}_C\bot$ and that $\exists \sigma_S$ s.t. $\pi_{[n]}\mathcal{R}\approx\mathcal{MEV}_C\sigma_S$. This means that the multipartite entanglement verification protocol presented is composable when all parties are honest but with a possibly dishonest source. The protocol can thus be thought of as a black box and equivalently replaced by the $\mathcal{MEV}_C$ resource (Fig.~\ref{fig:IdealHonestMEV}) when designing protocols using this one as a subroutine. It assumes that the parties have access to resources $\mathcal{R}$, including common oracles and quantum memories.
\subsubsection{Application: Verified GHZ sharing resource.} The composability result we proved allows $n$ parties to securely compose the protocol with itself multiple times. If the probability that $C=0$ is sufficiently small, the protocol will, in expectation, be repeated for enough rounds to allow the parties to build high confidence in the source's ability to create a state close to the GHZ state. Since the round where they will actually use the qubits sent by the source to perform some communication or computation protocol is unknown to the source, it is not possible for the source to adapt and decide when to send faulty states. Hence it forces the source to send states that are sufficiently close to the GHZ state every time it is queried. We call this protocol the multi-round multipartite entanglement verification protocol.\\
By defining converters $\{\Pi_i\}_{i=1}^n$ representing the aforementioned protocol, we can construct a resource $\Pi_{[n]}\mathcal{MEV}_{C}\bot$ that gives either a state at least $\epsilon$-close to the GHZ state to $n$ parties or an abort signal (see Fig.~\ref{fig:MultiRound} for a 3-party example and the explicit description of a $\Pi_i$). As it is a composable framework, AC allows us to state that
\begin{align}
\Pi_{[n]}\pi_{[n]}\mathcal{R}\pi_S\approx\Pi_{[n]}\mathcal{MEV}_C\bot \\
\textnormal{and }
\exists \sigma_S \textnormal{ s.t. } \Pi_{[n]} \pi_{[n]} \mathcal{R} \approx \Pi_{[n]}\mathcal{MEV}_C \sigma_S.
\end{align} \newpage
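Schematically, each converter $\Pi_i$ just loops over activations of the one-round box until a qubit is delivered or a test fails (our pseudocode-style sketch; the callable \texttt{mev\_round} stands for one activation of $\mathcal{MEV}_C$, returning a dictionary as in the earlier sketch):
\begin{verbatim}
# Multi-round wrapper Pi_i around the one-round MEV_C box (sketch).
def multi_round(mev_round):
    while True:
        res = mev_round()
        if res["C"] == 0:
            return ("state", res["out"])   # keep the shared state
        if res["b_out"] == 1:
            return ("abort", None)         # the source failed a test
        # b_out = 0: confidence in the source grows, query again
\end{verbatim}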
\begin{figure}[!ht]
\centering
\includegraphics[width=18cm]{Multiround.pdf}
\caption{Multi-round verification resource $\Pi_{[n]}\mathcal{MEV}_{C}\bot$ for 3 parties (in the red dotted square). It takes start signals as input and outputs either a shared quantum state $\epsilon$-close to the GHZ state or an abort signal. }
\label{fig:MultiRound}
\end{figure}
Let us define a {\em verified GHZ state sharing} resource that we call $\mathcal{GHZ}$ (see Fig.~\ref{fig:GHZ}). This resource is the idealisation of multipartite entanglement verification achieved through an interactive protocol between the source and the parties. We assume that at each round of the interaction a state is produced and shared by the source and the parties perform some verification protocol until, in the end, they decide to trust that the shared state is close to the GHZ state or abort the protocol.
$\mathcal{GHZ}$ takes as input start signals from the parties, then interacts with the source and finally outputs either a state $\epsilon$-close to the GHZ state or an abort signal. The interaction is abstractly modeled in the following way: first a start signal is sent to the source interface, which replies with the classical description of a state $\rho$. Then $\mathcal{GHZ}$ will either ask for another state by sending a ``Continue'' signal to the source interface, or output an ``Abort'' signal on all interfaces because the current state was found to be far from the GHZ state, or, lastly, stop the protocol, share the last state it has received on the parties' interfaces and send a ``Stop'' signal to the source interface.
This resource abstracts all the local operations and communication between the parties. From their point of view it is simply a source of states that are close to the GHZ state. However, to capture possibly malicious behaviour from the source, we include the interaction on the source interface. We argue this is an abstract enough resource that captures all interactive verification procedures where the parties verify a number of states from the source before asserting that the source provides states close to the GHZ state.
\begin{figure}[!ht]
\centering
\includegraphics[width=14cm]{GHZverif.pdf}
\caption{Verified GHZ sharing resource for 3 parties. It takes start signals as input from the parties on the left interface then interacts with the source on the right interface. It outputs either a shared quantum state $\epsilon$-close to the GHZ state or an abort signal to the parties.}
\label{fig:GHZ}
\end{figure}
Similarly to Sec.~\ref{subsec:SecAnal}, when we define $\bot'$ and $\sigma_C$ as in Fig.~\ref{fig:GHZproof}, we can prove that
\begin{align}
\Pi_{[n]}\mathcal{MEV}_C\bot \approx \mathcal{GHZ}\bot'\\
\textnormal{and } \Pi_{[n]}\mathcal{MEV}_C \approx \mathcal{GHZ}\sigma_C
\end{align}
Hence,
\begin{align}
\Pi_{[n]}\pi_{[n]}\mathcal{R}\pi_S\approx \mathcal{GHZ}\bot'\\
\textnormal{and }
\exists \sigma_S \textnormal{ s.t. } \Pi_{[n]} \pi_{[n]} \mathcal{R} \approx\mathcal{GHZ}\sigma_S
\end{align}
\vspace{1cm}
This means that the multi-round multipartite entanglement verification protocol constructs the $\mathcal{GHZ}$ resource out of $\mathcal{R}$. We can also state that it is composably secure in the setting of all honest parties and in the presence of a possibly malicious or noisy source. We can conclude that this protocol allows $n$ parties to get a GHZ state as a subroutine of a bigger protocol with an untrusted source.
\begin{figure}[!ht]
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[width=0.8\textwidth]{GHZverifFiltered.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\includegraphics[width=\textwidth]{GHZverifDishonest.pdf}
\end{minipage}
\caption{Filter $\bot'$ (on the left) and simulator $\sigma_C$ (on the right) to plug into the $\mathcal{GHZ}$ resource. The former represents the honest use of the resource and allows us to state the correctness of the AC security proof (Eq. (12)). The latter is the simulator that models the interfaces to which a distinguisher has access when we consider the source to act maliciously using the multi-round multipartite entanglement verification resource $\Pi_{[n]}\mathcal{MEV}_C$. It allows us to derive the second part (noisy or malicious source) of the AC security proof (Eq. (13)).}
\label{fig:GHZproof}
\end{figure}
\newpage
\section{Discussion}
\vspace{-3mm}
\subsection{Case of honest parties}
\vspace{-1.5mm}
The multipartite entanglement verification protocol is particularly suited to a distributed computing scenario where the parties are honest but where there could be a faulty resource. They can use this protocol to check if the noise of an entanglement source is small enough for practical use. Indeed, if after many rounds of performing this protocol the output is most of the time $b_{out}=0$, they can realistically be confident that the source is producing states that are close to the GHZ state. Its composability allows for the construction of the multi-round verification resource, which can find practical use in larger communication protocols, for example in anonymous ranking~\cite{Ranking}, quantum secret sharing~\cite{SecretSharing} or distributed consensus~\cite{Consensus} protocols. In fact, any protocol that starts with a GHZ state shared among $n$ honest parties who don't trust their source can be composed with this one in a secure way. This might seem limiting but is in fact realistic in many distributed computing settings. This protocol can also be seen as a building block of a quantum network. We can reasonably assume that parties are honest when performing protocols establishing the network, in the same way we think about parties when considering entanglement distillation, network or transport layer protocols of the OSI model of the classical Internet. An intermediate-scale quantum Internet example is a network where a source shares a GHZ state with all parties at each time-step, which they either use or verify. Our verification protocol can in this case be hidden in the assumptions of the network.
One may wonder why we did not start by defining the multi-round and the verified GHZ sharing resources of the above section from the beginning. These are indeed the practical resources that one would like to use in larger protocols, as they directly provide quantum states that are $\epsilon$-close to the GHZ state. Our choice was based on the fact that our priority was not to define the most useful resource \emph{ad hoc}, but to model a resource that is as close as possible to the signals actually sent by the parties when performing the protocol in real life, and then to use this resource in a composably secure way to obtain a practical multipartite entanglement verification resource, namely the multi-round resource.
Our one-round resource captures the important parameters for composing the protocol in larger routines, and it allows for modularity and a more precise understanding of what happens in the multi-round case. We will also see below that dishonest behaviour of a party already causes composability issues in the one-round case, so we get a better understanding of these issues by proving composability in this case. Moreover, the box-shaped resource that we construct using AC (Fig.~\ref{fig:IdealHonestFilteredMEV}) is close to the black-box picture that we would like to have when thinking of the building blocks of the quantum Internet. Finally, we emphasize that this protocol only assumes classical communication between the parties and single-qubit local operations for each party, making it a good candidate for scalable application development.
\subsection{Case of a malicious party}
\label{subsec:MaliciousParty}
When studying this problem, it is natural to think about the case of dishonest parties possibly controlling the source. If we assume that dishonest parties are trying to make the others accept a state that is not close to the GHZ state, results from~\cite{MEVresistant} and~\cite{MEVexperimental} show that for one round of verification, the output bit $b_{out}$ depends on the minimal distance between the GHZ state and the shared state, up to local operations on the part of the state held by the dishonest parties. This result holds even when the dishonest parties have complete control over the state generation resource. For this to hold, we have to assume that the Verifier is always honest and that the parties cannot influence the probability distributions of the oracles $\mathcal{O}_C$ and $\mathcal{O}_v$.
Yet, as discussed in the first part of this paper, it seems that this protocol cannot be proven composable in the Abstract Cryptography framework when considering a dishonest party. Indeed, one straightforward strategy for a dishonest party would be to make the protocol abort randomly, which would give false information about the source. Any dishonest party actually has complete control over the distribution of the concrete resource's output $b_{out}$, while the ideal resource's output is fixed by the distance between the input state and the GHZ state. Even if we add switches to our resource on which a simulator could act to make it abort (as is customary in such cases), we could not reproduce the abort probability distribution of our concrete protocol in the ideal world. It seems impossible to find a simulator that emulates the interfaces a distinguisher has access to when removing one of the $\pi_i$. This can be seen in the AC framework by removing the converters corresponding to the dishonest parties and finding distinguishing attacks for every possible simulator. We would moreover need extra assumptions on the quantum registers and on the access to the multiparty computation oracles $\mathcal{O}_C$ and $\mathcal{O}_v$ that seem impractical in a near-term network.
However, our composability result comes on top of the security proof of~\cite{MEVresistant}, meaning that our multipartite entanglement verification protocol is secure against possible coalitions of dishonest parties and source trying to persuade others that they share a GHZ state when they don't, and composably secure against a malicious source. This does not limit the use of our protocol to the all-honest case. No attack is known that makes use of the repetition of the protocol to alter the \textit{integrity} of the shared state more than simply repeating the attack described in~\cite{MEVresistant}. On the other hand, the \textit{availability} of the resource can be compromised by dishonest behaviour in an unpredictable way. This sheds light on the pros and cons of using a game-based framework versus a composable framework. In the former we can restrict dishonest behaviour to specific attacks and get specific security properties, while in the latter we can only act on how powerful the class of distinguishers is but get more general security claims. By studying the protocol in different frameworks, we are able to take the best of all approaches and show different aspects of security that increase confidence in the protocols.
\subsection{Practical implementation in a near-term quantum network}
\label{subsec:PracticalImp}
To actually implement the protocol, one has to replace the resources in $\mathcal{R}$ with actual protocols or physical resources. Multiparty classical protocols should take the place of oracle calls, and have to be proved composable in order to securely construct $\mathcal{R}$ out of them and the quantum resources. An example of a protocol replacing calls to $\mathcal{O}_C$ is the random bit protocol described in~\cite{Anonymity} and~\cite{classAlgo}. Previous work~\cite{MEVresistant} shows that by choosing the probability of using the qubit for computation ($C=0$) to be $\frac{\epsilon^2}{4n\delta}$ for some $\delta>0$, all honest parties have the guarantee that the probability that the state used has distance at least $\epsilon$ from the correct one is at most $\frac{1}{\delta}$.
Qubit transmission should be taken care of by physical channels and link layer protocols, which one has to study to see if they are equivalent to the quantum channel resource presented in this paper. As previously mentioned, any noisy channel can be modeled by a perfect channel in which a noisy state is given as input, and a noisy source can be modeled by a perfect source to which a classical description of a noisy state is given. The $\mathcal{SG}_n$ resource is designed as an attempt to capture what happens in the most general case when the protocol is performed in the lab where, at some point, a classical signal is sent to a quantum device that creates a state. Usually some information is accessible to the person controlling the device to check (for example by heralding photons) if the right state has been created. We suppose here that none of this information leaks from $\mathcal{SG}_n$, as this is the most restrictive scenario. Moreover, we don't restrict the source to create only $n$-qubit states, but merely require that it is able to create states up to this size. The proof holds even if the source creates bigger states and keeps part of them or sends them to a malicious party. $\mathcal{SG}_n$ is thus not meant to be realistic but to give an abstract embodiment of any source. A photonic implementation of a loss-tolerant variation of the original protocol has been achieved with 4 parties~\cite{MEVexperimental}. This leads us to expect a near-term realization of our protocol, presenting all security properties as well as composability and modularity for use in bigger protocols.
Lastly, the quantum memory assumption can be removed by asking the parties to measure their qubit directly after receiving it and to flip the outcome randomly depending on the input given by the Verifier. We would lose the security properties against a malicious party from~\cite{MEVresistant}, which are based on the actual order of the inputs for each party. In our all-honest setting this does not matter, so this protocol can actually be used in near-term architectures to securely check a source. Experimental realization of this protocol in a composable way is currently being studied, which would allow this protocol to be taken as a concrete building block for applications in the quantum Internet. Whether this protocol should remain in the application layer or be hidden in some network or transport layer is still to be determined and will depend on future developments in quantum network architectures.
\section*{Acknowledgements}
We would like to thank Simon Neves, Léo Colisson, Atul Mantri, Anna Pappa, Damian Markham and Frédéric Grosshans for fruitful discussions. This project is part of the Quantum Internet Alliance and has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 820445. We also acknowledge support from the European Union through the Project ERC-2017-STG-758911 QUSCO, from QuantERA through the project QuantAlgo and from the ANR through the Project ANR-17-CE39-0005 quBIC.
\section*{References}
\bibliographystyle{ieeetr}
\label{intro}
Quantum Chromodynamics [QCD], the theory of the strong interactions within
the Standard Model of particle physics~\cite{QCD,QCD-2,asyfree,asyfree-2}, describes
the building blocks of strongly interacting particles, like proton, neutron and many others,
and the forces acting between them. The fundamental building blocks of these
particles are the spin-1/2 quarks $q$~\cite{quarks,quarks-2}, which come in three families.
Their masses cover a large range~\cite{Nakamura:2010zzi}. The three lightest quarks $u,d,s$
weigh only a small
fraction of the proton mass, the charm quark $c$ just about the proton mass
while the two heavy quarks $b,t$ weigh more than 5 and 180 times the proton mass,
respectively. Baryons, like proton and neutron, are composed of three quarks $qqq$,
while mesons, like pions, are bound states $q \bar{q}$ of quark-antiquark pairs.
Since the spin and the spatial $S$-wave functions of the lightest baryons are
symmetric under the exchange of quarks, the Pauli principle demands the quarks to be labelled
by new charges, called colours, which discriminate between the three components
of the baryons~\cite{color,color-2,color-3}. Rephrased within the SU(3)$_C$ symmetry group for three
colour degrees of freedom, the colour wave function is antisymmetric. This
threefold antisymmetric combination of colours renders baryons non-coloured, {\it i.e.}
they are white SU(3)$_C$ singlets; summing up the quark colours symmetrically in mesons,
these hadrons are white too.
By reducing the lifetime of the neutral pion by a factor $3^2 = 9$, the three-colour
extension reconciles the prediction of the quark model with the experimental measurement,
a crucial point in establishing the colour charges.
Equivalently to the electric charges in electrodynamics,
the colour charges of the quarks can serve
as sources for force fields, which bind the quarks within the mesons and
baryons~\cite{Nambu}. Eight such gluon fields $g$ are predicted by
the non-abelian gauge group SU(3)$_C$. Like the photon field they are vector
fields with spin = 1, but in contrast to the photon they carry colour charges
themselves, mediating colour flips of quarks by absorption or emission.
This theory, Quantum Chromodynamics, is theoretically described
by a non-abelian Yang-Mills gauge
theory \cite{YangMills}. Gluons couple to each other, giving rise to
three- and four-gluon interactions. These self-interactions of the
gluons have profound consequences for the QCD coupling. While virtual fermionic
quarks render the vacuum colour-diamagnetic, the large contribution
of virtual bosonic gluons renders the vacuum finally colour-paramagnetic.
Thus, in contrast to the electric coupling, the QCD coupling decreases
with decreasing distance and the quarks and gluons become asymptotically free~\cite{asyfree,asyfree-2}.
Quarks and gluons therefore interact weakly at short
distances while the strength of their interactions grows with increasing
distance, suggesting the permanent confinement of particles carrying
non-zero colour charges \cite{Wilson}.
Quarks can be {\it seen} in the scattering of electrons or neutrinos
off nucleons. The final-state pattern of these processes reveals
that the leptons scatter off point-like, nearly massless spin-1/2 constituents
of the nucleons which carry the electric and weak charges of the quarks.
Gluons inside nucleons, which do not carry electric nor weak charges,
manifest themselves only indirectly. Half of the momentum
of fast moving nucleons cannot be accounted for by the spectrum
of the quarks alone, and it must be attributed to gluons as flavour-neutral
constituents \cite{ChLlSm}. In addition, the quark spectrum is modified by gluon
bremsstrahlung if the energy of the impinging leptons is raised from low
to high values \cite{Gross}.
However, QCD suggests another method to unravel its basic constituents.
As a result of asymptotic freedom, quarks and gluons move as
quasi-free particles, called partons~\cite{Feyn}, at short distances. When these
coloured objects
are accelerated in scattering processes, they develop bremsstrahlung cascades
of narrowly collimated gluons and quark-antiquark pairs, which finally transform
to equally well collimated hadrons at distances at the colour
confinement radius of about 1 fm [$10^{-13}$ cm].
Thus, the quarks and gluons at small distances map themselves into jets of hadrons
at large distances. Since the quanta associated with the confinement forces are soft,
their impact on the energies and momenta of the jets is small
so that the configurations of high-energy quarks and gluons at short
distances are truly reflected in the energy and angular distributions
of the jets. Since these jets can be observed experimentally, the properties
of quarks and gluons can be determined experimentally by jet analyses, such as
their spins, flavour and colour charges, and their basic interactions.
It should be stressed here that the field of jet physics and QCD owes a great deal of
gratitude to the development and successful operations of high energy colliders, in
particular, electron-positron colliders. Starting from SPEAR at SLAC, which started the
physics runs in 1972 and had a maximum beam energy of 4 GeV, the subsequently built
$e^+e^-$ colliders DORIS (physics start 1973; maximum beam energy 5.6 GeV) and PETRA
(physics start 1978; maximum beam energy 23.4 GeV) at DESY, PEP (physics start 1980;
maximum beam energy 15 GeV) and SLC (physics start 1989; maximum beam energy 50 GeV)
at SLAC, TRISTAN at KEK (physics start 1987; maximum beam energy 32 GeV), and LEP
(physics start 1989; maximum beam energy 104.6 GeV) at CERN, saw the main jet activity
and detailed tests of QCD. The results from these machines are the primary focus of
this review and are discussed in the first six chapters. However, in a long epilogue,
described in chapter 7, entitled jets as tools, we have discussed some selected results
related to QCD and jets,
which have come out from the electron-proton collider HERA (physics start 1992; maximum
$e^+/e^-$-beam energy 27.6 GeV and maximum proton energy 920 GeV) at DESY, and the hadron colliders
Tevatron (physics start 1987; maximum $p$ and $\bar{p}$ energy 980 GeV) at Fermilab and
finally the LHC (physics start 2010; maximum proton beam energy so far 3.5 TeV). It is
not our mandate to discuss the technical aspects of these machines, which will take us
too far afield from the main focus, namely the historical development of jets and QCD
from the theoretical and experimental points of view. For the interested readership
of this review, the high energy machine related aspects are summarised concisely in
Reviews of Particle Physics by the Particle Data Group
(PDG). Many of these colliders, in fact all the $e^+e^-$ colliders, are no longer working
in particle physics, and for these we refer to the 1996
PDG review~\cite{Barnett:1996hr}, while for the others to the 2010
PDG review~\cite{Nakamura:2010zzi}.
Quite early, the final states in $e^+ e^-$ annihilation
to hadrons had been predicted to consist [primarily] of two jets evolving
from a quark-antiquark pair produced in the collision process \cite{THquark,THquark-2,THquark-3}:
\begin{equation}
e^+ e^- \to q \bar{q} \to 2 \, jets \,.
\end{equation}
Experimental evidence for these quark jets was first provided at the
$e^+ e^-$ collider SPEAR \cite{EXPquark,EXPquark-2} by demonstrating that the
hadrons in the final states were not isotropically distributed but
accumulated near the central event axis specified by the momenta
of the quarks \cite{PHENquark}. At PETRA energies ($12 \leq \sqrt{s} \leq 46.6$ GeV)
the two jets could be recognised without any sophisticated analysis,
{\it cf.} Fig.~\ref{fig:1.1ab} (left-side frame). Angular distributions and charge analyses
finally proved the jets to be associated with spin-1/2 quarks indeed.
\begin{figure}
\center{
\resizebox{1.0\columnwidth}{!}{
\includegraphics{Tasso-2jet.eps} $\;\;\;\;\;\;\;\;\;\;\;$ \includegraphics{Tasso-3jet.eps}}}
\caption{Observation of (a) 2-jet final states in
electron-positron annihilation to hadrons: $e^+ e^- \to q
\bar{q} \to 2 \, jets$ (TASSO~\cite{EXP4gluon-TASSO}); and (b) 3-jet final states
in gluon bremsstrahlung off quarks in $e^+ e^-$ annihilation:
$e^+ e^- \to q \bar{q} g \to 3 \, jets$ in the TASSO detector~\cite{EXPgluonjet}.}
\label{fig:1.1ab}
\end{figure}
First indirect evidence of gluons was provided by the PLUTO collaboration at the $e^+ e^-$ collider
DORIS~\cite{Berger79} from the decay $\Upsilon(1S) \to ggg$.
However, as $\Upsilon(1S)$ has a mass of 9.46 GeV, significant non-perturbative contributions
had to be taken into account.
PLUTO used their 2-jet data below the resonance to extract the $q^* \to {\rm hadrons}$
fragmentation and used this to also estimate the $g^* \to {\rm hadrons}$ fragmentation.
With this, their analysis was in agreement with the expectations from
the underlying process $\Upsilon(1S) \to ggg$.
Gluon jets were later discovered unambiguously at the $e^+ e^-$ collider PETRA
\cite{EXP4gluon-TASSO,EXP4gluon-MARK-J,EXP4gluon-PLUTO,EXP4gluon-JADE}
running at higher energy (typically 30 GeV). A 3-jet event from the very
early PETRA data~\cite{EXPgluonjet} is shown in Fig.~\ref{fig:1.1ab} (right-side frame).
Such events had been predicted theoretically \cite{THgluon}
for configurations in which the quark pair produced in $e^+e^-$ annihilation
radiates a hard non-collinear gluon:
\begin{equation}
e^+ e^- \to q \bar{q} g \to 3 \, jets \,.
\end{equation}
This bremsstrahlung mechanism is characteristic for gauge theories
and it is familiar from electrodynamics where charges accelerated in
collision processes emit photons, as in electron-positron scattering
$e^+ e^- \to e^+ e^- \gamma$, for example.
Bremsstrahlung gluons in QCD which transform to hadron jets generate
characteristic patterns in the final states which allow one to prove
the existence of gluons: With increasing energy the primary
quark jets become narrower; the events become flat and
``Mercedes-Star-like'' ($Y$-shaped); and finally three well separated jets emerge.
Detailed comparison of the event structure with the underlying theory (QCD) required,
apart from the perturbative (hard) processes, also the modeling of the non-perturbative (soft)
features of quark and gluon fragmentation. Hence, event generators,
incorporating the perturbative and non-perturbative aspects of QCD
were necessary to relate the emerging jet distributions to the predictions derived
from gluon bremsstrahlung
in QCD. References~\cite{PHENgluon1,PHENgluon2} illustrate the early use of such
event generators, taking the form of
Monte Carlo simulations to match the theoretical calculations with the
experimental measurements, which were state-of-the-art tools at that time, and which helped
in establishing the properties of the quark and gluon jets. To avoid any confusion,
Monte Carlo in the present context is a numerical computational technique to calculate
multi-dimensional distributions of a process in which events are generated randomly but
weighted to reflect the underlying dynamics.
Dedicated experiments at PETRA and PEP and theoretical progress in the
80's greatly consolidated jet physics and led to quantitative tests of
QCD. A more modern view of the use of Monte Carlo programs, in particular, their role as tools
in hard hadronic collisions can be found in recent reviews, for example~\cite{Mangano:2005dj}.
The program to establish QCD in studying quark and gluon jets
was naturally continued at the $e^+ e^-$ collider LEP, see, for example,
\cite{LEP}, where the increased energy could be exploited
to measure the gluon self-interactions in multijet events,
\begin{equation}
e^+ e^- \to q \bar{q} q^\prime {\bar{q}}^\prime, \;\;
q \bar{q} g g
\to 4 \, jets \,,
\end{equation}
with the production amplitudes dominated by the $q\bar{q} gg$ states, which
included the virtual gluon splitting, {\it e.g.} $g^\ast \to gg$. By measuring energy and angular
distributions of these 4jet-events the colour charge
of gluons could be determined, the crucial element for generating
asymptotic freedom in QCD. Correspondingly, the variation of the
quark/gluon coupling could be examined for a large range of energies,
though experiments at PEP, PETRA and TRISTAN had already confirmed the
running of $\alpha_s(Q^2)$ in agreement with the renormalisation group (RG)
equation.
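For reference, the leading-order solution of this equation can be written compactly as
\begin{equation*}
\alpha_s(Q^2) = \frac{12\pi}{(33-2n_f)\,\ln (Q^2/\Lambda^2)}\,,
\end{equation*}
where $n_f$ is the number of active quark flavours and $\Lambda$ the QCD scale parameter; for $n_f \leq 16$ the coupling decreases logarithmically with $Q^2$, which is precisely the running confirmed by these experiments.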
The quark/gluon jet phenomena were also indicated at the $pp$ collider
ISR~\cite{Breakstone:1983pb}, before high-energy jets were unambiguously isolated at the
$Sp\bar{p}S$~\cite{Scott:1985sr}. Since then, jets in hadronic collisions have become
precision tools in not only testing QCD and the electroweak physics at the highest available
energies (such as at the Tevatron and the LHC), but also in searching for phenomena
beyond-the-Standard-Model (BSM),
such as dijet resonances and quark substructure. By the same token, jet phenomena observed
at hadron-hadron and lepton-hadron colliders have provided fundamental information on the
quark and gluon densities (parton distribution functions) of the proton. We shall review this
towards the end of this paper, but for now concentrate on the general development of
jet physics and QCD which took place in the context of $e^+e^-$ colliders.
Quark and gluon processes at short distances can be treated, due to
asymptotic freedom of QCD, in perturbative expansions for the weakly
interacting fields. Therefore the basic short-distance processes
as well as the evolution of the quark/gluon cascades are well
controlled theoretically. However, the final transition from the
quark/gluon configurations to hadrons is a long-distance
process governed by large values of the QCD coupling which
cannot be treated in perturbation theory and which, so far, cannot
be analysed rigorously. Instead, QCD-inspired models have been
developed which parametrise the transition phenomenologically.
Two alternative approaches have been worked out in detail.
In the first picture a quark moving out of the short-distance
regime generates a string-like gluonic flux tube which breaks
up repeatedly by spontaneous quark-antiquark creation when
its length approaches about 1 fm. This mechanism generates
a jet of collimated hadrons with energy and direction corresponding
to the initial high-energy quark \cite{FieldF}. Gluons were
either treated analogously \cite{PHENgluon1,PHENgluon2},
or assumed to generate kinks, local depositions
of energy and momentum
in the strings stretched between quarks and antiquarks \cite{Lund}.
Alternatively in cluster fragmentation, after splitting
all final gluons in a quark/gluon cascade to quarks and
antiquarks, $q \bar{q}$ pairs with low invariant masses
transform to hadronic resonances which eventually
decay to the standard low-mass mesons and baryons
\cite{Herwig1}.
After the important work of Ref.~\cite{WuZo} (see, also~\cite{Lanius:1980nv,Lanius:1980mz,Daum:1980rp}),
numerous methods have been proposed, with steadily increasing refinement,
to reconstruct the jets experimentally. One class
consists of algorithms based on sequential jet recombination.
Particles are sequentially combined if their distance in
momentum space falls below a pre-set minimum. Typical examples are
the JADE algorithm \cite{JADE}, where the distance is defined
by the invariant mass of pairs, later developed into algorithms
based on transverse momenta $k_t$. A second class is
built by cone algorithms in which particles belonging to
pre-defined cones are grouped
into jets. Originally introduced to regulate singularities in infrared
and collinear quark-gluon configurations \cite{SterW,Sterman:1979uw}, they have
been developed into a standard method in hadron collider analyses.
The original jet analyses at PETRA were based on independent-jet
fragmentation \cite{PHENgluon1,PHENgluon2}, providing a valid tool
for reconstructing the quark/gluon configurations at small distances
in $e^+ e^-$ annihilation. Subtle effects observed later in the
hadron distributions between jets were interpreted as the string
effect~\cite{Lund,Andersson:1983ia}, or explained alternatively by additional
soft gluon radiation with angular ordering~\cite{Azimov:1986sf}.
PYTHIA \cite{Pythia,Sjostrand:2007gs}, HERWIG \cite{Herwig} and SHERPA \cite{Sherpa}
are modern versions of Monte Carlo programs which are used in
present jet analyses.
The connection of jets with QCD has been extensively treated in
the literature under theoretical and experimental aspects, see
{\it e.g.}~\cite{Kramer}-\cite{EllisK}. This review
will summarise the basic concepts of jet physics, intended
to describe how jet physics has been exploited to establish QCD
as the non-abelian quark/gluon gauge field theory of the strong
interactions. Addressing also communities outside high energy
physics, the review is presented mostly in a non-technical language,
giving a qualitative account of theoretical and experimental
developments which have dramatically changed the earlier picture
of the strong forces in particle physics. In doing this, we have
included some landmark measurements in a chronological order as they
were reported. The same remark applies to the discussion of the theoretical aspects,
and we have emphasized only works which were contemporary with the discoveries.
The picture now is based on a few fundamental principles summarised succinctly in Quantum
Chromodynamics.
The topics on which we concentrate are the non-perturbative and
perturbative elements of quark/gluon jets,
including experimental and phenomenological
methods to define the jets. Early evidence and
indirect indications of quark and gluon jets in $e^+ e^-$
annihilation to quarks at SPEAR and $\Upsilon$ decays to gluons
at DORIS will be reviewed. In the central core of this
paper, we will describe the theoretical basis of the discovery
of gluons in the three-jet events at PETRA and the measurement
of their properties. The picture will be completed with LEP. Finally
we will demonstrate in a few examples how jets can be used
as tools for measuring other parameters and fundamental processes of QCD, the
gluon content of nucleons, QCD Rutherford scattering, {\it etc.},
but also how to exploit jets for identifying electroweak $W,Z$ and Higgs
bosons, top-quark physics, and search for new phenomena, in particular possible substructure of
partons. Such problems have been addressed at HERA and the Tevatron,
and they will play an important role at the LHC. However, despite discussing some of the
most recent measurements in jets and QCD, this is not a review of the up-to-date theoretical
advances. We have included some of these topics to introduce the readers to the
vast areas of particle physics research in which jets and QCD have branched out,
but emphasize that this article aims primarily at providing a historical perspective.
This paper is organised in 8 sections and the main topics discussed
are as follows: Fragmentation properties of quarks and gluons (section 2),
discovery of quark jets at SPEAR and the first application of perturbative QCD to
derive the 2-jet cross section in $e^+e^-$ annihilation (section 3),
gluon jets in $\Upsilon$ decays and the basic partonic process $\Upsilon \to ggg$
(section 4), jets in QCD and at PETRA
and PEP (section 5), jets and QCD studies at LEP (section 6), jets as tools, with
applications in Deep Inelastic Scattering Processes, $\gamma \gamma$ collisions, and hard hadronic
collisions at the Tevatron and the LHC (section 7). A brief summary (section 8) will conclude
this review.
\section{The fragmentation of quarks and gluons}
\label{sec:fragmentation}
Quarks and gluons move, due to asymptotic freedom of QCD, as quasi-free
particles at short distances of the order of $10^{-15}$ cm ($10^{-2}$ fm) in the
femto-universe. When these coloured objects separate to distances of
the order of 1 fm or more, confinement forces become effective
which bind the quarks and gluons in hadrons. The hadronisation proceeds
through the formation of jets in high energy processes which is driven
by two dynamical mechanisms. These mechanisms can be explicated most easily
in $e^+ e^-$ annihilation to hadrons, $e^+ e^- \to q \bar{q}, q \bar{q} g,
...$, {\it cf.} Fig.~\ref{fig:fragm.ab}. {\it (i)} Quarks
which are suddenly accelerated in the production
process at short distance and time of the order of $1/E \ll 1$ fm,
will radiate gluons preferentially into a cone of small aperture,
$dN/d\Theta^2 \sim 1/\Theta^2$. Subsequently the gluons may split
into two gluons or quark-antiquark pairs, and, repeatedly, quarks and gluons
again into quark and gluon pairs, so that the original quark fragments finally
into a quark/gluon cascade within a narrow cone. {\it (ii)} When
the coloured quarks on the way out of the femto-universe to large distances
separate to more than 1 fm, a gluonic flux tube of narrow transverse dimensions
builds up which fragments into ordinary hadrons. Similar mechanisms
lead to the hadronisation of gluons. In total, the perturbative quark/gluon
cascade as well as the partons fragmenting non-perturbatively into hadrons
generate jets of particles preserving, in momentum and energy, the original
kinematic characteristics of their parent partons.
\begin{figure}
\center{
\resizebox{0.95\columnwidth}{!}{
\includegraphics{F_fragmq.eps} \hspace*{2cm} \includegraphics{F_fragmc.eps}}}
\caption{(a) Quark fragmentation to hadrons induced by confinement forces;
(b) Quark/gluon cascades at high energies in QCD.
}
\label{fig:fragm.ab}
\end{figure}
\subsection{Quark fragmentation}
When quarks and antiquarks in highly energetic processes, like $e^+ e^- \to
q \bar{q}$, separate from each other, a linear gluonic flux tube is expected
to build up, with energy density of about 1 GeV/fm and small transverse size,
which will confine the two coloured objects. For sufficiently large separations
$R \sim 1$ fm, enough energy will be accumulated in the flux tube so that new
quark-antiquark pairs can be created spontaneously and the flux tube breaks up.
This expectation is borne out by lattice analyses \cite{Bali} which, in static
approximation, support this picture qualitatively. Beyond the short-distance
regime, the potential between heavy quarks rises linearly with distance,
$V(R) = \sigma R$ with $\sigma \approx 1$ GeV/fm. However, when the distance
between the heavy quarks exceeds a value of about 1.2 fm, light quark pairs
$q \bar{q}$ are created and the heavy-quark $[Q \bar{Q}]$ system breaks up
into two separate mesons $[Q \bar{q}]$ and $[\bar{Q} q]$.
Repeating this break-up process, adjacent quarks and antiquarks coalesce to
hadrons with small transverse momenta of the order of 350 MeV so that narrow jets
of collimated hadrons are generated~\cite{FieldF,Lund}. If the partition
of energies in $q \to h_{[q{\bar{q}}']} + q'$ scales with the energy
of the primary quark, the number density of hadrons $D(z)$ observed
with energy $z = E^h/E_q$ obeys, for a single species, the recursion formula~\cite{FieldF}
\begin{equation}
D(z)= f(1-z) + \int_z^1 f(\eta) F(z/\eta) \frac{d\eta}{\eta}~,
\end{equation}
with $f(\zeta)$ denoting the break-up probability
for fractional energy $\zeta$. This equation states that the detected meson might itself be
the first-rank primary meson, with probability $f(1-z) dz$; if not, the first-rank
primary meson has left a momentum fraction $\eta$ with probability $f(\eta) d\eta$, and in this
remaining cascade the probability to find $z$ in $dz$ is $F(z/\eta) dz/\eta$.
Dividing out by $dz$ gives the above equation. The
probability $f(\zeta)$, which
must be determined experimentally, is generally parametrised as a polynomial
$\sim (1-\zeta )^\beta$ or as an exponential $\sim \zeta^{-1} (1-\zeta)^\beta
\exp[-\alpha /\zeta]$ in string fragmentation.
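The content of this recursion is easily checked numerically. The following
short Python sketch (ours and purely illustrative; the value of $\beta$, the
grid, and the identification of the cascade density $F$ with $D$ itself are
assumptions made for the exercise) iterates the equation to its fixed point
for the polynomial parametrisation $f(\zeta)=(1+\beta)(1-\zeta)^\beta$:
\begin{verbatim}
import numpy as np

beta = 2.0
def f(zeta):                  # break-up probability, normalised on [0,1]
    return (1.0 + beta) * (1.0 - zeta) ** beta

z = np.linspace(1e-3, 1.0, 400)
D = f(1.0 - z)                # zeroth iteration: first-rank term only
for _ in range(100):          # iterate the recursion to its fixed point
    Dnew = np.empty_like(D)
    for i, zi in enumerate(z):
        eta = z[i:]           # integration nodes eta in [z, 1]
        g = f(eta) * np.interp(zi / eta, z, D) / eta
        Dnew[i] = f(1.0 - zi) + np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(eta))
    if np.max(np.abs(Dnew - D)) < 1e-9:
        break
    D = Dnew
print("z*D(z) near z -> 0:", z[0] * D[0], z[40] * D[40])
\end{verbatim}
The product $z\,D(z)$ indeed approaches a constant at small $z$, which is
consequence {\it (i)} below.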
From this picture two important consequences can be derived.
{\it (i)} Solving the evolution equation generates a pole as $z \to 0$,
most easily seen for polynomial probabilities,
\begin{equation}
D(z) \to \frac{{\rm const}}{z} \;\; {\rm for} \;\; z \to 0 \,.
\end{equation}
Thus the fragmentation mechanism predicts a large number of soft (low energy) hadrons
in the jets, {\it i.e.} a long constant plateau in the rapidity $y = \log z^{-1}$.
{\it (ii)} Summing up the hadron charges in the jets reflects the charge
of the parent quark. If $u,d,s$ quark pairs were created in the flux tube
spontaneously with equal probabilities, the sum would measure the charge
exactly. However, since $s$-quarks weigh a little more than $u$- and $d$-quarks,
the probabilities for spontaneous quark-pair creation deviate from 1/3
by a small amount and a small fraction of the charge leaks into the
plateau:
\begin{equation}
\langle Q_q \rangle = \sum_{h \in jet} e^h = e_q - \gamma \,.
\end{equation}
In the parton model language~\cite{FieldF} one finds $\gamma \approx 0.067$, {\it i.e.}
$\langle Q_u \rangle = 0.60$, $\langle Q_d \rangle = \langle Q_s \rangle
= -0.40$. The close relation to the ideal values +2/3 and -1/3 therefore
allows the efficient tagging of the parent quark charges in the jets. In practice, things are
more involved in perturbative QCD. Jet-charge studies have been undertaken extensively at
LEP~\cite{Abbiendi:1999ry},
where a lot of data are available at the $Z$ peak and flavour-tagged results~\cite{Albino:2005me},
distinguishing between the light-quark, charm and bottom contributions, have been obtained.
The light quark fragmentation to mesons can effectively be described by the
fragmentation function
\begin{equation}
D(z) = (1+\beta) \frac{1}{z} (1-z)^\beta
\;\; {\rm with} \;\; \beta \sim 0.2~,
\end{equation}
for small jet energies $\sim 7$ GeV. For higher energies QCD predicts
a stronger fall-off of the spectrum. [For a detailed overview
of quark fragmentation to various types of mesons and baryons
see~\cite{Saxon,Albino:2008gy}.]
The fragmentation function of the heavy $c,b$-quarks behaves rather
differently. It was recognized very early~\cite{heavyq,heavyq-2} that a heavy flavoured meson
(containing a charm or bottom quark and a light antiquark) retains a good fraction of the
momentum of the primordial heavy quark. Thus,
due to the inertia of the heavy quarks, the fragmentation function of a heavy quark
should be much harder than that of a light hadron. Estimating the
size of the transition amplitude by the energy transfer in the
break-up process $Q \to h_{[Q \bar{q}]} + q$, the
fragmentation function behaves, for example, as \cite{Pet}
\begin{equation}
D_Q(z) \sim \frac{1}{z \left[ 1 - \frac{1}{z}
- \frac{\epsilon_Q}{1-z} \right] ^2}
\;\; {\rm with} \;\;
\epsilon_Q \sim \Lambda^2 / M^2_Q \,,
\end{equation}
with $\Lambda \sim$ 200 MeV denoting the strong interaction scale.
The spectrum develops a narrow maximum near $z_{0} \sim
1 - \sqrt{\epsilon_Q}$. This form describes the essential characteristics of the hard spectra
of $Q$-flavoured mesons in the heavy $c,b$ jets with $M_c \sim$ 1.5 GeV
and $M_b \sim$ 4.5 GeV. For more recent works on the heavy quark fragmentation,
see~\cite{Mele:1990cw,Ma:1997yq,Qfrag,Cacciari:2005uk,Kneesch:2007ey,Kniehl:2008zza}.
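As a quick illustration of these numbers, the following sketch (ours; it uses
$\Lambda = 0.2$ GeV and the quark masses quoted in the text, and the overall
normalisation is irrelevant for locating the peak) evaluates the above form
and confirms $z_0 \sim 1-\sqrt{\epsilon_Q}$:
\begin{verbatim}
import numpy as np

def peterson(z, eps):         # unnormalised Peterson et al. form
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

z = np.linspace(0.01, 0.99, 981)
for name, mq in [("charm", 1.5), ("bottom", 4.5)]:
    eps = (0.2 / mq) ** 2     # eps_Q = Lambda^2 / M_Q^2, Lambda = 0.2 GeV
    zpeak = z[np.argmax(peterson(z, eps))]
    print(name, "eps =", round(eps, 4), "peak at z =", round(zpeak, 2),
          "vs 1 - sqrt(eps) =", round(1.0 - eps ** 0.5, 2))
\end{verbatim}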
It should be pointed out that at higher energies, where the heavy quarks are produced
with momenta much larger than the heavy quark mass, one
expects important perturbative QCD corrections, enhanced by the powers of the logarithms of
the transverse momenta to the heavy quark mass, which modify the shape of the fragmentation
functions. These effects can be implemented using the framework of an evolution equation which
is discussed later in this review. They have to be incorporated in the analysis of data.
After the discovery of quark jets in 1975 in $e^+ e^- \to q \bar{q}$ at SLAC,
detailed studies in understanding the hadronisation process, and hence the
energy-momentum profiles of the quark jets,
were initiated in 1977 by Feynman and Field~\cite{FieldF}. In their approach,
the initial quarks
and antiquarks produced in $e^+e^-$ annihilation fragmented independently
in a cascade process,
$q \to q+ (\bar{q}^\prime q^\prime) \to h_{(q \bar{q}^\prime)} +q^\prime$,
conserving the charge and other flavour quantum numbers at
each step of this cascade. To determine the energy-momentum profile,
light-cone variables $p= (p_+,p_-,\vec{p}_T)$ were used with $p_- \ll p_+$,
where $p_\pm = E \pm p_\parallel $. The fragmentation
$q \to h+q^\prime$ is then effected through a primordial
fragmentation function
\begin{equation}
f_q^h(z)= 1- a + 3a (1-z)^2, \;\;\;\; z=\frac{(E+ p_\parallel)_{\rm h} }
{(E+ p_\parallel)_{\rm q}}~,
\end{equation}
with $a$ an adjustable energy-independent parameter, fixed by data.
As already discussed, this gives rise to a scale-invariant longitudinal
energy
distribution of hadrons in a jet. Heavy quark fragmentation (for example of
a charm quark into a
$D$ meson) is encoded by a different primordial $c \to D$ fragmentation
function, as already discussed in this section above.
The $\vec{p}_T$-distribution ($\vec{p}_T$ is the transverse momentum
measured with respect to the jet-axis, which can be identified with the
direction of the fragmenting quark, for the time being) was
implemented in terms of a Gaussian function, characterised by
$\sigma_q\simeq 0.35$ GeV, also determined phenomenologically:
$g(p_T^2)= (2 \sigma_q^2)^{-1} {\rm e}^{-p_T^2/2\sigma_q^2}$. Like the flavour
quantum numbers, $\vec{p}_T$ is locally compensated, implying that
an $r^{\rm th}$-rank primary meson has a momentum
$\vec{k}_T(r)$, with $\vec{k}_T(r)= \vec{q}_{T r} - \vec{q}_{T(r-1)}$.
The striking feature of the Feynman-Field
jet is its simple algorithm with the phenomenological profile determined
in terms of a few parameters, which provided an adequate description of the
non-perturbative aspects of jets initiated by quarks.
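The simplicity of the algorithm can be made explicit in a few lines. The
sketch below (ours, not the original Feynman-Field program; the stopping
scale and the value $a = 0.77$ are illustrative assumptions) generates the
primary mesons of a single quark jet from the primordial splitting function
and the locally compensated Gaussian transverse momenta described above:
\begin{verbatim}
import random

A, SIGMA, WCUT = 0.77, 0.35, 1.0   # illustrative choices: a, GeV, GeV

def draw_z():                      # f(z) = 1 - a + 3a(1-z)^2, accept-reject
    while True:
        z = random.random()
        f = 1.0 - A + 3.0 * A * (1.0 - z) ** 2
        if random.uniform(0.0, 1.0 + 2.0 * A) <= f:
            return z

def fragment(w_plus):
    """(E+p, px, py) of primary mesons from one quark of light-cone W+."""
    mesons, qx, qy = [], 0.0, 0.0  # qx, qy: pT carried by the leftover quark
    while w_plus > WCUT:
        z = draw_z()
        kx, ky = random.gauss(0.0, SIGMA), random.gauss(0.0, SIGMA)
        mesons.append((z * w_plus, qx - kx, qy - ky))  # local pT balance
        qx, qy = kx, ky
        w_plus *= 1.0 - z          # momentum left for the remaining cascade
    return mesons

jet = fragment(30.0)               # a ~15 GeV quark, W+ = E + p ~ 30 GeV
print(len(jet), "primary mesons, leading z =", round(jet[0][0] / 30.0, 2))
\end{verbatim}
Repeated calls reproduce the scale-invariant longitudinal spectrum and the
limited $p_T$ which together define the jet profile.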
\subsection{Gluon fragmentation}
The fragmentation of gluon jets follows rules similar to quarks.
Two paths had been chosen in the analysis of PETRA jets.
The properties of $g$-jets may be described either as a nearly
flavour-neutral average \cite{PHENgluon1} of $u,d$ and, with
reduced probability, $s$-quark jets, or, alternatively, gluon jets
may be reinterpreted as a superposition of quark and antiquark
jet pairs \cite{PHENgluon2} with the spectra derived from the
$g \to q \bar{q}$ splitting function. In any case, the transition
from gluons to quarks $g \to q \bar{q}$ in creating the leading
particle will soften the gluon fragmentation compared with the
quark fragmentation; this is accounted for effectively by increasing
the power of the $(1-z)$ fall-off of the fragmentation function towards
$z \to 1$ by about 1.5.
\subsection{Hadronisation Models}
Quark and gluon configurations created at small distances must
transform to bundles of hadrons due to confinement at large distances.
The transformation requires non-perturbative mechanisms and, therefore,
cannot be treated rigorously at the present time. Instead, models
have been developed which describe this step at various levels of
sophistication.
\subsubsection{Independent jet fragmentation}
Gluonic flux tubes, built up when coloured objects separate, may hadronize
into a jet of collimated hadrons as argued earlier. While the basic picture
had first been described for quark jets \cite{FieldF}, gluon jets can be
analysed similarly when the gluons are either treated globally as partons
\cite{PHENgluon1} or split into quark-antiquark pairs, fragmented incoherently
again \cite{PHENgluon2}. Implementing energy-momentum
conservation in the overall event was an unsatisfactory element of the model.
Nevertheless, independent
jet fragmentation has a simple and transparent structure including a small number
of parameters. The picture could account quite successfully for the essential
properties of two- and three-jet events in $e^+ e^-$ annihilation
at PETRA and PEP. Thus, it had initially been the right theoretical tool
for proving experimentally the gluonic nature of the third jet
in three-jet events.
\subsubsection{String model}
Apart from the different choice of the primordial splitting function
$f(\zeta)$, motivated by covariance and side-independence of the
beginning of the break-up sequence, quark jets in the string model
\cite{Lund} are not very different from independent fragmentation
schemes. However, gluons are incorporated quite differently. They
generate kinks which locally transfer energy and momentum to the strings
stretched between quarks and antiquarks. A small number of hadrons
is boosted from the segment between quark and antiquark jets to the
segments between quark or antiquark and gluon jets. This string effect
has been observed experimentally as reshuffling of hadrons
between jets, discussed later.
\subsubsection{Cluster hadronisation}
Quite a different hadronisation mechanism is based on colour pre-confinement
\cite{preconf}. Neighbouring coloured partons in cascades arrange themselves
in colour-neutral islands with preferentially small invariant masses. In
practice, the quark/gluon partons in cascades are evolved down to low invariant masses
of the order of several $\Lambda_{\rm QCD}=O(200~{\rm MeV})$, where $\Lambda_{\rm QCD}$
is the scale parameter specific to QCD and appears in the argument of
$\alpha_s(Q^2)$.
Splitting the gluons in the final step
into $q \bar{q}$ pairs, neighbouring quarks and antiquarks form
the colourless clusters which may finally decay to standard hadrons
according to phase space rules \cite{Herwig1}. The reduced number
of radiated gluons off heavy quarks and the small number of
large invariant masses in the tail of the distribution
of the colour-neutral clusters can be covered by non-perturbative
string-type complements to the cluster hadronisation scheme.
Based on these schemes QCD event generators have been constructed which
describe hadron spectra at a high level of accuracy. While
the prototypes had originally been developed for hadron production
in $e^+ e^-$ annihilation, the event generators
have been expanded to proton-(anti)proton collisions and complemented
by programs for lepton-nucleon collisions. The modern versions of
PYTHIA \cite{Pythia,Sjostrand:2007gs}, HERWIG \cite{Herwig}, SHERPA \cite{Sherpa} and
others involve the cascading of quarks/gluons in the final and the $p/\bar{p}$
initial states, and string or cluster hadronisation in the final states.
For multijet final states, frequently produced at high energies
in colliders, elaborate techniques have been developed, based
on the relation~\cite{Sudakov:1954sw}
${\rm PS}(Q^2) = {\rm ME}(Q^2) \times {\rm Sudakov \; factor} \, [Q^2_{max} \to Q^2]$,
to accomplish smooth transitions between quark/gluon parton
showers (${\rm PS}$) and well separated multijet final states described by
fixed-order perturbation theory matrix elements (${\rm ME}$) squared.
\subsection{Inclusive jet measures}
In this section we discuss some inclusive jet measures which have played
an important role in the quantitative tests of QCD. The first of these is
the observable called sphericity, which played a central role in the discovery of
quark jets at SPEAR. In its tensorial form it is
defined as follows~\cite{PHENquark}:
\begin{equation}
S_{\alpha \beta}=\frac{\sum_i p_{i\alpha} p_{i\beta}}{\sum_{i}\vec{p_i}^2}~,
\label{eq:Salfabeta}
\end{equation}
which can be diagonalised obtaining the
principal axes $\vec{n_1},\vec{n_2}$ and $\vec{n_3}$ with
corresponding eigenvalues $Q_1,Q_2$ and $Q_3$. The $Q_i$ can be
ordered $Q_1<Q_2<Q_3$ and normalised so that $Q_1+Q_2+Q_3=1$.
Then the squares of the transverse momenta are minimal with respect to the
axis $\vec{n_3}$ and the sphericity $S$ is given by
\begin{equation}
S= \frac{3}{2}(1-Q_3)=\frac{3}{2}(Q_1+Q_2)~,
\end{equation}
with the sphericity axis equal to $\vec{n_3}$. For events with two particles
with equal and opposite momenta (ideal two-jet event) we have $S=0$ and
$S \to 1$ for completely isotropic events. Because of the normalisation
$Q_1+Q_2+Q_3=1$ only two of the eigenvalues are needed to characterise an event.
For example, one can take in addition to $S$ the so-called aplanarity $Ap$, which
is
\begin{equation}
Ap = \frac{3}{2}Q_1=
\frac{3}{2} {\rm min} \frac{\sum_{i}|\vec{p_{iT,out}}|^2}{\sum_{i}\vec{p_i}^2}~.
\end{equation}
The aplanarity $Ap$ minimises the transverse momenta with respect to a
plane. Events with small $Ap$ are almost planar. The jet variables
of an event, $Q_1,Q_2$ and $Q_3$ can be plotted inside a triangle as shown
in Fig.~\ref{fig:PLB86-5}, in which events obtained by the
TASSO Collaboration at PETRA
are shown. In this plot planar events are found in the strip with small
$Ap$, 2-jet events have in addition also small $S$.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PL86B-5.eps}}}
\caption{Distributions of the $e^+e^- \to$ hadron events as a function of aplanarity
and sphericity defined in the text for the low (a) and high (b)
energy PETRA data (TASSO~\cite{EXP4gluon-TASSO}).
}
\label{fig:PLB86-5}
\end{figure}
Alternatively, sphericity can be defined
as~\cite{PHENquark}
\begin{equation}
S=\frac{3}{2}\, {\rm min}\, \frac{\sum_{i} |\vec{p_{iT}}|^2}{\sum _{i}|\vec{p_i}|^2}~.
\end{equation}
Here, $\vec{p_{iT}}$ are the transverse momenta of all produced
hadrons in the final state event relative to an axis chosen such that the
numerator is minimised.
The method based on
the sphericity tensor, first applied to the
analysis of the 3-gluon decay of the $\Upsilon$ resonance~\cite{Alexander} and
to the analysis of $q\bar{q}g$ events~\cite{WuZo}, has the
advantage that the eigenvalues $Q_i$ and the principal axes $\vec{n_i}$ and
from this $S$ and $Ap$ can be calculated quite easily. Since in these jet
measures the momenta enter quadratically, the high-momentum particles are
weighted more strongly in the calculation of $S$ and $Ap$. Also, these variables
are not invariant under clustering of particles and depend strongly on
details of the fragmentation of quarks (and gluons) into hadrons. This has,
for example, the effect that the sphericity changes if a particle momentum
splits up by decay, as for example in $\rho^0 \to \pi^+\pi^-$, or by
fragmentation, for instance $q \to q' + {\rm meson}$, into two or more momenta.
Therefore these variables are also sensitive to the emission of soft or
collinear gluons.
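For concreteness, the tensor analysis can be condensed into the following
sketch (ours; the toy events are randomly generated and purely illustrative),
which builds $S_{\alpha\beta}$ of Eq.~(\ref{eq:Salfabeta}), diagonalises it
and forms $S$ and $Ap$ from the ordered eigenvalues:
\begin{verbatim}
import numpy as np

def shapes(p):                         # p: (n, 3) array of hadron momenta
    t = p.T @ p / np.sum(p * p)        # the sphericity tensor S_{alpha beta}
    q = np.linalg.eigvalsh(t)          # eigenvalues Q1 <= Q2 <= Q3, sum = 1
    return 1.5 * (q[0] + q[1]), 1.5 * q[0]   # sphericity S, aplanarity Ap

rng = np.random.default_rng(1)
iso = rng.normal(size=(30, 3))                       # isotropic toy event
two = np.outer(rng.choice([-1.0, 1.0], 30), [0.0, 0.0, 1.0]) \
      + 0.05 * rng.normal(size=(30, 3))              # narrow two-jet toy
for tag, ev in [("isotropic", iso), ("two-jet", two)]:
    s, ap = shapes(ev)
    print(tag, ": S =", round(s, 2), ", Ap =", round(ap, 2))
\end{verbatim}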
There exist other variables
which are infrared safe (this term is used for observables which are free of
divergences when calculated in perturbation theory in the limiting case of low energy radiation)
and which depend on linear sums of
momenta. Known examples are thrust $T$ and acoplanarity $A$ which are defined
by
\begin{equation}
T = \max \frac{\sum_{i}|\vec{p_{iL}}|}{\sum_{i}|\vec{p_i}|}~,
\end{equation}
\begin{equation}
A = 4 \min \left(\frac{\sum_{i}|\vec{p_{iT,out}}|}{\sum_{i}|\vec{p_i}|}\right)^2~.
\end{equation}
For thrust $T$, which was introduced in \cite{Brandt,Farhi},
the thrust axis $\vec{n}$ is obtained by maximising the
longitudinal momenta $\vec{p_{iL}}$ along it. $T$ varies
in the range $0.5<T<1.0$, where the lower limit corresponds to isotropic events
and the upper limit to completely collinear configurations. In a similar
way for spherocity $S^\prime$, defined as~\cite{Brandt79}
\begin{equation}
S^\prime = \left(\frac{4}{\pi}\right)^2 {\rm min}
\left(\frac{\sum_{i}|\vec{p_{iT}}|}{\sum_{i}|\vec{p_i}|}\right)^2~,
\end{equation}
the $|\vec{p_{iT}}|$ is minimised with respect to a unit vector $\vec n$. It lies between 0
and 1 for configurations ranging from collinear to fully isotropic events. Similar to
$Ap$, the acoplanarity is obtained in such a way that the $\vec{p_{iT,out}}$ is
minimal with respect to a plane. Planar events must have small $A$ values. For
massless particles $A$ varies between 0 and 2/3.
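In practice the thrust axis can be found by a simple iteration, as in the
following sketch (ours and purely illustrative; the local maximisation is not
guaranteed to find the global maximum for complicated configurations, but
converges quickly for clearly jetty events):
\begin{verbatim}
import numpy as np

def thrust(p):                          # p: (n, 3) array of momenta
    n = p[np.argmax(np.linalg.norm(p, axis=1))].copy()
    n /= np.linalg.norm(n)              # start from the hardest particle
    for _ in range(100):
        s = np.sign(p @ n)              # hemisphere of each particle
        axis = (s[:, None] * p).sum(axis=0)
        axis /= np.linalg.norm(axis)
        if np.allclose(axis, n):
            break
        n = axis
    return np.sum(np.abs(p @ n)) / np.sum(np.linalg.norm(p, axis=1)), n

rng = np.random.default_rng(2)
p = np.vstack([rng.normal([0, 0, 3.0], 0.3, (15, 3)),
               rng.normal([0, 0, -3.0], 0.3, (15, 3))])
T, axis = thrust(p)
print("T =", round(T, 3))               # close to 1 for this two-jet toy
\end{verbatim}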
Various other jet measures have been proposed: For example a generalisation
of thrust to three clusters instead of two, called triplicity
\cite{Brandt79} or jettiness \cite{WuZo}. A variable
introduced for the analysis to verify the existence of four-jet events is the
variable tripodity $D_3$ \cite{Nachtmann}. These and other jet
variables will be defined explicitly when they are used to interpret
specific final state data in $e^+e^-$ annihilation in later sections.
\subsection{Jet algorithms}
Classifying multi-particle final states qualitatively in jets with high
energies is straightforward for a coarse picture. However, when the picture
is refined to a high level of precision, algorithms must be employed
which define the jets in a rigorous way. In addition, when experimental
measurements are compared with theoretical predictions, the algorithms
used in the experimental analyses must conform with the algorithms
adopted in the theoretical analyses.
A multitude of algorithms \cite{Salam:2009jx} has been developed to describe
jets at high energies. A few representative examples should characterise
the two classes, sequential recombination and cone algorithms.
Recombination algorithms have been introduced originally in $e^+ e^-$
annihilation, while cone algorithms have been used frequently at
hadron colliders so far. We shall discuss some of these algorithms
later while discussing jets in hadronic collisions.
\subsubsection{Sequential recombination algorithms}
The JADE algorithm \cite{JADE} is a prominent representative for recombination
algorithms applied in $e^+ e^-$ annihilation. Particles are clustered
in a jet iteratively as long as their distance remains less than a
prescribed cut-off value. The distance of two particles is defined by
the invariant mass of the pair:
\begin{equation}
y_{ij} = 2 E_i E_j (1-\cos\theta_{ij}) / E^2_{cm} \,.
\label{eq:yij}
\end{equation}
If the criterion $y_{ij} \le y_{cut}$ is fulfilled, the particles $i$ and
$j$ are combined into a compound by adding up energy and momentum,
for instance, and the recombination continues by pairing the compound
with the next particle $k$. The procedure stops after all particles are
associated with jets. The cut-off value $y_{cut}$ is generally chosen
in the range from $10^{-1}$ down to $10^{-3}$.
The presence of $E_iE_j$ in the numerator of $y_{ij}$ means that two soft particles moving
in {\it opposite directions} can get combined into a single particle in the early stages of
clustering, which is counter-intuitive to the idea of a jet as consisting of particles
restricted in the angular dimension. Apart from this, the JADE algorithm leads to a structure
in higher orders of perturbation theory which does not allow itself to be expressed in
a compact resummed form. Technically, this means that the double logarithms of the type
$\alpha_s^n \ln^{2n}y_{\rm cut}$ ($n=1,2,... $), which arise in higher orders of perturbation theory
and which should be resummed (into the so-called Sudakov form) to obtain the correct perturbative
behaviour in the limited kinematic region $y_{\rm cut} \to 0$, cannot be organised easily.
To rectify both of these shortcomings, the JADE algorithm has been improved
by substituting $E_i E_j \to \min[E^2_i,E^2_j]$ in the DURHAM algorithm~\cite{Catani:1991hj}.
This amounts to defining the distance by the minimal transverse momentum
$k_t$ of the particles in nearly collinear pairs.
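The logic of sequential recombination fits into a few lines of code. The
sketch below (ours; the toy momenta and the four-momentum-addition
recombination are illustrative choices) clusters an $e^+e^-$ final state with
either the JADE measure or the DURHAM substitution:
\begin{verbatim}
import numpy as np

def y_pair(pi, pj, ecm2, durham=False):
    ei, ej = pi[0], pj[0]
    cos = pi[1:] @ pj[1:] / (np.linalg.norm(pi[1:]) * np.linalg.norm(pj[1:]))
    pref = min(ei, ej) ** 2 if durham else ei * ej
    return 2.0 * pref * (1.0 - cos) / ecm2

def cluster(momenta, ycut, durham=False):
    jets = [np.asarray(p, float) for p in momenta]  # (E, px, py, pz) each
    ecm2 = sum(p[0] for p in jets) ** 2
    while len(jets) > 1:
        y, i, j = min((y_pair(jets[i], jets[j], ecm2, durham), i, j)
                      for i in range(len(jets))
                      for j in range(i + 1, len(jets)))
        if y > ycut:                   # all pairs resolved: stop clustering
            break
        jets[i] = jets[i] + jets[j]    # combine by four-momentum addition
        del jets[j]
    return jets

toy = [(10, 0, 0.5, 10), (10, 0, -0.5, 10), (20, 0, 0, -20)]
print(len(cluster(toy, ycut=0.01)), "jets with the JADE measure")
\end{verbatim}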
The concept has been transcribed to hadron colliders~\cite{Ellis,Catani-algo}, where the total
sub-energy is experimentally not well defined, by switching to
un-normalised measures and replacing the angles between particles by
the differences of the rapidities $y = \frac{1}{2}\ln[(E+p_z)/(E-p_z)]$
along the beam axis and the
azimuthal angles $\phi$ in the plane transverse to the beam axis,
\begin{equation}
d_{ij} = \min[p^{2p}_{ti},p^{2p}_{tj}] \,
[(y_i - y_j)^2 + (\phi_i - \phi_j)^2]/R^2 \,,
\end{equation}
with $p_{ti(j)}$ denoting the transverse momenta of the particles
with regard to the beam axis.
The jet parameter $R$ is chosen to be of order one. Since the individual quantities
$(y_i-y_j)$, $(\phi_i-\phi_j)$, $p_{ti}$ and $p_{tj}$ are all invariant under longitudinal boosts,
the distance measure $d_{ij}$ is also longitudinally invariant.
Recombination with the beam jets
is controlled by the observable $d_{iB} = p^{2p}_{ti}$, included in parallel
with the measure $d_{ij}$ when recombining all the particles into jets with non-zero
transverse momenta and beam jets. Originally, the power parameter $p$ had
been chosen as 1 in the $k_t$ algorithm~\cite{Catani:1991hj} and 0 in the Cambridge/Aachen
algorithm~\cite{Wobisch:1998wt}.
However, clusterings involving hard particles are favoured by choosing $p = -1$
in the $anti$-$k_t$ algorithm~\cite{Cacciari:2008gp}. This algorithm, which makes jets
grow outwards from hard seeds as intuitively expected, is the preferred tool
for LHC analyses.
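The whole family of algorithms is thus encoded in a single distance function,
as the following fragment illustrates (ours; the numerical values are
arbitrary):
\begin{verbatim}
import math

def d_ij(pti, ptj, yi, yj, phii, phij, R=1.0, p=-1):
    dphi = math.pi - abs(math.pi - abs(phii - phij))  # wrap to [0, pi]
    dr2 = (yi - yj) ** 2 + dphi ** 2
    return min(pti ** (2 * p), ptj ** (2 * p)) * dr2 / R ** 2

def d_iB(pti, p=-1):
    return pti ** (2 * p)

# p = 1: kt, p = 0: Cambridge/Aachen, p = -1: anti-kt. For anti-kt a soft
# particle near a hard one has a tiny d_ij, so it is swallowed by the hard
# seed long before it could form a jet of its own:
print(d_ij(100.0, 1.0, 0.0, 0.2, 0.0, 0.1), d_iB(1.0, p=-1))
\end{verbatim}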
\subsubsection{Cone algorithms}
Cone algorithms had been introduced in QED to tame infrared and collinear
singularities encountered in photon radiation off charged particles. The concept has been
translated to QCD in formulating the Sterman-Weinberg jets \cite{SterW}. Defining 2-jet events
as hadron configurations in which all but a fraction $\epsilon$ of the
total energy is contained in cones of half-angle $\delta$ around the
central event axis, the ratio
\begin{equation}
\frac{\sigma_2}{\sigma} = 1 - \frac{32}{3} \, \frac{\alpha_s}{2 \pi} \,
\log\frac{1}{\delta} \, \log\frac{1}{\epsilon} \,.
\end{equation}
describes the 2-jet fraction of hadronic events in $e^+ e^-$ annihilation
in the leading logarithmic approximation.
The transition to hadron collisions has been formulated again by adopting
the definition of distances based on rapidities and azimuthal angles.
The clustering requires a seed location; the 4-momentum of the cluster is
determined by summing the 4-momenta of all the objects within a distance
$R= \sqrt{ (y-y_c)^2 + (\phi - \phi_c)^2}$ from the seed $(y_c,\phi_c)$. In one variant,
used in the analysis of the Run II
Tevatron data, the 4-momenta are summed using the so-called
E-scheme~\cite{Blazey:2000qt} (this should not be confused with the E-scheme in the
analysis of the jets in $e^+e^-$ annihilation),
$(E,p_x,p_y,p_z)=\sum (E,p_x,p_y,p_z)_i$, and the various variables are defined as
\begin{equation}
p_T=\sqrt{p_x^2 + p_y^2}, ~~~y=\frac{1}{2} \ln \left( \frac{E+p_z}{E-p_z}\right)~,
~~\phi=\tan^{-1} (p_y/p_x)~.
\end{equation}
This differs from the Snowmass clustering algorithm~\cite{Huth:1990mi}, used in the
analysis of the Tevatron I data, in which the clustering centroid was defined as the
$E_T$-weighted averages of the rapidity $y$ and the azimuthal angle $\phi$.
The cones are either centred around seed particles (defined as those particles setting
the initial direction and one sums the momenta of all the other particles around these
seed particles within a specified jet measure), an approach which is not
infrared safe, or they are defined by grouping all particles into
subsets such that the subsets correspond exactly to pre-defined cones.
For further discussion of these and related issues, we refer to the works of
Seymour~\cite{Seymour:1997kj} and the comprehensive review of jet measures by
Salam~\cite{Salam:2009jx}.
\section{Discovering quark jets}
\label{sec:3}
\subsection{Quark jets at SPEAR}
\label{sec:3.1}
The notion of jets in
$e^+e^-$ annihilation is closely connected with the discovery of Bjorken
scaling in deep-inelastic electron-nucleon scattering in 1968 at SLAC. As
mentioned in the introduction, inelastic
electron scattering on protons and neutrons at large spacelike ($q^2<0$)
momentum transfer and large inelasticity $\nu$ can very well be described
in terms of an interaction of the spacelike virtual
photon with the pointlike constituents of the
nucleon, the partons, identified as the $u$ and $d$ quarks inside the proton
and neutron \cite {Feynman}.
The analogous process with a timelike ($q^2>0$) virtual photon is
$e^+e^-$ annihilation into a quark-antiquark pair, $e^+e^- \to q\bar{q}$, as
shown in Fig.~\ref{fig:Born}, where $q$ stands for the quarks $u, d, s, c, b$.
\begin{figure}
\center{
\resizebox{0.50\columnwidth}{!}{
\includegraphics{qqbar.eps}}}
\caption{Born diagram for $e^+ e^- \to \gamma \to q \bar{q}$.}
\label{fig:Born}
\end{figure}
As explained
already in the introduction, in this simple model the virtual photon from the
annihilating electron and positron creates a quasi-free quark and
antiquark. The occurrence of real
quark and antiquark particles in the final state is prevented by the fact that
they carry non-trivial colour quantum numbers. The quarks and antiquarks transform
themselves into hadrons with unit probability under the
confinement forces, which act at much later times $t \simeq 1~GeV^{-1}$.
These hadrons should appear in the final state aligned roughly along the momentum
direction of the original $q$ and $\bar{q}$, so that two distinct hadron
jets with oppositely directed momenta appear in experiments. This discussion mirrors
the early ``outside-in'' picture of jet formation, which was used in the formative
stages of jet physics. This approach was later replaced by the so-called ``inside-out''
description where quark-antiquark pairs were created out of the vacuum before the
step of hadronisation. We will discuss the salient feature of this development later.
This simple quark model~\cite{THquark,THquark-2,THquark-3}
was supported by the fact that the total annihilation cross section for hadron
production $\sigma(e^+e^- \to hadrons)$ is given by the square of the
quark charges $Q_f$ multiplied with the number of colours $N_C$ of each quark
$q$
\begin{equation}
\sigma(e^+e^- \to hadrons) \equiv \sigma_0=\frac{4\pi\alpha^2}{3s} N_C \sum_{f} Q_f^2~,
\label{eq:sigma-0}
\end{equation}
where the sum over $f$ runs over all active flavours which can be produced at a
given center-of-mass energy $\sqrt{s}$;
$\alpha$ is the fine structure constant, $\alpha \simeq
1/137$. Dividing by the cross section for the production of a
$\mu^+\mu^-$ pair, $\sigma(e^+e^- \to \mu^+\mu^-)$, one
obtains the famous Drell ratio $R$, defined as
\begin{equation}
R\equiv\frac{\sigma(e^+e^- \to {\rm hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)} =
N_C\sum_{f} Q_f^2~,
\end{equation}
which has the numerical value 2 (for $f=u,d,s$), and assumes the values
$10/3$ (for $f=u,d,s,c$) and $11/3$ (for $f=u,d,s,c,b$), as the thresholds for
the processes $e^+ e^- \to c\bar{c}$ and $e^+ e^- \to b\bar{b}$ are crossed.
A recent compilation of the hadronic cross section
$\sigma(e^+e^- \to {\rm hadrons})$ and the corresponding ratio $R$
is shown in Fig.~\ref{fig:Ree}
(taken from the Particle Data Group~\cite{Nakamura:2010zzi})
where the various resonances ($\rho, \omega, \phi, J/\psi,...)$ encountered in $e^+e^-$
annihilation and the transition regions are clearly visible. Away from the resonances, the
ratio $R$ is almost flat, increasing as a new quark-antiquark threshold is crossed in
agreement with the values quoted above.
Note that the $t\bar{t}$ threshold (at around
350 GeV) lies beyond the energies of the $e^+e^-$ collider rings operated so far.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{Ree.eps}}}
\caption{Measurements of the ratio $R$
as a function of the $e^+e^-$ centre-of-mass energy $\sqrt{s}$ [From
PDG~\cite{Nakamura:2010zzi}].}
\label{fig:Ree}
\end{figure}
The production of hadron jets in $e^+e^-$ annihilation as a signature of
the process $e^+e^- \to q\bar{q}$ was suggested by Bjorken and Brodsky already
in 1970 \cite{PHENquark}.
However, it was not until 1975 that they were discovered
experimentally at SLAC's $e^+e^-$ storage ring SPEAR
by the SLAC-LBL Collaboration using the
MARK I detector~\cite{EXPquark} when high enough centre-of-mass energies $\sqrt{s}$ became
available. At low energies, for
example at the ADONE ring at Frascati or the original DORIS ring at DESY, it
was not possible to see jets because the jet cones were too broad.
This is easy to understand if we assume that the transverse momentum
$p_T$ with respect to the jet direction (which, theoretically, is the
momentum direction of the original quark $q$ or antiquark $\bar{q}$,
emitted back-to-back in the c.m. system)
is limited, and that the hadron multiplicity $\langle n \rangle$ increases only modestly
with $\sqrt{s}$. The jet cone becomes narrower with
increasing $\sqrt{s}$, characterised by the mean half angle
$\langle \delta \rangle$ of the jet cone (see Fig.~\ref{fig:2jets}).
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{2jets.eps}}}
\caption{The process $e^+ e^- \to q \bar{q} \to$ jets, with the jets defined using the
jet-cone angle $\delta$, arising from limited $p_T$ of the hadrons.}
\label{fig:2jets}
\end{figure}
At $\sqrt{s} = 4~GeV$ the particle multiplicity is about 6, so that
with $\langle p_T \rangle \simeq 0.35~GeV$ the half-angle of the jet-cone is
$\langle \delta \rangle \simeq \langle p_T \rangle \langle n \rangle/\sqrt{s} \simeq 0.53
\simeq 30^{\circ}$. This shows that at this energy each of the two
jets is broader than $60^{\circ}$ on average.
For establishing the jets it is necessary to
determine the jet axis along which the transverse momenta of the produced
hadrons are minimised. In the early work of the SLAC-LBL collaboration, the jet axis was defined
in terms of the sphericity variable defined earlier. In the SLAC-LBL experiment
the mean sphericity was found to be approximately constant
as a function of the total $e^+e^-$ energy $E_{c.m.}=\sqrt{s}$ up to $4~GeV$,
decreasing thereafter with increasing $E_{c.m.}$.
A detailed comparison is shown in Fig.~\ref{fig:PRD26-3}, in which the measured
sphericity distributions $d\sigma/dS$ at $E_{c.m.}= 3.0$, $6.2$ and $7.4~GeV$ are
compared with the calculated distributions based on a two-jet model and the phase-space.
\begin{figure}
\center{
\resizebox{0.70\columnwidth}{!}{
\includegraphics{PhysRevD-3.eps}}}
\caption{Observed sphericity distributions for data from MARK I detector, jet model
(solid curves), and phase-space model (dashed curves) for (a) $E_{c.m.}=3.0$ GeV, (b)
$E_{c.m.}=6.2$ GeV, and (c) $E_{c.m.}=7.4$ GeV. (From Ref.~\cite{EXPquark}.)
}
\label{fig:PRD26-3}
\end{figure}
At $3.0~GeV$ there is no distinction between the two models and the
data agree with both. At $6.2$ and
$7.4~GeV$ the $S$ distributions are peaked towards low $S$ favouring the jet
model. But the $S$ distributions are still very broad. This comparison shows
quite clearly that (i) the $E_{c.m.}$ must be high enough to see the
production of jets in $e^+e^-$ annihilation, and (ii) that even at the
higher $E_{c.m.}$ energy range of the SPEAR storage ring, the jet
structure is visible only through a detailed comparison with the prediction of an
appropriate jet model. Observing the jet structure was easier at PETRA energies, where
most of the events have a two-jet topology which, because of the higher energy,
features much narrower angular jet-cones.
An example of such an event, measured by the TASSO collaboration at
$E_{c.m.}= 31.6~GeV$, is shown in Fig.~\ref{fig:1.1ab} (left-hand frame).
Further tests of the underlying
quark structure of the jets in $e^+e^-$ annihilation were undertaken at
SPEAR. One such test is the
measurement of the angular distribution $d\sigma/d\cos\theta$ of the jet axis
with respect to the beam direction. This distribution for the production of
massless spin $1/2$ particles is \cite{Gatto}
\begin{equation}
\frac{d\sigma}{d\cos\theta} \sim 1+\cos^2 \theta~.
\end{equation}
The first data came from the SLAC-LBL Collaboration at SPEAR. They did the
measurement with transversely polarised $e^+$ and $e^-$ beams available
at the lower c.m. energies of the SPEAR ring.
With transversely polarised beams the angular
distribution has the following form
\begin{equation}
\frac{d\sigma}{d\Omega} \sim 1+\alpha\cos^2\theta +\alpha P_+P_-\sin^2\theta~
\cos2\phi~,
\end{equation}
where $\phi$ is the azimuthal angle of the jet axis with respect to the
storage ring plane and $P_+$ and $P_-$ are the polarisations of the $e^+$ and
$e^-$ beams, respectively.
The measured $\phi$ distributions (averaged over $\theta$) for 6.2 and
7.4 GeV are seen in Fig.~\ref{fig:PRD26-11}. At 6.2 GeV the beam polarisations are $P_+=P_-=0$
and therefore the $\phi$ distribution is isotropic. At 7.4 GeV, where
$P_+P_-=0.5$, the characteristic $\cos2\phi$ behaviour is observed. From this
measurement at SPEAR, the value $\alpha=0.97\pm0.14$ \cite{EXPquark,EXPquark-2} is in
agreement with the expectation for spin $1/2$ quarks, $\alpha=1$.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PhysRevD-11.eps}}}
\caption{Observed distributions of jet-axis azimuthal angles from the plane of the
storage rings for jet axis with $|\cos \theta | \leq 0.6$ for (a) $E_{c.m.}=6.2$ GeV and
(b) $E_{c.m.}=7.4$ GeV. (From Ref.~\cite{EXPquark}.)
}
\label{fig:PRD26-11}
\end{figure}
Similar, but less accurate, results were obtained by the PLUTO Collaboration at DORIS for
$E_{c.m.} = 7.7$ and $9.4~GeV$ \cite{Berger78}. Measurements of the angular
distribution of jets at higher energies were also performed at
the $e^+e^-$ storage rings PEP and PETRA.
Although the beam energies were
much higher, yielding a much better defined jet axis, the result
$\alpha =1.04\pm0.16$~\cite{Elsen} does not have a better
accuracy than the SPEAR measurement, which had the benefit of polarised beams.
This test of the spin $1/2$ nature of the quarks produced in $e^+e^-$
annihilation is very much the same as the verification of the Callan-Gross
relation \cite{Callan} in deep inelastic lepton-nucleon
scattering: $F_2(x)=2xF_1(x)$, which is also very well satisfied
experimentally.
\subsection{Sterman-Weinberg Jets}
\label{sec:SW-jets}
The existence proof of jets in QCD was provided by Sterman and
Weinberg~\cite{SterW}. As already noted, they calculated the cross section
$\sigma_{\rm 2-jet}(\epsilon, \delta)$
for the process $e^+ e^- \to 2-{\rm jets}$, where the jets are defined by two
cones in opposite hemispheres with half-angular size $\delta$, having all but
a fraction $\epsilon$ of the total c.m. energy. In the general field
theory context, jets were anticipated due to the Lee-Nauenberg theorem~\cite{Lee:1964is},
which states
that the transition probability in a theory involving massless particles is finite
provided summation over degenerate states is performed.
The Feynman diagrams which
contribute in order $\alpha_s(Q^2)$ are shown in Fig.~\ref{fig:qqg}.
\begin{figure}
\center{
\resizebox{0.60\columnwidth}{!}{
\includegraphics{PLB86-1.eps}}}
\caption{Lowest order Feynman diagrams contributing to
$e^+e^- \to q\bar{q} g$ (upper two diagrams) and vertex corrections in
$e^+e^- \to q\bar{q}$ (lower diagram).}
\label{fig:qqg}
\end{figure}
For small $\epsilon$ and
$\delta$, and to leading order in $\alpha_s(Q^2)$ one has
\begin{equation}
\sigma_{\rm 2-jet}(\epsilon, \delta)=\sigma_0 \left[ 1 + C_F \frac{\alpha_s(Q^2)}
{\pi} \left(-4 \ln 2\epsilon \ln \delta -3 \ln \delta -\frac{\pi^2}{3}
+ \frac{5}{2} + O(\epsilon) + O(\delta) \right)\right]~,
\label{eq:sw-2jets}
\end{equation}
where $\sigma_0$ is the lowest order cross section given in
Eq.~(\ref{eq:sigma-0}), $C_F=4/3$ and $\alpha_s(Q^2)$ is the QCD coupling
constant defined in the lowest order
\begin{equation}
\alpha_s(Q^2)=\frac{12 \pi}{(33-2n_f)\ln\frac{Q^2}{\Lambda^2}}~,
\label{eq:alphas-0}
\end{equation}
where $n_f$ is the number of quark flavours.
The terms $O(\epsilon)$, $O(\delta)$ neglected by Sterman and Weinberg
are all finite, essentially proportional to the phase space and have
been subsequently worked out~\cite{Stevenson:1978td}.
Here $\Lambda$ is a scale parameter, to be determined experimentally, typically of
$O(200)$ MeV. As $Q^2 \to \Lambda^2$, $\alpha_s(Q^2) \to \infty$, signaling the breakdown of
perturbation theory. The above expression for
$\alpha_s(Q^2)$ also states that $\alpha_s(Q^2) \to 0$ as $Q^2 \to \infty$,
implying that QCD is an asymptotically free field theory. In the range of $Q^2$ where
$\alpha_s(Q^2)/\pi \ll 1$, one has a controlled perturbative region.
Since, in
leading order in $\alpha_s(Q^2)$, the inclusive hadronic cross section
for $e^+ e^- \to \gamma \to {\rm hadrons}$ is
\begin{equation}
\sigma(e^+ e^- \to \gamma \to {\rm hadrons})=\sigma_0(1 +
\frac{\alpha_s(Q^2)}{\pi})~,
\label{eq:sigma1}
\end{equation}
the complement of $\sigma_{\rm 2-jet}(\epsilon, \delta)$ is the 3-jet
cross section
\begin{equation}
\sigma_{\rm 3-jet}(\epsilon, \delta)= \sigma_0 \frac{\alpha_s(Q^2)}
{\pi} C_F\left(4\ln 2\epsilon \ln\delta + 3 \ln \delta +
\frac{\pi^2}{3}-\frac{7}{4} + O(\epsilon) + O(\delta)\right)~.
\label{eq:sw3jets}
\end{equation}
This implies that for typical jet resolutions, a small fraction of hadronic
events should consist of three jets. They were found subsequently in
$e^+e^-$ annihilation at PETRA and we shall discuss them later quantitatively.
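For orientation, the following snippet (ours; the choices $\alpha_s = 0.15$,
$\epsilon = 0.1$ and $\delta = 30^\circ$ are merely representative, and the
$O(\epsilon)$, $O(\delta)$ terms are dropped) evaluates the two formulas and
checks that the 2-jet and 3-jet cross sections add up to the total of
Eq.~(\ref{eq:sigma1}):
\begin{verbatim}
import math

CF, alps = 4.0 / 3.0, 0.15
eps, delta = 0.10, math.radians(30.0)
L2e, Ld = math.log(2.0 * eps), math.log(delta)

sig2 = 1.0 + CF * alps / math.pi * (-4.0 * L2e * Ld - 3.0 * Ld
                                    - math.pi ** 2 / 3.0 + 2.5)
sig3 = CF * alps / math.pi * (4.0 * L2e * Ld + 3.0 * Ld
                              + math.pi ** 2 / 3.0 - 1.75)
tot = 1.0 + alps / math.pi          # everything in units of sigma_0
print("2-jet fraction:", round(sig2 / tot, 2),
      " 3-jet fraction:", round(sig3 / tot, 2),
      " sum:", round((sig2 + sig3) / tot, 2))
\end{verbatim}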
Another example of a jet measure which can be used with ease to
characterise jets is the scaled pair invariant mass $y_{ij}$ of the
partons emerging from a hard process, defined in Eq.~(\ref{eq:yij}).
Requiring $y_{ij} > y_{\rm min} > 0$, one avoids both
infrared and collinear singularities in a perturbative calculation.
The first such $y_{\rm min}$-dependent 2-jet cross section was calculated
in \cite{Kramer}, yielding ($y_{\rm min}=y$)
\begin{equation}
\sigma_{2-jet}= \sigma_0\left[1 + C_F \frac{\alpha_s(Q^2)}{2\pi}
\left( -2 \ln^2 y -3 \ln y + 4y\ln y -1 + \frac{\pi^2}{3} +O(y)\right)
\right]~.
\end{equation}
The $O(y)$ terms have been derived in~\cite{Kramer:1986sg}.
The two prescriptions just discussed have been used in the experimental analysis
of data concerning jets. Thus, for example, the JADE
algorithm~\cite{Bartel:1986ua}, used mostly in the analysis of the $e^+e^-$ data at
PETRA and PEP, is based on the $y_{\rm min}$-prescription, which can also be used to classify
multi-jet events. This was subsequently replaced by the $k_t$-jet algorithm, as
discussed in the preceding section. The modified form of
the cone-prescription is widely used in the analysis of jets in
hadroproduction processes.
\section{Gluon jets in Upsilon decays}
\noindent
The $\Upsilon$ meson first produced in proton-nucleus collisions at FERMILAB
\cite{Herb,Innes} and identified by the
$\Upsilon \to \mu^+\mu^-$ decay was later observed as a narrow resonance with
mass $m_{\Upsilon} = 9.46~GeV$ and width
$\Gamma_{\Upsilon}=(40^{+13}_{-8})$ keV
in the process $e^+e^- \to \Upsilon \to {\rm hadrons}$
\cite {Berger76,Berger76-2,Darden-78,Bienlein-78,Andrews-80,Bohringer-80}.
This resonance is a $b\bar{b}$ bound state and has the quantum numbers
$J^{PC}=1^{--}$. As the $B\bar{B}$ threshold lies above $m_\Upsilon$,
the $\Upsilon(9.46)$ state is
predicted to decay mainly into 3 gluons ($g$) in QCD, the massless colour-octet vector
particles~\cite{Appelquist,Appelquist-2,Koller,Koller-2,Koller-3,deGrand-77,deRujula-78} in
complete analogy with the decay of orthopositronium into 3 photons
\cite{Ore}. While $\Upsilon(9.46) \to ggg$ is the dominant decay mode,
with $3\%$ probability it can decay also into a photon
and 2 gluons, $\Upsilon(9.46) \to \gamma gg$.
Average energies of the three partons
were measured as $\langle E_1\rangle \simeq 4.1$ GeV for the most energetic of
the three gluons, with the other two having energies $\langle E_2\rangle \simeq 3.4$ GeV
and $\langle E_3\rangle \simeq 2.0$ GeV, respectively~\cite{Berger81},
in approximate accord with the lowest order QCD matrix elements. However, only the fastest of
the three partons yielded a collimated jet of hadrons and its detailed phenomenological profile
was studied by PLUTO~\cite{Berger81,Stella:2010ne}. Phenomenological models, which included
the lowest order QCD matrix elements and the fragmentation of the partons (quarks and gluons),
were found to be in conformity with a number of inclusive measurements undertaken by
PLUTO.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{ZPhysC8-3.eps}}}
\caption{Processes contributing to hadronic final states in the $\Upsilon(9.46)$ resonance
region: (a) from direct decays of the $\Upsilon$, (b) from the $\Upsilon$ vacuum polarization
and (c) from the non-resonant continuum. The $\mu^+\mu^-$ final state can be produced
from the $\Upsilon$ vacuum polarization (d), and from the continuum (e).
(from Ref.~\cite{Berger81}).}
\label{fig:ZfP-C8-3}
\end{figure}
The contributions to the multi-hadron events from the $\Upsilon$ mass region
originate from three sources~\cite{Berger81}, as shown in Fig.~\ref{fig:ZfP-C8-3}. The first row
in this figure shows the direct decay of the $\Upsilon$ (a), decay through the vacuum polarisation
(one-photon decay) (b), and the non-resonating continuum (c).
Denoting the corresponding cross sections as $\sigma_{\rm on}$ (cross-section in the $\Upsilon(9.46)$
energy range), $\sigma^{\rm vp}$ (cross-section for the $\Upsilon$ vacuum polarisation), and
$\sigma^{\rm off}$ (cross-section for the non-resonant continuum), the cross-section for the
$\Upsilon(9.46)$-production with direct decay is
$\sigma^{\rm dir}= \sigma^{\rm on}-\sigma^{\rm off} - \sigma^{\rm vp}$.
Since for $\mu^+\mu^-$ final states,
a 'direct' production does not exist, the term $ \sigma^{\rm vp}$ can be obtained by scaling the
$\mu$-pair production on and off-resonance to the hadron production level. This yields:
\begin{eqnarray}
\sigma^{\rm dir}=\sigma^{\rm on}-\sigma^{\rm off}-\sigma^{\rm vp}=
\sigma^{\rm on}-\sigma^{\rm off}
-\sigma^{\rm off}\frac{\sigma^{\rm on}_{\mu\mu}-\sigma^{\rm off}_{\mu\mu}}
{\sigma^{off}_{\mu\mu}}~.
\end{eqnarray}
Using
$(\sigma^{\rm on}_{\mu\mu} -\sigma^{\rm off}_{\mu\mu})/\sigma^{\rm off}_{\mu\mu}
=0.24\pm0.22$
\cite{Berger79} and the number of events in the two energy
regions, the $\Upsilon$ direct decay cross section is obtained. This is
evaluated as a function of sphericity $S$.
The results are shown in Fig.~\ref{fig:PL82B-1} a, b, c,
separately for the off-resonance data, the data at the $\Upsilon$
resonance and the subtracted distribution for the $\Upsilon$ direct decay. These
experimental results are compared to the two-jet model based on the
Feynman-Field model, already discussed (dash-dotted line in Fig.~\ref{fig:PL82B-1} a)
and the predictions of the
phase-space model in Fig.~\ref{fig:PL82B-1} c (dashed line) together with the
three-gluon decay model (solid line).
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{Pluto-PL82B-Fig1.eps}}}
\caption{Differential sphericity distributions and the sphericity angular distributions.
The dash-dotted line in (a) represents the two-jet model. The dashed and solid lines in
(c) represent respectively phase-space and the three-gluon decay models. The dash-dotted
line in (d) is proportional to $1 + \cos^2\theta$ and the solid line in (f) to
$1+0.39 \cos^2\theta$ (from Ref.~\cite{Berger79}).
}
\label{fig:PL82B-1}
\end{figure}
The off-resonance data are well described by the two-jet model in agreement
with the earlier findings at SPEAR. The distribution for the
direct decay is shifted to larger sphericity values and is best reproduced
by the three-gluon decay model.
The $\Upsilon$ meson is produced at rest. Therefore, the scaled momenta
$\vec{x_i}=2\vec{k_i}/m_{\Upsilon}$ obey the
relations ($x_i=|\vec{x_i}|$): $\vec{x_1} + \vec{x_2} + \vec{x_3} = 0~,~x_1 + x_2 + x_3 = 2$.
Another possibility to describe
the configuration of the final state uses the angles between the gluons.
For massless gluons the relation between the $x_i$ and the
angles $\theta_i$ is: $x_i =\frac{2\sin \theta_i}{\sum_{i} \sin \theta_i}$.
This relation allows one to characterise the final gluon configurations at
distinct points of the Dalitz plot ($x_1=x_2=x_3=2/3$ is the
``Mercedes-Star''-like configuration, $x_1=x_2=1,x_3=0$ is a two-jet configuration
with the third gluon perpendicular to the direction of the first two,
and $x_1=1,x_2=x_3=0.5$ is the configuration where the fastest jet recoils
against the two others with $x_2=x_3$). The momentum distribution of the gluon
as calculated in leading-order (LO) QCD is~\cite{Koller,Koller-2,Koller-3,deGrand-77,deRujula-78}
\begin{eqnarray}
\frac{1}{\sigma}\frac{d^2\sigma}{dx_1dx_2}=\frac{6}{(\pi^2-9)x_1^2x_2^2x_3^2}
\left(x_1^2(1-x_1)^2+x_2^2(1-x_2)^2+x_3^2(1-x_3)^2 \right)~.
\label{eq:vecglue}
\end{eqnarray}
The above cross
section formula is the basis for Monte-Carlo model calculations mentioned
above. In these models the hard cross section for $\Upsilon \to 3g$
must be supplemented with the additional fragmentation of the 3 gluons into
hadrons.
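The structure of this distribution is easily explored with a simple
accept-reject Monte Carlo, sketched below (ours and purely illustrative;
hadronisation is ignored, so the numbers refer to parton energies). The mean
ordered gluon energies come out close to the measured values quoted above:
\begin{verbatim}
import random

M_UPS = 9.46                          # GeV

def weight(x1, x2):                   # LO matrix element, constants dropped
    x3 = 2.0 - x1 - x2
    num = sum(x * x * (1.0 - x) ** 2 for x in (x1, x2, x3))
    return num / (x1 * x2 * x3) ** 2

events, wmax = [], 3.0                # empirical bound on the weight
while len(events) < 10000:
    x1, x2 = random.random(), random.random()
    x3 = 2.0 - x1 - x2
    if not 1e-6 < x3 < 1.0 or min(x1, x2) < 1e-6:
        continue                      # outside the physical Dalitz region
    if random.uniform(0.0, wmax) < weight(x1, x2):
        events.append(sorted((x1, x2, x3), reverse=True))

for i in range(3):                    # mean ordered gluon energies in GeV
    e = 0.5 * M_UPS * sum(ev[i] for ev in events) / len(events)
    print("<E%d> = %.1f GeV" % (i + 1, e))
\end{verbatim}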
To test the QCD prediction that the gluons in $\Upsilon \to 3g$ are
vector particles, models with scalar gluons have also
been considered. The momentum distribution corresponding to scalar gluons
was derived in~\cite{Krasemann} leading to the result that they
peak at the corners of the Dalitz plot and have zeros in the middle of
each boundary. In contrast, vector gluons populate nearly uniformly the Dalitz plot.
As the majority of the events have one gluon with very low
energy, a 2-jet structure is expected for scalar gluon theories~\cite{WalshZerwas}.
The distribution of the gluon direction in space is essentially
determined by the QCD theory~\cite{Koller,Koller-2,Koller-3,deGrand-77,deRujula-78}.
For vector gluons QCD predicts
\begin{equation}
W(\cos\theta) \sim 1+0.39 \cos^2\theta~,
\end{equation}
where $\theta$ is the angle between the most energetic gluon and the
momentum of the incoming initial electron (see Fig.~\ref{fig:PL82B-1}). Scalar gluons
would give rise to the angular distribution~\cite{Krasemann}
\begin{equation}
W(\cos\theta) \sim 1-\cos^2\theta~.
\end{equation}
The PLUTO collaboration \cite{Berger81} has
measured a number of observables to strengthen the 3-gluon interpretation of the
hadronic $\Upsilon$ decay.
The test of vector gluons versus scalar gluons has been done using the
angular distribution in $\cos \theta$, where $\theta$
is the angle between the thrust axis and the beam momentum. Theoretical distributions
predicted for $\Upsilon$ decay into vector and scalar gluons, respectively,
are shown in Fig.~\ref{fig:PLB88-3} compared with the PLUTO measurements
\cite{Berger81}. The data clearly prefer the vector gluon decay.
Similar conclusions have been reached by the LENA collaboration on
the basis of their measurements \cite{Niczyporuk}.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{Z-Phys-C8-14.eps}}}
\caption{Corrected experimental distribution in $|\cos \theta|$
from the decays $\Upsilon \to$ hadrons by the PLUTO collaboration, where $\theta$ is
the angle between the thrust axis and the beam axis. The curves are theoretical
distributions for $\Upsilon$-decay into vector (solid curve) and scalar (dashed curve)
gluons, respectively (from Ref.~\cite{Berger81}).
}
\label{fig:PLB88-3}
\end{figure}
Further comparisons with the 3-gluon model presented in \cite{Berger81}
consist of topological tests with the jet variable thrust, defined
earlier, and the variable triplicity $T_3$ \cite{Brandt79}, an
extension of thrust to 3-jet configurations, where the final state
particles are grouped into 3 classes with total momentum $\vec{P}(C_l)$,
$l=1,2,3$. The values of triplicity vary between $T_3=1$ for a perfect 3-jet
event and $T_3=3\sqrt{3}/8$ for completely spherical events. PLUTO data
have been analysed also in terms of the fractional energies $x_i$
of the jets, where the jet axes are obtained from the triplicity analysis. If
the three jets were completely separated in space, the fractional energies
would be independent of the fragmentation of the gluons and would depend, in
lowest-order perturbation theory,
only on the QCD matrix element $W(x_1,x_2,x_3)$ given above (see Eq.~(\ref{eq:vecglue})).
In Fig.~\ref{fig:ZPC8-11}
the projection of the two-dimensional
histogram, spanned by the axes $x_3$ and $(x_1-x_2)/\sqrt{3}$, onto the $x_1$
axis is shown (this is also the distribution of the most energetic triplicity jet).
The prediction of the 3-gluon Monte Carlo model is compared to the data and impressive
agreement is obtained, whereas two versions of the phase-space Monte Carlo
model fail to do so. (See~\cite{Stella:2010ne}
for a recent reappraisal of the PLUTO experimental analysis).
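The definition of triplicity can be made concrete with a small brute-force
sketch (ours; the exhaustive search over partitions grows as $3^N$ and is meant
for illustration only):
\begin{verbatim}
import itertools
import numpy as np

def triplicity(p):
    """Brute-force triplicity T3: maximise the summed magnitudes of the
    total momenta of three particle classes C_1, C_2, C_3.
    p: (N, 3) array of particle momenta; cost grows as 3^N."""
    ptot = np.linalg.norm(p, axis=1).sum()
    best = 0.0
    for labels in itertools.product(range(3), repeat=len(p)):
        labels = np.array(labels)
        tot = sum(np.linalg.norm(p[labels == c].sum(axis=0)) for c in range(3))
        best = max(best, tot)
    return best / ptot   # 1 for a perfect 3-jet event, 3*sqrt(3)/8 spherical
\end{verbatim}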
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{ZPhys-C8-11.eps}}}
\caption{Experimental distribution of the reconstructed reduced energy $x_1$ of the fastest triplicity jet for the $\Upsilon$-direct and off-resonance data taken by the PLUTO collaboration
compared to Monte Carlo
calculations for various models (from Ref.~\cite{Berger81}).
}
\label{fig:ZPC8-11}
\end{figure}
All this
information taken together demonstrates that the $\Upsilon$-direct decay data are
very well reproduced by the 3-gluon model while all the other models
disagree with $\Upsilon$-direct data.
These findings were further backed up later by the CLEO collaboration at the
CESR storage ring at Cornell \cite{Berkelman} and the ARGUS
collaboration at the DORIS ring at DESY \cite{Albrecht}.
\section{Jets in QCD and at PETRA and PEP}
\label{sec:PETRA}
To put the contents of this chapter in historical perspective, we would like to introduce the
main detectors which played an important role in the development of jets and detailed tests
of QCD at PETRA and PEP. In doing this, however, we will be very brief and refer the interested
readers to the review by Gidal {\it et al.}~\cite{Gidal:1985cr}, which is a
compendium of the properties and performance characteristics of the major high energy physics
detectors in that epoch, and the review by Lynch~\cite{Lynch:1987}. In alphabetical order,
these detectors were CELLO, JADE, MARK-J,
PLUTO and TASSO (all located at the PETRA $e^+e^-$ rings at DESY, Hamburg), DELCO,
the High Resolution Spectrometer HRS, the Magnetic Calorimeter MAC, MARK II, MARK III, and the
Time Projection Chamber TPC (all located at the PEP $e^+e^-$ ring at SLAC, Stanford). As already
mentioned in the introduction, PETRA started data runs in 1978 with a maximum beam energy
of 23.6 GeV, and PEP came a little later in 1980 with a maximum beam energy of 15 GeV. These
detectors were involved in measurements for almost a decade, ending their runs as LEP started
taking data at higher energies.
\subsection{Jet-like distributions from the weak decays of heavy quarks}
\label{sec:hqjets}
The process $e^+ e^- \to q \bar{q} g$ leads to $p_T$-broadening of the quark jets and
eventually to three-jet topologies as the centre-of-mass energy increases. There is another source
of $p_T$-broadening in $e^+e^-$ annihilation due to the production of a heavy
quark-antiquark pair $e^+ e^- \to Q \bar{Q}$, and the subsequent weak decays of the
heavy quarks/hadrons. For the
centre-of-mass energies available at the $e^+e^-$ colliders PEP and PETRA, the
heavy quarks whose production and decays had to be correctly taken into account were
the charm-anticharm ($c\bar{c}$) and bottom-antibottom ($b\bar{b}$) pairs.
Surveying the theoretical predictions of the
top quark mass in the PEP and PETRA era, most estimates put
it at around 10--15 GeV~\cite{Georgi:1978fu,Fritzsch:1979zq}.
Hence, the production of a top-antitop pair was widely anticipated at these
colliders~\cite{Brandelik:1979bv,Berger:1979wn},
and their characteristic jet topologies were worked out in the context of the
Cabibbo-Kobayashi-Maskawa (CKM)
6-quark model~\cite{Cabibbo:1963yz,Kobayashi:1973fv}. However, as subsequent developments showed,
there were no top quarks to be seen at PETRA and PEP (or at LEP).
Thanks to the Fermilab-Tevatron~\cite{:2009ec}, the
top quark has a measured mass of about 173 GeV.
The event topology in $e^+e^-$ annihilation is
sensitive to the onset of the $Q\bar{Q}$ threshold. The data in the center-of-mass energy
range $9.4 ~{\rm GeV} \leq \sqrt{s} \leq 17.0 ~{\rm GeV}$ were analysed~\cite{Ali:1979upa}
in terms of the measures of jettiness,
$\langle S \rangle $ and $\langle 1-T\rangle$, which showed a clear step as the $B\bar{B}$ threshold is crossed.
The data taken by the PLUTO collaboration~\cite{Berger79} at 9.4 GeV at DORIS, and at 13.0
and 17.0
GeV at PETRA by the PLUTO~\cite{Berger:1979bp} and TASSO~\cite{Brandelik:1979cj}
collaborations were well described by a theoretical
Monte Carlo~\cite{Ali:1978uy} taking into account the production processes $e^+ e^- \to c \bar{c}$ and
$e^+ e^- \to b \bar{b}$, with the subsequent non-leptonic decays $c \to s u \bar{d}$ and
$b \to c \bar{u} d$, following the CKM theory of weak decays.
The effects of heavy quark production and decays above their respective thresholds on the
jet distributions are taken into account by a three-step modification of the light
quark pair production and subsequent fragmentation~\cite{Ali:1978uy}. The heavy quark
mass enters the
Lorentz-invariant density matrix for $e^+e^- \to Q \bar{Q}$ (here $Q^2=s$):
\begin{equation}
\vert M\vert^2= \frac{\alpha^2}{Q^4} [ (\ell_+p_1)(\ell_-p_2) + (\ell_+p_2)(\ell_-p_1)
+ m_Q^2 Q^2/2]~,
\end{equation}
where $\ell_-(\ell_+)$ is the electron (positron) momentum and $p_1(p_2)$ is the momentum
of $Q(\bar{Q})$, and the quark mass is denoted by $m_Q$. In the second step, the heavy
quark (antiquark) fragments into a heavy hadron and a number of light hadrons, determined by
a function $f_Q^H(z)$, which peaks increasingly near $z \to 1$, as $m_Q$ increases.
In the third step, the heavy hadrons decay, dominantly non-leptonically, modelled on the
quark transitions
$Q(p) \to q_1(q_1) + \bar{q}_2(q_2) + q_3(q_3)$~\cite{Ali:1978kn}. These effects are
important quantitatively for jet physics for the lower PETRA energies
(typically $\leq 30$ GeV).
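The peaking of $f_Q^H(z)$ near $z \to 1$ for heavy quarks can be illustrated
numerically. The sketch below (ours) uses the later, widely adopted
parametrisation of Peterson {\it et al.}; the Monte Carlo of~\cite{Ali:1978uy}
employed its own form of the fragmentation function, so this is a qualitative
illustration only:
\begin{verbatim}
import numpy as np

def peterson(z, eps):
    """Peterson et al. fragmentation shape,
    f(z) ~ 1/(z (1 - 1/z - eps/(1-z))^2), with eps ~ (Lambda/m_Q)^2:
    heavier quarks peak closer to z = 1. Illustration only."""
    return 1.0 / (z * (1.0 - 1.0/z - eps/(1.0 - z))**2)

z = np.linspace(0.01, 0.99, 491)
for label, eps in [("charm", 0.05), ("bottom", 0.006)]:
    fz = peterson(z, eps)
    fz /= np.trapz(fz, z)                      # normalise to unit area
    print(label, "f(z) peaks at z =", round(z[np.argmax(fz)], 2))
\end{verbatim}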
\subsection{3-jet events and cross sections at PETRA}
\label{sec:gluon-jet}
As discussed earlier, the characteristic feature of the process $e^+ e^- \to q\bar{q}$ with the
subsequent fragmentation of the quarks and the antiquarks into a jet of hadrons
is that it leads to a two-jet configuration. In QCD, the diagrams shown
in Fig.~\ref{fig:qqg} modify this picture. These corrections being proportional to
$\alpha_s(Q^2)$, the QCD coupling constant at the scale $Q^2$, are small.
However, the process $e^+ e^- \to q \bar{q} g$
may reflect itself in a structure of the final states that topologically
is different from the dominant process $e^+ e^- \to q \bar{q}$. The radiated gluon
provides a new (non-local) mechanism for producing large-$p_T$ hadrons, which,
unlike the $p_T$ of the hadrons generated in the process $e^+ e^- \to q \bar{q}$,
is expected to increase with the $e^+ e^-$ centre-of-mass energy. Thus, broadening
of the transverse momentum of the hadrons with increasing centre-of-mass energy
is a consequence of gluon bremsstrahlung. It was argued
in~\cite{THgluon} that a corollary of this phenomenon is that
a third jet should exist in the direction of the large $p_T$ particle. In particular,
if there is enough phase space available, i.e. for large enough $Q$,
a three-jet topology in the shape of ``Y'' (Mercedes-Benz symbol) should
emerge, clearly distinguishable from the (dominant) oblate cigar topology
corresponding to two-jet events.
The calculation for the process $e^+ e^- \to q(p_1) + \bar{q}(p_2) + g(p_3)$,
shown in the upper two Feynman diagrams in Fig.~\ref{fig:qqg} leads to the
following (Dalitz) distribution:
\begin{equation}
\frac{1}{\sigma_0} \frac{d^2\sigma}{dx_1 dx_2}=\frac{\alpha_s(Q^2)}{2 \pi} C_F
\frac{x_1^2 + x_2^2}{(1-x_1)(1-x_2)}~,
\label{eq:egr77}
\end{equation}
where $Q^2=4E^2$, $x_i=E_i/E=2E_i/Q$, and $E_i$ are the energies of the quark,
antiquark, and gluon, with $x_1+x_2+x_3=2$, and $\sigma_0$ is
the lowest order $e^+e^- \to {\rm hadron}$ cross section given in Eq.~(\ref{eq:sigma-0}).
The differential cross section in Eq.~(\ref{eq:egr77}) diverges near the end-points
$x_{1,2} \to 1$, and indeed has infra-red and collinear divergences. We shall discuss finite
2-jet and 3-jet cross sections in the next subsection, but for the present discussion
these divergences can be removed by a reasonable cut-off procedure, such as a
cut-off $Q_0^2$ on the invariant masses $s_{13}=Q^2(1-x_2)$ and $s_{23}=Q^2(1-x_1)$,
yielding a finite lowest order three-jet fraction.
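As an illustration of such a cut-off procedure, the sketch below (ours)
integrates Eq.~(\ref{eq:egr77}) numerically over the region
$s_{13}, s_{23} > Q_0^2$, i.e. $x_{1,2} < 1-y$ with $y=Q_0^2/Q^2$:
\begin{verbatim}
import numpy as np

CF = 4.0/3.0

def three_jet_fraction(alpha_s, y, n=2000):
    """LO 3-jet fraction: Eq. (egr77) integrated over the region
    s13, s23 > Q0^2, i.e. x1 < 1-y and x2 < 1-y with y = Q0^2/Q^2."""
    xs = (np.arange(n) + 0.5) / n          # midpoint grid on (0, 1)
    x1, x2 = np.meshgrid(xs, xs)
    ok = (x1 < 1 - y) & (x2 < 1 - y) & (x1 + x2 > 1)
    f = np.where(ok, (x1**2 + x2**2) / ((1 - x1) * (1 - x2)), 0.0)
    return alpha_s / (2*np.pi) * CF * f.sum() / n**2

print(three_jet_fraction(0.13, 0.04))
\end{verbatim}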
\subsection{Experimental evidence of three-jet events at PETRA}
While valuable tests of QCD were performed in studies of the $\Upsilon$ decays, based on
the underlying mechanism $\Upsilon \to 3 g$ and the subsequent fragmentation of the
gluons, three-jet events were first observed in
$e^+e^-$ annihilation at PETRA in 1979 by the four experimental
collaborations: TASSO~\cite{EXP4gluon-TASSO}, MARK-J~\cite{EXP4gluon-MARK-J},
PLUTO~\cite{EXP4gluon-PLUTO} and JADE~\cite{EXP4gluon-JADE}. Since the process
$e^+e^- \to q \bar{q} g$ leads to planar events, the search for three-jet events in these
experiments concentrated mainly on demonstrating an excess of planar events
compared to the estimates based on the 2-jet final states around $\sqrt{s}=27$ GeV, where
most of the early experiments at PETRA were carried out.
Such quantitative analyses were
backed up by topologically well separated 3-jet events. Fig.~\ref{fig:TASSO-3j} shows
the momentum-space representation of a representative two-jet and a three-jet event measured
by the TASSO collaboration, analysed on the basis of the sphericity tensor and
jettiness~\cite{WuZo}.
\begin{figure}
\center{
\resizebox{0.95\columnwidth}{!}{
\includegraphics{PL86B-6-1.eps}
\includegraphics{PL86B-6-2.eps}}}
\caption{Momentum space representation of a two-jet event (a) - (c) and a three-jet event
(d) - (f)
in each of the three projections, (a),(d) $\hat{n}_2$ - $\hat{n}_3$ plane; (b),(e)
$\hat{n}_1$ - $\hat{n}_2$ plane; (c), (f) $\hat{n}_1$ - $\hat{n}_3$ plane.
Here $\hat{n}_i$ are the three axes of the sphericity tensor. (From TASSO~\cite{EXP4gluon-TASSO}).
}
\label{fig:TASSO-3j}
\end{figure}
The MARK-J measurements of the distributions in oblateness (defined below)
at $\sqrt{s}=17$ GeV and at higher energies (27.4 + 30 + 31) GeV
are shown in Fig.~\ref{fig:MARK-J-3j}(a) and Fig.~\ref{fig:MARK-J-3j}(b),
respectively. For this measurement, the coordinate system is
defined by the thrust axis $\vec{e_1}$, the major axis $\vec{e_2}$, which is in the plane
perpendicular to $\vec{e_1}$, and is in the direction along which the projected energy in
that plane is maximised, and the minor axis, $\vec{e_3}$, which is orthogonal to both
$\vec{e_1}$ and $\vec{e_2}$. Oblateness is then defined as
\begin{equation}
{\cal O}=F_{\rm major} - F_{\rm minor}~,
\end{equation}
where $F_{\rm major}=\sum_{i}|\vec{p_i}\cdot\vec{e_2}|/\sum_i|\vec{p_i}|$ and
$F_{\rm minor}=\sum_{i}|\vec{p_i}\cdot\vec{e_3}|/\sum_i|\vec{p_i}|$.
The two frames on
the r.h.s. of this figure show the energy flow in the event plane defined by the thrust and
major axes (upper frame) and by the thrust and the minor axes (lower frame).
These measurements were compared with the
$q\bar{q}$ (two-jet) and $q\bar{q}g$ (three-jet) Monte Carlo models~\cite{PHENgluon1,PHENgluon2},
and clearly favoured
the $q\bar{q}g$ description, in a statistically significant way.
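How oblateness is obtained from a set of measured momenta can be sketched as
follows (ours; the axes are found here by a crude random-direction search
rather than by the exact algorithms used in the experimental analyses):
\begin{verbatim}
import numpy as np

def best_axis(p, directions):
    """Direction maximising sum_i |p_i . e| / sum_i |p_i|."""
    proj = np.abs(p @ directions.T).sum(axis=0)
    k = int(np.argmax(proj))
    return directions[k], proj[k] / np.linalg.norm(p, axis=1).sum()

def oblateness(p, ndir=4000, rng=np.random.default_rng(0)):
    """O = F_major - F_minor with thrust/major axes from a random search."""
    dirs = rng.normal(size=(ndir, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    e1, _ = best_axis(p, dirs)                       # thrust axis
    perp = dirs - (dirs @ e1)[:, None] * e1          # project into the plane
    perp /= np.linalg.norm(perp, axis=1, keepdims=True) + 1e-12
    e2, f_major = best_axis(p, perp)                 # major axis
    e3 = np.cross(e1, e2)                            # minor axis
    f_minor = np.abs(p @ e3).sum() / np.linalg.norm(p, axis=1).sum()
    return f_major - f_minor
\end{verbatim}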
\begin{figure}
\center{
\resizebox{1.0\columnwidth}{!}{
\includegraphics{MARK-J-3jet-2.eps} $\;\;\;\;\;\;\;\;\;\;\;$ \includegraphics{MARK-J-3jet.eps}}}
\caption{Left-hand frames: Normalised Oblateness distribution at $\sqrt{s}=17$ GeV (a), and
at $\sqrt{s}=27.4$ - $31.6$ GeV (b). The solid curves are the predictions based on a
$Q\bar{Q}g$ model and the dashed curves are based on the $Q\bar{Q}$ model with
$\langle p_T\rangle=325$ MeV (denoted as $q\bar{q}g$ and $q\bar{q}$, respectively, in this review).
The dashed-dotted curve in (b) is the $Q\bar{Q}$ model prediction with
$\langle p_T \rangle = 425$ MeV ($Q=u, d, s, c, b$).
Right-hand frames: Energy flow in the event plane defined by (a) the thrust and the major axes,
and (b) by the thrust and the minor axes with the events satisfying the cuts
thrust $<0.8$ and oblateness $>0.1$ at $\sqrt{s}=27.4$ - $31.6$ GeV.
The energy value is proportional to the radial distance; dots are the experimental measurements
(From MARK-J~\cite{EXP4gluon-MARK-J}).
}
\label{fig:MARK-J-3j}
\end{figure}
PLUTO studied the averages of the momenta of the charged particles
$\langle p_\parallel\rangle$, where $p_\parallel$ is the longitudinal momentum,
$\langle p_\perp\rangle$ and $\langle p_\perp^2\rangle$, measured relative to the thrust
axis of the event as a function of the c.m. energy. Their analysis showed that the
quantities $\langle p_\parallel\rangle$ and $\langle p_\perp\rangle$ do not discriminate
well between the $q\bar{q}$ and $q\bar{q}g$ models, but the energy dependence of
$\langle p_\perp^2\rangle$ is better described if gluon bremsstrahlung is included.
To study this effect in more detail, they distinguished for every event the two jets which
are separated by a plane perpendicular to the thrust axis. The jet with the lower (higher)
average $\langle p_\perp\rangle$ is called the slim (fat) jet. Fig.~\ref{fig:PLUTO-3j}(a)
shows $\langle p_\perp^2\rangle$ of the charged particles as a function of the c.m. energy,
where the average is taken over the charged hadrons in all slim (fat) jets. For the slim jet
the $q\bar{q}$ and $q\bar{q}g$ predictions from the Monte Carlo~\cite{PHENgluon1} are very similar
and the data are in agreement
with both. For the fat jet, however, the data clearly favour $q\bar{q}g$, and $q\bar{q}$ is
ruled out. Fig.~\ref{fig:PLUTO-3j}(b) and \ref{fig:PLUTO-3j}(c) show the so-called
``sea-gull plot'', obtained by plotting $\langle p_\perp^2\rangle$ against the
variable $x_p=p/p_{\rm beam}$, at lower c.m. energies 13 and 17 GeV and at higher
energies 27.6, 30 and 31.6 GeV, respectively. At the lower energy
(Fig.~\ref{fig:PLUTO-3j}(b)), there is very little difference between $q\bar{q}$ and
$q\bar{q}g$ predictions. For the higher energies (Fig.~\ref{fig:PLUTO-3j}(c)), $q\bar{q}g$
predicts a genuine one-sided jet broadening caused by the gluon jet; the effect is quite
dramatic, especially at high $x_p$. The TASSO collaboration~\cite{EXP4gluon-TASSO} performed a very
similar analysis.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{Pluto-3jet-1.eps}}}
\caption{Average observed $\langle p_T^2\rangle$ of charged particles in the slim and
fat jets as
a function of the c.m. energy (a). Sea-gull plots ($\langle p_T^2\rangle$ as a
function of $x_p=p/p_{\rm beam}$, where $p=\sqrt{p_\parallel^2 +p_T^2}$)
for slim and fat jets in two separate energy ranges (b), (c).
The solid and dashed curves are
$q\bar{q}g$ and $q\bar{q}$ predictions, respectively. In (c), the dashed curve
corresponds to $\sigma_q=0.247$ GeV (default value) and the dash-dotted curve to
$\sigma_q=0.35$ GeV.
(From PLUTO~\cite{EXP4gluon-PLUTO}).
}
\label{fig:PLUTO-3j}
\end{figure}
The JADE analysis is based on the normalised sphericity tensor $S_{\alpha\beta}$ (defined in
Eq.~(\ref{eq:Salfabeta}))
and the resulting eigenvalues $Q_1, Q_2, Q_3$ obtained by diagonalising this tensor on an
event-by-event basis. The variables which play a central role
in this analysis are the sphericity, $\frac{3}{2}(Q_1+Q_2)$,
and the planarity, $Q_2-Q_1$. Fig.~\ref{fig:JADE-3j} shows the planarity
distribution $dN/d(Q_2-Q_1)$ measured by JADE at $\sqrt{s}=27.7$ and 30 GeV. Their data
are compared with a $q\bar{q}$ model, with $\sigma_q=250$ MeV and 350 MeV, both of
which fail to describe the data. The $q\bar{q}g$ model describes the data well.
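The JADE shape variables are straightforward to compute from the momenta; a
minimal sketch (ours, assuming the standard normalisation
$S_{\alpha\beta}=\sum_i p_{i\alpha}p_{i\beta}/\sum_i |\vec{p}_i|^2$ for the
sphericity tensor) reads:
\begin{verbatim}
import numpy as np

def sphericity_planarity(p):
    """Eigenvalues Q1 <= Q2 <= Q3 of S_ab = sum_i p_ia p_ib / sum_i |p_i|^2
    and the shape variables built from them."""
    S = np.einsum('ia,ib->ab', p, p) / (p**2).sum()
    q1, q2, q3 = np.linalg.eigvalsh(S)        # ascending order
    return 1.5 * (q1 + q2), q2 - q1           # sphericity, planarity
\end{verbatim}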
The results reviewed in this section were the first measurements
through which the effect of a third (gluon) jet was convincingly established in
$e^+e^-$ annihilation. This is an important milestone in the confirmation of QCD
in which jet physics played a central role. From a theoretical point of view,
observation of the gluon jet was inevitable. Like many other discoveries in particle
physics, this discovery needed high energy $e^+ e^-$ beams, particle detectors well equipped to
measure the characteristics of the hadrons, and data analysis techniques.
This was the work of dedicated teams of machine builders and experimental physicists who should be
credited with the discovery.
For interested readers we refer to individual accounts leading to the discovery
of the gluon
jets~\cite{Stella:2010ne,Schopper:1980jd,Wu:1992vc,Branson:1994eu,Soding:1996zk,Ellis:2009zz,Soding:2010zz}, but
stress that this list of references is by no means exhaustive.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{JADE-3jet-1.eps}}}
\caption{The planarity distribution compared with the model predictions
for $e^+e^- \to q\bar{q}$ and $e^+e^- \to q\bar{q}g$ at $\sqrt{s}=27.7, 30$ GeV.
(From
JADE~\cite{EXP4gluon-JADE}).
}
\label{fig:JADE-3j}
\end{figure}
\subsection{Quantitative studies of QCD at PETRA and PEP}
\label{sec:PEP-PETRAJets}
Subsequent to the discovery of the gluon jet, the four PETRA collaborations, JADE,
Mark-J, PLUTO
(later replaced by CELLO) and TASSO made many more measurements
in $e^+e^-$ annihilation to hadrons,
in which further evidence for the gluon jet was presented.
These prompted quantitative
studies of QCD for inclusive jet-observables, like thrust and the Fox-Wolfram shape
variable~\cite{Fox:1978vu}
etc., and for jet topology, like the 2-jet and 3-jet rates etc.
Also the gluon spin was determined, following a suggestion in~\cite{Ellis:1978wp}.
One important issue was
the universality of the quark-gluon coupling $\alpha_s(Q^2)$, i.e. to check
whether the same value is obtained independently of the observable, and whether the
measurements of $\alpha_s(Q^2)$ at various values of $s=Q^2$ were consistent
with the evolution anticipated by the renormalisation group. These attempts
to obtain $\alpha_s(Q^2)$ required the calculation of next-to-leading order corrections
to the topological jet-rates and inclusive jet-observables,
and also required a better understanding of
the non-perturbative models used to interpret the experimental data. These theoretical
and phenomenological studies often took the form of detailed Monte Carlo programs
without which no realistic comparison of theory and experiment was possible. In fact,
since the days of experimentation at PETRA and PEP, Monte Carlo based theoretical
frameworks have become indispensable for the quantitative analysis of data, as
witnessed later at LEP, HERA and the Tevatron, and now at the LHC.
In $O(\alpha_s(Q^2))$, 2-jet cross sections defined by a jet-resolution criterion,
such as the Sterman-Weinberg jet-cones or the jet invariant mass, receive contributions
from the virtual corrections to the process $e^+e^- \to q \bar{q}$, and soft or collinear
configurations from the processes $e^+ e^- \to q \bar{q} g$.
In $O(\alpha_s^2(Q^2))$, the 3-jet cross sections receive
contribution from the virtual corrections to $e^+ e^- \to q \bar{q} g$ and soft and
collinear configurations from the 4-parton processes $e^+ e^- \to q \bar{q} gg$
and $e^+ e^- \to q \bar{q} q \bar{q}$. The hard and non-collinear
configurations in the 4-parton processes
give rise to 4-jet cross sections, with the leading
contribution arising in $O(\alpha_s^2(Q^2))$, whose rates were calculated
in~\cite{Ali:1979rz,Ali:1979wj} including the quark mass effects. They were important to
check the non-abelian character of QCD, as discussed later.
The first complete next-to-leading order corrections to event shapes up to order
$\alpha_s^2$ were undertaken by Ellis {\it et al.}~\cite{Ellis:1980nc,Ellis:1980wv}. They presented
their results in terms of the tensor
\begin{equation}
\theta^{ij}= \left(\sum_a p_a^i p_a^j/|\vec{p}_a|\right) \left(\sum_a |\vec{p}_a|\right)^{-1}~,
\label{eq:ERT-C}
\end{equation}
where $p_a^i$ are the components of the centre-of-mass three-momentum of hadron $a$, and the
sum runs over all hadrons. The eigenvalues of $\theta$ are determined by the characteristic
equation
\begin{equation}
\lambda^3 - \lambda^2 +\frac{1}{3} C\lambda -D/27=0,~~~0 \leq C, D \leq 1~.
\end{equation}
The quantities $C$ (also called the Fox-Wolfram shape variable~\cite{Fox:1978vu})
and $D$ are symmetric functions of the eigenvalues, defined as
\begin{equation}
C\equiv 3(\lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3 \lambda_1),
~~D=27\lambda_1\lambda_2\lambda_3~.
\end{equation}
Integrating $\frac{1}{\sigma}\frac{d\sigma}{dC}$ in the range
$\frac{1}{2} < C < 1$ yields
\begin{equation}
\frac{1}{\sigma_0} \int_{0.5}^{1.0} dC \frac{d \sigma}{dC}=C_1 \frac{\alpha_s(Q^2)}{\pi}
(1 + C_2 \frac{\alpha_s(Q^2)}{\pi})~.
\end{equation}
Numerically, $C_1=2.8$ and $C_2=18.2 \pm 0.7$ for five quark flavours~\cite{Ellis:1980nc,Ellis:1980wv}. Thus,
large corrections are obtained for the Fox-Wolfram shape variable, $C$.
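For concreteness, a short sketch (ours) computing $C$ and $D$ from particle
momenta via the tensor of Eq.~(\ref{eq:ERT-C}):
\begin{verbatim}
import numpy as np

def c_and_d(p):
    """C and D shape parameters from the tensor theta^{ij} of Eq. (ERT-C)."""
    mod = np.linalg.norm(p, axis=1)
    theta = np.einsum('ia,ib->ab', p / mod[:, None], p) / mod.sum()
    l1, l2, l3 = np.linalg.eigvalsh(theta)
    return 3.0*(l1*l2 + l2*l3 + l3*l1), 27.0*l1*l2*l3
\end{verbatim}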
Another, and experimentally widely studied, example of an inclusive distribution
is thrust. In $O(\alpha_s^2)$, this was first calculated by Vermaseren {\it et al.}~\cite{Vermaseren:1980qz},
and verified subsequently by Ellis and
Ross~\cite{Ellis:1981re} and by others~\cite{Kunszt:1981rp,Clavelli:1981yh,Ali:1981tm}, using the
earlier work reported in~\cite{Ellis:1980nc,Ellis:1980wv}.
In next-to-leading order (NLO) in $\alpha_s$, the thrust distribution in $e^+e^- \to {\rm hadrons}$ is
given by the following expression
\begin{equation}
\frac{d\sigma}{dT}= A_0(T) \frac{\alpha_s(Q^2)}{\pi} + A_1(T)(\frac{\alpha_s(Q^2)}
{\pi})^2~,
\label{eq:thrust2}
\end{equation}
where the functions $A_0(T)$ and $A_1(T)$ are shown in Fig.~\ref{fig:thrust-2}
(note that the variable $t$ used in these plots is the same as $T$ used in the text).
The shapes of $A_0(T)$ and $A_1(T)$ are rather similar, but the $O(\alpha_s^2(Q^2))$
corrections to the thrust distribution are also numerically large. Integrating the
distribution in Eq.~(\ref{eq:thrust2}) up to $T=0.85$ yields
\begin{equation}
\frac{1}{\sigma_0} \int_{0.5}^{0.85} dT \frac{d\sigma}{dT}=K_1 \frac{\alpha_s(Q^2)}
{\pi}(1+ K_2 \frac{\alpha_s(Q^2)}{\pi})~.
\end{equation}
Numerically, $K_1=1.156$, $K_2=17.6 \pm 0.3$ for five quark flavours,
which for $\alpha_s(Q^2)=0.13$ at $\sqrt{s}=35$ GeV yields a correction of
about 70\%.
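As a quick check of this number, inserting the quoted values gives
$K_2\,\alpha_s(Q^2)/\pi = 17.6 \times 0.13/\pi \simeq 0.73$, i.e. a relative NLO
correction of roughly 70\%.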
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PLB106-Fig1.eps}}}
\caption{The scalar functions $A_0(t)$ and $A_1(t)$ for the thrust distribution as
defined in Eq.~(\ref{eq:thrust2}). Solid histograms (based on Ellis, Ross and
Terrano~\cite{Ellis:1980nc}), dashed histograms (Vermaseren {\it et al.}~\cite{Vermaseren:1980qz}).
(From~\cite{Ellis:1981re}).
}
\label{fig:thrust-2}
\end{figure}
Theoretical calculations from~\cite{Ellis:1980nc} were later implemented in the independent jet
Monte Carlo~\cite{PHENgluon2} and used to determine $\alpha_s(Q^2)$ from the inclusive
measurements. The first such determination using the
thrust and oblateness distributions, measured by the TASSO~\cite{Brandelik:1980zw} and
MARK-J~\cite{Newman:1980ck} collaborations, respectively,
yielded~\cite{Ali:1981tm} $\alpha_s(Q=35~{\rm GeV})= 0.128 \pm 0.013$ from the TASSO data and
$\alpha_s(Q=35~{\rm GeV})= 0.120 \pm 0.010$ from the MARK-J data.
Subsequently, an enormous effort
has gone into estimating the effects from the jet resolutions, choice of jet variables,
and fragmentation models. Also, the statistical significance of the
data from the experiments at PETRA and PEP increased enormously over time.
An observable studied
intensively in theory and experiments at PETRA is the energy-energy correlation (EEC)
and its asymmetry (AEEC). EEC is a measure of the energy flow involving two
calorimeters subtending solid angles $\Omega$ and $\Omega^\prime$ with respect to the
incoming $e^+e^-$ axis. Keeping the orientation between the two calorimeter cells fixed
$(=\chi )$, the differential distribution in $\cos \chi$ can be expressed as
\begin{equation}
\frac{1}{\sigma} \frac{d\Sigma^{EEC}}{d \cos \chi} =\frac{1}{\sigma} \sum_{i,j} \int\frac{d\sigma}
{dx_i dx_j d \cos \chi} x_i x_j dx_i dx_j~,
\end{equation}
where $x_i$ are the scaled energies in terms of the c.m. energy $\sqrt{s}$. The experimental
configurations with fixed angle between the calorimeters $\chi$ and the polar angle of one
of the calorimeters $\theta$ are calculable in perturbative QCD~\cite{Basham:1978bw,Basham:1978zq}.
However, most experimental measurements were carried out for the averaged EEC, obtained by
integrating over $\cos \theta$, for which perturbative QCD yields the following
expression (for $m_q=0$)
\begin{equation}
\frac{1}{\sigma_0} \frac{d\Sigma^{EEC}}{d \cos \chi}= \frac{\alpha_s(Q^2)}{\pi} F(\xi)~,
\end{equation}
where $\xi=\frac{1-\cos \chi}{2}$ and $F(\xi)$ is given by~\cite{Basham:1978bw,Basham:1978zq}
\begin{equation}
F(\xi)= \frac{(3-2\xi)}{6\xi^2(1-\xi)} \left[2(3-6\xi + 2\xi^2)\ln(1-\xi) + 3\xi(2-3\xi)
\right]~.
\end{equation}
The averaged (obtained by integrating over $\cos \theta$) AEEC cross section has an obvious definition
\begin{eqnarray}
\frac{1}{\sigma_0} \frac{d\Sigma^{AEEC}}{d \cos \chi} &\equiv&
\frac{1}{\sigma_0} \frac{d\Sigma^{EEC}(\pi -\chi)}{d \cos \chi}
-\frac{1}{\sigma_0} \frac{d\Sigma^{EEC}(\chi)}{d \cos \chi}\nonumber\\
&=& \frac{\alpha_s(Q^2)}{\pi} \left[F(1-\xi) -F(\xi)\right]\equiv \frac{\alpha_s(Q^2)}{\pi}A(\xi)~.
\end{eqnarray}
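The leading-order expressions quoted above are easy to evaluate numerically; a
minimal sketch (ours, implementing $F(\xi)$ exactly as quoted) for the angular
range typically used in the fits is:
\begin{verbatim}
import numpy as np

def F(xi):
    """LO EEC function of Basham et al., as quoted above (massless quarks)."""
    return (3 - 2*xi) / (6*xi**2 * (1 - xi)) * (
        2*(3 - 6*xi + 2*xi**2) * np.log(1 - xi) + 3*xi*(2 - 3*xi))

def A(xi):
    """LO AEEC, A(xi) = F(1 - xi) - F(xi)."""
    return F(1 - xi) - F(xi)

alpha_s = 0.13
chi = np.deg2rad(np.arange(35.0, 90.0, 5.0))    # angular range of the fits
xi = (1 - np.cos(chi)) / 2
print(alpha_s / np.pi * A(xi))                  # LO AEEC cross section
\end{verbatim}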
Effects of quark masses in the EEC and AEEC cross sections were calculated
in~\cite{Ali:1982ub,Ali:1983au,Csikor:1983dt,Cho:1984rq}. The $O(\alpha^2_s(Q^2))$ corrections to these
distributions were calculated numerically~\cite{Ali:1982ub,Ali:1983au,Richards:1982te,Richards:1983sr}.
Restricting the angular range to $-0.95 < \cos \chi <0.95$, where the non-perturbative effects
are relatively small, the NLO corrections
to the EEC cross-section were found to be moderate,
typically $O(35\%)$, but the corresponding corrections to the AEEC were small,
typically $O(10\%)$, giving reasons
to be optimistic about the convergence of perturbative QCD in these variables, particularly the
AEEC.
Measurements of the EEC and AEEC were undertaken by all four experiments at PETRA:
JADE, MARK-J, TASSO, and PLUTO. The AEEC measurements
have been used to determine $\alpha_s(Q^2)$ by comparing them with the NLO
expression~\cite{Ali:1982ub}. The extracted values of $\alpha_s(Q^2)$ are found to be:
$\alpha_s(Q^2= (34~{\rm GeV})^2)=0.115 \pm 0.005$~[JADE]~\cite{Bartel:1984uc},
$\alpha_s(Q^2= (34~{\rm GeV})^2)=0.13$~[MARK-J]~\cite{Adeva:1983ur},
$\alpha_s(Q^2= (34.8~{\rm GeV})^2)$\\
$=0.125 \pm 0.005$~[TASSO]~\cite{Braunschweig:1987ig},
and $\alpha_s(Q^2= (34.6~{\rm GeV})^2)=0.125 \pm 0.005$~[PLUTO]~\cite{Berger:1985xq}. Within errors, these
values of $\alpha_s(Q^2)$ are consistent with each other, and with the ones from oblateness and thrust
distributions, given earlier.
Representative distributions from the JADE~\cite{Bartel:1984uc} and TASSO~\cite{Braunschweig:1987ig}
collaborations are shown in
Fig.~\ref{fig:AEEC-PETRA}, in which
$A(\theta)$ vs.~$\theta$ and $1/\sigma d\Sigma^{\rm A}/d\cos \chi$ vs. $1-\cos \chi$ are plotted, respectively. These measurements are
compared with the perturbative QCD expression, calculated to $O(\alpha_s^2(Q^2))$ and the
agreement is impressive for $(\theta, \chi) > 30^\circ$. For $(\theta, \chi) < 30^\circ$,
one needs to implement non-perturbative effects as well as the resummation of the large
logs to all orders in perturbation theory.
\begin{figure}
\center{
\includegraphics[width=0.45\textwidth, height=8.0cm]{ZPhys-C25-Fig4.eps}
$\;\;\;$
\includegraphics[width=0.50\textwidth,height=8.5cm]{ZPhys-C36-Fig3d.eps}
\caption{The asymmetric part of the energy-energy correlation cross section
measured by the
JADE~\cite{Bartel:1984uc} and TASSO~\cite{Braunschweig:1987ig}
collaborations at PETRA and comparison with the perturbative
QCD calculations including $O(\alpha_s^2(Q^2))$ corrections from ~\cite{Ali:1982ub}.
The distribution in the upper left-hand frame
from the JADE collaboration shows the corrected asymmetry $A(\theta)$ vs. $\theta$, measured at
$\sqrt{s}=14, 22 $ and 34 GeV. The upper right-hand frame from TASSO shows the measurements at
$\sqrt{s}=43.5$ GeV and comparison with the perturbative QCD (solid curve).
\label{fig:AEEC-PETRA}
}}
\end{figure}
A lot of experimental effort also went into studying the topological
cross sections (jet multiplicity) in $e^+e^-$ annihilation experiments at PETRA, PEP,
TRISTAN and later at
LEP. Theoretical predictions for these topologies were calculated in a series of
papers~\cite{Kramer:1986sg,Fabricius:1980fg,Fabricius:1981sx,Gutbrod:1983qa,Kramer:1986mc,Gutbrod:1987jt}.
Making use of this theoretical work, the JADE collaboration measured $\alpha_s(Q^2)$ in a
limited range of $\sqrt{s}$ using the three-jet rate and established the running of
$\alpha_s(Q^2)$. Defining the fractional three-jet rate
$R_3=\sigma_{\rm 3-jet}/\sigma_{\rm tot}$ as a function of $y_{\rm min}$, which is a
cut-off parameter such that $y_{ij} \geq y_{\rm min}$ for any pair of partons $i$ and $j$
and $y_{ij}= M_{ij}^2/s$, the measured jet-rate was fitted to the expression
\begin{equation}
R_3 (y_{\rm min})= C_1 \alpha_s(Q^2) + C_2 \alpha_s^2(Q^2)~,
\end{equation}
where $C_1$ and $C_2$ are $y_{\rm min}$-dependent constants calculated by Kramer and
Lampe (called KL below)
in~\cite{Kramer:1986mc}. The JADE measurements for $R_3(y_{\rm min})$ as a
function of $\sqrt{s}$ in the range $20 < \sqrt{s} < 44$ GeV are shown in
Fig.~\ref{fig:JADE-MARKII-3jet} (left-hand frame)~\cite{Bethke:1988zc}. They follow nicely the RG-prescribed running of
$\alpha_s(Q^2)$ with $\Lambda_{\overline{MS}}=205$ MeV for $0.04 < y_{\rm min} < 0.12$ using KL,
with almost the
same value $\Lambda_{\overline{MS}}=210$ MeV using a calculation by Gottschalk and Shatz
(called GS)~\cite{Gottschalk:1984vy}.
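The jet definition underlying $R_3(y_{\rm min})$ is a pairwise invariant-mass
clustering; a minimal sketch of such a JADE-type algorithm (ours, using the
$y_{ij}=M_{ij}^2/s$ measure quoted above) is:
\begin{verbatim}
import numpy as np

def jade_njets(p4, ycut):
    """Cluster 4-momenta (E, px, py, pz) by repeatedly combining the pair
    with the smallest y_ij = M_ij^2 / s until y_ij > ycut; return n_jets."""
    p4 = [np.asarray(q, float) for q in p4]
    s = sum(q[0] for q in p4)**2            # visible energy squared
    def y(i, j):
        q = p4[i] + p4[j]
        return (q[0]**2 - q[1]**2 - q[2]**2 - q[3]**2) / s
    while len(p4) > 1:
        i, j = min(((i, j) for i in range(len(p4))
                    for j in range(i + 1, len(p4))), key=lambda ij: y(*ij))
        if y(i, j) > ycut:
            break
        p4[i] = p4[i] + p4[j]
        del p4[j]
    return len(p4)
\end{verbatim}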
An even more convincing measurement of the running of $\alpha_s(Q^2)$ was presented
by the MARK II
collaboration~\cite{Komamiya:1989hw} at PEP and SLC. They determined $\alpha_s(Q^2)$
from the differential three-jet
rate $g_3(y_3)|_{y_3=y_{\rm cut}}= \frac{\partial}{\partial y_{\rm cut}}f_2(y_{\rm cut})$,
where $f_2(y_{\rm cut})$ is the fraction of two-jet events defined by the jet resolution
$y_{\rm cut}$. Their result for $g_3(y_3)$ is shown as a function of $y_3$ in
Fig.~\ref{fig:JADE-MARKII-3jet} (right-hand frame) for two values $\sqrt{s}=91$ GeV (SLC)
and $\sqrt{s}=29$ GeV (PEP). The three
curves shown are the predictions of KL~\cite{Kramer:1986mc} for three different
values $\Lambda_{\overline{MS}}=0.1, 0.3$ and 0.5 GeV. Here $\Lambda_{\overline{MS}}$
refers to the QCD scale parameter in a specific renormalisation scheme,
the so-called modified minimal subtraction scheme $\overline{MS}$~\cite{Bardeen:1978yd}.
For further reading on the technical issues of renormalisation and scheme dependences
at a non-specialist level, we refer to a review on perturbative QCD~\cite{Brock:1993sz}.
These measurements yielded
$\alpha_s(Q^2)=0.123 \pm 0.009 \pm 0.005$ at $Q=\sqrt{s}=91$ GeV and
$\alpha_s(Q^2)=0.149 \pm 0.002 \pm 0.007$ at $Q=\sqrt{s}=29$ GeV. The running of
$\alpha_s(Q^2)$ is
clearly established. A comparison with the values of $\alpha_s(Q^2)$ determined from the
measurements of the AEEC cross section at PETRA energies, discussed earlier, also shows
that non-perturbative
effects at these energies are observable dependent and not negligible.
\begin{figure}
\center{
\includegraphics[width=0.48\textwidth, height=9.5cm]{PLB213-Fig2.eps}
$\;\;\;$
\includegraphics[width=0.48\textwidth,height=9cm]{PRL64-Fig1.eps}}
\caption{Left-hand frame: Three-jet event rates measured by the JADE collaboration~\cite{Bethke:1988zc}
as a function of the c.m. energy $E_{\rm cm}$ [GeV]
for the indicated values of the jet resolution parameter $y_{\rm cut}$, together with the
predictions of the order $\alpha_s^2$ perturbative calculations by Gottschalk and Shatz (GS)
and Kramer and Lampe (KL). Right-hand frame: Experimental distribution $g_3(y_3)$ as a function of $y_3$ at (a) $\sqrt{s}=91$ GeV and
$\sqrt{s}= 29$ GeV measured by the MARKII collaboration~\cite{Komamiya:1989hw}. The $y_3$ range used in
the fit for the determination of $\alpha_s$ is defined by the two dashed lines. The curves are
second order perturbative calculations with the indicated values of $\Lambda_{\overline{MS}}$.
}
\label{fig:JADE-MARKII-3jet}
\end{figure}
These investigations were extended to jet rates of higher multiplicity, i.e., four-jet
and five-jet rates.
An earlier paper along these lines is due to the JADE collaboration, in which $n$-jet
rates $(n=2,3,4,5)$
were presented~\cite{Bartel:1986ua}. At this time, NLO corrections to the 4-jet rates
and even LO predictions for the
5-jet final states did not exist. The data were compared with the
leading-logarithmic- approximation
(LLA) model . Similar studies based on the MARK II data at PEP ($\sqrt{s}=29$ GeV)
are found in~\cite{Bethke:1989ma} and~\cite{Bethke:1989jr} using the
so-called ``optimised perturbation theory'', i.e., by fitting the scale.
An earlier attempt to establish the non-abelian nature of QCD from a study of multijet
events was made by the AMY collaboration at the TRISTAN $e^+e^-$ storage ring at the
KEK laboratory~\cite{Park:1989fq}. Their data showed a clear preference for QCD in
contrast to an abelian model. In addition, they showed the running of $\alpha_s(Q^2)$
by measuring the 3-jet rate $R_3$ at $\sqrt{s}=50$ to 57 GeV by comparing their
measurements with those of the JADE collaboration~\cite{Bethke:1988zc} and the
TASSO collaboration~\cite{Braunschweig:1988ug} at PETRA taken at lower c.m. energies.
Other publications towards a determination of $\alpha_s$ from PEP and PETRA are
for example by MARK II~\cite{Lohr:1982wh} and CELLO~\cite{Behrend:1989jh}.
\subsection{String- and String-like effects in Jets}
The data taken by the experimental collaborations at PEP and PETRA have been used also
to investigate non-perturbative effects in the jet profiles with the view of testing
various phenomenological models available in the 1980s. This was important, since
depending on the observables considered, non-perturbative effects influenced also
the measurement of $\alpha_s$. Several groups~\cite{Bartel:1981kh,Bartel:1983ij,Aihara:1984du,Althoff:1985wt}
have used three-jet ($q \bar{q} g $) events to study the impact
of hard gluon bremsstrahlung on the hadronisation process. In these studies they
observed the so-called string effect~\cite{Andersson:1983ia}, predicting a depletion of
particles in the angular region between the quark and antiquark jet
relative to the particle flow in the regions between the quark and gluon jets and the
antiquark and gluon jets. In Fig.~\ref{fig:JADE-string} (left-hand frames), we
show the measurements of the normalised energy flow $(1/E) dE/d\theta$ in planar three-jet events and
the normalised charged particle flow in these events undertaken
by the JADE collaboration~\cite{Bartel:1983ij} between $\sqrt{s}=30$ GeV and 36 GeV at PETRA.
These distributions allowed one
to distinguish between a hadronisation model~\cite{PHENgluon1}
in which the fragmentation proceeds along the parton momenta (the
independent jet IJ model) and the model in which the fragmentation takes place along
the colour-anticolour axes (the LUND string model~\cite{Lund}), discussed earlier. Only the leading order
($O(\alpha_s) $)
matrix elements were taken into account for the gluon bremsstrahlung process ($e^+e^- \to q \bar{q}g$),
which were encoded in these fragmentation models.
As seen in this figure, JADE data on the energy and charged particle flow are better reproduced by
fragmentation along the colour axes~\cite{Lund}.
\begin{figure}
\center{
\includegraphics[width=0.48\textwidth, height=9.5cm]{PLB134-Fig1.eps}
\includegraphics[width=0.48\textwidth,height=9.5cm]{ZPC29-Fig4.eps}}
\caption{Left-hand frames: (a) The normalised energy flow $1/E dE/d\theta$ in the
three-jet events compared with two
model predictions. (b) The normalised charged particle flow $1/n dn/d\theta$.
(c) $1/n dn/d\theta$
with $p_T^{\rm out} > 0.3$ GeV. Here $n$ is the total number of particles used in each plot
(JADE collaboration~\cite{Bartel:1983ij}).
Right-hand frames: Ratios of particle densities in the angular gaps between the jet axes,
defined by
$0.25 < \psi_j^\prime < 0.75$ as a function of $x_{\rm in}$. The calculation of the IJ ($g=q$)
and of the LUND models are shown as shaded bands. a) $N(2)/N(3)$ and b)$N(1)/N(3)$ for
the three-jet event sample (TASSO collaboration~\cite{Althoff:1985wt}).
}
\label{fig:JADE-string}
\end{figure}
A similar analysis was undertaken somewhat later in 1985 by the TASSO
collaboration~\cite{Althoff:1985wt}. In this case, the three-jet events produced in
$e^+e^-$ annihilation
into hadrons at 34.4 GeV were compared with the $O(\alpha_s^2)$ perturbative QCD
calculations convoluted with two different models of fragmentation (IJ and Lund).
The analysis was undertaken in terms of the ``reduced azimuthal angles''
$\psi_j^\prime$ and $x_{\rm in}=p_{\rm in}/E_{\rm beam}$, where $p_{\rm in}$ is the particle
momentum
projected into the event plane. The $\psi_j^\prime$ are defined as
\begin{equation}
\psi_j^\prime =\frac{\psi -\Phi_i}{\Phi_k-\Phi_i}~~, i,j,k=1,2,3~{\rm and~cyclic}~,
\label{eq:reduced-psi}
\end{equation}
where the particle under consideration is located between jets $i$ and
$k$ $(\Phi_i < \psi < \Phi_k)$.
The reduced angles $\psi_j^\prime$ run from 0 to 1. The subscript $j$ denotes the
angular region opposite
to the jet $j$. The analysis was restricted to $x_{\rm in} < 0.1$ and the data
were divided in two samples $x_{\rm in}< 0.04$ and $0.04 <x_{\rm in} <0.1$. The result of the
TASSO analysis is displayed in Fig.~\ref{fig:JADE-string} (right-hand frames) showing
that the distribution of
low energy (soft) hadrons in the 3-jet plane is better described by the LUND colour
fragmentation model than by the independent jet model. The opposite is true for more energetic
particles flowing between the 3 jets.
The ``string effect'' was subsequently attributed to the coherence of soft gluon emission
from the
$q\bar{q}g$ system -- a characteristic feature of the non-abelian nature of
QCD~\cite{Azimov:1986sf}.
This is illustrated by contrasting the case of a soft gluon emission (assumed here as
$g(p_2)$)
in $e^+ e^- \to q (p_+) + \bar{q}(p_-) + g(p_1) +g(p_2)$ from the process in which
the gluon $g(p_1)$
is replaced by a photon, i.e., $e^+ e^- \to q (p_+) + \bar{q}(p_-) + \gamma(p_1) +g(p_2)$.
The angular
distribution of the soft gluon (antenna pattern) in the case of
$e^+ e^- \to q \bar{q} \gamma$ is given by
\begin{equation}
W_{+-}(\phi_2) \equiv 2C_F a_{+-} V(\alpha, \beta) =\frac{4 C_F a_{+-}}
{\cos \alpha - \cos\beta}
\left(\frac{\pi-\alpha}{\sin \alpha} -\frac{\pi-\beta}{\sin \beta} \right)~,
\label{eq;qedflow}
\end{equation}
where $\alpha=\phi_2$ and $\beta=\theta_{+-} - \phi_2$ (see the kinematics shown in the
upper left-hand frame in Fig.~\ref{fig:MARKII-string});
$a_{ik}=1-\vec{n}_i\cdot\vec{n}_k$, with $\vec{n}_i$ being the unit vector in the direction of $\vec{p}_i$,
and $\theta_{+-}$ is the angle between the $q$ and $\bar{q}$ directions. Replacing $\gamma(p_1)$ with
a gluon $g(p_1)$ changes the angular distribution essentially due to the antenna element $g(p_1)$
participating in the emission as well.
One now obtains ($\gamma=\theta_{+1}+ \phi_2 $):
\begin{equation}
W_{\pm 1}(\phi_2)=N_c[a_{+1}V(\alpha, \gamma) +a_{1-}V(\alpha,\gamma)]
+ (2C_F-N_c)a_{+-} V(\alpha,\beta)~.
\label{eq;qcdflow}
\end{equation}
The (soft) particle flow according to these two configurations is illustrated
in Fig.~\ref{fig:MARKII-string} (upper right-side frame)
showing that the flow opposite to the direction of $\vec{n}_1$ is appreciably
lower for the case of a gluon than for a photon due to the destructive interference
in the case of QCD ($q\bar{q} g g $).
This phenomenon can be qualitatively understood. Omitting the small contribution from
the second terms in Eq.~(\ref{eq;qcdflow}), one reduces this equation to the sum of two
independent quark
antennas ($+,1$) and ($-,1$). Therefore, in this approximation, the total particle flow
can be obtained
by the simple incoherent composition of two ``annihilations'' $e^+e^- \to q \bar{q}$,
boosted from their
respective rest frames to the overall $q\bar{q}g$ c.m. frame. It is clear that the
angular region between
the $q$ and $\bar{q}$ will be depopulated as it is opposite to the boost direction of
both two-jet configurations.
This perturbation-theory-based scenario ($3 = 2+2$ plus Lorentz boost) then coincides with
the fragmentation of the
gluon in the process $e^+ e^- \to q \bar{q} g$ events in the LUND fragmentation model.
The independent jet
model misses this, as the gluon fragments independently on its own. Consequently, the
Lorentz boost
effect is absent.
The colour coherence study of $e^+e^-$ jets by Azimov et al.~\cite{Azimov:1986sf} suggested
an
interesting experimental test in the form of particle flows in three-jet ($q\bar{q}g $)
and radiative
two-jet ($q\bar{q}\gamma)$ events by observing the negative contribution of the third
antenna. This test was carried out by the MARK II collaboration at PEP at
$\sqrt{s}=29$ GeV~\cite{Sheldon:1986gy} with the result that in the angular region between
the quark and antiquark jets
fewer charged tracks were observed in the three-jet ($q\bar{q}g$) events than in the radiative
two-jet ($q\bar{q}\gamma$) events. Their result is shown in Fig.~\ref{fig:MARKII-string} (lower two frames).
\begin{figure}
\center{
{\hspace*{2.0cm}\includegraphics[width=0.47\textwidth, height=4.8cm]{PLB165-Fig1.eps}}
{\hspace*{-1.0cm}\includegraphics[width=0.44\textwidth, height=4.5cm]{PLB165-Fig2.eps}}
\includegraphics[width=0.95\textwidth, height=9.5cm]{PRL57-Fig1.eps}}
\caption{Upper left-hand frame: Kinematics of non-jet radiation in three-jet events;
Upper right-hand frame:
Directivity diagram of the soft particle flows, projected on to
the $q\bar{q}\gamma$ (dashed lines) and $q\bar{q}g$ (solid line) event planes. Dotted
circles show
the constant levels of density flow [$W(\phi)=1,2,4 $] (from Azimov et
al.~\cite{Azimov:1986sf}).
Lower frame: The charged-track density as a function of the event-plane angle $\phi$.
The angular region between $\phi=0^\circ$ and $\phi=150^\circ$ separates the $q$
and $\bar{q}$ for
the $q\bar{q}\gamma$ events and for 65\% of the $q\bar{q}g$ events (from the MARK II
collaboration~\cite{Sheldon:1986gy}). }
\label{fig:MARKII-string}
\end{figure}
To end this review of the studies of jets at PETRA and PEP, we briefly discuss the
angle ordered perturbation theory, as this approach has been used to develop a parton shower
Monte Carlo~\cite{Herwig1}. In this approach, the phase space of soft gluon emission is
restricted using an angle ordering criterion, which allows one to take into account the
interference (colour coherence) approximately, and hence it reproduces the string and
string-like effects discussed above. Both the LUND fragmentation model (PYTHIA in its
modern incarnation) and the parton shower
Monte Carlo models (such as HERWIG) describe the $e^+e^-$ data adequately. However, the
main drawback of these models is that they do not (easily) match with the fixed order
perturbation theory in next-to-leading and higher orders. The main obstacle is that fixed
order perturbation theory has soft and collinear singularities that give rise to logarithmic
enhancement of higher order contributions. These enhanced terms should be summed to all orders.
However, there is no unique way of doing this. For example, the $p_T$-ordered and the angular ordered
showers can both be arranged to resum these logarithms. Matching with a fixed order
perturbation theory is more easily achieved in $p_T$ ordered showers which, however, do not have
the colour coherence needed by the low-energy $e^+e^-$ data. It is the other way around with
the angle-ordered showers. We will discuss these aspects further in the next section.
\section{Jets in QCD and at LEP}
\label{sec:LEPJets}
In this section we review the salient features of jets at LEP which were helpful
in testing some of the basic elements of QCD more precisely. Just as in the preceding section,
we recall the main detectors at LEP, ALEPH, DELPHI, OPAL, and L3, which collected large data
samples (typically, 4 million hadronic events around the $Z$ resonance for each of the
four LEP experiments. In the second stage, LEP2, the beam energy was increased to about
103 GeV. These detectors proved to be very powerful tools in carrying out precision electroweak
and QCD physics.
\subsection{Quark/Gluon cascades}
Electric charges which are accelerated reduce their energy by
radiating photons preferentially collinear with the flight
direction of the charge. This is a general feature of gauge
theories and, specifically, collinear radiation is predicted
in QCD processes in which quarks and gluons are produced with
high energies. If the observed partons carry away a fraction
$z$ of the parent partons, the splitting functions \cite{split},
{\it cf.} Fig.~\ref{fig:F_split},
\begin{eqnarray}
dP \, [q \to q+g(z)] &=&
\frac{\alpha_s}{2\pi} \,
C_F \,
\frac{1+(1-z)^2}{z} \,
dz \, \frac{dQ^2}{Q^2}
\equiv\frac{\alpha_s}{2\pi} \,
P_{gq}(z) \,
dz \, \frac{dQ^2}{Q^2}~, \nonumber \\
dP \, [q \to q(z) +g] &=&
\frac{\alpha_s}{2\pi} \,
\left[ C_F \, \frac{1+z^2}{(1-z)_+} + 2 \delta(1-z)\right]\,
dz \, \frac{dQ^2}{Q^2}
\equiv\frac{\alpha_s}{2\pi} \,
P_{qq}(z) \,
dz \, \frac{dQ^2}{Q^2}~, \nonumber \\
dP \, [g \to q(z)+\bar{q}] &=&
\frac{\alpha_s}{2\pi} \,
T_R \; [z^2+(1-z)^2] \,
dz \, \frac{dQ^2}{Q^2}
\equiv\frac{\alpha_s}{2\pi} \,
P_{qg}(z) \,
dz \, \frac{dQ^2}{Q^2}~, \nonumber \\
dP \, [g \to g+g(z)] &=&
\frac{\alpha_s}{2\pi} \,
\left( 2 C_A \, [\frac{1-z}{z} + z(1-z) + \frac{z}{(1-z)_+}]
+[\frac{11}{2} -\frac{n_f}{3}]\delta(1-z)\right)
dz \, \frac{dQ^2}{Q^2} \nonumber\\
&\equiv&\frac{\alpha_s}{2\pi} \,
P_{gg}(z) \,
dz \, \frac{dQ^2}{Q^2}~,
\label{eq:dglap}
\end{eqnarray}
universally predict collinear splittings, with $Q^2 \simeq z(1-z) E^2
\Theta^2$ denoting the invariant mass squared of the final parton pair.
The notation $[F(z)]_+$ defines a distribution such that for any sufficiently
regular function $f(z)$,
\begin{equation}
\int_0^1dz f(z)[F(z)]_+ = \int_0^1 dz (f(z) - f(1))F(z)~.
\end{equation}
The bremsstrahlung
splittings $q \to qg$ and $g \to gg$ preferentially
generate soft radiation spectra in the limit $z \to 0$. The group
characteristics are $C_F = 4/3$, $C_A =3$ and $T_R = 1/2$ for
SU(3)$_C$ of QCD.
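For reference, the regular ($z<1$) parts of the kernels in Eq.~(\ref{eq:dglap})
are easily coded up (a sketch, ours; the $\delta(1-z)$ and plus-prescription
pieces act only at the endpoint):
\begin{verbatim}
# Regular (z < 1) parts of the splitting kernels in Eq. (dglap); the
# delta(1 - z) and plus-prescription pieces act only at the endpoint.
CF, CA, TR = 4.0/3.0, 3.0, 0.5

def P_gq(z): return CF * (1 + (1 - z)**2) / z
def P_qq(z): return CF * (1 + z**2) / (1 - z)
def P_qg(z): return TR * (z**2 + (1 - z)**2)
def P_gg(z): return 2*CA * ((1 - z)/z + z*(1 - z) + z/(1 - z))
\end{verbatim}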
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{fig_split.eps}}}
\caption{The three basic splitting processes of quarks and gluons
into pairs of quarks and gluons.
}
\label{fig:F_split}
\end{figure}
Repeated splittings generate cascades of collimated quarks and
gluons. Since the lifetime of the final-state pair in the
splitting processes is long, $\tau^\ast \sim E/Q^2$, the cascade
is expected to be described by a sequence of probabilities and not
by interfering quantum-mechanical amplitudes. If the branching
occurs at a value $Q^2$ without any radiation between the initial
maximum value $Q^2_{max}$ and $Q^2$, the probability is given by
\begin{equation}
d {\mathcal{P}}_{a \to bc} = \frac{\alpha_s}{2 \pi} \,
P_{a \to bc} (z) \, dz \, \frac{dQ^2}{Q^2} \;
\exp \left[ - \sum_{b',c'} \int^{Q^2_{max}}_{Q^2}
\frac{dQ'^2}{Q'^2}
\int dz^\prime \,
\frac{\alpha_s}{2 \pi} \, \hat{P}_{a \to b'c'} (z^\prime) \right]~,
\label{eq:sud}
\end{equation}
where the exponential Sudakov factor~\cite{Sudakov:1954sw} accounts for the non-radiation
probability. Here $\hat{P}_{a \to b'c'} (z^\prime)$ are the same functions as
$P_{a \to b'c'} (z^\prime)$, defined in Eq.~(\ref{eq:dglap}) except for the regularization terms
at $z^\prime \to 1$.
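A crude numerical illustration of the Sudakov factor for $q \to qg$ (ours;
fixed $\alpha_s$, and a $z$ cut-off standing in for the proper regularisation
and resolution criteria) reads:
\begin{verbatim}
import numpy as np

CF = 4.0/3.0

def sudakov_q(Q2max, Q2, alpha_s=0.2, zmin=0.01, n=400):
    """No-emission probability for q -> qg between Q2max and Q2, Eq. (sud),
    with fixed alpha_s and a z cut-off standing in for the proper
    regularisation of P-hat near z -> 1."""
    z = np.linspace(zmin, 1 - zmin, n)
    kernel = np.trapz(CF * (1 + z**2) / (1 - z), z)   # integrated P-hat
    logs = np.log(Q2max / Q2)                         # dQ'^2/Q'^2 integral
    return np.exp(-alpha_s / (2*np.pi) * kernel * logs)

print(sudakov_q(1e4, 1e2))   # evolving from Q = 100 GeV down to 10 GeV
\end{verbatim}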
However, the branching probability Eq.~(\ref{eq:sud}) is refined
by an important coherence effect. If in electrodynamics
a photon splits into an electron$-$positron pair, the pair can emit
photons only at angles less than the angle between the charged pair
as photons propagating at larger angles would see coherent
electron$+$positron states which, being neutral, cannot radiate.
As a result, the emission angles are ordered in the sequence $\Theta_1
> \Theta_2 > ...$ This effect is also predicted in QCD \cite{Herwig1}.
The only difference arises from the fact that the coherent
superposition adds up the colour charges of the daughter partons
to the non-zero value of the parent colour charge so that
wide-angle splitting is generated at a non-zero rate. In addition
to the angular ordering, non-resolved infrared radiation restricts
the energy fractions of the partons in the cascades. These
restrictions on energies and angles can be mapped into the boundary
values of the Sudakov integral after re-expressing the invariant mass
by the angle between the momenta of the daughter partons.
The cascading of the primordial quarks and gluons affects
the observed hadron distributions within the jets. In particular,
energy spectra are softened through the cascading mechanism, multiplicities increase strongly with
energy, and quark and gluon jets will develop different profiles.
Formulated for simplicity by neglecting the change in $k_T$, and restricting to
one parton species, the energy
dependence of the fragmentation function is described by the DGLAP
equation \cite{split,DGL,DGL-2,DGL-3}:
\begin{equation}
\frac{\partial D(z,Q^2)}{\partial \log (Q^2/\Lambda^2)}
= \frac{\alpha_s(Q^2)}{2 \pi} \int^1_z \frac{d\zeta}{\zeta}
P(\zeta) D \left(\frac{z}{\zeta}, Q^2 \right) \,.
\label{eq:fragE}
\end{equation}
The splitting function $P(\zeta)$ consists of two parts (see Eq.~(\ref{eq:dglap})). The first
part describes the standard component and accounts for the
accumulation of particles at $z$ generated by the splitting of partons at
$\zeta \ge z$; the second part accounts for the loss of particles at $z$
due to splitting to smaller energy values.
only on the variable $\frac{z}{\zeta}$ but not on $Q^2$. This would yield a
scale-invariant fragmentation function $D(\frac{z}{\zeta})$. In QCD,
this is obviously modified. The solution of the above equation
leads to striking effects which modify the predictions of the
scale-invariant parton model. In particular:
{\it (i)} For large $z$ values beyond 0.2 the spectrum decreases
with increasing energy while the particles accumulate at small $z$. The loss
of particles by splitting at large $z$ is bigger than the gain
by splitting from yet higher $\zeta$ values. The situation is naturally
reversed at small $z$ values.
{\it (ii)} The constant plateau (characterized by $D(z)$ as in Eq.~(5)) in the
parton model generates a
multiplicity of particles which increases logarithmically with
the length of the plateau $\sim \log (\sqrt{s}/m_h)$. Multiple
splittings raise the multiplicity much more strongly. Solving
Eq.~(\ref{eq:fragE}) for the multiplicity, given by the
integrated fragmentation function, predicts a rise with energy
stronger than exponential:
\begin{equation}
n(s) \sim \exp ({\alpha}^{-1/2}_s (s))
\sim \exp (\log^{1/2} \! s/\Lambda^2) \,.
\end{equation}
In addition, the flat plateau in $\log z^{-1}$ is deformed
to a humpback of Gaussian character~\cite{Azimov:1985by}, with centre
$[\log z^{-1}]_{max} \sim \log s/\Lambda^2$
and width $\sigma \sim \log^{3/4} \! s/\Lambda^2$.
Experimental proof for the energy dependence of the fragmentation
function $D(z,Q^2)$ and the formation of the humpback at small $z$ is
presented in Fig.~\ref{fig:2.2ab}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth, height=6.5cm]{delphi-sv-gq.eps}
$\;\;\;$
\includegraphics[width=0.48\textwidth,height=6.5cm]{PLB247-Fig3.eps}
\caption{(a) Scaling violations in quark and gluon jets (using the DURHAM algorithm).
The solid curves show the DGLAP evolution and the dotted parts of the curves are
extrapolation outside the fit range [DELPHI~\cite{Abreu:1999af}].
(b) Measurements of the $\ln(1/x_p)$ distributions for the center-of-mass energies
between 14 and 91 GeV and comparison with the MLLA~\cite{Azimov:1985by} and
a Gaussian~\cite{Fong:1989qy}. This figure is referred to as the
humpback plateau of the gluon fragmentation function for small $x$ values in the
text [OPAL~\cite{hback}].
}
\label{fig:2.2ab}
\end{figure}
{\it (iii)} While the first splitting of a quark jet $q \to qg$ is
determined by $C_F = 4/3$, the first splitting of a gluon jet
$g \to gg$ is determined by the bigger Casimir invariant $C_A =3$.
Thus, the larger colour charge
of gluons compared with quarks should increase the multiplicity
of gluon jets relative to quark jets asymptotically in the ratio
$C_A / C_F = 9/4$. Similarly, since $dN_{g/q} \sim C_{A/F} \, d \log \Theta$,
the angular widths of quark and gluon jets are different,
$\Theta_g = \Theta^{C_F/C_A}_q$, {\it viz.} gluon jets
are wider than quark jets. Even though the asymptotic limit
has not been reached yet, the particle multiplicities in gluon jets
have been shown to be significantly larger than in quark jets \cite{mult}:
$n_g/n_q > 3/2$. For a recent discussion of particle multiplicities, we
refer to~\cite{Dremin:2000ep}.
{\it (iv)} Small-angle gluon radiation off heavy quarks $Q = c,b$
is suppressed compared to light quarks \cite{DokK}, and the logarithmic
enhancement of the particle yield is restricted
to infrared gluon configurations.
\subsection{Multijets at LEP}
\label{sec:LEP}
Increasing the energy from the PETRA regime of about $\sqrt{s} =$
46 GeV to the LEP regime by factors of two and five in the two phases
of LEP, $Z$-boson runs with $\sqrt{s} =$ 91 GeV and
beyond with $\sqrt{s}$ up to 206 GeV, provided two opportunities: the
experimental analysis of multijet final states \cite{LEP} and
the study of the jet properties over a large range
in energy \cite{PETRA.LEP}. This allowed a more precise measurement of two fundamental
characteristics of QCD~\cite{WZ}, the running of the QCD
coupling with energy as predicted by asymptotic freedom, and the
three-gluon coupling, a fundamental ingredient of asymptotic freedom. We discuss
these measurements below in turn.
\subsection{Inclusive jet observables and determination of $\alpha_s(M_Z)$ at LEP}
\label{sec:LEP-Observables}
All four experiments at LEP, DELPHI~\cite{Abreu:1999rc},
OPAL~\cite{Pfeifenschneider:1999rz}, L3~\cite{Achard:2002kv}, and
ALEPH~\cite{Heister:2003aj}, undertook measurements of
the inclusive jet (or event shape) variables and their moments. In these analyses,
the next-to-leading order ($O(\alpha_s^2)$) perturbative QCD calculations for the
inclusive observables and event shape distributions, discussed in the context of three-jet
events at PETRA, were augmented by theoretical estimates in the next-to-leading-log
approximation (NLLA)~\cite{Catani:1991kz,Catani:1991bd} and others in which the
$O(\alpha_s^2)$ and NLLA schemes were combined~\cite{Catani:1992ua}. It is helpful
to explain the NLLA in more detail. For a generic shape variable, $y$,
well-defined in perturbation theory (i.e., infrared and collinear safe), the typical
leading behaviour at small $y$ is
\begin{equation}
\frac{1}{\sigma}\frac{d\sigma_n}{dy} \sim \alpha_s^n \frac{1}{y} \ln^{2n-1} \frac{1}{y}~.
\end{equation}
The normalised event shape cross section $R(y)$ defined as
\begin{equation}
R(y)\equiv\int_0^ydy \frac{1}{\sigma} \frac{d\sigma}{dy}~,
\end{equation}
then has the expansion
\begin{equation}
R_n(y) \sim \alpha_s^n \ln^{2n} (1/y)\equiv \alpha_s^n L^{2n}~.
\end{equation}
Whenever $L$ is large, one can improve the range and accuracy of perturbative
predictions by identifying these logarithmically-enhanced terms and resumming them to
all orders.
contributions exponentiate, i.e., variables which in the small-$y$ range yield the
logarithm of the shape cross-section in the form $\ln R(y) \sim Lg_1(\alpha_s L)$,
where $g_1(\alpha_s L)$ has a power series expansion in $\alpha_sL$, one has~\cite{Catani:1992ua}
\begin{equation}
R(y) =C(\alpha_s) \Sigma (y, \alpha_s) + D(y, \alpha_s)~,
\end{equation}
with $C(\alpha_s)= 1+ \sum_{n=1}^{\infty} C_n (\alpha_s/2\pi)^n$ and
$\ln \Sigma(y,\alpha_s)= Lg_1(\alpha_sL) + g_2(\alpha_sL) + \alpha_s g_3(\alpha_s L) + ...$.
The function $g_1(\alpha_s L)$ resums all the leading contributions of the form
$\alpha_s^n L^{n+1}$ (defining the LLA), while $g_2(\alpha_s L)$ contains the
next-to-leading logarithmic corrections $\alpha_s^n L^n$ (defining the NLLA), and so on.
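Written out in the standard notation, in which $G_{nm}$ denotes the coefficient of
$\alpha_s^n L^m$ in $\ln \Sigma$, the two functions organise the towers of logarithms as
\[
L\, g_1(\alpha_s L)=\sum_{n\geq 1} G_{n\, n+1}\, \alpha_s^n L^{n+1}~, \qquad
g_2(\alpha_s L)=\sum_{n\geq 1} G_{n\, n}\, \alpha_s^n L^{n}~,
\]
so that each successive function in the exponent is suppressed by one power of $L$
relative to the previous one.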
For the hadronisation effects, either the Monte Carlo based hadronisation models,
PYTHIA~\cite{Pythia,Sjostrand:2007gs}, HERWIG~\cite{Herwig} and ARIADNE~\cite{Lonnblad:1992tz}
were used, or, alternatively, non-perturbative power-correction formulae derived
in~\cite{Dokshitzer:1995zt,Dokshitzer:1997iz,Dokshitzer:1998pt} were employed. This latter
ansatz provides an additive term to the perturbative QCD estimate in mean event shape
variables. Studying the energy-dependence of these mean variables $\langle f\rangle$,
defined as
\begin{equation}
\langle f \rangle = \frac{1}{\sigma_{\rm tot}} \int f \, \frac{d\sigma}{df}\, df =
\langle f_{\rm pert} \rangle + \langle f_{\rm pow}(\alpha_0)\rangle~,
\end{equation}
yielded a measurement of $\alpha_s(\mu)$ and $\alpha_0$, the non-perturbative parameter
characterising the power corrections. We have discussed the $O(\alpha_s^2)$ calculations of
$f_{\rm pert}$ for several observables (thrust, the Fox-Wolfram shape variable $C$, etc.)
in section 5.4. Explicit formulae for $f_{\rm pow} $ are given by
Dokshitzer {\it et al.}~\cite{Dokshitzer:1995zt,Dokshitzer:1997iz,Dokshitzer:1998pt},
and can also be seen, for example, in the DELPHI analysis~\cite{Abreu:1999rc} of the LEP2
data. For related discussion of event shapes in $e^+e^-$ annihilation and deep inelastic
scattering, we refer to~\cite{Dasgupta:2003iq}.
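Schematically, and suppressing known numerical factors (including the two-loop
``Milan factor''), this power correction takes the form
\[
\langle f_{\rm pow}\rangle \sim c_f\, \frac{\mu_I}{Q}\left[\alpha_0(\mu_I)-\alpha_s(Q)
+{\cal O}\!\left(\alpha_s^2(Q)\right)\right]~,
\]
where $\mu_I \simeq 2$ GeV is the infrared matching scale, $\alpha_0(\mu_I)$ is the mean
value of the coupling below $\mu_I$, $c_f$ is an observable-dependent coefficient, and
the subtraction of $\alpha_s(Q)$ avoids double counting with the perturbative part.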
A typical measurement along these lines is shown in Fig.~\ref{fig:PLB456-DELPHI},
in which the measured mean values of $\langle 1-T\rangle$, and the scaled heavy jet mass
$\langle M_h^2/E_{\rm vis}^2 \rangle$ are shown as a function of the center-of-mass energy together
with the results of the fits. The dotted curves in these figures show the perturbative
QCD part only. It is obvious from this figure that even at the highest LEP2 energy,
non-perturbative power corrections are not small. The fits yield $\alpha_s(M_Z)=0.1191
\pm 0.0015 \pm 0.0051$ from $\langle 1-T \rangle$ and a very consistent value from the
other observable $\langle M_h^2/E_{\rm vis}^2 \rangle$. However, the value of $\alpha_0(2~{\rm GeV})$,
the measure of power corrections, differs by about 20\% between the two measurements,
showing considerable non-universality in the parametrisation of $f_{\rm pow}$
by Dokshitzer {\it et al.}~\cite{Dokshitzer:1995zt,Dokshitzer:1997iz,Dokshitzer:1998pt}.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PLB456-DELPHI.eps}}}
\caption{Measured mean values of the observables
$\langle 1 -T \rangle$ and $\langle M_h^2/E_{\rm vis}^2 \rangle$
as a function of the center-of-mass energy. The solid lines present the results of the
fits including power corrections and the dotted lines show the perturbative part only.
(From DELPHI~\cite{Abreu:1999rc}). }
\label{fig:PLB456-DELPHI}
\end{figure}
Along the same lines, the L3 collaboration measured the mean values of several global shape
parameters. For all these variables, the same theoretical
frameworks~\cite{Catani:1991kz,Catani:1991bd,Catani:1992ua} as discussed
above in the context of the DELPHI measurements were used. To compare these calculations
at parton level with the experimental measurements, the effects of hadronisation and
decays were corrected for using the JETSET PS (parton shower) Monte Carlo program. We display
in Fig.~\ref{fig:L3-TC}
the measured distributions in thrust and the variable $C$ at $\sqrt{s}=206.2$ GeV
and comparison with the QCD fits, showing excellent agreement. To determine $\alpha_s$
at each energy, the formalism in~\cite{Catani:1992ua} is used, which yielded
$\alpha_s(M_Z)=0.1227 \pm 0.0012 \pm 0.0058$~\cite{Achard:2002kv}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth, height=7.0cm]{PLB536-L3-T.eps}
$\;\;\;$
\includegraphics[width=0.48\textwidth,height=6.5cm]{PLB536-L3-C.eps}
\caption{Measured distributions of thrust, T, (left-hand frame) and the $C$-parameter in
comparison with QCD predictions at $\sqrt{s}=$206.2 GeV [From L3~\cite{Achard:2002kv}].
}
\label{fig:L3-TC}
\end{figure}
\subsubsection{Jet rates}
Due to the high energy at LEP1, up to four jets could be resolved
experimentally. The number of resolved jets depends strongly
on the criterion by which the jets are defined. Early definitions
used the JADE recombination scheme, which combined particle pairs
on the experimental side, and equally quark/gluon parton pairs
on the theoretical side \cite{Kramer:1986mc}, for scaled invariant masses
$y_{ij} = M^2_{ij}/s$ below a cut-off value $y_{cut}$. In the DURHAM scheme~\cite{Catani:1991hj}
the invariant mass was replaced by $M^2_{ij} = 2 \, \min(E^2_i,E^2_j) \,
[1 - \cos\theta_{ij}]$, essentially the transverse momentum between
the particles or partons for small angles. The cut-off value $y_{cut}$ was
chosen typically from $10^{-1}$ down to $10^{-3}$. Small values of $y_{cut}$
naturally lead to large numbers of jets while the number of jets
is reduced if $y_{cut}$ is increased. The cross section for 3-jet events,
\begin{equation}
\sigma_3[y] = \left( \frac{\alpha_s}{\pi} \right) \sigma_{31} +
\left( \frac{\alpha_s}{\pi} \right) ^2 \sigma_{32} +
\left( \frac{\alpha_s}{\pi} \right) ^3 \sigma_{33} \,,
\end{equation}
has been calculated up to third order in the QCD coupling \cite{Dissertori:2009qa}.
NLO corrections to the four-jet rates in the process
$e^+ e^- \to \gamma^*,Z \to 4$ jets were done around 1996 by
Dixon and Signer~\cite{Signer:1996bf,Dixon:1997th} and subsequently by
Nagy and Trocsanyi~\cite{Nagy:1998bb}.
The measured jet rates in $Z$ decays are displayed in Fig.~\ref{fig:OPAL-5jets}
and compared with parton shower Monte Carlo predictions before (Jetset partons) and
after hadronisation (Jetset hadrons).
Evidently, for $y_{cut}$ below $10^{-2}$ up to four jets can clearly be identified
at LEP1 \cite{EXPnjets}.
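To make the jet definitions above concrete, the following is a minimal sketch of
exclusive Durham ($k_T$) clustering (in Python, for illustration only; the brute-force
pair search and all names are ours, not the optimised implementations used by the
experiments):
\begin{verbatim}
import math

def durham_njets(particles, ycut, s):
    """Number of exclusive Durham jets at resolution ycut.
    particles: list of (E, px, py, pz); s: squared c.m. energy."""
    jets = [list(p) for p in particles]
    while len(jets) > 1:
        ymin, pair = None, None
        for i in range(len(jets)):
            for j in range(i + 1, len(jets)):
                Ei, Ej = jets[i][0], jets[j][0]
                pi, pj = jets[i][1:], jets[j][1:]
                norm = (math.sqrt(sum(a * a for a in pi))
                        * math.sqrt(sum(b * b for b in pj)))
                cosij = sum(a * b for a, b in zip(pi, pj)) / norm
                # y_ij = 2 min(Ei, Ej)^2 (1 - cos theta_ij) / s
                yij = 2.0 * min(Ei, Ej) ** 2 * (1.0 - cosij) / s
                if ymin is None or yij < ymin:
                    ymin, pair = yij, (i, j)
        if ymin > ycut:
            break              # all remaining pairs are resolved: stop
        i, j = pair            # E-scheme recombination: add four-vectors
        jets[i] = [a + b for a, b in zip(jets[i], jets[j])]
        del jets[j]
    return len(jets)
\end{verbatim}
Lowering \texttt{ycut} stops the clustering earlier and hence yields more jets, in
accord with the trend visible in Fig.~\ref{fig:OPAL-5jets}.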
\begin{figure}
\center{
\includegraphics[width=0.75\textwidth, height=6.5cm]{F_numb.jets.eps}
\caption{Relative production rates of $n$-jet events defined in the Durham jet
algorithm scheme~\cite{Catani:1991hj} as a function of the jet resolution
parameter $y_{cut}$. The data are compared to model calculations before
and after the hadronisation process as indicated on the figure~[OPAL\cite{EXPnjets}].}
\label{fig:OPAL-5jets}
}
\end{figure}
A dedicated effort to test QCD and determine $\alpha_s$ was undertaken by the
combined JADE (at PETRA) and OPAL (at LEP) collaborations~\cite{Pfeifenschneider:1999rz},
giving a considerably larger lever arm in energy from 35 GeV to 189 GeV. The observables used
in this (JADE + OPAL) analysis are exclusively based on the multiplicities of hadronic jets.
The $n$-jet fractions, $R_n$, were defined using the JADE~\cite{JADE},
Durham~\cite{Catani:1991hj} and
Aachen/Cambridge~\cite{Wobisch:1998wt} algorithms. We show in Fig.~\ref{fig:OPAL-JADE}
the three-jet
fraction $R_3$ obtained with the JADE and Durham jet algorithms versus the
center-of-mass energy.
(The result from the Aachen/Cambridge algorithm can be seen
in~\cite{Pfeifenschneider:1999rz}.)
Here, the data from PETRA and LEP are compared with the $O(\alpha_s^2)$ prediction.
The renormalisation scale dependence is shown by the scale parameter $x_\mu= \mu/\sqrt{s}$,
with the solid lines corresponding to a fixed value $x_\mu=1$, and the dashed lines are the
results obtained with a fitted scale, indicated on the figure. This and related analyses
reported in~\cite{Pfeifenschneider:1999rz} yield a rather precise value for the QCD
coupling constant $\alpha_s(M_Z)=0.1187 ^{+0.0034}_{-0.0019}$.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth, height=7.0cm]{EPJC17-OPAL.eps}
\caption{Measured 3-jet fraction as obtained with the JADE (left-hand frame) and
Durham schemes (right-hand frame) at
parton level versus the c.m.s. energy $\sqrt{s}$. The data shown are from the JADE and OPAL
collaboration, and the curves are the $O(\alpha_s^2)$ predictions at a fixed scale
(solid lines) and for the fitted values of the scale (dashed lines).
[From OPAL~\cite{Pfeifenschneider:1999rz}.]
}
\label{fig:OPAL-JADE}
\end{figure}
At LEP2 (up to $\sqrt{s}=206$ GeV), the highest jet multiplicity
measured is five, obtained using the variable $y_{\rm cut}$, and inclusive measurements
are available for up to six jets.
To match these data, NLO QCD corrections to five-jet production at LEP have been carried
out by Frederix {\it et al.}~\cite{Frederix:2010ne}, and the fixed-order perturbative
results have been compared with the LEP1 data from ALEPH~\cite{Heister:2003aj}.
Two observables have been used for this comparison:\\
(i) Differential distribution with respect to the five-jet
resolution parameter
$y_{45}$, the maximum value of $y_{\rm cut}$ such that a given event is classified as
a five-jet event by the DURHAM jet algorithm~\cite{Catani:1991hj}:
\begin{equation}
\int_{y_{\rm cut}}^1 dy_{45} \frac{d\sigma}{dy_{45}}=
\sigma_{\rm incl}^{5-{\rm jet}} (y_{\rm cut})~,
\label{eq:sigma-5-jet-incl}
\end{equation}
where $\sigma_{\rm incl}^{5-{\rm jet}}$ is the {\it inclusive} five-jet production cross
section in $e^+e^-$ annihilation. (ii) Five-jet rate $R_5(y_{\rm cut})$, defined as follows:
\begin{equation}
R_5(y_{\rm cut}) = \frac{\sigma_{\rm excl}^{\rm 5-jet}(y_{\rm cut})}{\sigma_{\rm tot}}~,
\label{eq:sigma-5-jet-excl}
\end{equation}
where $\sigma_{\rm excl}^{\rm 5-jet}(y_{\rm cut}) $ is the exclusive five-jet production
cross section. This is also calculated using the Durham jet algorithm by requiring that
exactly five jets are reconstructed. Both observables,
$\sigma_{\rm tot}^{-1} d\sigma/d\ln y_{45}^{-1}$ and $R_5(y_{\rm cut})$, can be written as
a series in $\alpha_s(\mu)$, with the leading contributions starting at
$O(\alpha_s^3)$. A comparison of the leading order and next-to-leading order predictions
for $(1/\sigma) d\sigma/d\ln y_{45}$ vs. $\ln (y_{45})$ and the exclusive 5-jet fraction
$R_5$ vs. $\ln (y_{\rm cut})$ is shown in Fig.~\ref{fig:5jet-FFMZ} with the
ALEPH data in a limited range of these variables (perturbative regime). Hadronisation
effects have been estimated
using the SHERPA MC~\cite{Sherpa}. As is typical of NLO calculations, scale dependence is
significantly reduced compared to the LO calculations.
Agreement between data and NLO theory is impressive and has been used to
extract a value of $\alpha_s$, obtaining $\alpha_s(M_Z)=0.1156^{+0.0041}_{-0.0034}$, which is
in excellent agreement with the world-average discussed below.
The limitation of using fixed-order perturbative QCD in describing the $e^+e^-$ data can
be seen in the ALEPH data shown in Fig.~\ref{fig:Aleph-MC}, which show a characteristic
turnover shape around $-\ln y_{45} \simeq 7.5$. In this region, perturbation theory fails
and a resummation (equivalently, showers) has to be included to describe the data. This
underscores the importance of having MC generators which include showers.
In Fig.~\ref{fig:Aleph-MC}, the ALEPH LEP1 data for
$\sigma_{\rm tot}^{-1} d\sigma/d\ln y_{45}^{-1}$ are compared with the hadron level
predictions of three event generators (the numbers denote the various versions of
these Monte Carlo programmes),
PYTHIA6.1~\cite{Sjostrand:2006za}, HERWIG6.1~\cite{Corcella:2000bw}
and ARIADNE4.1~\cite{Lonnblad:1992tz}. Agreement between data~\cite{Heister:2003aj} and
these MCs is generally good. However, as
shown in the upper frame of Fig.~\ref{fig:Aleph-MC}, hadronic corrections are large, varying
from 0.5 to 1.5 in this range. In addition, differences between hadronisation corrections
among the MCs are as large as 25\%. This deficiency can be overcome to some extent by matching the parton
shower and high multiplicity matrix elements, as, for example, proposed
in~\cite{Catani:2001cc}. This matching procedure has been implemented in the
SHERPA event generator~\cite{Sherpa} and results in improved agreement between the
MC and fixed-order perturbative description.
\begin{figure}
\center{
\resizebox{0.95\columnwidth}{!}{\includegraphics{FFMZ-Fig3.eps}}}
\caption{ALEPH LEP1 data~\cite{Heister:2003aj} compared to leading and next-to-leading order
predictions for $1/\sigma d\sigma/d\ln y_{45}$ plotted against $\ln y_{45}$ (left-hand frame)
and $R_5$
plotted against $\ln (y_{\rm cut})$ (right-hand frame) without hadronisation corrections.
The uncertainty bands are obtained by varying
the scale in the interval $0.15 M_Z < \mu < 0.6 M_Z $, and the solid lines refer to NLO QCD
evaluated at $\alpha_s(M_Z)=0.118$ and $\mu=0.3 M_Z$.
[From Ref.~\cite{Frederix:2010ne}].
\label{fig:5jet-FFMZ}}
\end{figure}
\begin{figure}
\center{
\resizebox{0.95\columnwidth}{!}{\includegraphics{FFMZ-Fig1.eps}}}
\caption{ALEPH data~\cite{Heister:2003aj} for the $y_{45}$ distribution at LEP1
compared to PYTHIA, HERWIG and ARIADNE Monte Carlo results. The upper frames show
detector and hadronisation corrections, respectively. The lowest frame shows the relative
difference between data and event generator predictions. [Figure attributed to H. Stenzel
in Ref.~\cite{Frederix:2010ne}].
\label{fig:Aleph-MC}}
\end{figure}
\subsubsection{The gluon self-coupling}
The study of the three-gluon coupling in gluon splitting to two gluons
requires four (or more) jets in $e^+ e^-$ annihilation. A variety of angular correlations
and energy distributions, see~\cite{THchi4}-\cite{jet42}, can be exploited
to signal the three-gluon coupling of QCD.
The sensitivity to angular distributions may be illustrated in a transparent
example~\cite{THchi4}. A virtual gluon, radiated off the quarks
in the process $e^+ e^- \to q \bar{q} g^\ast$, is polarised preferentially
in the production plane. The subsequent splitting of the virtual gluon
into two real gluons or a quark-antiquark pair is sensitive to the azimuthal
angle $\phi$ between the $g^\ast$ polarisation vector and the decay planes:
\begin{eqnarray}
n_{gg} &=& \frac{[1-z+z^2]^2}{z(1-z)}
+ z(1-z) \, \cos 2 \phi~, \nonumber \\
n_{q\bar{q}} &=& \frac{1}{2} \, \left[ z^2 + (1-z)^2 \right]
- z(1-z) \, \cos 2 \phi \,.
\end{eqnarray}
As a result, the polarisation vector and the decay plane tend to be aligned
in gluon splitting to two gluons. In contrast, if the gluon splits
to a quark-antiquark pair, the decay plane tends to orient itself perpendicular
to the polarisation vector.
The azimuthal distribution can be studied experimentally by measuring the
angle between the planes spanned by the two hardest and the two softest jets.
In an abelian theory the $\phi$ asymmetry is large and the two planes
would orient themselves perpendicular to each other. By contrast,
since the $\phi$-independent term in gluon splitting to two gluons in QCD
is large, the azimuthal asymmetry in this process is predicted to be small,
but the two planes should have a tendency to orient themselves parallel
rather than perpendicular. This is indeed borne out by experimental
analyses~\cite{EXPchi4}, as demonstrated in Fig.~\ref{fig:BZ-L3}.
Quite generally, four jets are produced in $e^+ e^-$ annihilation by
three mechanisms: double gluon bremsstrahlung, gluon splitting to two
gluons, and gluon splitting to a quark-antiquark pair. The cross
section \cite{Ali:1979rz,Ali:1979wj} can be decomposed accordingly:
\begin{equation}
\sigma_4 = \left( \frac{\alpha_s}{\pi} \right) ^2 \, C_F \,
[C_F \sigma_{bb} + C_A \sigma_{gg} + n_f T_R \sigma_{qq}] \,.
\end{equation}
The first term accounts for double gluon bremsstrahlung $q \to qg$
and $\bar{q} \to \bar{q} g$, the second
for gluon splitting to two gluons $g \to gg$, the third for gluon splitting to
$n_f$ quark pairs $g \to q \bar{q}$. The Casimir group characteristics
of the splitting vertices are $[C_F,C_A,T_R] = [4/3,3,1/2]$ in QCD,
while the corresponding characteristics are [1,0,3] in an abelian theory.
Measurements of their ratios yield \cite{EXPgroup}
\begin{eqnarray}
C_A / C_F &=& 2.29 \pm 0.06 [stat.] \pm 0.14 [syst.]~, \nonumber \\
T_R / C_F &=& 0.38 \pm 0.03 [stat.] \pm 0.06 [syst.] \,,
\end{eqnarray}
compared with the theoretical predictions $C_A / C_F = 9/4$ and $T_R / C_F = 3/8$
in QCD. Again we observe a strong signal of the three-gluon coupling in $C_A$,
far away from zero in the abelian theory.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics[width=0.75\textwidth]{F_chi4.eps}}}
\caption{The distribution of the azimuthal angle $\chi_{\rm BZ}$ (called $\phi$ in Eq.~(59))
between the planes formed by the two most
energetic jets and the two least energetic jets~\cite{EXPchi4};
the predictions of QCD including the gluon self-coupling are
compared with an abelian theory without self-coupling of the
gauge fields.
\label{fig:BZ-L3}
}
\end{figure}
\subsubsection{QCD coupling and asymptotic freedom}
A large range of energies can be covered in the measurement of the
QCD coupling $\alpha_s(Q^2)$, extending from 29 GeV (at PEP) to 46 GeV (at PETRA) up to
about 206 GeV at LEP; the lever arm can be extended down to
1.8 GeV by including $\tau$ decays.
The QCD prediction for the running coupling $\alpha_s(Q^2)$ has been
determined to four loops, i.e., up to the fifth power of $\alpha_s$ in the
$\beta$-function \cite{Czakon,Ritbergen-97}. Keeping terms up to second order leads to
the following expression
\begin{equation}
\alpha_s(Q^2) = \frac{1}{\beta_0 \log \left( Q^2/\Lambda^2 \right) }
- \frac{\beta_1 \log \log \left( Q^2/\Lambda^2 \right)}
{\beta^3_0 \log^2 \left( Q^2/\Lambda^2 \right) } + ...,
\end{equation}
with $\beta_0 = (33-2 n_f)/12\pi$, $\beta_1 = (153-19 n_f)/24\pi^2$, ... and
$\Lambda \approx 200$ MeV denoting the QCD scale at which the coupling grows
indefinitely.
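As a numerical illustration of this formula (a minimal sketch: the value of $\Lambda$
below is indicative only, and flavour thresholds are ignored):
\begin{verbatim}
import math

def alpha_s(Q2, Lambda=0.2, nf=5):
    """Two-loop running coupling; Q2 in GeV^2, Lambda in GeV."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    b1 = (153.0 - 19.0 * nf) / (24.0 * math.pi ** 2)
    L = math.log(Q2 / Lambda ** 2)
    return 1.0 / (b0 * L) - b1 * math.log(L) / (b0 ** 3 * L ** 2)

print(alpha_s(91.19 ** 2))   # about 0.116 for Lambda = 0.2 GeV, nf = 5
\end{verbatim}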
An ensemble of observables has been calculated in perturbative expansions
from next-to-leading order (NLO) up to N$^3$LO. Most accurate are the totally
inclusive observables, like total cross sections, followed by jet cross
sections and hadronic shape variables, like thrust. The estimates still
depend significantly on the models used for calculating the shape variables
in the non-perturbative region, see~\cite{Gehrmann2,Dissertori-2009}, for instance.
Combining the experimental measurements with the theoretical apparatus,
the knowledge of the QCD coupling and its evolution with the energy is
summarised in Fig.~\ref{fig:3.2}. The lever arm extends from the
hadronic decays of the $\tau$ lepton throughout the PETRA range up to the highest
energy values in the second phase of LEP. Including deep-inelastic
lepton-nucleon scattering and jet production in hadron collisions,
all the analyses are in remarkable agreement with the theoretical
expectation from asymptotic freedom \cite{asyfree}. It has become customary to quote
the value of $\alpha_s(\mu)$ measured in experiments by scaling the result to the scale
$\mu=M_Z$ using the RG equation. This yields the current world
average~\cite{Basy}
\begin{equation}
\alpha_s(M_Z)=0.1184 \pm 0.0007~.
\end{equation}
\begin{figure}
\center{
\includegraphics[width=0.48\textwidth, height=7.5cm]{asmz-09.eps}
$\;\;\;$
\includegraphics[width=0.48\textwidth,height=7.5cm]{F_QCDcplg.eps}
\caption{Left-hand frame: Summary of measurements of $\alpha_s(M_Z)$, used as input
for the world average value.
Right-hand frame: Evolution of the QCD coupling with energy.
Among other methods, analyses based on hadron production in
$e^+ e^-$ annihilation play a leading role up to the highest
LEP energies. Both frames are from Bethke~\cite{Basy}.
}
\label{fig:3.2}
}
\end{figure}
This ends our discussion of jets in $e^+e^-$ annihilation experiments and their role in QCD.
In summary, essential parts of QCD jets can now be controlled at the level
of typically ten percent ($\alpha_s(M_Z)$ is known to better than 1\%). {\it Nota bene}, the basic interactions and the strength
of the quark-gluon coupling are proven to be asymptotically free. The high level of accuracy
achieved in measuring the gauge couplings - weak, electromagnetic and QCD - is now a diagnostic
tool to probe physics at scales as high as the grand unification
scale.
\section{Jets as tools}
In the preceding sections, we discussed the impact which jet physics in
$e^+e^-$ annihilation experiments
had in establishing QCD quantitatively. This progress owes itself, to some extent, to the fact
that in $e^+e^-$ annihilation the initial state is precisely known.
This is not the case
in other high energy collisions, such as the electro- and photoproduction
processes $ep$ and $\gamma p$, as well as the gamma-gamma and hadron hadron
collisions, $\gamma \gamma$, $p\bar{p}$ and $pp$. Here, jets
could be used as powerful tools for studying other aspects of high energy
collisions. Examples are the partonic composition of the proton, i.e., quark
and gluon densities of the proton (and antiproton), the parton distribution functions
(PDFs) of the photon and the QCD coupling at HERA, Tevatron and the LHC. Yet other
applications of jet physics
include analyses of the electroweak sector and searches for new heavy
particles in many extensions of the Standard Model (SM) -- the QCD and
electroweak theory of particle physics based on the groups
$SU(3)_c\otimes SU(2)_I\otimes U(1)_Y$. Thus
the prominent role of jets in studying QCD phenomena extends to
quite different areas in particle physics.
Before we embark upon illustrating the use of jets as tools in $ep$,
$\gamma \gamma$, $p\bar{p}$ and $pp$ collisions, it is worth pointing out that
in these processes, QCD is at work in both the initial and the final states as opposed
to the $e^+e^-$ annihilation processes, where it influences only the final state
distributions and rates. As seen in Fig.~\ref{fig:feynborn} for the DIS process,
the cross section depends on three components: (i) the probability of finding a
parton in the proton having a fractional longitudinal momentum $x$ (or $ x_{\rm Bj}$),
(ii) the interaction between these partons and the virtual photon, and (iii) the transition
of partons to jets, which theoretically involved the recombination of two partons into
one jet. While perturbative QCD provides a framework
to evolve the PDFs and the fragmentation functions FFs from a low scale $\mu^2=Q_0^2$ to a
high scale $\mu^2=Q^2$, non-perturbative inputs for the PDFs and FFs are required at the
lower scale. This is obtained by parametrising an ansatz at the lower scale. The theoretical
tool which enables this is called factorization, a key concept in the application
of QCD to high energy processes~\cite{Collins:1989gx}. Simply stated, factorization of a process (such as
inclusive- or jet-cross sections in deep inelastic scatterings) allows it to be
expressed as the product of a short-distance part, calculable in perturbative QCD, and
a long distance part, comprising non-perturbative matrix elements or PDFs.
The PDFs are universal, and their evolution from a lower scale to a higher scale is
process-independent. The
division into the short- and long-distance parts is governed by the factorization scale
$\mu_F$. We illustrate the application of factorization with the example of deep inelastic
scattering processes, discussed below.
\subsection{$ep$ Collisions}
In DIS, described here by the process $e p \to e +X$, an electron $e$ with momentum
$k$ emits an off-shell photon with momentum $q$ which interacts with a proton of momentum
$P$. For virtualities of the photon ($Q^2=-q^2>0$) far above the squared proton mass (but
far below the $Z$-boson mass), the differential cross section in terms of the kinematic
variables $Q^2$, $x=Q^2/(2P\cdot q)$ and $y=(q\cdot P)/(k\cdot P)$ is
\begin{equation}
\frac{d^2\sigma}{dx\, dQ^2} = \frac{4\pi \alpha^2}{2xQ^4}\left[(1+(1-y)^2)F_2(x,Q^2) -
y^2F_L(x,Q^2)\right]~,
\end{equation}
where $F_2(x,Q^2)$ and $F_L(x,Q^2)$ are proton structure functions, which encode the
interaction between the photon and the proton. The structure functions are not calculable
in perturbative QCD. In the lowest order, i.e., keeping only the Born contribution
as shown in Fig.~\ref{fig:feynborn} (a) where $x=x_{\rm Bj}$, the structure functions
are given by
\begin{equation}
F_2(x,Q^2)=x\sum_q e_q^2 f_{q/p}(x)~, \qquad F_L(x,Q^2)=0~,
\end{equation}
where $f_{q/p}(x)$ is the PDF for quarks of type $q$
inside the proton. This result, in which the $f_{q/p}(x)$ are independent of the
scale $Q$, is the ``quark-parton model'' picture. Hence, in this picture,
the structure functions $F_2$ and $F_L$ are also independent of $Q^2$. Including
higher order perturbative QCD corrections, the structure function $F_2(x,Q^2)$ has
the form~\cite{Dissertori:2010}
\begin{equation}
F_2(x,Q^2)=x\sum_i \sum_{n=0}^\infty \frac{\alpha_s^n(\mu_R^2)}{(2\pi)^n}\int_x^1 \frac{dz}{z}
C_{2,i}^{(n)}(z,Q^2,\mu_F^2,\mu_R^2)f_{i/p}(\frac{x}{z},\mu_F^2)~.
\end{equation}
Here we have a series in powers of $\alpha_s(\mu_R^2)$, each term involving a coefficient
$C_{2,i}^{(n)}$, which can be calculated using Feynman graphs. The scale $\mu_R$ is
called the renormalization scale at which the QCD coupling constant $\alpha_s(\mu_R^2)$
is calculated. The leading order
in $\alpha_s$ QCD Compton scattering and the boson-gluon fusion contributions are shown in
Fig.~\ref{fig:feynborn} (b) and \ref{fig:feynborn} (c), respectively. An important point to note is that
the momentum of the quark when it interacts with the photon and the momentum when
the quark is extracted from the proton may differ. The ratio of these two momenta is
$z$. The $C_{2,i}^{(n)}$ coefficients depend on the ratio $z$, and hence one has to
integrate over $z$ as indicated above. In lowest order, i.e. without including any
perturbative QCD corrections, one has $C_{2,q}^{(0)}=e_q^2\delta(1-z)$ and $C_{2,g}^{(0)}=0$,
and one recovers the ``quark-parton model'' result.
The PDFs $f_{i/p}(\frac{x}{z},\mu_F^2)$
depend on the factorisation scale $\mu_F^2$, and this dependence is governed by the
DGLAP equation. In leading order in $\alpha_s(\mu_F^2)$, this reads as follows
\begin{equation}
\frac{\partial f_{i/p}(x,\mu_F^2)}{\partial \ln \mu_F^2}=\sum_j\frac{\alpha_s(\mu_F^2)}{2\pi}
\int_x^1 \frac{dz}{z} P_{i \to j}^{(1)}(z)f_{j/p}(\frac{x}{z},\mu_F^2)~,
\end{equation}
where, for example, $P_{q\to g}^{(1)}(z)=T_R(z^2 + (1-z)^2)$, and the others can be
extracted from Eq.~(\ref{eq:dglap}). The coefficient functions are also $\mu_F$-dependent,
and in the leading order in $\alpha_s$, one has
$C_{2,i}(x,Q^2,\mu_R^2, \mu_F^2)= C_{2,i}(x,Q^2,\mu_R^2, Q^2) -\ln(\frac{\mu_F^2}{Q^2})
\sum_j\int_x^1 \frac{dz}{z} P_{i \to j}^{(1)} C_{2,j}^{(0)}(\frac{x}{z})$. In the
above expressions, the choice of the renormalization and factorization scales is
arbitrary. Varying $\mu_F$ and $\mu_R$ provides an estimate of the scale-dependent
uncertainties. In inclusive DIS processes, the default choice is $\mu_R=\mu_F=Q$.
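To illustrate the structure of this evolution equation, here is a minimal numerical
sketch (illustrative only: it keeps just the $g \to q$ term quoted above, uses a crude
Euler step with midpoint integration, and all function names are ours):
\begin{verbatim}
import math

def P_qg(z):                  # T_R (z^2 + (1-z)^2), with T_R = 1/2
    return 0.5 * (z * z + (1.0 - z) ** 2)

def dglap_step(f_q, f_g, x, mu2, dlnmu2, alpha_s, npts=1000):
    """One Euler step in ln(muF^2) of the quark density at x,
    keeping only the gluon -> quark splitting term."""
    integral, dz = 0.0, (1.0 - x) / npts
    for k in range(npts):
        z = x + (k + 0.5) * dz
        integral += P_qg(z) * f_g(x / z, mu2) / z * dz
    return f_q(x, mu2) + alpha_s(mu2) / (2.0 * math.pi) * integral * dlnmu2
\end{verbatim}
A realistic evolution code keeps all splitting kernels, treats the plus-distribution
regularisation of $P_{q \to q}$, and uses a proper ODE solver.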
The extension of the factorization formalism to the processes with two initial-state
hadrons follows very much along the same lines, and we shall discuss this somewhat
later, as we discuss high energy hadronic collisions.
\subsubsection{Jets in DIS processes}
Jet production in neutral
current (NC) deep inelastic scattering at the HERA collider at DESY
provides an important further testing ground for QCD. While inclusive DIS
gives indirect
information on the strong coupling via scaling violations of the proton
structure functions, the production of jets allows a direct measurement of
the strong coupling constant $\alpha_s$.
The Born contribution to DIS (Fig.~\ref{fig:feynborn}a) generates no transverse
momentum in the $\gamma^{*}p$ centre-of-mass frame, where the virtual
boson and the proton collide head on. Significant transverse momentum in
the $\gamma^{*}p$ frame is produced at leading order (LO) in the strong
coupling $\alpha_s$ by the QCD-Compton (Fig.~\ref{fig:feynborn}b) and the photon-gluon
fusion (Fig.~\ref{fig:feynborn}c) processes.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{
\includegraphics{d09-032f1a.eps}\hskip1.0cm
\includegraphics{d09-032f1b.eps}\hskip1.0cm
\includegraphics{d09-032f1c.eps}}
\caption{
Deep-inelastic lepton-proton scattering at different orders in
$\alpha_s$: (a) Born contribution $\mathcal{O}(1)$, (b) example of the QCD Compton
scattering $\mathcal{O}(\alpha_s)$ and (c) boson-gluon fusion $\mathcal{O}(\alpha_s)$.}
\label{fig:feynborn}
\end{center}
\end{figure}
In LO the momentum fraction of the proton carried by the parton
is given by $\xi = x_{Bj}(1+M_{12}^2/Q^2)$, where $x_{Bj}$ is the
Bjorken scaling variable $x_{Bj}=Q^2/(Q^2+W^2)$. Here $W$ is the total
c.m. energy, $W^2=(q+P)^2$, $q$ is the momentum of the virtual photon, $M_{12}$ is
the invariant mass
of two jets of highest $p_T$, and $Q^2$ is the negative four-momentum
transfer squared between the ingoing and outgoing electron. In the kinematic
region of low $Q^2$ and low $\xi$, the $\gamma^{*}$-gluon
fusion dominates the jet production imparting sensitivity to the gluon
component of the parton density functions (PDFs) of the proton, whereas the
contribution of the QCD-Compton process yields information on the various
quark (antiquark) components of the proton.
In order to make theoretical predictions for jet production in
neutral current DIS scattering one needs the PDFs of the proton, provided
mostly by the global analysis collaborations (i.e., analyses which combine all
high energy physics measurements relevant for testing QCD and for determining
various parameters and non-perturbative functions), such as
CTEQ \cite{CTEQ} and MSTW \cite{MSTW}. In addition one must have infrared and
collinear safe parton cross sections, which are known now up to NLO in
$\alpha_s$~\cite{Catani,Nagy-2001}. An example of the inclusive DIS
measurements at HERA together with data from lower energy fixed target experiments
is shown in Fig.~\ref{fig:PDG16-2}. A striking feature of the
HERA data is the dramatic rise of
the proton structure function $F_2(x,Q^2)$ for increasing $Q^2$ and
low values of $x$ (typically $x \leq 10^{-2}$).
Almost all of this rise of $F_2(x,Q^2)$ is due to the gluon density in the proton.
This has profound consequences for high energy $p\bar{p}$ (at the Tevatron)
and $pp$ collisions (at the LHC).
DIS jet production
depends in general on two large scales, $Q=\sqrt{Q^2}$ and the $p_T$ of the
produced jets.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{\includegraphics{PDG-16-2.eps}}
\caption{
Proton structure function $F_2^p(Q^2,x)$ given at two $Q^2$ values (3.5 GeV$^2$
and 90 GeV$^2$). The various data sets have been normalised by the factors
shown in the brackets in the key to the plot, which were determined in the
NNLO MSTW2008 global analysis~\cite{MSTW}. (From Amsler {\it et al.} in Ref.~\cite{Amsler}).}
\label{fig:PDG16-2}
\end{center}
\end{figure}
The $ep$ jet data were collected by two detectors at HERA: H1 and ZEUS, resulting from the
collision of electrons or positrons with energy $E_e=27.6$ GeV with protons of energy
$E_p=920$ GeV, providing a center-of-mass energy of $\sqrt{s}\simeq 320$ GeV.
In the more
recent analysis the inclusive $k_T$ jet algorithm \cite{Ellis} is used to
combine the particles in the hadronic final state into jets.
Theoretical predictions (at next-to-leading order) have been corrected
for hadronisation effects, which are
calculated via Monte Carlo models with parton showers.
The most recent publication on jet production in DIS comes from the H1
collaboration at HERA, where data up to 2007 are included and $Q^2$ spans the
range $150<Q^2<15000$~GeV$^2$ \cite{H1I}.
Inclusive jet, 2-jet and 3-jet cross sections, normalised to the neutral current (NC) deep
inelastic scattering cross section, are measured as functions of $Q^2$,
jet transverse momentum and proton momentum fraction. We show
in Fig.~\ref{fig:Ijet_Q2ET} the normalised inclusive jet cross section as a function of
the jet transverse momentum $p_T$ in the Breit frame (defined as the frame in which
$2x \vec{p} + \vec{q}=0$, where $\vec{p}$ and $\vec{q}$ are the 3-momenta of the
proton and virtual photon, respectively) for two ranges of $Q^2$,
$700 < Q^2 < 5000$ GeV$^2$ (shown in the left-hand frame) and $5000 < Q^2 < 15000$~GeV$^2$
(shown in the right-hand frame).
\begin{figure}
\begin{center}
\resizebox{0.85\columnwidth}{!}{
\includegraphics{d09-032f4.eps}}
\end{center}
\caption{\label{fig:Ijet_Q2ET}The normalised inclusive jet cross
sections measured as a function of the jet transverse momentum in
the Breit frame $P_{T}$ in two regions of $Q^2$ indicated on the frames.
The points are shown at the average value of $P_T$ within each bin.
(From H1 Collaboration~\cite{H1I}).}
\end{figure}
Agreement between data and theoretical predictions~\cite{Catani,Nagy-2001} is
excellent. The ratio R (of data over theory) lies near 1 (shown at the bottom of
these frames). Similar
plots for bins with smaller $Q^2$ can be seen in \cite{H1I}. HERA data on inclusive jet
production in DIS~\cite{Chekanov:2002be,Chekanov:2006xr,:2007pb}
have constrained the gluon density in the range $0.01 < x < 0.1$.
The strong coupling $\alpha_s(Q^2)$ has been determined
and translated into $\alpha_s(M_Z) =0.1168\pm0.0007({\rm exp.})
^{+0.0046}_{-0.0036}({\rm theor.}) \pm 0.0016({\rm PDF})$ using the usual renormalisation
group equation. This result is competitive with those from $e^+e^-$
data and is in good agreement with the world average \cite{Amsler}.
A similar recent analysis of the ZEUS collaboration has less
integrated luminosity, as it is based only on the data taken from 1998
to 2000. However, their data include also results for rather large $Q^2$.
Therefore, $Z$ exchange is included in addition to the $\gamma$-exchange.
The analysis is done in a similar fashion to that of the H1 collaboration
described above. They also studied the inclusive one-jet cross section as a
function of $Q^2$ and $E^{jet}_{T,B}$ (the transverse energy of the jet in the
Breit frame). In addition they also measured this cross section for three
different radii, $R=0.5,0.7$ and $1.0$, used for combining hadrons into
jets with the help of the inclusive $k_T$ cluster algorithm~\cite{Ellis,Catani-algo}. The results are shown in
Fig.~\ref{fig:Diff-Q2} for
$d\sigma/dQ^2$ for $E_{T,B}^{jet} > 8$ GeV as a function of $Q^2$.
Further kinematic constraints, namely $|\cos \gamma_h| < 0.65$, where $\gamma_h$
corresponds to the angle of the scattered quark in the quark-parton model and is
constructed using the hadronic final state, and the pseudorapidity range
$-2 < \eta_B^{\rm jet} <1.5$, are indicated on the figure. The pseudorapidity variable,
defined as $\eta=-\ln \tan(\theta/2)$ where $\cos \theta=p_z/|p|$, is approximately
equal to the (longitudinal-boost-invariant) variable called rapidity
$y=\frac{1}{2}\ln(\frac{E+p_z}{ E-p_z})$ in the limit $|p| \gg m$, and can be measured
when the mass $m$ and the momentum of the particle $|p|$ are unknown.
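As a small numerical check of this statement (hypothetical numbers, not part of the
ZEUS analysis):
\begin{verbatim}
import math

def eta(px, py, pz):          # pseudorapidity, -ln tan(theta/2)
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))

def rap(E, pz):               # rapidity
    return 0.5 * math.log((E + pz) / (E - pz))

# A 10 GeV pion (m = 0.14 GeV) at 45 degrees: eta and y agree to
# about 1e-4, illustrating eta ~ y in the limit |p| >> m.
px, pz, m = 10 / math.sqrt(2), 10 / math.sqrt(2), 0.14
E = math.sqrt(px * px + pz * pz + m * m)
print(eta(px, 0.0, pz), rap(E, pz))
\end{verbatim}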
The NLO QCD predictions with scales $\mu_R=E_{T,B}^{jet}$, $\mu_F=Q$,
corrected to include hadronisation and $Z$ effects,
are compared with the measurements~\cite{ZEUS1}.
The calculations reproduce the measured
differential cross section quite well for all three jet radii considered.
In this work also $\alpha_s(Q^2)$ has been determined. The result is
$\alpha_s(M_Z)=0.1207\pm0.0014({\rm stat.}) ^{+0.0035}_{-0.0033}({\rm exp.})
^{+0.0023}_{-0.0023}({\rm theor.})$, which is also consistent with the world average.
\begin{figure}
\begin{center}
\resizebox{0.85\columnwidth}{!}{
\includegraphics{DESY-06-241_2a.eps}}
\caption{\label{fig:Diff-Q2}
Measured differential cross-section $d\sigma/d Q^2$ for inclusive-jet
production with $E_{T,B}^{\rm jet} >8$~GeV and $-2<\eta_{\rm B}^{\rm jet}<1.5$ (dots)
for different jet radii, in the kinematic range given by $|\cos \gamma_h|<0.65$.
(From ZEUS Collaboration~\cite{ZEUS1}).
}
\end{center}
\end{figure}
We now turn to photoproduction.
At HERA the largest cross section is due to photoproduction, where the beam
(electron or positron) interacts with the proton via the exchange of
a virtual photon with a small virtuality $Q^2 \approx 0$. The spectrum of the
ingoing virtual photon can very well be described by the well-known
Weizs\"acker-Williams formula \cite{Weizsacker,Williams-34,Frixione-93}.
The photoproduction of single jets, dijets and triple jets with high
transverse momenta can be calculated also within perturbative QCD if the
transverse momentum of the jets is large enough to provide the hard scale.
Besides the larger cross section, as compared to the DIS jet production, the
photoproduction of jets does not depend on the additional scale $Q$. The
contributions to the theoretical cross sections which have been calculated up
to NLO come from two processes: (i) the direct process in which the photon enters
the hard sub-processes directly by coupling to the quarks, in the same way as
in deep-inelastic ep scattering (see Fig.~\ref{fig:feynborn}b, c in LO), and
(ii) the so-called
resolved process in which the photon fluctuates into partons, quarks or gluons,
and one of them participates in the hard parton-parton scattering
process~\cite{Llewellyn Smith:1978dc,Brodsky:1978eq}.
This latter process is equivalent to jet production in hadron-hadron
collisions, of which the LO hard scattering cross sections for $qq'\to qq'$,
$gg \to gg$ and $gq \to gq$ are written below. The only difference is that the
PDFs of one of the hadrons is replaced by the photon PDF. This process, therefore, is
sensitive to the parton structure of the proton and the photon. It is one of
the few processes which can give information on the gluon content of the
photon.
The basic $\gamma$-parton processes which enter the calculation of the
direct process in LO are the following: QCD Compton process: $\gamma q \to gq$, and
the photon-gluon fusion: $\gamma g \to q\bar{q}$.
These $\gamma$-parton cross sections have the following simple forms
\begin{eqnarray}
\gamma q \to gq : \frac{d\sigma}{d\cos\theta^{*}}
\sim e_q^2 \frac{\alpha}{\pi}\frac{\alpha_s}{\pi}\frac{1}{s}\frac{4}{9}
\left(-\frac{\hat{u}}{\hat{s}}-\frac{\hat{s}}{\hat{u}}\right)~, \\
\gamma g \to q\bar{q}: \frac{d\sigma}{d\cos\theta^{*}} \sim e_q^2
\frac{\alpha}{\pi}\frac{\alpha_s}{\pi}\frac{1}{2s}
\left(\frac{\hat{u}}{\hat{t}}+\frac{\hat{t}}{\hat{u}}\right)~,
\end{eqnarray}
where $e_q$ is the charge of the quark with flavour $q$; $\hat{s}=4$,
$\hat{t}=-2(1 - \cos \theta^*)$ and $\hat{u}=-2(1 + \cos \theta^*)$, in units where each
incoming parton carries unit energy in the dijet centre-of-mass frame (so that
$\hat{s}+\hat{t}+\hat{u}=0$); and $\theta^{*}$ is
the angle of the dijets in their centre-of mass system. $|\cos\theta^{*}|$ is related to
the pseudorapidities of the two jets, $\eta_1$ and $\eta_2$ by
\begin{equation}
|\cos\theta^{*}| = |\tanh((\eta_1-\eta_2)/2)|~.
\end{equation}
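This relation follows because pseudorapidities shift additively under longitudinal
boosts: in the dijet centre-of-mass frame the two (massless) jets sit at
$\eta^{*}=\pm(\eta_1-\eta_2)/2$, and for a massless particle
\[
\cos\theta^{*}=\frac{p_z^{*}}{|\vec{p}^{\;*}|}=\tanh\eta^{*}~.
\]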
There are many observables which have been measured and which can be used
to test the basic parton-parton cross sections for the direct and resolved
process up to NLO \cite{photonNLO,photonNLO-2,photonNLO-3,photonNLO-4}. We shall present only a few taken
from the most recent H1 \cite{H1photon} and ZEUS \cite{ZEUSphoton}
publications. The $d\sigma/d|\cos\theta^{*}|$ distribution has been studied as a function of
$|\cos\theta^{*}|$ by the H1 collaboration \cite{H1photon} with and without an
additional cut on the
invariant mass of the two jets $M_{jj} $ for the direct (resolved)
enhanced contribution. This analysis is done in terms of a variable $x_{\gamma}$ defined by,
\begin{equation}
x_{\gamma} =\frac{1}{2E_{\gamma}} \sum_{i=1}^{2} E_{T,i} e^{-\eta_i}~,
\end{equation}
where $E_{T,1}$ and $E_{T,2}$ are the transverse energies of the two jets with
the two largest $E_T$'s. In LO the direct contribution is at $x_{\gamma}=1$
and the resolved contribution has $x_{\gamma} < 1$. Therefore,
by selecting events with $x_{\gamma} > 0.8$ ($x_{\gamma} < 0.8$) the direct
(resolved) parts of the cross sections are dominant. The results of the H1
analysis are shown in Fig.~\ref{fig:costheta} for the two bins of $x_{\gamma}$
as a function of $\cos\theta^{*}$, with the upper two frames without a cut on
$M_{jj} $ and the lower two frames with $M_{jj} > 65$ GeV.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{\includegraphics{XsectCMS.eps}}
\caption{Bin averaged cross sections as a function of $|\cos\theta^{*}|$ for data
(points), NLO QCD calculations with (solid line) and without (dashed line) hadronisation
corrections $\delta_{\rm{had}}$ and for the PYTHIA Monte Carlo predictions (dotted line)
scaled by a factor of $1.2$. The inner (hatched) band of the
NLO$\times (1+\delta_{\rm{had}})$ result is the scale uncertainty, the outer
(shaded) band is the total uncertainty. The cross sections are shown for two regions
in $x_\gamma$, with and without an additional cut applied on the invariant dijet
mass ($M_{\rm{JJ}}$). (From H1 Collaboration~\cite{H1photon}). }
\label{fig:costheta}
\end{center}
\end{figure}
The cross section $d\sigma/d|\cos\theta^{*}|$
with the $M_{jj}$ cut is sensitive to the dynamics of the underlying
$\gamma$-parton and parton-parton hard interactions. The cross section in the
resolved sample $x_{\gamma} < 0.8$ rises more rapidly with $|\cos\theta^{*}|$
than that in the direct sample due to the dominance of the virtual gluon
exchange in the resolved processes (see formulae for parton-parton cross
sections below). The dependence on $|\cos\theta^{*}|$ and also the
absolute normalisation are well predicted by the NLO calculations.
Similar results have been presented by the ZEUS collaboration
\cite{ZEUShighmass} by varying the dijet mass $M_{jj}$ and their results have
been presented in~\cite{ZEUSphotonstr}.
Another example is the cross section $d\sigma/d\overline{E_T}$, where
$\overline{E_T}$ is the mean transverse energy of the two jets
\begin{eqnarray}
\overline{E_T} = \frac{1}{2}(E_T^{jet1} + E_T^{jet2})~.
\end{eqnarray}
An example of such a cross section for $x_{\gamma}>0.75$ and $x_{\gamma}<0.75$,
respectively, as presented by the ZEUS collaboration \cite{ZEUSphoton} is shown
in Fig.~\ref{fig:gen_et}.
\begin{figure}
\begin{center}
\resizebox{0.50\columnwidth}{!}{
\includegraphics{DESY-07-092_4.eps}}
\caption{\label{fig:gen_et} Measured cross-section $d\sigma/d\bar{E_T}$ for (a)
$ x_\gamma^{\rm obs}>$ 0.75 and (b) $ x_\gamma^{\rm obs} \leq$ 0.75 compared
with NLO QCD predictions using the AFG04~\cite{Aurenche:2005da} (solid line)
and CJK~\cite{Cornet:2004nb} (dashed line) photon PDFs. The predictions using
AFG04 are also shown with their associated uncertainties (shaded histogram). The
ratios to the prediction using the AFG04 photon PDF are shown at the bottom of the figure.
(From ZEUS Collaboration~\cite{ZEUSphoton}).}
\end{center}
\end{figure}
The cross section is measured up to
$\overline{E_T} \simeq 80$ GeV, i.e. further out in $E_T$ than in DIS jet
production. In this figure also results for two different photon PDFs
(namely, the so-called AFG04~\cite{Aurenche:2005da} and the CJK~\cite{Cornet:2004nb})
are shown. The most sensitive cross section concerning direct and resolved separation
is the cross section $d\sigma/dx_{\gamma}^{obs}$, where
$x_{\gamma}^{obs}$ is the $x_{\gamma}$ defined above with the sum restricted to the two
hardest jets (hence ``obs'' in the notation of $x_{\gamma}$, since it is reconstructed
from the observed jets only). The result from the ZEUS collaboration is shown in
Fig.~\ref{fig:gen_xgamma}, from which one can see how the data compare with
different photon PDFs assumed in the NLO prediction. An appreciable dependence
on these PDFs is seen in the small $x_{\gamma}$ region as one would expect.
In an earlier analysis the ZEUS collaboration determined also the strong
coupling $\alpha_s$, solely from jet production in $\gamma p$ interactions.
The result is $\alpha_s(M_Z)=0.1224\pm0.0001({\rm stat.}) ^{+0.0022}_{-0.0019}({\rm exp.})
^{+0.0054}_{-0.0042}({\rm theor.})$ \cite{ZEUSscaling}, and the variation of
$\alpha_s$ with the scale $\overline{E_T}$ has been found in good agreement
with the running of $\alpha_s$ as predicted by QCD.
Summarising the DIS and photoproduction processes at HERA, we see that QCD and jets
have had a very significant impact on the profile of the proton and the photon in terms
of their respective PDFs, which determine the luminosity functions of the parton-parton
scatterings at high energies and hence the hard scattering cross sections of interest.
Detailed studies of $F_2(x, Q^2)$ at HERA have also rekindled
theoretical interest in the small-$x$ region. The evolution in $\ln(1/x)$
at fixed value of $Q^2$ is governed by
the so-called BFKL equation~\cite{Fadin:1975cb,Balitsky:1978ic}. Originally developed to study
Regge processes in high energy scatterings and the Pomeranchuk singularity (the QCD Pomeron),
it can be combined with the DGLAP equation
(for evolution in $Q^2$) to provide a quantitative description of the DIS structure
functions over an enlarged $(x,Q^2)$ domain. Several proposals for carrying out the
small-$x$ resummation have been considered in the literature, which are comparatively
discussed in a recent working group report~\cite{Dittmar:2009ii}.
In addition, the evolution in $\ln(1/x)$
leads to soft gluon enhancements, generating
a dense gluonic system over a limited range of the nucleon
wave function (hot spots). As the gluon occupation number becomes of order $1/\alpha_s$,
non-linear effects present in the QCD Lagrangian become important, leading
eventually to the saturation of the gluon density in the nucleons in high energy
collisions~\cite{McLerran:2010ua}.
This picture of high energy nucleonic wave functions (a high density, nonperturbative
gluonic system with a weak coupling constant) is called the Color Glass
Condensate~\cite{McLerran:2010uc},
and is of great interest in understanding the QCD aspects of heavy ion collisions, such as
at RHIC and the LHC~\cite{Iancu:2003xm}.
\begin{figure}
\begin{center}
\resizebox{0.50\columnwidth}{!}{
\includegraphics{DESY-07-092_11.eps}}
\caption{\label{fig:gen_xgamma}
Measured cross section for $d\sigma/dx_\gamma^{\rm obs}$ compared with the NLO QCD predictions
using the AFG04~\cite{Aurenche:2005da} (solid line), CJK~\cite{Cornet:2004nb} (dashed line),
AFG~\cite{Aurenche:1994in} (dotted line), GRV~\cite{Gluck:1991ee,Gluck:1991jc} (dashed and double-dotted
line) and SAL~\cite{Slominski:2005bw} (dashed and single-dotted line)
photon PDFs. The ratios to the prediction using the
AFG04~\cite{Aurenche:2005da} photon PDF are shown at the bottom of the figure.
(From~\cite{ZEUSphoton}).}
\end{center}
\end{figure}
\subsection{$\gamma \gamma$ collisions}
Another area in which jet production has been studied experimentally and
theoretically is photon-photon collisions in the LEP2 energy range.
The two incoming photons are produced in $e^+ e^-$ collisions in the
anti-tagged mode, i.e. when both the scattered electron and the positron
escape detection. This is kinematically analogous to the photoproduction
process in high energy $ep$ collisions at HERA.
In $\gamma \gamma \to {\rm hadrons}$, four
classes of events have to be distinguished (see, Fig.~\ref{fig:DELPHI-fig1}).
The variables used in the classification of these events, $x_\gamma^+$ and $x_\gamma^-$,
which are analogues of the variable $x_\gamma$ in $\gamma p$ collisions, are defined
as follows:
\begin{eqnarray}
x_\gamma^+ &=& \frac{\sum_{\rm jets}(E_{\rm jet} + p_{z, {\rm jet}})}
{\sum_{\rm part}(E_{\rm part} + p_{z, {\rm part}})}~,\nonumber\\
x_\gamma^- &=& \frac{\sum_{\rm jets}(E_{\rm jet} - p_{z, {\rm jet}})}
{\sum_{\rm part}(E_{\rm part} - p_{z, {\rm part}})}~,
\label{eq:xgammapm}
\end{eqnarray}
where `part' runs over all detected particles, while $E_{\rm jet}$ and $p_{z, {\rm jet}}$
are the energies of the two hard jets and the components of their momenta along the
$z$-axis, respectively. The four classes are:
\begin{enumerate}
\item Hadron production via vector meson interactions
(Vector-Meson Dominance Model VDM)~\cite{Brodsky:1972vv,Kwiecinski:1987tb}
(Fig.~\ref{fig:DELPHI-fig1}a).
\item The direct domain, where both $x_\gamma^+$ and $x_\gamma^-$ are close to 1. This
domain is mostly populated by the quark-parton model like events
$\gamma \gamma \to q + \bar{q}$ (Fig.~\ref{fig:DELPHI-fig1}b).
\item The single resolved domain, with the presence of a remnant jet, where only one of the
$x_\gamma^+$ and $x_\gamma^-$ is close to 1 and the other is shifted to some lower value
(Fig.~\ref{fig:DELPHI-fig1}c).
\item The double-resolved domain, where both $x_\gamma^\pm$ are shifted to values below 1
(Fig.~\ref{fig:DELPHI-fig1}d).
\end{enumerate}
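For illustration, Eq.~(\ref{eq:xgammapm}) translates directly into code (a minimal
sketch, with our own names):
\begin{verbatim}
def x_gamma_pm(jets, particles):
    """x_gamma^+ and x_gamma^-; jets holds the (E, pz) of the two
    hard jets, particles the (E, pz) of all detected particles."""
    num_p = sum(E + pz for E, pz in jets)
    num_m = sum(E - pz for E, pz in jets)
    den_p = sum(E + pz for E, pz in particles)
    den_m = sum(E - pz for E, pz in particles)
    return num_p / den_p, num_m / den_m
\end{verbatim}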
\begin{figure}
\vspace*{5cm}
\begin{center}
\resizebox{0.50\columnwidth}{!}{\includegraphics{DELPHI-fig1.eps}}
\end{center}
\vspace*{-3.5cm}
\caption[]{Main diagrams corresponding to the hadron production in $\gamma \gamma$ collisions
via vector meson interactions (VDM-like, a), point-like interactions (QPM-like, b)
and with one (c) or both (d) photons resolved into partons. (From~\cite{Abdallah:2008zzb}).}
\label{fig:DELPHI-fig1}
\end{figure}
Due to the appearance of the double resolved region, jet production in $\gamma \gamma$
collisions has increased sensitivity to the gluon content of the resolved photon.
This has enormous significance for future high energy $\gamma \gamma$ collisions,
which are being entertained in the context of a high energy linear $e^+e^-$ collider.
In the past, dijet production in $\gamma \gamma$ collisions has been studied
experimentally at $\sqrt{s_{ee}}$ from 189 to 209 GeV by the OPAL~\cite{Abbiendi:2003cn} and
DELPHI~\cite{Abdallah:2008zzb} collaborations at LEP.
To that end a number of observables (differential distributions) have been measured
by introducing $x_\gamma^\pm$-cuts at 0.75 (OPAL) and $x_\gamma^\pm =0.85$ (DELPHI).
These distributions
include, among others, $d\sigma_{\rm dijet}/d\bar{E}_T^{\rm jet}$, with
$\bar{E}_T^{\rm jet} = 1/2(E_{T,1}^{\rm jet} + E_{T,2}^{\rm jet})$ and $d\sigma_{\rm dijet}/dx_\gamma$.
The data from both collaborations have been compared with the NLO QCD calculations
based on the work of~\cite{photonNLO} and are found to be in good agreement.
This is shown
in Fig.~\ref{fig:OPAL-etmxs} for the differential distribution
$f \cdot d\sigma/d\bar{E}_T^{\rm jet}$, where the factor $f$ is used to visibly separate
the three measurements.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{
\includegraphics{OPAL-pr372_10.eps}}
\end{center}
\caption{\label{fig:OPAL-etmxs} The dijet cross-section in $\gamma \gamma$ collisions at LEP
as a function of the mean transverse energy $ \bar{E}_T^{\rm jet}$ of the dijet system,
for the three regions in {$x_\gamma^+ - x_\gamma^-$}-space given in the figure.
The factor $f$ is used to separate the three measurements in the figure more clearly. The
prediction of the LO program PYTHIA is compared to the data. The NLO calculation is
from~\cite{photonNLO-2}. (From OPAL~\cite{Abbiendi:2003cn}).}
\end{figure}
Inclusive jet production in $\gamma - \gamma$ collisions has also been measured by
the L3~\cite{Achard:2004rh} and OPAL~\cite{:2007jx} collaborations at LEP.
\subsection{Proton colliders}
\subsubsection{Fundamental QCD scattering processes}
In parallel to electron and photon processes in QED, a large number
of $2\to 2$ scattering processes involving quarks and gluons are predicted
in QCD, see {\it e.g.} \cite{EllisK}. They give rise to jets at hadron colliders.
Most interesting are
the fundamental abelian processes in QED transcribed to the non-abelian extensions in
QCD, like
\begin{eqnarray}
{\rm Rutherford \; quark \; scattering} \;&:&\; qq' \to qq^\prime~, \nonumber \\
{\rm Rutherford \; gluon \; scattering} \;&:&\; gg \to gg~, \nonumber \\
{\rm Super-Compton \; process} \;&:&\; gq \to gq \,. \nonumber
\end{eqnarray}
Representative scattering diagrams are depicted in Fig.~\ref{fig:Tevjets}.
The associated cross sections scale in the energy squared $s$ for
massless initial and final-state quarks, while the angular distributions are given by
\begin{eqnarray}
qq' \to qq' \;&:&\; \frac{d\sigma}{d\cos{\theta^\ast}} \sim
\left(\frac{\alpha_s}{\pi}\right)^2 \frac{1}{s} \,
\frac{4}{9} \, \frac{\hat{s}^2+\hat{u}^2}{\hat{t}^2}~,
\nonumber \\
gg \to gg \;&:&\; \frac{d\sigma}{d\cos\theta^\ast} \sim
\left(\frac{\alpha_s}{\pi}\right)^2 \frac{1}{s} \,
\frac{9}{2} \, \left[ 3 - \frac{\hat{s}\hat{u}}{\hat{t}^2}
- \frac{\hat{s}\hat{t}}{\hat{u}^2}
-\frac{\hat{t}\hat{u}}{\hat{s}^2} \right]~,
\nonumber \\
gq \to gq \;&:&\; \frac{d\sigma}{d\cos\theta^\ast} \sim
\left(\frac{\alpha_s}{\pi}\right)^2 \frac{1}{s} \,
\left[\frac{\hat{u}^2+\hat{s}^2}{\hat{t}^2}
- \frac{4}{9} \, \frac{\hat{s}^2+\hat{u}^2}{\hat{s}\hat{u}}
\right] \,,
\end{eqnarray}
and the variables $\hat{s}$, $\hat{t}$ and $\hat{u}$ have been defined earlier.
Note the presence of the three-gluon coupling already in LO.
These amplitudes generate the expected
Rutherford singularities $\sim d\theta^{\ast 2} / \theta^{\ast 4}$ for forward scattering
$\hat{t} \to 0$, and analogously for backward scattering $\hat{u} \to 0$.
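For example, for $qq' \to qq'$, writing $\hat{t}=-\hat{s}(1-\cos\theta^{*})/2$ one finds
for small angles
\[
\hat{t} \to -\frac{\hat{s}\,\theta^{*2}}{4}~, \qquad
\frac{d\sigma}{d\cos\theta^{*}} \sim \frac{\hat{s}^2+\hat{u}^2}{\hat{t}^2}
\propto \frac{1}{\theta^{*4}}~,
\]
the non-abelian analogue of the classic Rutherford behaviour.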
Calculating the experimentally observed
cross sections at hadron colliders requires three essential steps, which we have
already outlined in the context of calculating the DIS cross sections, namely
(i) the hard $2 \to 2$ scattering processes, including NLO QCD corrections,
(ii) the flux of the incoming partons, determined in terms of the PDFs of the protons (and
antiprotons), discussed earlier in the context of DIS scattering at HERA, and
(iii) hadronic (non-perturbative) corrections.
Here also QCD plays an important role in terms of the scale dependence
of the PDFs and FFs. Thus, for example, the cross section of the hadron-hadron
scattering with the four-momenta of the two colliding hadrons $P_1$ and $P_2$
can be written as~\cite{EllisK}
\begin{equation}
\sigma (P_1,P_2)= \sum_{i,j} \int dx_1 \int dx_2 f_{i}(x_1, \mu_F^2) f_j(x_2,\mu_F^2)
\hat{\sigma}_{ij}(p_1,p_2, \alpha_s(\mu_R^2), Q^2/\mu_F^2, Q^2/\mu_R^2)~,
\end{equation}
where the hard interaction between the partons $i$ and $j$ is given by
$\hat{\sigma}_{ij}$ and $p_1=x_1P_1$ and $p_2=x_2P_2$ are the momenta of the two
partons.
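Structurally, this master formula is a double convolution over the momentum fractions
and a sum over flavours; a minimal numerical sketch (illustrative only: all function
names are placeholders, and production codes use adaptive Monte Carlo integration
instead of a fixed grid):
\begin{verbatim}
def hadronic_xsec(pdf1, pdf2, sigma_hat, flavours, mu2, npts=200):
    """sigma = sum_{ij} int dx1 dx2 f_i(x1) f_j(x2) sigma_hat_ij,
    evaluated with a crude midpoint rule."""
    total, dx = 0.0, 1.0 / npts
    for i, j in flavours:
        for a in range(npts):
            x1 = (a + 0.5) * dx
            for b in range(npts):
                x2 = (b + 0.5) * dx
                total += (pdf1(i, x1, mu2) * pdf2(j, x2, mu2)
                          * sigma_hat(i, j, x1, x2, mu2) * dx * dx)
    return total
\end{verbatim}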
\subsubsection{Jets in hadron colliders and tests of QCD}
An example of inclusive jet production at the Tevatron, measured by the D0 collaboration, is shown in
Fig.~\ref{fig:CMSjets} (left-hand frame). Similar measurements have been done by the CDF collaboration
at the Tevatron.
The dominant contribution at small $p_T$
can be traced back to Rutherford gluon scattering $gg \to gg$.
This result is naturally expected since, on average, the gluon colour charges are significantly
larger than the quark colour charges and, as discussed earlier, the gluon flux
for low values of $x$ by far exceeds the quark flux of high-energy protons.
\begin{figure}
\begin{center}
\resizebox{1.0\columnwidth}{!}{
\includegraphics{F_Tevjetsdia.eps}}
\end{center}
\caption{ Representative Feynman diagrams for fundamental QCD processes in hadronic
collisions.
\label{fig:Tevjets}}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.45\columnwidth}{!}{
\includegraphics{F_Tevjets.eps}}
\resizebox{0.45\columnwidth}{!}{
\includegraphics{CMS-Figure_007.eps}}
\end{center}
\caption{Left-hand frame: Transverse energy distribution of jets at the Tevatron
\cite{Tevjet}.
Right-hand frame: Comparison between the measured $p_T$ spectra by the CMS
collaboration at the LHC and theory predictions
for calorimeter jets~\cite{CMS-PAS-QCD-10-011}. For better visibility the spectra
in both frames are multiplied by arbitrary factors
indicated in the legend.
}
\label{fig:CMSjets}
\end{figure}
These jet cross sections can be exploited to determine the gluon distribution
of the proton and to measure the QCD coupling~\cite{BhattiLinc}. By combination
with other measurements, the two observables are disentangled in the Tevatron
measurements. The gluon flux extracted this way is large, as anticipated, and
the QCD coupling is compatible with the world average. These~\cite{Tevjet} and related
measurements~\cite{Aaltonen:2008eq,Abulencia:2007ez}
impact the proton PDFs and have been used in updating this information~\cite{CTEQ,MSTW}.
In particular, they provide constraints on the gluon (and quark) distributions in the
domain $0.01 < x < 0.5$. A detailed discussion of jets and comparison
of data and theory at the Tevatron can be seen in~\cite{Ellis:2007ib}.
Very soon, similar but more sensitive
analyses will also be undertaken at the LHC and a beginning has already been made.
In Fig.~{\ref{fig:CMSjets}} (right-hand frame), we
show a comparison between the measured $p_T$ spectra by the CMS
collaboration~\cite{CMS-PAS-QCD-10-011} at the LHC with
$\sqrt{s}=7$ TeV and an integrated luminosity of 60~nb$^{-1}$ for the calorimeter jets and
theory predictions at next-to-leading order (NLO)
accuracy, using an anti-$k_T$ jet algorithm with $R=0.5$. Data are divided into several
rapidity ($y$) bins. Theory predictions are based on NLOJET++~\cite{Nagy:2001fj}
with CTEQ-6.6~\cite{CTEQ} sets of parton
distribution functions (PDFs). The non-perturbative (NP) corrections are estimated
using two different hadronisation models, PYTHIA~\cite{Sjostrand:2006za}
and HERWIG++~\cite{Bahr:2008pv}, with the mean of the two predictions
taken as the correction. Despite the currently modest LHC luminosity, jets with
transverse momenta up to 800 GeV are measured and the agreement with QCD is excellent.
An in-depth review discussing the physics basis and use of the general purpose Monte
Carlo event generators for hadronic collisions at the LHC is available~\cite{Buckley:2011ms},
to which we refer for a comprehensive discussion.
Experiments at the LHC have opened a window to subprocess energies in the TeV range
for studying jet phenomena in QCD, enabling searches for
beyond-the-SM physics in a number of its extensions. Both ATLAS and CMS have searched
for new heavy particles, which manifest
themselves as narrow resonances in their data collected at the LHC at $\sqrt{s}=7$ TeV.
Such new states may include an excited composite quark $q^*$, expected in theories with
quark substructure~\cite{Eichten:1983hw,Baur:1987ga,Baur:1989kv};
an axigluon predicted by chiral colour models~\cite{Frampton:1987dn,Bagger:1987fz};
a flavour-universal colour-octet coloron~\cite{Chivukula:1996yr,Simmons:1996fz};
or a colour-octet techni-$\rho$ meson predicted by models of extended technicolor
and topcolor-assisted technicolor~\cite{Lane:1991qh,Lane:2002sm,Foadi:2007ue,Belyaev:2009}.
The dijet invariant mass $(m_{jj})$ is an observable which is particularly sensitive to
such new objects. It was studied already at the Tevatron in $p\bar{p}$ collisions with
negative results, exemplified by the CDF limit on the mass of excited quarks $q^*$,
by which the mass range $260 < m_{q^*} < 870$ GeV was excluded at
95\% C.L.~\cite{Aaltonen:2008dn}. ATLAS has extended this exclusion range to higher $q^*$
masses, with the range $0.40 < m_{q^*} < 1.26$ TeV now excluded using $pp$
collisions~\cite{Collaboration:2010bc}. Fig.~\ref{fig:LHCjj} shows the
predicted signal for $q^*$ masses of 500, 800, and 1200 GeV satisfying all event selection
cuts. No signal of $q^*$ is found and the data are in excellent agreement with the
background estimates based on the SM.
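For reference, for two approximately massless jets the dijet invariant mass follows from the
jets' transverse momenta, rapidities and azimuthal angles as
\begin{displaymath}
m_{jj}^2 = \left(p_{j_1} + p_{j_2}\right)^2 \simeq 2\, p_{T,1}\, p_{T,2}
\left[\cosh(y_1-y_2) - \cos(\phi_1-\phi_2)\right]~,
\end{displaymath}
so that a narrow resonance decaying to two partons appears as a peak in $m_{jj}$ on top of the
smoothly falling QCD continuum.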
Similar measurements of the dijet invariant mass spectrum and search for new particles
decaying to dijets have been performed by the CMS collaboration~\cite{Khachatryan:2010jd}.
The highest dijet mass observed by CMS at $\sqrt{s}=7$ TeV is 2.13 TeV. No deviations are
found from QCD up to this dijet mass. In particular, string
resonances with a mass less than 1.67 TeV have been excluded by the current CMS
measurements at 95\% C.L. The sensitivity to narrow resonances in the
dijet mass spectrum will increase substantially
with the increase in the LHC luminosity and energy. For example, for
the anticipated luminosity of 1 fb$^{-1}$ at $\sqrt{s}=7$ TeV, the expected limits are all in
the range of 2.5 to 3.5 TeV.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{Atlas-2010-fig1.eps}}
}
\caption{\label{fig:LHCjj} The data (D) dijet mass distribution (filled points) fitted using a
binned background (B) distribution described in the text (histogram). The predicted excited
quark $q^*$ signals for excited quark masses of 500, 800 and 1200 GeV are overlaid,
and the significance of the data-background difference is shown (from ATLAS collaboration
~\cite{Collaboration:2010bc}).}
\end{figure}
\subsubsection{Physics of the top quark and $(W^\pm, Z)$ bosons using jets}
Inclusive jet production in $p\bar{p}$ and $pp$ collisions in association with a
$Z/\gamma^*/W$ boson provides a stringent test of QCD. As these final states are also
of great importance in the search for the SM Higgs boson arising from the process
$pp(\bar{p}) \to W/Z + H(\to b \bar{b})$, and in the search for supersymmetry in the missing
${E}_T$ + jets channel, the processes $pp(\bar{p}) \to W/Z/\gamma^* + ~{\rm jets}$ have
received a lot of theoretical and experimental
attention. In particular, theoretical predictions for vector boson production
recoiling against a hadron jet at next-to-leading order were presented
in~\cite{Ellis:1981hk,Arnold:1988dp,Arnold:1989ub,Giele:1993dj}. The processes
$p + \bar{p} \to W/Z/\gamma^* + 2~{\rm jets}$ to the same level of theoretical accuracy
were calculated for the Tevatron in~\cite{Campbell:2002tg} and the corresponding processes
$p + p \to W/Z/\gamma^* + 2~{\rm jets}$ for the LHC in~\cite{Campbell:2003hd}. Vector boson
production in association with $n$-jets for $n \leq 4$ was calculated
in~\cite{Berends:1989cf,Berends:1990ax}. A parton-level event generator, called
MCFM~\cite{Campbell:2010ff},
which gives theoretical predictions for a large number of processes containing $W$, $Z$ and
$H$ bosons and jets (including heavy quark jets) is available for the Tevatron and the
LHC colliders. Similar theoretical tools have been developed which give predictions for the
transverse momentum distributions of the $Z/\gamma^*/W$ produced in hadron
collisions, based either on fixed order perturbation theory, such as~\cite{Melnikov:2006kv}
and~\cite{Catani:2009sm}, or on soft gluon resummations valid at
low $p_T$~\cite{Ladinsky:1993zn}, such as RESBOS~\cite{Balazs:1997xd}.
They have been used in conjunction with the PDFs~\cite{CTEQ} in the
analysis of the Tevatron data~\cite{Abazov:2010kn,CDF-Note-10216}, and we show
below representative measurements from the CDF collaboration in Fig.~\ref{fig:CDF-Z-Jets-pt}.
The NLO pQCD MCFM framework describes the data rather well over a large range
of $p_T^{\rm jet}$, as well as the jet multiplicity.
\begin{figure}
\begin{center}
\resizebox{0.45\columnwidth}{!}{
\includegraphics{CDF-Z-Jets-pt.eps}}
\resizebox{0.45\columnwidth}{!}{
\includegraphics{CDF-Z-Njets.eps}}
\end{center}
\caption{Left-hand frame: (top) Inclusive jet differential cross section measured
by the CDF collaboration as a function of
$p_T^{\rm jet}$ in $Z/\gamma^* + \geq ~1~{\rm jet}$ events (black dots) compared to NLO pQCD
predictions (open circles). (bottom) Data/Theory versus $p_T^{\rm jet}$.
Right-hand frame: (top) Measured total cross section for inclusive jet production
in $Z/\gamma^* \to \mu^+\mu^-$
events as a function of $N_{\rm jet}$ compared to LO and NLO pQCD predictions.
(bottom) Ratio of data and LO pQCD predictions versus $N_{\rm jet}$.
(From~\cite{CDF-Note-10216}).}
\label{fig:CDF-Z-Jets-pt}
\end{figure}
The production of heavy gauge boson pairs $(WW, WZ, ZZ)$ in $p\bar{p}$ and $pp$ collisions
provides tests of the self-interactions of the gauge bosons and hence deviations from the
SM-based predictions for the production rate could indicate new physics~\cite{Langacker:2010}.
Since diboson production is topologically
similar to the associated Higgs boson production $pp(\bar{p}) \to
VH +X$ ($V=W,Z$), the experimental techniques developed for $pp(\bar{p}) \to VV$ are important
for the Higgs boson searches as well. The process $p\bar{p} \to VV$ with both vector
bosons decaying into lepton pairs ($W^\pm \to \ell^\pm \nu_\ell; Z \to \ell^+\ell^-)$
has been observed at the
Fermilab Tevatron experiments by CDF~\cite{Acosta:2005mu,Aaltonen:2008mv} and D0~\cite{Abazov:2004kc}.
Diboson production has not been conclusively observed in decay channels involving
only hadrons. However, evidence for diboson decays into a mixed
$\ell \bar{\nu}_\ell q \bar{q}$
final state ($\ell=e,\mu,\tau; q=u,d,s,c,b$) has been presented by D0~\cite{Abazov:2008yg}
and CDF~\cite{Aaltonen:2009fd}. The experimental analyses involve large transverse
momentum imbalance (due to the escaped neutrino) and two jets whose invariant mass
can be reconstructed.
Because of the limited resolution in the dijet invariant mass, the decays $W^\pm \to 2$ jets and
$Z \to 2$ jets cannot be distinguished from each other. The most significant backgrounds
to the diboson signals are $W(\ell \bar{\nu}) +$ jets, $Z (\nu\bar{\nu}) +$ jets and
QCD multijet production.
In Fig.~\ref{fig:Dijetmass-D0}, we show the dijet mass distribution from the
$e\nu q \bar{q}$ and $\mu\nu q\bar{q}$ channels for the D0 data~\cite{Abazov:2008yg} and
MC predictions. A clear diboson signal in the dijet invariant mass is seen in the lower frame.
The resulting cross section
$\sigma(WV)=20.2 \pm 4.5$ pb is consistent with the SM prediction
$\sigma(WV)=16.1 \pm 0.9$ pb at $\sqrt{s}=1.96$ TeV~\cite{Campbell:1999ah}.
Fig.~\ref{fig:Dijetmass-CDF} shows the corresponding measurements by
CDF~\cite{Aaltonen:2009fd}. This yields a combined $WW+WZ+ZZ$ cross section in
$p\bar{p}$ collisions at $\sqrt{s}= 1.96$ TeV:
$\sigma (p\bar{p} \to VV)=18.0 \pm 2.8 ({\rm stat})\pm 2.4
({\rm syst})\pm 1.1 ({\rm lumi})$ pb, consistent with the SM prediction.
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PRL102-D0-Fig2.eps}}
}
\caption{\label{fig:Dijetmass-D0}
(a) The dijet mass distribution from the combined $e\nu q\bar{q}$ and $\mu\nu q \bar{q}$
channels for data from the D0 collaboration at the Tevatron and MC predictions.
(b) A comparison of the extracted signal (filled histogram) to
background-subtracted data (points), along with the $\pm 1 \sigma$ systematic uncertainty on
the background. The residual distance between the data points and the extracted signal,
divided by the total uncertainty, is shown at the bottom [D0~\cite{Abazov:2008yg}].
}
\end{figure}
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PRL103-CDF-Fig2.eps}}
}
\caption{\label{fig:Dijetmass-CDF}
Top: Comparison between data for $p\bar{p} \to VV +X$ $(V=W^\pm, Z^0)$ from the CDF
collaboration at the Fermilab Tevatron at $\sqrt{s}=1.96$ TeV and the background-only fit
to the dijet invariant mass.
Bottom: Comparison of the diboson signal (solid line) with the background subtracted data
(points). The dashed lines represent the $\pm 1 \sigma$ statistical variations on the signal
[CDF~\cite{Aaltonen:2009fd}].
}
\end{figure}
At the Fermilab Tevatron, top quarks are produced mostly in pairs $p \bar{p} \to t\bar{t} +X$.
In the SM, top quarks decay into a $W$ boson and a $b$ quark almost 100\% of the time.
The topology of the final states resulting from the $t\bar{t}$ production depends on whether
the $W$ boson decays leptonically $W \to \ell \nu_\ell$, or hadronically
$W \to q \bar{q}^\prime$
leading to two jets. Following this, $t\bar{t}$ events have been measured in dilepton
$\ell^+\ell^- +X$, single lepton $\ell^\pm +4$ jets and also in the non-leptonic mode with
no energetic leptons. The non-leptonic $t\bar{t}$ final state has the advantage of
a large branching ratio
$(\simeq 4/9 )$. The major challenge of this channel is the large background from QCD multijet
production. To increase the purity of the candidate sample, methods based on artificial
neural networks are applied to the data. Further improvement is then obtained from the
requirement of at least one jet identified as originating from a $b$ quark using a
secondary-vertex $b$-tagging algorithm. These techniques have made it possible to measure the
top quark mass and the $t\bar{t}$ cross section in spite of the overwhelming QCD
multijet production.
To these ends, a reconstructed top quark mass, $m_t^{\rm rec}$, is determined by fitting the
kinematics of the
six leading jets from the process $p\bar{p} \to t\bar{t} +X \to 6~{\rm jets}$. There exists
a strong correlation between $m_t^{\rm rec}$ and the jet energy scale (JES). However, the JES can
be calibrated using a selected sample of $t\bar{t}$ candidate events, where a second
variable $m_W^{\rm rec}$ is reconstructed from the jets assigned to the $W$ boson.
The variable $m_W^{\rm rec}$ is related to the $W^\pm$ boson mass, which is known accurately.
Relating $m_t^{\rm rec}$ and $m_W^{\rm rec}$ to match the experimental data ({\it in situ}
calibration) significantly reduces the systematic errors. Further improvement comes from using a
multivariate approach taking advantage of the distinctive features of the signal and
background events through a neural network. Fig.~\ref{fig:ttbar-6jets-CDF} shows
the histogram of $m_t^{\rm rec}$ as obtained in the data and compared to the distributions
in the so-called 1-tag and $\geq 2$-tag events
from signal and background corresponding to $M_{\rm top}=175$ GeV.
The best estimate of the top quark mass from this analysis is~\cite{Aaltonen:2010pe}
\begin{equation}
M_{\rm top}= 174.8 \pm 2.4 ({\rm stat + JES}) ~{\rm GeV}~.
\end{equation}
The procedure used to measure the top quark mass also returns the average number of signal
events expected, given the selected data samples. These results can be turned into a
measurement of the $t\bar{t}$ cross section, and yield
\begin{equation}
\sigma_{t\bar{t}}= 7.2 \pm 0.5 ({\rm stat}) \pm 1.0 ({\rm syst}) \pm 0.4 ({\rm lum})~{\rm pb}~.
\end{equation}
\begin{figure}
\center{
\resizebox{0.75\columnwidth}{!}{
\includegraphics{PRD81-CDF-Fig10.eps}}
}
\caption{\label{fig:ttbar-6jets-CDF}
Histogram of $m_t^{\rm rec}$ from the CDF data (black points) for 1-tag (upper plot) and
$\geq 2$ tag events (lower plot) compared to the distributions from signal and background
corresponding to $m_{\rm top}=175$ GeV.
[CDF~\cite{Aaltonen:2010pe}].
}
\end{figure}
\subsubsection{Searches for the Higgs particles}
We have discussed
numerous electroweak processes at the Tevatron in which jets play an essential
role in the analysis. In particular, $W^\pm$ and $Z$ gauge bosons and top quarks have been
measured using jets. The last and most prized entry on this list is the Higgs boson,
which is being feverishly searched for at the Tevatron. For $m_H < 135$ GeV, the dominant
decay mode is $H \to b\bar{b}$~\cite{higgs};
analyses of this decay mode open a powerful new Higgs discovery channel \cite{Butter}.
The dominant production modes are
$gg \to H$ and $q\bar{q} \to H$. The $b\bar{b}$ signal in this channel is overwhelmed
by the QCD $b\bar{b}$ production. A promising production
and search strategy is the production of a Higgs boson decaying to a pair of bottom
quarks in association with a vector boson $V$ ($W$ or $Z$) decaying to quarks or leptons,
resulting in four-jet or charged-lepton + two-jet final states. In either case, two of
the jets are required to have secondary vertices consistent with $B$-hadron decays.
So far Tevatron Run II searches have used signatures where the $V$ decays to leptons
(see, for example, Refs.~\cite{Aaltonen:2008zx,Aaltonen:2008ip}). Recently, searches
in the four-jet channels have also been reported~\cite{:2009bz}. Using an integrated
luminosity of 2 fb$^{-1}$, Higgs boson searches in this channel provide only a weak upper bound.
For example, for $m_H=120$ GeV, CDF is able to exclude a Higgs production cross section
larger than 38 times the SM prediction! Hence, establishing the Higgs signal in this
channel requires not only much more statistics but also fundamental progress in jet algorithms
to make them more efficient in Higgs (and similar particle) searches.
New opportunities are offered by observing $b$ jets in Higgs decays
at the LHC.
The key technique is the 2-jet splitting of a fat $b \bar{b}$ jet generated in
events in which the Higgs boson is boosted to large transverse momenta in
the processes $pp \to W^\pm H$ and $ZH$. If the ``fat'' jet
is characterised by a jet radius $R = R_{b\bar{b}} \simeq M_H / p_T$, the clustering
is partially undone by shrinking the radius $R$ until the fat jet of radius $R_{b\bar{b}}$
decomposes into two slim subjets of radius $R_b$ with significantly lower mass,
each containing a $b$ quark.
Additional criteria will reduce the contamination by standard QCD processes.
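To illustrate the logic of this subjet-splitting step, the following schematic Python sketch
recursively undoes the clustering history of a fat jet; the simple binary-tree jet
representation (\texttt{Jet}, \texttt{mass\_drop\_split}) and the cut values $\mu$ and
$y_{\rm cut}$ are illustrative assumptions in the spirit of the mass-drop tagger
of~\cite{Butter}, not the exact published algorithm.
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional

@dataclass
class Jet:
    """Minimal stand-in for a clustered jet: a binary tree of mergings."""
    mass: float
    pt: float
    delta_r: float = 0.0              # angular separation of the two parents
    parent1: Optional["Jet"] = None   # the two subjets last merged
    parent2: Optional["Jet"] = None   # into this (fat) jet

def mass_drop_split(jet, mu=0.67, ycut=0.09):
    """Undo the clustering of a fat jet until it splits into two slim
    subjets with significantly lower mass (schematic mass-drop tagger)."""
    while jet.parent1 is not None and jet.parent2 is not None:
        j1, j2 = jet.parent1, jet.parent2
        if j1.mass < j2.mass:          # order so that j1 is the heavier one
            j1, j2 = j2, j1
        # require a symmetric splitting with a significant mass drop
        y = min(j1.pt, j2.pt) ** 2 * jet.delta_r ** 2 / jet.mass ** 2
        if j1.mass < mu * jet.mass and y > ycut:
            return j1, j2              # the two slim b subjets
        jet = j1                       # otherwise discard the softer piece
    return None                        # no hard two-prong substructure found
\end{verbatim}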
Though the boost will strongly reduce the event rate, the significance will
nevertheless be raised to such a level that light Higgs events can clearly be
isolated above background. Extending the method to the channel $pp \to
t \bar{t} H \to t \bar{t} b \bar{b}$, the crucial $ttH$ coupling, apparently
not accessible at LHC otherwise, can be measured in the light Higgs sector
\cite{Plehn:2009rk}.
The concept is useful also for the analysis of other processes, for example,
the search for supersymmetric particles decaying to jets from the hadronic decays of the
electroweak and Higgs bosons \cite{Raklev}, or for detecting strongly interacting $W^\pm,Z$
bosons \cite{Cox}, or the search for heavy resonances in decays to
top-quark jets \cite{Baur}.
\section{Summary}
Quantum Chromodynamics has been established experimentally in the past four
decades as the microscopic theory of the strong interactions, formulated as
a non-abelian gauge theory for coloured quarks and gluons within the Standard
Model of particle physics. Jet physics has been a crucial instrument for achieving
this fundamental result. The beginning was made at SPEAR with the observation of
quark jets in $e^+e^-$ annihilation by the SLAC-LBL collaboration~\cite{EXPquark}.
Subsequent studies undertaken, in particular at DORIS,
PEP and PETRA, involving higher center-of-mass energies largely consolidated the
phenomenological profile of the quark jets (see~\cite{AliSoeding} for a review).
In fact, jets provide an irrefutable
case for the existence of quarks as dynamical entities directly observable in particle
detectors, despite colour confinement, convincing even the most die-hard skeptics of
their reality. Moreover, making use of the larger masses of the $b$- and $c$-quarks,
relatively long half-lives of the corresponding hadrons and their characteristic decay patterns,
one can efficiently flavour-tag the heavy quark jets. In the meantime, these techniques
have been developed to the level of a diagnostic tool to search for new phenomena in which
heavy quarks play a role. The decays $t \to b W$ and $H \to b\bar{b}$ are two good cases
in point.
Theoretically, the existence proof of quark jets in fixed order perturbative
QCD was provided by Sterman and Weinberg~\cite{SterW} using a jet-cone definition
which coincided with
the actual process of detection of hadrons in finite segments of hadron calorimeters.
Subsequently, Sterman~\cite{Sterman:1979uw} provided an all-orders argument for the
infra-red safety of jet cross sections.
Phenomenologically, quark jets
follow from the observation that the transverse momenta of the hadrons produced in
the fragmentation $q^* \to $ hadrons are limited, whereas the longitudinal components
of the hadron momenta scale with energy. A very intuitive and largely accurate
quark jet fragmentation model was developed along these lines by Field and
Feynman~\cite{FieldF}, which played an important role in the quantitative analysis of jet data.
Analyses of
the decays $\Upsilon(9.46) \to $ hadrons measured by the PLUTO
collaboration~\cite{Berger79} working at
DORIS were undertaken in terms of the underlying perturbative process $\Upsilon(9.46) \to ggg$.
In particular, the experimental profile of the most energetic parton
($\langle E_1\rangle \simeq 4.1 $ GeV) was close to the phenomenological expectations
of a hadron jet.
However, a clear three-jet topology using {\it en vogue} jet definitions was not
established in $\Upsilon(9.46)$ decays for lack of energy~\cite{Berger79}. This
three-jet topology was
established later in the $e^+e^-$ experiments operating at higher energies (typically
$\sqrt{s}=30$ GeV), resulting from the energetic quark, anti-quark and gluon from the process
$e^+e^- \to q \bar{q} g $. The jet profiles of the three jets, as well as the inclusive
hadronic measurements undertaken at PETRA in 1979, followed detailed theoretical
expectations~\cite{THgluon,PHENgluon1,PHENgluon2}. Thus, it is fair to conclude that
the study of the decay $\Upsilon(9.46)\to 3g$, initiated by PLUTO at DORIS~\cite{Berger79}, was
an important step in the confirmation of QCD which served as a prelude to the unambiguous
discovery of the three-jet topology by the experiments at PETRA. More detailed and
quantitative tests of perturbative QCD in the decays of $\Upsilon(9.46)$ were also presented
subsequently by PLUTO~\cite{Berger81}.
Theoretical proofs
of the existence of three-jet topologies, in the Sterman-Weinberg sense, were
provided in 1981, and somewhat later in terms of the next-to-leading order calculations
of the three-jet cross sections.
This was done for inclusive jet distributions (such as the Fox-Wolfram shape
variable~\cite{Ellis:1980nc,Ellis:1980wv}, thrust~\cite{Vermaseren:1980qz,Ellis:1981re}
and energy-energy correlations~\cite{Ali:1982ub,Ali:1983au,Richards:1982te,Richards:1983sr})
and in terms of topological jet cross
sections~\cite{Fabricius:1980fg,Fabricius:1981sx,Gutbrod:1983qa,Kramer:1986mc,Gutbrod:1987jt,Gottschalk:1984vy}.
Confirmation of the non-abelian character of QCD in jets~\cite{THchi4,ang34,ang34-2,jet42} in the
four-parton processes $e^+e^- \to q\bar{q} gg$ came from experiments at LEP~\cite{EXPchi4,EXPgroup}.
In the meanwhile, multijet physics has developed enormously, with the NLO calculation of
$e^+ e^- \to \gamma, Z \to 4$ jets~\cite{Signer:1996bf,Dixon:1997th,Nagy:1998bb} completed around 1996,
and the NLO calculation of five-jet production at LEP reported recently~\cite{Frederix:2010ne}.
The properties of gluon jets
have largely been determined by experiments at PETRA and subsequently at LEP. The
fragmentation of gluon jets was initially conceived by treating them as independent
partons~\cite{PHENgluon1},
or implemented by the perturbative process $g^* \to q \bar{q}$ as the first step followed
by incoherent quark fragmentation~\cite{PHENgluon2} (IJ models). The resulting picture
could largely account
for the essential properties of the two- and three-jet events seen at PETRA and PEP,
and it helped in the discovery of three-jet events (and hence, of gluons) at PETRA. However,
analysis of the PETRA jets saw the emergence of an alternative fragmentation scheme for
the $e^+ e^- \to q \bar{q} g$ events, the LUND string model~\cite{Lund}, in which
hadronisation
was implemented in terms of two strings stretched along the colour-anticolour axes
which then fragmented very much like the quark-antiquark string in $e^+e^- \to$ 2-jets.
This model provided a better phenomenological description of data, in particular the particle
flow between the quark, antiquark and gluon
jets~\cite{Bartel:1981kh,Bartel:1983ij,Aihara:1984du,Althoff:1985wt}.
The LUND-string effect was subsequently
understood in perturbation theory in terms of the antenna radiation pattern
of QCD~\cite{Azimov:1986sf},
reflecting the colour coherence effect of the non-abelian character of this theory.
Detailed fragmentation models were built along angle-ordered perturbation
theory, which preserves colour coherence in QCD, and in which parton showers
were included in the form of cascades, which then finally fragmented into
hadron clusters according to phase space (cluster hadronisation models)~\cite{Herwig1}.
These Monte Carlo models developed for the PETRA jet analysis have played an
important role in the analysis of all high energy data involving jets. The modern incarnation
of these fragmentation models are PYTHIA~\cite{Pythia}, HERWIG~\cite{Herwig}
and SHERPA~\cite{Sherpa} , which differ in details on
how the parton showers are matched on to the fixed order perturbative QCD matrix elements
and in the hadronisation schemes. A central role is also played by the jet algorithms,
which, starting from the JADE scheme~\cite{JADE}, have evolved into trustworthy tools in
the definition of jets, with the modern versions called the $k_T$~\cite{Catani:1991hj} (mostly
in $e^+e^-$ annihilation processes) and anti-$k_T$ jet algorithms~\cite{Cacciari:2008gp}.
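Both belong to the same family of sequential recombination algorithms, defined by the
inter-particle and particle--beam distance measures
\begin{displaymath}
d_{ij} = \min\!\left(p_{T,i}^{2p},\, p_{T,j}^{2p}\right)\frac{\Delta R_{ij}^2}{R^2}~,
\qquad d_{iB} = p_{T,i}^{2p}~,
\end{displaymath}
with $\Delta R_{ij}^2 = (y_i-y_j)^2 + (\phi_i-\phi_j)^2$: the choice $p=1$ defines the $k_T$
algorithm, $p=0$ the Cambridge/Aachen algorithm, and $p=-1$ the anti-$k_T$ algorithm, which
yields cone-like yet infrared- and collinear-safe jets.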
Another large application of QCD is in studying DIS, photoproduction and $\gamma \gamma$
collisions. In these cases, the initial states are not as well known as in
$e^+e^-$ annihilation. Jets and QCD have played a central role in mapping the
PDFs of the proton and the photon. We have summarised some of the
highlights in this article. In particular, DIS measurements at
HERA~\cite{Amsler,H1I,Chekanov:2002be,Chekanov:2006xr,:2007pb,ZEUS1} have firmly
established the rise of the structure function $F_2(x, Q^2)$, which is due to the rapid
growth of the gluon density $g(x,Q^2)$ for low values of $x$ as $Q^2$ increases. Likewise,
high energy $p\bar{p}$ collisions at the Tevatron, in particular the Tevatron Run II data on
inclusive jet production~\cite{Tevjet,Aaltonen:2008eq,Abulencia:2007ez},
have led to greatly firming up the PDFs of the proton.
On the theoretical
side, the complete next-to-next-to-leading order (3-loop) parton splitting functions
have been derived
by Moch {\it et al.}~\cite{Moch:2004pa,Vogt:2004mw}. They have been used in
working out the proton PDFs by the CTEQ~\cite{CTEQ} and the MSTW~\cite{MSTW} collaborations.
Thus, the HERA and the
Tevatron measurements and the progress in the QCD splitting functions
will prove to be an asset in understanding the forthcoming
LHC data.
In the meantime, a fundamental paradigm change has taken place concerning ``Jets and QCD''.
The theory (QCD) is so well controlled (in particular, $\alpha_s(M_Z)$ is known to an
accuracy of better than 1\% and the crucial property of asymptotic freedom is now fully
established) that jet physics can serve as a tool to chart out
new territories in high energy physics. We have reviewed here some applications of
jet techniques in quantifying the properties of the top quark and the electroweak gauge
bosons $(W,Z)$. They have already played a significant role in determining the properties of
the SM particles in experiments at the Tevatron and they will play an even more
important role in the analysis of data from the experiments at the LHC. For example,
jets are now increasingly used in developing search strategies for the Higgs
boson, and even for particles suggested in theories beyond the Standard
Model. New jet techniques, such as the 2-jet splitting of a fat $b\bar{b}$ jet,
will be required to disentangle the decay $H \to b\bar{b}$ from an overwhelming
QCD background in hadron colliders.
One problem in jet physics, however, remains unsolved to this day. While,
due to asymptotic freedom, the dynamics of quarks and gluons can
theoretically be described with high accuracy at short distances,
matched by numerical lattice calculations for static properties
of hadrons, the transition from small to large distances in the evolution
of jets is theoretically not understood. At present, a bridge is provided
by intuitively formulated models, which are constrained
experimentally so stringently that hadron jets can be exploited to draw
a valid picture of quarks and gluons and their interactions at
short distances. New theoretical methods may help solve this
multi-scale problem rigorously in the future.
{\bf Acknowledgements} We thank Peter Zerwas for the collaboration in the early stages
of this work, for numerous helpful discussions that we had with him all along,
and for his valuable input in good parts of this manuscript. Helpful discussions with
Hans-J\"urgen Meyer, Hinrich Meyer and Bruno Stella on the PLUTO analysis of the
$\Upsilon(9.46)$ data are thankfully acknowledged.
We also thank a large
number of our colleagues and collaborators whose dedicated and painstaking work over decades
has contributed decisively to the development of QCD and jet physics.
This article is dedicated collectively to all of them.
\section{Introduction}
Materials, especially metals, sparsely doped with magnetic impurities have been widely investigated throughout the last 50 years primarily because some of them display
the fascinating Kondo effect \cite{Kondo1964}. Such systems can serve as an ideal playground to explore the underlying many body physics. However, the prerequisite to observe a Kondo effect at all is the survival of a local moment of the embedded impurity.\\
The interaction of the localized impurity electronic states with the itinerant electrons of the host material is theoretically described by the well-established $sd$- or Anderson model \cite{Anderson1961a}. Within this model, the size of the local moment arises from an intricate interplay of the on-site Coulomb repulsion $U_0$, the energy penalty for adding a second electron to the localized state, and the width $2\Gamma$ of the localized state. The width of the localized state, also known as a virtual bound state, results from hybridization of the electronic states of the impurity with the delocalized states of the host material. In the case of a symmetric arrangement of the impurity spin levels $E_{d,\pm}$, \textit{i.e.} $E_{d,\pm}=E_F\mp U_0/2\left(n_+-n_-\right)$ ($E_F$ is the Fermi energy of the system and $n_\pm$ the occupation of $E_{d,\pm}$), a simple criterion for the existence of a local magnetic moment can be derived \cite{Anderson1961a}:
$U_0/\Gamma>\pi$.\\
In recent years, advances in experimental techniques and theoretical methods have allowed researchers to push into the finite-size domain \cite{Skomski2010,Pastor2005,Kaul2009,Kaul2006,Kaul2005,Rotter2009,Liu2012,Thimm1999,Cornaglia2002,Booth2005}. In finite systems the itinerant electrons are confined and populate highly discretized energy levels. This in turn can be expected to have tremendous influence on the description within the Anderson impurity model, which accounts only for a continuous host density of states. Indeed, we found evidence that the size of the spin magnetic moment of a chromium impurity embedded in a small gold cluster is strongly affected by the discretized density of states of the host particle \cite{Hirsch2013}. This becomes most evident in host particles that exhibit a shell closure and therefore a wider highest-occupied--lowest-unoccupied molecular orbital (HOMO-LUMO) gap.
To get a more fundamental grasp on the influence of an energy gap or a highly discretized host density of states on the spin magnetic moment of an embedded impurity, we investigate such systems
using a modified Anderson impurity model.
\section{Model Hamiltonian}
We model the system in a tight-binding approach, using the following model Hamiltonian:
\begin{equation}
\mathcal{H}_{TB}=\begin{pmatrix} E_{d,\pm} & a & \cdots & a \\ a & E_{k,1} & 0 & 0\\ \vdots & 0& \ddots & 0 \\ a& 0& 0& E_{k,N} \end{pmatrix}
\label{eq:TB_Hamiltonian}
\end{equation}
In $\mathcal{H}_{TB}$, a single localized orbital at energy $E_{d,\pm}$ interacts with a finite number $N$ of delocalized states at energies $E_{k,i}$. Like in the Anderson model, the coupling strength $a$ of the localized orbital to the continuum states is assumed to be the same for all states $E_{k,i}$. Diagonalization of the matrix $\mathcal{H}_{TB}$ yields the $N+1$ eigenstates $\phi_i$ and eigenenergies $\epsilon_i$ of the system. This is to be done separately for majority $(+)$ and minority $(-)$ impurity spin states $E_{d,\pm}= E_F \mp U_0/2\left(n_+-n_-\right)$ to yield spin-resolved eigenfunctions $\phi_i^\pm=(c_1^{i,\pm},c_2^{i,\pm}, \ldots c_{N+1}^{i,\pm})$ and eigenenergies $\epsilon_i^\pm$. The states $E_{d,\pm}$ are separated by the on-site Coulomb repulsion $U_0$, which is the energy necessary to add an electron to the localized orbital. Eigenfunctions and eigenenergies obtained from diagonalization are used to calculate the occupation numbers $n_\pm$ of the majority and minority spin states from the projected spin density of states $\rho_\pm\left(E\right)$ as:
\begin{eqnarray}
\rho_\pm\left(E\right) &=&\sum_i|c_1^{i,\pm}|^2 \, \delta (E-\epsilon_i) \label{eq:MAIM_DOS}\\
n_\pm &=& \int_{- \infty}^{E_F}\rho_\pm\left(E\right)dE \label{eq:MAIM_occ}
\end{eqnarray}
Here, $\delta\left(E\right)$ is the delta function and $E_F$ the Fermi energy of the system.\newline
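As an illustration, the self-consistent solution of equations (\ref{eq:TB_Hamiltonian}-\ref{eq:MAIM_occ}) can be sketched in a few lines of Python (assuming \texttt{numpy}); the damped fixed-point iteration shown here, with its spin-polarized starting guess and mixing parameter, is one possible alternative to the graphical solution employed below, and the function names are illustrative.
\begin{verbatim}
import numpy as np

def occupation(E_d, E_k, a, E_F=0.0):
    """Impurity occupation for one spin channel, Eqs. (1)-(3):
    build H_TB, diagonalize, and sum the impurity weights |c_1^i|^2
    of all eigenstates below the Fermi energy."""
    H = np.diag(np.concatenate(([E_d], E_k)))
    H[0, 1:] = a                     # equal coupling to all host states
    H[1:, 0] = a
    eps, phi = np.linalg.eigh(H)     # eigenenergies, eigenvectors (columns)
    return (np.abs(phi[0, :]) ** 2)[eps <= E_F].sum()

def self_consistent(E_k, a, U0, E_F=0.0, mix=0.2, tol=1e-8):
    """Iterate n_+ and n_- with E_{d,+-} = E_F -+ U0/2 (n_+ - n_-)."""
    n_p, n_m = 0.9, 0.1              # spin-polarized starting guess
    for _ in range(10000):
        shift = 0.5 * U0 * (n_p - n_m)
        new_p = occupation(E_F - shift, E_k, a, E_F)
        new_m = occupation(E_F + shift, E_k, a, E_F)
        if abs(new_p - n_p) + abs(new_m - n_m) < tol:
            break
        n_p = (1 - mix) * n_p + mix * new_p   # damped update for stability
        n_m = (1 - mix) * n_m + mix * new_m
    return n_p, n_m
\end{verbatim}
For a dense host spectrum this reproduces the Anderson result of equation (\ref{eq:AIM_occ}) below; for sparse spectra the occupation changes in discrete steps, so a small artificial level broadening may be needed for smooth convergence.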
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{Fig1.eps}
\caption{Comparison of the solutions for the occupation numbers $n_\pm$ of spin-up and -down states using $U_0/\Gamma \approx 9.1$, obtained analytically from the Anderson impurity model and the tight-binding Hamiltonian equation (\ref{eq:TB_Hamiltonian}), using a coupling strength $a=\unit[0.02]{eV}$, on-site Coulomb repulsion $U_0=\unit[2]{eV}$ and a host density of states of $\unit[180]{eV^{-1}}$. Both models yield almost identical results, the self-consistent solutions are marked by the orange circles. Inset: Impurity state projected density of states for the spin polarized solution, obtained by solving equations (\ref{eq:TB_Hamiltonian}-\ref{eq:MAIM_occ}) self-consistently. The Lorentzian fit agrees well with the density of states.}
\label{fig:Vgl_AIM_TB}
\end{figure}
In order to find the spin polarization $\frac{\left(n_+ - n_-\right)}{\left(n_+ + n_-\right)}$ of the system the equations (\ref{eq:TB_Hamiltonian}-\ref{eq:MAIM_occ}) have to be solved self-consistently, since the energetic position of the localized orbital depends on the occupation $n_\pm$ and vice versa. More specifically, the energetic position of $E_\mp$ is determined by $n_\pm$ which in turn dictates the occupation of $n_\mp$. Like Anderson \cite{Anderson1961a} we solve this problem graphically by plotting the majority spin state occupation as a function of the minority spin state occupation $n_+\left(n_-\right)$ as well as $n_-\left(n_+\right)$. A self-consistent solution is found at the intersections of both curves, as shown in Fig. \ref{fig:Vgl_AIM_TB}. We test our model by comparing its results for dense but discrete levels, approximating a continuous band, to the analytical solution of the Anderson impurity model. In this limit both models should yield identical results. We chose a constant density of states of $\unit[180]{eV^{-1}}$, which is comparable to the density of states at the Fermi level of a free electron gas as can be found, for example, in a gold Au$_{660}$ nano-particle if the level bunching due to electron shell effects is neglected. The analytical solution \cite{Anderson1961a} of the Anderson impurity model for the occupations of majority and minority spin state is given by
\begin{equation}
n_\pm=\frac{1}{\pi}\arctan\left(\frac{U_0\cdot \left(n_\mp-0.5\right)}{\Gamma}\right)+0.5.
\label{eq:AIM_occ}
\end{equation}
A symmetrical arrangement of the impurity states $E_{d,\pm}$ relative to the Fermi energy $E_F$ is assumed. Such a symmetrical arrangement of the levels is a reasonable assumption implying that the dopant remains charge neutral. Although in metallic systems the impurity can be charged to some extent, this will be well below one elementary electric charge, rendering its influence on the Anderson model negligible.\newline
Fig. \ref{fig:Vgl_AIM_TB} demonstrates that the numerical solution of $\mathcal{H}_{TB}$ in the continuous band limit and the analytical solution of the Anderson impurity model are nearly indistinguishable. Furthermore, the inset of Fig. \ref{fig:Vgl_AIM_TB} shows the impurity state projected density of states that results from the numerical calculation. The Lorentzian shape of the virtual bound states is also in very good agreement with what one would expect from the Anderson impurity model and further confirms that our tight-binding model agrees with the Anderson impurity model in the continuous band limit.\newline
The on-site Coulomb repulsion was chosen to be $U_0=\unit[2]{eV}$, since typical values of $U_0$ for transition metals range from \unit[1]{eV} to \unit[6]{eV} \cite{Sasoglu2011,Kulik2006,Anisimov1991a}. For a given density of states and on-site Coulomb repulsion the half-width of the virtual bound state is solely determined by the coupling strength $a$, which was set to $\unit[0.02]{eV}$ here. This set of parameters results in a width $2\Gamma$ of $\approx \unit[0.44]{eV}$, which is obtained from a Lorentzian fit shown in the inset of Fig. \ref{fig:Vgl_AIM_TB} and compares well with the analytical value $2 \Gamma= \pi a^2 \rho(E_F)=\unit[0.45]{eV}$. Generally, a parameter range of the coupling strength $a$ of $\unit[0.02]{eV}-\unit[0.08]{eV}$ yields line widths which are consistent with those seen in UPS experiments carried out on $3d$-transition metal impurities embedded in gold and silver \cite{Reehal1980,Hochst1980,Hillebrecht1983,Folkerts1987}, scanning tunneling experiments on adatoms \cite{Crommie1993}, as well as density functional theory calculations \cite{Podloucky1980,Weissmann1999}.\newline
It should be noted that the model introduced here is constructed for a single impurity state only. An extension to multi-orbitals as present in, e.g., $3d$-transition metals can be done, but does not fundamentally alter the description. The main impact is a further stabilization of the impurity's spin by the exchange interaction of the local orbital electrons.\newline
Having tested our model in the way described above, we can now turn to studying the influence of a discretized host density of states on the spin polarization of the impurity. We will proceed in two steps. First, we will keep the host density of states quasi-continuous and introduce an energy gap at the Fermi level. Second, the host density of states will additionally be discretized.
\begin{figure*}[th!]
\includegraphics[width=0.7\textwidth]{Fig2.eps}
\caption{\label{fig:VBS}Upper panels: Impurity state projected density of states obtained using the tight-binding model Hamiltonian, equation (\ref{eq:TB_Hamiltonian}), for an impurity interacting with a dense discrete host density of states ($\unit[180]{eV^{-1}}$) without a gap (a) and with a gap of $\unit[0.1]{eV}$ (b). The ordinate in panel (b) is interrupted, while the inset shows the impurity state projected density of states as calculated. Parameters $a=\unit[0.04]{eV}$ and $U_0=\unit[2]{eV}$ were kept constant. The insets show the uncoupled impurity and host density of states. Lower panels (c) and (d) show the resulting solution for the occupation numbers for spin-up and -down states. The impurity magnetization is restored when introducing a small gap in the host density of states, panel (d). The values used correspond to an Anderson criterion of $U_0/\Gamma=2.21<\pi$.}
\end{figure*}
\section{Energy Gap in the Host Density of States}
The influence of an energy gap in the host density of states on the total occupation of the impurity states has been studied in the seminal work of Haldane \cite{Haldane1976}, which is an extension of the Anderson impurity model. Haldane was able to explain the large variety of charge states that are observed in dilute magnetic semiconductors. However, the spin polarization was not addressed in Haldane's study.
\begin{figure}[ht!]
\includegraphics[width=0.9\columnwidth]{Fig3.eps}
\caption{\label{fig:Spin_a}Comparison of systems incorporating an energy gap (\unit[0.1]{eV} and \unit[0.5]{eV}) in the host density of states to a system exhibiting no energy gap. On-site Coulomb repulsion $U_0=\unit[2]{eV}$ and density of states $\unit[180]{eV^{-1}}$ were kept constant. Panel (a): Spin polarization as a function of coupling strength $a$. Only for very small coupling parameters similar spin polarizations can be found. The spin magnetic moment is quenched for $a \geq \unit[0.035]{eV}$ in the continuous band case, whereas it survives in presence of a gap. Panel (b): Anderson criterion drops below $\pi$ for coupling strengths $a \geq \unit[0.035]{eV}$ marking the magnetic-to-nonmagnetic transition.}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=0.9\columnwidth]{Fig4.eps}
\caption{Spin polarization of the impurity as a function of the host energy gap at constant coupling parameter $a=\unit[0.04]{eV}$, on-site Coulomb repulsion $U_0=\unit[2]{eV}$ and host density of states of $\unit[180]{eV^{-1}}$.}
\label{fig:Spin_Gap}
\end{figure}
In this study we will concentrate on the influence of a gap on the spin polarization of the system.\newline
Parameters of on-site Coulomb repulsion $U_0=\unit[2]{eV}$, host density of states of $\unit[180]{eV^{-1}}$ and coupling strength $a=\unit[0.04]{eV}$ were chosen so that the spin polarization of the system vanishes in the continuous band limit. The relative energies of the electronic states of the uncoupled host and impurity are depicted in the inset of Fig. \ref{fig:VBS} (a). Vanishing spin polarization of the system is indicated by the degeneracy of the virtual bound majority and minority spin states of the composite system, obtained from a self-consistent solution as described in the previous section and shown in Fig. \ref{fig:VBS} (a). Note that self-consistency is only reached for the trivial non-magnetic solution $n_+=n_-=0.5$, as can be seen in panel (c) of the same figure.\newline
However, upon introducing an energy gap in the host density of states at the Fermi energy as small as $\unit[0.1]{eV}$, \textit{cf.} inset of Fig. \ref{fig:VBS} (b), the spin polarization is restored. Both the majority and minority spin states no longer feature a Lorentzian shape, but exhibit poles at the Fermi level as shown in Fig. \ref{fig:VBS} (b). This in turn leads to a transfer of density of states from the minority to the majority spin state, resulting in a finite spin polarization. This can also be seen in Fig. \ref{fig:VBS} (d), where in addition to the non-magnetic solution $n_\pm=0.5$ magnetic solutions $n_+ \neq n_-$ can be found. The depicted curves are no longer in agreement with the analytical description within the Anderson impurity model. The deviation from the $\arctan$-function equation (\ref{eq:AIM_occ}) is most obvious in the regions exhibiting straight lines around $n_+=n_-=0.5$, which suppress the quenching of the local spin magnetic moment.\newline
These observations indicate that the simple criterion of the Anderson impurity model for the magnetic to non-magnetic transition, $U_0/ \Gamma=\pi$, does not hold if an energy gap is introduced in the host density of states. To be more specific, magnetic solutions can be found although the criterion $U_0/ \Gamma$ yields values smaller than $\pi$ in the continuous band limit. The stabilization of a magnetic solution by introduction of an energy gap to the host density of states seems to be a quite robust effect as can be seen from Fig. \ref{fig:Spin_a} (a). Here, a comparison of the spin polarization as a function of the coupling strength $a$ is shown between systems exhibiting an energy gap in the host density of states and a system lacking an energy gap. For very small coupling parameters $a$ the influence of the gap is negligible, as has already been shown experimentally \cite{Hirsch2013}. For larger coupling strengths $a$, however, a severe deviation between the system with and without energy gap can be observed, most strikingly at $a \geq \unit[0.035]{eV}$. For that particular set of parameters ($U_0=\unit[2]{eV}$, $a \geq \unit[0.035]{eV}$ and $\rho=\unit[180]{eV^{-1}}$) the spin polarization of the system without a gap in the host density of states vanishes while the spin polarization survives in case of the systems exhibiting a gap. The vanishing spin polarization can be associated to the drop of the Anderson criterion $U_0/\Gamma$ below $\pi$, marking the magnetic to non-magnetic transition in the Anderson impurity model as depicted in panel (b) of Fig. \ref{fig:Spin_a}.\newline
Although the magnitude of the spin polarization will depend on the particular choice of the parameters $U_0$ and $a$ as well as the host density of states, opening up an energy gap reliably introduces a spin polarization in the system. This is even true for comparable large coupling parameters $a$ corresponding to small values $U_0/\Gamma<\pi$ in the continuous band limit, \textit{cf.} panels (a) and (b) of Fig. \ref{fig:Spin_a}. Again, this clearly shows the breakdown of the simple criterion for the magnetic to non-magnetic transition as given in the Anderson impurity model.\newline
We can therefore state at this point that an energy gap in the host density of states has a profound influence on the spin polarization. We can quantify this influence by calculating the magnitude of the spin polarization as a function of the size of the gap. The spin polarization as a function of the energy gap is plotted in Fig. \ref{fig:Spin_Gap}. All other parameters are kept constant as before $U_0=\unit[2]{eV}$, $a=\unit[0.04]{eV}$, and density of states $\unit[180]{eV^{-1}}$. Again, opening up the gap immediately restores a spin polarization which then monotonically increases as a function of the gap in the host density of states. The dependence of the spin polarization on the coupling strength $a$ will be discussed in more detail in the next section.\newline
\section{Discrete Host Density of States}
\begin{figure}[t!]
\includegraphics[width=0.9\columnwidth]{Fig5.eps}
\caption{\label{fig:Spin_vs_Gap}Spin polarization of a system consisting of 24 randomly distributed host states within \unit[20]{eV} as a function of the host energy gap for different coupling parameter $a$. Using an on-site Coulomb repulsion of $U_0=\unit[1]{eV}$.}
\end{figure}
The model Hamiltonian (\ref{eq:TB_Hamiltonian}) enables us to tackle not only bulk-like systems exhibiting a band gap but also discrete energy levels of the host, which can be found, \textit{e.g.}, in isolated systems consisting of only a few atoms.\newline
It is reasonable to assume that the details of the spin polarization in such a system will depend on the exact relative arrangement of impurity and host electronic states. However, we will show here that the overall scaling of the spin polarization with the energy gap of the host, which in the case of finite systems is the HOMO-LUMO gap, also holds for a highly discretized host density of states.\newline
To this end we calculated the spin polarization of a system featuring 24 delocalized host states (12 occupied, 12 unoccupied) interacting with a single impurity state as a function of the coupling strength $a$ as well as of the energy gap. The system introduced here could, \textit{e.g.}, be a $\mathrm{Na}_{12}$, $\mathrm{Au}_{12}$ or $\mathrm{Cu}_{12}$ host particle. As already pointed out, the spin polarization will depend on the actual level arrangement. However, in order to get a detailed insight into the influence of the energy gap, for every set of parameters we generated one thousand different host systems with randomly distributed host levels within a \unit[20]{eV} energy range \cite{Itoh2009} under the constraint to exhibit a certain energy gap at the Fermi level. The resulting density of states of $\unit[1.2]{eV^{-1}}$ compares well with the density of states at the Fermi level of a free electron gas, again neglecting level bunching due to electronic shell effects, in a coinage metal particle of size 12. The results of these calculations are depicted in Fig. \ref{fig:Spin_vs_Gap}, where the mean value of the system's spin polarization is plotted versus the energy gap. The standard deviation is given by the error bars. This was done for different coupling strengths $a=\unit[0.1-0.5]{eV}$, keeping $U_0=\unit[1]{eV}$ constant. The values for the coupling strength $a$ used in the calculation correspond to the parameters used in the previous section, since $a$ scales with the number of host states $N$ as $a=a_0/\sqrt{N}$.\newline
As can be seen from the figure, on average, the spin polarization scales inversely with the coupling strength $a$. But more importantly the spin polarization scales with the size of the energy gap, comparable to the behavior of a quasi-continuous band exhibiting a gap as presented in the previous section. The standard deviation is larger for small energy gaps and intermediate coupling strengths, \textit{cf.} Fig. \ref{fig:Spin_vs_Gap}, since for parameters close to the transition from a magnetic to a non-magnetic state the actual arrangement of the host and impurity states becomes naturally more important, in contrast to systems exhibiting very small or large coupling strengths $a$. In case of very weak interaction between host and impurity, hybridization is small independent of the relative arrangement of the levels. In the large coupling strength regime, hybridization is also mainly independent of the particular arrangement of the levels, since the host states are coupled to the impurity state disregarding their energetic separation.\newline
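A hypothetical helper illustrating how such constrained random host spectra can be generated is sketched below; pinning the HOMO and LUMO at $E_F \mp \mathrm{gap}/2$ is one simple way to enforce the prescribed HOMO-LUMO gap, and the function name and defaults are illustrative.
\begin{verbatim}
import numpy as np

def random_host_levels(n_occ=12, n_unocc=12, width=20.0, gap=0.5,
                       E_F=0.0, rng=None):
    """Draw host levels uniformly from a `width' eV window around E_F,
    pinning the HOMO and LUMO so the HOMO-LUMO gap equals `gap'."""
    rng = rng if rng is not None else np.random.default_rng()
    occ = np.append(
        rng.uniform(E_F - width / 2, E_F - gap / 2, n_occ - 1),
        E_F - gap / 2)                      # pin the HOMO
    unocc = np.append(
        E_F + gap / 2,                      # pin the LUMO
        rng.uniform(E_F + gap / 2, E_F + width / 2, n_unocc - 1))
    return np.sort(np.concatenate([occ, unocc]))
\end{verbatim}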
\section{Conclusion}
The influence of an energy gap in the host density of states as well as the discrete nature of a host density of states on the spin polarization of an impurity was studied using a tight-binding approach. The criterion for a transition from a magnetic to a non-magnetic system as stated within the Anderson impurity model is found to be no longer valid. For cases where the magnetic moment of the impurity is quenched in a system having a continuous host density of states, we have shown that the opening of a gap can recover the spin magnetic moment. The size of the spin polarization scales with the size of the energy gap. This observation even holds for a discrete host density of states. On average, the spin polarization follows the size of the energy (HOMO-LUMO) gap, and the actual relative energetic position of impurity and host electronic states is of minor importance. This can severely influence the magnetic moment of impurities embedded in finite-size matrices exhibiting a discretized density of states. Although their bulk counterparts may lack magnetization, finite systems can exhibit a spin polarization, which can be tuned by the size of the host energy (HOMO-LUMO) gap. The described dependence is expected to be observable by studying, e.g., cobalt-doped aluminum gas-phase clusters, combining x-ray magnetic circular dichroism \cite{Hirsch2009,Niemeyer2012,Zamudio-Bayer2013} and ultraviolet photoelectron spectroscopy \cite{Pettiette1988}.\\
Furthermore, the results even point at the possibility to switch the impurity's spin magnetic moment in a particular class of bulk host materials, i.e., materials exhibiting a Peierls transition. By passing through the Peierls transition temperature and thereby opening and closing an energy gap at the Fermi level, respectively, the spin of an embedded impurity may be switched on and off.
\section{Acknowledgments}
KH thanks Linn Leppert for fruitful discussions.
\section{Introduction}
\label{sec:introduction}
Machine learning (ML) has become the driving force behind diverse computational services like search engines, robotics, traffic forecasting, natural language processing, and medical diagnosis, to mention only a few.
This has led to a more diverse group of people affected by ML.
Placing the human at the center of ML means investigating the needs of the user and the interaction between developer and user.
Many approaches refer -- possibly indirectly -- to a pairing of a developer and a deployer, interacting for a certain application.
Typical examples are from ML in sciences, where the physicist, physician, biologist, drug developer interacts with the ML expert to establish a reliable data analysis process that leads to scientific insights.
In the long run, others may benefit from this without ever accessing the ML process.
Patients, for instance, take for granted that the diagnosis methods or drugs are approved by a valid procedure.
Other applications involve several parties.
A vendor company, its online shop, the company that optimizes the click rate, a recommendation engine, and the customers buying the products are all playing their part in modern sales ecosystems.
Modern financial business processes, like money laundering detection, involve a network of stakeholders who need to know the reliability of the ML classifications.
In the gig economy, the platform company provides the freelancers and vendors with their scheduling and accounting system that most likely deploys machine learning for optimization.
Companies apply diverse ML methods, and some have employees who know ML well.
For them, to inspect a learned model is important and explainable AI is serving them.
In contrast, the customers and freelancers are affected by the process but do not face the ML system directly nor do they interact with its developers.
They rely on tech companies to have done their job in a trustworthy manner.
In an even broader context, societies establish regulations that protect individual rights, e.g., regarding privacy of data and fair business processes.
Moreover, the goals of sustainability and the fight against climate change demand regulations on energy consumption of ML processes.
Now the regulating agencies need valid information about ML processes.
We see the diversity of stakeholders who need valid information in order to accept or reject a certain ML process.
Some of them know the ML theory and are experienced in evaluating models.
Some of them will interpret the models directly if they are nicely visualized.
Some of them will find the time for an interactive inspection of models.
All these needs have raised considerable attention in the AI community and attractive methods have been put forward.
However, methods that inform users who do not want to spend time learning about ML methods are missing.
This is the type of user we want to address.
We think that due to the increasing number of applications and the diversity of application areas, this user type will become more and more important.
We consider this user type a customer of ML who wants the product to fulfill some requirements.
Whether a method meets the demand is partially given by the theory that states its properties.
Finding and understanding these statements, scattered across decades of scientific publications, requires years of study.
Of course, our `customers' who do not want to invest time in considering a particular model will have even less interest in attending courses.
Hence, the theoretical insights need to be communicated easily without simplifying matters.
It is this view that has led to the concept of care labels~\citep{Morik/2021a}, which we adopt here.
This concept moves beyond individual man-machine interaction towards a public declaration of a method's properties.
It allows for specific regulations, for instance, regarding energy consumption.
The design of ML care labels requires defining the set of relevant properties.
Robustness in the sense that small changes to the data lead only to small changes in the model is a property that is studied intensively.
Runtime and memory bounds are straightforward.
Communication needs are important for applications in the area of the Internet of Things.
Energy consumption is important, since some state-of-the-art ML pipelines such as the NLP model ``BERT'' have an average power consumption of \SI{12}{\kilo\watt}, with training alone consuming \SI{1}{\MWh} (as much as a single-person household consumes in eight months\footnote{Based on the average power consumption per capita in Europe \url{https://ec.europa.eu/eurostat/statistics-explained/index.php/Electricity_and_heat_statistics}}), which can potentially have a high impact on our environment \citep{Strubell/etal/2020a}.
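As a back-of-the-envelope check, an average draw of \SI{12}{\kilo\watt} accumulating \SI{1}{\MWh} corresponds to roughly \SI{83}{\hour} of continuous training.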
Classes of ML methods, like exponential families, cover a range of algorithms for training and inference.
Consider the choice of an algorithm for performing inference on probabilistic graphical models, which leads to totally different theoretical properties and runtimes, determining the CO\textsubscript{2}\xspace footprint.
The marginal probabilities are often under- or overestimated by using the approximative \ac{lbp}\@\xspace algorithm instead of the exact \ac{jt}\@\xspace algorithm.
Therefore, an inexperienced user might tend to prefer the exact \ac{jt}\@\xspace algorithm without being aware of the resulting runtime and CO\textsubscript{2}\xspace impact, caused by high asymptotic runtime complexity, if this property were not indicated by the care label.
In general, different ML methods may need different categories, and even the same category may need different criteria to be tested.
Hence, if we take a more detailed look at the overall ML field, there is not one single set of categories with test criteria for all.
Instead, an expert database should store the specific instances of the categories for ML methods.
Moreover, considering static characteristics of a method does not suffice.
Worst-case asymptotic time and memory bounds are given by theory, but can vary by orders of magnitude across compute platforms, even if they implement the same abstract method.
A \cnn, for example, may be trained on a GPU and CPU, or on a microcontroller (e.g., Arduino or \ac{fpga}\@\xspace).
The former is much more resource-heavy, drawing power on the order of \SI{e2}{\watt} compared to only
$10^{-1}$ to \SI{e0}{\watt}.
An \ac{fpga}\@\xspace may work best using only integer arithmetic, or the Arduino may only have \SI{256}{\kilo\byte} of RAM, which limits the model's number of parameters.
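As a back-of-the-envelope bound, with parameters stored as 32-bit floats, \SI{256}{\kilo\byte} of RAM can hold at most about \num{65000} weights; 8-bit quantization raises this limit to roughly \num{260000}, before accounting for activations and working buffers.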
In general, the same method can be implemented on different hardware architectures and particular implementations might vary.
Hence, the more dynamic features of ML execution environments must be covered by the labels, as well.
For each property, ranges of values need to be defined that will then be expressed by symbols similar to those on the paper slips found in clothes and textiles.
Since we want to validate the properties, we need criteria which classify a certain instance of a method into the appropriate value range.
Where static properties may be listed based on theoretical results, the dynamic properties of a particular implementation on particular hardware demand tests on specific data sets.
Overall, for the set of properties with their value ranges, a certification process needs to be implemented.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{Proposed framework for certifying machine learning methods with \emph{care labels}. They capture aspects of the underlying theory (stored in an expert knowledge database) and their relevance for the practical implementation (based on profiling data sets)}
\label{fig:framework}
\end{figure}
In this work, we propose a novel means of communication between ML scientists and stakeholders that goes beyond logical, visual or natural language descriptions of single models.
We instead aim at providing a framework for certifying ML methods in general, as schematically displayed in \autoref{fig:framework}.
Our contribution comprises the following points:
\begin{enumerate}
\item We present an easy-to-understand \emph{care label design}, serving as a single graphical certificate (\autoref{fig:carelabel}) for ML methods and their implementations.
\item We devise a \emph{rating} system, drawing from an \emph{expert knowledge database} created, maintained and continually expanded by the research community.
\item We introduce \emph{categories} under which we bundle \emph{criteria}, which represent important properties of ML methods.
They are stored in the expert knowledge base.
\item We suggest to certify a given implementation against its underlying theory with the help of \emph{reliability and performance bound checks} on \emph{profiling data sets}, and reporting resource consumption.
\item We define \emph{badges} that are awarded to ML methods that fulfill certain noteworthy criteria.
\item We present a concept for a \emph{Certification Suite} that accesses the expert knowledge database and certifies a method together with its implementation.
\end{enumerate}
We start by giving an overview of good standards that have already been achieved for ML certification, and identifying their shortcomings.
Next, we introduce our novel care label concept and its constituting parts in \autoref{sec:carelabels}.
In \autoref{sec:pgm} we put our concept into practice for \acp{mrf}, a class of probabilistic graphical models with a wide range of applications.
\acp{mrf} constitute a powerful probabilistic tool with a rich theoretical background, serving to illuminate all aspects of our care label concept.
We conclude our work with a summary of our investigations, and outline future work in \autoref{sec:conclusion}.
\section{Related Work}
\label{sec:relatedwork}
The importance of trustworthy AI methods is increasing, especially because decision-making takes data-based models more and more into account
\citep{Lepri/Oliver/2018a,Bellotti/Edwards/2001a,Floridi/etal/2018a,Houben/etal/2021a}.
In a comprehensive book, \cite{Dignum/2019a} addresses AI's ethical implications of interest to researchers, technologists, and policymakers.
Another recent book brings together many perspectives of AI for humanity and justifies the urgency of reflecting on AI with respect to reliability, accountability, societal impact, and juridical regulations
\citep{Braunschweig/Ghallab/2021a/fixed}.
\cite{Brundage/etal/2020a} and \cite{Langer/etal/2021a} summarize many important aspects of developing trustworthy learning systems.
Their reports emphasize that institutional mechanisms (e.g.\@\xspace, auditing, red team exercises, etc.\@\xspace), software mechanisms (e.g.\@\xspace, audit trails, interpretability, etc.\@\xspace) and hardware mechanisms (assessing secure hardware, high precision compute management, etc.\@\xspace) are required for obtaining trusted systems, and that the diversity of stakeholders needs to be taken into account.
Where privacy-preserving data mining
(e.g.\@\xspace, \cite{Atzori/etal/2008a}) on the side of data analysis methods and the European General Data Protection Regulation (GDPR) on the side of political regulation successfully went together towards citizen's rights, a similar strategy for ML models and regulations in concert is missing.
\subsection{Inherent trustworthiness}
From its beginning, the machine learning community aimed at offering users understandable machine learning processes and results.
Interpretability guided the development of methods, their combination and transformation enabling users to inspect a learned model
\citep{Rueping/2006a}.
Recently, \cite{chen/etal/2018c} investigated interpretability of probabilistic models.
Inductive logic programming (ILP) assumed that relational logic, and description logic in particular, is easily understandable \citep{Muggleton/91a,Morik/Kietz/91a}.
Particular methods for interactive inspection and structuring of learned knowledge and given data offered a workbench for cooperative modeling of an expert with the ILP system \citep{Morik/89c}.
This close man-machine interaction in building a model creates common understanding of system developers and users.
However, it does not scale to larger groups of affected stakeholders.
Decision trees were promoted as inherently understandable.
However, feature selection and very deep trees quickly outgrow human intuition, and subtleties are only recognized by experts.
For instance, the weights of redundant features are not appropriately computed in decision trees, whereas in support vector machines, they are \citep{Mierswa/Wurst/2005b}.
Statistics, even if expressed in natural language, is not easy to understand correctly, as has been shown empirically
\citep{Wintle/etal/2019b}.
\subsection{Explainable AI}
Explainable AI aims at offering an easy understanding of complex models for inspection by stakeholders.
They investigate particular tasks, e.g., recommender systems
\citep{Nunes/Jannach/2017a},
or particular ML families, e.g. Deep Neural Networks
\citep{Huang/etal/2020a,Samek/etal/2019a},
or survey the needs of diverse stakeholders
\citep{Langer/etal/2021a}.
Model-agnostic explanation routines are designed to explain a variety of learned models
\citep{Ribeiro/etal/2016a,Guidotti/etal/2018a}.
Given the large amount of research in this field, it has become a necessity in its own right to describe explanation methods along a proposed schema
\citep{Sokol/Flach/2018a}.
For model inspection by a domain expert or application developer, these methods are of significant importance.
\subsection{Description of methods and models}
Modern machine learning toolboxes like RapidMiner, KNIME, or OpenML
\citep{Mierswa/etal/2006a,Rijn/etal/2013a} are oriented towards knowledgeable users of ML or application developers.
They and others \citep{Falkner/etal/2018a,Brazdil/etal/2003a} use meta-data which offer a descriptive taxonomy of machine learning.
In this sense, the ML processes are carefully described and documented.
The user may click on any operator to receive its description and requirements.
RapidMiner, in addition, recognizes problems of ML pipelines and recommends or automatically performs fixes.
It enhances understandability by structuring ML pipelines and offering processes in terms of application classes.
Moving beyond the direct interaction between system and application developer aims at accountable descriptions of models.
The approaches for \emph{FactSheets} from IBM \citep{Arnold/etal/2019a} and \emph{Model cards} from Google \citep{Mitchell/etal/2019a} are closely related to our approach.
They give impetus to document particular models for specific use cases in the form of natural language and tabular descriptions, and even suggest including them with ML publications.
In a recent user study, most interviewees found the idea of model-specific FactSheets helpful in understanding the methodology behind the model creation process \citep{Hind/etal/2020a}.
Another line of work aims at automatically tracking and visualizing the training process, including computed metrics as well as model architecture \citep{Schelter/etal/2017a,Vartak/etal/2016a}.
The approaches mentioned so far also do not consider resource usage (e.g.\@\xspace, power consumption), even though it is crucial information for users \citep{Henderson/etal/2020a} and allows for discussing the environmental ethical impact of ML methods.
Often, there is a trade-off between two important properties, such as higher runtime in exchange for lower energy consumption, which might influence the customer's decision on which model to choose.
As an example, \acp{fpga} offer unique performance and energy-saving advantages, but the software engineering part is challenging \citep{Omondi/Rajapakse/2006a,Teubner/etal/2013a}.
The challenges faced when deploying machine learning on diverse hardware in different application contexts \citep{Hazelwood/etal/2018a} even gave rise to a new conference, bridging the gap between machine learning and systems researchers \citep{Ratner/etal/2019a}.
The current increase in awareness regarding CO\textsubscript{2}\xspace emissions foregrounds these properties even more for users who want to design ML systems responsibly.
Indeed, the amount of CO\textsubscript{2}\xspace emitted during training and deployment of state-of-the-art machine learning models has increased drastically in recent years \citep{Strubell/etal/2020a,Schwartz/etal/2019a}.
To give more insight into this issue, \cite{Schwartz/etal/2019a} urge researchers to provide a price tag for model training alongside the results, intending to increase visibility and to make machine learning efficiency a more prominent point of evaluation.
In \cite{Henderson/etal/2020a}, the authors provide a framework to measure energy consumption of ML models.
However, they only measure specific model implementations, mostly disregarding theoretical properties and guarantees.
We argue that a proper framework also needs to consider known theory, ideally stored as a database.
While these approaches are an important and necessary call for participation in the endeavour of describing learned models, we argue that natural language descriptions and empirical results alone are not enough to enhance trust in the model.
They do not account for whether or not the theoretical properties of the model are fulfilled in the specific implementation at hand.
This was stunningly shown by \cite{Dacrema/etal/2019a}, where baseline heuristics were able to beat top-tier methods, and many results could not be reproduced at all.
This brings the issue of certification and testing to the foreground \citep{Cremers/etal/2019a}.
\cite{Brundage/etal/2020a} argue that descriptions must be verifiable claims.
\cite{Kwiatkowski/etal/2011} investigate verification of probabilistic real-time systems.
\input{tab_rw_comparison}
A summary of related work is shown in \autoref{fig:rw_comparison_table}, highlighting which aspects of ML certification the authors cover, and where they fall short.
What we find missing in the current state-of-the-art is a unifying concept to certify ML methods and their implementation on diverse hardware in terms of adherence to known theoretical properties and resulting resource demands.
We argue that a method's properties need to be classified independently from a specific use case or data set.
Additionally, this information needs to be accessible for non-experts, thus complicated theory has to be concealed through appropriate levels of abstraction.
This is where our proposed \emph{care labels} come into play, offering a comprehensive and easy to interpret overview of methodological properties, both theoretical and given the particular implementation at hand.
\section{Machine learning care labels}
\label{sec:carelabels}
We now introduce our care label concept for certifying ML methods, addressing all aforementioned issues.
With ``method'', we refer to a combination of ``components'' for performing a specific machine learning task, such as training or applying a model.
Most components are customizable, resulting in wildly varying properties.
We later show this in practice for different probabilistic inference algorithms.
Implementation and choice of hardware add yet another layer of complexity.
Our care labels provide relief by hiding all underlying complexity behind a user-friendly façade.
In the following sections, we discuss the details of our concept, following the structure of our contribution list from \autoref{sec:introduction}.
\subsection{Care label design}
\label{ssec:care_labels}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{Design of machine learning care labels, consisting of theory and implementation segments}
\label{fig:carelabel}
\end{figure}
Evidently, our proposed care labels need to take manifold theoretical and practical insights about ML methods into account, and compile them into a single short comprehensive document, similar to an index card.
Our design is shown in \autoref{fig:carelabel} and consists of two segments:
The upper-left segment contains information about the method's theoretical properties, while the bottom-right segment also considers the given execution environment.
As methods can be implemented in various ways and on various compute architectures, we designed this segment to ``attach'' to its theory.
This is analogous to different brands of refrigerators:
While the abstract task stays the same (keeping food and beverages cool), manufacturers use different components and circuits, and their specifics (e.g.\@\xspace, lowest possible temperature, noise level, energy consumption) vary.
In the same way, different ML implementations perform the same abstract task, but have their specific strengths and weaknesses.
Both segments contain a name and short description in their upper left corner.
The theoretical segment displays the method's rating for five important \emph{categories} on the right, represented as colored hexagons.
The ratings for each category are drawn from an expert knowledge database (see \autoref{ssec:rating} and \autoref{ssec:categories} respectively).
By restricting the care label to simple color-based ratings, we allow for a high-level assessment without the need for deeper understanding of the underlying theory.
On the left, white hexagonal fields provide space for badges, which we describe in \autoref{ssec:badges}.
The implementation segment contains three checkbox fields on the right that connect to three theoretical categories, indicating whether the theoretical properties are verified for the implementation.
For a refrigerator, this could be a test whether the temperature reliably stays below the point where bacteria grow quickly.
For ML methods, we can check if theoretical bounds about result quality, runtime or memory consumption hold.
Additionally, three white hexagonal fields with colored symbols at the bottom show measurement results for runtime, memory, and energy consumption (for more details see \autoref{ssec:boundchecks}).
In short, our design accomplishes the following:
\begin{itemize}
\item Provides general information about the ML method at a glance
\item Shows simple ratings for important categories, trading complexity and detail for simplicity and user-friendliness
\item Clearly highlights the interplay of theory and implementation by showing whether the implementation fulfills all theoretical properties
\item Is understandable for users without scientific background, allowing for easy comparison between ML methods
\item Highlights noteworthy properties that stakeholders may need for their particular application
\end{itemize}
\subsection{Expert knowledge database for method ratings}
\label{ssec:rating}
The theory behind machine learning methods is manifold, with different model classes having their own intricacies that can only be fully understood by experts.
Consequently, they are required to assess important properties, identify to what extent a method exhibits them, and convey that knowledge to less informed users.
We propose to assemble a database with criteria that describe theoretical properties of ML methods, independent of their implementation.
By bundling criteria into few concise categories, we allow for easier comparability between methods.
We further propose to assign a rating to each category, consisting of four levels \emph{A} -- \emph{D}, each represented by a color from a gradient ranging from green (\emph{A}, best rating) to red (\emph{D}, worst rating).
This is inspired by similar certification concepts, e.g.\@\xspace, the EU energy labelling framework \citep{Union/2017a}, which rates energy efficiency of electronic devices, or Nutri-Score for nutrient content in food \citep{Julia/etal/2018a}.
The ratings represent an expert assessment of how strongly the method at hand fulfills the respective criteria, as listed in \autoref{tab:rating}.
\input{tab_rating}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{Fig3}
\caption{Ratings of individual components are combined. Neutral ratings (gray) get overridden}
\label{fig:rating-combo}
\end{figure}
As mentioned before, customizable components of a method can lead to quite different properties:
While a basic model class may fulfill most criteria and consequently be rated \emph{A}, a specific component may override the rating to \emph{B}, e.g.\@\xspace, due to much slower asymptotic runtime, or simplifying assumptions resulting in weaker theoretical guarantees.
To address this issue, we propose a ``building block'' approach, by storing separate ratings for methods and their components in the database and combining them for a user-specified method.
Where ratings for certain categories are not affected by configuring the component, we allow neutral ratings.
Generally, ratings should be combined pessimistically, i.e.\@\xspace, good ratings get overridden by worse ratings; this is depicted in \autoref{fig:rating-combo}.
This complexity is hidden from users, as they only receive a single label.
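A minimal sketch of this combination rule is given below; the rating values and the component list in the example are illustrative assumptions, not a fixed specification.
\begin{verbatim}
# Pessimistic "building block" combination: worst rating wins,
# neutral entries (None) are ignored.
ORDER = {'A': 0, 'B': 1, 'C': 2, 'D': 3}   # A best, D worst

def combine(ratings):
    """Combine the per-component ratings of one category."""
    effective = [r for r in ratings if r is not None]
    if not effective:
        return None                         # stays neutral
    return max(effective, key=ORDER.get)

# e.g. method 'A', loss neutral, optimizer 'A', inference 'D'
print(combine(['A', None, 'A', 'D']))       # -> 'D'
\end{verbatim}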
\subsection{Categories and criteria}
\label{ssec:categories}
In an attempt to untangle all aspects that ML users should consider when choosing a method and implementation, we propose a set of categories that summarize desirable properties.
For each category, we compile a list of yes-no type criteria:
A high number of fulfilled criteria supports a method's aptitude w.r.t.\@\xspace the category, resulting in a higher rating.
In this section we give only a few examples of criteria for each category, hoping that through practical insights and input from fellow researchers the list will grow.
While the categories we propose are designed to be more or less universal across model families, the constituting criteria may not apply to certain types of models or algorithms, e.g.\@\xspace, because they belong to completely different paradigms.
A solution to this problem would be to differentiate between model families, such as \emph{generative} and \emph{discriminative} models, providing alternative criteria for each separately.
We want to point out that we purposefully do \emph{not} include categories or criteria concerning quality metrics like accuracy or convergence speed.
These are highly dependent on specific input data and give no reliable impression of how well a method performs in an arbitrary, unknown use case.
Giving simple ratings for these criteria would strongly contradict the ``no free lunch'' theorem -- the fact that no single method performs universally well on all problems \citep{Wolpert/Macready/97a}.
We do however propose to investigate selected performance properties of method implementations, as described in \autoref{ssec:boundchecks}.
\paragraph{Expressivity}
\begin{itemize}[label=$\square$]
\item Example criterion: \textit{The method provides at least one human-interpretable measure of uncertainty along with its output}
\end{itemize}
We call a model \emph{expressive} if it produces a variety of useful outputs for different input formats.
This simple definition implies multiple properties:
On the one hand, an expressive model should be able to handle arbitrarily complex functions, e.g.\@\xspace, a classifier splitting every labeled data set, or a generative model approximating any probability distribution.
On the other hand, highly expressive models provide additional outputs (e.g.\@\xspace measures of uncertainty or justification) to make them more interpretable.
For users with certain safety-critical application contexts, such information may even be a strict requirement.
\paragraph{Usability}
\begin{itemize}[label=$\square$]
\item Example criterion: \textit{The model is free of hyperparameters}
\end{itemize}
This category is concerned with the method's ease of use for end users.
A main aspect is the number and complexity of hyperparameters:
A hyperparameter-free method can be directly applied to a problem, while a method with many parameters needs fine-tuning to produce good results, which requires considerable effort and experience.
Even more experience is required for choosing parameter values that are difficult to interpret:
Choosing $k$ for a $k$-Means Clustering is conceptually much easier than choosing the weight decay in stochastic gradient descent.
The difficulty of choosing optimal hyperparameters can be alleviated by theoretical knowledge of optimal parameter settings.
We consider a method to be more easily usable if there are algorithms or formulas for deriving good or even optimal parameter values for given inputs.
\paragraph{Reliability}
\begin{itemize}[label=$\square$]
\item Example criterion: \textit{The method produces theoretically exact results}
\end{itemize}
We require \emph{reliable} models to be firmly grounded in theory, i.e.\@\xspace, there should be evidence of mathematical error bounds, as well as insights about the model's fairness or bias.
As an example, uncertainty given by neuron activations in ANNs alone was found to not necessarily be a reliable measure~\citep{Guo/etal/2017a}.
Such models are highly untrustworthy when they are used in safety-critical fields such as autonomous driving.
In contrast, \acp{mrf} are proven to recover the correct marginal probabilities with an increasing number of data points, given the underlying independence structure \citep{Piatkowski/2019c}.
It is important to visualize these fundamental differences comprehensibly.
Importantly, if there are theoretical bounds for the method at hand, they should also be verifiable by software tests, which we call \emph{bound checks}.
The particular tests need to be defined separately for all methods eligible for certification -- we discuss details in \autoref{ssec:boundchecks}.
\paragraph{Theoretical time and memory consumption}
\begin{itemize}[label=$\square$]
\item Example criterion: \textit{The method's runtime scales (at worst) quadratically with the input dimensionality, i.e.\@\xspace in $\mathcal{O}(n^2)$}
\end{itemize}
Runtime and memory usage are factors of utmost importance for stakeholders, especially when facing resource constraints.
ML theory provides insights on (worst-case) time and memory consumption of algorithms in the form of big $\mathcal{O}$ notation.
Based on this theoretical tool, we propose a ranking of asymptotic time and memory complexity classes, with the rank being displayed in the care label's theory segment.
In cases where big $\mathcal{O}$ notation depends on different factors (e.g.\@\xspace, number of features or data points), we propose to classify the method according to the factor with the highest complexity class.
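To illustrate, such a ranking can be realized by ordering a few complexity classes and rating a method by its worst factor; the class-to-rating mapping below is our own assumption rather than a fixed standard.
\begin{verbatim}
# Illustrative ranking of worst-case complexity classes.
CLASSES = ['O(1)', 'O(log n)', 'O(n)', 'O(n^2)', 'O(n^3)', 'exp(n)']
RATING  = {'O(1)': 'A', 'O(log n)': 'A', 'O(n)': 'A',
           'O(n^2)': 'B', 'O(n^3)': 'C', 'exp(n)': 'D'}

def rate_complexity(per_factor):
    """Rate a method by the factor with the highest class."""
    worst = max(per_factor, key=CLASSES.index)
    return RATING[worst]

# features O(n), data points O(n^2)  ->  rated by the worst factor
print(rate_complexity(['O(n)', 'O(n^2)']))   # -> 'B'
\end{verbatim}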
Energy is another important factor to consider when deploying ML, but it results directly from runtime, memory consumption and hardware.
As such, theory does not provide any additional information here.
\subsection{Certifying the implementation}
\label{ssec:boundchecks}
So far we only considered static theoretical properties of ML methods, and how corresponding information can be summarized via simple care label ratings.
However, looking at theory alone is not enough, as practically rolled out ML can diverge from it.
Consider runtime as a practical example: In theory, an algorithm may be very efficient, but its implementation may still be very slow, due to slow peripherals or inefficient code.
Many popular ML implementations even suffer from severe bugs \citep{Thung/etal/2012a,Islam/etal/2019a}, let alone aligning with respective theoretical properties.
We therefore propose to also certify the implementation's compliance with its underlying theory via test procedures that we call \emph{bound checks}, which either investigate the dynamic aspects of \emph{reliability} or \emph{performance}.
The former intend to verify method-specific theoretical guarantees for reliability (cf. \autoref{ssec:categories}), as provided by the expert knowledge database.
We propose to check those guarantees programmatically via software tests.
This requires synthetic data with known properties, which is fed into the implementation.
Its output is then checked against the known expected results.
Our performance checks investigate runtime and memory usage in the given execution environment.
Here we draw on the previously introduced asymptotic complexity classes, with which the implementation is expected to comply.
We check this compliance by running experiments on synthetic data with varying input sizes.
Measuring the corresponding runtime and memory usage in a software profiling fashion (cf. \autoref{sssec:ImpPart}) allows for checking whether the theoretical complexity holds.
Checkmark symbols on the right hand segment of our care label denote whether the implementation satisfies the reliability and performance checks.
Information about the available hardware also allows us to assess the energy consumption, and thus the carbon footprint.
As motivated earlier, this is of high interest for aspects of environmental ethics \citep{Henderson/etal/2020a}.
By drawing inspiration from electronic household devices and informing about energy consumption, our care label rewards energy-efficient implementations and ultra-low-power devices such as FPGAs and ASICs, even if they are limited in expressivity or have a higher runtime.
For providing specific information, we assess the practical runtime (in seconds) and memory demand (in megabytes), along with energy consumption (in Watt-seconds) for a medium-sized data set.
Those measurements are displayed in the implementation segment.
For comprehensibility we also display them as colored badges, based on their position on a scale for different orders of magnitudes.
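Such a scale can be realized, for instance, by counting orders of magnitude; the boundaries and colors in the following sketch are hypothetical.
\begin{verbatim}
# Hypothetical badge coloring by order of magnitude.
def badge_color(value, bounds=(1e0, 1e1, 1e2)):
    colors = ['green', 'yellow', 'orange', 'red']
    return colors[sum(value >= b for b in bounds)]

print(badge_color(0.4))    # e.g. 0.4 s runtime -> 'green'
print(badge_color(35.0))   # e.g. 35 Ws energy  -> 'orange'
\end{verbatim}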
\subsection{Badges for noteworthy criteria}
\label{ssec:badges}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=2cm]{Fig4a.pdf}
\caption{``Method provides uncertainty measure''}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=2cm]{Fig4b.pdf}
\caption{``Method can be tested for robustness''}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=2cm]{Fig4c.pdf}
\caption{``Method can be used with data streams''}
\end{subfigure}
\caption{Three examples of badges as a compact way to summarize a method's noteworthy properties (Images taken from \url{Flaticon.com})}
\label{fig:badges}
\end{figure}
We argue that certain properties are particularly noteworthy because they are rare among comparable methods and have great impact on the method's overall rating.
Examples of such noteworthy properties include uncertainty measures, whether the method can be tested for robustness, and whether the model can be used with streaming data.
In order to highlight these properties, we introduce badges in the form of pictograms that get printed on the care labels.
Some examples of badges, along with short explanations, are given in \autoref{fig:badges}.
\subsection{Certification suite concept}
\label{ssec:certificationsuite}
We propose to develop a certification suite software that enables a less informed user to obtain comprehensible information on ML methods and their implementations, in the form of care labels.
Most importantly, it allows users to configure a specific method from its available constituting components.
For the chosen configuration, the software queries the ranking information, asymptotic performance bounds, and reliability bounds from the knowledge database, and combines them into the theoretical label segment.
After configuring the method's backend implementation according to the user input, the suite then profiles and runs bound checks.
We also propose to implement an interactive, high-level interface that hides all complicated ML logic from users with little experience in the field.
As part of this work, we have already implemented a simple prototype; see \autoref{sssec:ImpPart} for more information.
\section{Applying the care label concept to graphical models}
\label{sec:pgm}
We now implement our concept for selected members of the probabilistic graphical model (PGM) family.
Having a long history in statistics and computer science, theoretical properties of graphical models have been studied rigorously in literature \citep{Koller/Friedman/2009a}.
Thus, they are well-suited to demonstrate our care label concept as a means to aid the user in their decision making process.
Firstly, we briefly discuss the theoretical background of \acfp{mrf}, a specific subtype of \acp{pgm}.
Secondly, we present the care label generation procedure for two different \ac{mrf}\@\xspace variants (\autoref{ssec:Theo_Impl}).
We discuss the static, theoretical properties and corresponding rating, as stored in the expert knowledge database, determining the theory segment of our label (\autoref{sssec:theoPart}).
In addition, we present results of our testing procedures, which certify a given \ac{mrf} implementation against the underlying theory, both in terms of reliability and resource demand, while also assessing the energy consumption of the execution environment at hand (\autoref{sssec:ImpPart}).
\subsection{Background on Markov random fields}
\label{ssec:mrf_background}
\acp{mrf} belong to the family of \acp{pgm} and are used in many different applications like satellite image gap filling \citep{Fischer/etal/2020b}, medical diagnosis \citep{Schmidt/2017a}, and security-critical systems \citep{Lin/etal/2018b}.
Moreover, \acp{mrf} can be used for constrained learning scenarios, like distributed environments \citep{Heppe/etal/2020a} or platforms under strict resource requirements \citep{Piatkowski/etal/2016a}.
We now shortly discuss the underlying theory of \acp{mrf}, as it provides guarantees and static properties that determine their care label ratings.
\acp{mrf} combine aspects from graph and probability theory in order to model complex probability distributions $\mathbb{P}(\ensuremath{\boldsymbol{X}})$ over some $d$-dimensional random vector $\ensuremath{\boldsymbol{X}} = (\ensuremath{\boldsymbol{X}}_1,\hdots,\ensuremath{\boldsymbol{X}}_d)^\top$ efficiently.
Conditional independences between elements of $\ensuremath{\boldsymbol{X}}$ are exploited and modeled through a graph $G = (V,E)$, where each vertex $v \in V$ is associated with one random variable of $\ensuremath{\boldsymbol{X}}$.
If two vertices $i$ and $j$ are not connected by an edge, $\ensuremath{\boldsymbol{X}}_i$ and $\ensuremath{\boldsymbol{X}}_j$ are conditionally independent given the remaining variables.
In this work we focus on discrete \acp{mrf}, which allow for an intuitive parametrization; each element $\ensuremath{\boldsymbol{X}}_i$ takes values in its discrete finite state space $\mathcal{X}_i$.
By introducing the so-called potential functions $\psi_C: \mathcal{X}_C \mapsto \mathbb{R}_{+}$ for each of the cliques in $G$, mapping variable assignments to positive values, the joint density factorizes according to \cite{Hammersley/Clifford/71a} as follows:
\begin{equation}
\mathbb{P}(\ensuremath{\boldsymbol{X}} = x) = \frac{1}{Z} \prod_{C \in \mathcal{C}(G)} \psi_C(x_C)\enspace,
\end{equation}
where $Z$, the so-called \emph{partition function}, acts as normalization,
\begin{equation}
Z = \sum_{x \in \mathcal{X}} \prod_{C \in \mathcal{C}(G)} \psi_C(x_C).
\end{equation}
The potential functions $\psi$ are parametrized by weights, which allows for defining a \emph{loss function}.
By minimizing it during the training process, the model adapts to a given data set.
Typically, the weights are learned via maximum likelihood estimation with first-order \emph{optimization methods}.
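For concreteness, one common choice, stated here for illustration rather than prescribed by the text above, is the log-linear parametrization
\begin{equation}
\psi_C(x_C) = \exp\left( \langle \theta_C, \phi_C(x_C) \rangle \right),
\end{equation}
with sufficient statistics $\phi_C$ (e.g.\@\xspace, indicators of the clique states) and weight vectors $\theta_C$. The average negative log-likelihood of a data set $\mathcal{D}$ then reads
\begin{equation}
\ell(\theta) = \log Z(\theta) - \frac{1}{\lvert \mathcal{D} \rvert} \sum_{x \in \mathcal{D}} \sum_{C \in \mathcal{C}(G)} \langle \theta_C, \phi_C(x_C) \rangle,
\end{equation}
and its gradient w.r.t.\@\xspace $\theta_C$ is the difference between the model marginals and the empirical marginals, $\mu_C(\theta) - \hat{\mu}_C$. Hence every gradient step requires the model marginals $\mu_C(\theta)$, which is exactly where \emph{probabilistic inference} enters the training loop.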
Having access to the joint density clears the path for many further ML tasks, such as generating data via Gibbs sampling, answering marginal $\mathbb{P}(X_i = x_i)$ or conditional $\mathbb{P}(X_i = x_i|X_j = x_j)$ probability queries, or providing maximum a-posteriori (MAP) estimates. However, solving such tasks requires \emph{probabilistic inference}.
Algorithms for such computations can be divided into two variants, namely \emph{exact} and \emph{approximate} algorithms (see upper part of \autoref{fig:inference}).
On the one hand, the \acf{jt} algorithm~\citep{Wainwright/Jordan/2008a} is a well-known method to perform exact inference on arbitrary graph structures, while having a very slow asymptotic runtime.
On the other hand, the \acf{lbp} algorithm~\citep{Kim/Pearl/83a} performs approximate inference, sacrificing theoretical guarantees for considerably faster performance.
Further approximation algorithms include the variational \citep{Wainwright/etal/2003a} and sampling-based approaches \citep{Andrieu/etal/2003a}.
Keep in mind that exact probabilistic inference is a \#P-complete task \citep{Piatkowski/2018a}.
\subsection{Deriving care labels for Markov random fields}
\label{ssec:Theo_Impl}
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{Fig5.pdf}
\caption{Overview of configurable components for \acp{mrf}. Each instanced model requires choosing the loss, an optimizer and an inference algorithm. For our experiments we chose likelihood as loss function, gradient descent for optimization, and evaluated two different inference algorithms. The respectively instantiated care labels are shown below, with resulting theoretical and practical ratings}
\label{fig:inference}
\end{figure}
We generate care labels for different \acp{mrf}, based on combinations of chosen components.
In the context of \acp{mrf}, there are three major configurable components: an \emph{optimizer}, a \emph{loss function}, and an \emph{inference algorithm}.
We restrict ourselves to investigating \emph{gradient descent} optimization with a \emph{likelihood} loss function, using either the \ac{lbp}\@\xspace or the \ac{jt}\@\xspace algorithm for performing inference.
These combinations are depicted in \autoref{fig:inference}, along with their corresponding final care labels.
\subsubsection{Expert knowledge-based ratings}
\label{sssec:theoPart}
\acp{mrf} already exhibit certain static properties and receive corresponding ratings for our categories.
The ratings in the theoretical segment of the care label are stored in the expert knowledge database (cf. \autoref{ssec:rating}) and were agreed upon by 10 experts using a majority vote.
In case of a tie, we decided in favor of the method.
As combining the individual components can greatly influence the rating, we also have to identify their individual ratings.
Combining the component ratings should be based on a fixed set of rules.
Here, we stick to the already proposed way of taking the infimum over all ratings (cf. \autoref{fig:rating-combo}).
We display the respective ratings for the \ac{mrf}\@\xspace components and variants in \autoref{tab:mrf_ratings}, and now explain their expert-knowledge-based justification and corresponding implications for the user, following the columns from left to right.
\input{tab_mrf_ratings}
\paragraph{Expressivity}
Looking at \autoref{tab:mrf_ratings}, it stands out that the general ML method choice is the only component affecting the expressivity rating.
We reason that the expressive power of \acp{mrf} is determined only by its inherent properties, while the customizable components are neutral.
\acp{mrf} are very expressive, because they can perform all generative model tasks: they can be queried for conditional or joint probabilities, and they allow sampling data from the distribution.
In addition, \textit{their probability output is a natural uncertainty measure}.
Therefore, we argue that \acp{mrf} should be rated \emph{A}.
\paragraph{Usability}
In terms of usability, \acp{mrf} receive the grade \emph{B}, since the independence graph is usually unknown for real-world use cases, and thus has to be defined manually.
For this, the user must either incorporate expert knowledge or use procedures for structure estimation \citep{Yang/etal/2014d}.
The loss function does not impact the usability and is rated neutral.
Gradient descent only requires choosing a step size, which is well-documented with reasonable defaults, therefore we rate it \emph{A}.
The \ac{lbp} inference requires careful tuning of the stopping criterion by specifying the convergence threshold or number of iterations.
However, more iterations do not necessarily improve performance, which makes the choice quite unintuitive, resulting in a \emph{C} rating for usability.
\ac{jt} inference does not require additional hyperparameter tuning, which yields an \emph{A} rating.
The usability rating shows the user that \ac{lbp}\@\xspace makes \acp{mrf} a bit harder to use.
\paragraph{Reliability}
When provided with an exact inference algorithm and a convex optimization problem, \acp{mrf} are guaranteed to recover the correct distribution, with an error bound that decreases with the number of training instances \citep{Bradley/Guestrin/2012a}.
This bound, which we call \emph{distribution recovery check}, can be verified through software, resulting in an \emph{A} rating.
Since the likelihood as a loss function exhibits strong statistical guarantees, like \emph{consistency}, \emph{unbiasedness} and \emph{efficiency}, we also award it with \emph{A}.
Given a loss that is convex w.r.t.\@\xspace the parameters, a gradient descent optimizer is able to \emph{recover the global optimum}, resulting in an \emph{A} rating.
This is also verifiable for the dynamic properties of a specific implementation via software, the so-called \emph{convergence check}.
However, all this reliability is only given if the chosen inference algorithm provides exact results for the gradient update.
We reflect this restriction by assigning \emph{D} and \emph{A} ratings to \ac{lbp}\@\xspace and \ac{jt}\@\xspace respectively, as the former does not even provide a bound on the approximation error\footnote{Under certain, special conditions \ac{lbp}\@\xspace may have guarantees, but this is generally not the case~\citep{Ihler/etal/2005a}.}.
Our ratings clearly show the user that \ac{lbp}\@\xspace makes \acp{mrf} a lot less reliable.
\input{tab_inference_complexity}
\paragraph{Runtime and Memory}
The theoretical runtime and memory complexity classes depend mostly on the chosen inference algorithm component, while the general \ac{mrf}\@\xspace model class leaves the runtime unspecified.
Their memory demand for parametrization depends on the data complexity in two ways:
It grows linearly with the number of features and quadratically with the number of discrete states, resulting in a \emph{B} rating.
Due to only requiring a plain mathematical function evaluation, the likelihood loss acts neutrally on both categories.
The gradient descent optimizer itself is resource efficient, assuming we are given the gradients of our loss function.
Both memory and runtime complexity scale linearly with the model dimension, resulting in an \emph{A} rating for both categories.
The resource requirements of our inference algorithms are shown in \autoref{tab:inference}~\citep{Piatkowski/2018a}.
We rated the runtime of \ac{jt} inference with \emph{D} due to scaling exponentially with the junction tree's width\footnote{The junction tree~\citep{Huang/Darwiche/1996a} is an auxiliary graph which needs to be derived in order to run the complete inference algorithm.} $w$.
Memory receives the same rating, because the underlying data structures also grow exponentially with tree width.
Thus, users might find \ac{jt} inference feasible for sparse independence structures, while for dense graphs its runtime and memory demand may exceed available resources.
\ac{lbp}\@\xspace on the other hand works efficiently on general graphs and even provides exact solutions for trees and polytrees.
It is rated \emph{B} due to scaling quadratically in the number of states of the largest state space $\mathcal{X}_{\max}$, and linearly in the number of edges $\lvert E \rvert$ in the graph, the number of iterations $I$, and the size of the largest neighborhood $\mathcal{N}_{\max}$ in $G$.
The memory consumption for \ac{lbp}\@\xspace is rated \emph{A} as we only have to store intermediate results, whose memory demands scale linearly in the number of states per clique.
Choosing between \ac{jt}\@\xspace and \ac{lbp}\@\xspace inference, the user can trade exactness and strong guarantees for better runtime and memory.
This is illustrated in \autoref{fig:exp_tradeoff}, showing runtime and memory consumption for \acp{mrf} with increasing number of vertices.
\subsubsection{Testing the implementation}
\label{sssec:ImpPart}
To test the dynamic properties of specific \ac{mrf} implementations and derive the care label's implementation segments, we implemented a certification suite, as described in \autoref{ssec:certificationsuite}.
It draws the theory-based static bounds and ratings from the expert knowledge database, performs reliability bound checks, investigates the implementation's behavior in terms of runtime, memory and energy consumption, and outputs the complete care label.
\paragraph{Experimental setup}
In order to run our bound checks (cf. \autoref{ssec:boundchecks}) we generated synthetic data sets by defining specific distributions and sampling from them.
Having access to both the sampled data and their underlying distribution parameters allows for assessing whether reliability checks pass.
As graph structure we chose a grid graph, with binary state space and increasing grid size from $2 \times 2$ up to $15\times 15$.
This resulted in data sets of different sizes, which have been utilized to perform the resource bound checks.
For running the \ac{mrf}\@\xspace logic, we utilized the \texttt{pxpy}\@\xspace\footnote{\url{https://pypi.org/project/pxpy/}} library, which implements \ac{jt}\@\xspace and \ac{lbp}\@\xspace.
Our certification suite measures runtime, memory demand\footnote{\url{https://github.com/giampaolo/psutil}}, and CPU energy consumption\footnote{\url{https://github.com/wkatsak/py-rapl}} via established tools, similar to~\cite{Henderson/etal/2020a}.
All experiments were performed on a workstation equipped with an Intel(R) Xeon(R) W-2155 CPU, 64 GB RAM and Ubuntu 20.04.1 LTS as operating system.
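To make the measurement procedure concrete, the following sketch shows one profiled run; it assumes a Linux system with Intel RAPL energy counters (the same interface that \texttt{py-rapl} wraps), all names are illustrative, and a real suite would additionally track peak memory instead of only the final resident set size.
\begin{verbatim}
import time, pathlib, psutil

RAPL = pathlib.Path('/sys/class/powercap/intel-rapl:0/energy_uj')

def profile(run, *args):
    """Wall-clock runtime, resident memory, and CPU energy of one run."""
    e0 = int(RAPL.read_text()) if RAPL.exists() else None
    t0 = time.perf_counter()
    run(*args)                                   # the experiment
    runtime_s = time.perf_counter() - t0
    memory_mb = psutil.Process().memory_info().rss / 2**20
    energy_ws = ((int(RAPL.read_text()) - e0) / 1e6   # microjoules -> Ws
                 if e0 is not None else None)
    return runtime_s, memory_mb, energy_ws
\end{verbatim}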
\paragraph{Reliability bound checks}
To verify the reliability of the implementation against the static characteristics described in \autoref{sssec:theoPart}, which are especially important for safety-critical applications, we perform two exemplary bound checks:
The \emph{distribution recovery check}~\citep{Hoeffding/63a} and the \emph{likelihood convergence check}.
The first check is performed by comparing the true marginal probabilities $\mu^*$ to the marginals $\hat{\mu}$ computed by the provided implementation for the true parameters $\theta^*$.
To this end, clique-wise KL divergences are computed and reduced to their maximum value.
If this value falls below a given threshold, the check passes.
The second check verifies if the given implementation fits the data.
To this end, we run an optimization procedure with the true structure and our samples, checking convergence based on the gradient norm.
If the norm falls below a given threshold, the check passes.
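A minimal sketch of both checks could look as follows; the tolerance values and the dictionary-based data layout are illustrative assumptions, not the exact code of our suite.
\begin{verbatim}
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def distribution_recovery_check(mu_true, mu_hat, tol=1e-2):
    """mu_true/mu_hat map each clique to its marginal table;
    pass iff the worst clique-wise KL divergence is small."""
    worst = max(kl(mu_true[c], mu_hat[c]) for c in mu_true)
    return worst <= tol

def convergence_check(gradient_norms, tol=1e-3):
    """Pass iff optimization drove the gradient norm below tol."""
    return min(gradient_norms) <= tol
\end{verbatim}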
The investigated \ac{jt}\@\xspace implementation was able to pass both checks.
Recall that \ac{lbp}\@\xspace inference is not exact as depicted in \autoref{fig:motiv_example} and received a \emph{D} for reliability.
Even though the \ac{lbp}\@\xspace algorithm does not exhibit theoretical guarantees, it was able to pass the reliability tests for some data sets.
Still, it failed for most, therefore the implementation did not receive the reliability checkmark.
\begin{figure}[t]
\centering
\resizebox{!}{0.48\textwidth}{
\includegraphics[width=\textwidth]{Fig6.pdf}
}
\hspace{1cm}
\caption{Deviations of \ac{lbp}\@\xspace marginals compared to the true marginals computed with the exact \ac{jt}\@\xspace algorithm over five runs.
We used a random Erd\H{o}s--R\'enyi graph with 20 nodes and an edge probability of $0.25$.
Each dot represents a single marginal probability, with all marginals ideally being located on the diagonal (LBP marginal equal to JT marginal).}
\label{fig:motiv_example}
\end{figure}
\paragraph{Runtime and memory bound checks}
Next, we evaluate whether the performance in the given execution environment complies with the identified complexity classes.
We depict the measured resource consumption for both \ac{jt}\@\xspace and \ac{lbp}\@\xspace configurations in \autoref{fig:exp_tradeoff}.
As expected, it shows that both memory usage and runtime increase with the number of vertices for \ac{jt}\@\xspace.
For our automatic checks, we fit different linear regression models to the resource measurements (i.e.\@\xspace one model for linear, quadratic, cubic, etc.\@\xspace complexity).
We also cross-validated this assessment by subdividing the measurements into several independent sets, and fitting the regression for each group.
In our experiments, those results corresponded to the identified theoretical complexity of the tested \ac{mrf} configurations, thus all methods receive memory and runtime checkmarks.
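A minimal sketch of such a check is given below; the candidate classes, the model form $y \approx a \, g(n) + b$, and the residual comparison are illustrative simplifications of the procedure described above.
\begin{verbatim}
import numpy as np

def complexity_check(sizes, values, expected='quadratic'):
    """Pass iff the expected complexity class fits the
    measurements best; a fuller version cross-validates by
    refitting on independent subsets of the measurements."""
    n, y = np.asarray(sizes, float), np.asarray(values, float)
    candidates = {'linear': n, 'quadratic': n**2, 'cubic': n**3}
    residual = {}
    for name, g in candidates.items():
        A = np.vstack([g, np.ones_like(g)]).T     # y ~ a*g(n) + b
        _, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual[name] = res[0] if res.size else 0.0
    return min(residual, key=residual.get) == expected
\end{verbatim}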
\begin{figure}[t]
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig7a.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig7b.pdf}
\end{minipage}
\caption{Comparison of running \ac{jt} and \ac{lbp} inference in terms of memory consumption and runtime for increasing number of vertices, i.e.\@\xspace model complexity.
Results are averaged over ten experiment repetitions (standard deviation $\sigma < 0.5$).}
\label{fig:exp_tradeoff}
\end{figure}
\paragraph{Resource consumption testing}
For specific resource measurements, we chose a medium-sized data set stemming from a grid-structured graph with $14 \times 14$ vertices and binary state space.
The results are displayed in \autoref{tbl:resource_consumption}.
They confirm that the \ac{jt}\@\xspace configuration requires much more runtime and energy than \ac{lbp}\@\xspace.
The hardware platform was internally measured to consume between \SI{20}{\watt} and \SI{43}{\watt} on average per experiment.
To obtain the complete energy consumption, we multiplied the power by the runtime.
The badge colors in the implementation part of the care labels are directly derived from those measurements.
\begin{table}[!htb]
\centering
\begin{minipage}{0.48\linewidth}
\input{tab_measured_values}
\end{minipage}
\caption{
Comparison of the two methods on a data set with a grid independence structure sized $14 \times 14$.
The experiments were repeated ten times with a standard deviation of $\sigma < 0.5$.}
\label{tbl:resource_consumption}
\end{table}
Our experimental findings show the usefulness of our care label concept, compacting the extensive theory of \acp{pgm}, while still providing useful information that is otherwise not accessible for users.
\section{Conclusion}
\label{sec:conclusion}
With state-of-the-art systems, ML user requirements can differ vastly.
Certain users might know the theory and have an intuitive understanding of properties and guarantees.
Often, however, users struggle trying to understand the intricacies of different methods.
There are approaches that discuss how trust in ML can be increased, but they often fail to connect theory and practice, or are too abstract and inaccessible.
We address these issues via our care labels to inform a broad range of users and ML customers.
Our labels identify important theoretical properties that are highly relevant for safety-critical or resource-constrained use cases.
We test implementations against theory by performing bound checks for reliability and measuring resource consumption.
All this information is neatly displayed in our care label design, which is easy-to-understand for both experts and customers without a scientific background.
We demonstrated that our concept is practical for a broad class of probabilistic ML models.
The resulting labels allow users to assess implications of using \ac{mrf} variants with different components.
The extensive amount of theory behind \acp{pgm} remains invisible to the user.
To extend our concept, it needs to be implemented for a wider range of methods.
\acp{mrf} are a good starting point due to extensive theoretical foundations and high customizability.
By applying our concept to more methods, we will identify aspects that can be improved.
For our experimental evaluation, we implemented a verification suite, expert knowledge database, and care label design (cf. \autoref{fig:framework}).
They allow generating care labels for \acp{mrf}.
Subsequent to this work, we intend to refine these components and finally publish them for ML practitioners.
We invite fellow researchers to test our concept and reproduce results, so we can extend it for more use cases.
We are aware that our proposed concept leaves some room for interpretation -- indeed this is intended.
In order to make care labels a useful certificate, we argue that refining the details is a task for the community instead of a closed team of researchers.
More criteria and badges will be introduced, and rules for combining the components' ratings will be extended.
We contributed a first draft that requires feedback by more field experts to really make care labels an acknowledged certificate.
The developers of new methods know their respective properties best and have a sound understanding of how implementations can be tested.
We explicitly call the research community to action:
Join us in the endeavour of making methods and applications more responsible and trustworthy, by building a bridge between researchers and users of ML.
\begin{acknowledgements}
This research has been funded by the Federal Ministry of Education and Research of Germany as part of the competence center for machine learning ML2R (01IS18038A/B). Part of the work on this paper has been supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB 876 ``Providing Information by Resource-Constrained Analysis'' (project number 124020371), project C3.
\end{acknowledgements}
\section{Introduction}
It is known that the energy in a three-dimensional homogeneous and isotropic
turbulent flow cascades forward, from the forcing scales to the dissipative
scales~\cite{k41}. When the Reynolds number is high enough, an intermediate range
of scales develops where the energy flux is constant~\cite{frisch}. However,
systems like rotating flows~\cite{mininni,deusebio}, flows confined along one
direction~\cite{celani} and flows of conducting materials~\cite{brandenburg}
show an inverse energy transfer toward larger and larger scales. As a result,
it is still not clear which internal dynamical mechanisms set
the direction of the energy flux in fully developed turbulence. In this paper,
we present a series of numerical experiments done on a modified version of the
three-dimensional Navier-Stokes equations where a subset of Fourier modes has
been removed. There are many different ways to achieve a mode reduction, from
the usual Galerkin truncation of all modes with $|{{\bm k}}| >k_{\rm max}$ to more
refined self-similar truncation done on a fractal-Fourier set~\cite{fractal}.
Here, we are interested in further exploring the possibility of reducing modes on
the basis of their helicity content~\cite{waleffe,constantin,biferale2013}. Helicity,
together with energy, is an inviscid invariant of three-dimensional
Navier-Stokes equation and it is known to play a key role both for
hydrodynamical and magnetohydrodynamical
systems~\cite{biferale-jstat,moffatt69,moffatt92,brissaud,laing,ditlevsen,holm,biferaleh,chen,chen2,dubrulle,biskamp,baer,mininni2010}.
In previous works~\cite{biferale2013,biferale2012} we have shown that by
constraining the velocity field to develop fluctuations with only one sign of
helicity, the energy flows backward: from the forced scale to the largest scale
in the system, without reaching a steady state if not confined on a finite box
or without the addition of external friction. More recently, we
have shown that the inverse cascade regime, observed for the fully helical
case, is highly fragile~\cite{sahoo2015}: it is enough to have a tiny number of helical modes
with the opposite sign distributed uniformly in Fourier space to revert the
system to a forward cascade regime. Such a conclusion is also
supported by arguments based on absolute
equilibrium~\cite{herbert,kraichnan}. In this paper we explore the case
when all Fourier modes have the same helicity (say positive) except for a small subset
possessing also the opposite (negative) helicity, the latter being restricted to
a thin shell in Fourier space. The goal is to make a further step
toward a better understanding of the dynamics of energy transfer in
Navier-Stokes equations, triad-by-triad. The paper is organized as follows: In
Sec. 2 we briefly describe helical decimation and write the Navier-Stokes
equations for helical Fourier modes; In Sec. 3 we discuss the results from our
direct numerical simulations for two different series of computations, either
when the negative helical modes are confined to a shell of wavenumbers larger
than the forcing scale or in the opposite case. Conclusions can be found in Sec.
4.
\section{Helically decomposed Navier-Stokes equations}
\begin{figure*}[!htb]
\center
\includegraphics[scale=0.43]{figures/triads-schematic}
\caption{(Color online) Schematic presentation of triads~\cite{waleffe}:
Triads where the two largest wavenumbers have the same sign of helicity are
responsible for a reverse transfer of energy and are called of R-type. They
include triads of Class I and of Class II. Triads where the two largest
wavenumbers have opposite sign of helicity are responsible for forward transfer
of energy and are called of F-type. They include triads in Class III and
Class IV. For R-type (F-type) the Fourier mode with the medium (smallest)
wavenumber is unstable and transfers energy to the other two
Fourier modes. The arrows (green for reverse and red for forward) show
direction of energy transfer. }
\label{fig1}
\end{figure*}
The velocity field in a periodic domain is expressed by the Fourier series
\begin{align}
{\bm u}({\bm x}) = \sum_{{\bm k}} {\hat {\bm u}}_{\bm k} e^{i{\bm k}\cdot{\bm x}},
\end{align}
where the modes ${\hat {\bm u}}_{\bm k}$ satisfy the incompressibility condition
${\bm k}\cdot{\hat {\bm u}}_{\bm k}=0$ and can be exactly decomposed in terms of the helically
polarized waves as~\cite{waleffe,constantin}
\begin{equation}
\label{eq:dec}
{\hat {\bm u}}_{\bm k} = u^+_{\bm k} {\bm h}^+_{\bm k} +u^-_{\bm k} {\bm h}^-_{\bm k}.
\end{equation}
The eigenvectors of the curl ${\bm h}^\pm_{\bm k}$ are given by
\begin{align}
{\bm h}^\pm_{\bm k} = \hat{\nu}_{\bm k} \times
\hat{k} \pm i \hat{\nu}_{\bm k},
\end{align}
so that $i {{\bm k}} \times {\bm h}^\pm_{\bm k} = \pm k {\bm h}^\pm_{\bm k}$, where $\hat{\nu}_{\bm k}$ is a
unit vector orthogonal to ${{\bm k}}$ such that $\hat{\nu}_{\bm k} = -
\hat{\nu}_{-{\bm k}}$ to enforce reality of the field. One can choose for example~\cite{waleffe}:
\begin{align}
\hat{\nu}_{{\bm k}} = \frac{{{\bm z}} \times {{\bm k}}}{ || {{\bm z}} \times {{\bm k}} ||},
\end{align}
where ${\bm z}$ is any arbitrary vector. The eigenvectors ${\bm h}^\pm_{\bm k}$
satisfy the orthogonality condition ${\bm h}^{s}_{\bm k}\cdot{\bm
h}^{t*}_{\bm k}=2\delta_{st}$, where $s,t = \pm$ denote the signs of the helicity
and $*$ is for the complex conjugate.
We define a projector
\begin{align}
\label{eq:poperator}
{\mathcal P}^\pm_{{\bm k}} \equiv \frac {{\bm h}^\pm_{\bm k} \otimes {\bm h}^{\pm*}_{\bm k}} {{\bm h}^{\pm*}_{\bm k} \cdot {\bm h}^\pm_{\bm k}},
\end{align}
which projects the Fourier modes of the velocity on eigenvectors ${\bm h}^\pm_{\bm k}$
as
\begin{align}
\label{eq:projection}
{\mathcal P}^\pm_{{\bm k}} {\hat {\bm u}}_{\bm k} = {\hat {\bm u}}^\pm_{\bm k} = u^\pm_{\bm k}{\bm h}^\pm_{\bm k}.
\end{align}
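For concreteness, the basis and the projection \eqref{eq:projection} are straightforward to realize numerically. The following Python sketch (a minimal illustration of the formulas above, not the code used for the simulations reported below) constructs ${\bm h}^\pm_{\bm k}$ for a single wavevector and verifies the curl eigenvalue relation and the orthogonality condition:
\begin{verbatim}
import numpy as np

def helical_basis(k, z=np.array([0.0, 0.0, 1.0])):
    # h^pm_k = nu_k x khat pm i nu_k, with nu_k = z x k / |z x k|
    k = np.asarray(k, dtype=float)
    khat = k / np.linalg.norm(k)
    nu = np.cross(z, k)
    nu = nu / np.linalg.norm(nu)
    return np.cross(nu, khat) + 1j * nu, np.cross(nu, khat) - 1j * nu

k = np.array([1.0, 2.0, 2.0])
hp, hm = helical_basis(k)
# curl eigenvalue relation: i k x h^pm = pm |k| h^pm
assert np.allclose(1j * np.cross(k, hp), np.linalg.norm(k) * hp)
assert np.allclose(1j * np.cross(k, hm), -np.linalg.norm(k) * hm)
# orthogonality: h^s . conj(h^t) = 2 delta_st
assert np.isclose(np.dot(hp, np.conj(hp)), 2.0)
assert np.isclose(np.dot(hp, np.conj(hm)), 0.0)
\end{verbatim}
Note that $\hat{\nu}_{\bm k}$, and hence the basis, is undefined for ${\bm k}$ parallel to ${\bm z}$; such modes must be treated separately in any implementation.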
The Navier-Stokes equations can be decomposed in terms of the evolution of
velocities with positive or negative sign of helicity as follows:
\begin{equation}
\label{eq:NS}
\frac{\partial{\bm u}^\pm({\bm x})}{\partial t} + {\cal D}^\pm {\rm \bf N}[{\bm u}({\bm x}),{\bm u}({\bm x})] = \nu\nabla^2{\bm u}^\pm({\bm x})+{\bf f}^{\pm},
\end{equation}
where the operator ${\cal D}^{\pm}({\bm u})$ acts on a generic three-dimensional
vector field by projecting all Fourier components on ${\bm h}^\pm_{\bm k}$:
\begin{equation}
\label{eq:projector}
{\cal D}^{\pm}{{\bm u}}({\bm x}) \equiv \sum_{{\bm k}} e^{i{\bm k}\cdot{\bm x}}\,{\mathcal P}^{\pm}_{{\bm k}} {{\hat {\bm u}}_{\bm k}},
\end{equation}
and ${\rm \bf N}[{\bm u}({\bm x}),{\bm u}({\bm x})]$ is the nonlinear term of the Navier-Stokes equations~\cite{biferale2012}.
The total energy and the total helicity can also be easily expressed in terms of the helical modes:
\begin{align}
E &= \int d^3 x \, |{\bm u}({\bm x})|^2 = \sum_{{\bm k}} |u^+_{\bm k}|^2 + |u^-_{\bm k}|^2,\\
H &= \int d^3 x \, {\bm u}({\bm x}) \cdot {\bm \omega}({\bm x}) = \sum_{{\bm k}} k(|u^+_{\bm k}|^2 - |u^-_{\bm k}|^2),
\end{align}
where ${\bm \omega}({\bm x})=\nabla\times{\bm u}({\bm x})$ is the vorticity. From the above
expression one can introduce the energy spectrum for positive and for negative
helical modes~\cite{chen,chen2}:
\begin{align}
E^+(k) = \sum_{|{\bm k}| \in [k,k+1]} |u^+_{\bm k}|^2; \\
E^-(k) = \sum_{|{\bm k}| \in [k,k+1]} |u^-_{\bm k}|^2.
\end{align}
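As an illustration of how these diagnostics can be computed in practice, the following self-contained Python sketch extracts $E^\pm(k)$ from a real velocity field sampled on a periodic grid, applying the projection \eqref{eq:projection} mode by mode (this is our own schematic implementation, not the production code of the runs below; modes with ${\bm k}\parallel{\bm z}$ or ${\bm k}=0$ are excluded, as $\hat{\nu}_{\bm k}$ is undefined there):
\begin{verbatim}
import numpy as np

def helical_spectra(u):
    # u: real velocity field of shape (3, N, N, N) on [0, 2*pi)^3
    N = u.shape[-1]
    uk = np.fft.fftn(u, axes=(1, 2, 3)) / N**3       # Fourier coefficients
    k1 = np.fft.fftfreq(N, 1.0 / N)                  # integer wavenumbers
    K = np.array(np.meshgrid(k1, k1, k1, indexing="ij"))
    Z = np.zeros_like(K); Z[2] = 1.0                 # the fixed vector z
    kmag = np.sqrt((K**2).sum(0))
    nu = np.cross(Z, K, axis=0)
    nn = np.sqrt((nu**2).sum(0))
    ok = nn > 1e-12                                  # excludes k || z and k = 0
    nu = nu / np.where(ok, nn, 1.0)
    hp = np.cross(nu, K / np.where(kmag > 0, kmag, 1.0), axis=0) + 1j * nu
    up = 0.5 * (uk * np.conj(hp)).sum(0)             # u^+_k = u_k . h^{+*} / 2
    um = 0.5 * (uk * hp).sum(0)                      # u^-_k, using conj(h^-) = h^+
    shell = kmag.astype(int).ravel()
    Ep = np.bincount(shell, weights=(np.abs(up)**2 * ok).ravel())
    Em = np.bincount(shell, weights=(np.abs(um)**2 * ok).ravel())
    return Ep, Em                                    # Ep[k] ~ E^+(k), Em[k] ~ E^-(k)
\end{verbatim}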
Plugging the decomposition (\ref{eq:dec}) into the Navier-Stokes equations (\ref{eq:NS}) it is easy to
realize that the nonlinear term consists of triadic interactions with eight
(four for the evolution of $u^+$ and four for the evolution of $u^-$) possible
helical combinations of the generic modes $u^{s_{\bm k}}_{\bm k}$, $u^{s_{\bm p}}_{\bm p}$,
$u^{s_{\bm q}}_{\bm q}$ forming an interacting triad, i.e., ${\bm k}+{\bm p}+{\bm q}=0$, for
$s_{\bm k}=\pm$, $s_{\bm p}=\pm$, $s_{\bm q}=\pm$~\cite{waleffe} (see fig.~\ref{fig1} where
for simplicity we assume that $k\le p \le q$). The four classes of triads are
classified as follows: Class I, containing triads formed with all wavenumbers
having the same sign of helicity, i.e., $(+, +, +)$; Class II, made of triads
where the two smallest wavenumbers have opposite sign of helicity and the two
largest wavenumbers have the same sign of helicity, i.e., $(-, +, +)$; Class III,
containing triads where the two smallest wavenumbers have the same sign of
helicity and the two largest wavenumbers have an opposite sign of helicity,
i.e., $(+, +, -)$; and Class IV, made of triads where the two smallest
wavenumbers and the two largest wavenumbers have opposite sign of helicity,
i.e., $(+, -, +)$ (see fig.~\ref{fig1}). In~\cite{waleffe}, studying the
instability of the energy exchange among modes of each single triad, it was
argued that the triads in Class III and Class IV transfer energy from the smallest
wavenumber to the other two wavenumbers and are responsible for the forward
cascade of energy. For the triads in Class I and Class II, by contrast, the Fourier mode with the
medium wavenumber transfers energy to the other two Fourier modes. These sets
of triads might then contribute to both forward and backward energy transfers.
The presence of competing interactions might suggest that the direction of the
energy transfer mechanism is not set {\it a priori}. Depending on the empirical
realization (the forcing scheme, the boundary conditions, the coupling with
other active dynamical fields as for the case of conducting
flows~\cite{celani,brandenburg,alexakis14,alexakis15}) different directions of the energy transfer could
develop. As said, in the whole system where all triads are present and with
a neutral homogeneous and isotropic external forcing, energy is observed to be
transferred forward: from large to small scales. On the other hand, in a system in
which only modes of one sign of helicity are present, i.e., where the dynamics is
restricted to interacting triads with $s_{\bm k}=s_{\bm p}=s_{\bm q}$ (Class I), energy
cascades from the small scales to the large scales~\cite{biferale2012}. This was
traced back to the fact that helicity becomes a sign-definite quantity for such a
subset of interactions. In a recent work it was observed~\cite{sahoo2015} that
the presence of a few percent of modes with opposite sign of helicity changes the
direction of energy transfer in a singular manner: a few modes carrying both signs
of helicity at all scales, even though one type is a small fraction of the other,
allow the formation of triads whose two largest wavenumbers have opposite
signs of helicity, which efficiently transfer energy to the small scales.
\begin{figure}[!htb]
\includegraphics[scale=0.6]{figures/en_total}\\
\includegraphics[scale=0.6]{figures/en_plus_inset}\\
\includegraphics[scale=0.6]{figures/en_minus_inset}
\caption{Time evolution of the energy based on (a) all modes, (b) only
positive helical modes, and (c) only negative helical modes, for the three
cases where $k_f\in [10,12]$ and $u^-_{\bm k} = 0$ except wavenumbers around
$k_m=2,4,6$. In the insets of panel (b) and (c) we show an enlargement of the
initial period. Notice that the dynamics is first dominated by the absorption of
energy by the positive helical modes at low wavenumbers and later switches
to transferring energy only to the negative ones. In panels (a) and (b) we also
show the results for the growth of energy when only positive helical modes are
present. In the latter case the growth of the energy in the positive helical
modes is not stopped. }
\label{fig2}
\end{figure}
In this work we attempt to control the energy transfer mechanism in the presence
of two different sets of triads (Class II and Class IV). To do that we remove
the negative helical modes for all wavenumbers, except for those falling in one
shell of Fourier modes $|{{\bm k}}| \in D_m$ with $ D_m \equiv \{ {{\bm k}} : |{{\bm k}}|
\in [k_m,k_{m}+1] \}$. We consider two cases (i) $k_m <k_f$ and (ii) $ k_m >k_f$,
where $k_f$ is the typical wavenumber where we apply the external forcing
mechanism. To this end we define an operator ${\cal D}_m$ which projects the
velocity on ${\bm h}^+_{\bm k}$ for all wavenumbers outside the set
$D_m$:
\begin{equation}
\label{eq:projv}
{\bm u}'({\bm x}) \equiv {\cal D}_m {{\bm u}}({\bm x}) \equiv \sum_{{\bm k}} e^{i{\bm k}\cdot{\bm x}}\, [(1-\gamma_{{\bm k}}) + \gamma_{{\bm k}} {\mathcal P}^+_{{\bm k}}] {{\hat {\bm u}}_{\bm k}},
\end{equation}
where $\gamma_{{\bm k}}=0$ for ${\bm k} \in D_m$ and $\gamma_{{\bm k}}=1$ otherwise.
The decimated three-dimensional Navier-Stokes equations are given by:
\begin{equation}
\label{eq:ns+++}
\partial_t {\bm u}' = {\cal D}_m [- ({\bm u}' \cdot \nabla) {\bm u}' -{\bm \nabla} p'] +\nu \Delta {\bm u}' + {\bf f}',
\end{equation}
where $p'$ is the pressure, $\nu $ is the viscosity and ${\bf f}'$ is the external forcing (see later for details).
Although the nonlinear terms of the decimated Navier-Stokes equations
do not have Lagrangian properties~\cite{moffatt14}, it can still be shown that
both energy
\begin{align}
E &= \sum_{{\bm k}} ( |u^+_{\bm k}|^2 + (1-\gamma_{{\bm k}})|u^-_{\bm k}|^2), \label{eq:ealpha}
\end{align}
and helicity
\begin{align}
H &= \sum_{{\bm k}} k( |u^+_{\bm k}|^2 - (1-\gamma_{{\bm k}})|u^-_{\bm k}|^2), \label{eq:halpha}
\end{align}
are invariants of eq.~(\ref{eq:ns+++}) in the inviscid and unforced limit.
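A mode-by-mode sketch of the decimation operator (our own Python illustration of eq.~(\ref{eq:projv}), with the same conventions as in the snippets above) reads:
\begin{verbatim}
import numpy as np

def decimate_mode(uk, k, km, z=np.array([0.0, 0.0, 1.0])):
    # gamma_k = 0 inside the shell D_m: keep both helical components
    kmag = np.linalg.norm(k)
    if km <= kmag <= km + 1.0:
        return uk
    # gamma_k = 1 outside D_m: apply P^+, i.e. P^+ u = (u . h^{+*} / 2) h^+
    nu = np.cross(z, k)
    nu = nu / np.linalg.norm(nu)
    hp = np.cross(nu, k / kmag) + 1j * nu
    return 0.5 * np.dot(uk, np.conj(hp)) * hp
\end{verbatim}
The function acts on a single Fourier amplitude ${\hat {\bm u}}_{\bm k}$; in an actual solver the same operation is applied to every retained mode at each evaluation of the right-hand side.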
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c | c | c | c | c |}
\hline
RUN & $N$ & $L$ & $k_f$ & $k_m$ & $\nu$ & $\delta t$ & $F_0$ \\
\hline
R1 & $512$ & $2\pi$ & $[10,12]$ & $6$ & $0.002$ & $0.0001$ & $5$ \\
R2 & $512$ & $2\pi$ & $[10,12]$ & $4$ & $0.002$ & $0.0001$ & $5$ \\
R3 & $512$ & $2\pi$ & $[10,12]$ & $2$ & $0.002$ & $0.0001$ & $5$ \\
R4 & $512$ & $2\pi$ & $[4,6]$ & $10$ & $0.001$ & $0.0001$ & $5$ \\
R5 & $512$ & $2\pi$ & $[4,6]$ & $16$ & $0.001$ & $0.0001$ & $5$ \\
\hline
\end{tabular}
\end{center}
\caption{$N$: number of collocation points along each axis. $L$: size of
the simulation box. $k_f$: range of forced wavenumbers. $k_m$:
wavenumber of the shell with also negative helical modes. $\nu$: kinematic viscosity.
$\delta t$: time step. $F_0$: forcing amplitude.}
\label{table1}
\end{table}
\begin{figure*}[!htb]
\includegraphics[scale=0.7]{figures/evolve_espectra_except2_kf10}
\includegraphics[scale=0.7]{figures/evolve_ep_spectra_except2_kf10}\\
\includegraphics[scale=0.7]{figures/evolve_eflux_except2_kf10}
\includegraphics[scale=0.7]{figures/check_flux_except2_kf10}
\caption{(a) Log-log plot of total energy spectra at different times. (b) The
same as (a) for the positive helical modes spectrum. The mismatch between the
two spectra for $k=k_m$ is due to the energy of the negative helical modes. We
have drawn a dashed line with slope of -5/3 to highlight the possible growth of
inverse cascade spectrum when there is a large inertial range of scales. (c)
Fluxes of energy (see definition (\ref{eq:flux})).
(d) Comparison of energy flux ${\rm \Pi_E}(k)$ and dissipation
${\rm D}(k)$ (see text) at the time when the simulation is stopped ($t\sim32$,
see fig.~\ref{fig2}). The forced wavenumbers at $k_f\in [10,12]$ are marked
with a light grey band, while the wavenumbers with negative helical modes
around $k_m=2$ are in dark grey.}
\label{fig3}
\end{figure*}
\begin{figure*}[!htb]
\includegraphics[scale=0.7]{figures/evolve_espectra_except6_kf10}
\includegraphics[scale=0.7]{figures/evolve_ep_spectra_except6_kf10}\\
\includegraphics[scale=0.7]{figures/evolve_eflux_except6_kf10}
\includegraphics[scale=0.7]{figures/check_flux_except6_kf10}
\caption{The same as fig.~\ref{fig3} but for the case when $k_m=6$,
except for (d) where energy flux and dissipation are compared at $t\sim40$ when
the simulation is stopped (see fig.~\ref{fig2}).}
\label{fig4}
\end{figure*}
\section{Direct Numerical Simulations}
A pseudo-spectral spatial method is adopted to solve
eqs.~(\ref{eq:ns+++}), fully dealiased with the two-thirds rule;
time stepping is implemented with a second-order Adams-Bashforth
scheme. We performed different runs at a resolution of up to $512^3$ collocation points, changing the forced wavenumbers and the shell of modes where negative helical waves are retained.
We applied a random Gaussian force with
\[\langle f_i({\bm k},t) f_j({\bm q},t')\rangle = F(k) \delta({\bm k}-{\bm q}) \delta(t-t')
Q_{ij}({\bm k}),\] where the projector $Q_{ij}({\bm k})$ ensures incompressibility and
$F(k) = F_0 k^{-3}$; the forcing amplitude $F_0$ is nonzero
only for $ k_f \in [k_{\rm min},k_{\rm max}]$.
Table~\ref{table1} lists the details of the various simulations. Moreover, we always projected the forcing on its positive helical components in order to ensure maximal helicity
injection.
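One way to realize a single draw of such a force, for one Fourier mode, is sketched below (a schematic Python illustration, not the production code; the white-in-time normalization $\delta(t-t')$ is left to the time integrator). Since ${\bm h}^+_{\bm k}\perp{\bm k}$, the incompressibility projector $Q_{ij}$ and maximal helicity injection are automatic once the force is drawn along ${\bm h}^+_{\bm k}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def helical_force_mode(k, F0, kf_min, kf_max, z=np.array([0.0, 0.0, 1.0])):
    # F(k) = F0 * k^-3 inside the forced band, zero elsewhere
    kmag = np.linalg.norm(k)
    if not (kf_min <= kmag <= kf_max):
        return np.zeros(3, dtype=complex)
    nu = np.cross(z, k)
    nu = nu / np.linalg.norm(nu)
    hp = np.cross(nu, k / kmag) + 1j * nu      # positive helical direction
    xi = rng.normal() + 1j * rng.normal()      # complex Gaussian amplitude
    return np.sqrt(F0 * kmag**-3) * xi * hp
\end{verbatim}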
We carried out two sets of simulations: first we retained the
negative helical modes in a shell of wavenumbers $\sim k_m$ smaller than the forced
wavenumbers $k_f$, while in the second case
we retained the negative helical modes at
$k_m>k_f$. In the first set, negative helical modes exist only at
wavenumbers smaller than the forcing mechanisms so effectively we add triads of Class II to the triads of Class I.
In the second set, negative helical modes exist at higher wavenumbers,
resulting in the addition mainly of triads of Class III and Class IV.
\begin{figure}[!htb]
\includegraphics[scale=0.7]{figures/energy_except16_kf4}
\caption{Time evolution of total energy $E(t)$, energy of positive
helical modes $E^+(t)$, and energy of negative helical modes $E^-(t)$ when $k_f\in [4,6]$ and $k_m = 16$.}
\label{fig5}
\end{figure}
\begin{figure*}[!htb]
\includegraphics[scale=0.7]{figures/evolve_espectra_except16_kf4}
\includegraphics[scale=0.7]{figures/evolve_ep_spectra_except16_kf4}\\
\includegraphics[scale=0.7]{figures/evolve_eflux_except16_kf4}
\includegraphics[scale=0.7]{figures/check_flux_except16_kf4}
\caption{The same as fig.~\ref{fig3} but for the case when $k_f\in [4,6]$ and $k_m=16$,
except for (d) where energy flux and dissipation are compared at $t\sim10$ when
the simulation is stopped (see fig.~\ref{fig5}).}
\label{fig6}
\end{figure*}
\subsection{Energy transfer for $k_m<k_f$}
In this set of simulations we keep $k_f\in[10,12]$ and change the value of
$k_m$ to $2$, $4$ and $6$. Figure~\ref{fig2}(a) shows the evolution of energy
in the three cases. We always observe a steady inverse energy cascade which
reaches a statistically steady state, except for $k_m=2$ where the run was not
long enough to stabilize the system. Notice that we never introduced
an external energy sink at large scales. Therefore, a statistically stable
system means that a stable large scale helical condensate is formed with an
energy large enough to be dissipated directly by molecular viscosity. The
growth of energy in the positive and negative helical modes is shown
separately in fig.~\ref{fig2} (b) and (c). It is striking to note that in the
steady state the negative helical modes, existing only at $k=k_m$, carry almost all the
energy of the system, signaling that the inverse energy cascade process is
very efficient at moving energy to the opposite helical modes via Class II
interactions. Moreover, the negative helical modes act as sinks and do not allow the
inverse cascade to proceed further to larger scales, stabilizing a condensate
at a given wavenumber, independent of the size of the box. A statistically
stationary state is then reached only when molecular drag becomes efficient at
such scales. Initially the growth of energy is in the positive helical modes,
shown in the insets of panels (b) and (c). There is a critical change in the
dynamics of the system when the negative helical modes become energetic enough
(i.e., for the $k_m=2$ case around $t \sim 3$). The positive helical modes
at $k<k_m$ lose their energy as they form triads of Class III or Class IV with
the negative helical modes and therefore contribute to the formation of condensate
at $k\sim k_m$. To better understand the
dynamics among different wavenumbers we show the spectrum of energy at
different times in fig.~\ref{fig3}, for the case $k_m=2$. From
fig.~\ref{fig3}(a) we see that at initial times ($t<2.0$) the growth of energy
in the large scales ($k<k_f$) is due to an inverse transfer to the positive
helical modes. This transfer is driven by triads of Class I. When the negative
helical modes at $k=k_m$ become energetic enough ($t \sim 5$) the positive
helical modes start to be depleted, leading for later times ($t \sim 9$) to
a configuration where all the energy is concentrated only on the $u^-_k$,
although they correspond to a small minority of the total number of
degrees-of-freedom. Figure~\ref{fig3}(c) shows the flux of total energy as a
function of time. We observe a persistent constant positive flux corresponding
to an inverse cascade of energy in the range $k\in [k_m,k_f]$. This confirms that
triads of Class II also lead to a reverse energy cascade. The energy is then
directly dissipated by the viscous effect which becomes substantial for the
highly energetic negative helical modes. This is shown in fig.~\ref{fig3}(d),
where we compare the energy flux due to the nonlinear terms,
\begin{align}
{\rm \Pi}_E(k)=\sum_{|{\bm k}'|<k} {\hat {\bm u}}_{\bm k'}^* \cdot \hat{\bf N}_{{\bm k}'},
\label{eq:flux}
\end{align}
across a wavenumber $k$, where
\begin{align}
\hat{\bf N}_{\bm k}= \left(\mathbb{I}-\frac{{\bm k}\otimes{\bm k}}{k^2}\right)\left[\sum_{{\bm p}+{\bm q}=-{\bm k}}({\hat {\bm u}}_{\bm p}\cdot{\bm q}){\hat {\bm u}}_{\bm q}\right]
\end{align}
is the nonlinear term in the Fourier space, and
the total molecular dissipation in the same Fourier interval:
\begin{align}
{\rm D}(k)= 2\nu \sum_{|{\bm k}'|<k} k'^2 E(k').
\end{align}
It should be noted that with this definition (\ref{eq:flux}) of energy flux, which has the
opposite sign of what is commonly used, a positive/negative flux means
the presence of an inverse/direct energy cascade. Let us stress that the viscous
contribution does not match exactly the
nonlinear transfer because the energy is still growing in time. Simulations for
the cases where $k_m=4,6$ reach a steady state earlier and show a much
better matching between the two contributions, see below panel (d) of
fig.~\ref{fig4}. Let us also notice that a sort of $k^{-5/3}$ scaling is
observed in the inverse cascade regime, as for the case when only Class I triads
are present~\cite{biferale2012}, at least up to the time when the condensate
becomes energetic enough to spoil the scaling properties.
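For reference, the diagnostics of panels (c) and (d) can be assembled from the spectral fields in a few lines. The Python sketch below is our own illustration; the dealiased computation of $\hat{\bf N}_{\bm k}$ itself is assumed given, the real part is taken since the total sum is real for a real field, and the cumulative sums run over shells up to and including $k$ (one shell off the strict $|{\bm k}'|<k$ convention):
\begin{verbatim}
import numpy as np

def flux_and_dissipation(uk, Nk, kmag, visc):
    # uk, Nk: spectral velocity and nonlinear term, shape (3, N, N, N)
    # kmag: |k| on the same grid; visc: kinematic viscosity nu
    shell = kmag.astype(int).ravel()
    transfer = np.real((np.conj(uk) * Nk).sum(0)).ravel()
    energy = (np.abs(uk)**2).sum(0).ravel()
    T = np.bincount(shell, weights=transfer)   # transfer into each shell
    E = np.bincount(shell, weights=energy)     # shell energy spectrum
    ks = np.arange(T.size)
    Pi = np.cumsum(T)                          # Pi_E(k), cumulated over shells
    D = 2.0 * visc * np.cumsum(ks**2 * E)      # D(k), cumulated over shells
    return ks, Pi, D
\end{verbatim}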
In fig.~\ref{fig4} we show the results from the case where $k_m=6$. The main
interest in selecting this window is that in this way we can change the degree
of nonlocality of the triad geometry. In \cite{waleffe} it was argued that in
the scaling regime triads of Class II should display either a forward or a
reverse energy transfer depending on whether the ratio between the smallest and
the medium wavenumber, $v=k/p$, is larger or smaller than 0.278. If
we assume that the main energy transfer happens via a triad where two
wavenumbers fall in the forced range and the other belongs to the negative
helical modes, then we have $v=0.6$ for $k_m=6$ and $v=0.2$ for $k_m=2$.
As seen in fig.~\ref{fig4} we observe an inverse energy
transfer also for $v=0.6$ contradicting the prediction made by~\cite{waleffe}. This is
probably due to the absence of any scaling regime for the configuration of
forced and negative helical modes chosen here, as shown by panels (a) and (b) of
fig.~\ref{fig4}, and therefore our configuration does not satisfy the assumptions made in
\cite{waleffe}. Figure~\ref{fig4}(d) shows the balance of ${\rm \Pi}_E(k)$ and ${\rm D}(k)$
for the wavenumbers $k\in[k_m,k_f]$ which confirms that negative helical modes
lose energy due to molecular dissipation in such case.
\subsection{Energy transfer for $k_m>k_f$}
In this second set of simulations we forced at $k_f\in[4,6]$ and kept the
negative helical modes only for larger wavenumbers, $k_m=10$ and $k_m=16$. The
behavior of the growth of energy is similar to the cases of $k_m<k_f$ (see
fig.~\ref{fig5}). After the negative helical modes become energetic they
continue to accumulate energy and then reach a steady state by dissipating
energy directly via molecular viscosity. However, the dynamics of energy
transfer is entirely different from previous cases as seen in fig.~\ref{fig6}.
In fig.~\ref{fig6} (a) and (b) we show the spectrum for the total energy and
for the positive helical modes respectively. As before, the difference between
the two gives the energy content in the negative helical modes. In the
beginning we initialize the field at the forced scales and we observe a clear
inverse cascade of energy to large scales, shown by the energy spectra in
fig.~\ref{fig6} (a) and (b) and in the positive energy flux in
fig.~\ref{fig6}(c) at $t \sim 2.2$. This transfer is due to the triads of
Class I. Then, as soon as the negative helical modes become energetic enough,
the triads of Class III and Class IV take the lead and the energy flux is
reversed toward the negative helical modes at scales smaller than the forced
ones from times $t \sim 4$ and larger. It is interesting to observe that the
positive helical modes at large scales ($k<k_f$) also lose their energy by a
forward cascade, probably highly nonlocal. Figure~\ref{fig6}(c) shows the
evolution of the energy flux during the backward and forward regimes.
Panel (d) of the same figure compares the viscous contribution and the
nonlinear flux. The figure shows that in the late stationary regime the viscous
drag, induced by the high energy content of the negative helical modes, is
balanced by the nonlinear flux. In this case we have a small-scale
condensate that absorbs all the energy flowing between modes at $k \sim k_f$ and $k
\sim k_m$. This is possibly due to the fact that positive helical modes
at $k>k_m$ do not receive energy from the negative helical modes at $k\sim k_m$
as they could only form triads of Class II which are responsible for inverse
energy transfer.
\begin{figure*}[!htb]
\center
\includegraphics[scale=0.35]{figures/except-k4-kf10-isovorticity.png}
\includegraphics[scale=0.35]{figures/except-k16-kf4-isovorticity.png}
\caption{(Color online) Iso-vorticity surfaces for: (a) $k_f\in[10,12]$,
$k_m=4$, (b) $k_f\in[4,6]$, $k_m=16$. The color palette is proportional to the
intensity of the helicity: from red for high positive values ($\sim 10^3$) to blue
for high negative values ($\sim -10^3$).}
\label{visual}
\end{figure*}
\subsection{Coherent structures}
As discussed in the previous sections, both experiments lead to a sort of
helical condensate concentrated on the wavenumbers where the negative helical
modes exist. This is a different way to produce (and stabilize) strong
nonlinear structures in the Navier-Stokes equations with respect to the well-known
case of two-dimensional turbulence~\cite{kraichnan67,boffetta,chertkov,cencini,clercx,laurie}. A
visualization of the vorticity field where an inverse cascade of energy is
observed is shown in fig.~\ref{visual}(a). The presence of helical stable
structures is clearly detectable. In panel (b) of the same figure we show similar
small-scale condensates that populate the flow when $k_f\in[4,6]$ and
$k_m=16$. It would be interesting to understand if one
can highlight some universality properties of such configurations as done for
the two-dimensional case~\cite{laurie}.
\section{Summary}
We have performed several numerical simulations of a modified (decimated)
version of the three-dimensional Navier-Stokes equations by keeping only some
subsets of Fourier modes with different helical properties. The aim is to
further understand the different roles played by triads with different helical
structures in the dynamics of the nonlinear energy transfer mechanism. We have
shown that as predicted in \cite{waleffe} there exist two classes (Class I and
Class II) of triads that transfer energy to large scales, i.e. which can
support an inverse cascade even in fully homogeneous and isotropic turbulence
(but not mirror symmetric). This result for Class I where all modes have the
same helical sign was already known~\cite{biferale2013,biferale2012}. The second class
(here called Class II) is made of triads where helicity is not globally
sign-definite. The structure is such that the mode with the different helicity
is the one at the smallest wavenumbers. Hence, when the small scales are
strongly helically signed, the forward energy transfer is depleted. The
existence of inverse cascade even when helicity is not positive-definite
contradicts the predictions based only on the absolute equilibrium in the
inviscid and unforced limit~\cite{herbert,kraichnan}.\\
By concentrating the negative helical modes at small scales (high wavenumbers) we showed
that as soon as triads of the other two classes (Class III and Class IV)
become competitive, they take over the energy transfer mechanism
and the energy flux is reversed, reaching a more standard forward-cascade
regime. In both cases the energy is preferentially transferred to the minority
helical modes (here negative), leading to either a large-scale condensate or to a
small-scale condensate. Our study further supports the idea that the direction
of the energy transfer in a turbulent flow might be strongly influenced by the
helicity distribution among different scales~\cite{biferale2013,biferale2012,sahoo2015,stepanov,kessar}.
\section{Acknowledgements}
We acknowledge useful discussions with F. Bonaccorso and funding from the
European Research Council under the European Union Seventh Framework
Programme, ERC Grant Agreement No 339032. Numerical simulations have been
partially supported by the INFN initiative INF14\_fldturb.
\section{Introduction}
LIBOR market models (LMMs) are nowadays widely used by practitioners to valuate market instruments which depend on interest rate movements, such as caps or swaptions.
These models are popular due to their relative ease of calibration coupled with the possibility to economically interpret the relevant parameters. LMMs have been developed (\cite{MR97,Brigo,Rebonato}) with a view towards valuating interest rate derivatives with a maturity/tenor structure on the order of months or a few years.
Recently, the valuation of long term guarantees has become increasingly important in the life insurance sector. The regulatory framework Solvency~II \cite{L1}, which was implemented by the European Union as of January 1, 2016, requires European insurers to assign market-consistent values to their liability portfolios. For the case of life insurance with profit participation this means that cash flows have to be projected along arbitrage-free scenarios to yield a Monte Carlo method of calculating the associated expected value (\cite{GH21,HG19,VELP17}).
A life insurance portfolio may have a time to run-off of several decades and it is not unusual to have a projection horizon of $60$ years, or more. Indeed, the EIOPA risk free curve is published with a length of $120$ years (\cite{EIOPA_curve}).
For the aforementioned reasons LMMs have become very popular also in the insurance sector. However, because of the projection horizon of a typical life insurance portfolio, the generated scenarios often suffer from blow-up. In this context, blow-up or explosion means that there is a significant number of scenarios (e.g., more than $1\%$) such that the forward rate (for some maturity and some point in time) exceeds a predefined threshold (e.g., $50\%$ of interest).
The explosion problem of LMMs is also theoretically well-known (see e.g. \cite{SG}).
This is a practical problem for two reasons: Firstly, extremely high interest rate scenarios are unlikely since central banks act to stabilize rates around a given ultimate forward rate target. Hence, a significant percentage of exploding rates hints at unrealistic scenario evolution. Secondly, explosion of rates over a longer period of time leads to discount factors below machine accuracy, resulting in a vanishing cash-flow in the Monte Carlo routine.
To mitigate the explosion problem, there are two popular and practical approaches (\cite[p.~38f]{AP20}):
\begin{itemize}
\item
\emph{Volatility freeze:}
If a scenario breaches a predefined threshold (e.g., $50\%$) then the scenario evolves according to the prevailing term structure from that time onward. That is, the volatility is formally set to $0$ in this scenario.
\item
\emph{Capping:}
If a scenario breaches a predefined cap (e.g., $70\%$) then the rates are set equal to this cap from that time forward.
\end{itemize}
While these methods are clearly very effective in avoiding explosion, there are caveats: the cap has negative consequences for the martingale properties of the set of risk-free scenarios, and both methods reduce the scenario-implied volatility, which violates market consistency. The German Association of Actuaries (DAV) points out that a capping of exploding interest rates should be avoided as it in general violates the risk-neutral framework (\cite{DAV1}, \cite{DAV2}).
The no-arbitrage property is important in practice also as a necessary condition for the applicability of the so-called leakage test (see \cite{HG19}).
In this paper, we work towards an extension of LMMs such that all the salient features (ease of calibration, economic interpretation of parameters, martingale property) are preserved, but with the added benefit that the probability of blow-up is significantly reduced.
Therefore, we introduce \emph{mean-field LIBOR market models} (MF-LMMs). The essential idea here is that the mean-field dynamics can depend on properties of the empirically observed distribution of the scenario set. Specifically, we use the observed second moment as a measure for explosion and design the evolution equation in such a way that the growth of the solution's second moment is dampened.
We remark that our main motivation for this mean-field extension is the valuation of long term guarantees. To this end, we provide a numerical study which shows that the probability of explosion is considerably reduced in the MF-LMM framework. This is viewed as evidence that the mean-field approach has, when properly implemented, effects which are desirable from the economic perspective.
Moreover, in order to have a tractable extension of the standard LMM technique, we consider a standard LMM calibration and an a posteriori mean-field augmentation of the dynamics. Thus the parametrization of the mean-field component is justified by the economic plausibility of the resulting scenarios while the non-mean-field parameters follow from a standard calibration routine.
Concerning the economic motivation for MF-LMMs we stress that interest rate movements are stabilized by central bank policy. In short rate models this stylized fact is often incorporated by means of a mean-reversion assumption. This observation is also reflected by the literature strand on \textit{backward-looking rates}, which disentangles LIBOR rates into a compounded overnight risk-free rate plus a fixed spread (see e.g. \cite{VPSine}, \cite{LM19, LM19_}), based on the ISDA fallback protocol (see \cite{ISDA1,ISDA2}) for LIBOR benchmarks. Whereas this LIBOR spread approach will be pertinent in the long-term future of fixed-term reference rate modeling, our approach focuses on existing long-term contracts based on the LIBOR that are still present and popular among practitioners.
\subsection{Outline}\label{sec:orga}
This paper is organized as follows: In Section~\ref{sec:model}, the MF-LMM is introduced and its existence is proved by showing that the corresponding mean-field SDE\footnote{For the remainder of the paper and consistently with e.g.~\cite{BLPR}, we use the terms mean-field SDE and McKean-Vlasov SDE synonymously.} is well-posed and has a unique strong solution. Section~\ref{SEC:Cali} addresses practical aspects of the MF-LMM such as a Black '76-type formula for a given measure flow, calibration and change to the spot measure. Section~\ref{sec:numerics} contains the numerical study and results, which show how the model can be used to reduce explosion. The effects on cap and swaption prices are studied, and it is shown that these can be (approximately) preserved by a judicial choice of the mean-field component.
Section~\ref{Sec:WEll} deals with existence and uniqueness of the solution to the underlying mean-field SDE.
\subsection{Notation}\label{sec:prel}
For the remainder of the manuscript, we resort to the following terms and definitions:
\begin{itemize}
\item $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t \geq 0},\mathbb{P})$ denotes a filtered probability space satisfying the usual conditions, see e.g., \cite{SAB}.
\item Let $(\mathbb{R}^d,\left \langle \cdot,\cdot \right \rangle, |\cdot|)$ be $d$-dimensional ($d \geq 1$) Euclidean space. As a matrix-norm, we use the Hilbert-Schmidt norm denoted by $\| \cdot \|$.
\item
Let $\mathscr{P}(\mathbb{R}^d)$ denote the family of all probability measures on $\mathbb{R}^d$ and define the subset of probability measures with finite second moment by
\begin{equation*}
\mathscr{P}_2(\mathbb{R}^d)
:=
\Big \{ \mu\in \mathscr{P}(\mathbb{R}^d) :
\int_{\mathbb{R}^d} |x|^2 \mu(\mathrm{d} x) < \infty
\Big \}.
\end{equation*}
\end{itemize}
\section{The mean-field LIBOR market model}\label{sec:model}
We start with the recapitulation of standard LIBOR market models:
The fair price at time $t$ of a zero-coupon bond paying $1$ unit of currency at expiry date $T\ge t$ will be denoted by $P(t,T)$.
We fix a tenor structure $0 \leq t_0 < t_1 < \ldots < t_N$, and remark that $t_N$ may be large.
The $i$-th forward LIBOR rate (i.e., the rate valid on $[t_{i-1},t_i]$) at time $t \le t_{i-1}$ is defined as
\begin{equation*}
L_t^{i}
:=
\frac{1}{\delta_i}
\frac{P(t,t_{i-1}) - P(t,t_i)}{P(t,t_i)}\,,
\end{equation*}
where we assume accrual periods $\delta_i = t_i - t_{i-1}$.
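For instance, given an initial discount curve $P(0,t_j)$, the initial forward rates follow directly from this definition; a toy Python illustration (the flat $2\%$ curve and annual tenor are assumptions made purely for illustration):
\begin{verbatim}
import math

def forward_libor(P, i, delta):
    # L^i_0 = (P(0, t_{i-1}) - P(0, t_i)) / (delta_i * P(0, t_i))
    return (P[i - 1] - P[i]) / (delta[i] * P[i])

t = list(range(0, 6))                        # annual tenor t_0, ..., t_5
P = [math.exp(-0.02 * s) for s in t]         # flat 2% zero-coupon curve
delta = [None] + [t[j] - t[j - 1] for j in range(1, 6)]
print([round(forward_libor(P, i, delta), 6) for i in range(1, 6)])
# on a flat curve every forward equals exp(0.02) - 1, about 0.020201
\end{verbatim}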
We follow the backward induction approach (as e.g. presented in \cite{MR97}) to define the LIBOR dynamics, see also \cite{MR09,Fil}.
We therefore employ the following standard postulates:
\begin{enumerate}
\item[(1)]
The initial term structure $P(0,t_i)$ is positive and non-increasing, thus $L^i_0\ge0$ for all $i=1,\dots,N$.
\item[(2)]
For each index $i=1,\dots,N$, there is an $\R^d$-valued volatility $\sigma_i$, a forward measure $\mathcal{Q}^i$ and a corresponding $d$-dimensional Brownian motion $W^i$ ($d\ge1$), such that the dynamics of $L^i$ under $\mathcal{Q}^i$ is given by
\begin{align}
\label{e:LMM}
\mathrm{d}L_t^{i}
= L_t^{i} \sigma_i^{\top} \,\textup{d} W^{i}_t.
\end{align}
\item[(3)]
The Radon-Nikodym derivatives of the forward measures are naturally given by
\begin{equation}
\label{e:cons}
\frac{d\mathcal{Q}^{i-1}}{d\mathcal{Q}^i}
= \mathcal{E}_{t_{i-1}}
\left( \int_{0}^{\cdot}
\frac{\delta_i L^i_s}{\delta_i L^i_s + 1} \sigma_i^{\top}(s) \, \,\textup{d} W^i_s
\right)\,.
\end{equation}
\end{enumerate}
Note that the $\mathcal{Q}^i$, for $i<N$, are fixed by backward induction starting from the terminal measure $\mathcal{Q}^N$.
In the classical LIBOR market model the volatility structures are deterministic functions of time, that is $\sigma_i = \sigma_i(t)\in\R^d$ for $t\in[0,t_{i-1}]$.
We lift this construction to the mean-field setting.
Therefore, we choose
\begin{equation}
\label{e:LMM-mf}
\sigma_i = \sigma_i(t,\mu_{t}^{i})
\end{equation}
where $\mu_{t}^{i} = \text{Law}(L_t^{i})$ is the law of $L_t^i$ under $\mathcal{Q}^i$.
Consequently, the process given by \eqref{e:LMM} needs to be formulated as a mean-field SDE, see e.g.\ \cite{CD}. In order to obtain a well-formulated and applicable model, we have to derive the following three results:
\begin{enumerate}
\item[(1)]
The mean-field version of the SDE \eqref{e:LMM} is well-posed.
\item[(2)]
The mean-field version of the SDE \eqref{e:LMM} can be transformed (for all $i\le N$) to a mean-field SDE under the terminal measure $\mathcal{Q}^N$.
\item[(3)]
The model should imply a Black '76-type formula for cap prices.
\end{enumerate}
The first two theoretical items are proved in Theorem~\ref{TH:TH1} and the third practical item is shown in Theorem~\ref{thm:cap}.
\subsection{Specific volatility structure}\label{sec:vol_str}
To answer (1) and (2), one clearly needs to give the volatility~\eqref{e:LMM-mf} some specific structure, in line with the theoretical results obtained in Section~\ref{Sec:WEll}. Exemplary choices may depend on the variance of the LIBOR rates; e.g., we consider volatility structures of the form
\begin{equation}
\label{e:vol_gen}
\sigma_i(t,\mu_t^i) = \lambda^i(t,\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}])
\end{equation}
where $\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}]$ denotes the variance of $L_t^i$ under $\mathcal{Q}^i$ and $\lambda^i: [0,t_{i-1}]\times\R^+\rightarrow\R^d$ is a deterministic function.
In order to incorporate a dampening effect, we will assume in the numerical study in Section~\ref{sec:numerics} that
\begin{equation}
\label{eq:diffusion}
\sigma_i(t,\mu_{t}^{i})
=
\sigma_i^{(1)}(t)
\exp\Big(
-\max\left\{\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}] - \tilde{\sigma},0\right\}
\Big),
\end{equation}
where $\sigma_i^{(1)}$ is modeled as a bounded function $\sigma_i^{(1)}: [0,t_{i-1}] \rightarrow {\mathbb R}^d$. As in the classical model, this deterministic component is calibrated to market data, such as caplet prices. Here $\tilde{\sigma}>0$ represents a threshold volatility, motivated by historical data.
This means that the volatility is tamed if $\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}]$ increases beyond the threshold. We emphasize that our general existence and uniqueness result, Theorem~\ref{TH:TH1} covers this chosen parametrization; compare in particular Remark~\ref{rem:max}.
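For a single rate under its own forward measure, where the dynamics \eqref{e:LMM} is driftless, an interacting-particle approximation of this dampened dynamics takes only a few lines. The following Python sketch assumes $d=1$ and replaces $\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}]$ by the empirical variance of the particle cloud; the function \texttt{sigma1} and all numerical values are illustrative assumptions, not calibrated quantities:
\begin{verbatim}
import numpy as np

def simulate_mf_libor(L0, sigma1, sigma_tilde, T, n_steps, n_paths, seed=0):
    # dL = L * sigma1(t) * exp(-max(Var[L_t] - sigma_tilde, 0)) dW under Q^i;
    # Var[L_t] is approximated by the cloud's empirical variance
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    L = np.full(n_paths, float(L0))
    for m in range(n_steps):
        sig = sigma1(m * dt) * np.exp(-max(L.var() - sigma_tilde, 0.0))
        dW = np.sqrt(dt) * rng.normal(size=n_paths)
        L *= np.exp(-0.5 * sig**2 * dt + sig * dW)   # exact lognormal step
    return L

paths = simulate_mf_libor(L0=0.02, sigma1=lambda t: 0.3,
                          sigma_tilde=0.01, T=20.0,
                          n_steps=240, n_paths=10_000)
\end{verbatim}
The damping acts on all particles simultaneously through the common empirical variance, which is exactly the mean-field character of the model.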
\subsection{Existence and uniqueness of the MF-LMM}
The existence and uniqueness of the classical LIBOR market model is based on a backward induction argument.
Therefore, we derive the dynamics of the $i$-th LIBOR rate under the terminal forward measure $\mathcal{Q}^N$ in the form
\begin{align}
\label{e:LMM3}
\mathrm{d}L_t^{i}
=
L_t^{i}\Big(
b_i(t,L_t,\tilde{\mu}_{t}^{i}) \,\mathrm{d}t
+ \tilde{\sigma}_i(t,\tilde{\mu}_t^{i})\,\mathrm{d}W^{N}_t \Big)
\end{align}
for $0 \leq t < t_{i-1}$.
In this formula,
$\tilde{\mu}_t^{i}$ is the law under $\mathcal{Q}^N$ of a process that is specified below. The drift $b_i$ and the volatility structure, $\tilde{\sigma}$, are also explicitly derived below, see equation~\eqref{e:LIBORRates}.
The occurrence of $L_t$ in $b_i$ indicates that, under $\mathcal{Q}^N$, the dynamics of $L^{i}$ will depend also on the other LIBOR rates $L^j$, for $j \ge i$.
Applying Girsanov's theorem, together with \eqref{e:cons}, implies
\begin{equation}
\label{e:girs-i}
\,\textup{d} W^{i}_t
= \,\textup{d} W^{N}_t - \sum_{k=i+1}^{N} \frac{\delta_k L_t^k}{1 +\delta_k L_t^k} \sigma_k(t,\mu_t^k) \,\textup{d} t \,
\end{equation}
where $\sigma_k$ is given by \eqref{e:LMM-mf}.
However, here the $\mu_t^k$ are still the laws of $L_t^k$ under $\mathcal{Q}^k$ and not $\mathcal{Q}^N$.
When switching from $\mathcal{Q}^i$ to $\mathcal{Q}^N$ we need to consider the distribution of $L^k$ for $i < k \leq N$ under $\mathcal{Q}^N$. Thus, in the specific situation of \eqref{e:vol_gen}, the variances have to be calculated with respect to $\mathcal{Q}^N$:
\begin{align}
\label{e:V-trnsf}
\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}]
=
\mathbb E_{\mathcal{Q}^i}\Big[\Big(
L_t^i - \mathbb E_{\mathcal{Q}^i}[L_t^i]
\Big)^2\Big]
=
\mathbb E_{\mathcal{Q}^N}\Big[\Big(
L_t^i - \mathbb E_{\mathcal{Q}^N}[L_t^i Z_t^{i,N}] \Big)^2 Z_t^{i,N} \Big]
=:
\Psi_t^{i,N}\,,
\end{align}
where
\begin{equation}
Z_t^{i,N}
:=
\frac{\mathrm{d}\mathcal{Q}^i}{\mathrm{d}\mathcal{Q}^N}\Big|_{\mathcal{F}_t}
=
\mathbb E_{\mathcal{Q}^N}\Big[
\frac{\mathrm{d}\mathcal{Q}^i}{\mathrm{d}\mathcal{Q}^N}\Big| \mathcal{F}_t
\Big].
\end{equation}
To this end, we start with $i=N-1$ and obtain from \eqref{e:cons} the specific form
\[
Z_t^{N-1,N}
=
\frac{\mathrm{d}\mathcal{Q}^{N-1}}{\mathrm{d}\mathcal{Q}^N}\Big|_{\mathcal{F}_t}
= \mathcal{E}_{t}
\left( \int_0^{\cdot}
\frac{\delta_N L^N_s}{\delta_N L^N_s + 1}\sigma_N(s,\mu_s^N)^{\top} \,\textup{d} W^N_s
\right) \,.
\]
To highlight the difference from the classical situation, we look at $i = N - 2$ and obtain
\[
Z^{N-2,N}_t
=
\frac{\mathrm{d}\mathcal{Q}^{N-2}}{\mathrm{d}\mathcal{Q}^{N-1}}\frac{\mathrm{d}\mathcal{Q}^{N-1}}{\mathrm{d}\mathcal{Q}^{N}} \Big|_{\mathcal{F}_t}
=
Z^{N-1,N}_t \frac{\mathrm{d}\mathcal{Q}^{N-2}}{\mathrm{d}\mathcal{Q}^{N-1}} \Big|_{\mathcal{F}_t}
=: Z^{N-1,N}_t Z^{N-2,N-1}_t \,.
\]
Consequently, the dynamics of $Z^{N-2,N}_t$ are given by
\begin{align*}
\mathrm{d} Z^{N-2,N}_t
&=
Z^{N-1,N}_t \mathrm{d}Z^{N-2,N-1}_t
+ Z^{N-2,N-1}_t \mathrm{d}Z^{N-1,N}_t
+ \mathrm{d}[Z^{N-1,N}_t,Z^{N-2,N-1}_t] \\
&=
Z^{N-1,N}_t Z^{N-2,N-1}_t \frac{\delta_{N-1} L_t^{N-1}}{1 + \delta_{N-1} L_t^{N-1}} \sigma_{N-1}(t,\mu_t^{N-1})^{\top} \mathrm{d}W_t^{N-1} \\
& \quad\quad
+ Z^{N-2,N-1}_tZ^{N-1,N}_t \frac{\delta_{N} L_t^{N}}{1 + \delta_{N} L_t^{N}} \sigma_N(t,\mu_t^{N})^{\top} \mathrm{d}W_t^{N}
+ \mathrm{d}[Z^{N-1,N}_t,Z^{N-2,N-1}_t].
\end{align*}
Thus, in order to obtain the distribution of $L^{N-2}$ under $\mathcal{Q}^N$, we also need to consider the distribution of $Z^{N-2,N}$ and $Z^{N-1,N}$. The same procedure applies to $L^{N-j}$, $j=3,\ldots,N-1$.
To do so, let $\tilde{\mu}_t^{N-1}$ denote the joint law of $(L_t^{N-1},Z_t^{N-1,N})$ under $\mathcal{Q}^N$. Then it follows that
\[
\mathbb{V}_{\mathcal{Q}^{N-1}}[L_t^{N-1}]
= \Psi_t^{N-1,N}
= \Psi^{N-1}(\tilde{\mu}_t^{N-1})
\]
where $\Psi^{N-1}$ is now a map $\Psi^{N-1}: \mathscr{P}_2(\mathbb{R}^2)\rightarrow\mathbb{R}$.
Therefore, we can define a new volatility coefficient
\begin{equation}
\label{e:sigma-t}
\tilde{\sigma}_{N-1}(t,\tilde{\mu}_t^{N-1})
:= \lambda^{N-1}(t, \Psi^{N-1}(\tilde{\mu}^{N-1}_t))
\end{equation}
which replaces $\sigma_{N-1}(t,\mu_t^{N-1})$ but, crucially, depends on the law $\tilde{\mu}_t^{N-1}$ under $\mathcal{Q}^N$.
Substituting $\sigma_{N-1}(t,\mu_t^{N-1})$ by the coefficient $\tilde{\sigma}_{N-1}(t,\tilde{\mu}_t^{N-1})$, and using the relation \eqref{e:girs-i} between $W_t^{N}$ and $W_t^{N-1}$, yields
\begin{align*}
\mathrm{d}Z^{N-2,N}_t
&=
Z^{N-2,N}_t \frac{\delta_{N-1} L_t^{N-1}}{1 + \delta_{N-1} L_t^{N-1}} \tilde{\sigma}_{N-1}(t,\tilde{\mu}_t^{N-1})^{\top} \left( \mathrm{d}W^{N}_t - \frac{\delta_N L_t^N}{1 + \delta_N L_t^N} \sigma_N(t,\mu_t^N) \mathrm{d}t \right) \\
&\quad
+ Z^{N-2,N}_t \frac{\delta_N L_t^{N}}{1 + \delta_N L_t^{N}} \sigma_N(t,\mu_t^{N})^{\top} \mathrm{d}W_t^{N} \\
&\quad
+ Z^{N-2,N}_t \frac{\delta_N L_t^N}{1 + \delta_N L_t^N} \sigma_N(t,\mu_t^N)^{\top}\frac{\delta_{N-1} L_t^{N-1}}{1 + \delta_{N-1} L_t^{N-1}} \tilde{\sigma}_{N-1}(t,\tilde{\mu}_t^{N-1}) \mathrm{d}t \\
&=
Z^{N-2,N}_t \frac{\delta_{N-1} L_t^{N-1}}{1 + \delta_{N-1} L_t^{N-1}} \tilde{\sigma}_{N-1}(t,\tilde{\mu}_t^{N-1})^{\top} \mathrm{d}W^{N}_t \\
&\quad
+ Z^{N-2,N}_t \frac{\delta_N L_t^{N}}{1 + \delta_N L_t^{N}} \sigma_N(t,\mu_t^{N})^{\top} \mathrm{d}W_t^{N}.
\end{align*}
Therefore, we have found the SDE for $Z_t^{N-2,N}$ under the terminal measure and with coefficients which depend on the laws $\tilde{\mu}_t^{N-1}$ and $\mu_t^N = \tilde{\mu}_t^N$ also under the terminal measure.
For $i<N-2$ we proceed analogously. Thus, let $\tilde{\mu}_t^i$ denote the joint law of $(L_t^i,Z_t^{i,N},\dots,Z_t^{N-1,N})$ under $\mathcal{Q}^N$. It follows that
\[
\mathbb{V}_{Q_i}[L_t^i]
= \Psi_t^{i,N}
= \Psi^i(\tilde{\mu}_t^i) \,,
\]
where $\Psi^i$ is now a map $\Psi^i: \mathcal{P}_2(\mathbb{R}^{N-i+1})\rightarrow}\def\l{\ell}\def\iint{\int\mathbb{R}$.
Invoking \eqref{e:vol_gen}, we again define the new volatility coefficient
\begin{equation}
\label{e:sigma-t2}
\tilde{\sigma}_i(t,\tilde{\mu}_t^i)
:= \lambda^i(t, \Psi^i(\tilde{\mu}^{i}_t))\,,
\end{equation}
which now depends on the law $\tilde{\mu}_t^i$ under the terminal measure $\mathcal{Q}^N$.
In total we arrive at the system
\begin{equation}
\left(
\begin{array}{c}
\mathrm{d}L_t^{i} \\[7pt]
\mathrm{d}Z^{N-1,N}_t \\[7pt]
\vdots \\[7pt]
\mathrm{d}Z^{i,N}_t
\end{array}
\right)
=
\left(
\begin{array}{c}
L_t^i\left( \tilde{\sigma}_i(t,\tilde{\mu}_t^i)^{\top} \mathrm{d}W^{N}_t - \sum_{k=i+1}^{N} \frac{\delta_k L_t^k}{1 +\delta_k L_t^k} \tilde{\sigma}_k(t,\tilde{\mu}_t^k)^{\top} \tilde{\sigma}_i(t,\tilde{\mu}_t^i) \mathrm{d}t \right) \\[7pt]
Z^{N-1,N}_t \frac{\delta_N L_t^{N}}{1 + \delta_N L_t^{N}} \sigma_N(t,\mu_t^{N})^{\top} \mathrm{d}W_t^{N} \\[7pt]
\vdots \\[7pt]
Z^{i,N}_t \sum_{q=0}^{N-i-1} \frac{\delta_{N-q} L_t^{N-q}}{1 + \delta_{N-q} L_t^{N-q}} \tilde{\sigma}_{N-q}(t,\tilde{\mu}_t^{N-q})^{\top} \mathrm{d}W_t^{N}
\end{array}
\right)\,.
\end{equation}
We now state our main result on the well-posedness of the mean-field LIBOR market model (MF-LMM).
\begin{thm}[Existence of MF-LMM]\label{TH:TH1}
Let the volatility coefficients $\sigma_i: [0,t_{i-1}] \times \mathscr{P}_2({\mathbb R}) \rightarrow {\mathbb R}^d$ be of the form
\begin{equation*}
\sigma_i(t,\mu_{t}^{i})
=
\sigma_i^{(1)}(t)
\exp\Big(
-\max\left\{\mathbb{V}_{\mathcal{Q}^i}[L_t^{i}] - \tilde{\sigma},0\right\}
\Big),
\end{equation*}
where $\mu^i$ denotes the law of $L^i$, and satisfy the following assumptions:
\begin{enumerate}
\item[(1)] There exists a constant $L>0$ such that
\begin{equation*}
| \sigma_i(t,\mu)| \leq L, \quad \forall \ t \in [0,t_{i-1}], \ \forall \ \mu \in \mathscr{P}_2({\mathbb R}).
\end{equation*}
\item[(2)]
For any $t\in[0,t_{i-1}]$, $\sigma(t,\cdot)$ satisfies assumption ({\bf A}$_{b\sigma}^1$) of Section~\ref{Sec:WEll} with a constant independent of~$t$.
\end{enumerate}
Then the functions $\tilde{\sigma}_i: [0,t_{i-1}] \times \mathscr{P}_2({\mathbb R}^{N-i+1}) \rightarrow {\mathbb R}^d$
defined by \eqref{e:sigma-t2} satisfy these assumptions as well.
Let $\mathcal{Q}^N$ be a probability measure and $W^N$ an adapted $d$-dimensional Brownian motion. Then, the mean-field SDE
\begin{equation}
\label{e:LIBORRates1}
\mathrm{d}L_t^{N}
= L_t^{N}
\tilde{\sigma}_N(t,\tilde{\mu}_t^{N})^{\top}
\mathrm{d}W^{N}_t ,
\end{equation}
with $t\in[0,t_{N-1}]$, has a unique strong solution $L^N$.
Moreover, for $i < N$, the mean-field SDE
\begin{align}
\label{e:LIBORRates}
\left(
\begin{matrix}
\mathrm{d}L_t^{i} \\[7pt]
\mathrm{d}Z^{N-1,N}_t \\[7pt]
\vdots
\\[7pt]
\mathrm{d}Z^{i,N}_t
\end{matrix}
\right)
=
\left(
\begin{matrix}
L_t^i\left( \tilde{\sigma}_i(t,\tilde{\mu}_t^i)^{\top} \mathrm{d}W^{N}_t - \sum_{k=i+1}^{N} \frac{\delta_k L_t^k}{1 +\delta_k L_t^k} \tilde{\sigma}_k(t,\tilde{\mu}_t^k)^{\top} \tilde{\sigma}_i(t,\tilde{\mu}_t^i) \mathrm{d}t \right) \\[7pt]
Z^{N-1,N}_t \frac{\delta_N L_t^{N}}{1 + \delta_N L_t^{N}} \tilde{\sigma}_N(t,\mu_t^{N})^{\top} \mathrm{d}W_t^{N} \\[7pt]
\vdots \\[7pt]
Z^{i,N}_t \sum_{q=0}^{N-i-1} \frac{\delta_{N-q} L_t^{N-q}}{1 + \delta_{N-q} L_t^{N-q}} \tilde{\sigma}_{N-q}(t,\tilde{\mu}_t^{N-q})^{\top} \mathrm{d}W_t^{N}
\end{matrix}
\right)
\end{align}
has a unique strong solution $(L^i,Z^{i,N}, \ldots, Z^{N-1,N})$,
where $\tilde{\mu}_t^i$ is the joint law of $(L_t^i,Z^{i,N}_t, \ldots, Z^{N-1,N}_t)$.
Under the forward measure $\mathcal{Q}^i$, the martingale representation
\begin{equation}
\label{e:MRep}
\mathrm{d}L_t^{i}
= L_t^{i} \sigma_i(t,\mu_t^{i})^{\top}\mathrm{d}W^{i}_t
\end{equation}
holds, where $W^{i}$ is a $\mathcal{Q}^i$-Brownian motion in $\mathbb{R}^d$ and $\mu_t^{i}$ is the law of $L_t^{i}$ under $\mathcal{Q}^i$.
\end{thm}
\begin{proof}
We employ an inductive argument to prove well-posedness of the equations defined in \eqref{e:LIBORRates}. We start with $i=N$ and consider the mean-field SDE
\begin{equation*}
\mathrm{d}L_t^{N} = L_t^{N} \sigma_N(t,\mu_{t}^{N})^{\top}\mathrm{d}W^{N}_t,
\end{equation*}
which, as shown in Theorem~\ref{TH:TH2}, has a unique strong (non-negative) solution, since it is a special case of the equation considered in Section \ref{Sec:WEll}.
Assume that we have shown well-posedness for $i=k+1$. For $i=k$, we observe that the drift of
\begin{equation*}
\mathrm{d}L_t^{k}
=
L_t^{k}\left( \tilde{\sigma}_k(t,\tilde{\mu}_t^{k})^{\top} \mathrm{d}W^{N}_t - \sum_{j=k+1}^{N} \frac{\delta_j L_t^{j}}{1 +\delta_j L_t^{j}} \tilde{\sigma}_j(t,\tilde{\mu}_t^{j})^{\top} \tilde{\sigma}_k(t,\tilde{\mu}_t^{k}) \mathrm{d}t \right),
\end{equation*}
depends on LIBOR rates with index larger than $k$, which, by induction hypothesis, have a non-negative, unique strong solution. Furthermore, due to the boundedness of $\tilde{\sigma}$ (which follows from the boundedness of $\sigma)$, there exists a constant $C >0$ (independent of $t$ and the measure) such that
\begin{equation*}
\left| \sum_{j=k+1}^{N} \frac{\delta_j L_t^{j}}{1 +\delta_j L_t^{j}} \tilde{\sigma}_j(t,\tilde{\mu}_t^{j})^{\top} \tilde{\sigma}_k(t,\tilde{\mu}_t^{k}) \right| \leq C.
\end{equation*}
Therefore, Girsanov's theorem, as stated in \cite[Lemma A.2]{BMBP18}, is applicable.\footnote{This also implies that all investigated change of measure intensities $Z^{i,N}$ for $i=1,\ldots,N-1$ are true martingales.}
Also the coefficients of each of the processes $(Z_t^{i})_{0 \leq t \leq T}$ only depend on LIBOR rates with index larger than $k$ and grow linearly in $Z^{i}$. Similar arguments to the ones employed in Section \ref{Sec:WEll} yield the claim, as the process $(L_t^i,Z^{i,N}_t, \ldots, Z^{N-1,N}_t)_{0 \leq t \leq T}$ can be identified with an $(N-i+1)$-dimensional process $(X_t)_{0 \leq t \leq T}$; see Remark~\ref{rem:ass}.
\end{proof}
\begin{remark}[Connection to Backward-Looking Rates (BLR)]
As noted in the introduction, our approach shares common features with the approach on backward-looking rates. This literature strand adapts classical forward-looking rates to the new concept of in-arrears backward-looking rates while at the same time incorporating a deterministic (tenor-dependent) dampening effect at the respective tenor dates, see eq.~(13) in \cite{LM19}, respectively eq.~(5) in \cite{LM19_}. We wish to stress that the MF-LMM approach and the BLR approach differ fundamentally, in particular:
\begin{itemize}
\item \cite{LM19,LM19_} focus on extending traditional forward-looking rates (like the LIBOR) to encompass the new setting-in-arrears backward-looking rates, while staying compatible with these traditional rates.
\item The MF-LMM applies a distribution-dependent dampening in order to prevent explosion of the LIBOR rates, whereas the BLR approach uses a deterministic time-dependent dampening.
\end{itemize}
Additionally, as a result of the compatibility with traditional forward-looking (LIBOR) rates, our MF-LMM is as well consistent with backward-looking rates. We leave this topic for future research.
\end{remark}
\begin{remark}[Nonlinear diffusion in the sense of McKean]
Equation \eqref{e:LIBORRates} is, viewed individually, a mean-field SDE only if one considers the input from other distributions as an additional time-dependence and allows for random coefficients. Indeed, for $i<N$, the coefficients in \eqref{e:LIBORRates} depend on distributions $\tilde{\mu}^j$ and rates $L^j$ with $i<j\le N$.
By contrast, equation~\eqref{e:MRep} is a mean-field SDE for each $i$.
Moreover, equations~\eqref{e:LIBORRates1} and \eqref{e:LIBORRates} can be reformulated as components of the mean-field SDE \eqref{e:MF} (with non-random coefficients):
With the same assumptions and notation as in Theorem~\ref{TH:TH1}, the $\mathbb{R}^{2N-1}$-valued process
\[
M = (L^1,\dots,L^N,Z^{1,N},\dots,Z^{N-1,N})\,,
\]
satisfies the mean-field equation
\begin{equation}
\label{e:MF}
\mathrm{d}M_t
= \tilde{b}(t,M_t,\tilde{\mu}_t)\,\textup{d} t
+ \tilde{\sigma}(t,\tilde{\mu}_t)^{\top}
\mathrm{d}W^{N}_t \,,
\end{equation}
where $\tilde{\mu}_t$ is the law of $M_t$ under $\mathcal{Q}_N$. Then, the components of this equation are defined as follows:
For the collection $(L^j,Z^{j,N},\ldots,Z^{N-1,N})$ for $j=1,\ldots,N-1$ let $m_{(L^j,Z^{j,N},\ldots,Z^{N-1,N})}: \mathscr{P}_2({\mathbb R}^{2N-1}) \rightarrow \mathscr{P}_2({\mathbb R}^{N-j+1})$ be the projection onto the corresponding marginal, which corresponds to the joint law of $(L^j,Z^{j,N},\ldots,Z^{N-1,N})$ under $\mathcal{Q}^N$.
Consider
\begin{equation}
\label{e:LR1}
\mathrm{d}L_t^{N} = L_t^{N}
\tilde{\sigma}_N(t,\tilde{\mu}_t^{N})^{\top}
\mathrm{d}W^{N}_t ,
\end{equation}
where $0 \leq t < t_{N-1}$ and $\tilde{\mu}_t^N = \mu_t^N = m_{L^N}(\tilde{\mu}_t)$ is the law of $L^{N}_t$.
Furthermore, for $i < N$, consider
\begin{align}
\label{e:LR2}
\mathrm{d}L_t^{i}
&=
L_t^i\left( \tilde{\sigma}_i(t,\tilde{\mu}_t^i)^{\top} \mathrm{d}W^{N}_t - \sum_{k=i+1}^{N} \frac{\delta_k L_t^k}{1 +\delta_k L_t^k} \tilde{\sigma}_k(t,\tilde{\mu}_t^k)^{\top} \tilde{\sigma}_i(t,\tilde{\mu}_t^i) \mathrm{d}t \right)\,,
\\
\label{e:LR3}
\mathrm{d}Z^{i,N}_t
&=
Z^{i,N}_t \sum_{q=0}^{N-i-1} \frac{\delta_{N-q} L_t^{N-q}}{1 + \delta_{N-q} L_t^{N-q}} \tilde{\sigma}_{N-q}(t,\tilde{\mu}_t^{N-q})^{\top} \mathrm{d}W_t^{N} \,,
\end{align}
where
$\tilde{\mu}_t^i
= m_{(L^i,Z^{i,N},\ldots,Z^{N-1,N})}(\tilde{\mu}_t)$ is the joint law of $(L_t^i,Z^{i,N}_t, \ldots, Z^{N-1,N}_t)$.
\end{remark}
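In a particle implementation of \eqref{e:LIBORRates} under the terminal measure, the only measure-dependent input is $\Psi_t^{i,N}$ from \eqref{e:V-trnsf}, which can be estimated from the simulated cloud together with the density processes. A minimal sketch of the corresponding weighted-variance estimator (our own illustration) reads:
\begin{verbatim}
import numpy as np

def psi_weighted_variance(L_i, Z_iN):
    # Psi_t^{i,N} = E_{Q^N}[ (L_t^i - E_{Q^N}[L_t^i Z_t^{i,N}])^2 Z_t^{i,N} ],
    # i.e. the Q^i-variance of L_t^i, computed from Q^N-samples and weights
    mean_Qi = np.mean(L_i * Z_iN)              # estimates E_{Q^i}[L_t^i]
    return np.mean((L_i - mean_Qi)**2 * Z_iN)  # estimates V_{Q^i}[L_t^i]
\end{verbatim}
The estimate is then inserted into the coefficient \eqref{e:sigma-t2} in place of $\Psi^i(\tilde{\mu}_t^i)$ at each time step.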
\section{Practical aspects of the MF-LMM}\label{SEC:Cali}
\subsection{A mean-field cap formula}
In order to show that our framework is consistent with classical LIBOR market models, we also provide a Black '76-type cap pricing formula for a given measure flow along the lines of \cite{Black76}, cast into our mean-field framework:
\begin{thm}\label{thm:cap}
Let $t_1 < \ldots < t_N$ be a given tenor structure. We assume that the LIBOR rates $L_t^{i}$, for $t < t_{i-1}$ on the time-interval $[t_{i-1},t_i]$, associated to this tenor structure follow the evolution specified by equation (\ref{e:LMM}) and that $\{\mu_s^i\,\vert\,0\leq s\leq t_{i-1}\}$ are given distributions in $\mathcal{P}_2(\R)$ (for each $s$ and $i$).
In the sequel, $K$ and $V$ will denote the strike and nominal value, respectively. Then, we have that:
\begin{enumerate}
\item The price $C_i(t,\sigma_i(\cdot,\mu_{\cdot}^{i}))$ of a caplet with expiry date $t_i$ and payoff $\delta_i \cdot (L_{t_{i-1}}^{i} -K)^{+}$ is determined by
\begin{align*}
& C_i(t,\sigma_i(\cdot,\mu_{\cdot}^{i})) = \delta_i P(t,t_i) \left[ L_t^{i} \Phi(d_t^{1}) -K \Phi(d_t^{2}) \right], \\
& d_t^{1} = \frac{\log\left( \frac{L_{t}^{i}}{K} \right) + \frac{1}{2} (\bar{\sigma}_t^{i})^2}{\bar{\sigma}_t^{i}}, \quad d_t^{2} = d_t^{1} - \bar{\sigma}_t^{i}, \\
& (\bar{\sigma}_t^{i})^2 = \int_{t}^{t_{i-1}} \left(\sigma_i(s,\mu_{s}^{i}) \right)^2 \mathrm{d}s.
\end{align*}
\item The price $Cap_{FL}(t;V,K)$ of a cap, consisting of caplets with expiry dates $t_1 < \ldots < t_N$ such that $t < t_1$, in the MF-LMM is given by
\begin{equation*}
Cap_{FL}(t;V,K) = V \sum_{i=1}^{N} C_i(t,\sigma_i(\cdot,\mu_{\cdot}^{i})).
\end{equation*}
\end{enumerate}
\end{thm}
\begin{proof}
The second item is a straightforward consequence of the first, as the cap payoff decomposes into the individual caplet payoffs. Since the $i$-th LIBOR rate is modeled under $\mathcal{Q}^i$, we obtain
\begin{align*}
C_i(t,\sigma_i(t,\mu_{t}^{i})) &= \delta_i P(t,t_i) \mathbb{E}^{\mathcal{Q}^i}\left[\left(L_{t_{i-1}}^{i} - K \right)^{+}\right] \\
& = \delta_i P(t,t_i)
\mathbb{E}^{\mathcal{Q}^i}
\left( L_{t_{i-1}}^{i} \mathrm{I}_{\{L_{t_{i-1}}^{i} >K\}} \right) - \delta_i\, P(t,t_i)\, K\, \mathcal{Q}^i \left( L_{t_{i-1}}^{i} > K \right).
\end{align*}
Due to (\ref{e:LMM}), we have that $L_{t_{i-1}}^{i}$ under $\mathcal{Q}^i$ is log-normally distributed with
\begin{align*}
\log \left( L_{t_{i-1}}^{i} \right) \sim \mathcal{N} \left( \log \left( L_{t}^{i} \right) - \frac{1}{2} \int_{t}^{t_{i-1}} \left(\sigma_i(s,\mu_{s}^{i}) \right)^2 \mathrm{d}s, \int_{t}^{t_{i-1}} \left(\sigma_i(s,\mu_{s}^{i}) \right)^2 \mathrm{d}s \right).
\end{align*}
The remaining part of the proof follows now standard arguments, as in the derivation for the Black-Scholes model.
\end{proof}
This corresponds exactly to the market price formula in a Black-Scholes sense with an averaged volatility. In contrast to the classical LIBOR market model, we now average over measure-dependent functions for the volatility instead of purely deterministic functions. Note that this formula is also the theoretical foundation of the calibration procedure suggested in the following section.
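As a quick sanity check, the caplet formula can be evaluated in a few lines of Python; the numbers below are the toy values of the calibration example further down ($K=L_0^i=0.02$, $P(t,t_i)=1$, $\delta_i=1$, $\bar{\sigma}_t^{i}=1.55$ assumed given):
\begin{verbatim}
from math import erf, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def caplet_price(L, K, sigma_bar, delta, P0ti):
    # delta_i * P(t, t_i) * (L_t^i Phi(d1) - K Phi(d2)), as in the theorem
    d1 = (log(L / K) + 0.5 * sigma_bar**2) / sigma_bar
    d2 = d1 - sigma_bar
    return delta * P0ti * (L * norm_cdf(d1) - K * norm_cdf(d2))

print(caplet_price(0.02, 0.02, 1.55, 1.0, 1.0))   # ~ 0.011232
\end{verbatim}
The resulting value is consistent with the Monte Carlo caplet price reported for $k=0$ in Table~\ref{tab:Iterations} below.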
\subsection{Mean-field calibration}\label{sec:mfCal}
The LIBOR rates
\begin{align*}
\mathrm{d}L_t^{i} = L_t^{i} \sigma_i(t,\mu_{t}^{i})^{\top}\mathrm{d}W^{i}_t,
\end{align*}
where $\mu_{t}^{i} = \text{Law}(L_t^{i})$ and the law is considered under the measure $\mathcal{Q}^i$, will be used for the calibration of the coefficients $\sigma_i$. Recall that the dependence of $\sigma_i$ on the measure is already specified in Theorem \ref{TH:TH1}.
To cast the problem into a well-known setting, one possibility is to use the auxiliary model
\begin{align*}
\mathrm{d}\tilde{L}_t^{i} = \tilde{L}_t^{i} \sigma_i^{(1)}(t)^{\top}\mathrm{d}W^{i}_t\,,
\end{align*}
which focuses on the deterministic component of \eqref{eq:diffusion}. Then one can calibrate $\sigma_i^{(1)}$ to market data using classical cap prices. Parametric approaches for the calibration of $\sigma_i^{(1)}$, explained in \cite{Brigo}, are constructed in such a way that $\sigma_i^{(1)}$ increases as $t$ approaches the maturity.
Clearly, a modified model with added measure-dependent coefficients is not a priori market consistent, but the numerical results from Section \ref{sec:numerics} demonstrate that for long maturities consistency is achieved. Since the avoidance of explosive paths is crucial for the efficient valuation of long term contracts, this approach with a distinct damping factor seems to be quite meaningful for practical considerations.
However, with the help of Theorem~\ref{thm:cap} and an iterative procedure we can also directly calibrate the mean-field LMM. To achieve this, we propose the following procedure: At first we observe that, for given distributions $\{\mu_s^i\,\vert\,0\leq s\leq t_{i-1}\}$ (which correspond to estimates for the variance of $L_s^{i}$ at time $s$), given a perfect calibration we would have that
\begin{align*}
\left(\hat{\sigma}^{\text{market}}_i \right)^2= \int_{0}^{t_{i-1}} \left(\sigma_i(s,\mu_{s}^{i}) \right)^2 \mathrm{d}s \,,
\end{align*}
where $\hat{\sigma}^{\text{market}}_i$ is the quoted implied volatility of a caplet with expiry date $t_i$.
The implemented method is inspired by the Picard-type iteration employed in the proof of Theorem \ref{TH:TH2}. We start with a given initial variance function $v^{(0)}:[0,t_{i-1}]\rightarrow{\mathbb R}^+$ and use it as a substitute for the yet unknown variance of $L^i$, i.e.,
$v^{(0)}(s)\widehat{=}\mathbb{V}_{\mathcal{Q}^i}(L_s^i)$. Then, we further approximate
\begin{align}\label{eq:sigmaapprox}
\left(\hat{\sigma}^{\text{market}}_i \right)^2 &= \int_{0}^{t_{i-1}} \left(\sigma_i(s,\mu_{s}^{i}) \right)^2 \mathrm{d}s \notag \\
& \approx\sum_{j=1}^{J} \left(\sigma_{i}^{(1)}(s_j) \right)^2 \exp\left(-2 \max\{v^{(0)}(s_j)-\tilde{\sigma},0\}/\tilde{\sigma} \right) \left( s_j - s_{j-1} \right),
\end{align}
with $0=s_0<\ldots<s_J=t_{i-1}$ for some $J\in\mathbb{N}$.

At first we focus on the deterministic component $\sigma_i^{(1)}$ of the LIBOR rates' volatility coefficient using \eqref{eq:sigmaapprox}. If we assume a parametric form of $\sigma_i^{(1)}$, we compute a first estimate of its parameters by determining:
\begin{align*}
\mbox{argmin}\,\Big |{\left(\hat{\sigma}^{\text{market}}_i \right)^2 - \sum_{j=1}^{J} \left(\sigma_i^{(1)}(s_j) \right)^2 \exp\left(-2 \max\{v^{(0)}(s_j)-\tilde{\sigma},0\}/\tilde{\sigma} \right) \left( s_j - s_{j-1} \right)}\Big |_2.
\end{align*}
For example, if we use $\sigma_i^{(1)}(t)=g(t_{i-1}-t)$ with
\begin{align}\label{eq:sigma_parametric}
g(\tau)=(a+b\tau)e^{-c\tau}+d,
\end{align}
the above minimization is with respect to $\{a,b,c,d\}$. We denote the resulting first estimator by $\hat{\sigma}_i^{(1,1)}$. In a next step we need to update the variance $v^{(0)}$.
Therefore, we run Monte Carlo simulations to generate $M\in{\mathbb N}$ independent paths of the corresponding process,
\begin{align*}
\mathrm{d}L_t^{i}= L_t^{i}\hat{\sigma}_i^{(1,1)}(t)\exp\left(-\max \{v^{(0)}(t)-\tilde{\sigma},0\}/\tilde{\sigma}\right) \, \mathrm{d}W_t^i,
\end{align*}
up to the terminal time $t_{i-1}$. We derive an approximated update for the variance function $v^{(1)}$ from the associated empirical distribution. This procedure can now be immediately iterated. Replace $v^{(0)}$ by $v^{(1)}$ for computing $\hat{\sigma}_i^{(1,2)}$ in a first step and continue until the norm $|\hat{\sigma}_i^{(1,l+1)}-\hat{\sigma}_i^{(1,l)}|$ becomes acceptably small for some $l\in{\mathbb N}$.
In the following numerical example we demonstrate the feasibility of the approach described above. We focus on $t_i=20$ and use an equidistant grid $s_j-s_{j-1}=h=\frac{1}{30}$.
For the deterministic component of the volatility we use functions of the type $g(\tau)=(a+b\,\tau)e^{-c\,\tau}+d$, such that for $L^i$ we have $\sigma_i^{(1)}(t)=g(t_{i-1}-t)$.
Specifically, for our \emph{toy} example we choose $a=0.14,\,b=0.01,\,c=0.05,\,d=0.2$ and use $M=10^5$ paths to approximate $\mathbb{E}^{\mathcal{Q}^i}[\max\{L_{t_i-h}-K,0\}]$. The corresponding results for the calibrated deterministic model are stated in Table \ref{tab:Iterations} for $k=0$ iteration steps.
Using the results from Theorem \ref{thm:cap} with $\delta_i=1$, strike $K=L_0^i=0.02$ and implicitly setting $P(0,t_i)=1$, we can compute an associated \emph{quoted implied} volatility $\hat{\sigma}_i^{\text{market}}=1.55$. For the damping volatility we choose $\tilde{\sigma}=\frac{1.55}{20}$. This choice is motivated by distributing $\hat{\sigma}_i^{\text{market}}$ uniformly over the considered period of time.\\
Table \ref{tab:Iterations} at $k=6$ collects the results after 6 iteration steps. In general, one can say that the estimated parameters of $g$ stabilize very quickly. Thus the stated relative error is mainly due to the MC method (recall that the step size is $h=\frac{1}{30}$).
\begin{center}
\begin{table}
\begin{tabular}{ c| c| c|c }
k & CI for \emph{caplet} price & Parameters in $g(\cdot)$ & Relative error\\
\hline
0 & (0.0109399,{\bf{0.0112332}},0.0115265) & $\{0.14,0.01,0.05,0.2\}$ & - \\
6 & (0.0110543,{\bf{0.0113342}},0.0116141) & $\{2.08184,0.878775,3.89368,0.262653\}$ & 0.00899268
\end{tabular}
\caption{\label{tab:Iterations} Approximated caplet prices and estimated parameters.}
\end{table}
\end{center}
The effect of the variance-dependent term can nicely be seen in Figures \ref{fig:Variances} and \ref{fig:Tails}. In Figure \ref{fig:Variances} the red line plots $\mathbb{V}_{\mathcal{Q}^i}[L_t^{i,1}]$, resulting from the $M$ paths of the classical LMM, as a function of $t\in [0,t_i-h)$. The blue line shows $\mathbb{V}_{\mathcal{Q}^i}[L_t^{i,6}]$, computed from the paths of the 6th iteration step, for $t\in [0,t_i-h)$.
\begin{figure}
\centering
\includegraphics[width=7cm]{VarPlotNew.png}
\caption{Resulting variances of classical and mean-field LMM.}
\label{fig:Variances}
\end{figure}
In Figure \ref{fig:Tails} we depict the tail of the empirical distributions of $L_t^{i}$ and $L_t^{i,6}$ for $t=15$. Again the red line corresponds to the case of a deterministic volatility component and the blue one to the mean-field situation. One can nicely see that the undamped variant features more probability mass for rates above an interest level of 40\%.
\begin{figure}
\centering
\includegraphics[width=7cm]{TailPlot.png}
\caption{Tail of the distribution of $L_{15}^i$ for the classical and the mean-field LMM.}
\label{fig:Tails}
\end{figure}
\subsection{Conversion to the spot measure}\label{sec:spot}
Since existence and uniqueness of the mean-field system \eqref{e:LMM}, \eqref{eq:diffusion} are proved,
all the usual transformation rules of LIBOR market models apply. If a measure change is carried out, as below to the spot measure, it only remains to calculate the variances in \eqref{e:vol_gen} with respect to the new measure.
Let $\mathcal{Q}^*$ be the spot measure and $W^*$ the corresponding $d$-dimensional Brownian motion as in, e.g., Section~11.4 of \cite{Fil}.
The Girsanov transformation, keeping in mind the relation $ \sigma_k(t,\mu_t^{k}) = \lambda^k(t,\mathbb{V}_{\mathcal{Q}^k}(L_t^k))$, yields
\begin{equation}
\label{e:spot1}
\,\textup{d} L_t^m
=
L_t^m\Big(
\sum_{k=\eta(t)}^m\frac{\delta_k L_t^k}{\delta_k L_t^k + 1}
\lambda^k(t,\mathbb{V}_{\mathcal{Q}^k}(L_t^k))^{\top}
\lambda^m(t,\mathbb{V}_{\mathcal{Q}^m}(L_t^m)) \,\textup{d} t
+
\lambda^m(t,\mathbb{V}_{\mathcal{Q}^m}(L_t^m))^{\top} \,\textup{d} W_t^*
\Big)
\end{equation}
where the right-continuous function $\eta: [0,t_{M-1}]\rightarrow\{1,\dots,M\}$ is such that
\begin{equation}
\label{e:eta}
t_{\eta(t)-1}\le t < t_{\eta(t)}.
\end{equation}
However, the variances continue to depend on the forward measures. A calculation analogous to \eqref{e:V-trnsf} implies
\begin{equation}
\label{e:Phi_t}
\mathbb{V}_{\mathcal{Q}^m}[L_t^m]
=
\mathbb E_{\mathcal{Q}^*}\Big[\Big(
L_t^m - \mathbb E_{\mathcal{Q}^*}[L_t^m Y_t^m]
\Big)^2 Y_t^m \Big]
=: \Psi_t^{m,*}
\end{equation}
where
\begin{equation*}
Y_t^m
:=
\frac{\mathrm{d}\mathcal{Q}^m}{\mathrm{d}\mathcal{Q}^*}\Big|_{\mathcal{F}_t}
=
\mathbb E_{\mathcal{Q}^*}\Big[\frac{\mathrm{d}\mathcal{Q}^m}{\mathrm{d}\mathcal{Q}^*}\Big|\mathcal{F}_t\Big].
\end{equation*}
The expressions $\lambda^k(t,\Psi_t^{k,*})$ are the volatility coefficients in the spot formulation \eqref{e:spot1} with respect to the joint law, $\mu_t^{k,*}$, of $(L_{t}^k, Y_t^k)$ under $\mathcal{Q}^*$. We can also write $\lambda^k(t,\Psi_t^{k,*}) = \sigma_k^*(t,\mu_t^{k,*})$ to emphasize the dependence on $\mu_t^{k,*}$.
\subsubsection{Continuous-time formulation}
The process $Y^m$ can be expressed as a stochastic exponential (\cite[Equ.~(7.1)]{Fil})
\begin{equation}
\label{e:sto_exp}
Y_t^m
=
\mathcal{E}_t\left(
- \int_0^\cdot \sum_{k=\eta(s)}^m\frac{\delta_k L_s^k}{\delta_k L_s^k + 1}
\lambda^k(s,\Psi_s^{k,*})^{\top}
\mathrm{d} W^*_s
\right).
\end{equation}
With \eqref{e:Phi_t} expressing the variance in terms of the joint law of $L_t$ and $Y_t$ under $\mathcal{Q}^*$, this means that \eqref{e:spot1} can be transformed into a mean-field system along the lines of Theorem~\ref{TH:TH1}, where the $Y_t^m$ now play the role of the $Z_t^m$:
\begin{align}
\label{e:mfLY1}
dL_t^m
&=
L_t^m\left(
\sum_{k=\eta(t)}^m\frac{\delta_k L_t^k}{\delta_k L_t^k + 1}
\lambda^k(t,\Psi_t^{k,*})^{\top}
\lambda^m(t,\Psi_t^{m,*}) d t
+
\lambda^m(t,\Psi_t^{m,*})^{\top} \mathrm{d} W^*_t
\right)\,,
\\
\label{e:mfLY2}
d Y_t^m
&=
- Y_t^m \sum_{k=\eta(t)}^m\frac{\delta_k L_t^k}{\delta_k L_t^k + 1}
\lambda^k(t,\Psi_t^{k,*})^{\top}
\mathrm{d} W_t^*\,,
\end{align}
with $Y_0^m = 1$.
Equations~\eqref{e:mfLY1} and \eqref{e:mfLY2} again form a mean-field system of SDEs, since the $\Psi_t^{k,*}$ depend on the joint law of $L_t^k$ and $Y_t^k$ under $\mathcal{Q}^*$.
\subsubsection{Projection along tenor dates}
If $t=t_j$ is a tenor date, then the conditional expectation can be expressed as
\begin{equation}
Y_{t_j}^m
=
B^*(t_j)^{-1}\frac{P(j,m)}{P(0,m)}\,,
\end{equation}
(see \cite[Sec.~7.1]{Fil}) where
\begin{equation}
B^*(t_j)
= (1+\delta_{j-1} L_{t_{j-1}}^{j-1})B^*(t_{j-1}),
\quad
B^*(t_0) = 1\,,
\end{equation}
is the implied money market account (i.e., the numeraire) and
\begin{equation}\label{eq:bond}
P(j,m)
=
\prod_{l=j}^{m-1}(1+\delta_l L_{t_j}^l)^{-1}\,,
\end{equation}
is the time $t_j$-value of one unit of currency paid at $t_m$.
With
\begin{equation}
\label{e:phi1}
\Psi_j^{m,*}
:=
\mathbb E_{\mathcal{Q}^*}\left[\left(
L_{t_j}^m - \mathbb E_{\mathcal{Q}^*}\left[L_{t_j}^m B^*(t_j)^{-1}\frac{P(j,m)}{P(0,m)}\right]
\right)^2 B^*(t_j)^{-1}\frac{P(j,m)}{P(0,m)} \right]\,
\end{equation}
it follows that the evolution along the tenor dates of the mean-field system \eqref{e:mfLY1}-\eqref{e:mfLY2} is given by
\begin{equation}
\label{e:spot_phi}
d L_{t_j}^m
=
L_{t_j}^m\left(
\sum_{k=j+1}^m\frac{\delta_k L_{t_j}^k}{\delta_k L_{t_j}^k + 1}
\lambda^k(t_j,\Psi_j^{k,*})^{\top}
\lambda^m(t_j,\Psi_j^{m,*}) \,\textup{d} t
+
\lambda^m(t_j,\Psi_j^{m,*})^{\top} \mathrm{d} W^*_t\,
\right),
\end{equation}
since $\eta(t_j)=j+1$.
\subsubsection{Exogenous mean-field dynamics}
The discrete spot measure formulation \eqref{e:spot_phi} is the basis for the numerical scheme in Section~\ref{sec:numerics}.
Therefore, the volatility structure $\lambda^m(t,\Psi_t^{m,*})$ in \eqref{e:mfLY1}-\eqref{e:mfLY2} has to be specified.
To this end, and following the ideas of Section~\ref{sec:vol_str}, we split the volatility structure as
\begin{equation}
\label{e:split}
\lambda^m(t,\Psi_t^{m,*})
=
\sigma^{(1)}_m(t) \lambda^{\textup{mf}}(\Psi_t^{m,*})\,,
\end{equation}
where $\sigma^{(1)}_m(t)$ is a deterministic volatility specification and $\lambda^{\textup{mf}}(\Psi_t^{m,*})$ depends on the distribution of $(L_t^m,Y_t^m)$ under the spot measure. We remark that $\lambda^{\textup{mf}}$ is assumed to be time-homogeneous, i.e., there is no explicit dependence on time. In principle, $\lambda^{\textup{mf}}$ could also depend on the maturity $m$, but we will not need this extra degree of freedom.
Examples of possible choices for $\sigma^{(1)}_m(t)$ can be found, e.g., in \cite{Brigo}.
The choices for the numerical study are presented in Section~\ref{sec:numerics} below.
\begin{remark}
This approach seems to be promising when evaluating long-term guarantees as part of life insurance or pension contracts. Here, one obtains the $\sigma_m^{(1)}$ by calibrating a classical LMM to market data and uses \eqref{e:split} with a variance-dependent damping factor in the internal Monte Carlo procedure. At first sight, this has the consequence that the valuation principle is not market-consistent, but the involved routine is more stable. Moreover, the numerical results from Section~\ref{sec:numerics} demonstrate that, in particular for long maturities, the difference is negligible.
\end{remark}
\section{Monte Carlo simulation of MF-LMM}
\label{sec:numerics}
The numerical implementation of the MF-LMM is based on an Euler-Maruyama scheme for \eqref{e:spot_phi} together with a specification of the splitting assumption \eqref{e:split}.
Making use of \eqref{e:spot_phi} implies that the time steps coincide with the tenor dates. This has the practical advantage that the
empirical variance \eqref{e:Phi_t} can be calculated without the necessity of simulating the process $Y^m$ defined in \eqref{e:mfLY2}. Moreover, this choice is compatible with industry practice for the valuation of long-term guarantees, where the simulation time may be of the order of $100$ years and yearly time steps are generally used.
For our numerical simulations, we consider the risk free term structure that has been provided by EIOPA at year-end 2020 (\cite{EIOPA_curve}).
This interest rate curve is used by European insurance companies for the calculation of technical provisions.
In order to deal with negative interest rates, we apply a displacement factor $\alpha\in\mathbb{R}_{\ge0}$
(see \cite{diffusion} and \cite[p.~471]{Brigo}).
\subsection{Euler-Maruyama discretization scheme}
\label{sec:euler}
Let $M$ be a positive integer and
consider a discrete time grid $0, 1, 2, \dots, M$ consisting of yearly time steps. Elements in the time grid will be denoted by $t_n$.
For numerical purposes the mean-field equation \eqref{e:spot_phi} is approximated by an interacting particle system (IPS), compare equation \eqref{eq:euler1}. Each `particle' in this IPS corresponds to an interest rate curve that interacts with all other `particles' via a Monte Carlo approximation of \eqref{e:phi1}. If the number of particles, $P$, is sufficiently large, we obtain a numerical approximation of the mean-field model.
Within the IPS, the simulated stochastic processes evolve according to $P$ independent copies of a $d$-dimensional Brownian motion
$(W^{p})_{1 \leq p \leq P}$ with increments
$\Delta W^{p}_{t_n} := W^{p}_{t_{n+1}} - W^{p}_{t_n}$.
For a given maturity $m$, all `particles' $L^{m,p}$ start from the same initial interest rate curve $L_0^m$.
In order to improve the numerical stability of the model, we will work on a logarithmic scale within the simulation.
The numerical scheme is therefore as follows: at $t_n$ and for $1 \leq p \leq P$, the subsequent LIBOR rate is updated using the discretization
\begin{align}
\log(L_{t_{n+1}}^{m,p}+\alpha)
&= \log(L_{t_n}^{m,p}+\alpha)
+ \lambda^m(t_n,\Psi_{t_n}^{m,*})^{\top}
\sum_{k=\eta(t_n)}^{m} \frac{ L_{t_n}^{k,p}+\alpha}{1 + L_{t_n}^{k,p} + \alpha} \lambda^k(t_n,\Psi_{t_n}^{k,*}) \nonumber\\
&\phantom{==}
- \frac{1}{2} | \lambda^m(t_n,\Psi_{t_n}^{m,*}) |^2
+ \lambda^m(t_n,\Psi_{t_n}^{m,*})^{\top} \Delta W^{p}_{t_n} ,
\label{eq:euler1}
\end{align}
where the right-continuous function $\eta$, defined in \eqref{e:eta}, satisfies $\eta(t_n) = n+1$ on the yearly grid.
The dependence on the joint law of the forward rates in \eqref{e:phi1} at time $t_n$ and maturity $m$ is thus realized as
\begin{equation}
\label{e:Psi_sim}
\Psi_{t_n}^{m,*}
=
\frac{1}{P}\sum_{p=1}^P
\left[\left(
L_{t_n}^{m,p} -
\frac{1}{P}\sum_{q=1}^P
\left[L_{t_n}^{m,q} B_q^*(t_n)^{-1}\frac{P_q(n,m)}{P_q(0,m)}\right]
\right)^2
B_p^*(t_n)^{-1}\frac{P_p(n,m)}{P_p(0,m)} \right]\,,
\end{equation}
where $P_p(n,i)$ refers to the time $t_n$-value of one unit of currency paid at $t_i$ in the $p$-th particle, $1 \leq p \leq P$.
By convention the dimension, $d$, of the Brownian increments, $\Delta W_{t_n}^p$, shall be equal to the dimension of the vectors $\lambda^m$.
\subsection{Volatility structure}\label{sec:VS2}
The classical LIBOR market model (without any mean-field interaction) is immediately realized as a special case of our approach \eqref{eq:euler1} by setting $\lambda^{\textup{mf}}=1$ in \eqref{e:split}.
For $\sigma_m^{(1)}(t)$
we consider the parametric volatility structure given by
\begin{equation}\label{eq:vola1}
\lambda^m(t)
=
\sigma_m^{(1)}(t)
:=
\left(\Big(a(t_{m-1} - t) + d\Big) e^{-b(t_{m-1} -t)} + c\right)
\left(
\begin{matrix}
\cos\theta_m\\
\sin\theta_m
\end{matrix}
\right),
\quad t \le t_{m-1} \,,
\end{equation}
where the $\theta_m$ are angles which depend on the maturity but not on time.
Thus, in this case, the dimension of the Brownian increment in \eqref{eq:euler1} is $d=2$.
This choice provides a hump-shaped structure for instantaneous volatility of the LIBOR rate $L^m$ as a function of the time to maturity. See \cite{Brigo,Rebonato} for further background and an economic interpretation.
The subsequently presented results are obtained with respect to the year-end 2020 (without the so-called volatility adjustment) risk-free EIOPA interest rate curve (\cite{EIOPA_curve}) with a projection horizon of $M=50$.
The displacement factor is fixed as
\begin{equation}
\label{e:dispF}
\alpha = 1\,\%.
\end{equation}
We consider two sets of parameters for the hump shaped volatility curve:
\begin{align}
\textup{RMW parameters: }\quad
&
a = 0.07,\quad
b = 0.2, \quad
c = 0.6, \quad
d = 0.075 \label{e:paraRMW} \\
\textup{Excited parameters: }\quad
&a = 0.01,\quad
b = 0.05, \quad
c = 0.2, \quad
d = 0.14
\label{e:para}
\end{align}
The values \eqref{e:paraRMW} are taken from the textbook \cite[page~13]{RMW}, where they are interpreted as representing a `normal' state of the volatility structure. The parameters \eqref{e:para} are chosen specifically to represent an excited state of the market with increased volatility, see Figure~\ref{fig:vol_cur}. This is where the blow-up problem is most pronounced and hence where the mean-field interaction has the strongest (and graphically most visible) effect.
\begin{figure}
\centering
\includegraphics[width=7cm]{vol_curves20210601.png}
\caption{Volatility curves representing the scalar part of \eqref{eq:vola1} with respect to time to maturity $\tau_m=M-t_m$ and parameters \eqref{e:paraRMW}-\eqref{e:para}.}
\label{fig:vol_cur}
\end{figure}
The angles $\theta_m$ are chosen as in Figure~\ref{fig:theta} to represent a generic and economically plausible correlation structure~\eqref{e:corrM}.
\begin{figure}
\centering
\includegraphics[width=7cm]{theta.png}
\caption{Choice of the angles $\theta_m$ as a function of the maturity index $m$.}
\label{fig:theta}
\end{figure}
The choice depicted in Figure~\ref{fig:theta} yields the following correlation structure $\cos(\theta_m-\theta_n)$, where indices $m,n$ are in $\{1,6,11,16,\dots,46\}$:
\begin{equation}
\label{e:corrM}
\begin{bmatrix}
1.00 & 0.99 & 0.95 & 0.89 & 0.81 & 0.71 & 0.59 & 0.45 & 0.31 & 0.19 \\
0.99 & 1.00 & 0.99 & 0.95 & 0.89 & 0.81 & 0.71 & 0.59 & 0.45 & 0.34 \\
0.95 & 0.99 & 1.00 & 0.99 & 0.95 & 0.89 & 0.81 & 0.71 & 0.59 & 0.48 \\
0.89 & 0.95 & 0.99 & 1.00 & 0.99 & 0.95 & 0.89 & 0.81 & 0.71 & 0.61 \\
0.81 & 0.89 & 0.95 & 0.99 & 1.00 & 0.99 & 0.95 & 0.89 & 0.81 & 0.73 \\
0.71 & 0.81 & 0.89 & 0.95 & 0.99 & 1.00 & 0.99 & 0.95 & 0.89 & 0.83 \\
0.59 & 0.71 & 0.81 & 0.89 & 0.95 & 0.99 & 1.00 & 0.99 & 0.95 & 0.90 \\
0.45 & 0.59 & 0.71 & 0.81 & 0.89 & 0.95 & 0.99 & 1.00 & 0.99 & 0.96 \\
0.31 & 0.45 & 0.59 & 0.71 & 0.81 & 0.89 & 0.95 & 0.99 & 1.00 & 0.99 \\
0.19 & 0.34 & 0.48 & 0.61 & 0.73 & 0.83 & 0.90 & 0.96 & 0.99 & 1.00 \\
\end{bmatrix}
\end{equation}
In the following, we compare four simulation methods differing in their volatility structure. We refer to the classical model without mean-field dependence as \emph{VolSwi2} and the method with mean-field taming as \emph{VolSwi25} (Section~\ref{sec:VS25}). We furthermore present simulation methods with correlation assumptions based on economic considerations, including anti-correlation (\emph{VolSwi4} in Section~\ref{sec:VS4}) and decorrelation of interest rates (\emph{VolSwi6} in Section~\ref{sec:VS6}), in order to deal with ``exploding'' rates. In this context we define blow-up, or explosion, as the occurrence of a significant number (i.e., more than $1\%$) of scenarios beyond a certain threshold (i.e., $50\%$ interest) at a given time. This can be checked graphically by looking at the histograms in Figures~\ref{fig:hist_RMW} and \ref{fig:hist} of $L^1$ at times $10$, $20$, $30$ and $40$, or at the excess plots in Figure~\ref{fig:exc}.
\subsection{Mean-field taming beyond threshold}\label{sec:VS25}
One possibility for mitigating explosions is to include a taming factor which depends on the observed scenario variance, $\Psi_t^{m,*}$, at time $t$ under the spot measure. Thus we choose a variance threshold $\tilde\sigma$ and define
\begin{equation}
\label{e:taming}
\lambda^m(t,\Psi_t^{m,*})
=
\sigma_m^{(1)}(t)\exp\Big(-\max \{\Psi_t^{m,*}-\tilde\sigma,0\} / \tilde\sigma \Big)\,,
\end{equation}
where $\sigma_m^{(1)}$ is given by \eqref{eq:vola1}.
In this case the dimension of the Brownian increment in \eqref{eq:euler1} is $d=2$.
For the purposes of the simulation, $\Psi_t^{m,*}$ is given by \eqref{e:Psi_sim}, the relevant parameters are \eqref{e:para}, and the variance threshold is
\begin{equation}
\label{e:V}
\tilde \sigma = \Big( L^{10}_0 \Big)^2\,,
\end{equation}
which is the square of the initial $10$ year forward rate.
This means that the threshold is assumed to correspond to a coefficient of variation (relative standard deviation) of the $10$ year yield of $100\%$.
It would also be possible to choose a different threshold corresponding to different maturities, however we find that this only adds unnecessary complexity.
As expected, this taming reduces the scenario variance, thereby making explosion very unlikely. This effect can be observed by looking at the histograms in Figure~\ref{fig:hist} of $L^1$ at times $10$, $20$, $30$ and $40$, or at the excess plots in Figures \ref{fig:excRMW} and \ref{fig:exc}.
The taming function is such that, once the threshold has been breached, the growth rate of the scenario variance approaches $0$.
Accordingly, there is a strong effect on cap prices. Indeed, once the scenario variance has increased beyond the threshold, the cap prices begin to decrease when compared to the mean-field independent case, compare Figures~\ref{fig:Caps_RMW} and \ref{fig:Caps}. Since we assume that the parameters, \eqref{e:paraRMW} or \eqref{e:para}, are obtained from a calibration routine based on the classical LMM, this poses restrictions on the applicability of the taming structure with respect to derivatives (such as caplets) whose values depend on instantaneous volatilities.
\begin{remark}
Due to Remark~\ref{rem:max}, the method of Section~\ref{sec:VS25} is covered by our existence and uniqueness Theorem~\ref{TH:TH1}. The variants in Sections~\ref{sec:VS6} and \ref{sec:VS4} are included in the numerical study because of their potential practical interest, but the corresponding existence and uniqueness problem is left for future research.
\end{remark}
\subsection{Decorrelation beyond threshold}\label{sec:VS6}
In this section we assume a continuous decorrelation of rates as the observed variance $\Psi^{m,*}$ increases.
Thus, the splitting \eqref{e:split} is realized as
\begin{equation}
\label{e:VS6}
\lambda^m\Big(t,\Psi_t^{m,*}\Big)
=
\left(
\exp(-\Psi_t^{m,*}/ \tilde{\sigma})
i_M \sigma^{(1)}_m(t)
+
\Big(
1 - \exp(-\Psi_t^{m,*}/\tilde{\sigma})
\Big)e_m
\right)/F
\end{equation}
where $e_m$ is the $m$-th standard basis vector in $\mathbb{R}^M$, $i_M: \mathbb{R}^2\rightarrow\mathbb{R}^M$ is the embedding along the first two factors, and $F = F(t,\Psi_t^{m,*})$ is a normalization factor such that $|\lambda^m| = |\sigma^{(1)}_m|$.
In this case the dimension of the Brownian increment in \eqref{eq:euler1} is $d=M$.
If $\Psi_t^{m,*}$ is small compared to $\tilde{\sigma}$, the system behaves approximately according to \eqref{eq:vola1}; if $\Psi_t^{m,*}$ is large compared to $\tilde{\sigma}$, decorrelation sets in.
The decorrelation approach is not quite as reliable as the taming function in reducing the probability of blow-up. However, compared to the standard LMM, the likelihood of explosion is still reduced significantly. See Figures~\ref{fig:hist_RMW} and \ref{fig:hist} or the excess plots in Figures \ref{fig:excRMW} and \ref{fig:exc}.
Note that the instantaneous volatility, $\sigma_m^{(1)}$, remains unchanged in this setting. Therefore, caplet prices are unaffected.
Thus it can be expected, and is numerically verified in Figures~\ref{fig:Caps_RMW} and \ref{fig:Caps}, that cap prices are preserved.
Concerning swaption prices, we observe a good level of replication under normal initial market conditions, represented by parameters~\eqref{e:paraRMW}, but significant deviations under excited conditions, represented by parameters~\eqref{e:para}.
\subsection{Anti-correlation beyond threshold}\label{sec:VS4}
The approach of Section~\ref{sec:VS6} is successful at mitigating explosion and preserving cap prices. However, as noted, there may be undesired effects on swaption prices.
Thus we consider the following anti-correlation prescription:
\begin{equation}
\label{e:anti-cor}
\lambda^m\Big(t,\Psi_t^{m,*}\Big)
=
\left\{
\begin{matrix}
\eqref{eq:vola1} && \quad\textup{if}\quad \Psi_t^{m,*} \le \tilde\sigma \\
|\sigma_m^{(1)}(t)| e_n
&&\quad\textup{if}\quad \Psi_t^{m,*} > \tilde\sigma \textup{ and } m = 2n-1\\
-|\sigma_m^{(1)}(t)| e_n
&&\quad\textup{if}\quad \Psi_t^{m,*} > \tilde\sigma \textup{ and } m = 2n
\end{matrix}
\right\}
\end{equation}
where $n=1,\dots,M/2$ (we assume that $M$ is even) and $e_n$ is the $n$-th standard basis vector.
In this case the dimension of the Brownian increment in \eqref{eq:euler1} is $d=M$.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots5000rebonato_20210605/r_1Y_20.png}
\includegraphics[width=0.49\textwidth]{plots5000rebonato_20210605/r_1Y_30.png}
\includegraphics[width=0.49\textwidth]{plots5000rebonato_20210605/r_1Y_40.png}
\includegraphics[width=0.49\textwidth]{plots5000rebonato_20210605/r_1Y_50.png}
\caption{Histograms of $L^1$ generated according to \eqref{eq:vola1} at times $10$, $20$, $30$ and $40$. Parameters: \eqref{e:paraRMW}.}
\label{fig:hist_RMW}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots5000dummy_20210604/d_1Y_20.png}
\includegraphics[width=0.49\textwidth]{plots5000dummy_20210604/d_1Y_30.png}
\includegraphics[width=0.49\textwidth]{plots5000dummy_20210604/d_1Y_40.png}
\includegraphics[width=0.49\textwidth]{plots5000dummy_20210604/d_1Y_50.png}
\caption{Histograms of $L^1$ generated according to \eqref{eq:vola1} at times $10$, $20$, $30$ and $40$. Parameters: \eqref{e:para}.}
\label{fig:hist}
\end{figure}
Moreover, it is numerically verified that this choice (approximately) preserves caplet prices (Figures~\ref{fig:Caps_RMW} and \ref{fig:Caps}) and swaption prices (Figures~\ref{fig:SwaptionsRMW} and \ref{fig:Swaptions}), and significantly reduces blow-up (Figure~\ref{fig:hist}).
Finally, the time evolution of the percentage of scenarios exceeding $50\%$, resp.\ $100\%$, is shown in Figures~\ref{fig:excRMW} and \ref{fig:exc}.
\begin{figure}
\centering
\includegraphics[width=7cm]{plots5000_20210601/CapPrices_Stike_1_F10.png}
\includegraphics[width=7cm]{plots5000_20210601/CapPrices_Stike_2_F10.png}
\includegraphics[width=7cm]{plots5000_20210601/CapPrices_Stike_5_F10.png}
\includegraphics[width=7cm]{plots5000_20210601/CapPrices_Stike_10_F10.png}
\caption{$1$-year caplet prices: VolSwi2 corresponds to Section~\ref{sec:VS2}, VolSwi25 to Section~\ref{sec:VS25}, VolSwi6 to Section~\ref{sec:VS6} and VolSwi4 to Section~\ref{sec:VS4}. The red line (VolSwi2) corresponds to the model calibration and is viewed as the `truth'. Note the deviation of the VolSwi25 line corresponding to the dampening assumption. Parameters: \eqref{e:paraRMW}}
\label{fig:Caps_RMW}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{plots5000dummy_20210601/CapPrices_Stike_1_F10.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/CapPrices_Stike_2_F10.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/CapPrices_Stike_5_F10.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/CapPrices_Stike_10_F10.png}
\caption{$1$-year caplet prices: VolSwi2 corresponds to Section~\ref{sec:VS2}, VolSwi25 to Section~\ref{sec:VS25}, VolSwi6 to Section~\ref{sec:VS6} and VolSwi4 to Section~\ref{sec:VS4}. The red line (VolSwi2) corresponds to the model calibration and is viewed as the `truth'. Note the deviation of the VolSwi25 line corresponding to the dampening assumption. Parameters: \eqref{e:para}}
\label{fig:Caps}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{plots5000_20210601/SwaptionPrices10_Strike_1_F10.png}
\includegraphics[width=7cm]{plots5000_20210601/SwaptionPrices10_Strike_2_F10.png}
\includegraphics[width=7cm]{plots5000_20210601/SwaptionPrices10_Strike_5_F10.png}
\includegraphics[width=7cm]{plots5000_20210601/SwaptionPrices10_Strike_10_F10.png}
\caption{$10\times 10$ swaption prices: VolSwi2 corresponds to Section~\ref{sec:VS2}, VolSwi25 to Section~\ref{sec:VS25}, VolSwi6 to Section~\ref{sec:VS6} and VolSwi4 to Section~\ref{sec:VS4}. The red line (VolSwi2) corresponds to the model calibration and is viewed as the `truth'. Graphically, it is hardly distinguishable from VolSwi25 and VolSwi4. Parameters: \eqref{e:paraRMW}}
\label{fig:SwaptionsRMW}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{plots5000dummy_20210601/SwaptionPrices10_Strike_1_F10.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/SwaptionPrices10_Strike_2_F10.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/SwaptionPrices10_Strike_5_F10.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/SwaptionPrices10_Strike_10_F10.png}
\caption{$10\times 10$ swaption prices: VolSwi2 corresponds to Section~\ref{sec:VS2}, VolSwi25 to Section~\ref{sec:VS25}, VolSwi6 to Section~\ref{sec:VS6} and VolSwi4 to Section~\ref{sec:VS4}. The red line (VolSwi2) corresponds to the model calibration and is viewed as the `truth'. Graphically, it is hardly distinguishable from VolSwi25 and VolSwi4. Parameters: \eqref{e:para}}
\label{fig:Swaptions}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{plots5000_20210601/excess50.png}
\includegraphics[width=7cm]{plots5000_20210601/excess100.png}
\caption{Excess plots: The time evolution of the percentage of scenarios exceeding $50\%$, resp.\ $100\%$, is shown. VolSwi2 corresponds to Section~\ref{sec:VS2}, VolSwi25 to Section~\ref{sec:VS25}, VolSwi6 to Section~\ref{sec:VS6} and VolSwi4 to Section~\ref{sec:VS4}.
Parameters: \eqref{e:paraRMW}}
\label{fig:excRMW}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{plots5000dummy_20210601/excess50.png}
\includegraphics[width=7cm]{plots5000dummy_20210601/excess100.png}
\caption{Excess plots: The time evolution of the percentage of scenarios exceeding $50\%$, resp.\ $100\%$, is shown. VolSwi2 corresponds to Section~\ref{sec:VS2}, VolSwi25 to Section~\ref{sec:VS25}, VolSwi6 to Section~\ref{sec:VS6} and VolSwi4 to Section~\ref{sec:VS4}.
Parameters: \eqref{e:para}}
\label{fig:exc}
\end{figure}
\newpage
\section{Existence and uniqueness of the solution to the underlying mean-field SDE}\label{Sec:WEll}
In what follows, we require the following notions and definitions:
\begin{itemize}
\item For a given $T >0$, we denote by $\mathscr{C} := C([0,T], \mathbb{R})$ the space of real-valued continuous functions endowed with the supremum norm,
\begin{equation*}
\| f \|_t := \sup_{0 \leq s \leq t} |f_s|
\end{equation*}
for $f \in \mathscr{C}$. The space $\mathscr{C}$ is also called path space.
\item
For $p \geq 2$,
$\mathcal{S}^p([0,T])$ refers to the space of $\mathbb{R}^d$-valued progressively measurable, continuous processes, defined on the interval $[0,T]$, with bounded $p$-th moments, i.e., processes $(X_t)_{0 \leq t \leq T}$ satisfying $\mathbb{E} \left[ \|X \|_{T}^p \right] < \infty$.
\item
The set of probability measures on path-space $\mathscr{C}$ is denoted by
$\mathscr{P}(\mathscr{C})$ and the subset of
square integrable
probability measures is denoted by
\begin{equation*}
\mathscr{P}_2(\mathscr{C})
=
\Big\{
\mu\in\mathscr{P}(\mathscr{C}):
\mathbb{E}^{\mu}\left[ \| X \|^2_T \right] < \infty
\Big\}.
\end{equation*}
\item
As a metric on $\mathscr{P}_2(\mathscr{C})$, we use the following variant of the Wasserstein distance, see e.g., \cite{CD}:
for $\mu, \nu \in \mathscr{P}_2(\mathscr{C})$ define
\begin{equation}
\label{e:W1}
\mathbb{W}^{(2), \mathscr{C}}_T(\mu, \nu)
:=
\left(\inf_{\pi \in \Pi(\mu,\nu)}
\int_{\mathscr{C} \times \mathscr{C}} \|x-y\|_T^2 \pi(\mathrm{d}x,\mathrm{d}y) \right)^{1/2},
\end{equation}
where $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and
$\nu$, i.e., $\pi \in \Pi(\mu,\nu)$ if and only if
$\pi(\cdot\times\mathscr{C})=\mu(\cdot)$ and $\pi(\mathscr{C}\times\cdot)=\nu(\cdot)$.
We recall the definition of the standard $L_2$ Wasserstein distance (a numerical sketch follows this list): for any $\mu, \nu \in \mathscr{P}_2(\mathbb{R}^d)$, we define
\begin{equation}
\label{e:W2}
\mathbb{W}_{2}(\mu, \nu) := \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} |x-y |^2 \pi(\mathrm{d}x,\mathrm{d}y) \right)^{1/2}.
\end{equation}
\end{itemize}
Since the coefficients of our MF-LMM are not Lipschitz continuous with respect to the $L_2$ Wasserstein distance as defined in \eqref{e:W1} and \eqref{e:W2}, common results on the existence and uniqueness of solutions to the corresponding mean-field SDEs do not apply. Hence, we need to show in full generality the existence and uniqueness of solutions for a class of mean-field SDEs involving such non-standard coefficients.
Consider, for a given time horizon $[0,T]$, the following mean-field SDE on $\R^d$
\begin{equation}\label{EE1}
\mathrm{d} X_t
= b \left(t,X_t,\mu_t^{X} \right)\mathrm{d} t
+ \sigma(t,X_t,\mu_t^{X})\,\mathrm{d} W_t,
\quad X_0=\xi,
\end{equation}
where $\mu_t^{X}$ is the marginal law of $X$ at time $t \geq 0$,
$b:[0,T] \times \Omega \times \R^d \times \mathscr{P}_2(\R^d) \rightarrow \R^{d}$ and
$\sigma: [0,T] \times \Omega \times \R^d\times \mathscr{P}_2(\R^d)\rightarrow \R^{d} \otimes \R^{m}$
are progressively measurable maps satisfying the assumptions stated below,
$\xi$ is an $\R^d$-valued random variable with bounded $p$-th moments (for a given $p \geq 2$), and $(W_t)_{t\ge0}$ is an
$m$-dimensional Brownian motion on the filtered probability space
$(\OO,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\P)$.
Here, we consider the case that $b$ and $\sigma$ are decomposable as
\begin{equation}
\label{e:dec}
b(t,x,\mu)
= b_1(t,x,\mu) + xg(\mu)h_1(t,\omega),
\quad
\sigma(t,x,\mu)
= \sigma_1(t,x,\mu) + h_2(t,\omega)x^{T}g(\mu),
\end{equation}
where $g: \mathscr{P}_2(\R^d) \rightarrow \R$, $h_1:[0,T] \times \Omega \rightarrow {\mathbb R}$, $h_2:[0,T] \times \Omega \rightarrow {\mathbb R}^m$, and we will assume that, for any $x,y\in\R^d $, any $t \in [0,T]$, and $\mu,\nu\in\mathscr{P}_2(\R^d)$:
\begin{enumerate}
\item[({\bf A}$_b^1$)] There exists a constant $L_b^1>0$ such that
\begin{align*}
|b_1(t,x,\mu)-b_1(t,y,\nu)| \le L_b^1(|x-y|+\mathbb{W}_2(\mu,\nu)).
\end{align*}
\item[({\bf A}$_\sigma^1$)] There exists a constant $L_\sigma^1>0$ such that
\begin{equation*}
\|\sigma_1(t,x,\mu)-\sigma_1(t,y,\nu)\| \le L_\sigma^1(|x-y|+\mathbb{W}_2(\mu,\nu)).
\end{equation*}
\item[({\bf A}$_{b\sigma}^1$)] The functions $h_1, h_2, g$ are uniformly bounded and there exists a constant $L_1>0$ such that
\begin{align*}
|g(\mu) - g(\nu)| \leq L_1 \mathbb{E}\left[ |X+Y|^2 \right] \mathbb{W}_2(\mu,\nu),
\end{align*}
where $X$ and $Y$ are random variables with distributions $\mu$ and $\nu$, respectively.
\item[({\bf A}$_{b\sigma}^2$)] There exists a constant $L_2>0$ (independent of $\mu$) such that
\begin{align*}
|b_1(t,x,\mu)| + \|\sigma_1(t,x,\mu)\| \leq L_2(1 + |x|).
\end{align*}
\end{enumerate}
\begin{remark}
Note that the decompositions~\eqref{e:dec} are compatible with the form of the proposed volatility structures in Section~\ref{sec:vol_str}.
\end{remark}
In what follows, we will prove that the mean-field SDE (\ref{EE1}), with deterministic initial data, indeed has a unique strong solution. We remark that generic constants $C>0$ might change their value from line to line in a chain of inequalities.
\begin{thm}[Existence of a unique solution]\label{TH:TH2}
Let $X_0=x$, for some given value $x \in {\mathbb R}^d$.
Further, let assumptions ({\bf A}$_b^1$)--({\bf A}$_\sigma^1$) and ({\bf A}$_{b\sigma}^1$)--({\bf A}$_{b\sigma}^2$) be satisfied.
Then, the mean-field SDE~\eqref{EE1},
\begin{equation*}
\mathrm{d} X_t
= b \left(t,X_t,\mu_t^{X} \right)\mathrm{d} t
+ \sigma(t,X_t,\mu_t^{X})\,\mathrm{d} W_t,
\quad X_0=x,
\end{equation*}
has a unique strong solution in $\mathcal{S}^{p}([0,T])$, for any $p \geq 2$.
\end{thm}
\begin{proof}
For any given $\mu \in \mathscr{P}_2(\mathscr{C})$, we can reinterpret (\ref{EE1}) as a classical SDE
\begin{equation}\label{eq:Model1mu}
\mathrm{d} X_t^{\mu} = b^{\mu}(X_t^{\mu},t) \mathrm{d}t + \sigma^{\mu}(X_t^{\mu},t) \mathrm{d}W_t, \quad X^{\mu}_0= x \in \mathbb{R}^d,
\end{equation}
where
\begin{align*}
b^{\mu}(X_t^{\mu},t) & := b(t,X_t^{\mu}, \mu_t) = b_1(t,X_t^{\mu}, \mu_t) + X_t^{\mu}g(\mu_t)h_1(t) \\
\sigma^{\mu}(X_t^{\mu},t) & := \sigma(t,X_t^{\mu}, \mu_t) = \sigma_1(t,X_t^{\mu}, \mu_t) + h_2(t)X_t^{\mu,T}g(\mu_t)
\end{align*}
i.e., the coefficients do not depend on the law of $X_t^{\mu}$. Hence it can be seen as a classical (time-dependent) SDE, which has a unique strong solution in $\mathcal{S}^p([0,T])$, for $p \geq 2$ (see, e.g., \cite{XM}), i.e., there is a constant $C_p >0$ (independent of $\mu$, due to ({\bf A}$_{b\sigma}^1$)--({\bf A}$_{b\sigma}^2$)), such that
\begin{align*}
\mathbb{E}\left[\sup_{0 \leq t \leq T} |X_t^{\mu}|^p \right] \leq C_p.
\end{align*}
In a next step, we introduce the map $\Phi: \mathscr{P}_2(\mathscr{C}) \rightarrow \mathscr{P}_2(\mathscr{C})$ by
\begin{equation*}
\Phi(\mu) = \text{Law}(X^{\mu}),
\end{equation*}
i.e., for a fixed $\mu$, we solve the SDE (\ref{eq:Model1mu}) and set $\Phi(\mu)$ to be the law of the solution.
Hence, $(X,\mu)$ is a solution of (\ref{EE1}) if and only if
\begin{equation*}
X=X^{\mu} \text{ and } \mu = \Phi(\mu).
\end{equation*}
Consequently, we need to prove that $\Phi$ admits a unique fixed point, in order to show existence and uniqueness of a solution of (\ref{EE1}).
Using the Lipschitz assumptions and applying the Burkholder-Davis-Gundy inequality yields, for a constant $C>0$ (changing its value from line to line),
\begin{align*}
\mathbb{W}^{(2),\mathscr{C}}_t(\Phi(\mu),\Phi(\nu))^2 &\leq \mathbb{E}\left[\sup_{0 \leq s \leq t} |X_s^{\mu} - X_s^{\nu} |^2 \right] \\
& \leq C \mathbb{E} \left[\int_{0}^{t} |b_1(s,X_s^{\mu},\mu_s) - b_1(s,X_s^{\nu}, \nu_s)|^2 \mathrm{d}s \right] \\
& \quad + C \mathbb{E} \left[\int_{0}^{t} |X_s^{\mu}g(\mu_s)h_1(s) - X_s^{\nu} g(\nu_s)h_1(s) |^2 \mathrm{d}s \right] \\
& \quad + C \mathbb{E} \left[\int_{0}^{t} \|h_2(s)X_s^{\mu,T}g(\mu_s) - h_2(s)X_s^{\nu,T}g(\nu_s) \|^2 \mathrm{d}s \right] \\
& \quad + C \mathbb{E} \left[ \sup_{0 \leq s \leq t} \left| \int_{0}^{s} ( \sigma_1(u,X_u^{\mu},\mu_u) - \sigma_1(u,X_u^{\nu},\mu_u)) \mathrm{d}W_u \right|^2 \right] \\
& \leq C \mathbb{E} \left[ \int_{0}^{t} (\| X^{\mu} - X^{\nu} \|^2_s + \mathbb{W}_2(\mu_s,\nu_s)^2 )\mathrm{d}s \right] \\
& \quad + C \mathbb{E} \left[\int_{0}^{t} |X_s^{\mu}g(\mu_s) - X_s^{\nu} g(\nu_s) |^2 \mathrm{d}s \right].
\end{align*}
Note that, employing the Lipschitz property and boundedness of $g$, ({\bf A}$_{b\sigma}^1$) and the fact that
\begin{equation*}
\mathbb{E}\left[\sup_{0 \leq t \leq T} |X_t^{\mu}|^p \right] < \infty,
\end{equation*}
where we recall that the bound is uniform in $\mu \in \mathscr{P}_2({\mathbb R}^d)$, due to ({\bf A}$_{b\sigma}^1$)--({\bf A}$_{b\sigma}^2$), we obtain the estimate
\begin{align*}
\mathbb{E} \left[\int_{0}^{t} |X_s^{\mu}g(\mu_s) - X_s^{\nu} g(\nu_s) |^2 \mathrm{d}s \right]
& \leq
C \mathbb{E} \left[ \int_{0}^{t} \| X^{\mu} - X^{\nu} \|^2_s \mathrm{d}s \right] + C \int_{0}^{t} \mathbb{E}\left[ |X_s^{\nu}|^2 \right] \mathbb{W}_2(\mu_s,\nu_s)^2 \mathrm{d}s \\
& \leq C_{\mu,\nu} \mathbb{E} \left[ \int_{0}^{t} (\| X^{\mu} - X^{\nu} \|^2_s + \mathbb{W}_2(\mu_s,\nu_s)^2 )\mathrm{d}s \right],
\end{align*}
where we used the fact that $\mathbb{E}[|X_s + Y_s|^2] \leq C_{\mu,\nu}$, with $X_s$ and $Y_s$ having distribution $\mu_s$ and $\nu_s$, respectively, and the boundedness of $g$.
Consequently, Gronwall's inequality implies
\begin{equation*}
\mathbb{E}\left[\sup_{0 \leq s \leq t} |X_s^{\mu} - X_s^{\nu} |^2 \right] \leq C_{\mu,\nu} \int_{0}^{t} \mathbb{W}_2(\mu_s,\nu_s)^2 \mathrm{d}s.
\end{equation*}
Hence, we arrive at
\begin{align}\label{eq:PIter}
\mathbb{W}^{(2),\mathscr{C}}_t(\Phi(\mu),\Phi(\nu))^2 & \leq \mathbb{E} \left[\sup_{0 \leq s \leq t} |X_s^{\mu} - X_s^{\nu} |^2 \right]
\leq C_{\mu,\nu} \int_{0}^{t} \mathbb{W}_2(\mu_s,\nu_s)^2 \mathrm{d}s
\leq C_{\mu,\nu} \int_{0}^{t} \mathbb{W}^{(2),\mathscr{C}}_s(\mu,\nu)^2 \mathrm{d}s,
\end{align}
for any $t \in [0,T]$ with a constant $C_{\mu,\nu}>0$ depending on $\mu$ and $\nu$.
Uniqueness follows from the previous inequality and another application of Gronwall's inequality. Existence is a consequence of a standard Picard-iteration type argument. We start with an arbitrary $\mu^0 \in \mathscr{P}_2(\mathscr{C})$ and then define the iterates $\mu^{k+1} = \Phi(\mu^k)$, for $k \geq 0$. This sequence forms a Cauchy sequence, which can be shown by iterating inequality (\ref{eq:PIter}) sufficiently often, with a limit that is a fixed point of $\Phi$.
Therefore the SDE (\ref{EE1}) has a unique strong solution in $\mathcal{S}^p([0,T])$, for $p \geq 2$. Also note that we have
\begin{equation*}
\sup_{n \geq 0} \sup_{0 \leq t \leq T} \int_{{\mathbb R}^d} |x|^p \mu^{X^{n}}_t(\mathrm{d}x) < \infty,
\end{equation*}
for $p \geq 2$; this means that we have a uniform moment bound across all Picard steps. Therefore, we conclude that the constant $C_{\mu,\nu}$ in \eqref{eq:PIter} only depends on the choice of $\mu^{0}$.
\end{proof}
\begin{remark}[Link to the existence of MF-LMMs]\label{rem:max}
We note that if both the drift and the diffusion coefficient have the form
\begin{equation*}
X_t g(t)e^{-\int_{{\mathbb R}} x^2 \mu_t(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \mu_t(\mathrm{d}x) \right)^2},
\end{equation*}
where $\mu_t$ is the law of $X_t$ and $g: [0,T] \rightarrow {\mathbb R}$ is a bounded function, then the assumptions of the above result are satisfied. To be precise, using the elementary inequality
\begin{equation}\label{eq:usefull}
\left| \exp(x) - \exp(y) \right| \leq |x-y| \left(\exp(x) + \exp(y) \right)
\end{equation}
for all $x,y \in {\mathbb R}$, we deduce, for any coupling $\pi$ of $\mu$ and $\nu$, that
\begin{align}
&\left|e^{-\int_{{\mathbb R}} x^2 \mu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \mu(\mathrm{d}x) \right)^2} - e^{-\int_{{\mathbb R}} x^2 \nu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \nu(\mathrm{d}x) \right)^2} \right| \nonumber \\
& \leq \left |-\int_{{\mathbb R}} x^2 \mu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \mu(\mathrm{d}x) \right)^2 + \int_{{\mathbb R}} x^2 \nu(\mathrm{d}x) - \left(\int_{{\mathbb R}} x \nu(\mathrm{d}x) \right)^2 \right| \times\nonumber \\
& \quad \times \left|e^{-\int_{{\mathbb R}} x^2 \mu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \mu(\mathrm{d}x) \right)^2} + e^{-\int_{{\mathbb R}} x^2 \nu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \nu(\mathrm{d}x) \right)^2} \right| \label{eq:usefullstart}\\
& \leq C \left| \int_{{\mathbb R}^2} |x -y||x+y| \pi(\mathrm{d}x, \mathrm{d}y) \right| + \left| \int_{{\mathbb R}^2} |x-y| \pi(\mathrm{d}x, \mathrm{d}y)\right| \left| \int_{{\mathbb R}^2} |x+y| \pi(\mathrm{d}x, \mathrm{d}y) \right|\nonumber \\
& \leq C \left( \int_{{\mathbb R}^2} |x -y|^2 \pi(\mathrm{d}x, \mathrm{d}y) \right)^{1/2} \left( \int_{{\mathbb R}^2} |x + y|^2 \pi(\mathrm{d}x, \mathrm{d}y) \right)^{1/2} + \left| \int_{{\mathbb R}^2} |x-y| \pi(\mathrm{d}x, \mathrm{d}y)\right| \left| \int_{{\mathbb R}^2} |x+y| \pi(\mathrm{d}x, \mathrm{d}y) \right|,\nonumber
\end{align}
for some constant $C>0$.
Since the above inequality holds for any coupling $\pi$, we also have
\begin{align}\label{eq:WA}
&\left|e^{-\int_{{\mathbb R}} x^2 \mu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \mu(\mathrm{d}x) \right)^2} - e^{-\int_{{\mathbb R}} x^2 \nu(\mathrm{d}x) + \left(\int_{{\mathbb R}} x \nu(\mathrm{d}x) \right)^2} \right| \nonumber \\
& \leq C \mathbb{W}_2(\mu, \nu) \left( \left( \int_{{\mathbb R}^2} |x + y|^2 \pi(\mathrm{d}x, \mathrm{d}y) \right)^{1/2} + \int_{{\mathbb R}^2} |x+y| \pi(\mathrm{d}x, \mathrm{d}y) \right).
\end{align}
In particular, we also have in this case, for each Picard-iteration, a moment-bound of the SDE with fixed measure (\ref{eq:Model1mu}), which is independent of the present Picard-step. Based on \eqref{eq:usefull} one can derive
\begin{equation*}
\left|\exp\{-x^+\}-\exp\{-y^+\}\right|\leq |x-y|\left(\exp\{-x\}+\exp\{-y\}\right),
\end{equation*}
for $x,\,y\in{\mathbb R}$ and $x^+=\max\{x,0\}$. This inequality fortunately leads us to
\begin{align*}
&\left| e^{-\left[\int_{{\mathbb R}}x^2 \mu(\mathrm{d}x)-\left(\int_{{\mathbb R}}x \mu(\mathrm{d}x)\right)^2-\alpha\right]^+}-e^{-\left[\int_{{\mathbb R}}x^2 \nu(\mathrm{d}x)-\left(\int_{{\mathbb R}}x \nu(\mathrm{d}x)\right)^2-\alpha\right]^+}\right|\\
&\leq\left|\int_{{\mathbb R}}x^2 \mu(\mathrm{d}x)-\left(\int_{{\mathbb R}}x \mu(\mathrm{d}x)\right)^2-\int_{{\mathbb R}}x^2 \nu(\mathrm{d}x)+\left(\int_{{\mathbb R}}x \nu(\mathrm{d}x)\right)^2\right|\times\\
&\quad\times\left(e^{-\int_{{\mathbb R}}x^2 \mu(\mathrm{d}x)+\left(\int_{{\mathbb R}}x \mu(\mathrm{d}x)\right)^2+\alpha}+e^{-\int_{{\mathbb R}}x^2 \nu(\mathrm{d}x)+\left(\int_{{\mathbb R}}x \nu(\mathrm{d}x)\right)^2+\alpha}\right),
\end{align*}
for some parameter $\alpha>0$. This resembles \eqref{eq:usefullstart}, such that we can draw the same conclusions as in \eqref{eq:WA}. Terms of this form are relevant for a distribution-dependent LIBOR market model, see (\ref{eq:diffusion}).
\end{remark}
\begin{remark}[Complementing the proof of Theorem~\ref{TH:TH1}]\label{rem:ass}
In the proof of Theorem \ref{TH:TH1}, a coefficient depending on $\tilde{\mu}_t^i$, the law of
\begin{equation*}
X_t:=(L_t^i,Z^{i,N}_t, \ldots, Z^{N-1,N}_t),
\end{equation*}
under $\mathcal{Q}^{N}$, appears. We analyse this coefficient for $i=N-1$ (similarly for other values of $i$). As shown in Section~\ref{sec:model}, this coefficient is
\begin{align*}
\tilde{\sigma}_{N-1}(t,\tilde{\mu}_t^{N-1}) = \sigma_{N-1}^{(1)}(t) e^{-\mathbb{E}^{\mathcal{Q}^N} \left[ \left( L_t^{N-1} \right)^2 Z^{N-1,N}_t \right] + \left(\mathbb{E}^{\mathcal{Q}^N} \left[L_t^{N-1} Z^{N-1,N}_t \right] \right)^2}.
\end{align*}
For two different measures $\tilde{\mu}^i,\tilde{\nu}^i \in \mathscr{P}_2(\mathscr{C})$, with marginals $\tilde{\mu}_t^i,\tilde{\nu}_t^i \in \mathscr{P}_2({\mathbb R}^2)$, we have the estimate
\begin{align*}
& \left|e^{-\mathbb{E}^{\mathcal{Q}^N,\tilde{\mu}^i} \left[ \left( L_t^{N-1} \right)^2 Z^{N-1,N}_t \right] + \left(\mathbb{E}^{\mathcal{Q}^N,\tilde{\mu}^i} \left[L_t^{N-1} Z^{N-1,N}_t \right] \right)^2} - e^{-\mathbb{E}^{\mathcal{Q}^N,\tilde{\nu}^i} \left[ \left( L_t^{N-1} \right)^2 Z^{N-1,N}_t \right] + \left(\mathbb{E}^{\mathcal{Q}^N,\tilde{\nu}^i} \left[L_t^{N-1} Z^{N-1,N}_t \right] \right)^2} \right| \\
& \leq C \mathbb{W}_2(\tilde{\mu}^{i}_t, \tilde{\nu}^{i}_t),
\end{align*}
where $C>0$ depends on the moments (with respect to $\tilde{\mu}_t^i$, and $\tilde{\nu}_t^i$) of $L_t^{N-1}$ and $Z^{N-1,N}_t$. Also in this case the moments are uniformly bounded across all Picard-iterations and the proof of Theorem \ref{TH:TH1} also applies to this framework.
Furthermore, from the second part of Remark \ref{rem:max} we have that for the case of a measure dependence of the form,
$$
\exp \left\{
-\left[\mathbb{E}^{\mathcal{Q}^N,\tilde{\mu}^i} \left[ \left( L_t^{N-1} \right)^2 Z^{N-1,N}_t \right] - \left(\mathbb{E}^{\mathcal{Q}^N,\tilde{\mu}^i} \left[L_t^{N-1} Z^{N-1,N}_t \right] \right)^2-\alpha \right] ^+
\right\},
$$
the conclusions hold true as well.
\end{remark}
\section{Conclusions}
We have introduced MF-LMM, a mean-field extension of the classical LIBOR Market Model. The main motivation for this is the reduction of blow-up probability which is particularly relevant in the context of the valuation of long term guarantees. In this work we have studied the following aspects of MF-LMM:
\begin{enumerate}
\item[(1)]
Theorem~\ref{TH:TH1} proves existence and uniqueness of the MF-LMM, based on the results of Section~\ref{Sec:WEll}.
\item[(2)]
Theorem~\ref{thm:cap} contains a Black formula for a given measure flow in the mean-field setting.
\item[(3)]
Section~\ref{sec:mfCal} adapts the Picard iteration construction of Theorem~\ref{TH:TH2} to devise a calibration algorithm based on Theorem~\ref{thm:cap}. The feasibility of this algorithm is shown in a numerical example.
\item[(4)] In Section~\ref{sec:numerics}, we use an Euler-Maruyama discretization to simulate several variants of the MF-LMM. The numerical examples demonstrate (Figures~\ref{fig:hist_RMW} and \ref{fig:hist}) that a judicious choice of the mean-field dependence can lead to a reduction of the blow-up probability (i.e., of the number of scenarios breaching a certain predefined threshold), as expected.
\item[(5)]
The numerical approach of Section~\ref{sec:numerics} is tailor-made to facilitate the extension of classical LMMs to the mean-field realm. Thus, given the above-mentioned well-posedness result, it can readily be applied to augment existing models -- whenever exploding rates are an issue.
\end{enumerate}
Irrespective of blow-up considerations, one may also envisage using the method of Theorem~\ref{thm:cap}, in conjunction with Section~\ref{sec:mfCal}, to obtain additional degrees of freedom in the calibration process, e.g.\ if a fit to out-of-the-money caps is desirable. A possible topic for future studies is the extension of the Heath-Jarrow-Morton methodology to the mean-field setting.
\section*{Funding}
S. Desmettre is supported by the Austrian Science Fund (FWF) project F5507-N26, which is part of the Special Research Program \textit{Quasi-Monte Carlo Methods: Theory and Applications}.\\
S. Thonhauser is supported by the Austrian Science Fund (FWF) project P33317.
\section*{Acknowledgements}
We wish to thank Wolfgang Stockinger for many fruitful and extensive discussions, in particular concerning the existence and uniqueness of the underlying mean-field SDE.
|
1,108,101,563,199 | arxiv | \section{Introduction}
In the last decade, thanks to the discovery and study of afterglow emission and
host galaxies, it has been possible to estimate the redshift of several
tens of Gamma--Ray Bursts (GRBs), and thus to derive
their distance scale, luminosities and
other intrinsic properties.
Among these, the correlation between the cosmological rest--frame $\nu$F$_\nu$ spectrum peak
energy, $E_{\rm p,i}$, and the isotropic equivalent radiated energy, $E_{\rm iso}$,
is one of the most intriguing and robust. Indeed, as shown initially by
\cite{Amati02,Amati03,Ghirlanda04a,Friedman05} and, more recently,
by \cite{Amati06}, all
long GRBs with known redshift and estimated $E_{\rm p,i}${} are consistent with
the relation $E_{\rm p,i}${} = K $\times$ $E_{\rm iso}^{m}$
(K $\sim$75--110 and $m$ $\sim$0.4--0.6, with $E_{\rm p,i}${} in keV and
$E_{\rm iso}${} in units of 10$^{52}$ erg),
with the only exception of GRB\,980425 (which is anyway
a peculiar event in several other respects). The $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation holds
from the brightest GRBs to the weakest and softest X--Ray Flashes (XRFs)
and is characterized by a scatter in log($E_{\rm p,i}$) of $\sim$0.2 dex
(by assuming a
Gaussian distribution of the deviations).
The implications and uses of the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation include
prompt emission physics, jet geometry and structure, testing of GRB/XRF synthesis and
unification
models, pseudo--redshift
estimators, cosmology (when additional observables, e.g.\ the break time of the
optical afterglow light curve or the high signal time scale, are included
\cite{Ghirlanda04a,Liang05,Firmani06}); see \cite{Amati06} for a review.
In the recent years
there has been a debate, mainly based on BATSE GRBs without known redshift,
about the impact of selection effects on the
sample of GRBs with known redshift and thus on the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation.
Based on the analysis of BATSE GRBs without known redshift, different conclusions
were reported (e.g., \cite{Nakar05,Band05,
Ghirlanda05,Pizzichini06}), but with the general agreement that
{\it Swift}{} would allow us to test the correlation in a more stringent way:
\begin{itemize}
\item{with respect, e.g., to BATSE, the BAT GRB detector has a sensitivity
which is comparable
for GRBs with peak energies above $\sim$100 keV, but much better for softer
events (e.g., \cite{Band03}), thus reducing selection effects at the detection stage;}
\item{the fast pointing capability permits few arcsec localizations with XRT and their
dissemination to optical telescopes as early
as $\sim$100--200\,s from GRB onset, thus reducing selection effects at the redshift
estimate stage;
this is clearly demonstrated by the fact that redshift estimates for {\it Swift}{} GRBs are
more frequent
and differently distributed with respect to the past
(see, e.g., \cite{Berger05,Jakobsson06}).
}
\end{itemize}
In addition, thanks to the fast pointing capability, it is possible in some cases,
to follow the later
and softer phase of the prompt emission with XRT, thus providing a more
accurate estimate of the peak energy (as in the case of GRB\,060218
\cite{Campana06}).
From the point of view of testing the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation, the only drawback
of {\it Swift}{} is that BAT, due to its limited energy band ($\sim$15--350 keV),
provides an estimate of the spectral peak energy only for a small fraction (15--20\%)
of the events.
Fortunately, this is partially compensated by the simultaneous detection of several
{\it Swift}{} GRBs by broad band experiments like HETE--2, Konus/Wind and, more recently,
RHESSI and Suzaku/WAM, all capable of providing estimates of $E_{\rm p,i}${} for most
of the events detected by them.
\begin{table}
\caption{$E_{\rm p,i}${} and $E_{\rm iso}${} values for {\it Swift}{} GRBs/XRFs with known redshift and firm
estimates of (or upper/lower limits to) $E_{\rm p,i}$. All values have been taken from the literature or
computed based on published spectral parameters and fluences, see last
column for references (an asterisk indicate that the references can be found
at http://swift.gsfc.nasa.gov/docs/swift/archive/grb\_table/).
For GRB\,050724, the first line corresponds to the short pulse, the second
line to the soft tail.
}
\begin{tabular}{llllllc}
\hline
GRB & Type & $z$ & $E_{\rm p,i}${} & $E_{\rm iso}${} & Instruments & Refs. \\
& & & (keV) & (10$^{52}$ erg) & & \\
\hline
050315 & LONG & 1.949 & $<$89 & 4.9$\pm$1.5
& SWI & \cite{Amati06} \\
050318 & LONG & 1.44 & 115$\pm$25 & 2.55$\pm$0.18
& SWI & \cite{Amati06} \\
050401 & LONG & 2.90 & 467$\pm$110 & 41$\pm$8 &
KON & * \\
050416a & LONG & 0.650 & 25.1$\pm$4.2 & 0.12$\pm$0.02
& SWI & \cite{Amati06} \\
050505 & LONG & 4.27 & $>$274 & 40$\pm$10
& SWI & * \\
050509b & SHORT & 0.22 & $>$183 & 0.0007$\pm$0.0004
& SWI & * \\
050525 & LONG & 0.606 & 127$\pm$10 & 3.39$\pm$0.17
& SWI & \cite{Amati06} \\
050603 & LONG & 2.821 & 1333$\pm$107 & 70$\pm$5 &
KON & \cite{Amati06} \\
050724 & SHORT & 0.258 & $>$126 & 0.03$\pm$0.01
& SWI & * \\
050724 & SHORT & 0.258 & 11$\pm$2 & 0.035$\pm$0.007
& SWI & * \\
050730 & LONG & 3.967 & $>$750 & 26$\pm$19
& SWI & \cite{Perri06} \\
050813 & SHORT & 0.72 & $>$344 & 0.09$\pm$0.06
& SWI & * \\
050824 & LONG & 0.83 & $<$23 & 0.13$\pm$0.029
& HET & \cite{Amati06} \\
050904 & LONG & 6.29 & $>$1100 & 193$\pm$127
& SWI & \cite{Amati06} \\
050922c & LONG & 2.198 & 415$\pm$111 & 6.1$\pm$2.0 &
HET & \cite{Amati06} \\
051022 & LONG & 0.80 & 754$\pm$258 & 63$\pm$6 &
HET/KON & \cite{Amati06} \\
051109 & LONG & 2.346 & 539$\pm$200 & 7.5$\pm$0.8
& KON & \cite{Amati06} \\
051221a & SHORT & 0.5465 & 622$\pm$35 & 0.29$\pm$0.06
& KON & \cite{Amati06} \\
060115 & LONG & 3.53 & 281$\pm$93 & 9.1$\pm$1.5
& SWI & * \\
060124 & LONG & 2.296 & 784$\pm$285 & 48$\pm$7
& KON & * \\
060206 & LONG & 4.048 & 380$\pm$95 & 5.8$\pm$0.6
& SWI & * \\
060218 & LONG & 0.0331 & 4.9$\pm$0.3 & 0.0062$\pm$0.0003
& SWI & \cite{Amati06b} \\
060418 & LONG & 1.489 & 572$\pm$143 & 15$\pm$3
& KON & * \\
060502b & SHORT & 0.287 & $>$193 & 0.025$\pm$0.020
& SWI & * \\
060505 & SHORT?& 0.089 & $>$160 & 0.003$\pm$0.001
& SWI & \cite{Amati06b} \\
060614 & LONG & 0.125 & 55$\pm$45 & 0.25$\pm$0.1
& KON & \cite{Amati06b} \\
060707 & LONG & 3.425 & 290$\pm$27 & 8.0$\pm$1.5
& SWI & * \\
060927 & LONG & 5.6 & 470$\pm$120 & 10$\pm$2
& SWI & * \\
061007 & LONG & 1.261 & 890$\pm$124 & 100$\pm$10
& KON & \cite{Mundell06} \\
\hline
\end{tabular}
\vspace{-0.8cm}
\end{table}
\section{Long GRBs}
The sample of {\it Swift}{} long GRBs with known redshift and published spectral peak energy
consists of 17 events; for 5 further events, upper/lower limits to $E_{\rm p,i}${} have been reported.
These events are listed in Table 1;
for $\sim$half of the events, the values of (or upper/lower limits to) $E_{\rm p,i}${} and $E_{\rm iso}${} are taken from \cite{Amati06}; for
the others, they have been computed based on published spectral information
following the method reported in \cite{Amati02,Amati06} and
by assuming a cosmology with $\Omega_m$ = 0.3, $\Omega_{\Lambda}$ = 0.7 and H$_0$ =
65--70 km s$^{-1}$ Mpc$^{-1}$ (i.e. the uncertainties on $E_{\rm iso}${} take into account the
uncertainty in the value of H$_0$). As can be seen in Figure 1, all these events (filled circles) are fully consistent with the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation as characterized by the best fit
power--law and logarithmic scatter derived by \cite{Amati06}. The fit with a
power--law of the {\it Swift}{} sample of 17 long GRBs with firm estimates of $z$ and $E_{\rm p,i}${}
gives an index of 0.58$\pm$0.01 and a normalization of 86$\pm$3 ($E_{\rm p,i}${} in keV and
$E_{\rm iso}${} in units of 10$^{52}$ erg). Again consistently with the findings based on
data from previous
satellites, although the correlation is very significant (the Spearman's rank correlation
coefficient between log($E_{\rm p,i}${}) and log($E_{\rm iso}$) of the 17 {\it Swift}{} events is $\sim$0.93), the
chi--square value obtained with the power--law fit is high (47/15), confirming the
presence of extra--Poissonian dispersion. As can be seen in Figure 1,
the scatter of the data in terms of log($E_{\rm p,i}${}) around the best-fit power--law can be
fitted by a Gaussian with $\sigma$$\sim$0.16, slightly lower than the $\sim$0.2 found by
\cite{Amati06}.
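As an illustration, the conversion from observed to rest-frame quantities is
$E_{\rm p,i}=E_{\rm p,obs}(1+z)$ and $E_{\rm iso}=4\pi d_L^2(z)S_{\rm bolo}/(1+z)$;
the following minimal sketch performs it in Python (the observed peak energy and
fluence below are placeholders for illustration, not values from Table 1):
\begin{verbatim}
# Minimal sketch: rest-frame peak energy and isotropic-equivalent
# energy from observer-frame quantities (illustrative inputs only).
import math
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # Omega_Lambda = 0.7 follows

z      = 1.949      # redshift
ep_obs = 30.0       # observed nu-F-nu peak energy [keV] (placeholder)
s_bolo = 1.0e-6     # bolometric fluence [erg cm^-2] (placeholder)

ep_i = ep_obs * (1.0 + z)                            # E_p,i [keV]
d_l  = cosmo.luminosity_distance(z).to(u.cm).value   # d_L [cm]
e_iso = 4.0 * math.pi * d_l**2 * s_bolo / (1.0 + z)  # E_iso [erg]

print(f"E_p,i = {ep_i:.1f} keV")
print(f"E_iso = {e_iso / 1e52:.2f} x 10^52 erg")
\end{verbatim}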
Finally, as can be seen in Figure 1, peculiar {\it Swift}{} events like the sub--energetic,
very close GRB\,060218/SN2006aj and
GRB\,060614, a long GRB not associated with a hypernova, are both consistent
with the correlation,
evidence that gives important clues for the understanding of their
nature (as shown and discussed by
\cite{Amati06b}).
\begin{figure}
\centerline{\includegraphics[width=11cm]{amatil_fig1.ps}}
\caption{Location in the $E_{\rm p,i}$ -- $E_{\rm iso}${} plane of {\it Swift}{} long (filled circles) and short (diamonds)
GRBs with known redshift and available estimates of $E_{\rm p,i}$. The long GRB sample
also includes 4 GRBs for which upper/lower limits to $E_{\rm p,i}${} have been reported; the lower limits to $E_{\rm p,i}${} for short GRBs have been estimated based on the BAT spectral
photon indices (see text).
The continuous line is the
power--law best fitting the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation and the dashed lines
delimitate the 2$\sigma$ confidence region (from \cite{Amati06}).}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=7.3cm]{amatil_fig2.ps}}
\caption{Dispersion of the values of log($E_{\rm p,i}$) of 17 long Swift GRBs with respect to the
power--law best fitting the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation, modeled with a Gaussian
($\sigma$$\sim$0.16).
}
\end{figure}
\section{Short GRBs}
An important breakthrough of the {\it Swift}{} era is the discovery of afterglow emission from
short GRBs, leading to the first redshift estimates (starting with GRB\,050709 detected
by HETE--2) for these elusive events. Even if short GRBs with known redshift are still
few, there is evidence that, like long GRBs, they lie at cosmological distances, even if
at lower redshifts ($<$$\sim$0.7). As can be seen in Table 1 and Figure 1, only for one
of the {\it Swift}{}
short GRBs (051221) is there an estimate of $E_{\rm p,i}$; the other short GRB with known
redshift and $E_{\rm p,i}${} is GRB\,050709. As already shown by \cite{Amati06}, both these
events are inconsistent with the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation (see Figure 3). For the other
{\it Swift}{} short GRBs, also reported in Table 1 and shown in Figure 1 (diamonds), only
{\it approximate} lower limits to $E_{\rm p,i}${} can be inferred based on the photon indices estimated by
fitting
the BAT spectra with a power--law. In all cases, the photon index is hard enough to
support the hypothesis of a peak energy at least higher than 100 keV (the fits of BAT spectra
reported in GCNs are typically performed in the 15--150 or 15--350 keV energy band).
More specifically, based on the reported photon indices and energy bands, the lower
limit to the observer's frame peak energy was chosen to be 100 keV for GRB\,050724,
150 keV for GRBs 050509b, 060502b and 060505, and 200 keV for GRB\,050813.
For each event, $E_{\rm iso}${} was computed by assuming $E_{\rm p,i}${} varying between its lower limit and
(conservatively) 10000 keV. Despite its T$_{90}$ of 4$\pm$1 s, I included GRB\,060505
in the short GRB sample, because of its very low fluence and hard spectrum (typical
features of short GRBs) and its duration being anyway consistent with the tail of the short
GRB duration distribution (see also \cite{Amati06b}).
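The rest-frame lower limits of Table 1 then follow directly from
$E_{\rm p,i}>E_{\rm p,obs}(1+z)$; a minimal sketch reproducing them (up to rounding):
\begin{verbatim}
# Rest-frame lower limits E_p,i > E_p,obs * (1+z) for the Swift short
# GRBs discussed above (observer-frame limits as chosen in the text).
limits = {              # GRB: (z, observer-frame E_p lower limit [keV])
    "050509b": (0.22,  150.0),
    "050724":  (0.258, 100.0),
    "050813":  (0.72,  200.0),
    "060502b": (0.287, 150.0),
    "060505":  (0.089, 150.0),
}
for grb, (z, ep_obs) in limits.items():
    print(f"{grb}: E_p,i > {ep_obs * (1.0 + z):.0f} keV")
\end{verbatim}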
In Figure 1 it can be seen that all {\it Swift}{} short GRBs with known redshift
are inconsistent with the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation; in particular, they lie significantly
above the region populated by long events.
As discussed by \cite{Amati06}, the different location of long and short GRBs in the
$E_{\rm p,i}$ -- $E_{\rm iso}${} plane is consistent with the different distributions of
these two classes in the hardness--intensity diagram and can give important
clues for the understanding of the differences in their emission mechanism(s)
and progenitors.
From this point of view, of particular interest is the emerging evidence that at least
some short
GRBs are followed by an extended, weak and soft emission. For one of the events included
in the {\it Swift}{} sample of short GRBs with known redshift,
GRB\,050724, an estimate of the $E_{\rm iso}${} and $E_{\rm p,i}${} of this soft component is available
(see Table 1), thanks
to the joint fit of XRT and BAT data \cite{Barthelmy05}. Intriguingly, as can be seen
in Figure 1, the extended soft emission of the short GRB\,050724 is fully consistent with
the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation. This evidence, if confirmed by future observations,
may suggest that the emission mechanisms responsible for
most of the emission of long GRBs could also be at work in short GRBs but with a much
lower efficiency.
\begin{figure}
\centerline{\includegraphics[width=11cm]{amatil_fig3.ps}}
\caption{Location in the $E_{\rm p,i}$ -- $E_{\rm iso}${} plane of the most updated sample of GRBs with
known redshift and {\it accurate} estimate of $E_{\rm p,i}$, including 50 long GRBs, 2 short GRBs
and the peculiar sub--energetic GRB\,980425. {\it Swift}{} GRBs are shown as filled circles.
The continuous line is the
power--law best fitting the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation and the dashed lines
delimitate the 2$\sigma$ confidence region (from \cite{Amati06}).}
\end{figure}
\section{Conclusions}
{\it Swift}, thanks to the combination of the high sensitivity of BAT with the few arcsec
source location accuracy of XRT and the very fast
slewing capability of the spacecraft, is making possible a substantial reduction of
selection effects in the sample of GRBs with known redshift, and thus allows us
to test the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation more stringently than in the past.
As shown above, all {\it Swift}{} long GRBs with an estimate of $E_{\rm p,i}${}
(17 events) or
an upper / lower limit to this quantity (5 events) are fully consistent
with the correlation. The results of the correlation analysis and power--law
fit of these events,
which cover more than 4 orders of magnitude in $E_{\rm iso}${} and 3 orders of magnitude
in $E_{\rm p,i}$, are fully consistent with what was found for events detected by previous
satellites. This is clear evidence, and a further confirmation,
that the $E_{\rm p,i}$ -- $E_{\rm iso}${} correlation is likely not an artifact of selection effects.\\
Short {\it Swift}{} GRBs with known redshift (1 firm estimate of $E_{\rm p,i}${} and
5 lower limits) are inconsistent with the correlation, further confirming
that different emission mechanisms (possibly due to different conditions,
progenitors or circum--burst environment)
with respect to long GRBs are at work for this class of events.
Remarkably, the long, soft and weak tail following the short GRB\,050724
is characterized by values of $E_{\rm p,i}${} and $E_{\rm iso}${} fully consistent with the
correlation holding for long GRBs, suggesting that
the emission mechanisms producing long GRBs could be at work also for
at least some short GRBs but with much less efficiency.
Finally, Figure 3 shows the location in the $E_{\rm p,i}$ -- $E_{\rm iso}${} plane of GRBs with known
redshift and more accurate estimates of $E_{\rm p,i}$, a sample consisting of 51 long
GRBs plus 2 short GRBs. As can be seen, apart from the two short
GRBs (050709 and 051221), only the peculiar and very close GRB\,980425
is a firm
outlier to the correlation.
\section*{Introduction}
Let~${\mathcal A}=(A_n)_{n\ge1}$ be a sequence of integers.
A prime~$p$ is called a \emph{primitive divisor of~$A_n$} if
\[
p\mid A_n\qquad\text{and}\qquad p\nmid A_i\quad\text{for all $1\le i<n$.}
\]
The \emph{Zsigmondy set of~${\mathcal A}$} is the set
\[
{\mathcal Z}({\mathcal A}) = \bigl\{n\ge1 :
\text{$A_n$ does not have a primitive divisor}\bigr\}.
\]
A classical theorem of Bang~\cite{bang} (for~$b=1$)
and Zsigmondy~\cite{MR1546236} in general
says that if~$a,b\in\mathbb{Z}$ are integers with~$a>b>0$, then
\[
{\mathcal Z}\bigl((a^n-b^n)_{n\ge1}\bigr)~\text{is a finite set}.
\]
Indeed assuming that~$\gcd(a,b)=1$, they prove
that~${\mathcal Z}\bigl((a^n-b^n)_{n\ge1}\bigr)$ contains no~$n>6$, which is
a strong uniform bound. This useful result has been extended and
generalized in many ways, for example to more general linear
recursions, to number fields, to elliptic curves, and to Drinfeld
modules,
see~\cite{MR1863855,MR1503541,MR1502458,MR2220263,arxivNT0609120,hsia:drinfeld,IngramEDSoCC,ingramsilverman,MR0223330,MR0344221,MR96191}.
\par
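As a quick illustration of the classical statement, the following minimal sketch
(an illustrative computation with~$a=2$,~$b=1$) recovers
${\mathcal Z}\bigl((2^n-1)_{n\ge1}\bigr)=\{1,6\}$ in the range tested:
\begin{verbatim}
# Minimal sketch: Zsigmondy set of A_n = 2^n - 1 for n <= 30.
def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d); n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

seen, zsig = set(), []
for n in range(1, 31):
    fs = prime_factors(2**n - 1)
    if not (fs - seen):      # A_n has no primitive divisor
        zsig.append(n)
    seen |= fs
print(zsig)                  # expected output: [1, 6]
\end{verbatim}
\par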
In this note we prove a Bang-Zsigmondy result for sequences
associated to iteration of rational functions. For ease of exposition,
we state here a special case of our main result for dynamical systems
over~$\mathbb{Q}$. See Theorem~\ref{thm:dynBZ} for the general statement.
\begin{theorem}
\label{thm:finiteZset}
Let~$\varphi(z)\in\mathbb{Q}(z)$ be a rational function of degree~$d\ge2$ such
that~$\varphi(0)=0$, but~$\varphi$ does not vanish to order~$d$ at~$z=0$.
Let~$\a\in\mathbb{Q}$ be a point with infinite orbit under iteration
of~$\varphi$. For each~$n\ge1$, let~$\varphi^n$ denote the~$n$'th iterate
of~$\varphi$ and write
\[
\varphi^n(\a) = \frac{A_n}{B_n}\in\mathbb{Q}
\]
as a fraction in lowest terms. Then the dynamical Zsigmondy
set ${\mathcal Z}\bigl((A_n)_{n\ge0}\bigr)$ is finite.
\end{theorem}
\begin{remark}
Rice~\cite{rice07} investigates primitive divisors in the case that
\text{$\varphi(z)\in\mathbb{Z}[z]$} is a monic polynomial and~$\a\in\mathbb{Z}$. (See
also~\cite{flatters07} for some similar results.) For example, Rice
proves that if~$\varphi(z)\ne z^d$, if~$0$ is preperiodic for~$\varphi$, and
if~$\a\in\mathbb{Z}$ has infinite~$\varphi$-orbit, then
${\mathcal Z}\bigl((\varphi^n(\a))_{n\ge0}\bigr)$ is finite. Our
Theorems~\ref{thm:finiteZset} and~\ref{thm:dynBZ} are generalizations
of~\cite{rice07} to arbitrary rational maps over number fields.
(However, we do assume that~$0$ is periodic, while Rice allows~$0$ to
be preperiodic.)
\end{remark}
A key tool in the proof of Theorem~\ref{thm:finiteZset} is a dynamical
analog~\cite{MR1240603} of Siegel's theorem~\cite[IX.3.1]{MR1329092}
for integral points on elliptic curves. Continuing with the notation
from Theorem~\ref{thm:finiteZset}, the dynamical canonical
height~\cite[\S3.4]{silverman:ads} of~$\a$ is the limit
\[
\lim_{n\to\infty} \frac{\log\max\bigl\{|A_n|,|B_n|\bigr\}}{d^n}
= {\hat h}_\varphi(\a) > 0.
\]
The positivity is a consequence of the fact that~$\a$ has
infinite orbit. A deeper result, proven in~\cite{MR1240603} as a
consequence of Roth's theorem, implies that
\begin{equation}
\label{eqn:Andntohhat}
\lim_{n\to\infty} \frac{\log|A_n|}{d^n} = {\hat h}_\varphi(\a) > 0,
\end{equation}
and an estimate of this sort is needed to prove
Theorems~\ref{thm:finiteZset} and~\ref{thm:dynBZ}.
\par
Of course, there are many situations in which it is easy
to prove that~\eqref{eqn:Andntohhat}
holds, for example if~$\varphi(z)\in\mathbb{Z}[z]$ and~$\a\in\mathbb{Z}$.
In such cases the exact determination of the Zsigmondy set often becomes
an elementary exercise, see Example~\ref{example:emptyZset}
and some of the examples in~\cite{flatters07,rice07}.
\begin{remark}
The first question that one asks about the Zsigmondy set of a
sequence is whether it is finite. Theorems~\ref{thm:finiteZset}
and~\ref{thm:dynBZ} give an affirmative answer for certain sequences
defined by iteration of rational maps on~$\mathbb{P}^1$. Assuming that the
Zsigmondy sets under consideration are finite, it is also
natural to ask for explicit upper bounds for
\[
\#{\mathcal Z}({\mathcal A})\qquad\text{and}\qquad \max{\mathcal Z}({\mathcal A}),
\]
where one hopes that the bounds depend only minimally on the
sequence. For example, Zsigmondy's original theorem says that for
integers~$a>b>0$, we have~$\max{\mathcal Z}(a^n-b^n)\le6$. A recent deep
result of Bilu, Hanrot and Voutier~\cite{MR1863855} extends this to
the statement that~$\max{\mathcal Z}({\mathcal L})\le30$ for any nontrivial Lucas or
Lehmer sequence~${\mathcal L}$. In this paper we are content to prove the
finiteness of certain dynamical Zsigmondy sets. We leave the question
of explicit and/or uniform bounds as a problem for future study.
\end{remark}
\begin{remark}
Tom Tucker has pointed out to the authors that the results of this
paper should be valid also for iteration of non-split functions
over~$\mathbb{C}(T)$, and more generally over one-dimensional function fields
of characteristic~$0$, since in this setting
Benedetto~\cite{arxiv0510444} (for polynomial maps) and
Baker~\cite{arxiv0601046} (for rational maps) have recently proven
that points with infinite orbit have strictly positive canonical
height.
\end{remark}
The material in this article is divided into two sections. In
Section~\ref{section:dynZsigthm} we state and prove our main theorem
via a sequence of lemmas, some of which may be of independent
interest. Section~\ref{section:speculations} discusses variants of
our main theorem and raises questions, makes conjectures, and
indicates directions for further research.
\begin{acknowledgement}
The authors would like to thank Rob Benedetto for sketching the
construction described in Remark~\ref{remark:localvsglobal}, and
Graham Everest, Igor Shparlinski and Tom Tucker for their helpful
comments on the initial draft of this paper.
\end{acknowledgement}
\section{A dynamical Zsigmondy theorem}
\label{section:dynZsigthm}
In this section we state and prove our main theorem concerning
primitive divisors in sequences defined by iteration of certain types
of rational functions. We start by recalling that primitive divisors
in number fields are most appropriately defined using ideals, rather
than elements.
\begin{definition}
Let~$K$ be a number field and let~${\mathcal A}=({\mathfrak{A}}_n)_{n\ge1}$ be a sequence
of nonzero integral ideals.
A prime ideal~${\mathfrak{p}}$ is called
a \emph{primitive divisor of~${\mathfrak{A}}_n$} if
\[
{\mathfrak{p}}\mid {\mathfrak{A}}_n\qquad\text{and}\qquad
{\mathfrak{p}}\nmid {\mathfrak{A}}_i\quad\text{for all $1\le i<n$.}
\]
The \emph{Zsigmondy set of~${\mathcal A}$} is the set
\[
{\mathcal Z}({\mathcal A}) = \bigl\{n\ge1 :
\text{${\mathfrak{A}}_n$ does not have a primitive divisor}\bigr\}.
\]
\end{definition}
We also recall some basic definitions from dynamical systems.
\begin{definition}
Let~$\varphi(z)\in K(z)$ be a rational function of degree~$d\ge2$,
which we may view as a morphism~\text{$\varphi:\mathbb{P}^1_K\to\mathbb{P}^1_K$}.
\par
A point~$\gamma\in\mathbb{P}^1({\bar K})$ is \emph{periodic for~$\varphi$}
if~$\varphi^n(\gamma)=\gamma$ for some~$n\ge1$. The smallest such~$n$ is called
the~\emph{$\varphi$-period of~$\gamma$}. A point of~$\varphi$-period~$1$ is
called a \emph{fixed point}.
\par
Similarly, we say that~$\gamma$ is \emph{preperiodic}
if~$\varphi^{m+n}(\gamma)=\varphi^m(\gamma)$ for some~$n\ge1$
and~$m\ge0$. Equivalently,~$\gamma$ is preperiodic if its
\emph{$\varphi$-orbit} $\bigl\{\gamma,\varphi(\gamma),\varphi^2(\gamma),\dots\bigr\}$ is finite.
\par
A point that is not preperiodic, i.e., that has infinite $\varphi$-orbit,
is called a \emph{wandering point}.
\par
Let~$\gamma$ be a point of $\varphi$-period~$k$. We say that~$\varphi$ is
\emph{of polynomial type at~$\gamma$} if
\begin{equation}
\label{fngzgdpsiz}
\varphi^k(z) = \gamma + \frac{(z-\gamma)^d}{\psi(z)}
\quad\text{for some~$\psi(z)\in K[z]$ with $\psi(\gamma)\ne0$.}
\end{equation}
\end{definition}
\begin{remark}
A more intrinsic algebro-geometric definition is that~$\varphi$ is of
polynomial type at~$\gamma$ if the map~$\varphi^k:\mathbb{P}^1\to\mathbb{P}^1$ is totally
ramified at~$\gamma$. Equivalently, if~$\varphi(z)$ has the
form~\eqref{fngzgdpsiz} and if we move~$\gamma$ to~$\infty$ by
conjugating~$\varphi$ by the linear fractional transformation
$f(z)=1/(z-\gamma)$, then the following calculation
shows that~$\varphi^k$ becomes a polynomial:
\begin{align*}
(f\circ\varphi^k\circ f^{-1})(z)
&= \frac{1}{\varphi^k(f^{-1}(z))-\gamma}
= \frac{1}{(f^{-1}(z)-\gamma)^d/\psi(f^{-1}(z))}\\
&= \frac{1}{z^{-d}/\psi(z^{-1}+\gamma)}
= z^d\psi(z^{-1}+\gamma)
\in K[z].
\end{align*}
\end{remark}
\begin{remark}
It is an easy exercise using the Riemann-Hurwitz genus formula to show
that if~$\varphi$ is of polynomial type at~$\gamma$, then the~$\varphi$-period
of~$\gamma$ is at most~$2$, cf.~\cite[Theorem~1.7]{silverman:ads}. We
will not need to use this fact, but mention it because it provides an
easy way to check if there exist any points at which a given map~$\varphi$
is of polynomial type.
\end{remark}
\begin{theorem}
\label{thm:dynBZ}
Let~$K$ be a number field and let~$\varphi(z)\in K(z)$ be a rational
function of degree~$d\ge2$. Let~$\gamma\in K$ be a periodic point
for~$\varphi$ such that~$\varphi$ is not of polynomial type at~$\gamma$. Let~$\a\in
K$ be a wandering point, i.e., a point with infinite~$\varphi$-orbit, and
for each~$n\ge1$, write the ideal
\[
\bigl(\varphi^n(\a)-\gamma\bigr) = {\mathfrak{A}}_n{\mathfrak{B}}_n^{-1}
\]
as a quotient of relatively prime integral
ideals. \textup(If~$\varphi^n(\a)=\infty$, then set \text{${\mathfrak{A}}_n=(1)$}
and~${\mathfrak{B}}_n=(0)$.\textup) Then the dynamical Zsigmondy set
${\mathcal Z}\bigl(({\mathfrak{A}}_n)_{n\ge1}\bigr)$ is finite.
\end{theorem}
\begin{remark}
The assumption in Theorem~\ref{thm:dynBZ} that~$\varphi$ is not of
polynomial type at~$\gamma$ is a necessary condition. For example,
let~$F(z)\in \mathbb{Z}[z]$ be any polynomial of degree at most~$d$ with~$F(0)=1$
and consider the rational map
\[
\varphi(z)=z^d/F(z).
\]
Then~$\varphi$ is of polynomial type at~$\gamma=0$, and an easy calculation shows
that for any starting value~$\a=A_1/B_1\in\mathbb{Q}$, we have~$A_n=A_1^{d^n}$
for all~$n\ge0$. Hence~$A_n$ has no primitive divisors for
any~$n\ge1$.
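A quick numerical check of this phenomenon (a minimal sketch with the
hypothetical choices $d=2$, $F(z)=1+z$, and $\a=6/5$, so that~$A_1=6$):
\begin{verbatim}
# Sketch: for phi(z) = z^d / F(z) with F(0) = 1, the numerators of
# phi^n(alpha) are pure powers A_1^(d^n), so no primitive divisors occur.
from fractions import Fraction

d = 2
phi = lambda z: z**d / (1 + z)   # F(z) = 1 + z

x = Fraction(6, 5)               # alpha = A_1 / B_1 = 6/5
for n in range(5):
    print(f"n={n}: A_n = {x.numerator} = 6^{d**n}?",
          x.numerator == 6**(d**n))
    x = phi(x)
\end{verbatim}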
\end{remark}
\begin{proof}
The proof of Theorem~\ref{thm:dynBZ}
is structured as a series of lemmas that provide the necessary tools.
\begin{lemma}
If Theorem~$\ref{thm:dynBZ}$ is true when~$\gamma$ is a fixed point
of~$\varphi$, then it is true when~$\gamma$ is a periodic point of~$\varphi$.
\end{lemma}
\begin{proof}
Suppose that~$\gamma$ has~$\varphi$-period~$k\ge2$, so~$\varphi^k(\gamma)=\gamma$ and no smaller
iterate of~$\varphi$ fixes~$\gamma$. For each~$0\le i<k$ we consider the subsequence
\[
\bigl(\varphi^{nk+i}(\a)-\gamma\bigr) = {\mathfrak{A}}_{nk+i}^{\vphantom1}{\mathfrak{B}}_{nk+i}^{-1}
\quad\text{for~$n=0,1,2,\dots\,$.}
\]
We claim that these subsequences have very few common prime divisors.
More precisely, define the set of good primes~${\mathcal P}={\mathcal P}_{\varphi,\a,\gamma}$
to be primes satisfying the following two conditions:
\begin{enumerate}
\item[(A)]
$\varphi$ has good reduction at~${\mathfrak{p}}$. (See~\cite[Chapter~2]{silverman:ads}
for the definition and basic properties of maps with good reduction.)
\item[(B)]
$\varphi^i(\gamma)\not\equiv\gamma\pmodintext{{\mathfrak{p}}}$ for all $0\le i<k$.
\end{enumerate}
It is clear that~${\mathcal P}$ contains all but finitely many primes.
Now suppose that some~${\mathfrak{p}}\in{\mathcal P}$ divides terms in different
subsequences, say
\begin{equation}
\label{eqn:gpgAnki}
{\mathfrak{p}}\mid {\mathfrak{A}}_{nk+i}
\quad\text{and}\quad
{\mathfrak{p}}\mid {\mathfrak{A}}_{mk+j}
\quad\text{for some $0\le j<i<k$.}
\end{equation}
Note that the good reduction assumption means
that~$(\varphi\bmod{\mathfrak{p}})^n=(\varphi^n)\bmod{\mathfrak{p}}$, i.e., reduction modulo~${\mathfrak{p}}$
commutes with composition of~$\varphi$
(see~\cite[Theorem~2.18]{silverman:ads}).
Hence if~${\mathfrak{p}}$ is a prime of good reduction for~$\varphi$, then
we have
\[
{\mathfrak{p}}\mid{\mathfrak{A}}_n\quad\Longleftrightarrow\quad
\varphi^n(\a)\equiv\gamma\pmodintext{\mathfrak{p}}.
\]
So we can rewrite assumption~\eqref{eqn:gpgAnki} as
\[
\varphi^{nk+i}(\a) \equiv \varphi^{mk+j}(\a) \equiv \gamma \pmod{\mathfrak{p}}.
\]
Suppose first that~$nk+i>mk+j$. Since~$0\le j<i<k$ by assumption,
this implies that~$n\ge m$. We compute
\begin{align*}
\gamma &\equiv \varphi^{nk+i}(\a) \pmodintext{\mathfrak{p}} \\
&= \varphi^{(nk+i)-(mk+j)}\bigl(\varphi^{mk+j}(\a)\bigr) \\
&\equiv \varphi^{(nk+i)-(mk+j)}(\gamma) \pmodintext{\mathfrak{p}} \\
&= \varphi^{i-j}\bigl( (\varphi^k)^{n-m}(\gamma) \bigr) \\
&= \varphi^{i-j}(\gamma) \qquad\text{since $\varphi^k(\gamma)=\gamma$.}
\end{align*}
But~$0<i-j<k$, so this contradicts Property~(B) of~${\mathcal P}$.
\par
Similarly, if~$nk+i < mk+j$, then~$m>n$ (since~$i>j$), so we have
\begin{align*}
\gamma &\equiv \varphi^{mk+j}(\a) \pmodintext{\mathfrak{p}} \\
&= \varphi^{(mk+j)-(nk+i)}\bigl(\varphi^{nk+i}(\a)\bigr) \\
&\equiv \varphi^{(mk+j)-(nk+i)}(\gamma) \pmodintext{\mathfrak{p}} \\
&= \varphi^{j-i+k}\bigl( (\varphi^k)^{m-n-1}(\gamma) \bigr) \\
&= \varphi^{j-i+k}(\gamma) \qquad\text{since $\varphi^k(\gamma)=\gamma$.}
\end{align*}
This is again a contradiction of Property~(B), since~$0<j-i+k<k$.
\par
We have now proven that for primes~${\mathfrak{p}}\in{\mathcal P}$, at most one of
the subsequences
\[
({\mathfrak{A}}_{nk+i})_{n\ge0},\qquad i=0,1,\ldots,k-1,
\]
has a term divisible by~${\mathfrak{p}}$.
\par
We are assuming that Theorem~\ref{thm:dynBZ} is true if~$\gamma$ is a
fixed point. It follows that for each~$0\le i<k$, the Zsigmondy
set~${\mathcal Z}\bigl(({\mathfrak{A}}_{nk+i})_{n\ge0}\bigr)$ is finite,
since~$({\mathfrak{A}}_{nk+i})_{n\ge0}$ is the sequence associated to the map~$\varphi^k$, the
initial point~$\varphi^i(\a)$, and the fixed point~$\gamma$ of~$\varphi^k$.
(Note that the condition that~$\varphi$ is not of polynomial type at~$\gamma$ is equivalent
to the condition that~$\varphi^k$ is not of polynomial type at~$\gamma$.)
\par
It follows that we can find a number~$N$ so that
for all~$n\ge N$ and all~$0\le i<k$ there is a prime ideal~${\mathfrak{p}}_{n,i}$
satisfying
\[
{\mathfrak{p}}_{n,i} \mid {\mathfrak{A}}_{nk+i}
\quad\text{and}\quad
{\mathfrak{p}}_{n,i} \nmid {\mathfrak{A}}_{mk+i}
\quad\text{for all $0\le m<n$.}
\]
In other words, for a fixed~$i$, the ideals~${\mathfrak{p}}_{n,i}$ are primitive
divisors in the subsequence~$({\mathfrak{A}}_{nk+i})_{n\ge0}$. Increasing~$N$ if
necessary, we may assume that~${\mathfrak{p}}_{n,i}\in{\mathcal P}$ for all~$n$ and
all~$i$, since the complement of~${\mathcal P}$ is finite.
\par
It is now clear that for~$n>N$ and~$0\le i<k$, the prime~${\mathfrak{p}}_{n,i}$
is a primitive divisor of~${\mathfrak{A}}_{nk+i}$ in the full
sequence~$({\mathfrak{A}}_m)_{m\ge0}$. This is true because it is a primitive
divisor in its own subsequence, and we proved above that it does not
divide any of the terms in any of the other subsequences.
\end{proof}
We next reduce to the case~$\gamma=0$, which will simplify our later
computations.
\begin{lemma}
It suffices to prove Theorem~$\ref{thm:dynBZ}$
under the assumption that \text{$\gamma=0$}.
\end{lemma}
\begin{proof}
Let~$f(z)=z+\gamma$. Then we have
\[
\varphi^n(\a) - \gamma = f^{-1}\bigl(\varphi^n(\a)\bigr)
= (f^{-1}\circ\varphi\circ f)^n\bigl(f^{-1}(\a)\bigr).
\]
Hence replacing~$\varphi$ by~$f^{-1}\circ\varphi\circ f$ and replacing~$\a$
by~$f^{-1}(\a)$ allows us to replace~$\gamma$ with~$f^{-1}(\gamma)=0$.
Finally, we note that~$\varphi$ is of polynomial type at~$\gamma$ if and
only if~$f^{-1}\circ\varphi\circ f$ is of polynomial type at~$f^{-1}(\gamma)=0$,
so the conjugated map has the required property.
\end{proof}
We are now reduced to the case that~$\gamma=0$ is a fixed point
of~$\varphi(z)$. This means that we can write~$\varphi$ in the form
\begin{equation}
\label{eqn:fza1zb0}
\varphi(z) = \frac{a_ez^e+a_{e+1}z^{e+1}+\cdots+a_dz^d}
{b_0+b_1z+b_2z^2+\cdots+b_dz^d},
\end{equation}
with~$a_e\ne0$, where without loss of generality we may assume that
all~$a_i$ and~$b_i$ are in the ring of integers~$R$ of~$K$. Further,
since~$\deg\varphi=d$, we know that at least one of~$a_d$ and~$b_d$ is
nonzero, and also~$b_0\ne0$. Finally, our assumptions that~$\varphi(0)=0$
and that~$\varphi$ is not of polynomial type at~$0$ imply that
\begin{equation}
\label{0lteltd}
0 < e < d.
\end{equation}
Note the strict inequalities on both sides.
\par
We next prove an elementary, but useful, lemma that bounds how
rapidly the~${\mathfrak{p}}$-divisibility of~${\mathfrak{A}}_n$ can grow, where we recall
that~${\mathfrak{A}}_n$ is the integral ideal obtained by writing
\[
\bigl(\varphi^n(\a)\bigr) = {\mathfrak{A}}_n{\mathfrak{B}}_n^{-1}
\]
as a quotient of relatively prime integral ideals. In order to
state the result, we need one definition.
\begin{definition}
For each prime ideal~${\mathfrak{p}}$, we define the \emph{rank of apparition}
(\emph{of~$\varphi$ and~$\a$}) \emph{at~${\mathfrak{p}}$} to be the integer
\[
r_{\mathfrak{p}} = \min\{r\ge0 : \operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_r > 0\}.
\]
(If no such~$r$ exists, we set~$r_{\mathfrak{p}}=\infty$.) Notice that this is a
direct analog of Ward's definition~\cite{MR0023275} of the rank of
apparition for elliptic divisibility sequences.
\end{definition}
\begin{lemma}
\label{lemma:pdivgrowth}
With notation as above, let
\[
S = \bigl\{{\mathfrak{p}} : \operatorname{ord}_{\mathfrak{p}}(a_eb_0)\ne0\bigr\}.
\]
Then for all primes~${\mathfrak{p}}\notin S$,
\begin{align}
k &\le r_{\mathfrak{p}} &\Longrightarrow&& \operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{k-1} &= 0,
\label{eqn:ordgAsmall} \\
k &> r_{\mathfrak{p}} &\Longrightarrow&& \operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{k}&=e\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{k-1},
\label{eqn:ordgAbig}
\end{align}
where~$e$ is the order of vanishing of~$\varphi(z)$ at~$z=0$,
see~\eqref{eqn:fza1zb0}.
\end{lemma}
\begin{proof}
We note that~\eqref{eqn:ordgAsmall} is true by the definition
of~$r_{\mathfrak{p}}$, so we only need to prove~\eqref{eqn:ordgAbig},
which we do by induction on~$k$. Since~$\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{r_{\mathfrak{p}}}>0$ by definition,
the inductive hypothesis implies that
\begin{equation}
\label{eqn:gAn1mpma1pb0}
\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_i>0\qquad\text{for all $r_{\mathfrak{p}}\le i<k$.}
\end{equation}
In particular, \text{$\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{k-1}>0$}.
\par
For notational convenience, we let~$\b=\varphi^{k-1}(\a)$. The fact
that~${\mathfrak{A}}_{k-1}$ has positive valuation implies that
\[
\operatorname{ord}_{\mathfrak{p}}\b =\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{k-1} > 0.
\]
Further, the assumption that~${\mathfrak{p}}\notin S$ means that
\[
\operatorname{ord}_{\mathfrak{p}} a_e = \operatorname{ord}_{\mathfrak{p}} b_0 = 0,
\]
so when we evaluate the numerator and denominator
of~$\varphi(\b)$, the lowest degree terms have the strictly
smallest~${\mathfrak{p}}$-adic valuation. Hence the ultrametric triangle
inequality gives
\begin{align*}
\operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_{k})
&=\operatorname{ord}_{\mathfrak{p}}\bigl(\varphi(\b)\bigr)\\
&= \operatorname{ord}_{\mathfrak{p}}(a_e\b^e+a_{e+1}\b^{e+1}+\cdots+a_d\b^d) \\*
&\qquad{} - \operatorname{ord}_{\mathfrak{p}}(b_0+b_1\b+\cdots+b_d\b^d) \\
&=e\operatorname{ord}_{\mathfrak{p}}(\b)
\qquad\text{since $\operatorname{ord}_{\mathfrak{p}}(a_eb_0)=0$ and $\operatorname{ord}_{\mathfrak{p}}(\b)>0$.}\\*
&=e\operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_{k-1}).
\end{align*}
This completes the proof of Lemma~\ref{lemma:pdivgrowth}.
\end{proof}
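A numerical illustration of Lemma~\ref{lemma:pdivgrowth} (a minimal sketch
over~$\mathbb{Q}$ with the hypothetical map $\varphi(z)=z^2(z+1)/(z^3+z+1)$, for which
$d=3$, $e=2$, and $a_e=b_0=1$, so that the bad set~$S$ is empty):
\begin{verbatim}
# Sketch: once p divides some A_r, the p-adic valuation of A_k is
# multiplied by e = 2 at each further step.
from fractions import Fraction

def ordp(n, p):                  # p-adic valuation of an integer n != 0
    v = 0
    while n % p == 0:
        n //= p; v += 1
    return v

phi = lambda z: z * z * (z + 1) / (z**3 + z + 1)
alpha, p = Fraction(5, 3), 5

x = alpha
for k in range(5):
    print(f"k={k}: ord_p(A_k) = {ordp(x.numerator, p)}")  # 1, 2, 4, 8, 16
    x = phi(x)
\end{verbatim}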
Next we recall the definition and basic properties of
the canonical height associated to~$\varphi$.
\begin{lemma}
\label{lemma:canoicalht}
The \emph{canonical height associated to~$\varphi$} is
the function \text{${\hat h}_\varphi:\mathbb{P}^1({\bar K})\to\mathbb{R}$} defined by
the limit
\[
{\hat h}_\varphi(\b) = \lim_{n\to\infty} \frac{1}{d^n}h\bigl(\varphi^n(\b)\bigr).
\]
It satisfies, and is characterized by, the two following properties\textup:
\begin{align}
{\hat h}_\varphi(\b) &= h(\b)+O(1)&&\text{for all $\b\in\mathbb{P}^1({\bar K})$.}
\label{eqn:ht1}\\
{\hat h}_\varphi\bigl(\varphi(\b)\bigr) &= d{\hat h}_\varphi(\b)
&&\text{for all $\b\in\mathbb{P}^1({\bar K})$.}
\label{eqn:ht2}
\end{align}
The~$O(1)$ constant in~\eqref{eqn:ht1} depends on~$\varphi$, but is
independent of~$\b$.
\par
The values~${\hat h}_\varphi(\b)$ are nonnegative, and
\[
{\hat h}_\varphi(\b)>0 \quad\Longleftrightarrow\quad
\text{$\b$ has infinite $\varphi$-orbit}.
\]
\end{lemma}
\begin{proof}
See~\cite{callsilv:htonvariety}, \cite[\S B.4]{MR1745599},
or~\cite[\S3.4]{silverman:ads}.
\end{proof}
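Over~$\mathbb{Q}$, the defining limit is easy to observe numerically; a minimal
sketch (with the hypothetical map $\varphi(z)=(z^2+1)/z$ of degree $d=2$, the
wandering point $\a=2/3$, and $h(A/B)=\log\max\{|A|,|B|\}$):
\begin{verbatim}
# Sketch: the normalized heights h(phi^n(alpha)) / d^n converge to the
# canonical height hhat_phi(alpha).
import math
from fractions import Fraction

def h(beta):                     # Weil height of a rational number
    return math.log(max(abs(beta.numerator), abs(beta.denominator)))

phi = lambda z: (z * z + 1) / z
alpha, d = Fraction(2, 3), 2

x = alpha
for n in range(1, 9):
    x = phi(x)
    print(f"n={n}: h(phi^n(alpha)) / d^n = {h(x) / d**n:.6f}")
\end{verbatim}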
\begin{definition}
Let~$S$ be a finite set of places of~$K$, including all archimedean places
and let~${\mathfrak{A}}$ be an integral ideal. The \emph{prime-to-$S$ norm of~${\mathfrak{A}}$} is
the quantity
\[
{\operatorname{\mathsf{N}}}_S{\mathfrak{A}} = \prod_{{\mathfrak{p}}\notin S} {\mathfrak{p}}^{\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}}.
\]
As the name suggests,~${\operatorname{\mathsf{N}}}_S{\mathfrak{A}}$ is the part of~${\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}$ that
is relatively prime to all of the primes in~$S$.
\end{definition}
We next apply the main result of~\cite{MR1240603} to show that
$\log({\operatorname{\mathsf{N}}}_S{\mathfrak{A}}_n)$ grows like a constant multiple of~$d^n$. We observe
that the proof of~\cite[Theorem~E]{MR1240603} requires some sort of
nontrivial theorem on Diophantine approximation such as Roth's
theorem, so despite its simple statement, the following lemma conceals
the deepest part of the proof of Theorem~\ref{thm:dynBZ}. For our
purposes, we require the general number field version proven
in~\cite{MR1240603}, but see~\cite[\S3.8]{silverman:ads} for a more
leisurely exposition of the same result over~$\mathbb{Q}$
with~$S=\{\infty\}$.
\begin{lemma}
\label{lemma:htvslognorm}
\textup{(a)}
There is a constant~$C=C(\varphi,\a)$ so that
\begin{equation}
\label{eqn:1ednlnAn1}
\frac{1}{[K:\mathbb{Q}]} \log{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}_n
\le d^n{\hat h}_\varphi(\a) + C
\qquad\text{for all $n\ge0$.}
\end{equation}
\par\noindent\textup{(b)}
Let~$S$ be a finite set of places, including all archimedean places,
and let~$\epsilon>0$. There is an~$n_0=n_0(\epsilon,S,\varphi,\a)$ so that
\begin{equation}
\label{eqn:1ednlnAn2}
\frac{1}{[K:\mathbb{Q}]} \log{\operatorname{\mathsf{N}}}_S{\mathfrak{A}}_n
\ge (1-\epsilon)d^n{\hat h}_\varphi(\a)
\qquad\text{for all $n\ge n_0$.}
\end{equation}
\end{lemma}
\begin{remark}
We observe that the elementary upper bound~\eqref{eqn:1ednlnAn1} is
true for any rational map, while the deeper lower
bound~\eqref{eqn:1ednlnAn2} requires the assumption that~$\varphi$ is not
of polynomial type at~$0$.
\end{remark}
\begin{proof}
(a)
In general, if~$\b\in K^*$ and if we write the ideal~$(\b)$
as a quotient of relatively prime integral ideals~$(\b)={\mathfrak{A}}{\mathfrak{B}}^{-1}$,
then the (normalized logarithmic) height of~$\b$ is given by
the formula
\begin{equation}
\label{eqn:hbnorm}
h(\b) = \frac{1}{[K:\mathbb{Q}]} \Bigl(\log{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{B}}
+ \sum_{v\in M_K^\infty} [K_v:\mathbb{Q}_v]\log\max\{1,|\b|_v\} \Bigr).
\end{equation}
See for example~\cite[\S3.1]{LangDG}. We apply this
with~$\b=\varphi^n(\a)$, so~${\mathfrak{B}}={\mathfrak{A}}_n$, and we use the fact that all of
the terms in the sum are non-negative to deduce that
\begin{equation}
\label{eqn:hfn1}
h\bigl(\varphi^n(\a)\bigr) \ge \frac{1}{[K:\mathbb{Q}]}\log{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}_n.
\end{equation}
Finally, we use Lemma~\ref{lemma:canoicalht} to compute
\begin{equation}
\label{eqn:hfn2}
h\bigl(\varphi^n(\a)\bigr) = {\hat h}_\varphi\bigl(\varphi^n(\a)\bigr) + O(1)
= d^n{\hat h}_\varphi(\a)+O(1).
\end{equation}
Combining~\eqref{eqn:hfn1} and~\eqref{eqn:hfn2} completes
the proof of~(a).
\par\noindent(b)
Our assumption that~$\varphi$ is not of polynomial type at~$0$ allows us to
apply~\cite[Theorem~E]{MR1240603}. This theorem implies that
for each~$v\in S$ we have
\begin{equation}
\label{eqn:limdfna}
\lim_{n\to\infty} \frac{\d_v\bigl(\varphi^n(\a),0\bigr)}{d^n}
= 0,
\end{equation}
where~$\d_v$ is a logarithmic $v$-adic distance function
on~$\mathbb{P}^1(K_v)$. Since we are measuring the distance to~$0$, we may
take~$\d_v$ to be the function
\[
\d_v(\b,0) = 1 + \log\left(\frac{\max\{|\b|_v,1\}}{|\b|_v}\right)
= 1 + \log \max\{1,|\b|_v^{-1}\}.
\]
(See~\cite[\S3]{MR1240603}.) Substituting this into~\eqref{eqn:limdfna},
we obtain the equivalent statement
\begin{equation}
\label{eqn:limdfna1}
\lim_{n\to\infty} \frac{\log \max\{1,|\varphi^n(\a)|_v^{-1}\}}{d^n}
= 0.
\end{equation}
\par
We now rewrite~\eqref{eqn:hbnorm}, moving the part of the norm coming
from primes in~$S$ into the sum. Using our notation for the
prime-to-$S$ norm, formula~\eqref{eqn:hbnorm} becomes
\begin{equation}
\label{eqn:hbnorm1}
h(\b) = \frac{1}{[K:\mathbb{Q}]} \Bigl(\log{\operatorname{\mathsf{N}}}_S{\mathfrak{B}}
+ \sum_{v\in S} [K_v:\mathbb{Q}_v]\log\max\{1,|\b|_v\} \Bigr).
\end{equation}
We apply~\eqref{eqn:hbnorm1} with~$\b=\varphi^n(\a)^{-1}$, so~${\mathfrak{B}}={\mathfrak{A}}_n$.
Since~$h(\b)=h(\b^{-1})$ for any nonzero~$\b$, this gives
\begin{multline}
\label{eqn:hanorm}
h\bigl(\varphi^n(\a)\bigr) = \frac{1}{[K:\mathbb{Q}]} \log{\operatorname{\mathsf{N}}}_S{\mathfrak{A}}_n \\*
+ \sum_{v\in S} \frac{[K_v:\mathbb{Q}_v]}{[K:\mathbb{Q}]}
\log\max\{1,|\varphi^n(\a)|^{-1}_v\}.
\end{multline}
We now divide both sides of~\eqref{eqn:hanorm} by~$d^n$ and let~$n\to\infty$.
The limit formula~\eqref{eqn:limdfna1} tells us that the sum
over the places in~$S$ goes to~$0$. On the other hand, the
left-hand side of~\eqref{eqn:hanorm} is exactly the limit that defines the
canonical height. Hence we obtain
\[
{\hat h}_\varphi(\a)
= \lim_{n\to\infty} \frac{1}{[K:\mathbb{Q}]} \frac{\log{\operatorname{\mathsf{N}}}_S{\mathfrak{A}}_n}{d^n}.
\]
This limit implies the lower bound~\eqref{eqn:1ednlnAn2} that we are
trying to prove, which completes the proof of
Lemma~\ref{lemma:htvslognorm}. (It also implies an upper bound, but a
weaker upper bound than we obtained by the elementary argument in~(a).)
\end{proof}
We have assembled all of the tools needed to complete the proof of
Theorem~\ref{thm:dynBZ}. Our goal is to show that~${\mathfrak{A}}_n$ has a
primitive prime divisor for all sufficiently large~$n$.
In order to do this, we define an ideal
\begin{equation}
\label{eqn:prod1}
{\mathfrak{E}}_{n,S} :=
\prod_{\substack{\text{primes ${\mathfrak{p}}\notin S$ that divide}\\
\text{one of ${\mathfrak{A}}_0,{\mathfrak{A}}_1,\dots,{\mathfrak{A}}_{n-1}$}\\}}
{\mathfrak{p}}^{\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_n}.
\end{equation}
Similarly, we let~${\mathfrak{A}}_{n,S}$ be the prime-to-$S$ part of~${\mathfrak{A}}_n$, thus
\[
{\mathfrak{A}}_{n,S} = \prod_{{\mathfrak{p}}\notin S} {\mathfrak{p}}^{\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_n}.
\]
We will prove that for all sufficiently large~$n$, the
ideal~${\mathfrak{A}}_{n,S}$ is strictly larger than the ideal~${\mathfrak{E}}_{n,S}$. This
will imply the desired result, since it will show that~${\mathfrak{A}}_n$ has a
primitive prime divisor, and indeed a primitive prime divisor not
lying in~$S$.
\par
Suppose that~${\mathfrak{p}}\notin S$ divides~${\mathfrak{A}}_i$ for some~$i<n$.
Then~$r_{\mathfrak{p}}\le i<n$, so Lemma~\ref{lemma:pdivgrowth}
tells us that
\[
\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{n-1}>0 \qquad\text{and}\qquad
\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_n = e\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{n-1}.
\]
Thus in the product~\eqref{eqn:prod1} defining~${\mathfrak{E}}_{n,S}$,
it suffices to multiply over the primes dividing~${\mathfrak{A}}_{n-1}$, and we
find that
\[
{\mathfrak{E}}_{n,S} = \prod_{{\mathfrak{p}}\notin S,\;{\mathfrak{p}}\mid{\mathfrak{A}}_{n-1}} {\mathfrak{p}}^{\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_n}
= \prod_{{\mathfrak{p}}\notin S,\;{\mathfrak{p}}\mid{\mathfrak{A}}_{n-1}} {\mathfrak{p}}^{e\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_{n-1}}
= {\mathfrak{A}}_{n-1,S}^e.
\]
The upper bound in Lemma~\ref{lemma:htvslognorm}(a), applied
to~${\mathfrak{A}}_{n-1}$, gives the estimate
\begin{align}
\label{eqn:lognorm1}
\log{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{E}}_{n,S}
& = e\log{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}_{n-1,S}\notag\\
&\le e\log {\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}_{n-1} \notag\\
&\le [K:\mathbb{Q}]ed^{n-1}{\hat h}_\varphi(\a) + O(1),
\end{align}
where the~$O(1)$ depends only on~$\varphi$,~$\a$, and~$K$.
\par
To obtain the complementary lower bound, we apply
Lemma~\ref{lemma:htvslognorm}(b) to~${\mathfrak{A}}_n$ with~$\epsilon=\frac{1}{2d}$. This
gives
\begin{equation}
\label{eqn:lognorm2}
\log{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}_{n,S}
= \log{\operatorname{\mathsf{N}}}_S{\mathfrak{A}}_n
\ge [K:\mathbb{Q}](1-\tfrac{1}{2d})d^n{\hat h}_\varphi(\a),
\end{equation}
valid for~$n\ge n_0(S,\varphi,\a)$.
\par
Finally, we combine~\eqref{eqn:lognorm1} and~\eqref{eqn:lognorm2}
to obtain
\begin{align*}
\log\frac{{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{A}}_{n,S}}{{\operatorname{\mathsf{N}}}_{K/\mathbb{Q}}{\mathfrak{E}}_{n,S}}
&\ge [K:\mathbb{Q}]\left(1-\frac{1}{2d}\right)d^n{\hat h}_\varphi(\a)
- [K:\mathbb{Q}]ed^{n-1}{\hat h}_\varphi(\a) - O(1) \\
&\ge \frac12[K:\mathbb{Q}]d^{n-1}{\hat h}_\varphi(\a) - O(1),
\quad\text{since $e<d$ from~\eqref{0lteltd}.}
\end{align*}
The right-hand side is positive for all sufficiently large~$n$,
which completes the proof that~${\mathfrak{A}}_n$ has a primitive divisor
for all sufficiently large~$n$, and
hence that the Zsigmondy set~${\mathcal Z}\bigl(({\mathfrak{A}}_n)_{n\ge1}\bigr)$ is
finite.
\end{proof}
\begin{example}
\label{example:emptyZset}
It is easy to construct specific examples, and even families of examples,
for which one can find the full dynamical Zsgimondy set by an
elementary calculation. We illustrate with one such example,
see~\cite{flatters07,rice07} for others.
\par
Let~$\varphi(z)=z^2+z$, let~$\gamma=0$, and let~$\a\in\mathbb{Q}$ with~$\a>0$.
Then for all~$n\ge1$ we have~$\gcd(A_{n-1},B_{n-1})=1$ and
\[
\frac{A_{n}}{B_{n}} = \frac{A_{n-1}^2+A_{n-1}B_{n-1}}{B_{n-1}^2},
\]
so~$A_{n}=A_{n-1}^2+A_{n-1}B_{n-1}$ and~$B_{n}=B_{n-1}^2$. In particular,
\[
A_0\mid A_1\mid A_2\mid\cdots
\]
so~$A_n$ has a primitive prime divisor if and only if there is a
prime~$p\mid A_n$ with~$p\nmid A_{n-1}$. But
from~$A_{n}=A_{n-1}(A_{n-1}+B_{n-1})$, this means that~$A_n$ has a
primitive prime divisor if and only if~$A_{n-1}+B_{n-1}>1$. This is
true for every~$n\ge1$, so it follows
that~${\mathcal Z}\bigl((A_n)_{n\ge1}\bigr)=\emptyset$.
\end{example}
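The conclusion is easy to check numerically; a minimal sketch (with the
hypothetical starting value $\a=3/5$) that lists the primitive prime
divisors appearing at each step:
\begin{verbatim}
# Sketch: phi(z) = z^2 + z, alpha = 3/5; then A_{n+1} = A_n (A_n + B_n)
# and B_{n+1} = B_n^2, and every A_n acquires a primitive divisor.
def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d); n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

A, B, seen = 3, 5, set()
for n in range(6):
    fs = prime_factors(A)
    print(f"n={n}: primitive prime divisors of A_n:", sorted(fs - seen))
    seen |= fs
    A, B = A * (A + B), B * B
\end{verbatim}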
\section{Questions and Speculations}
\label{section:speculations}
It is interesting to ask to what extent one can relax or modify the
hypotheses in Theorem~\ref{thm:dynBZ}. We discuss this question in a
series of remarks.
\begin{remark}
Presumably Theorem~\ref{thm:dynBZ} remains true if we only assume
that~$\gamma$ is preperiodic, i.e., has a finite $\varphi$-orbit, rather than
assuming that~$\gamma$ is periodic. However, some care is needed in order
to generalize the argument, so we have not pursued it here. We observe
that Rice's main result~\cite[Theorem~1.1]{rice07}
for monic~$\varphi(z)\in\mathbb{Z}[z]$ and~$\a\in\mathbb{Z}$ permits~$\gamma$ to be preperiodic,
albeit in a situation where it is relatively easy to classify all
situations for which there exist nonperiodic preperiodic points.
\end{remark}
\begin{remark}
\label{remark:localvsglobal}
If~${\mathfrak{A}}_n$ is a classical divisibility sequence associated to a
(possibly twisted) multiplicative group or to an elliptic curve, then
the higher order~${\mathfrak{p}}$ divisibility can be can be described quite
precisely via properties of the formal group of the group
variety. Roughly speaking, if~${\mathfrak{A}}_r$ is the first term divisible
by~${\mathfrak{p}}$, then
\begin{equation}
\label{eqn:classicalordp}
\operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_{kr}) = \operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_r) + \operatorname{ord}_{\mathfrak{p}}(k)
\quad\text{for all $k\ge1$,}
\end{equation}
and no other terms are divisible by~${\mathfrak{p}}$. The proof
of~\eqref{eqn:classicalordp} is essentially ${\mathfrak{p}}$-adic in nature, and
indeed it holds for classical divisibility sequences defined over the
${\mathfrak{p}}$-adic completion~$K_{\mathfrak{p}}$ of~$K$. We note that the
estimate~\eqref{eqn:classicalordp} forms an essential, albeit
reasonably elementary, component of the proof that classical Zsigmondy
sets are finite.
\par
In the proof of our main theorem, Lemma~\ref{lemma:pdivgrowth}
plays the role of~\eqref{eqn:classicalordp}, but note
that Lemma~\ref{lemma:pdivgrowth}, which implies that
\[
\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_k = e^{k-r}\operatorname{ord}_{\mathfrak{p}}{\mathfrak{A}}_r,
\]
is only valid for primes outside a certain bad set~$S$. It is natural
to ask if the dynamical analog of~\eqref{eqn:classicalordp} is true
over~$K_{\mathfrak{p}}$, since that might lead to a better understanding of the
underlying dynamics. The answer is probably not. Rob Benedetto
(private communication) has sketched an argument using ideas
from~\cite{MR1941304,MR2285248} which suggests that there are
dynamical examples over~$\mathbb{Q}_p$ such that~$\operatorname{ord}_p{\mathfrak{A}}_n$ grows
extremely rapidly, for example faster than~$O(d^n)$, or even faster
than~$O(2^{d^n})$.
\par
This is in marked contrast to the situation over a number field,
where the elementary height estimate
\begin{multline*}
\operatorname{ord}_{\mathfrak{p}}\varphi^n(\a)\log {\operatorname{\mathsf{N}}}{\mathfrak{p}}
\le\log {\operatorname{\mathsf{N}}}{\mathfrak{A}}_n
\le [K:\mathbb{Q}]h\bigl(\varphi^n(\a)\bigr) \\
= [K:\mathbb{Q}]{\hat h}_\varphi\bigl(\varphi^n(\a)\bigr)+O(1)
= d^n[K:\mathbb{Q}]{\hat h}_\varphi(\a)+O(1)
\end{multline*}
shows that $\operatorname{ord}_{\mathfrak{p}}\varphi^n(\a)$ cannot grow faster than~$O(d^n)$.
\end{remark}
\begin{remark}
\label{remark:gwandering}
If we change the assumptions of Theorem~\ref{thm:dynBZ} to make~$\gamma$
a wandering point, then Zsigmondy-type theorems appear to be
more difficult to prove. Note that when stripped to their essentials,
proofs of Zsigmondy-type theorems have two main ingredients:
\begin{enumerate}
\item
Prove that the sequence~${\mathfrak{A}}_n$ grows very rapidly.
\item
Prove that once a prime~${\mathfrak{p}}$ divides some term in the sequence, it
cannot divide later terms to an extremely high power.
\end{enumerate}
Condition~1 is a global condition, and it is true for both preperiodic
and wandering~$\gamma$. This may be proved
using~\cite[Theorem~E]{MR1240603} as we did in the proof of
Lemma~\ref{lemma:htvslognorm}. Indeed, the proof of
Lemma~\ref{lemma:htvslognorm} only uses the assumption that~$\varphi$
is not of polynomial type at~$\gamma$, it does not require that~$\gamma$
be a periodic point.
\par
Thus the difficulty in proving a Zsigmondy-type theorem for
wandering~$\gamma$ is Condition~(2). As noted in
Remark~\ref{remark:localvsglobal}, classical multiplicative or elliptic
divisibility sequences satisfy
\[
\operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_{kr}) = \operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_r) + \operatorname{ord}_{\mathfrak{p}}(k),
\]
where~$r$ is the rank of apparition at~${\mathfrak{p}}$. However, in the
dynamical setting there is no analogous uniform result, and indeed it
is easy to construct examples in which $\operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_{kr})$ is
arbitrarily larger than~$\operatorname{ord}_{\mathfrak{p}}({\mathfrak{A}}_r)$. For example, let
\[
\gamma=0,\quad
\a=p,\quad\text{and}\quad
\varphi(z)=z^2-pz+p^e.
\]
Then
\[
\operatorname{ord}_p({\mathfrak{A}}_0)=\operatorname{ord}_p(\a)=1
\quad\text{and}\quad
\operatorname{ord}_p({\mathfrak{A}}_1)=\operatorname{ord}_p(p^e)=e.
\]
\end{remark}
\begin{remark}
\label{remark:a=g}
Our main theorem is really about primes~${\mathfrak{p}}$ such that the reduction
modulo~${\mathfrak{p}}$ of a wandering point~$\a$ coincides with a given periodic
point~$\gamma$. This is a natural question, but it is possibly not the
most natural generalization of the classical multipicative and
elliptic divisibility sequences. In the classical case, one studies
the order of~$\a$ modulo~${\mathfrak{p}}$ for varying primes ideals~${\mathfrak{p}}$, which
means the order of~$\a\bmod{\mathfrak{p}}$ as a torsion point in the underlying
group. The dynamical analog of the set of torsion points
is the set of preperiodic points, so
a natural dynamical analog of the classical Zsigmondy theorems
would be to look at primitive prime divisors in the sequence
\[
\varphi^n(\a) - \a,
\qquad\text{or, more generally,}\quad
\varphi^{m+n}(\a)-\varphi^m(\a).
\]
We formulate two primitive divisor conjectures, analogous to
Theorem~\ref{thm:dynBZ}, corresponding to these situations.
\end{remark}
\begin{conjecture}
\label{conj:weakconj}
Let~$K$ be a number field, let~$\varphi(z)\in
K(z)$ be a rational function of degree~$d\ge2$,
and let~$\a\in K$ be a $\varphi$-wandering point.
For each~$n\ge1$, write the ideal
\[
\bigl(\varphi^n(\a)-\a\bigr) = {\mathfrak{A}}_n{\mathfrak{B}}_n^{-1}
\]
as a quotient of relatively prime integral ideals. Then the dynamical
Zsigmondy set ${\mathcal Z}\bigl(({\mathfrak{A}}_n)_{n\ge1}\bigr)$ is finite.
\end{conjecture}
Note that as discussed in Remark~\ref{remark:gwandering}, there is no
problem with the growth of~${\operatorname{\mathsf{N}}}{\mathfrak{A}}_n$. However, there is a
potential problem of primes reappearing in the sequence to very high
powers. For example, if we take~$\varphi(z)=p-z+p^ez^2$ and~$\a=0$, then
\[
\varphi(\a) = p\quad\text{and}\quad \varphi^2(\a)=\varphi(p)=p^{e+2}.
\]
In order to state the second conjecture, we need to extend the
definition of Zsigmondy sets to doubly indexed sequences.
\begin{definition}
Let
\[
({\mathfrak{A}}_{m,n})_{\substack{m\ge1\\n\ge0\\}}
\]
be a doubly indexed sequence of ideals. We say that~${\mathfrak{p}}$ is a
\emph{primitive prime divisor} of~${\mathfrak{A}}_{m,n}$ if
\[
{\mathfrak{p}}\mid{\mathfrak{A}}_{m,n}
\quad\text{and}\quad
{\mathfrak{p}}\nmid{\mathfrak{A}}_{i,j}
\quad\text{for all $i,j$ with $0\le i<m$ or $1\le j<n$.}
\]
We then define the \emph{Zsigmondy set~${\mathcal Z}({\mathfrak{A}}_{m,n})$} to be the set
\[
\bigl\{(m,n):\text{$n\ge1$, $m\ge0$, and
${\mathfrak{A}}_{m,n}$ has no primitive divisors}\bigr\}.
\]
\end{definition}
\begin{conjecture}
\label{conj:strongconj}
Let~$K$ be a number field, let~$\varphi(z)\in K(z)$ be a rational function
of degree~$d\ge2$, and let~$\a\in K$ be a $\varphi$-wandering point. For
each~$n\ge1$ and~$m\ge0$, write the ideal
\begin{equation}
\label{eqn:fmnafma}
\bigl(\varphi^{m+n}(\a)-\varphi^m(\a)\bigr) = {\mathfrak{A}}_{m,n}{\mathfrak{B}}_{m,n}^{-1}
\end{equation}
as a quotient of relatively prime integral ideals. Then the dynamical
Zsigmondy set ${\mathcal Z}\bigl({\mathfrak{A}}_{m,n}\bigr)$ is finite.
\end{conjecture}
\begin{remark}
In the classical multiplicative and elliptic Zsigmondy theorems, every
prime divides some term of the sequence, but this is (probably) not
true for (most) dynamically defined sequences. In some nontrivial
dynamical cases, various authors~\cite{jones:denprimdiv,MR805714,MR813379}
have proven that
\[
\bigl\{{\mathfrak{p}}:{\mathfrak{p}}\mid{\mathfrak{A}}_n~\text{for some $n$}\bigr\}
\]
is a set of density~$0$. See also~\cite{MR917803} for a weak lower
bound on the number of primes in this set.
\par
On the other hand, it is clear that every prime divides at least one
term in the doubly indexed dynamical sequence~${\mathfrak{A}}_{m,n}$ defined
by~\eqref{eqn:fmnafma}. In particular, if~$\varphi$ has good reduction
at~${\mathfrak{p}}$, then~${\mathfrak{p}}\mid{\mathfrak{A}}_{m,n}$ if and only if the orbit of~$\a$
modulo~${\mathfrak{p}}$ has a tail of length~$m$ and a cycle of length~$n$.
\end{remark}
\section{Large covariance matrix estimation via log-det heuristics}
\section{Introduction}
\label{Intro}
Estimating high-dimensional covariance or precision matrices has become a
crucial task nowadays, due to the increasing availability of datasets composed of
a large number of variables $p$ compared to the sample size $n$ in many fields, like
economics, finance, biology, genetics, health, climatology, and social sciences.
The consistency of estimated covariance matrices is a prerequisite
to perform several statistical procedures in high dimensions like principal component analysis,
cluster analysis, graphical model inference, among others.
Recent books on this relevant topic are \cite{pourahmadi2013high} and \cite{zagidullina2021high},
while recent comprehensive reviews include \cite{fan2016overview}, \cite{wainwright2019high},
\cite{lam2020high}, \cite{ledoit2021shrinkage}.
The wide range of techniques developed to overcome the estimation issues
in high dimensions now provides several state-of-the-art solutions,
but also leaves room to further improve the estimation process in many directions.
Although the theory of covariance matrix estimation for low-dimensional Gaussian data
was developed in the fifties by pioneering contributions \citep{anderson1958introduction},
it soon became apparent that the sample covariance matrix $\mf{\Sigma}_n$ is not a reliable estimator
of the true covariance matrix $\mf{\Sigma}^{*}$
when $p/n\rightarrow 1^{-}$, and is not a consistent estimator of $\mf{\Sigma}^{*}$ when $p/n \geq 1$.
As explained in \cite{ledoit2004well}, in fact, sample eigenvalues may be
severely overdispersed when $p/n\rightarrow 1^{-}$, as they follow the Mar$\check{\mathrm{c}}$enko-Pastur law
\citep{marvcenko1967distribution}. This leads $\mf{\Sigma}_n$ to be more and more numerically unstable under that condition.
When $p/n \geq 1$, instead, $p-n$ sample eigenvalues are null,
thus irremediably affecting the consistency of $\mf{\Sigma}_n$.
In addition, a large $p$ easily leads to identifiability issues for $\mf{\Sigma}^{*}$,
as the number of parameters to be recovered grows quadratically in $p$ in the absence of further assumptions.
A first relevant idea to address the shortcomings of $\mf{\Sigma}_n$ was to exploit Stein's loss
(\cite{stein1975estimation,stein1986lectures,dey1985estimation}).
Charles Stein's seminal idea was to keep the sample eigenvectors fixed,
and to recondition sample eigenvalues by applying some shrinkage function to them,
in order to achieve consistency.
To be well founded, this approach requires the adoption of a `double asymptotics'
framework, where both $p$ and $n$ are allowed to grow.
In this respect, we are indebted to \cite{wigner1955}, who first developed
a comprehensive random matrix theory in high dimensions.
At the same time, it was proved only in \cite{el2008spectrum}
that the individual eigenvalues of a large covariance matrix
can be successfully estimated.
Eigenvalue shrinkage has been later extended by Olivier Ledoit and Micheal Wolf,
who proposed linear and nonlinear shrinkage of sample eigenvalues, and
derived a unified high-dimensional asymptotic framework for large covariance matrices
\citep{ledoit2004well,ledoit2012nonlinear,ledoit2015spectrum}.
A later eigenvalue shrinkage methodology based on sample splitting
has been proposed by \cite{lam2016nonparametric}.
Eigenvalue shrinkage is able to improve sample eigenvalues as much as possible,
while \cite{ledoit2011eigenvectors} shows that sample eigenvectors may be far from the corresponding true ones,
although \cite{ledoit2021shrinkage} provides a measure that quantifies the discrepancy between sample and true eigenvectors.
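As an illustration of the practical effect of eigenvalue shrinkage, the
following minimal sketch compares the conditioning of the sample covariance
matrix with that of the Ledoit--Wolf linear shrinkage estimator, as implemented
in scikit-learn, on synthetic Gaussian data:
\begin{verbatim}
# Sketch: sample covariance versus Ledoit-Wolf linear shrinkage
# when p is close to n (true covariance: identity).
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 120, 100
X = rng.standard_normal((n, p))

S_n = np.cov(X, rowvar=False)        # sample covariance
lw  = LedoitWolf().fit(X)            # linear shrinkage estimator

print("condition number, sample:   ", np.linalg.cond(S_n))
print("condition number, shrinkage:", np.linalg.cond(lw.covariance_))
print("estimated shrinkage intensity:", lw.shrinkage_)
\end{verbatim}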
A second relevant approach to large covariance matrix estimation passes by the
assumption of a specific structure for $\mf{\Sigma}^{*}$, in order to drastically reduce the number of parameters.
One option is to assume a factor model structure.
The spiked covariance model, introduced in \cite{johnstone2001distribution}
and recovered in \cite{johnstone2009consistency},
is a successful attempt of this kind.
Linear eigenvalue shrinkage under the same model has been proposed in \cite{donoho2018optimal}.
Another option is to assume some type of sparsity.
Under that assumption, depending on the underlying structure,
it has been proposed to recover $\mf{\Sigma}^{*}$ by
applying hard-thresholding \citep{bickel2008regularized}, soft-thresholding \citep{bickel2008covariance},
generalized thresholding \citep{rothman2009generalized}, or adaptive thresholding \citep{cai2011adaptive}.
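A minimal sketch of this family of estimators, with soft-thresholding of the
off-diagonal entries of $\mf{\Sigma}_n$ at a universal level (the choice
$\tau=\sqrt{\log p/n}$ below is only rate-inspired; in practice the level is
calibrated, e.g.\ by cross-validation):
\begin{verbatim}
# Sketch: soft-thresholding of the off-diagonal entries of the sample
# covariance matrix at a universal level tau (illustrative only).
import numpy as np

def soft_threshold_cov(S_n, tau):
    S = np.sign(S_n) * np.maximum(np.abs(S_n) - tau, 0.0)
    np.fill_diagonal(S, np.diag(S_n))    # leave the diagonal untouched
    return S

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
S_hat = soft_threshold_cov(np.cov(X, rowvar=False), np.sqrt(np.log(p) / n))
print("nonzero off-diagonal entries:", int((S_hat != 0).sum()) - p)
\end{verbatim}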
These two types of algebraic structure enforced in $\mf{\Sigma}^{*}$ may both be too restrictive.
In fact, as pointed out by \cite{giannone2021economic} for economic data, it is likely that
the sparsity assumption is too strong in high dimensions, as the interrelation structure among the variables
is actually more dense than sparse.
At the same time, a strict factor model does not allow for any idiosyncratic covariance structure
that may catch specific pairs of extra-correlated variables beyond the factors.
As a consequence, it became clear that conditional sparsity with respect to an underlying factor model,
that is the approximate factor model \citep{chamberlain1983arbitrage},
can effectively merge factor model and sparsity assumptions,
being a parsimonious and flexible approach at the same time.
A covariance matrix estimator assuming conditional sparsity is POET \citep{fan2013large},
that proposes to threshold the principal orthogonal complement to obtain a consistent solution.
Conditional sparsity can be imposed by assuming for $\mf{\Sigma}^{*}$ a low rank plus sparse decomposition, that is \begin{equation}\mf{\Sigma}^{*}=\mf{L}^{*}+\mf{S}^{*}=\mf{B}\mf{B}'+\mf{S}^{*},\label{mod_l+s}\end{equation}
where $\mf{L}^{*}=\mf{B}\mf{B}'=\mf{U}_L \mf{\Lambda}_L \mf{U}_L'$,
with $\mf{U}_L$ $p\times r$ semi-orthogonal matrix
and $\mf{\Lambda}_L$ $r\times r$ diagonal positive definite matrix, and
$\mf{S}^{*}$ is a positive definite and element-wise sparse matrix,
containing only $s \ll \frac{p(p-1)}{2}$ off-diagonal non-zero elements.
Structure \eqref{mod_l+s} has become the reference model
under which several high-dimensional covariance matrix estimators work.
It can be recovered by nuclear norm plus $l_1$ penalization, that is by solving
\begin{equation}
\left(\widehat{\mf{L}},\widehat{\mf{S}}\right)=
\arg\min_{\mf{L},\mf{S}}
\mathfrak{L}(\mf{L},\mf{S})+\mathfrak{P}(\mf{L},\mf{S}),\label{obj_all}
\end{equation}
where $\mathfrak{P}(\mf{L},\mf{S})=\psi \Vert\mf{L}\Vert_{*}+\rho \Vert\mf{S}\Vert_{1}$,
$\Vert\mf{L}\Vert_{*}=\sum_{i=1}^p \lambda_i(\mf{L})$ is the nuclear norm of $\mf{L}$,
i.e. the sum of the eigenvalues of $\mf{L}$, $\Vert\mf{S}\Vert_{1}=\sum_{i=1}^{p} \sum_{j=1}^p \vert\mf{S}_{ij}\vert$,
$\psi$ and $\rho$ are non-negative threshold parameters, and $\mathfrak{L}(\mf{L},\mf{S})$ is a smooth loss function.
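The two penalty terms in $\mathfrak{P}(\mf{L},\mf{S})$ admit closed-form proximal
operators, which are the building blocks of most solvers for \eqref{obj_all};
a minimal sketch (an illustrative alternating proximal-gradient step for the
Frobenius loss, not the exact algorithm of any of the papers cited here):
\begin{verbatim}
# Sketch: proximal operators of the nuclear and l1 norms, plus one
# alternating proximal-gradient step on the smooth Frobenius loss
# 0.5 * ||Sigma_n - (L + S)||_F^2 (t: step size; psi, rho: thresholds).
import numpy as np

def prox_nuclear(M, psi):
    # singular value soft-thresholding: prox of psi * ||.||_*
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - psi, 0.0)) @ Vt

def prox_l1(M, rho):
    # entry-wise soft-thresholding: prox of rho * ||.||_1
    return np.sign(M) * np.maximum(np.abs(M) - rho, 0.0)

def prox_step(L, S, Sigma_n, psi, rho, t=0.5):
    R = (L + S) - Sigma_n            # gradient of the smooth loss
    return prox_nuclear(L - t * R, t * psi), prox_l1(S - t * R, t * rho)
\end{verbatim}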
The nuclear norm was first proposed in \cite{fazel2001rank} as an alternative to PCA. \cite{fazel2002matrix} furnishes a proof that $\psi \Vert\mf{L}\Vert_{*}+\rho \Vert\mf{S}\Vert_{1}$ is the tightest convex relaxation of the original non-convex penalty $\psi\mathrm{rk}(\mf{L})+\rho\Vert \mf{S}\Vert _{0}$.
\cite{donoho2006most} proves that the $l_1$ norm minimization provides the sparsest solution to most large underdetermined linear systems, while \cite{recht2010guaranteed} proves that the nuclear norm minimization provides guaranteed rank minimization under a set of linear equality constraints.
\cite{candes2009near} shows that $l_1$ norm minimization selects the best linear model in a wide range of situations.
The nuclear norm has instead been used to solve large matrix completion problems, like in
\cite{srebro2005maximum}, \cite{candes2010power}, \cite{mazumder2010spectral}, and \cite{hastie2015matrix}.
Nuclear norm plus $l_1$ norm minimization was first exploited in \cite{candes2011robust} to provide
a robust version of PCA under grossly corrupted or missing data.
Heuristics \eqref{obj_all} has generated a stream of literature where new covariance matrix estimators in high dimensions are derived.
\cite{agarwal2012noisy} ensures optimal rates for the solutions of \eqref{obj_all} under model \eqref{mod_l+s} via a purely analytical approach, providing the approximate recovery of $\mf{\Sigma}^{*}$ and a bounded non-identifiability radius for $\mf{L}^{*}$.
In \cite{chandrasekaran2012}, a latent graphical model structure, based on a sparse minus low rank decomposition for $\mf{\Sigma}^{*}$,
is learnt by solving a problem of type \eqref{obj_all}, providing both parametric and algebraic consistency,
where the latter holds if (i) the low rank estimate $\widehat{\mf{L}}$ is positive semidefinite with the true rank $r$, (ii) the residual estimate $\widehat{\mf{S}}$ is positive definite with the true sparsity pattern, and (iii) $\widehat{\mf{\Sigma}}=\widehat{\mf{L}}+\widehat{\mf{S}}$ is positive definite.
The key for this result is to control the algebraic features of
the low rank and sparse matrix varieties containing $\mf{L}^{*}$ and $\mf{S}^{*}$ respectively,
because the low rank variety is locally curved, and so it may be very sensitive to small perturbations in $\mf{\Sigma}_n$.
The nuclear norm plus $l_1$ norm penalty $\mathfrak{P}(\mf{L},\mf{S})$ had previously been exploited in \cite{chandrasekaran2011rank}
to provide exact recovery for structure \eqref{mod_l+s} in absence of noise.
Building on \cite{chandrasekaran2012}, \cite{luo2011high} derives the LOREC estimator under model \eqref{mod_l+s} for $\mf{\Sigma}^{*}$
via objective \eqref{obj_all} with $\mathfrak{L}(\mf{L},\mf{S})=\mathfrak{L}^{(F)}(\mf{L},\mf{S})$,
$\mathfrak{L}^{(F)}(\mf{L},\mf{S})=\frac{1}{2}\Vert \mf{\Sigma}_n-(\mf{L}+\mf{S})\Vert_{F}^2$.
In \cite{farne2020large}, the UNALCE estimator is derived. UNALCE is based on problem \eqref{obj_all} with $\mathfrak{L}(\mf{L},\mf{S})=\frac{1}{2}\Vert \mf{\Sigma}_n-(\mf{L}+\mf{S})\Vert_{F}^2$,
like LOREC, but overcomes LOREC deficiencies by developing a random matrix theory result that
holds under a wide range of approximate factor models and accounts for the high-dimensional case $p \geq n$.
UNALCE is both algebraically consistent in the sense of \cite{chandrasekaran2012} and
parametrically consistent in Frobenius and spectral norm.
Although UNALCE is the optimal estimator in finite sample
when $\mathfrak{L}(\mf{L},\mf{S})=\mathfrak{L}^{(F)}(\mf{L},\mf{S})$
in terms of Frobenius loss, there is room to improve it further by replacing
$\mathfrak{L}^{(F)}(\mf{L},\mf{S})$ with a different loss. The Frobenius loss in fact optimizes the
entry-by-entry performance of $\widehat{\mf{\Sigma}}$. A loss able to explicitly control
the spectrum estimation quality may be desirable, in order to ensure
the algebraic consistency of $\left(\widehat{\mf{L}},\widehat{\mf{S}}\right)$,
while simultaneously optimizing the estimated spectrum in terms of distance from the true one.
The loss $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})=\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$,
$\mf{\Delta}_{n}=\mf{\Sigma}-\mf{\Sigma}_n$, $\mf{\Sigma}=\mf{L}+\mf{S}$, is a possible one satisfying
these needs, because it is controlled by the individual singular values of $\mf{\Delta}_n$, since
\begin{equation}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')={\log} \prod_{i=1}^{p}(\lambda_i(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}'))\leq
\sum_{i=1}^{p}(1+\lambda_i(\mf{\Delta}_n)^2)=p+\sum_{i=1}^{p}(\lambda_i(\mf{\Delta}_n)^2),\label{eigen}
\end{equation}
thus providing an intrinsic eigenvalue correction.
Inequality \eqref{eigen} trivially holds when $\mf{\Delta}_n \mf{\Delta}_n'$ is diagonal, and it can be proved in the general case by
diagonalizing the positive semi-definite symmetric matrix $\mf{\Delta}_n\mf{\Delta}_n'$.
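A quick numerical check of inequality \eqref{eigen} on a random symmetric error matrix can be carried out as follows; the dimension and the entry distribution are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = 30
M = rng.standard_normal((p, p))
Delta = (M + M.T) / 2                      # symmetric error matrix
lam = np.linalg.eigvalsh(Delta)

lhs = np.linalg.slogdet(np.eye(p) + Delta @ Delta.T)[1]
rhs = p + np.sum(lam**2)
assert lhs <= rhs                          # inequality (eigen)
\end{verbatim}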
Our estimator pair therefore becomes
\begin{equation}\label{obj}
\left(\widehat{\mf{L}},\widehat{\mf{S}}\right)=
\arg\min_{\mf{L},\mf{S}}
\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')
+\psi \Vert\mf{L}\Vert_{*}+\rho \Vert\mf{S}\Vert_{1}.
\end{equation}
Problem \eqref{obj} minimizes a loss controlled by the Euclidean norm of the eigenvalues of the error matrix, while simultaneously minimizing the latent rank and the residual support size, of which the nuclear norm and the $l_1$ norm are the tightest convex relaxations, respectively (see \cite{fazel2002matrix}).
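For later reference, the objective of \eqref{obj} is straightforward to evaluate numerically; the sketch below assumes $\mf{L}$ symmetric positive semi-definite, so that its nuclear norm reduces to the sum of its eigenvalues.
\begin{verbatim}
import numpy as np

def logdet_objective(L, S, Sigma_n, psi, rho):
    """Value of 0.5*logdet(I + D D') + psi*||L||_* + rho*||S||_1,
    with D = (L + S) - Sigma_n and L assumed psd."""
    p = Sigma_n.shape[0]
    D = (L + S) - Sigma_n
    smooth = 0.5 * np.linalg.slogdet(np.eye(p) + D @ D.T)[1]
    nuclear = np.linalg.eigvalsh(L).sum()  # = sum of eigenvalues for psd L
    l1 = np.abs(S).sum()
    return smooth + psi * nuclear + rho * l1
\end{verbatim}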
The mathematical properties of $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$ have been extensively studied in \cite{stats5030037}. The relevant challenge is that $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$ is only locally convex, i.e. it is convex within a specific range of $\mf{\Delta}_{n}$. In this paper, we exploit this fact to propose a proximal gradient algorithm to compute \eqref{obj}
and to prove algebraic and
parametric consistency for the estimates of $\mf{L}^{*}$, $\mf{S}^{*}$, and $\mf{\Sigma}^{*}$
obtained in this way. To this end, we recall the first and the second derivatives of
$\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$, its local convexity region, and the Lipschitzianity
of $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$ and of its gradient.
The remainder of the paper is structured as follows.
In Section \ref{model} we define our model, detailing the necessary assumptions to prove consistency.
Section \ref{math_anal} presents the recalled mathematical analysis results
related to $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})=\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$.
Section \ref{Cons_both} presents how the adoption of $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$ as smooth term in \eqref{obj}
impacts algebraic consistency. Section \ref{Cons_yes} establishes the algebraic and parametric consistency of $\left(\widehat{\mf{L}},\widehat{\mf{S}}\right)$.
Section \ref{Alg} presents a solution algorithm for problem \eqref{obj}.
Section \ref{sim} reports a wide simulation study.
Section \ref{real} describes a real data example. In the end, some concluding remarks follow.
The proofs of mathematical results are reported in Appendix \ref{proofs}.
\subsection*{Notation}
Given a $p\times p$ symmetric positive semi-definite matrix $\mf{M}$,
we denote by $\lambda_i(\mf{M})$, $i\in\{1,\ldots, p\}$,
the eigenvalues of $\mf{M}$ in descending order.
To indicate that $\mf{M}$ is positive definite or semi-definite we use the notations:
$\mf{M} \succ 0$ or $\mf{M} \succeq 0$, respectively.
Then, we recall the following norm definitions:
\begin{enumerate}
\item[1.] Element-wise:
\begin{enumerate}
\item $l_{0}$ norm: $\Vert \mf{M}\Vert_{0}=\sum_{i=1}^p \sum_{j=1}^p \mathbbm{1}( \mf{M}_{ij}\ne 0)$, which is the total number of non-zeros;
\item $l_{1}$ norm: $\Vert \mf{M}\Vert_{1}=\sum_{i=1}^p \sum_{j=1}^p \vert \mf{M}_{ij}\vert$;
\item Frobenius norm: $\Vert \mf{M}\Vert_{F}=\sqrt{\sum_{i=1}^p \sum_{j=1}^p \mf{M}_{ij}^2}$;
\item Maximum norm: $\Vert \mf{M}\Vert_{\infty}=\max_{i\leq p, j \leq p} \vert \mf{M}_{ij}\vert$.
\end{enumerate}
\item[2.] Induced by vector:
\begin{enumerate}
\item $\Vert \mf{M}\Vert_{0,v}=\max_{i \leq p} \sum_{j \leq p} \mathbbm{1}( \mf{M}_{ij} \ne 0)$,
which is the maximum number of non-zeros per row--column,
defined as the maximum `degree' of $ \mf{M}$;
\item $\Vert \mf{M}\Vert_{1,v}=\max_{j \leq p} \sum_{i \leq p} \vert \mf{M}_{ij}\vert$;
\item Spectral norm: $\Vert \mf{M}\Vert_{2}=\lambda_{1}( \mf{M})$.
\end{enumerate}
\item[3.] Schatten:
\begin{enumerate}
\item Nuclear norm of $ \mf{M}$, here defined as the sum of the eigenvalues of $ \mf{M}$:
$\Vert \mf{M}\Vert_{*}=\sum_{i=1}^p \lambda_i(\mf{M})$.
\end{enumerate}
\end{enumerate}
The minimum nonzero off-diagonal element of $\mf{M}$ in absolute value is denoted as
$\Vert \mf{M} \Vert_{min,off}=\min_{\substack{1\le i,j \leq p\\i \ne j , \mf{M}_{ij} \ne 0}}{\vert \mf{M}_{ij} \vert}$.
Given a $p-$dimensional vector $\vf{v}$, we denote by $\Vert \vf{v} \Vert=\sqrt{\sum_{i=1}^p \vf{v}_i^2}$ the Euclidean norm of $\vf{v}$, by $\Vert \vf{v} \Vert_{\infty}=\max_{i=1,\ldots,p}{|\vf{v}_i|}$ the maximum norm of $\vf{v}$,
and by ${v}_{k,i}$ the $i$-th component of the indexed vector $\vf{v}_k$.
Given two sequences $A_\nu$ and $B_\nu$, $\nu \rightarrow \infty$,
we write $A_\nu=O(B_\nu)$ or $A_\nu \preceq B_\nu$, if there exists a positive real $C$ independent of
$\nu$ such that $A_\nu/B_\nu\leq C$, we write $B_\nu=O(A_\nu)$ or $A_\nu \succeq B_\nu$, if there exists a positive real $C$ independent of $\nu$ such that $B_\nu/A_\nu\leq C$,
and we write $A_\nu \simeq B_\nu$ if there exists a positive real $C$ independent of
$\nu$ such that $A_\nu/B_\nu\leq C$ and $B_\nu/A_\nu\leq C$.
Similarly,
we write $A_\nu=o(B_\nu)$ or $A_\nu \prec B_\nu$ if $A_\nu/B_\nu \to 0$ as $\nu \to \infty$,
and we write $B_\nu=o(A_\nu)$ or $A_\nu \succ B_\nu$ if $B_\nu/A_\nu \to 0$ as $\nu \to \infty$.
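For reference, the matrix norms above translate directly into code; the helper below assumes a symmetric positive semi-definite input, so that the spectral and nuclear norms reduce to eigenvalue computations.
\begin{verbatim}
import numpy as np

def norms(M):
    """Matrix norms of the Notation paragraph, for symmetric psd M."""
    w = np.linalg.eigvalsh(M)                         # ascending eigenvalues
    return {
        "l0":        int((M != 0).sum()),
        "l1":        np.abs(M).sum(),
        "frobenius": np.linalg.norm(M, "fro"),
        "max":       np.abs(M).max(),
        "deg":       int((M != 0).sum(axis=1).max()), # ||M||_{0,v}
        "l1v":       np.abs(M).sum(axis=0).max(),     # ||M||_{1,v}
        "spectral":  w[-1],                           # lambda_1(M)
        "nuclear":   w.sum(),                         # sum of eigenvalues
    }
\end{verbatim}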
\section{The model}
\label{model}
\subsection*{Definition}
We assume for the data the following model structure:
\noindent
\begin{equation}
\vf{x}=\mf{B}\vf{f}+\vf{\epsilon}\label{mod},
\end{equation}
where $\mf{B}$ is a $p \times r$ semi-orthogonal loading matrix
such that $\mf{B}'\mf{B}=\mf{I}_r$,
$\vf{f} \sim (\vf{0}_r,\mf{I}_r)$ is an $r\times 1$ random vector,
and $\vf{\epsilon} \sim (\vf{0}_p, \mf{S^{*}})$ is a $p\times 1$ random vector.
Assuming that $\mathrm{E}(\vf{f}\vf{\epsilon}')=\mf{0}_{r \times p}$,
we obtain that \begin{equation}\mf{\Sigma}^{*}:=\mathrm{E}(\vf{x}\vf{x}')=\mf{B}\mf{B}'+\mf{S^{*}}=\mf{L^{*}}+\mf{S^{*}},\label{cov}\end{equation}
where $\mf{L^{*}}=\mf{B}\mf{B}'$
is positive semi-definite with rank $r<p$, and
$\mf{S^{*}}$ is positive definite with $s<\frac{p(p-1)}{2}$ non-zero off-diagonal elements.
We define the sample covariance matrix as $\mf{\Sigma}_n=\frac{1}{n-1} \sum_{i=1}^n \vf{x}_i \vf{x}_i'$, where $\vf{x}_1,\ldots,\vf{x}_n$ are independent copies of $\vf{x}$.
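A minimal simulation sketch of model \eqref{mod} follows; Gaussian factors and residuals, a diagonal $\mf{S}^{*}$ and the loading scale are simplifying assumptions made for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p, r, n = 50, 3, 200
Q, _ = np.linalg.qr(rng.standard_normal((p, r)))
B = Q * np.sqrt(p)                           # loadings with a spiked scale
S_star = np.eye(p)                           # simplest sparse residual: diagonal

F = rng.standard_normal((n, r))              # f ~ (0, I_r), one row per draw
E = rng.multivariate_normal(np.zeros(p), S_star, size=n)
X = F @ B.T + E                              # x = B f + eps
Sigma_n = X.T @ X / (n - 1)                  # sample covariance (zero-mean model)
\end{verbatim}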
\subsection*{Factor model assumptions}
\label{Ass}
\begin{Ass}\label{eigenvalues}
\begin{inparaenum}
\item [(i)] The eigenvalues of the $r\times r$ matrix $p^{-\alpha_{1}}\mf{B}' \mf{B}$
are bounded away from $0$ for all $p\in\N$ such that
$\lambda_i(\mf{B}' \mf{B}) \simeq p^{\alpha_i}$, $i=1,\ldots,r$,
for some $\frac{1}{2} < {\alpha_r} \leq \ldots \leq {\alpha_{1}} \leq 1$;
\item [(ii)] $\Vert \vf{b}_j\Vert _{\infty}=O(1)$ for all $j=1,\ldots,p$
and $r$ is finite for all $p\in\N$.
\end{inparaenum}
\end{Ass}
Assumption \ref{eigenvalues} prescribes different speeds of divergence for latent eigenvalues,
and imposes a finite latent rank $r$. The lower bound $p^{\frac{1}{2}}$ is imposed on the divergence rate of the latent eigenvalues to preserve the
factor model structure as $p \to \infty$. Part (ii) could actually be relaxed to cope with
$r=O(\log p)$, but we avoid this for the sake of simplicity. The maximum loading magnitude is imposed to be bounded
as $p \to \infty$, in order to derive a bound for the maximum norm of $\mf{\Delta}_n$.
\begin{Ass}\label{sparsity}
\begin{inparaenum}
For all $p \in \N$:
\item [(i)] there exist $\delta_{1} \in (0,\frac{1}{2}]$ and $\delta_{2}>0$, such that $\Vert \mf{S}^{*}\Vert_{0,v}=\max_{1\le i \leq p}
\sum_{j =1}^p \mathbbm{1}(\mf{S}^{*}_{ij}\ne 0)\leq \delta_{2} p^{\delta_{1}}$;
\item [(ii)] $\Vert \mf{S}^{*} \Vert_{\infty} = O(1)$.
\end{inparaenum}
\end{Ass}
Assumption \ref{sparsity} controls the maximum number of non-zeros per row in the
residual covariance component $\mf{S}^{*}$ and imposes its maximum element to be $O(1)$
as $p\to \infty$. This allows us to establish the traditional eigengap between $\lambda_r(\mf{B}' \mf{B})$
and $\lambda_1(\mf{S}^{*})$ as $p\to\infty$, because
$\Vert \mf{S}^{*}\Vert_{2} \leq
\Vert\mf{S}^{*}\Vert_{1,v} \leq \Vert \mf{S}^{*}\Vert_{0,v} \Vert \mf{S}^{*} \Vert_{\infty} \leq \delta_{2} p^{\delta_{1}}$,
and $\delta_{1} \leq \frac{1}{2} < {\alpha_r}$ by Assumption \ref{eigenvalues}(i).
Part (i) is also needed to ensure the identifiability of the sparsity pattern in $\mf{S}^{*}$,
together with Assumption \ref{alg} (see later).
\begin{Ass}\label{tails}
In model \eqref{mod}, $\mathrm{E}(\vf{f})=\vf{0}_r$, $\mathrm{V}(\vf{f})=\mf{I}_r$,
$\mathrm{E}(\vf{\epsilon})=\vf{0}_p$, $\mathrm{V}(\vf{\epsilon})=\mf{S}^{*}$,
$\lambda_p(\mf{S}^{*})>0$, $\mathrm{E}(\vf{f}\vf{\epsilon}')=\vf{0}_{r \times p}$,
and there exist $b_{1},b_{2},c_{1},c_{2}>0$ such that, for any $l>0$, $k\leq n$, $i \leq r$, $j\leq p$:
\begin{eqnarray}
\Pr(\vert{f}_{k,i}\vert>l) \leq \exp\{-(l/{b_{1}})^{c_1}\}, \qquad \Pr(\vert{\epsilon}_{k,j}\vert>l) \leq \exp\{-(l/b_{2})^{c_2}\}. \nonumber
\end{eqnarray}
\end{Ass}
Assumption \ref{tails} completes the framework to make \eqref{mod} an approximate factor model,
and imposes exponential-type tails on factor scores and residuals. This ensures that all the moments of
$\vf{f}$ and $\vf{\epsilon}$ exist, and is crucial for applying large deviation results to $\vf{f}$ and $\vf{\epsilon}$.
Note that the magnitude of residual covariances (i.e., the off-diagonal entries of $\mf{S}^{*}$)
is controlled by Assumption \ref{sparsity}, and that $\mf{S}^{*}$ is imposed to be positive definite as $p\to\infty$.
\subsection*{Identifiability assumptions}
In order to ensure the effectiveness of the composite penalty
$\psi \Vert\mf{L}\Vert_{*}+\rho \Vert\mf{S}\Vert_{1}$
in recovering the latent rank $\mathrm{rk}(\mf{L}^{*})=r$ and
the residual number of nonzeros $\vert\mathrm{supp}(\mf{S}^{*})\vert=s$
(where $\mathrm{supp}(\mf{S})$ denotes the set of non-zero entries of $\mf{S}$ and
$\vert\mathrm{supp}(\mf{S}^{*})\vert$ its cardinality),
we need to control the geometric manifolds containing
$\mf{L}^{*}$ and $\mf{S}^{*}$.
As in \cite{chandrasekaran2011rank}, we assume $\mf{L}^{*}\in\mathcal{L}(r)$
and $\mf{S}^{*}\in\mathcal{S}(s)$, where
\begin{eqnarray}
\mathcal{L}(r) = \{\mf{L} \mid \mf{L} \succeq 0, {\mf{L}}={\mf{U}\mf{D}\mf{U}'}, \mf{U} \in \R^{p
\times r}, \mf{U}'\mf{U}=\mf{I}_r, \mf{D} \in \R^{r \times r} \mathrm{diagonal}\},\label{var:L}\\
\mathcal{S}(s) = \{\mf{S}\in \R^{p\times p} \mid \mf{S} \succ 0, \vert \mathrm{supp}(\mf{S})\vert \leq
s\}.\label{var:S}
\end{eqnarray}
$\mathcal{L}(r)$ is the variety of matrices with rank at most $r$,
$\mathcal{S}(s)$ is the variety of (element-wise) sparse matrices with
at most $s$ non-zero elements, and the two varieties
$\mathcal{L}(r)$ and $\mathcal{S}(s)$
can be disentangled if $\mf{L}^{*}$ is far from being sparse,
and $\mf{S}^{*}$ is far from being low rank.
For this reason, \cite{chandrasekaran2011rank} defines
the tangent spaces $T(\mf{L}^{*})$ and $\Omega(\mf{S}^{*})$ to $\mathcal{L}(r)$ and $\mathcal{S}(s)$ respectively,
and proposes the following rank-sparsity measures:
\begin{eqnarray}
\xi(\mathcal{T}(\mf{L}^{*})) & = & \max_{\mf{L} \in \mathcal{T}(\mf{L}^{*}), \Vert\mf{L}\Vert_{2} \leq 1} {\Vert\mf{L}\Vert_\infty},\label{xi}\\
\mu(\Omega(\mf{S}^{*})) & = &\max_{\mf{S} \in \Omega(\mf{S}^{*}),\Vert\mf{S}\Vert_\infty \leq 1}\
{\Vert\mf{S}\Vert_{2}}.\label{mu}
\end{eqnarray}
\begin{Ass}\label{lowerbounds}
Define $\psi_{0}=\frac{1}{\xi(\mathcal{T}(\mf{L}^{*}))}\sqrt{\frac{\log p}{n}}$.
There exist $\delta_L,\delta_S>0$ such that
\begin{inparaenum}
\item[(i)] the minimum eigenvalue of $\mf{L}^{*}$, $\lambda_r(\mf{L}^{*})$, is greater than
$\delta_L \frac{\psi_{0}}{\xi^2(\mathcal{T}(\mf{L}^{*}))}$,
\item[(ii)] the minimum absolute value of the non-zero off-diagonal entries of $\mf{S}^{*}$,
$\Vert\mf{S}^{*}\Vert_{min,off}$, is greater than $\delta_S \frac{\psi_{0}}{\mu(\Omega(\mf{S}^{*}))}$.
\end{inparaenum}
\end{Ass}
Assumption \ref{lowerbounds} is crucial for identifiability, as it guarantees
that the solution pair of \eqref{obj} lies on the ``right'' manifolds, i.e. that $\widehat{\mf{L}}\in\mathcal{L}(r)$ and
$\widehat{\mf{S}}\in\mathcal{S}(s)$ with high probability as $n\to\infty$.
Then, according to \cite{chandrasekaran2012}, the identifiability condition to be satisfied
requires a bound on $\xi(\mathcal{T}(\mf{L}^{*}))\mu(\Omega(\mf{S}^{*}))$
(see Section \ref{Cons_both} for more details).
For this reason, recalling from \cite{chandrasekaran2011rank} that
\begin{eqnarray}
\mathrm{inc}(\mf{L}^{*}) \leq \xi(\mathcal{T}(\mf{L}^{*})) \leq 2\,\mathrm{inc}(\mf{L}^{*}),\label{dualL}\\
\mathrm{deg}_{min}(\mf{S}^{*}) \leq \mu(\Omega(\mf{S}^{*})) \leq \mathrm{deg}_{max}(\mf{S}^{*}),\label{dualS}
\end{eqnarray}
with
\begin{itemize}
\item $\mathrm{inc}(\mf{L}^{*})=\max_{j=1,\ldots,p}\Vert \mathbb{P}_{\mf{L}^{*}} \vf{e}_j \Vert$, where $\vf{e}_j$ is the $j$-th canonical basis vector, and $\mathbb{P}_{\mf{L}^{*}}$ is the projection operator onto the row--column space of $\mf{L}^{*}$;
\item $\mathrm{deg}_{min}(\mf{S}^{*})=\min_{1\le i \leq p} \sum_{j =1}^p \mathbbm{1}(\mf{S}^{*}_{ij}\ne 0)$,
$\mathrm{deg}_{max}(\mf{S}^{*})=\max_{1\le i \leq p} \sum_{j =1}^p \mathbbm{1}(\mf{S}^{*}_{ij}\ne 0)=\Vert \mf{S}^{*}\Vert_{0,v}$;
\end{itemize}
we can control the degree of transversality of $\mathcal{L}(r)$ and $\mathcal{S}(s)$
by the following assumption.
\begin{Ass}\label{alg}
For all $p\in\mathbb N$, there exist $\kappa_L,\kappa_S>0$ with $\kappa_L\geq \frac{\sqrt{r}}{2}$ and $\kappa_S\leq\delta_{2}$, such that $\xi(\mathcal{T}(\mf{L}^{*}))=\frac{\sqrt r}{\kappa_L p^{\delta_{1}}}$ and $\mu(\Omega({\mf{S^{*}}}))=\kappa_S p^{\delta_{1}}$.
\end{Ass}
Assumption \ref{alg} states that the maximum degree of $\mf{S^{*}}$ is $O(p^{\delta_1})$,
where $\delta_1 < \alpha_r$ by Assumptions \ref{eigenvalues} and \ref{sparsity}.
Moreover, the incoherence of $\mf{L}^{*}$ is assumed to scale as $O(p^{-\delta_1})$, in order to keep the product $\xi(\mathcal{T}(\mf{L}^{*}))\mu(\Omega(\mf{S}^{*}))$ of order $O(1)$, which is crucial to prove algebraic consistency (see Theorem \ref{thm_main}).
This assumption resembles in nature the approximate factor model of \cite{chamberlain1983arbitrage}, because $\delta_1<\alpha_r$, so that the number of residual non-zeros becomes negligible with respect to the latent eigenvalues, and the manifold underlying $\mf{L}^{*}$
will be progressively easier to retrieve as $p\to\infty$.
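The bounds \eqref{dualL}--\eqref{dualS} suggest simple numerical proxies for $\xi(\mathcal{T}(\mf{L}^{*}))$ and $\mu(\Omega(\mf{S}^{*}))$; the helpers below are hypothetical, with an arbitrary rank tolerance.
\begin{verbatim}
import numpy as np

def incoherence(L, tol=1e-10):
    """inc(L) = max_j ||P_L e_j||, with P_L the orthogonal projector
    onto the row--column space of the symmetric matrix L."""
    w, V = np.linalg.eigh(L)
    U = V[:, np.abs(w) > tol]                # eigenvectors spanning the range
    P = U @ U.T
    return np.linalg.norm(P, axis=0).max()   # j-th column of P is P e_j

def degrees(S):
    """deg_min and deg_max of a sparse symmetric matrix S."""
    nz = (S != 0).sum(axis=1)
    return int(nz.min()), int(nz.max())
\end{verbatim}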
\noindent
\section{Analytic setup}\label{math_anal}
Let us reconsider our optimization problem
\begin{equation}\label{obj2}
\left(\widehat{\mf{L}},\widehat{\mf{S}}\right)=
\arg\min_{\mf{L},\mf{S}:\,\mf{L} \succeq 0,\, \mf{S} \succ 0,\, \mf{L}+\mf{S} \succ 0} \phi(\mf{L},\mf{S}),
\end{equation}
with $\phi(\mf{L},\mf{S})=\mathfrak{L}^{(ld)}(\mf{L},\mf{S})+\mathfrak{P}(\mf{L},\mf{S})$,
where $\mathfrak{L}^{(ld)}(\mf{L},\mf{S})=\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$
and $\mathfrak{P}(\mf{L},\mf{S})=\psi \Vert\mf{L}\Vert_{*}+\rho \Vert\mf{S}\Vert_{1}$.
We can disentangle the objective $\phi(\mf{L},\mf{S})$ into its smooth and non-smooth components as follows:
$$\phi(\mf{L},\mf{S})=\phi_D(\mf{L},\mf{S})+\phi_{ND}(\mf{L},\mf{S}),$$
where $\phi_D(\mf{L},\mf{S})=\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$ is the smooth component of $\phi(\mf{L},\mf{S})$ and $\phi_{ND}(\mf{L},\mf{S})=\psi \Vert\mf{L}\Vert_{*}+\rho \Vert\mf{S}\Vert_{1}$ is the non-smooth
component of $\phi(\mf{L},\mf{S})$. As explained in \cite{nesterov2013gradient}, problem \eqref{obj2}
can be numerically solved by applying proximal gradient methods (see Section \ref{Alg}).
To implement them, we need to calculate the first and the second derivatives
of $\phi_D(\mf{L},\mf{S})$, to prove the Lipschitzianity of its gradient,
and then to derive the conditions that guarantee its local convexity.
We refer to Appendix \ref{proofs} for the proofs.
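To fix ideas, a minimal proximal-gradient sketch for problem \eqref{obj2} follows; the initialization, step size (kept below the reciprocal of the joint Lipschitz bound $\frac{5}{2}$ derived later in \eqref{lips_const}) and iteration count are illustrative assumptions, and the actual algorithm is the one presented in Section \ref{Alg}.
\begin{verbatim}
import numpy as np

def prox_nuclear(M, t):
    """Prox of t*||.||_* plus psd constraint: eigenvalue soft-thresholding."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w - t, 0.0)) @ V.T

def prox_l1(M, t):
    """Prox of t*||.||_1: entrywise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def prox_gradient(Sigma_n, psi, rho, step=0.3, iters=500):
    """Illustrative proximal-gradient scheme for problem (obj2)."""
    p = Sigma_n.shape[0]
    L, S = np.zeros((p, p)), np.diag(np.diag(Sigma_n))
    for _ in range(iters):
        Delta = (L + S) - Sigma_n
        G = np.linalg.solve(np.eye(p) + Delta @ Delta.T, Delta)  # shared gradient
        L = prox_nuclear(L - step * G, step * psi)
        S = prox_l1(S - step * G, step * rho)
    return L, S
\end{verbatim}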
\subsection*{First and second derivative}
We make explicit the first and the second derivative of $\phi_D(\mf{L},\mf{S})=\frac{1}{2}\log \det \varphi(\mf{\Sigma})$, with $\varphi(\mf{\Sigma})=(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$,
$\mf{\Delta}_{n}=\mf{\Sigma}-\mf{\Sigma}_n$ and $\mf{\Sigma}=\mf{L}+\mf{S}$,
with respect to $\mf{L}$ and $\mf{S}$.
\begin{Prop}\label{first_der}
\begin{equation}
\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{L}}=
\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{S}}=
(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')^{-1}\mf{\Delta}_{n}.\label{grad}
\end{equation}
\end{Prop}
\begin{Prop}\label{second_der}
\begin{align}
\frac{\partial^{2}}{\partial \sigma_{ij}\partial \sigma_{hk}}\frac{1}{2}
\log\det\varphi(\mf{\Sigma}) =& \left( \frac{1}{2}\mathrm{Hess}\log\det
\varphi(\mf{\Sigma})\right)_{i j h k}\nonumber\\
&= \delta_{jk}\left(\varphi^{-1}(\mf{\Sigma})\right)_{ih} -
\sum_{\mu,\sigma}\left(\varphi^{-1}(\mf{\Sigma})\right)_{h\mu}\mf{\Delta}_{\mu
j}\left(\varphi^{-1}(\mf{\Sigma})\right)_{i\sigma}\mf{\Delta}_{\sigma k}\nonumber\\
&-\left(\varphi^{-1}(\mf{\Sigma})\right)_{ih}\sum_{\mu,\lambda}\left(\varphi^{-1}(\mf{\Sigma})\right)_{\lambda\mu}\mf{\Delta}_{\mu
j}\mf{\Delta}_{\lambda k}.\label{second_dev}
\end{align}
\end{Prop}
From Proposition \ref{second_der}, it follows that if $\mf{\Sigma}-\mf{\Sigma}_n = \mf{0}_{p \times p}$,
we get
\begin{equation}
\label{eq:21}
\left(\frac{1}{2}\mathrm{Hess}\log\det
\varphi(\mf{\Sigma})\right)_{i j h k} = \delta_{jk}\otimes
\delta_{i h} = \left(\mf{I}_p \otimes \mf{I}_p\right) _{i j h k},
\end{equation}
that is,
\begin{equation}
\frac{1}{2}\mathrm{Hess}\log\det
\varphi(\mf{\Sigma}) = \mf{I}_p \otimes \mf{I}_p. \label{fishone}
\end{equation}
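The gradient formula \eqref{grad} can be validated numerically by central finite differences; the dimension, perturbation scale and step of the sketch below are arbitrary.
\begin{verbatim}
import numpy as np

def smooth_loss(Delta):
    p = Delta.shape[0]
    return 0.5 * np.linalg.slogdet(np.eye(p) + Delta @ Delta.T)[1]

def grad(Delta):
    p = Delta.shape[0]
    return np.linalg.solve(np.eye(p) + Delta @ Delta.T, Delta)

rng = np.random.default_rng(3)
p, h = 8, 1e-6
M = rng.standard_normal((p, p)) * 0.05
Delta = (M + M.T) / 2                      # small symmetric perturbation

G_fd = np.zeros((p, p))
for i in range(p):
    for j in range(p):
        E = np.zeros((p, p)); E[i, j] = 1.0
        G_fd[i, j] = (smooth_loss(Delta + h*E) - smooth_loss(Delta - h*E)) / (2*h)

assert np.allclose(G_fd, grad(Delta), atol=1e-5)
\end{verbatim}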
\noindent
\subsection*{Lipschitz-continuity}
\label{LipC}
Let us recall the two-argument matrix function
$$\phi(\mf{L},\mf{S})=\phi_D(\mf{L},\mf{S})+\phi_{ND}(\mf{L},\mf{S}),$$
where
$\phi_D(\mf{L},\mf{S})=\frac{1}{2}\log\det(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')$,
$\phi_{ND}(\mf{L},\mf{S})=\psi \Vert \mf{L} \Vert_{*}+\rho \Vert \mf{S} \Vert_{1}$,
and
$\mf{\Delta}_n=\mf{\Sigma}-\mf{\Sigma}_n$, with $\mf{\Sigma}=\mf{L}+\mf{S}$.
The gradient of $\phi_D(\mf{L},\mf{S})$ with respect to $\mf{L}$ and to $\mf{S}$ is the same, and corresponds to
$\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{L}}=\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{S}}=(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n$.
We now consider the vectorized gradient $\mathrm{vec}\left(\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial (\mf{L},\mf{S})}\right)$,
that is the $2p^2$-dimensional vector $[\mathrm{vec}((\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n)',\mathrm{vec}((\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n)']'$.
We define two $p \times p$ matrices $\mf{\Sigma}_2=\mf{L}_2+\mf{S}_2$ and $\mf{\Sigma}_1=\mf{\Sigma}_2+\epsilon \mf{H}$, $\epsilon>0$,
such that $\mf{\Delta}_{1,n}=\mf{\Sigma}_1-\mf{\Sigma}_n$ and $\mf{\Delta}_{2,n}=\mf{\Sigma}_2-\mf{\Sigma}_n$.
We set the difference vector $\vf{d}(\mf{\Sigma}_1,\mf{\Sigma}_2)=\mathrm{vec}\left(\frac{\partial \phi_D(\mf{L}_1,\mf{S}_1)}{\partial (\mf{L}_1,\mf{S}_1)}\right)-\mathrm{vec}\left(\frac{\partial \phi_D(\mf{L}_2,\mf{S}_2)}{\partial (\mf{L}_2,\mf{S}_2)}\right)$, which is
a $2p^2$-dimensional vector, composed of two identical components of $p^2$ elements, stacked one below the other: $\vf{d}(\mf{\Sigma}_1,\mf{\Sigma}_2)=[\mathrm{vec}((\mf{I}_p+\mf{\Delta}_{1,n}\mf{\Delta}_{1,n}')^{-1}
\mf{\Delta}_{1,n}-(\mf{I}_p+\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n})',\mathrm{vec}((\mf{I}_p+\mf{\Delta}_{1,n}\mf{\Delta}_{1,n}')^{-1}
\mf{\Delta}_{1,n}-(\mf{I}_p+\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n})']'$.
It follows that
\begin{equation}
\Vert\vf{d}(\mf{\Sigma}_1,\mf{\Sigma}_2)\Vert_2\leq 2 \Vert\mathrm{vec}((\mf{I}_p+\mf{\Delta}_{1,n}\mf{\Delta}_{1,n}')^{-1}
\mf{\Delta}_{1,n}-(\mf{I}_p+\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n})\Vert_2,\nonumber
\end{equation}
or, equivalently,
\begin{equation}
\Vert\vf{d}(\mf{\Sigma}_1,\mf{\Sigma}_2)\Vert_F\leq 2 \Vert (\mf{I}_p+\mf{\Delta}_{1,n}\mf{\Delta}_{1,n}')^{-1}
\mf{\Delta}_{1,n}-(\mf{I}_p+\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n} \Vert_{F}.\label{diff_eq}
\end{equation}
Therefore, we need to ensure that the right-hand side of (\ref{diff_eq}) is bounded.
To this purpose, we study the matrix $(\mf{I}_p+\mf{\Delta}_{1,n}\mf{\Delta}_{1,n}')^{-1}
\mf{\Delta}_{1,n}-(\mf{I}_p+\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n}$,
that is equal to
$$
(\mf{I}_p + (\mf{\Delta}_{2,n} + \epsilon \mf{H})(\mf{\Delta}_{2,n} +
\epsilon \mf{H})')^{-1}(\mf{\Delta}_{2,n} + \epsilon \mf{H}) - (\mf{I}_p +
\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n},
$$
and we prove the Lipschitzianity of the smooth function $\phi_D(\mf{L},\mf{S})=\frac{1}{2}\log\det(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')$
and of its gradient function $\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{L}}=\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{S}}=F(\mf{\Delta}_n)=(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n$.
\begin{Lemma}\label{lemma:lipschitz_orig}
The function $\mathfrak{L}(\mf{L},\mf{S})=\frac{1}{2}\log \det \varphi(\mf{\Sigma})$, with $\varphi(\mf{\Sigma})=(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$, is Lipschitz continuous with Lipschitz constant equal to $1$:
\begin{equation}
\label{eq:24}
\vert\log\det\varphi(\mf{\Sigma}_{1}) - \log\det\varphi(\mf{\Sigma}_{2})\vert \leq \Vert\mf{\Sigma}_{1} - \mf{\Sigma}_{2}\Vert_{2},
\end{equation}
where $\mf{\Sigma}_1=\mf{\Sigma}_2+\epsilon \mf{H}$.
\end{Lemma}
\begin{Lemma}\label{lemma:lipschitz_first}
The function $\frac{\partial \mathfrak{L}(\mf{L},\mf{S})}{\partial \mf{L}}=\frac{\partial \mathfrak{L}(\mf{L},\mf{S})}
{\partial \mf{S}}=(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n$ is Lipschitz continuous with Lipschitz constant equal to $\frac{5}{4}$:
\begin{equation}
\label{lips_top}
\Vert F(\mf{\Delta}_n + \epsilon \mf{H}) - F(\mf{\Delta}_n)\Vert_{2} \leq
\frac{5}{4}\epsilon \Vert\mf{H}\Vert_{2} + O(\epsilon^{2}),
\end{equation}
with $F(\mf{\Delta}_n + \epsilon \mf{H}) = (\mf{I}_p+(\mf{\Delta}_{n} + \epsilon \mf{H})(\mf{\Delta}_{n}
+ \epsilon \mf{H})')^{-1}(\mf{\Delta}_n + \epsilon \mf{H})$,
for any $ \epsilon > 0 $.
\end{Lemma}
From \eqref{diff_eq}, noting that Lemma \ref{lemma:lipschitz_first}
holds for the Frobenius norm as well and setting $\epsilon=1$,
it follows that
\begin{eqnarray}
\Vert\vf{d}(\mf{\Sigma}_1,\mf{\Sigma}_2)\Vert_F&\leq& 2 \Vert (\mf{I}_p+\mf{\Delta}_{1,n}\mf{\Delta}_{1,n}')^{-1}
\mf{\Delta}_{1,n}-(\mf{I}_p+\mf{\Delta}_{2,n}\mf{\Delta}_{2,n}')^{-1}\mf{\Delta}_{2,n} \Vert_{F}\nonumber\\
&\leq& \frac{10}{4}\Vert\mf{\Delta}_{1,n}-\mf{\Delta}_{2,n}\Vert_{F}.\label{lips_const}
\end{eqnarray}
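As an empirical sanity check of Lemma \ref{lemma:lipschitz_first}, one may compare the local gain of $F(\cdot)$ with the constant $\frac{5}{4}$; the dimension, perturbation size and number of random draws below are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
p, eps = 15, 1e-4

def F(D):
    return np.linalg.solve(np.eye(p) + D @ D.T, D)

worst = 0.0
for _ in range(500):
    D = rng.standard_normal((p, p))
    H = rng.standard_normal((p, p))
    ratio = np.linalg.norm(F(D + eps*H) - F(D), 2) / (eps * np.linalg.norm(H, 2))
    worst = max(worst, ratio)
print(worst)   # to be compared with the 5/4 constant of the Lemma
\end{verbatim}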
\noindent
\subsection*{Local convexity}
\label{LC}
In order to apply proximal gradient methods, we need to prove that
$\phi_D(\mf{L},\mf{S})=\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$ is locally convex around $\mf{\Sigma}^{*}$.
In the univariate context, the function $\frac{1}{2}\log (1+x^2)$ is convex if and only if $|x|\leq 1$.
In the multivariate context, it is therefore reasonable to suppose that a similar condition on $\mf{\Delta}_{n}\mf{\Delta}_{n}'$ ensures local convexity. A proof can be given by showing the positive definiteness of the Hessian of the log-det function evaluated around $\mf{\Sigma}^{*}$. In other words, we need to show that there exists a positive $\delta$ such that, whenever $\Vert \mf{\Delta}_{n}\mf{\Delta}_{n}' \Vert < \delta$,
the function $\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$ is convex with high probability.
\begin{Lemma}\label{convexity}
Given $ 0 < \mu \leq \frac{1}{3p} $, we have that the function
\begin{equation}
\label{eq:1}
\log\det\left(\mf{I}_p + \mf{A} \mf{A}'\right)
\end{equation}
is convex on the set $ \mathcal{C}_{\mu}= \{ \mf{A} \in \R^{p \times p} : \Vert\mf{A}\Vert_{2}
\leq \mu \}$, where $ \Vert\mf{A}\Vert_{2} $ denotes the spectral norm of $ \mf{A} $.
\end{Lemma}
Changing variables in an obvious way, we have therefore proven the following.
\begin{Coroll}\label{coroll:conv_delta}
For any $ \delta >0 $ the function
\begin{equation}
\label{conv_delta}
\log\det\left(\delta^{-2}\mf{I}_p + \mf{A} \mf{A}'\right)
\end{equation}
is convex on the closed ball $ \mathcal{C}_{\delta}= \{ \mf{A} \in \R^{p \times p} : \Vert\mf{A}\Vert_{2}
\leq \frac{1}{3\delta p} \}$.
\end{Coroll}
In conclusion, even though the function $\log\det(\mf{I}_p + \mf{A}) $ is always concave, Corollary
\ref{coroll:conv_delta} shows that the matrix function $\log\det\left(\delta^{-2}\mf{I}_p + \mf{A} \mf{A}'\right)$
can be made locally convex in an arbitrary ball around $ \mf{0} $,
by choosing a suitable $ \delta $.
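Lemma \ref{convexity} can also be probed empirically through midpoint convexity on random pairs inside $\mathcal{C}_{\mu}$; this is only a numerical illustration, not part of the proof.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
p = 10
mu = 1 / (3 * p)

def f(A):
    return np.linalg.slogdet(np.eye(p) + A @ A.T)[1]

def rand_in_ball(radius):
    M = rng.standard_normal((p, p))
    return M * (radius / np.linalg.norm(M, 2))  # rescale spectral norm to radius

for _ in range(1000):
    A1 = rand_in_ball(mu * rng.uniform())
    A2 = rand_in_ball(mu * rng.uniform())
    assert f((A1 + A2) / 2) <= 0.5 * f(A1) + 0.5 * f(A2) + 1e-10
\end{verbatim}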
\subsection*{Probabilistic guarantees}
In what follows, we show the asymptotic behaviour of the first and the second derivative of $\phi_D(\mf{L},\mf{S})=
\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$.
\begin{Lemma}\label{random:first}
Under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails},
it holds
\begin{equation}
\frac{1}{p^{\alpha}}\Big\Vert\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{L}}\Big\Vert=
\frac{1}{p^{\alpha}}\Big\Vert\frac{\partial \phi_D(\mf{L},\mf{S})}{\partial \mf{S}}\Big\Vert
\xrightarrow{n\to\infty} 0.\label{grad_0}
\end{equation}
\end{Lemma}
\begin{Lemma}\label{random:second}
Under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails},
it holds
\begin{equation}
\label{eq:21_rand}
\frac{1}{p^{\alpha}}\left(\frac{1}{2}\mathrm{Hess}\log\det
\varphi(\mf{\Sigma}^{*})\right)_{i j h k} \xrightarrow{n\to\infty} \delta_{jk}\otimes
\delta_{i h} = \left(\mf{I}_p \otimes \mf{I}_p\right) _{i j h k},
\end{equation}
that is,
\begin{equation}
\frac{1}{2p^{\alpha}}\mathrm{Hess}\log\det
\varphi(\mf{\Sigma}^{*}) \xrightarrow{n\to\infty} \mf{I}_p \otimes \mf{I}_p. \label{fishone_rand}
\end{equation}
\end{Lemma}
In order to ensure that the convexity region of Corollary \ref{coroll:conv_delta} is respected,
since $\phi_D(\mf{L},\mf{S})$ is a stochastic function of $\mf{L}$ and $\mf{S}$
because $\mf{\Sigma}_n$ is a random matrix,
it is necessary to assess the probability
$\mathcal{P}(\Vert \mf{\Delta}_{n} \Vert \geq \frac{1}{3\delta p})$.
Lemma \ref{Lemma_cons} shows that the claim $\Vert \mf{\Delta}_n \Vert \leq C \frac{p^{\alpha_1}}{\sqrt{n}}$ holds for some $C>0$ with probability $1-O(1/n^2)$,
under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails}.
Therefore, solving the inequality $\frac{1}{3\delta p}\succeq\frac{p^{\alpha_1}}{\sqrt{n}}$,
equivalent to $\frac{1}{9\delta^2 p^2}\succeq\frac{p^{2\alpha_1}}{n}$,
we can derive the condition $n\succeq p^{2\alpha_1+2}\delta^2$ to ensure the convexity region.
In other words, we just need that $\delta^2\preceq\frac{n}{p^{2\alpha_1+2}}$, i.e. $\delta\preceq\frac{\sqrt{n}}{p^{\alpha_1+1}}$.
For any finite $p$, the condition $\delta \simeq \frac{\sqrt{n}}{p}$ is then a sufficient one.
\begin{Lemma}\label{random_conv}
Under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails},
the convexity region of Corollary \ref{coroll:conv_delta} contains $\mf{\Delta}_n$ with high probability if and only if $\delta\preceq\frac{\sqrt{n}}{p^{\alpha_1+1}}$.
\end{Lemma}
\section{Algebraic setup}\label{Cons_both}
\label{algebra}
Let us define the following measure of transversality between two algebraic matrix varieties $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$:
\begin{equation}\varrho(\mathcal{T}_{1},\mathcal{T}_{2})=
\max_{\Vert \mf{N} \Vert_{2} \leq 1} \Vert \mathcal{P}_{\mathcal{T}_{1}}\mf{N}-\mathcal{P}_{\mathcal{T}_{2}}\mf{N}\Vert_{2},
\label{varrho_def}
\end{equation}
where $\mathcal{P}_{\mathcal T_{1}}$ and $\mathcal{P}_{\mathcal T_{2}}$ are the projection operators onto $\mathcal T_{1}$ and $\mathcal T_{2}$, respectively. Given two matrices $\mf{M}_{1}$ and $\mf{M}_{2}$ of equal size, we call $\mathcal{A}$ the addition operator, such that $\mathcal{A}(\mf{M}_{1},\mf{M}_{2})=\mf{M}_{1}+\mf{M}_{2}$,
and $\mathcal{A^{\dag}}$ the adjoint operator, such that $\mathcal{A^{\dag}}(\mf{M}_{1})=(\mf{M}_{1},\mf{M}_{1})$.
Hereafter, let $\Omega=\Omega(\mf{S}^{*})$ and
$\mathcal T=\mathcal{T}(\mf{L}^{*})$, where ${\Omega}$ is the space tangent to $\mathcal S(s)$ (see \eqref{var:S}) at $\mf{S}^{*}$ and $\mathcal T$ is the space tangent to $\mathcal L(r)$ (see \eqref{var:L}) at $\mf{L}^{*}$. We define the Cartesian product $\mathcal{Y}={\Omega}\times\mathcal{T}'$, where $\mathcal{T}'$ is a manifold such that $\varrho(\mathcal{T},\mathcal{T}')\leq \xi(\mathcal{T})/2$.
In light of these definitions, the following identities hold:
\begin{align}
&\mathcal{A}^{\dag}\mathcal{A}(\mf{S},\mf{L})=(\mf{S}+\mf{L},\mf{S}+\mf{L});\nonumber\\
&\mathcal{P}_{\mathcal{Y}}\mathcal{A}^{\dag}\mathcal{A}\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L})=
(\mf{S}+\mathcal{P}_{\Omega}\mf{L},\mathcal{P}_{\mathcal{T}'}\mf{S}+\mf{L});\nonumber\\
&\mathcal{P}_{\mathcal{Y}^\perp}\mathcal{A}^{\dag}\mathcal{A}\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L})=(\mathcal{P}_{\Omega^{\perp}}\mf{L},\mathcal{P}_{\mathcal{T}'^{\perp}}\mf{S}).\nonumber
\end{align}
We consider the following norm $g_\gamma$
\begin{equation}
g_\gamma(\widehat{\mf{L}}-\mf{L}^{*},\widehat{\mf{S}}-\mf{S}^{*})=
\max\left(\frac{\Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{\infty}}
{\gamma},\frac{\Vert\widehat{\mf{L}}-{\mf{L}^{*}\Vert_{2}}}{\Vert\mf{L}^{*}\Vert_{2}}\right),\label{ggamma}
\end{equation}
with $\gamma \in \R^{+}$,
$\psi_{0}=\frac{1}{\xi(\mathcal{T}(\mf{L}^{*}))}\sqrt{\frac{\log p}{n}}$,
$\rho_{0}=\gamma\psi_{0}$,
$\psi=p^{\alpha}\psi_{0}$, and $\rho=\rho_0$,
where $\psi$ and $\rho$ are the thresholds in \eqref{obj}.
The norm (\ref{ggamma}) is the dual norm of the composite penalty $\psi_{0}\Vert\cdot\Vert_{*}+\rho_{0}\Vert\cdot\Vert_{1}$,
with which the direct sum $\mathcal{L}(r)\oplus\mathcal{S}(s)$ is naturally equipped.
Obviously this $g_\gamma$-consistency implies consistency in $\ell_{2}$ norm.
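For concreteness, the norm \eqref{ggamma} translates into the following hypothetical helper, whose inputs are named after the quantities above.
\begin{verbatim}
import numpy as np

def g_gamma(dS, dL, L_star, gamma):
    """g_gamma norm of the error pair (S_hat - S*, L_hat - L*)."""
    return max(np.abs(dS).max() / gamma,
               np.linalg.norm(dL, 2) / np.linalg.norm(L_star, 2))
\end{verbatim}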
We now consider the solution of the following algebraic problem:
\begin{equation}\label{probtang}
(\widehat{\mf{S}}_{\Omega},\widehat{\mf{L}}_{\mathcal{T}'})=
\arg\min_{\underline{\mf{L}} \in \mathcal{T}',\,\underline{\mf{S}} \in \Omega}
\frac{1}{2 p^{\alpha_1}}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')
+\psi_{0} \Vert \underline{\mf{L}} \Vert_{*}+\rho_{0} \Vert \underline{\mf{S}} \Vert_{1}.
\end{equation}
This is a version of minimization \eqref{obj}, rescaled by $p^{\alpha_1}$ and restricted to the tangent spaces $\Omega$ and $\mathcal{T}'$.
We may regard the smooth function
$-\mathfrak{L}^{(ld)}(\mf{L},\mf{S})=-\frac{1}{2}\log \det (\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')=-\phi_D(\mf{L},\mf{S})$,
with $\mf{\Delta}_n=\mf{\Sigma}_n-\mf{\Sigma}$,
$\mf{\Sigma}=\mf{L}+\mf{S}$, as a nonlinear function of the squared sample covariance matrix,
which we need to maximize. $-\mathfrak{L}^{(ld)}(\mf{L},\mf{S})$ is concave on
the convexity region of Lemma \ref{random_conv}.
In this view, we may call
$\mathcal{I}^{*}$ the operator that associates to
each pair $(\mf{L},\mf{S})$
the Fisher information of $-\phi_D(\mf{L},\mf{S})$, i.e.
$\mathcal{I}^{*}(\mf{L},\mf{S})=-\frac{\partial^{2} [-\phi_D(\mf{L},\mf{S})]}{\partial \mf{L}^{2}}=
-\frac{\partial^{2} [-\phi_D(\mf{L},\mf{S})]}{\partial \mf{S}^{2}}
=\mathrm{Hess}\log\det\varphi(\mf{L},\mf{S})$, with $\varphi(\mf{L},\mf{S})=(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')$.
Under the conditions of Lemma \ref{random:second},
$\frac{1}{2p^{\alpha}}\mathrm{Hess}\log\det\varphi(\mf{L}^{*},\mf{S}^{*}) \xrightarrow{n\to\infty} \mf{I}_p \otimes \mf{I}_p$.
$\mathcal{I}^{*}(\mf{L},\mf{S})$ may thus be regarded as a map on pairs $(\mf{S},\mf{L})$, and we can write
\begin{align}
&\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}(\mf{S},\mf{L})=\mathcal{I}^{*}(\mf{S}+\mf{L},\mf{S}+\mf{L});\nonumber\\
&\mathcal{P}_{\mathcal{Y}}\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L})=
\mathcal{I}^{*}(\mf{S}+\mathcal{P}_{\Omega}\mf{L},\mathcal{P}_{\mathcal{T}'}\mf{S}+\mf{L});\nonumber\\
&\mathcal{P}_{\mathcal{Y}^\perp}\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}
\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L})=
\mathcal{I}^{*}(\mathcal{P}_{\Omega^{\perp}}\mf{L},\mathcal{P}_{\mathcal{T}'^{\perp}}\mf{S}).\nonumber
\end{align}
Let us consider the matrices $\mf{\Sigma}_{S}=\mathcal{P}_{\mathcal{T'}}\mf{S}+\mf{L}$,
$\mf{\Sigma}^{\perp}_{S}=\mathcal{P}_{\mathcal{T'}^{\perp}}\mf{S}+\mf{L}$,
$\mf{\Sigma}_{L}=\mathcal{P}_{\Omega}\mf{L}+\mf{S}$,
$\mf{\Sigma}^{\perp}_{L}=\mathcal{P}_{\Omega^{\perp}}\mf{L}+\mf{S}$.
Following \cite{chandrasekaran2011rank}, to ensure the algebraic consistency of \eqref{obj},
we need to estimate the quantities
\begin{align}
\alpha_\Omega=\min_{\mf{S}\in\Omega,\Vert \mf{S} \Vert_{\infty}=1} \Vert \mathcal{I}^{*}(\phi(\mf{\Sigma}_{L})) \Vert_2,\quad
\delta_\Omega=\max_{\mf{S}\in\Omega^{\perp},\Vert \mf{S} \Vert_{\infty}=1} \Vert \mathcal{I}^{*}(\phi(\mf{\Sigma}^{\perp}_{L})) \Vert_2,\quad
\beta_\Omega=\max_{\mf{S}\in\Omega,\Vert \mf{S} \Vert_{2}=1} \Vert \mathcal{I}^{*}(\phi(\mf{\Sigma}_{{L}})) \Vert_{\infty},\nonumber\\
\alpha_{\mathcal{T'}}=\min_{\mf{L}\in \mathcal{T'},\Vert \mf{L} \Vert_{2}=1} \Vert \mathcal{I}^{*}(\phi(\mf{\Sigma}_{S})) \Vert_{\infty},\quad
\delta_{\mathcal{T'}}=\max_{\mf{L}\in \mathcal{T'}^{\perp},\Vert \mf{L} \Vert_{2}=1} \Vert \mathcal{I}^{*}(\phi(\mf{\Sigma}^{\perp}_{S})) \Vert_{\infty},\quad
\beta_{\mathcal{T'}}=\max_{\mf{L}\in \mathcal{T'}, \Vert \mf{L} \Vert_{\infty}=1} \Vert \mathcal{I}^{*}(\phi(\mf{\Sigma}_{{S}}))\Vert_{2}.\nonumber
\end{align}
From Lemma \ref{lemma:lipschitz_first}, we recall that
\begin{equation}
\Vert F(\mf{\Delta}_n + \epsilon \mf{H}) - F(\mf{\Delta}_n)\Vert_{2} \leq
\frac{5}{4}\epsilon \Vert\mf{H}\Vert_{2} + O(\epsilon^{2}),\label{eq_top}
\end{equation}
with $F(\mf{\Delta}_n + \epsilon \mf{H}) = (\mf{I}_p+(\mf{\Delta}_{n} + \epsilon \mf{H})(\mf{\Delta}_{n}
+ \epsilon \mf{H})')^{-1}(\mf{\Delta}_n + \epsilon \mf{H})$,
for any $ \epsilon > 0 $.
From Lemma \ref{random:second}, we recall that
\begin{equation}
\frac{1}{2p^{\alpha}}\mathrm{Hess}\log\det
\varphi(\mf{\Sigma}^{*}) \xrightarrow{n\to\infty} \mf{I}_p \otimes \mf{I}_p.\label{first_again}
\end{equation}
Therefore, we can define the matrices
$\mf{\Delta}^{*}_{n}=\mf{\Sigma}^{*}-\mf{\Sigma}_n$, with $\mf{\Sigma}^{*}=\mf{L}^{*}+\mf{S}^{*}$,
and $\mf{H}_{\mathcal{T'}}=\mf{S}^{*}-\mathcal{P}_{\mathcal{T'}}(\mf{S}^{*})$,
$\mf{H}_{\mathcal{T'}^{\perp}}=\mf{S}^{*}-\mathcal{P}_{\mathcal{T'}^{\perp}}(\mf{S}^{*})$,
$\mf{H}_{\Omega}=\mf{L}^{*}-\mathcal{P}_{\Omega}(\mf{L}^{*})$,
$\mf{H}_{\Omega^{\perp}}=\mf{L}^{*}-\mathcal{P}_{\Omega^{\perp}}(\mf{L}^{*})$.
Considering that
$\mathcal{I}^{*}(\mf{L},\mf{S})=-\frac{\partial [-\phi_D'(\mf{L},\mf{S})]}{\partial \mf{L}}=
-\frac{\partial [-\phi_D'(\mf{L},\mf{S})]}{\partial \mf{S}}$,
in light of \eqref{eq_top}, we can write:
\begin{align}
\alpha_{\mathcal{T'}}=\frac{\min_{\mf{L}\in \mathcal{T'},\Vert \mf{L} \Vert_{2}=1} \Vert \mf{\Sigma}_S \otimes \mf{\Sigma}_S \Vert_\infty}{\Vert\mf{S}^{*}\Vert^2_\infty}=
\frac{\Vert \mf{H}_{\mathcal{T'}} \otimes \mf{H}_{\mathcal{T'}} \Vert_\infty}{\Vert\mf{S}^{*}\Vert^2_\infty}=\frac{\Vert \mf{H}_{\mathcal{T'}} \Vert_\infty^2}{\Vert\mf{S}^{*}\Vert^2_\infty};\nonumber\\
\delta_{\mathcal{T'}}=\frac{\max_{\mf{L} \in \mathcal{T'}^{\perp},\Vert \mf{L} \Vert_{2}=1} \Vert \mf{\Sigma}_S \otimes \mf{\Sigma}_S \Vert_\infty}{\Vert\mf{S}^{*}\Vert^2_\infty}=\frac{\Vert \mf{H}_{\mathcal{T'}^{\perp}} \otimes \mf{H}_{\mathcal{T'}^{\perp}} \Vert_\infty}{\Vert\mf{S}^{*}\Vert^2_\infty}=
\frac{\Vert\mf{H}_{\mathcal{T'}^{\perp}}\Vert_\infty^2}{\Vert\mf{S}^{*}\Vert^2_\infty}=
1-\frac{\Vert \mf{H}_{\mathcal{T'}} \Vert_\infty^2}{\Vert\mf{S}^{*}\Vert^2_\infty};\nonumber\\
\beta_{\mathcal{T'}}=\frac{\max_{\mf{L}\in \mathcal{T'}, \Vert \mf{L} \Vert_{\infty}=1}\Vert \mf{\Sigma}_S \otimes \mf{\Sigma}_S \Vert_2}{\Vert\mf{S}^{*}\Vert^2_2}=\frac{\Vert \mf{H}_{\mathcal{T'}} \otimes \mf{H}_{\mathcal{T'}} \Vert_2}{\Vert\mf{S}^{*}\Vert^2_2}=\frac{\Vert \mf{H}_{\mathcal{T'}} \Vert_2^2}{\Vert\mf{S}^{*}\Vert^2_2};\nonumber\\
\alpha_{\Omega}=\frac{\min_{\mf{S}\in \Omega,\Vert \mf{S} \Vert_{\infty}=1} \Vert \mf{\Sigma}_L \otimes \mf{\Sigma}_L \Vert_2}{\Vert\mf{L}^{*}\Vert^2_2}=\frac{\Vert \mf{H}_{\Omega} \otimes \mf{H}_{\Omega} \Vert_2}{\Vert\mf{L}^{*}\Vert^2_2}=\frac{\Vert \mf{H}_{\Omega} \Vert_2^2}{{\Vert\mf{L}^{*}\Vert^2_2}};\nonumber\\
\delta_{\Omega}=\frac{\max_{\mf{S} \in \Omega^{\perp},\Vert \mf{S} \Vert_{\infty}=1} \Vert \mf{\Sigma}_L \otimes \mf{\Sigma}_L \Vert_2}{\Vert\mf{L}^{*}\Vert^2_2}=
\frac{\Vert \mf{H}_{\Omega^{\perp}} \otimes \mf{H}_{\Omega^{\perp}} \Vert_2}{\Vert\mf{L}^{*}\Vert^2_2}=
\frac{\Vert\mf{H}_{\Omega^{\perp}}\Vert_2^2}{\Vert\mf{L}^{*}\Vert^2_2}=
1-\frac{\Vert \mf{H}_{\Omega} \Vert^2_2}{\Vert\mf{L}^{*}\Vert^2_2};\nonumber\\
\beta_{\Omega}=\frac{\max_{\mf{S}\in \Omega, \Vert \mf{S} \Vert_{2}=1}\Vert \mf{\Sigma}_L \otimes \mf{\Sigma}_L \Vert_\infty}{\Vert\mf{L}^{*}\Vert^2_\infty}=
\frac{\Vert \mf{H}_{\Omega} \otimes \mf{H}_{\Omega} \Vert_\infty}{\Vert\mf{L}^{*}\Vert^2_\infty}=\frac{\Vert \mf{H}_{\Omega} \Vert_\infty^2}{\Vert\mf{L}^{*}\Vert^2_\infty}.\nonumber
\end{align}
These calculations show that the impact of the local curvature of $\phi_D(\mf{L},\mf{S})$
on the identification of the underlying matrix varieties is linear,
due to the Lipschitz continuity of $\phi_D'(\mf{L},\mf{S})$, proved in Lemma \ref{lemma:lipschitz_first}.
At this stage, analogously to \cite{chandrasekaran2011rank}, we can define
$\alpha=\min(\alpha_\Omega,\alpha_{\mathcal{T'}})$, $\delta=\max(\delta_\Omega,\delta_{\mathcal{T'}})$,
$\beta=\max(\beta_\Omega,\beta_{\mathcal{T'}})$, and assume that,
for some $\nu \in (0,\frac{1}{2}]$, the following holds.
\begin{Ass}
\begin{equation}\frac{\delta}{\alpha}\leq 1-2\nu.\end{equation}\label{ass_alg}
\end{Ass}
\begin{Prop}\label{11}
Suppose that Assumptions \ref{alg} and \ref{ass_alg} hold.
Let $\frac{\sqrt{r}\kappa_S}{\kappa_L}\leq\frac{1}{6}\left(\frac{\nu\alpha}{\beta(2-\nu)}\right)^2$,\\
and $\gamma \in [\frac{3\xi(\mathcal{T}(\mf{L}^{*}))(2-\nu)}{\nu\alpha},\frac{\nu\alpha}{2\mu(\Omega(\mf{S}^{*}))\beta(2-\nu)}]$,
with $\alpha,\beta,\gamma,\nu$ as previously defined.
Then, under Assumptions \ref{eigenvalues}-\ref{tails},
for all $(\mf{S},\mf{L})\in\mathcal{Y}$ such that $\mathcal{Y}={\Omega}\times\mathcal{T}'$ with $\varrho(\mathcal{T},\mathcal{T}')\leq \xi(\mathcal{T})/2$, as $n \to \infty$ the following holds with high probability:
\begin{enumerate}
\item
$g_{\gamma}(\mathcal{P}_{\mathcal{Y}}\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}
\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L}))\geq \frac{\alpha}{2}g_{\gamma}(\mf{S},\mf{L})$;
\item $g_{\gamma}(\mathcal{P}_{\mathcal{Y}^\perp}\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L}))\leq
(1-\nu)g_{\gamma}(\mathcal{P}_{\mathcal{Y}}\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}
\mathcal{P}_{\mathcal{Y}}(\mf{S},\mf{L}))$.
\end{enumerate}
\end{Prop}
\begin{proof}
Suppose that Assumption \ref{ass_alg} holds.
Since $\mf{L}\in \mathcal T'$, $\mf{S}\in \Omega$, and\\ $\gamma \in [\frac{3\xi(\mathcal{T}(\mf{L}^{*}))(2-\nu)}{\nu\alpha},\frac{\nu\alpha}{2\mu(\Omega(\mf{S}^{*}))\beta(2-\nu)}]$,
it is enough that $\frac{\sqrt{r}\kappa_S}{\kappa_L}\leq\frac{1}{6}\left(\frac{\nu\alpha}{\beta(2-\nu)}\right)^2$ under Assumption \ref{alg}
to ensure that $\xi(\mathcal{T}(\mf{L}^{*}))\mu(\Omega(\mf{S}^{*}))\leq\frac{1}{6}\left(\frac{\nu\alpha}{\beta(2-\nu)}\right)^2$.
Then, since Assumptions \ref{eigenvalues}-\ref{tails} ensure that Lemma \ref{random:second}
holds, such that
$$\frac{1}{p^{2\alpha}}\Vert\mathcal{I}^{*}(\mf{L}^{*},\mf{S}^{*})\Vert_{\infty} \xrightarrow{n\to\infty} \Vert\mf{I}_p \otimes \mf{I}_p \Vert_{\infty}=1,$$ the proof of Proposition 3.3 in \cite{chandrasekaran2012} straightforwardly applies as $n\to\infty$.
\end{proof}
\begin{Rem}\label{alg_null}
If $\alpha=\beta=1$ and $\delta=0$ as in \cite{luo2011high} and \cite{farne2020large}, it follows that
$\nu=\frac{1}{2}$ and Assumption \ref{ass_alg} is automatically satisfied.
Moreover, the identifiability condition of Proposition \ref{11} simplifies to $\frac{\sqrt{r}\kappa_S}{\kappa_L}\leq\frac{1}{54}$,
and the two claims of Proposition \ref{11} are simplified accordingly.
\end{Rem}
\section{Consistency}\label{Cons_yes}
Let us denote the spectral decomposition of the indefinite symmetric random error matrix
$\mf{\Delta}_n$ as $\mf{U}_{\Delta} \mf{\Lambda}_{\Delta} \mf{U}_{\Delta}'$, so that the one of the positive semi-definite matrix
$\mf{\Delta}_n \mf{\Delta}_n'$ is $\mf{U}_{\Delta} \mf{\Lambda}_{\Delta}^2 \mf{U}_{\Delta}'$.
Recalling the Woodbury formula, we can write
\begin{align}
(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}=(\mf{I}_p+\mf{U}_{\Delta} \mf{\Lambda}_{\Delta}^2 \mf{U}_{\Delta}')^{-1}=\nonumber\\
=\mf{I}_p-\mf{U}_{\Delta}(\mf{\Lambda}_{\Delta}^{-2}+\mf{U}_{\Delta}'\mf{U}_{\Delta})^{-1}\mf{U}_{\Delta}'=\nonumber\\
=\mf{I}_p-\mf{U}_{\Delta}(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{U}_{\Delta}',\label{decomp}
\end{align}
because $\mf{U}_{\Delta}$ is orthogonal.
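As an aside, identity \eqref{decomp} is easy to verify numerically; the sketch below uses the equivalent entrywise form $\lambda^2/(1+\lambda^2)$ of $(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}$ to avoid division by zero.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
p = 12
M = rng.standard_normal((p, p))
Delta = (M + M.T) / 2                        # symmetric error matrix
w, U = np.linalg.eigh(Delta)                 # Delta = U diag(w) U'

lhs = np.linalg.inv(np.eye(p) + Delta @ Delta.T)
corr = w**2 / (1 + w**2)                     # (Lambda^{-2} + I)^{-1}, entrywise
rhs = np.eye(p) - (U * corr) @ U.T
assert np.allclose(lhs, rhs)                 # identity (decomp)
\end{verbatim}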
Therefore,
\begin{align}
(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n=\nonumber\\
=(\mf{I}_p-\mf{U}_{\Delta}(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{U}_{\Delta}')\mf{\Delta}_n=\nonumber\\
=\mf{\Delta}_n-\mf{U}_{\Delta}(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{U}_{\Delta}'\mf{\Delta}_n=\nonumber\\
=\mf{U}_{\Delta} \mf{\Lambda}_{\Delta} \mf{U}_{\Delta}'-\mf{U}_{\Delta}
(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{\Lambda}_{\Delta} \mf{U}_{\Delta}'.\nonumber
\end{align}
Continuing, since $\mf{U}_{\Delta}$ is orthogonal, we can write
\begin{align}
\mf{\Delta}_n-\mf{U}_{\Delta}(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{U}_{\Delta}'\mf{\Delta}_n=\nonumber\\
=\mf{U}_{\Delta}\mf{\Lambda}_{\Delta}\mf{U}_{\Delta}'-\mf{U}_{\Delta}(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{U}_{\Delta}'\mf{U}_{\Delta}\mf{\Lambda}_{\Delta}\mf{U}_{\Delta}'=\nonumber\\
=\mf{U}_{\Delta} [\mf{\Lambda}_{\Delta}-(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{\Lambda}_{\Delta}] \mf{U}_{\Delta}'=\nonumber\\
=\mf{U}_{\Delta} [(\mf{I}_p-(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1})\mf{\Lambda}_{\Delta}] \mf{U}_{\Delta}'.\nonumber
\end{align}
The matrix $\mf{D}_{\Delta}=(\mf{I}_p-(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1})\mf{\Lambda}_{\Delta}$ is of extreme interest.
It is a $p \times p$ diagonal matrix, whose $i$-th element is $\mf{D}_{ii}=\mf{\Lambda}_{\Delta,ii}\left(1-\frac{1}{1+\mf{\Lambda}_{\Delta,ii}^{-2}}\right)=\frac{\mf{\Lambda}_{\Delta,ii}}{1+\mf{\Lambda}_{\Delta,ii}^{2}}$, $i=1,\ldots,p$.
The matrix $\mf{D}_{\Delta}$ acts as an eigenvalue correction matrix: since $\vert\mf{D}_{ii}\vert\leq\frac{1}{2}$, it shrinks the sample eigenvalues of largest magnitude,
those most affecting matrix inversion. This is why our covariance matrix estimate can be called ``eigenvalue-regularized''.
Defining the matrix $\mf{\Psi}_{\Delta}=\mf{U}_{\Delta} [\mf{\Lambda}_{\Delta}-(\mf{\Lambda}_{\Delta}^{-2}+\mf{I}_p)^{-1}\mf{\Lambda}_{\Delta}] \mf{U}_{\Delta}'$, we can thus write $\mathcal{I}^{*}(\phi(\mf{L},\mf{S}))=\mf{\Psi}_{\Delta}\otimes\mf{\Psi}_{\Delta}$. Qualitatively,
we can see that the curvature of the smooth loss is kept under control by this intrinsic eigenvalue regularization mechanism.
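The shrinkage profile of $\mf{D}_{\Delta}$ can be inspected directly on a grid of error eigenvalues (the grid below is arbitrary).
\begin{verbatim}
import numpy as np

lam = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0])
d = lam / (1 + lam**2)      # simplified form of the diagonal of D_Delta
print(np.round(d, 4))       # peaks at 1/2 for lam = 1, vanishes at both extremes
\end{verbatim}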
In Proposition \ref{11}, we have bounded the degree of transversality between the low rank and the sparse variety,
to control the impact of $\mathcal{I}^{*}(\phi(\mf{L},\mf{S}))$ on matrix variety identification.
Now, we want to bound the error norm \eqref{ggamma}, with the solution pair $(\widehat{\mf{L}},\widehat{\mf{S}})$
defined in \eqref{obj}. To reach that goal, following \cite{chandrasekaran2011rank},
we first need to consider the error norm of the solution pair \eqref{probtang}.
\begin{Prop}\label{12}
Let $\varrho(\mathcal{T}',\mathcal T)\leq\xi(\mathcal{T})/2$ and define
$$\widetilde{r}=\max\left\{\frac{4}{\alpha}[g_{\gamma}(\mathcal{A}^{\dag}\mf{\Delta}_n)+g_{\gamma}(\mathcal{A}^{\dag}\mathcal{I}^{*}\mf{C}_{\mathcal{T}'})+\psi_{0}],\Vert \mf{C}_{\mathcal{T}'}\Vert_{2}\right\}$$
where $\mf{C}_{\mathcal{T}'}=\mathcal{P}_{\mathcal{T}'^{\perp}}(\mf{L}^{*})$ and $\mf{\Delta}_n=\mf{\Sigma}_n-\mf{\Sigma}^{*}$.
Then, under the conditions of Proposition \ref{11} and Lemma \ref{random_conv}, the solution of problem \eqref{probtang}
$(\widehat{\mf{S}}_{\Omega},\widehat{\mf{L}}_{\mathcal{T}'})$ satisfies
\begin{equation}
g_\gamma(\widehat{\mf{S}}_{\Omega}-\mf{S}^{*},\widehat{\mf{L}}_{\mathcal{T}'}-\mf{L}^{*})\leq 2\widetilde{r}
\end{equation}
with high probability as $n \to \infty$.
\end{Prop}
\subsection*{Proof of Proposition \ref{12}}
\begin{proof}
Based on Proposition \ref{11}, we know that the optimum $(\widehat{\mf{S}}_{\Omega},\widehat{\mf{L}}_{\mathcal{T}'})$ is unique, because
$\phi_D(\mf{L},\mf{S})$ is strictly convex if $\mf{S}\in\Omega$ and $\mf{L}\in\mathcal{T}'$,
as Lemma \ref{random_conv} holds true.
By the tangent space constraints, we know that
there exist two Lagrangian multipliers in the spaces $\mathcal{T'}^{\perp}$ and $\Omega^{\perp}$, say,
$\mf{Q}_{\mathcal{T'}^{\perp}} \in \mathcal{T'}^{\perp}$ and $\mf{Q}_{\Omega^{\perp}}\in\Omega^{\perp}$,
such that $\phi'(\mf{L},\mf{S})+\mf{Q}_{\mathcal{T'}^{\perp}}\in -\psi_{0}\,\delta\Vert \widehat{\mf{L}}_{\mathcal{T}'}\Vert_{*}$ and $\phi'(\mf{L},\mf{S})+\mf{Q}_{\Omega^{\perp}}\in -\rho_{0}\,\delta\Vert \widehat{\mf{S}}_{\Omega}\Vert_{1}$,
where $\delta\Vert \widehat{\mf{L}}_{\mathcal{T}'}\Vert_{*}$ and
$\delta\Vert \widehat{\mf{S}}_{\Omega}\Vert_{1}$ denote the sub-differentials of
$\Vert \widehat{\mf{L}}_{\mathcal{T}'}\Vert_{*}$ and $\Vert \widehat{\mf{S}}_{\Omega}\Vert_{1}$,
respectively (see \cite{WATSON199233}).
Moreover, we can write
$$\mathcal{P}_{{\Omega}}(\widehat{\mf{S}}_{{\Omega}})=-\rho_{0} \; \mathrm{sgn}(\mf{S}^{*}), \qquad
\mbox{and} \; \mathcal{P}_{\mathcal{T'}}(\widehat{\mf{L}}_{\mathcal{T'}})=-\psi_{0} \mf{U}_L \mf{U}_L',$$
because $\widehat{\mf{S}}_{\Omega}\in\Omega$ and $\widehat{\mf{L}}_{\mathcal{T}'}\in\mathcal{T}'$.
It follows that $\Vert \mathcal{P}_{\mathcal{T'}}(\widehat{\mf{L}}_{\mathcal{T'}}) \Vert \leq 2\psi_0$,
$\Vert\mathcal{P}_{{\Omega}}(\widehat{\mf{S}}_{{\Omega}})\Vert_{\infty}=\psi_{0} \gamma$, and therefore,
$g_\gamma\left(\mathcal{P}_{\mathcal{T'}}(\widehat{\mf{L}}_{\mathcal{T'}}),\mathcal{P}_{{\Omega}}
(\widehat{\mf{S}}_{{\Omega}})\right)\leq 2\psi_{0}$.
Recalling \eqref{grad}, we can write
$$\phi'(\mf{L},\mf{S})^{(ld)}=
(\mf{I}_p+(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}_n)(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}_n)')^{-1}
(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}_n).$$
At this stage, we know that $\mf{\Delta}_n\mf{\Delta}_n'$ is a positive semi-definite matrix, that can be written as
$\mf{U}_{\Delta} \mf{\Lambda}_{\Delta}^2 \mf{U}_{\Delta}'$, with $\mf{U}_{\Delta}$ orthogonal matrix.
As a consequence, we can write
$$\mf{\Psi}=\sum_{j=0}^{\infty} (-\mf{\Delta}_n\mf{\Delta}_n')^j=\sum_{j=0}^{\infty}
(-1)^j\,\mf{U}_{\Delta} \mf{\Lambda}_{\Delta}^{2j} \mf{U}_{\Delta}'.$$
It follows that $\Vert \Psi(\mf{L},\mf{S}) \Vert = \sum_{j=0}^{\infty} (-1)^j\Vert\mf{\Lambda}_{\Delta}\Vert^{2j}$.
Moreover, since Lemma \ref{random_conv} holds true under the conditions of Proposition \ref{11},
we are sure that $\Vert \mf{\Delta}_n \mf{\Delta}_n' \Vert < 1$, which leads to
$\sum_{j=0}^{\infty} (-\Vert\mf{\Lambda}_{\Delta}\Vert^{2})^j=\frac{1}{1+\Vert\mf{\Delta}_n\mf{\Delta}_n'\Vert}$,
with $\frac{1}{1+\Vert\mf{\Delta}_n\mf{\Delta}_n'\Vert} < 1$ with high probability.
Consequently, the following inequality
$$\Vert\phi'(\mf{L},\mf{S})\Vert \leq \Vert \widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}_n\Vert$$
and the following lemma follow.
\begin{Lemma}\label{lemmatop}
Under the conditions of Proposition \ref{12},
$$\Vert\phi_D'(\mf{L},\mf{S})^{(ld)}\Vert \leq \Vert\phi_D'(\mf{L},\mf{S})^{(F)}\Vert.$$
\end{Lemma}
\begin{proof}
It is sufficient to recall that, under the conditions of Proposition \ref{12},
$$\Vert\phi_D'(\mf{L},\mf{S})^{(ld)}\Vert \leq \Vert \Psi(\widehat{\mf{L}}_{\mathcal{T}'},\widehat{\mf{S}}_{\Omega})\Vert \Vert \widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}_n\Vert,$$ with
$\Psi(\mf{L},\mf{S})=(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}$,
and $\Vert \Psi(\widehat{\mf{L}}_{\mathcal{T}'},\widehat{\mf{S}}_{\Omega}) \Vert = \frac{1}{1+\Vert\mf{\Delta}_n\mf{\Delta}_n'\Vert} <1$,
because Lemma \ref{random_conv} holds.
\end{proof}
Now, following \cite{luo2011high}, we may observe that
$$\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}_n=\mf{\Delta}_n+\mathcal{A}\mathcal{I}^{*}\mf{\Delta}_n-\mf{C}_{\mathcal{T}'}.$$
We apply Brouwer's fixed point theorem,
to look for the fixed point of the function
\begin{equation}F(\mf{M}_L,\mf{M}_S)=(\mf{M}_L,\mf{M}_S)-(\mathcal{P}_{\mathcal{Y}}\mathcal{A}^{\dag}\mathcal{I}^{*}\mathcal{A}
\mathcal{P}_{\mathcal{Y}})^{-1}\mathcal{P}_{\mathcal{Y}}\mathcal{A}^{\dag}(\mf{\Delta}_n-\mathcal{A}\mathcal{I}^{*}\mf{C}_{\mathcal{T}'}).
\label{brouwer}
\end{equation}
We know that $\left(\mathcal{P}_{{\Omega}}(\widehat{\mf{S}}_{{\Omega}}),
\mathcal{P}_{\mathcal{T'}}(\widehat{\mf{L}}_{\mathcal{T'}})\right)$ is a fixed point of
\eqref{brouwer}, and it is unique by Lemma \ref{random_conv}.
Moreover, from Proposition \ref{11} (part 1)
and from $g_\gamma\left(\mathcal{P}_{\mathcal{T'}}(\widehat{\mf{L}}_{\mathcal{T'}}),\mathcal{P}_{{\Omega}}
(\widehat{\mf{S}}_{{\Omega}})\right)\leq 2\psi_0$,
we know that
\begin{eqnarray}
g_\gamma(F(\mf{M}_L,\mf{M}_S))
&\leq& \frac{2}{\alpha}g_\gamma(\mathcal{P}_{\mathcal{Y}}(\mathcal{A}^{\dag}\mf{\Delta}_n-\mathcal{A}^{\dag}\mathcal{I}^{*}\mf{C}_{\mathcal{T}'}-\mf{Z}))\nonumber\\
&\leq&\frac{4}{\alpha}g_\gamma(\mathcal{A}^{\dag}\mf{\Delta}_n-\mathcal{A}^{\dag}\mathcal{I}^{*}\mf{C}_{\mathcal{T}'}-\psi_0).\nonumber
\end{eqnarray}
Finally, it is enough to observe that
$$g_\gamma(\widehat{\mf{S}}_{\Omega}-\mf{S}^{*},\widehat{\mf{L}}_{\mathcal{T}'}-\mf{L}^{*})\leq g_\gamma(F(\mf{M}_L,\mf{M}_S))+\Vert\mf{C}_{\mathcal{T}'}\Vert_{2},$$
from which the claim follows.
\end{proof}
We may now state the main results of this section.
\begin{Thm}\label{thm_main}
Suppose that the conditions of Proposition \ref{12} and Assumptions \ref{lowerbounds} and \ref{alg} hold.
Define $\psi_{0}=\frac{1}{\xi(\mathcal{T}(\mf{L}^{*}))}\sqrt{\frac{\log p}{n}}$ and $\rho_{0}=\gamma\psi_{0}$.
Suppose that $\delta_{1} \leq \frac{\alpha_r}{3}$.
Then, there exists a positive real $\kappa$ independent of $p$ and $n$
such that, as $p,n\to\infty$ the pair of solutions defined in \eqref{obj} satisfies:
\begin{compactenum}
\item $\mathcal{P} (\frac{1}{p^{\alpha_{1}}}
\Vert\widehat{\mf{L}}-\mf{L}^{*}\Vert_{2} \leq \kappa\psi_{0}) \to 1$;
\item $\mathcal{P} (\Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{\infty} \leq
\kappa\rho_{0}) \to 1$;
\item $\mathcal{P} (\mathrm{rk}(\widehat{\mf{L}})=\mathrm{rk}(\mf{L}^{*})) \to 1$;
\item $\mathcal{P} (\mathrm{sgn}(\widehat{\mf{S}})=\mathrm{sgn}(\mf{S}^{*})) \to 1$.
\end{compactenum}
\end{Thm}
\begin{Coroll}\label{coroll_main}
Under all the assumptions and conditions of Theorem \ref{thm_main},
as $p,n\to\infty$ the following hold:
\begin{compactenum}
\item[1.] $\mathcal{P} (\frac{1}{p^{\delta_{1}}}\Vert \widehat{\mf{S}}-\mf{S}^{*}\Vert_{2}
\leq \kappa\sqrt{\frac{\log p}{n}}) \to 1$;
\item[2.] $\mathcal{P} (\frac{1}{p^{\alpha_{1}+\delta_{1}}}\Vert \widehat{\mf{\Sigma}}-\mf{\Sigma}^{*}\Vert _{2} \leq \kappa \sqrt{\frac{\log p}{n}}) \to 1$;
\item[3.] $\mathcal{P} (\lambda_p(\widehat{\mf{S}})>0) \to 1$;
\item[4.] $\mathcal{P} (\lambda_p(\widehat{\mf{\Sigma}})>0) \to 1$.
\end{compactenum}
In addition, supposing that $\lambda_p(\mf{S}^{*})=O(1)$ and $\lambda_p(\mf{\Sigma}^{*})=O(1)$,
the following statements hold as $p,n \to \infty$:
\begin{compactenum}
\item[5.] $\mathcal{P} (\frac{1}{p^{\delta_{1}}}\Vert\widehat{\mf{S}}^{-1}-\mf{S}^{*-1}\Vert_{2}
\leq \kappa\sqrt{\frac{\log p}{n}})\to 1$;
\item[6.] $\mathcal{P} (\frac{1}{p^{\alpha_{1}+\delta_{1}}}\Vert\widehat{\mf{\Sigma}}^{-1}-\mf{\Sigma}^{*-1}\Vert_{2}
\leq \kappa\sqrt{\frac{\log p}{n}})\to 1$.
\end{compactenum}
\end{Coroll}
Theorem \ref{thm_main} and Corollary \ref{coroll_main} establish the algebraic
and parametric consistency of the estimator pair in \eqref{obj}. Their proofs can be found in Appendix \ref{proofs}.
The following remarks clarify the most relevant theoretical aspects.
\begin{Rem}
Parts 3 and 4 of Theorem \ref{thm_main} and Corollary \ref{coroll_main}, jointly considered,
ensure the algebraic consistency of \eqref{obj}. This result is established by adapting the results of \cite{chandrasekaran2012},
by means of Propositions \ref{11} and \ref{12}, that take into account the random nature of
the second derivative of the smooth loss $\phi_D(\mf{L},\mf{S})^{(ld)}$. Another necessary technical key
is to control the manifolds containing $\mf{L}^{*}$ and $\mf{S}^{*}$ by Assumption \ref{alg},
which causes the term $O(p^{\delta_1})$ to appear in the rates of $\widehat{\mf{S}}$ and
$\widehat{\mf{\Sigma}}$ (parts 1 and 2 of Corollary \ref{coroll_main}).
\end{Rem}
\begin{Rem}
Requiring $\delta_1>0$ (Assumption \ref{sparsity}) ensures that the equation
$\xi(\mathcal{T}(\mf{L}^{*}))=\frac{\sqrt r}{\kappa_L p^{\delta_{1}}}$ (Assumption \ref{alg})
leads to an increasingly small identifiability error as $p\to\infty$. In this respect, high dimension is a blessing rather than a curse for our method.
Note that the prevalence of the latent factor structure versus the residual one
is preserved as $p\to\infty$ by the condition $\delta_1<\alpha_r$, resulting from Assumptions \ref{eigenvalues} and \ref{sparsity}.
\end{Rem}
\begin{Rem}
We note that Theorem \ref{thm_main} and Corollary \ref{coroll_main} hold as well for the solution pair of \eqref{obj_all}
with $\mathfrak{L}(\mf{L},\mf{S})=\frac{1}{2} \Vert \mf{\Sigma}_n - (\mf{L}+\mf{S}) \Vert_{F}^2$,
which corresponds to the ALCE estimator \citep{farne2020large},
since the second derivative of the smooth component $\phi_D(\mf{L},\mf{S})^{(F)}=\frac{1}{2} \Vert \mf{\Sigma}_n - (\mf{L}+\mf{S}) \Vert_{F}^2$
is constant and equal to the identity, i.e., $\phi_D(\mf{L},\mf{S})^{(F)}$ is globally convex.
\end{Rem}
\begin{Rem}
Should we assume $\alpha_1=\ldots=\alpha_r=1$ and $\delta_1=0$, we re-obtain the rates in spectral norm
of \cite{fan2013large} for $\widehat{\mf{L}}$ (part 1 of Theorem \ref{thm_main}), $\widehat{\mf{S}}$ (part 1 of Corollary \ref{coroll_main}),
and $\widehat{\mf{\Sigma}}$ (part 2 of Corollary \ref{coroll_main}),
and the rate in maximum norm of \cite{bickel2008covariance} for $\widehat{\mf{S}}$ (part 2 of Theorem \ref{thm_main}).
\end{Rem}
\begin{Rem}
The error rate of $\widehat{\mf{S}}^{-1}$ is similar to \cite{fan2013large} and \cite{bickel2008covariance}.
The error rate of $\widehat{\mf{\Sigma}}^{-1}$ is instead worse than the corresponding one in \cite{fan2013large}.
It may be improved by explicitly estimating the factor scores via this method, which is left to future research.
\end{Rem}
\begin{Rem}\label{cond_discuss}
The error bound is maximum when $\nu=\frac{1}{2}$. In that case, by Assumption \ref{ass_alg} we obtain $\delta=0$.
It follows that $\mathcal{P}_{\mathcal{T'}}(\mf{S}^{*})=\mf{0}_{p \times p}$ and
$\mathcal{P}_{\Omega}(\mf{L}^{*})=\mf{0}_{p \times p}$, and then, $\alpha=1$ and $\beta=1$.
This means that, in the case $\nu<\frac{1}{2}$, the rate of \eqref{obj} is tighter,
and adapts to the underlying algebraic structure. However, the price to pay is that the identifiability condition
$\frac{\sqrt{r}\kappa_S}{\kappa_L}\leq\frac{1}{6}\left(\frac{\nu\alpha}{\beta(2-\nu)}\right)^2$
becomes more stringent in that case.
\end{Rem}
We now explicitly compare the heuristics of \cite{farne2020large}
\begin{equation}
\phi^{(F)}(\mf{L},\mf{S})=\frac{1}{2} \Vert \mf{\Sigma}_n - (\mf{L}+\mf{S}) \Vert_{F}^2+\psi \Vert \mf{L} \Vert_{*}
+ \rho \Vert \mf{S} \Vert_{1}\label{fro}
\end{equation}
to the heuristics of this paper:
\begin{equation}
\phi^{(ld)}(\mf{L},\mf{S})=\frac{1}{2} \log \det (\mf{I}_p+(\mf{\Sigma}_n - (\mf{L}+\mf{S}))(\mf{\Sigma}_n - (\mf{L}+\mf{S}))')+\psi
\Vert \mf{L} \Vert_{*} + \rho \Vert \mf{S} \Vert_{1}.\label{logdet}
\end{equation}
We write $\phi^{(F)}(\mf{L},\mf{S})=\phi_D(\mf{L},\mf{S})^{(F)}+\phi_{ND}(\mf{L},\mf{S})$,
with $\phi_D(\mf{L},\mf{S})^{(F)}=\frac{1}{2}\Vert \mf{\Sigma}_n - (\mf{L}+\mf{S}) \Vert_{F}^2$, and
$\phi^{(ld)}(\mf{L},\mf{S})=\phi_D(\mf{L},\mf{S})^{(ld)}+\phi_{ND}(\mf{L},\mf{S})$,
with $\phi_D(\mf{L},\mf{S})^{(ld)}=\frac{1}{2}\log \det (\mf{I}_p+(\mf{\Sigma}_n - (\mf{L}+\mf{S}))(\mf{\Sigma}_n - (\mf{L}+\mf{S}))')$.
We have shown in \eqref{grad} that $\phi_D'(\mf{L},\mf{S})^{(ld)}=(\mf{I}_p+\mf{\Delta}_n\mf{\Delta}_n')^{-1}\mf{\Delta}_n$,
where $\mf{\Delta}_n=\mf{\Sigma}_n - (\mf{L}+\mf{S})$. We can easily derive
that $\phi_D'(\mf{L},\mf{S})^{(F)}=\mf{\Delta}_n=\mf{\Sigma}_n - (\mf{L}+\mf{S})$.
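As a numerical illustration, the closed form of $\phi_D'(\mf{L},\mf{S})^{(ld)}$ in \eqref{grad} can be checked against central finite differences of $\phi_D(\mf{L},\mf{S})^{(ld)}$ as a function of $\mf{\Delta}_n$. The following sketch (plain NumPy; the matrix size and the random perturbation are arbitrary choices of ours) is only a sanity check of the gradient formula, not part of the estimation procedure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = 5
Delta = rng.standard_normal((p, p))  # plays the role of Sigma_n - (L + S)

def f(D):
    # smooth log-det component: 0.5 * log det(I_p + D D')
    _, logdet = np.linalg.slogdet(np.eye(p) + D @ D.T)
    return 0.5 * logdet

# closed-form gradient (I_p + Delta Delta')^{-1} Delta, as in (grad)
grad_closed = np.linalg.solve(np.eye(p) + Delta @ Delta.T, Delta)

# entrywise central finite differences
eps, grad_fd = 1e-6, np.zeros((p, p))
for i in range(p):
    for j in range(p):
        E = np.zeros((p, p)); E[i, j] = eps
        grad_fd[i, j] = (f(Delta + E) - f(Delta - E)) / (2 * eps)

print(np.max(np.abs(grad_closed - grad_fd)))  # close to machine precision
\end{verbatim}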
Let us define the pair of solutions $(\widehat{\mf{L}}^{(F)},\widehat{\mf{S}}^{(F)})=\arg\min_{\mf{L},\mf{S}}\phi^{(F)}(\mf{L},\mf{S})$
and $(\widehat{\mf{L}}^{(ld)},\widehat{\mf{S}}^{(ld)})=\arg\min_{\mf{L},\mf{S}}\phi^{(ld)}(\mf{L},\mf{S})$,
with $\widehat{\mf{\Sigma}}^{(F)}=\widehat{\mf{L}}^{(F)}+\widehat{\mf{S}}^{(F)}$ and
$\widehat{\mf{\Sigma}}^{(ld)}=\widehat{\mf{L}}^{(ld)}+\widehat{\mf{S}}^{(ld)}$. The following important theorem holds.
\begin{Thm}\label{thm_comp}
Let $\nu=\frac{1}{2}$. Then, Theorem \ref{thm_main} and Corollary \ref{coroll_main} hold for $(\widehat{\mf{L}}^{(F)},\widehat{\mf{S}}^{(F)})$.
Moreover, under all the conditions of Theorem \ref{thm_main}, as $p,n \to \infty$ it holds:\\
\begin{compactenum}
\item[1.] $\frac{\Vert\widehat{\mf{L}}^{(ld)}-\mf{L}^{*}\Vert}{\Vert\widehat{\mf{L}}^{(F)}-\mf{L}^{*}\Vert} \leq 1$ and $\frac{\Vert\widehat{\mf{\Sigma}}^{(ld)}-\mf{\Sigma}^{*}\Vert}{\Vert\widehat{\mf{\Sigma}}^{(F)}-\mf{\Sigma}^{*}\Vert}
\leq 1$;
\item[2.] $\frac{\Vert\widehat{\mf{S}}^{(ld)}-\mf{S}^{*}\Vert_{\infty}}{\Vert\widehat{\mf{S}}^{(F)}-\mf{S}^{*}\Vert_{\infty}}
\leq 1$ and $\frac{\Vert\widehat{\mf{S}}^{(ld)}-\mf{S}^{*}\Vert}{\Vert\widehat{\mf{S}}^{(F)}-\mf{S}^{*}\Vert}
\leq 1$.
\end{compactenum}
In addition, supposing that $\lambda_p(\mf{S}^{*})=O(1)$ and $\lambda_p(\mf{\Sigma}^{*})=O(1)$,
as $p,n \to \infty$ it holds:
\begin{compactenum}
\item[3.] $\frac{\Vert\widehat{\mf{S}}^{(ld)-1}-\mf{S}^{*-1}\Vert}{\Vert\widehat{\mf{S}}^{(F)-1}-\mf{S}^{*-1}\Vert}
\leq 1$ and $\frac{\Vert\widehat{\mf{\Sigma}}^{(ld)-1}-\mf{\Sigma}^{*-1}\Vert}{\Vert\widehat{\mf{\Sigma}}^{(F)-1}-\mf{\Sigma}^{*-1}\Vert}
\leq 1$.
\end{compactenum}
\end{Thm}
\begin{proof}
The proof descends from Theorem \ref{thm_main} and Corollary \ref{coroll_main},
which also hold straightforwardly for $\widehat{\mf{L}}^{(F)}$ and $\widehat{\mf{S}}^{(F)}$,
and from the fact that Lemma \ref{lemmatop} holds under the conditions of Proposition \ref{12},
so that the error bound in $g_\gamma$ norm $\widetilde{r}_{ld}$ of $(\widehat{\mf{L}}^{(ld)},\widehat{\mf{S}}^{(ld)})$
is smaller than or equal to the corresponding error bound $\widetilde{r}_{F}$ of $(\widehat{\mf{L}}^{(F)},\widehat{\mf{S}}^{(F)})$.
\end{proof}
\begin{Rem}
Theorem \ref{thm_comp} states that, if $\nu=\frac{1}{2}$ (see Remark \ref{alg_null}),
$(\widehat{\mf{L}}^{(F)},\widehat{\mf{S}}^{(F)})$ and $(\widehat{\mf{L}}^{(ld)},\widehat{\mf{S}}^{(ld)})$ are both algebraically and parametrically consistent,
and the error bound in $g_\gamma$ norm of $(\widehat{\mf{L}}^{(ld)},\widehat{\mf{S}}^{(ld)})$ is systematically not larger than the corresponding bound of $(\widehat{\mf{L}}^{(F)},\widehat{\mf{S}}^{(F)})$.
If $\nu<\frac{1}{2}$, the error bound in $g_\gamma$ norm of $(\widehat{\mf{L}}^{(ld)},\widehat{\mf{S}}^{(ld)})$ is tighter than the corresponding bound of $(\widehat{\mf{L}}^{(F)},\widehat{\mf{S}}^{(F)})$, but the identifiability condition $\frac{\sqrt{r}\kappa_S}{\kappa_L}\leq\frac{1}{6}\left(\frac{\nu\alpha}{\beta(2-\nu)}\right)^2$ of Theorem \ref{thm_main}
becomes more stringent (see Remark \ref{cond_discuss}).
\end{Rem}
\section{Solution algorithm}\label{Alg}
Exploiting the results of Section \ref{math_anal}, we provide a solution algorithm for problem \eqref{obj}.
Following \cite{luo2011high}, \cite{nesterov2013gradient} and the supplement of \cite{farne2020large},
and setting the step-size as $\ell=\frac{10}{4}$ from \eqref{lips_const},
we derive the following optimization procedure.
\begin{algorithm}
\caption{Pseudocode to solve problem \eqref{obj} given any input covariance matrix $\mf{\Sigma}_n$.}\label{alg_ld}
\begin{enumerate}
\item \textbf{Set} $(\mf{L}_{0},\mf{S}_{0})=\frac{1}{2\mathrm{tr}(\mf{\Sigma}_n)}(\mathrm{diag}(\mf{\Sigma}_n),\mathrm{diag}(\mf{\Sigma}_n))$, $\eta_{0}=1$.
\item \textbf{Initialize} $\mf{Y}_{0}=\mf{L}_{0}$ and $\mf{Z}_{0}=\mf{S}_{0}$. Set $t=1$.
\item For $t\geq 1$, \textbf{repeat}:
\begin{description}
\item[(i)] \textbf{calculate} $\mf{\Delta}_{t,n}=\mf{Y}_{t-1}+\mf{Z}_{t-1}-\mf{\Sigma}_n$;
\item[(ii)] \textbf{compute}
$\frac{\partial \frac{1}{2}\log \det \left(\mf{I}_p+\mf{\Delta}_{t,n}\mf{\Delta}_{t,n}'\right)}{\partial \mf{Y}_{t-1}}=\frac{\partial \frac{1}{2}\log \det \left(\mf{I}_p+\mf{\Delta}_{t,n}\mf{\Delta}_{t,n}'\right)}{\partial \mf{Z}_{t-1}}=(\mf{I}_p+\mf{\Delta}_{t,n}\mf{\Delta}_{t,n}')^{-1}\mf{\Delta}_{t,n}$;
\item[(iii)] \textbf{apply} the \textbf{singular value thresholding} (SVT, \cite{cai2010singular}) operator $T_{\psi}$ to $\mf{E}_{Y,t}=\mf{Y}_{t-1}- \frac{1}{\ell}(\mf{I}_p+\mf{\Delta}_{t,n}\mf{\Delta}_{t,n}')^{-1}\mf{\Delta}_{t,n}$, with $\ell=\frac{10}{4}$, and set $\mf{L}_{t}=T_{\psi}(\mf{E}_{Y,t})=\widehat{\mf{U}}\widehat{\mf{D}}_\psi \widehat{\mf{U}}^\top$;
\item[(iv)] \textbf{apply} the \textbf{soft-thresholding} operator \citep{daubechies2004iterative}
$T_{\rho}$ to $\mf{E}_{Z,t}=\mf{Z}_{t-1}- \frac{1}{\ell}(\mf{I}_p+\mf{\Delta}_{t,n}\mf{\Delta}_{t,n}')^{-1}\mf{\Delta}_{t,n}$, with $\ell=\frac{10}{4}$, and set $\mf{S}_{t}=T_\rho(\mf{E}_{Z,t})$;
\item[(v)] \textbf{set} $(\mf{Y}_{t},\mf{Z}_{t})=(\mf{L}_{t},\mf{S}_{t})+\left\{\frac{{\eta_{t-1}-1}}{{\eta_{t}}}\right\}\{(\mf{L}_{t},\mf{S}_{t})-(\mf{L}_{t-1},\mf{S}_{t-1})\}$
where $\eta_{t}={\frac{1}{2}+\frac{1}{2}\sqrt{1+4 \eta_{t-1}^2}}$;
\item[(vi)] \textbf{stop} if the convergence criterion $\frac{\Vert\mf{L}_{t}-\mf{L}_{t-1}\Vert_F}{{1+\Vert \mf{L}_{t-1}\Vert_F}}
+\frac{\Vert\mf{S}_{t}-\mf{S}_{t-1}\Vert_F}{{1+\Vert \mf{S}_{t-1}\Vert_F}} \leq \varepsilon$ is met.
\end{description}
\item \textbf{Set} $\widehat{\mf{L}}^{(ld)}_{\rm{A}}=\mathrm{tr}(\mf{\Sigma}_n)\mf{Y}_{t}$ and $\widehat{\mf{S}}^{(ld)}_{\rm{A}}=\mathrm{tr}(\mf{\Sigma}_n)\mf{Z}_{t}$.
\end{enumerate}
\end{algorithm}
In Algorithm \ref{alg_ld}, we first rescale, at step 1, by the trace of the input $\mf{\Sigma}_n$,
and we then restore the original scale at step 4.
This approach differs from the original application in \cite{farne2020large}. It has the advantage of setting the threshold grid
and performing threshold selection in a controllable way. By Algorithm \ref{alg_ld}, we derive $(\widehat{\mf{L}}^{(ld)}_{\rm{A}},\widehat{\mf{S}}^{(ld)}_{\rm{A}})$,
where the superscript $\rm{A}$ stands for ALCE (ALgebraic Covariance Estimator).
Note that the step-size $\ell=\frac{10}{4}$ follows from the Lipschitz constant $l=\frac{5}{4}$ obtained in Section \ref{LipC}
(see \eqref{lips_const}).
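For concreteness, a minimal NumPy sketch of Algorithm \ref{alg_ld} follows. It assumes that the iteration runs on the trace-rescaled input, consistently with steps 1 and 4, and implements SVT via eigenvalue thresholding, which is equivalent for symmetric positive semidefinite matrices; function names and the default tolerance are our own choices.
\begin{verbatim}
import numpy as np

def svt(M, tau):
    # singular value thresholding T_psi (M symmetric, via eigendecomposition)
    w, U = np.linalg.eigh((M + M.T) / 2)
    w = np.maximum(w - tau, 0.0)
    return (U * w) @ U.T

def soft(M, tau):
    # entrywise soft-thresholding T_rho
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def alce_ld(Sigma_n, psi, rho, ell=10/4, tol=1e-6, max_iter=1000):
    p, tr = Sigma_n.shape[0], np.trace(Sigma_n)
    C = Sigma_n / tr                          # step 1: rescale by the trace
    L = np.diag(np.diag(C)) / 2.0             # initialization (L_0, S_0)
    S = L.copy()
    Y, Z, eta = L.copy(), S.copy(), 1.0
    for _ in range(max_iter):
        L_old, S_old = L, S
        Delta = Y + Z - C                                         # (i)
        G = np.linalg.solve(np.eye(p) + Delta @ Delta.T, Delta)   # (ii)
        L = svt(Y - G / ell, psi)                                 # (iii)
        S = soft(Z - G / ell, rho)                                # (iv)
        eta_new = 0.5 + 0.5 * np.sqrt(1 + 4 * eta**2)             # (v)
        Y = L + ((eta - 1) / eta_new) * (L - L_old)
        Z = S + ((eta - 1) / eta_new) * (S - S_old)
        eta = eta_new
        crit = (np.linalg.norm(L - L_old) / (1 + np.linalg.norm(L_old))
                + np.linalg.norm(S - S_old) / (1 + np.linalg.norm(S_old)))
        if crit <= tol:                                           # (vi)
            break
    return tr * Y, tr * Z                     # step 4: restore the scale
\end{verbatim}
In this sketch, the Frobenius variant of Algorithm \ref{alg_fro_2} below is obtained by replacing the gradient line with \texttt{G = Delta} and $\ell$ with $2$.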
Algorithm \ref{alg_fro_2} is the analog of Algorithm \ref{alg_ld} for problem \eqref{obj_all} with $\mathfrak{L}(\mf{L},\mf{S})=\mathfrak{L}^{(F)}(\mf{L},\mf{S})$.
\begin{algorithm}
\caption{Pseudocode to solve problem \eqref{obj_all} with $\mathfrak{L}(\mf{L},\mf{S})=\mathfrak{L}^{(F)}(\mf{L},\mf{S})$.}\label{alg_fro_2}
\begin{enumerate}
\item \textbf{Set} $(\mf{L}_{0},\mf{S}_{0})=\frac{1}{2\mathrm{tr}(\mf{\Sigma}_n)}(\mathrm{diag}(\mf{\Sigma}_n),\mathrm{diag}(\mf{\Sigma}_n))$, $\eta_{0}=1$.
\item \textbf{Initialize} $\mf{Y}_{0}=\mf{L}_{0}$ and $\mf{Z}_{0}=\mf{S}_{0}$. Set $t=1$.
\item For $t\geq 1$, \textbf{repeat}:
\begin{description}
\item[(i)] \textbf{calculate} $\frac{\partial \frac{1}{2}\Vert\mf{Y}_{t-1}+\mf{Z}_{t-1}-\mf{\Sigma}_n\Vert^2_{F}}{\partial \mf{Y}_{t-1}}=\frac{\partial \frac{1}{2}\Vert\mf{Y}_{t-1}+\mf{Z}_{t-1}-\mf{\Sigma}_n\Vert^2_{F}}{\partial \mf{Z}_{t-1}}=\mf{Y}_{t-1}+\mf{Z}_{t-1}-\mf{\Sigma}_n$;
\item[(ii)] \textbf{apply} the \textbf{singular value thresholding} (SVT) operator $T_{\psi}$ to $\mf{E}_{Y,t}=\mf{Y}_{t-1}- \frac{1}{2}(\mf{Y}_{t-1}+\mf{Z}_{t-1}-\mf{\Sigma}_n)$ and set $\mf{L}_{t}=T_{\psi}(\mf{E}_{Y,t})=\widehat{\mf{U}}\widehat{\mf{D}}_\psi \widehat{\mf{U}}^\top$;
\item[(iii)] \textbf{apply} the \textbf{soft-thresholding} operator $T_\rho$ to $\mf{E}_{Z,t}=\mf{Z}_{t-1}- \frac{1}{2}(\mf{Y}_{t-1}+\mf{Z}_{t-1}-\mf{\Sigma}_n)$ and set $\mf{S}_{t}=T_\rho(\mf{E}_{Z,t})$;
\item[(iv)] \textbf{set} $(\mf{Y}_{t},\mf{Z}_{t})=(\mf{L}_{t},\mf{S}_{t})+\left\{\frac{{\eta_{t-1}-1}}{{\eta_{t}}}\right\}\{(\mf{L}_{t},\mf{S}_{t})-(\mf{L}_{t-1},\mf{S}_{t-1})\}$
where $\eta_{t}={\frac{1}{2}+\frac{1}{2}\sqrt{1+4 \eta_{t-1}^2}}$;
\item[(v)] \textbf{stop} if the convergence criterion $\frac{\Vert\mf{L}_{t}-\mf{L}_{t-1}\Vert_F}{{1+\Vert \mf{L}_{t-1}\Vert_F}}
+\frac{\Vert\mf{S}_{t}-\mf{S}_{t-1}\Vert_F}{{1+\Vert \mf{S}_{t-1}\Vert_F}} \leq \varepsilon$ is met.
\end{description}
\item \textbf{Set} $\widehat{\mf{L}}^{(F)}_{\rm{A}}=\mathrm{tr}(\mf{\Sigma}_n)\mf{Y}_{t}$ and $\widehat{\mf{S}}^{(F)}_{\rm{A}}=\mathrm{tr}(\mf{\Sigma}_n)\mf{Z}_{t}$.
\end{enumerate}
\end{algorithm}
Algorithms \ref{alg_ld} and \ref{alg_fro_2}, unlike the algorithm in \cite{farne2020large},
allow us to define the vector of initial thresholds $\vf{\psi}_{init}$ as
a function of $\frac{1}{p}$, and the vector of initial thresholds $\vf{\rho}_{init}$ as a function of $\frac{1}{p\sqrt{p}}$,
because $\sqrt{p}$ is the maximum allowed degree order of the residual component under Assumption \ref{sparsity}.
This is possible because we rescale by the trace of the input in both algorithms (see step 1).
In the end, for each threshold pair $(\psi,\rho)$ we can calculate
$\widehat{\mf{\Sigma}}^{(ld)}_{\rm{A}}(\psi,\rho)=\widehat{\mf{L}}^{(ld)}_{\rm{A}}(\psi,\rho)+\widehat{\mf{S}}^{(ld)}_{\rm{A}}(\psi,\rho)$,
and $\widehat{\mf{\Sigma}}^{(F)}_{\rm{A}}(\psi,\rho)=\widehat{\mf{L}}^{(F)}_{\rm{A}}(\psi,\rho)+\widehat{\mf{S}}^{(F)}_{\rm{A}}(\psi,\rho)$.
Following \cite{farne2020large}, we also perform the unshrinkage of the estimated latent eigenvalues,
as this operation improves the sample total loss as much as possible in finite samples. We thus get
the UNALCE (UNshrunk ALCE) estimates as:
\begin{eqnarray}
\widehat{\mf{L}}_{\rm{U}}=\widehat{\mf{U}}_{\rm{A}}(\widehat{\mf{\Lambda}}_{\rm{A}}+\psi \mf{I}_r)\widehat{\mf{U}}_{\rm{A}}',\label{unshr1}\\
\mathrm{diag}(\widehat{\mf{S}}_{\rm{U}})=\mathrm{diag}(\widehat{\mf{\Sigma}}_{\rm{A}})-\mathrm{diag}(\widehat{\mf{L}}_{\rm{U}}),\label{unshr2}\\
\mathrm{off-diag}(\widehat{\mf{S}}_{\rm{U}})=\mathrm{off-diag}(\widehat{\mf{S}}_{\rm{A}}),\label{unshr3}
\end{eqnarray}
where $\psi>0$ is any chosen eigenvalue threshold parameter.
By setting $\widehat{r}_{A}=\mathrm{rk}(\widehat{\mf{L}}_{\rm{A}})$ and defining the spectral decomposition of $\widehat{\mf{L}}_{\rm{A}}$ as
$\widehat{\mf{L}}_{\rm{A}}=\widehat{\mf{U}}_{\rm{A}}\widehat{\mf{D}}_{\rm{A}}\widehat{\mf{U}}_{\rm{A}}'$,
with $\widehat{\mf{U}}_{\rm{A}}$ $p \times \widehat{r}_A$ matrix such that $\widehat{\mf{U}}_{\rm{A}}'\widehat{\mf{U}}_{\rm{A}}=\mf{I}_{\widehat{r}_A}$,
and $\widehat{\mf{D}}_{\rm{A}}$ $\widehat{r}_A \times \widehat{r}_A$ diagonal matrix,
it can be proved \citep{farne2020large} that it holds
\begin{equation}
\left(\widehat{\mf{L}}_{\rm{U}},\widehat{\mf{S}}_{\rm{U}}\right)=\arg\min_{\mf{L} \in \widehat{\mathcal{L}}(\widehat{r}_{A}), \mf{S} \in \widehat{\mathcal{S}}_{diag}}\frac 12 \Vert{\mf{\Sigma}}_{n}-(\mf{L}+\mf{S})\Vert_{2},\label{min2}
\end{equation}
where
\begin{eqnarray}
\widehat{\mathcal{L}}(\widehat{r}_{A}) = \{\mf{L} \mid \mf{L} \succeq 0, {\mf{L}}={\widehat{\mf{U}}_{\rm{A}}\mf{D}\widehat{\mf{U}}_{\rm{A}}'}, \mf{D} \in \R^{\widehat{r}_A \times \widehat{r}_A} \mbox{ diagonal}\},\\
\widehat{\mathcal{S}}_{diag}=\{\mf{S} \in \R^{p \times p} \mid \mathrm{diag}(\mf{L})+\mathrm{diag}(\mf{S})=\mathrm{diag}(\widehat{\mf{\Sigma}}_{\rm{A}}),\nonumber\\
\mathrm{off-diag}(\mf{S})=\mathrm{off-diag}(\widehat{\mf{S}}_{\rm{A}}),
\mf{L}\in\widehat{\mathcal{L}}(\widehat{r}_{A})\}.
\end{eqnarray}
For this reason, we calculate $\widehat{\mf{L}}_{\rm{U}}$ and $\widehat{\mf{S}}_{\rm{U}}$ as in \eqref{unshr1}, \eqref{unshr2} and \eqref{unshr3}
by Algorithm \ref{alg_ld} or \ref{alg_fro_2},
and we obtain, for each threshold pair $(\psi,\rho)$,
the pairs of estimates\\
$\left(\widehat{\mf{L}}^{(ld)}_{\rm{U}}(\psi,\rho),\widehat{\mf{S}}^{(ld)}_{\rm{U}}(\psi,\rho)\right)$ and
$\left(\widehat{\mf{L}}^{(F)}_{\rm{U}}(\psi,\rho),\widehat{\mf{S}}^{(F)}_{\rm{U}}(\psi,\rho)\right)$.
As a consequence, we can derive the overall UNALCE estimates as
$\widehat{\mf{\Sigma}}^{(ld)}_{\rm{U}}(\psi,\rho)=\widehat{\mf{L}}^{(ld)}_{\rm{U}}(\psi,\rho)+\widehat{\mf{S}}^{(ld)}_{\rm{U}}(\psi,\rho)$ and
$\widehat{\mf{\Sigma}}^{(F)}_{\rm{U}}(\psi,\rho)=\widehat{\mf{L}}^{(F)}_{\rm{U}}(\psi,\rho)+\widehat{\mf{S}}^{(F)}_{\rm{U}}(\psi,\rho)$.
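A minimal sketch of the unshrinkage steps \eqref{unshr1}--\eqref{unshr3} is reported below; the numerical tolerance used to detect the positive latent eigenvalues is an assumption of ours.
\begin{verbatim}
import numpy as np

def unshrink(L_A, S_A, psi, tol=1e-10):
    # eq. (unshr1): add psi back to the estimated latent eigenvalues
    w, U = np.linalg.eigh((L_A + L_A.T) / 2)
    keep = w > tol                     # the r_hat retained eigenvalues
    U_r, w_r = U[:, keep], w[keep]
    L_U = (U_r * (w_r + psi)) @ U_r.T
    # eqs. (unshr2)-(unshr3): re-balance the diagonal, keep the off-diagonal
    S_U = S_A.copy()
    np.fill_diagonal(S_U, np.diag(L_A + S_A) - np.diag(L_U))
    return L_U, S_U
\end{verbatim}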
Then,
given the latent variance proportions $\widehat{\theta}(\psi,\rho)_{\rm{A}}=\frac{\mathrm{tr}(\widehat{\mf{L}}(\psi,\rho)_{\rm{A}})}{\mathrm{tr}(\widehat{\mf{\Sigma}}(\psi,\rho)_{\rm{A}})}$ and
$\widehat{\theta}(\psi,\rho)_{\rm{U}}=\frac{\mathrm{tr}(\widehat{\mf{L}}(\psi,\rho)_{\rm{U}})}{\mathrm{tr}(\widehat{\mf{\Sigma}}(\psi,\rho)_{\rm{U}})}$,
we can select the optimal threshold pairs $(\psi_{U},\rho_{U})$ and $(\psi_{A},\rho_{A})$
by minimizing the MC criteria
\begin{eqnarray}
MC(\psi,\rho)_{U}&=&\max
\left\{\frac{{\widehat{r}\Vert\widehat{\mf{L}}(\psi,\rho)_{\rm{U}}}\Vert_{2}}{\widehat{\theta}(\psi,\rho)_{\rm{U}}},
\frac{{\Vert\widehat{\mf{S}}(\psi,\rho)_{\rm{U}}}\Vert_{1,v}}{{\gamma}(1-\widehat{\theta}(\psi,\rho)_{\rm{U}})}\right\},\label{MC}\\
MC(\psi,\rho)_{A}&=&\max\left\{\frac{{\widehat{r}\Vert\widehat{\mf{L}}(\psi,\rho)_{\rm{A}}}\Vert_{2}}{\widehat{\theta}(\psi,\rho)_{\rm{A}}},
\frac{{\Vert\widehat{\mf{S}}(\psi,\rho)_{\rm{A}}}\Vert_{1,v}}{{\gamma}(1-\widehat{\theta}(\psi,\rho)_{\rm{A}})}\right\},\nonumber
\end{eqnarray}
where ${\gamma}=\frac{\rho}{\psi}$ is the ratio between the sparsity and the latent eigenvalue threshold
(see \cite{farne2020large} for more details). In this way, we can select the optimal threshold pairs
$(\psi_{A},\rho_{A})=\arg \min_{(\psi,\rho)} MC(\psi,\rho)_{A}$ and
$(\psi_{U},\rho_{U})=\arg \min_{(\psi,\rho)} MC(\psi,\rho)_{U}$.
This procedure is applied for Algorithms \ref{alg_ld} and \ref{alg_fro_2}, where
the possible threshold pairs are derived by the Cartesian product of the initial vectors $\vf{\psi}_{init}$ and $\vf{\rho}_{init}$
(see Section \ref{real}).
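A sketch of the resulting selection loop is shown below. Here $\Vert\cdot\Vert_{1,v}$ is taken as the entrywise $l_1$ norm, which is an assumption about the notation, and the grid mirrors the one used in Section \ref{real}; \texttt{estimator} is any map from $(\mf{\Sigma}_n,\psi,\rho)$ to a pair of estimates, e.g. the \texttt{alce\_ld} sketch above.
\begin{verbatim}
import numpy as np
from itertools import product

def mc_criterion(L_hat, S_hat, psi, rho):
    # MC criterion of eq. (MC), with gamma = rho / psi
    theta = np.trace(L_hat) / np.trace(L_hat + S_hat)
    r_hat = np.linalg.matrix_rank(L_hat)
    gamma = rho / psi
    term_L = r_hat * np.linalg.norm(L_hat, 2) / theta
    term_S = np.abs(S_hat).sum() / (gamma * (1 - theta))  # ||.||_{1,v} assumed entrywise
    return max(term_L, term_S)

def select_thresholds(Sigma_n, estimator):
    p = Sigma_n.shape[0]
    psi_grid = np.array([1/20, 1/10, 1/5, 1/3, 1/2, 1, 2, 5, 10, 20]) / p
    rho_grid = psi_grid / np.sqrt(p)
    best = None
    for psi, rho in product(psi_grid, rho_grid):
        L_hat, S_hat = estimator(Sigma_n, psi, rho)
        mc = mc_criterion(L_hat, S_hat, psi, rho)
        if best is None or mc < best[0]:
            best = (mc, psi, rho, L_hat, S_hat)
    return best
\end{verbatim}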
\section{Simulation study}\label{sim}
In this section, we test the theoretical results of the previous sections
on data simulated for this purpose.
Hereafter, we report the key simulation parameters:
\begin{enumerate}
\item the dimension $p$ and the sample size $n$;
\item the rank $r$ and the condition number $c=\mathrm{cond}(\mf{L}^{*})=\lambda_{1}(\mf{L}^{*})/\lambda_{r}(\mf{L}^{*})$
of the low rank component $\mf{L}^{*}$;
\item the trace of $\mf{L}^{*}$, $\tau \theta p$, where $\tau$ is a magnitude parameter
and $\theta=\mathrm{tr}(\mf{L}^{*})/\mathrm{tr}(\mf{\Sigma}^{*})$ is the proportion of variance explained by $\mf{L}^{*}$;
\item the number of off-diagonal non-zeros $s$ in the sparse component $\mf{S}^{*}$;
\item the minimum latent eigenvalue $\lambda_r(\mf{L}^{*})$;
\item the minimum nonzero off-diagonal residual entry in absolute value $\Vert\mf{S}^{*}\Vert_{min,off}$;
\item the proportion of non-zeros over the number of off-diagonal elements, $\pi_s=\frac{2s}{p(p-1)}$;
\item the proportion of (absolute) residual covariance $\rho_{\mf{S}^{*}}=\frac{\sum_{i=1}^p\sum_{j \ne i}{\vert \mf{S}_{ij}^*\vert}}{\sum_{i=1}^p\sum_{j \ne i}\vert\mf{\Sigma}_{ij}^*\vert}$;
\item $N=100$ replicates for each setting.
\end{enumerate}
The detailed simulation algorithm is reported in \cite{farne2016large}.
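For illustration, a simplified generator consistent with the parameters above might look as follows; the exact algorithm is the one in \cite{farne2016large}, while the sketch below makes ad hoc choices of ours (uniform off-diagonal magnitudes, a diagonal shift to enforce positive definiteness, $\tau=1$).
\begin{verbatim}
import numpy as np

def simulate_setting(p, n, r, theta, cond, pi_s, seed=0):
    rng = np.random.default_rng(seed)
    # low rank component with trace theta * p and condition number c = cond
    eigs = np.linspace(cond, 1.0, r)
    eigs *= theta * p / eigs.sum()
    U = np.linalg.qr(rng.standard_normal((p, r)))[0]
    L_star = (U * eigs) @ U.T
    # sparse component with off-diagonal non-zero proportion pi_s
    S_star = np.zeros((p, p))
    mask = np.triu(rng.random((p, p)) < pi_s, k=1)
    S_star[mask] = rng.uniform(-0.3, 0.3, size=mask.sum())
    S_star += S_star.T
    np.fill_diagonal(S_star, 1 - theta)     # so that tr(Sigma*) = p
    lam_min = np.linalg.eigvalsh(S_star).min()
    if lam_min <= 0:                        # enforce positive definiteness
        S_star += (1e-3 - lam_min) * np.eye(p)
    Sigma_star = L_star + S_star
    X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)
    return L_star, S_star, Sigma_star, X.T @ X / n  # last item is Sigma_n
\end{verbatim}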
The main parameters of simulated settings are reported in Tables \ref{sett} and \ref{specond}.
Setting 1 presents mildly spiked eigenvalues and a very sparse residual component. Setting 2 has spiked eigenvalues and a far less sparse residual. Settings 3 and 4 are intermediate in spikiness and sparsity but present a much higher $p/n$ ratio. In particular, while Settings 1 and 2 have $p/n=0.1$, Setting 3 has $p/n=1$ and Setting 4 has $p/n=2$.
In each setting, the eigenvalues of $\mf{L}^{*}$ and $\mf{\Sigma}^{*}$ almost overlap, while
the eigenvalues of $\mf{S}^{*}$ are much smaller.
Note that the minimum allowed off-diagonal residual element in absolute value, $\Vert\mf{S}^{*}\Vert_{min,off}$, decreases from Setting 1 to Setting 4.
\begin{table}
\caption{\label{sett} Simulated settings: parameters.}
\centering
\begin{tabular}{cccccccccccc}
\hline
Setting & $p$ & $n$ & $p/n$ & $r$ & $\theta$ & $c$ & $\pi_s$ & $\rho_{\mf{S}^{*}}$& {spikiness}& {sparsity}\\% & $\VertL\Vert$\\
\hline
1 & $100$ & $1000$ & $0.1$ & $4$ & $0.7$ & $2$ & $0.0238$ & $0.0045$ & {low} &{high}\\% & $23.33$\\
2 & $100$ & $1000$ & $0.1$ & $3$ & $0.8$ & $4$ & $0.1172$ & $0.0072$ & {high} & {low}\\% & $128$\\
3 & $150$ & $150$ & $1$ & $5$ & $0.8$ & $2$ & $0.0320$ & $0.0033$ & {middle} & {middle}\\% & $32$\\
4 & $200$ & $100$ & $2$ & $6$ & $0.8$ & $2$ & $0.0366$ & $0.0039$ & {middle} & {middle}\\% & $35.56$\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{specond} Simulated settings: spectral norms and condition numbers.}
\centering
\begin{tabular}{ccccccccc}
\hline
Setting & $\Vert\mf{L}^{*}\Vert_{2}$ & $\lambda_r(\mf{L}^{*})$ & $c$ & $\Vert\mf{S}^{*}\Vert_{2}$ & $\Vert\mf{S}^{*}\Vert_{min,off}$ & $\mathrm{cond}({\mf{S}^{*}})$ & $\Vert\mf{\Sigma}^{*}\Vert_{2}$ & $\mathrm{cond}({\mf{\Sigma}^{*}})$\\
\hline
1 & $23.33$ & $11.67$ & $2$ & $3.78$ & $0.0275$ & $2.26e07$ & $24.49$ & $9.49e07$\\
2 & $128$ & $32$ & $4$ & $5.58$ & $0.0226$ & $2.53e05$ & $130.14$ & $4.07e06$\\
3 & $32$ & $16$ & $2$ & $2.56$ & $0.0161$ & $2.35e13$ & $32.48$ & $1.58e10$\\
4 & $35.56$ & $17.78$ & $2$ & $4.69$ & $0.0138$ & $1.17e13$ & $36.39$ & $3.09e09$\\
\hline
\end{tabular}
\end{table}
For each scenario, we simulate $N=100$ replicates from model \eqref{mod},
thus getting $100$ instances of the input sample covariance matrix
$\mf{\Sigma}_n$. Then:
\begin{itemize}
\item we apply Algorithm \ref{alg_ld} to each generated $\mf{\Sigma}_n$ to get the solution pair of problem \eqref{obj},
which we call the ALCE-ld pair: $\left(\widehat{\mf{L}}^{(ld)}_{A},\widehat{\mf{S}}^{(ld)}_{A}\right)$.
Then, we apply the unshrinkage steps in \eqref{unshr1}, \eqref{unshr2}, \eqref{unshr3},
and we get the UNALCE-ld pair of estimates $\left(\widehat{\mf{L}}^{(ld)}_{U},\widehat{\mf{S}}^{(ld)}_{U}\right)$.
\item we apply Algorithm \ref{alg_fro_2} to each generated $\mf{\Sigma}_n$ to get the solution pair of problem \eqref{obj_all} with $\mathfrak{L}(\mf{L},\mf{S})=\mathfrak{L}^{(F)}(\mf{L},\mf{S})$,
which we call the ALCE-F pair: $\left(\widehat{\mf{L}}^{(F)}_{A},\widehat{\mf{S}}^{(F)}_{A}\right)$.
Then, we apply the unshrinkage steps in \eqref{unshr1}, \eqref{unshr2}, \eqref{unshr3},
and we get the UNALCE-F pair: $\left(\widehat{\mf{L}}^{(F)}_{U},\widehat{\mf{S}}^{(F)}_{U}\right)$.
\end{itemize}
Let us denote the generic low rank estimate as $\widehat{\mf{L}}$, the generic sparse estimate as $\widehat{\mf{S}}$,
and the generic covariance matrix estimate as $\widehat{\mf{\Sigma}}=\widehat{\mf{L}}+\widehat{\mf{S}}$.
The performance metrics to assess the quality of estimates are
the Frobenius total loss $TLF = \Vert\widehat{\mf{\Sigma}}-\mf{\Sigma}^{*}\Vert_{F}$;
the spectral total loss $TL2 = \Vert\widehat{\mf{\Sigma}}-\mf{\Sigma}^{*}\Vert_{2}$;
the spectral low rank loss $LL2 = \Vert\widehat{\mf{L}}-\mf{L}^{*}\Vert_{2}$;
the sparse maximum loss $SLM= \Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{\infty}$.
The proportion of wrongly recovered latent ranks is
$err(\widehat{r})=\frac{1}{N}\sum_{k=1}^N\mathbbm{1}(\widehat{r}_{k} \neq r)$.
The estimated proportions of latent variance $\widehat{\theta}$, residual
covariance $\widehat{\rho}_{\widehat{\mf{S}}}$, and residual non-zeros $\widehat{\pi}_{\widehat{s}}$ are also computed.
Their estimation performance is measured by
the estimation bias of each parameter, defined as
$\mathrm{bias}(\widehat{\theta}) = \widehat{\theta}_{mean}-\theta$,
$\mathrm{bias}(\widehat{\rho}_{\widehat{\mf{S}}}) = \widehat{\rho}_{\widehat{\mf{S}},{mean}}-\rho_{\mf{S}^{*}}$, and
$\mathrm{bias}(\widehat{\pi}_{\widehat{s}}) = \widehat{\pi}_{\widehat{s},{mean}}-\pi_s$,
where $\widehat{\theta}_{mean}$, $\widehat{\rho}_{\widehat{\mf{S}},{mean}}$ and $\widehat{\pi}_{\widehat{s},{mean}}$
are the mean estimates of $\theta$, $\rho_{\mf{S}^{*}}$ and $\pi_s$
over the $N$ replicates.
The performance in terms of eigen-structure recovery is measured for $\mf{\Sigma}^{*}$ by $\lambda(\widehat{\mf{\Sigma}})$,
which is defined as the Euclidean distance between the estimated and true eigenvalues of $\mf{\Sigma}^{*}$:
\begin{equation}\label{eig}
\lambda(\widehat{\mf{\Sigma}})=\sqrt{\sum_{i=1}^p (\widehat{\lambda}_i(\widehat{\mf{\Sigma}})-\lambda_i(\mf{\Sigma^{*}}))^2}.
\end{equation}
Measure \eqref{eig} is similarly defined for $\mf{L}^{*}$ and $\mf{S}^{*}$ as $\lambda(\widehat{\mf{L}})$ and $\lambda(\widehat{\mf{S}})$,
respectively. All three measures are averaged over the $N$ replicates.
In the end, we calculate the following metrics for sparsity pattern recovery:
\begin{itemize}
\item $poserr=\frac{pos_{wr}}{pos}$, where $pos_{wr}$ is the number of positive elements wrongly estimated as zero or negative, and $pos$ is the number of positive elements;
\item $negerr=\frac{neg_{wr}}{neg}$, where $neg_{wr}$ is the number of negative elements wrongly estimated as zero or positive, and $neg$ is the number of negative elements;
\item $zeroerr=\frac{zero_{wr}}{zero}$, where $zero_{wr}$ is the number of zero elements wrongly estimated as positive or negative, and $zero$ is the number of zero elements.
\end{itemize}
These measures are averaged over the $N$ replicates.
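All the above metrics are straightforward to compute from the estimated and true components; a compact sketch is given below (NumPy; function names are ours), where the off-diagonal sign errors are normalized as discussed above.
\begin{verbatim}
import numpy as np

def losses(L_hat, S_hat, L_star, S_star):
    Sig_hat, Sig_star = L_hat + S_hat, L_star + S_star
    return {
        "TLF": np.linalg.norm(Sig_hat - Sig_star, "fro"),
        "TL2": np.linalg.norm(Sig_hat - Sig_star, 2),
        "LL2": np.linalg.norm(L_hat - L_star, 2),
        "SLM": np.abs(S_hat - S_star).max(),
        # Euclidean distance between estimated and true spectra, eq. (eig)
        "lambda_Sigma": np.linalg.norm(np.linalg.eigvalsh(Sig_hat)
                                       - np.linalg.eigvalsh(Sig_star)),
    }

def sign_errors(S_hat, S_star):
    # sparsity pattern recovery errors on the off-diagonal entries
    off = ~np.eye(S_star.shape[0], dtype=bool)
    s_hat, s_star = np.sign(S_hat[off]), np.sign(S_star[off])
    pos, neg, zero = s_star > 0, s_star < 0, s_star == 0
    return {
        "poserr": (s_hat[pos] <= 0).mean() if pos.any() else 0.0,
        "negerr": (s_hat[neg] >= 0).mean() if neg.any() else 0.0,
        "zeroerr": (s_hat[zero] != 0).mean() if zero.any() else 0.0,
    }
\end{verbatim}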
Tables \ref{tab:scenario1}, \ref{tab:scenario2}, \ref{tab:scenario3}, and \ref{tab:scenario4} report simulation results
for UNALCE-ld, ALCE-ld, UNALCE-F, and ALCE-F under Scenarios 1, 2, 3, and 4, respectively.
First, we can note that the latent rank is perfectly recovered by all methods.
The UNALCE estimates have a consistent advantage over the ALCE ones as concerns the latent variance proportion $\theta$,
which is an important parameter in factor modelling. This advantage increases as $\theta$ and $p$ increase,
and is reflected in the metrics on eigenvalue estimation.
Concerning the performance metrics, we can see that the log-det based estimates tend to be slightly more accurate than the Frobenius ones
under Scenario 4, which presents a large $p/n$ ratio.
Concerning the sparsity pattern metrics, we note that, as $p/n$ increases and $\Vert\mf{S}^{*}\Vert_{min,off}$ decreases, the sparsity pattern recovery gets increasingly worse.
\begin{table}[htbp]
\centering
\caption{Simulation results for Scenario 1.}
\begin{tabular}{lrrrrr}
& \multicolumn{1}{l}{UNALCE-ld} & \multicolumn{1}{l}{ALCE-ld} & \multicolumn{1}{l}{UNALCE-F} & \multicolumn{1}{l}{ALCE-F}\\
TLF & 6.9833 & 6.9850 & 6.9826 & 6.9872 \\
TL2 & 4.7471 & 4.7623 & 4.7488 & 4.7869 \\
LL2 & 4.7198 & 4.7340 & 4.7165 & 4.7420 \\
SLM & 0.2105 & 0.2096 & 0.2157 & 0.2078 \\
& & & & \\
$err(\widehat{r})$ & 0 & 0 & 0 & 0\\
$\mathrm{bias}(\widehat{\theta})$ & -0.0039 & -0.0058 & -0.0023 & -0.0071\\
$\mathrm{bias}(\widehat{\rho}_{\widehat{\mf{S}}})$ & -0.0001 & 0.0001 & -0.0009 & -0.0009\\
$\mathrm{bias}(\widehat{\pi}_{\widehat{s}})$ & 0.0172 & 0.0184 & 0.0069 & 0.0068\\
& & & & \\
$\lambda(\widehat{\mf{\Sigma}})$ & 5.5079 & 5.5066 & 5.5083 & 5.5115 \\
$\lambda(\widehat{\mf{S}})$ & 0.2856 & 0.2876 & 0.3013 & 0.2940 \\
$\lambda(\widehat{\mf{L}})$ & 7.7540 & 7.7861 & 7.8553 & 7.7487 \\
& & & & \\
$poserr$ & 0.2058 & 0.2013 & 0.2953 & 0.2821 \\
$negerr$ & 0.2078 & 0.2058 & 0.2906 & 0.2832 \\
$zeroerr$ & 0.0225 & 0.0236 & 0.0141 & 0.0137 \\
\end{tabular}
\label{tab:scenario1}
\end{table}
\begin{table}[htbp]
\centering
\caption{Simulation results for Scenario 2.}
\begin{tabular}{lrrrrl}
& \multicolumn{1}{l}{UNALCE-ld} & \multicolumn{1}{l}{ALCE-ld} & \multicolumn{1}{l}{UNALCE-F} & \multicolumn{1}{l}{ALCE-F}\\
TLF & 11.6898 & 11.6917 & 11.6895 & 11.6931 \\
TL2 & 7.8031 & 7.8324 & 7.7983 & 7.8402 \\
LL2 & 7.7352 & 7.7552 & 7.7271 & 7.7601 \\
SLM & 0.1659 & 0.157 & 0.1559 & 0.1593 \\
& & & & \\
$err(\widehat{r})$ & 0 & 0 & 0 & 0\\
$\mathrm{bias}(\widehat{\theta})$ & -0.0024 & -0.0053 & -0.0015 & -0.0065\\
$\mathrm{bias}(\widehat{\rho}_{\widehat{\mf{S}}})$ & -0.0012 & -0.0019 & -0.0017 & -0.0023\\
$\mathrm{bias}(\widehat{\pi}_{\widehat{s}})$ & -0.0033 & -0.0044 & -0.0036 & -0.0090\\
& & & & \\
$\lambda(\widehat{\mf{\Sigma}})$ & 11.1341 & 11.1005 & 11.1632 & 11.1023 \\
$\lambda(\widehat{\mf{S}})$ & 0.2432 & 0.2257 & 0.2228 & 0.2305 \\
$\lambda(\widehat{\mf{L}})$ & 14.4693 & 14.4831 & 14.4947 & 14.5836 \\
& & & & \\
$poserr$ & 0.3961 & 0.4043 & 0.396 & 0.4362 \\
$negerr$ & 0.4043 & 0.4086 & 0.4039 & 0.4425 \\
$zeroerr$ & 0.0244 & 0.0234 & 0.0241 & 0.0193 \\
\end{tabular}
\label{tab:scenario2}
\end{table}
\begin{table}[htbp]
\centering
\caption{Simulation results for Scenario 3.}
\begin{tabular}{lrrrrl}
& \multicolumn{1}{l}{UNALCE-ld} & \multicolumn{1}{l}{ALCE-ld} & \multicolumn{1}{l}{UNALCE-F} & \multicolumn{1}{l}{ALCE-F}\\
TLF & 13.0149 & 13.0182 & 13.0287 & 13.0098 \\
TL2 & 8.8829 & 8.8599 & 8.8981 & 8.8598 \\
LL2 & 8.8756 & 8.8442 & 8.8823 & 8.8477 \\
SLM & 0.4004 & 0.3935 & 0.3952 & 0.3952 \\
& & & & \\
$err(\widehat{r})$ & 0 & 0 & 0 & 0\\
$\mathrm{bias}(\widehat{\theta})$ & -0.0031 & -0.0069 & -0.0024 & -0.0084\\
$\mathrm{bias}(\widehat{\rho}_{\widehat{\mf{S}}})$ & -0.0002 & 0.0015 & 0.0006 & 0.0005\\
$\mathrm{bias}(\widehat{\pi}_{\widehat{s}})$ & -0.0061 & 0.0134 & 0.0024 & 0.0011\\
& & & & \\
$\lambda(\widehat{\mf{\Sigma}})$ & 6.0518 & 6.0539 & 6.0432 & 6.0730 \\
$\lambda(\widehat{\mf{S}})$ & 0.4864 & 0.4903 & 0.4845 & 0.4938 \\
$\lambda(\widehat{\mf{L}})$ & 6.0577 & 6.0584 & 6.0449 & 6.0789 \\
& & & & \\
$poserr$ & 0.6647 & 0.5403 & 0.5947 & 0.6072 \\
$negerr$ & 0.6548 & 0.5261 & 0.5822 & 0.5972 \\
$zeroerr$ & 0.0145 & 0.0297 & 0.0207 & 0.0197 \\
\end{tabular}
\label{tab:scenario3}
\end{table}
\begin{table}[htbp]
\centering
\caption{Simulation results for Scenario 4.}
\begin{tabular}{lrrrrl}
& \multicolumn{1}{l}{UNALCE-ld} & \multicolumn{1}{l}{ALCE-ld} & \multicolumn{1}{l}{UNALCE-F} & \multicolumn{1}{l}{ALCE-F}\\
TLF & 20.9703 & 20.9719 & 20.9889 & 20.9712 \\
TL2 & 13.1786 & 13.1007 & 13.2233 & 13.0961 \\
LL2 & 13.1599 & 13.0811 & 13.2006 & 13.0717 \\
SLM & 0.6544 & 0.6840 & 0.6564 & 0.6556 \\
& & & & \\
$err(\widehat{r})$ & 0 & 0 & 0 & 0\\
$\mathrm{bias}(\widehat{\theta})$ & -0.0060 & -0.0128 & -0.0028 & -0.0134\\
$\mathrm{bias}(\widehat{\rho}_{\widehat{\mf{S}}})$ & -0.0011 & -0.0003 & -0.0005 & -0.0005\\
$\mathrm{bias}(\widehat{\pi}_{\widehat{s}})$ & -0.0209 & -0.0151 & -0.0164 & -0.0163\\
& & & & \\
$\lambda(\widehat{\mf{\Sigma}})$ & 10.002 & 10.0684 & 9.9909 & 10.0903 \\
$\lambda(\widehat{\mf{S}})$ & 0.7847 & 0.8050 & 0.7699 & 0.8013 \\
$\lambda(\widehat{\mf{L}})$ & 10.2028 & 10.2576 & 10.219 & 10.2874 \\
& & & & \\
$poserr$ & 0.7888 & 0.7405 & 0.7561 & 0.7466 \\
$negerr$ & 0.7853 & 0.7351 & 0.7491 & 0.7453 \\
$zeroerr$ & 0.0073 & 0.0110 & 0.0104 & 0.0102 \\
\end{tabular}
\label{tab:scenario4}
\end{table}
\section{Real data analysis}\label{real}
In this section, we compute $(\widehat{\mf{L}}^{(ld)}_{\rm{U}},\widehat{\mf{S}}^{(ld)}_{\rm{U}})$ and $(\widehat{\mf{L}}^{(F)}_{\rm{U}},\widehat{\mf{S}}^{(F)}_{\rm{U}})$
on a selection of $361$
macroeconomic indicators provided by the European Central Bank
for $364$ systematically important Euro Area banks.
The indicators, taken in logarithms, are mainly financial items from the banks' balance sheets,
reported at a high level of granularity. All data refer to Q4-2014.
Table \ref{ecb_data} reports estimation results. Algorithms \ref{alg_ld} and \ref{alg_fro_2} are applied on the input sample covariance matrix,
setting the vector of eigenvalue thresholds as $\vf{\psi}_{init}=\frac{i}{p}$, with $i=\frac{1}{20},\frac{1}{10},\frac{1}{5},\frac{1}{3},\frac{1}{2},1,2,5,10,20$,
and the vector of initial thresholds $\vf{\rho}_{init}$ as $\frac{1}{\sqrt{p}}\vf{\psi}_{init}$.\footnote{Whenever one of the thresholds selected by the MC criterion \eqref{MC} lies at the grid extremes, it is advisable to shift the vector $\vf{\psi}_{init}$ to the left. In this case, the MC criterion selected the pair with positions $(9,2)$ for UNALCE-ld and $(5,3)$ for UNALCE-F.}
The scree plot of sample eigenvalues (Figure \ref{eigECB}) highlights the presence of only one latent eigenvalue.
\begin{table}[htbp]
\centering
\parbox{0.35\textwidth}{
\begin{footnotesize}
\begin{tabular}{ccc}
\hline
& $(\widehat{\mf{L}}^{(ld)}_{\rm{U}},\widehat{\mf{S}}^{(ld)}_{\rm{U}})$ & $(\widehat{\mf{L}}^{(F)}_{\rm{U}},\widehat{\mf{S}}^{(F)}_{\rm{U}})$\\
\hline
$\widehat{r}$ & 1 & 1\\
$\widehat{\theta}$ & 0.2106 & 0.2219 \\
$\widehat{\rho}_{\widehat{\mf{S}}}$ & 0.0638 & 0.0711\\
$\widehat{\pi}_{\widehat{s}}$ & 0.0548 & 0.0680 \\
$\Vert \widehat{\mf{\Sigma}}-\mf{\Sigma}_n \Vert_F$ & 846.854 & 723.1715 \\
$\Vert \widehat{\mf{\Sigma}}-\mf{\Sigma}_n \Vert_2$ & 505.4935 & 342.2135 \\
\hline
\end{tabular}
\end{footnotesize}
\captionof{table}{Supervisory data: estimation results.}
\label{ecb_data}
}
\qquad
\begin{minipage}[c]{0.6\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{ecb_eig}
\captionof{figure}{Supervisory data: top six sample eigenvalues.}
\label{eigECB}
\end{minipage}
\end{table}
It follows that the estimated latent rank is $1$ in both cases. The latent variance proportion is a bit smaller for $\widehat{\mf{L}}^{(ld)}_{\rm{U}}$ compared to $\widehat{\mf{L}}^{(F)}_{\rm{U}}$. Moreover, $\widehat{\mf{S}}^{(ld)}_{\rm{U}}$ is a bit more selective than $\widehat{\mf{S}}^{(F)}_{\rm{U}}$
in recovering residual non-zeros, which results in a lower proportion of non-zeros.
From the estimates $\widehat{\mf{L}}$, $\widehat{\mf{S}}$, and $\widehat{\mf{\Sigma}}$,
we get for each variable $i=1,\ldots,p$ the estimated commonality as $\frac{\widehat{\mf{L}}_{ii}}{\widehat{\mf{\Sigma}}_{ii}}$ and
the estimated idiosyncrasy as $\frac{\widehat{\mf{S}}_{ii}}{\widehat{\mf{\Sigma}}_{ii}}$.
The estimated residual degree is obtained as $deg_{\widehat{\mf{S}},i}=\sum_{j=1}^{p}\mathbbm{1}(\widehat{\mf{S}}_{ij} \ne 0)$.
We obtain the spectral decomposition of $\widehat{\mf{L}}$ as $\mf{U}_{\widehat{\mf{L}}}\mf{D}_{\widehat{\mf{L}}}\mf{U}_{\widehat{\mf{L}}}'$,
and we compute the vector of loadings $\mf{U}_{\widehat{\mf{L}}}\mf{D}_{\widehat{\mf{L}}}^{\frac{1}{2}}$.
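These quantities can be computed directly from the estimated pair; a minimal sketch follows, where the tolerance for detecting non-zeros and positive eigenvalues is our own choice.
\begin{verbatim}
import numpy as np

def factor_summaries(L_hat, S_hat, tol=1e-10):
    Sigma_hat = L_hat + S_hat
    commonality = np.diag(L_hat) / np.diag(Sigma_hat)
    idiosyncrasy = np.diag(S_hat) / np.diag(Sigma_hat)
    degree = (np.abs(S_hat) > tol).sum(axis=1)   # residual degree per variable
    w, U = np.linalg.eigh(L_hat)
    keep = w > tol
    loadings = U[:, keep] * np.sqrt(w[keep])     # U_L D_L^{1/2}
    return commonality, idiosyncrasy, degree, loadings
\end{verbatim}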
The estimated loadings are very similar for $\widehat{\mf{L}}^{(ld)}_{\rm{U}}$ and $\widehat{\mf{L}}^{(F)}_{\rm{U}}$,
although for $\widehat{\mf{L}}^{(ld)}_{\rm{U}}$ they are slightly more concentrated,
and denote the contrast between loans and receivables and the rest of supervisory indicators.
Table \ref{Comm_def} shows that the extracted factor is mainly connected to total assets, followed by
variables representing available cash. Table \ref{Deg_def} shows that the variables most connected with all the others are related to
credit risk, deposits, and loans. Table \ref{Idio_def} shows that the most marginal variables with respect to the factor structure
are related to equity instruments and derivatives.
\begin{table}[htbp]
\caption{\label{Comm_def}Supervisory data: this table reports the top four variables by estimated commonality, with respect to $\widehat{\mf{L}}^{(ld)}_{U}$. This measure provides a ranking of the variables by systemic importance in determining the latent structure.}
\begin{tabular}{ll}
\hline
Supervisory indicator & Commonality \\
\hline
Total assets & $0.6593$ \\
Advances that are not loans - Other financial corporations & $0.3747$\\
Held for trading - Equity instruments - Carrying amount & $0.3657$\\%At cost -
Loans and advances - Governments - Allowances for credit risk & $0.3606$\\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{\label{Deg_def}Supervisory data: this table reports the top four variables by estimated degree, with respect to $\widehat{\mf{S}}^{(ld)}_{U}$.
This measure provides a ranking of the most connected variables with all the others, conditionally on the latent factor.\\ NFC stands for Non-Financial Corporations.}
\begin{tabular}{ll}
\hline
Supervisory indicator & Degree \\
\hline
Credit spread option - Notional amount - Sold & $104$ \\
Deposits - NFC - Fair value & $102$\\% / overnight deposits - Designated at fair value through profit or loss
Deposits - NFC - Repurchase agreements - Held for trading & $94$\\
Loan commitments - Non-performing - Nominal amount & $92$\\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{\label{Idio_def}Supervisory data: this table reports the top three variables by estimated idiosyncracy, with respect to $\widehat{\mf{S}}^{(ld)}_{U}$.
This measure provides a ranking of the variables by systemic irrelevance in determining the latent structure.\\ NFC stands for Non-Financial Corporations.}
\begin{tabular}{ll}
\hline
Supervisory indicator & Idiosyncracy \\
\hline
Derivatives: Trading - Credit - Sold & $1$ \\
Held for trading - Equity instruments - NFC - Carrying amount & $1$\\
Financial assets - Fair value - Equity instruments - NFC - Carrying amount & $1$\\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
In this paper, we provide a study of the estimation of large covariance matrices in high dimensions
under the low rank plus sparse assumption by minimizing a log-det heuristics augmented by a nuclear norm plus $l_1$ norm penalty.
In particular, we prove the local convexity and the Lipschitzianity of the proposed log-det heuristics,
which allows us to solve the optimization problem via a proximal gradient algorithm. We bound the curvature of the
log-det heuristics under an appropriate random matrix theory framework. Then, by adapting the results of \cite{chandrasekaran2012},
we solve the algebraic identification problem behind
the low rank and sparse component recovery, in spite of the non-linearity of the log-det heuristics, and we prove
the algebraic and parametric consistency of the ensuing pair of low rank and sparse covariance matrix estimators.
We also prove that this pair of estimators performs systematically no
worse than the corresponding estimator of \cite{farne2020large} obtained by nuclear norm plus $l_1$ norm penalized Frobenius loss minimization.
A new solution algorithm, which also allows control over the input threshold parameters, is proposed.
A wide simulation study supports the validity of our theoretical results, and an ECB supervisory data example
shows the usefulness of our approach on a real dataset.
\begin{appendix}
\section{Proofs}\label{proofs}
\begin{Lemma}\label{Lemma_cons}
Let ${\lambda}_r(\mf{\Sigma}_n)$ be the $r-$th largest eigenvalue of
the sample covariance matrix
$\mf{\Sigma}_n=\frac{1}{n}\sum_{k=1}^n \vf{x}_k \vf{x}_k'$.
Under Assumptions \ref{eigenvalues}, \ref{sparsity} and \ref{tails},
${\lambda_r(\mf{\Sigma}_n)} \simeq p^{\alpha_r}$ with probability approaching $1$
as $n \to \infty$.
\end{Lemma}
\begin{proof}
On one hand, we note that, since $r+p-p=r \leq p$,
dual Weyl inequality (see \cite{tao2011topics})
can be applied, leading to
\begin{equation}\label{bound_eig_left}
\lambda_r(\mf{\Sigma}^{*})\geq\lambda_r(\mf{L}^{*})+\lambda_p(\mf{S}^{*}).
\end{equation}
From (\ref{bound_eig_left}), we can write
\begin{equation}\label{bound_eig_left_{2}}
\lambda_r(\mf{\Sigma}^{*})\succeq O(p^{\alpha_r})+O(p^{\delta_{1}})=O(p^{\alpha_r}),
\end{equation}
because ${\lambda_r(\mf{L}^{*})} \simeq {p^{\alpha_r}}$ by Assumption \ref{eigenvalues}(i),
and ${\lambda_p(\mf{S}^{*})}=O(p^{\delta_{1}})$ by Assumption \ref{sparsity}, with $\delta_{1}<\alpha_r$.
On the other hand, Lidskii inequality (see \cite{tao2011topics}) leads to
\begin{equation}\label{bound_eig_right}
\lambda_r(\mf{\Sigma}^{*})\leq\lambda_r(\mf{L}^{*})+\sum_{j=1}^r\lambda_j(\mf{S}^{*}).
\end{equation}
From (\ref{bound_eig_right}), we can write
\begin{equation}\label{bound_eig_right_{2}}
\lambda_r(\mf{\Sigma}^{*})\preceq O(p^{\alpha_r})+O(r p^{\delta_{1}})=O(p^{\alpha_r}),
\end{equation}
because ${\lambda_r(\mf{L}^{*})} \simeq p^{\alpha_r}$ by Assumption \ref{eigenvalues}(i),
${\lambda_p(\mf{S}^{*})}=O(p^{\delta_{1}})$ with $\delta_{1}<\alpha_r$ by Assumption \ref{sparsity},
and $r$ is finite for all $p \in \numberset{N}$ by Assumption \ref{eigenvalues}(ii).
It follows that ${\lambda_r(\mf{\Sigma}^{*})} \simeq p^{\alpha_r}$.
Recalling that $\mf{\Sigma}_n=\frac{1}{n}\sum_{k=1}^n \vf{x}_k \vf{x}_k'$ and $\vf{x}_k=\mf{B}\vf{f}_k+{\vf{\epsilon}}_k$,
where ${\vf{f}}_k$ and ${\vf{\epsilon}}_k$, $k=1,\ldots,n$, are respectively the vectors of factor scores and residuals for each observation, we can decompose the error matrix ${\mf{E}}_n=\mf{\Sigma}_n-\mf{\Sigma}^{*}$ in four components as follows
(cf. \cite{fan2013large}):
$${\mf{E}}_n=\mf{\Sigma}_n-\mf{\Sigma}^{*}=\widehat{\mf{D}}_{1}+\widehat{\mf{D}}_{2}+\widehat{\mf{D}}_3+\widehat{\mf{D}}_4,$$ where:
\begin{eqnarray}
\widehat{\mf{D}}_{1}=\mf{B} \left(\frac{1}{n}\sum_{k=1}^n \vf{f}_k \vf{f}_k'-\mf{I}_r\right)\mf{B}',\nonumber\\
\widehat{\mf{D}}_{2}=\frac{1}{n} \sum_{k=1}^n \left({\vf{\epsilon}}_k {\vf{\epsilon}}_k'-\mf{S}^{*}\right),\nonumber\\
\widehat{\mf{D}}_3= \frac{1}{n}\mf{B}\sum_{k=1}^n \vf{f}_k {\vf{\epsilon}}_k',\nonumber\\
\widehat{\mf{D}}_4=\widehat{\mf{D}}_3'.\nonumber
\end{eqnarray}
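For intuition, the algebraic identity $\mf{E}_n=\widehat{\mf{D}}_{1}+\widehat{\mf{D}}_{2}+\widehat{\mf{D}}_3+\widehat{\mf{D}}_4$ can be verified numerically; the following sketch, with arbitrary dimensions of our choosing, checks it on simulated factor data.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, r, n = 8, 2, 50
B = rng.standard_normal((p, r))
S_star = np.eye(p)                    # any residual covariance works here
F = rng.standard_normal((n, r))       # factor scores f_k (as rows)
E = rng.multivariate_normal(np.zeros(p), S_star, size=n)  # residuals eps_k
X = F @ B.T + E                       # x_k = B f_k + eps_k
Sigma_n = X.T @ X / n
Sigma_star = B @ B.T + S_star
D1 = B @ (F.T @ F / n - np.eye(r)) @ B.T
D2 = E.T @ E / n - S_star
D3 = B @ F.T @ E / n
D4 = D3.T
print(np.max(np.abs(Sigma_n - Sigma_star - (D1 + D2 + D3 + D4))))  # ~1e-15
\end{verbatim}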
Following \cite{fan2013large}, we note that
$$\Vert\widehat{\mf{D}}_{1}\Vert_{2} \leq \bigg\Vert\frac{1}{n} \sum_{k=1}^n \vf{f}_k \vf{f}_k'-\mf{I}_r\bigg\Vert_{2}\Vert{\mf{B}\mf{B}'}\Vert_{2}\leq rp^{\alpha_{1}}
\max_{i,j \leq r}\bigg\vert\frac{1}{n} \sum_{k=1}^n \vf{f}_{ik}\vf{f}_{jk}-\mathrm{E}(\vf{f}_{ik}\vf{f}_{jk})\bigg\vert,$$
since $\mathrm{E}(\vf{f})=\vf{0}_r$ and $\mathrm{V}(\vf{f})=\mf{I}_r$, $\Vert{\mf{B}\mf{B}'}\Vert_{2}=O(p^{\alpha_{1}})$ by Assumption \ref{eigenvalues}(i), and $$\bigg\Vert\frac{1}{n} \sum_{k=1}^n \vf{f}_k \vf{f}_k'-\mf{I}_r\bigg \Vert_{2} \leq r \max_{i,j \leq r}\bigg\vert\frac{1}{n} \sum_{k=1}^n \vf{f}_{ik}\vf{f}_{jk}-\mathrm{E}(\vf{f}_{ik}\vf{f}_{jk})\bigg\vert$$
by Assumption \ref{eigenvalues}(ii).
Under Assumption \ref{tails}, we can apply Lemma 4 in \cite{fan2013large}, which claims
\begin{equation}
\mathrm{max}_{i,j \leq r} \biggl\vert \frac{1}{n} \sum_{k=1}^n \vf{f}_{ik}\vf{f}_{jk}-\mathrm{E}(\vf{f}_{ik}\vf{f}_{jk})\biggl\vert
\leq C' \frac{1}{\sqrt{n}},\label{Lemma4_{1}}
\end{equation}
with probability $1-O(1/n^2)$ ($C'$ is a real positive constant).
Consequently, we obtain
\begin{equation}\Vert\widehat{\mf{D}}_{1}\Vert_{2} \leq C' {r}\frac{p^{\alpha_{1}}}{\sqrt{n}}
=O\left(\frac{p^{\alpha_{1}}}{\sqrt{n}}\right),\label{bound1}\end{equation}
because $r$ is finite by Assumption \ref{eigenvalues}(ii).
Then, we note that the diagonal elements of the matrix ${\mf{S}^{*}}$
are bounded by a finite constant, due to Assumption \ref{sparsity}(ii).
Under Assumption \ref{tails}, (12) in \cite{bickel2008covariance} thus holds for the matrix ${\mf{S}^{*}}$, leading to:
\begin{equation}
\Vert\widehat{\mf{D}}_{2}\Vert_\infty=\mathrm{max}_{i,j \leq p}
\biggl\vert \frac{1}{n} \sum_{k=1}^n \vf{\epsilon}_{ik}\vf{\epsilon}_{jk}-\mathrm{ E}(\vf{\epsilon}_{ik}\vf{\epsilon}_{jk})\biggl
\vert \leq C' \sqrt{\frac{\log{p}}{{n}}},\label{Lemma4_{2}}
\end{equation}
that holds with probability $1-O(1/n^2)$.
Since by Assumption \ref{sparsity}(i) $\Vert \mf{S}^{*} \Vert_{0,v} \leq O(p^{\delta_{1}})$,
we can write
\begin{equation}
\Vert \widehat{\mf{D}}_{2} \Vert_{2} \leq \Vert \widehat{\mf{D}}_{2} \Vert_{0,v} \Vert\widehat{\mf{D}}_{2}\Vert_\infty\leq
C' p^{\delta_{1}}\sqrt{\frac{\log{p}}{n}},\label{bound2}
\end{equation}
where we used the fact that $$\mathcal{P}(\Vert \widehat{\mf{D}}_{2} \Vert_{0,v} = \Vert\mf{S}^{*}\Vert_{0,v}) \to 1$$
as $n \to \infty$.
Now, we study the random term $\mathrm{max}_{i\leq r,j\leq p} \biggl\vert \frac{1}{n} \sum_{k=1}^n \vf{f}_{ik}\vf{\epsilon}_{jk} \biggl\vert$. We know from Lemma 3 in \cite{fan2013large} that this term has sub-exponential tails, due to Assumption \ref{tails}.
Thus, we only need to study how its standard deviation evolves in our context. We consider the following Cauchy-Schwarz inequality:
$$\mathrm{max}_{i\leq r,j\leq p} \biggl\vert\frac{1}{n} \sum_{k=1}^n \vf{f}_{ik}\vf{\epsilon}_{jk} \biggl\vert
\leq C' \mathrm{max}_{i\leq r} \sqrt{{V}(\vf{f}_{i})} \mathrm{max}_{j \leq p} \sqrt{{V}(\vf{\epsilon}_{j})}.$$
From (\ref{Lemma4_{1}}), we know that $\mathrm{max}_i \sqrt{{V}(\vf{f}_{i})} \leq C' \frac{1}{\sqrt[4]{n}}$
with probability $1-O(1/n^2)$.
From (\ref{Lemma4_{2}}), we know that
$\mathrm{max}_j \sqrt{{V}(\vf{\epsilon}_{j})} \leq C' p^{\frac{\delta_{1}}{2}}\sqrt[4]{\frac{\log{p}}{n}}$
with probability $1-O(1/n^2)$.
It follows that, with probability $1-O(1/n^2)$, it holds
\begin{equation}
\biggl\Vert \frac{1}{n} \sum_{k=1}^n \vf{f}_k\vf{\epsilon}_k'\biggl\Vert_{2} \leq \sqrt{\Vert \mf{S}^{*} \Vert_{0,v}r} \biggl\Vert \frac{1}{n} \sum_{k=1}^n \vf{f}_k\vf{\epsilon}_k'\biggl\Vert_{\infty}\leq C'\sqrt{\frac{r}{n^{\frac{1}{2}}}} p^{\frac{\delta_{1}}{2}}\sqrt[4]{\frac{\log{p}}{n}},
\end{equation}
where the r.h.s is bounded by $C' p^{\frac{\delta_{1}}{2}}\sqrt{\frac{\log{p}}{n}}$,
due to Assumption \ref{eigenvalues}(ii).
Consequently, we obtain with probability $1-O(1/n^2)$ the following claim
\begin{equation}\Vert\widehat{\mf{D}}_3\Vert_{2} \leq \bigg\Vert \frac{1}{n} \sum_{k=1}^n \vf{f}_k\vf{\epsilon}_k'\bigg\Vert \times \Vert{\mf{B}}\Vert \leq C'\left(p^{\frac{\delta_{1}}{2}}\sqrt{\frac{\log{p}}{n}}\right)\left(p^{\frac{\alpha_{1}}{2}}\right)=C' p^{\frac{\alpha_{1}}{2}+\frac{\delta_{1}}{2}}\sqrt{\frac{\log{p}}{n}},\label{bound3}\end{equation}
because $\Vert{\mf{B}}\Vert=O(p^{\frac{\alpha_{1}}{2}})$ by Assumption \ref{eigenvalues}(i).
Putting (\ref{bound1}), (\ref{bound2}), and (\ref{bound3}) together, the following bound is proved with probability $1-O(1/n^2)$:
\begin{equation}\Vert{{\mf{\Sigma}}}_{n}-\mf{\Sigma}^{*}\Vert_{2}\leq C' \frac{p^{\alpha_{1}}}{\sqrt{n}},\label{boundtop}\end{equation}
because $\delta_{1} < \alpha_r \leq \alpha_{1}$ by Assumptions \ref{eigenvalues}(i) and \ref{sparsity}(i). It follows that
\begin{equation}\frac{1}{p^{\alpha_{1}}}\Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{2} \xrightarrow{n\to\infty} 0,\label{Lemma_input}\end{equation}
which, combined with Weyl's inequality $\vert\lambda_r(\mf{\Sigma}_n)-\lambda_r(\mf{\Sigma}^{*})\vert \leq \Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{2}$ and the fact that ${\lambda_r(\mf{\Sigma}^{*})} \simeq p^{\alpha_r}$, proves the thesis.
\end{proof}
\begin{Lemma}\label{Lemma_bmax}
Under Assumptions \ref{eigenvalues}(ii), \ref{sparsity}(ii) and \ref{tails},
\begin{equation}
\Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{\infty}\leq C'\sqrt{\frac{\log p}{n}},\label{bmax}
\end{equation}
with probability approaching one as $n \to \infty$.
\end{Lemma}
\begin{proof}
Under Assumptions \ref{eigenvalues}(ii) and \ref{tails}, with probability $1-O(1/n^2)$,
\begin{equation}
\Vert\widehat{\mf{D}}_{1}\Vert_{\infty} \leq \bigg\Vert\frac{1}{n} \sum_{k=1}^n \vf{f}_k \vf{f}_k'-\mf{I}_r\bigg\Vert_{\infty}\Vert{\mf{B}\mf{B}'}\Vert_{\infty}\leq C'\sqrt{\frac{1}{n}},\label{bmax1}
\end{equation}
because $\Vert{\mf{B}\mf{B}'}\Vert_{\infty}\leq (\max_{j=1,\ldots,p} \Vert \vf{b}_j \Vert)^2 \leq
r^2 \Vert \mf{B} \Vert_{\infty}^2=O(1)$
for all $p \in \N$.
Under Assumptions \ref{sparsity}(ii) and \ref{tails}, \eqref{Lemma4_{2}} ensures that, with probability $1-O(1/n^2)$,
\begin{equation}
\Vert\widehat{\mf{D}}_{2}\Vert_\infty=\mathrm{max}_{i,j \leq p}
\biggl \vert \frac{1}{n} \sum_{k=1}^n \vf{\epsilon}_{ik}\vf{\epsilon}_{jk}-\mathrm{ E}(\vf{\epsilon}_{ik}\vf{\epsilon}_{jk})
\biggl \vert \leq C' \sqrt{\frac{\log{p}}{{n}}}.\label{bmax2}
\end{equation}
Under Assumptions \ref{eigenvalues}(ii), \ref{sparsity}(ii) and \ref{tails}, from \eqref{bmax1} and \eqref{bmax2} we get
\begin{equation}
\Vert\widehat{\mf{D}}_{3}\Vert_\infty\leq r\Vert\mf{B}\Vert_{\infty}\biggl\Vert \frac{1}{n} \sum_{k=1}^n \vf{f}_k\vf{\epsilon}_k'\biggl\Vert_{\infty} \leq C' \sqrt{\frac{\log p}{n}},\label{bmax3}
\end{equation}
with probability $1-O(1/n^2)$.
Putting together \eqref{bmax1}, \eqref{bmax2}, \eqref{bmax3}, the thesis follows.
\end{proof}
\subsection*{Proof of Proposition \ref{first_der}}\label{first_dev_proof}
The proof is analogous to the proof of equation (6) in \cite{stats5030037}.
\subsection*{Proof of Proposition \ref{second_der}}\label{second_dev_proof}
The proof is analogous to the proof of equation (9) in \cite{stats5030037}.
\subsection*{Proof of Lemma \ref{lemma:lipschitz_orig}}\label{lipschitz_orig_proof}
The proof is analogous to the proof of Lemma 3 in \cite{stats5030037}.
\subsection*{Proof of Lemma \ref{lemma:lipschitz_first}}\label{lipschitz_first_proof}
The proof is analogous to the proof of Lemma 4 in \cite{stats5030037}.
\subsection*{Proof of Lemma \ref{convexity}}\label{lipschitz_convexity_proof}
The proof is analogous to the proof of Lemma 1 in \cite{stats5030037}.
\subsection*{Proof of Lemma \ref{random:first}}\label{random_first_proof}
\begin{proof}
It is sufficient to observe that, by the submultiplicativity of the spectral norm,
\begin{equation}
\Vert(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')^{-1}\mf{\Delta}_{n}\Vert\leq\Vert(\mf{I}_p+\mf{\Delta}_{n}\mf{\Delta}_{n}')^{-1}\Vert
\Vert\mf{\Delta}_{n}\Vert,
\end{equation}
and that $\frac{1}{p^{\alpha}}\Vert\mf{\Delta}_{n}\Vert\xrightarrow{n\to\infty} 0$
by \eqref{Lemma_input}, under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails}.
\end{proof}
\subsection*{Proof of Lemma \ref{random:second}}\label{random_second_proof}
\begin{proof}
We start by equation \eqref{lips_top}, where we set $\mf{\Sigma}=\mf{\Sigma}^{*}$, $\epsilon=1$,
and $\mf{H}=\mf{\Delta}_{n}=\mf{\Sigma}_n-\mf{\Sigma}^{*}$. Then, we note that
$\frac{1}{p^{\alpha}}\Vert\mf{H}\Vert \xrightarrow{n\to\infty} 0$
by \eqref{Lemma_input} under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails},
such that $\frac{1}{2p^{\alpha}}\Vert F(\mf{\Delta}_n)-F(\mf{0})\Vert \xrightarrow{n\to\infty} 0$
under those assumptions, which means that
$$\frac{1}{2p^{\alpha}}\Vert\mathrm{Hess}\log\det\varphi(\mf{\Sigma}_n)-\mathrm{Hess}\log\det\varphi(\mf{\Sigma}^{*})\Vert
\xrightarrow{n\to\infty} 0.$$ Then, the thesis follows.
\end{proof}
\subsection*{Proof of Lemma \ref{random_conv}}\label{random_conv_proof}
\begin{proof}
In Lemma \ref{Lemma_cons},
the claim $\Vert \mf{\Delta}_n \Vert \leq C \frac{p^{\alpha}}{\sqrt{n}}$ holds for some $C>0$ with probability $1-O(1/n^2)$,
under Assumptions \ref{eigenvalues}, \ref{sparsity}, and \ref{tails} (see \eqref{Lemma_input}).
Solving the inequality $\frac{1}{3\delta p}\succeq\frac{p^{\alpha}}{\sqrt{n}}$,
which becomes $\frac{1}{9\delta^2 p^2}\succeq\frac{p^{2\alpha}}{n}$,
we can derive the condition $n\succeq p^{2\alpha+2}\delta^2$,
ensuring that $\Vert \mf{\Delta}_n \Vert \leq \frac{1}{3\delta p}$,
as prescribed by Corollary \ref{coroll:conv_delta}.
\end{proof}
\begin{Prop}\label{13}
Let $\gamma$ be in the range of Proposition \ref{11} and suppose that the minimum eigenvalue of $\mf{L}^{*}$ is such that $\lambda_r(\mf{L}^{*})>\delta_L \frac{\psi_{0}}{\xi^2(T)}$ and $\Vert \mf{S}^{*} \Vert_{\text{\tiny{min,off}}}>\delta_S\frac{\psi_{0}}{\mu(\Omega)}$ with $\delta_L$ and $\delta_S$ finite positive reals. Suppose also that
\begin{equation}
g_\gamma(\mathcal{A}^{\dag}\mf{\Delta}_n)\leq \frac{\psi_{0}\nu}{6(2-\nu)},\label{bound_g}
\end{equation}
with $\mf{\Delta}_n=\mf{\Sigma}_n-\mf{\Sigma}^{*}$. Then, under the conditions of Proposition \ref{12} and Assumption \ref{lowerbounds}, if $\delta_{1} \leq \frac{\alpha_r}{3}$,
there exists a unique $\widetilde{\mathcal{T}}$ satisfying Proposition \ref{11} when setting $\widetilde{\mathcal T}=\mathcal T'$ therein, and a corresponding unique solution pair $\left(\widehat{\mf{S}}_{\Omega},\widehat{\mf{L}}_{\widetilde{\mathcal T}}\right)$ of \eqref{probtang2},
such that:
\begin{enumerate}
\item $\varrho(\mathcal{T},\widetilde{\mathcal T})\leq \xi(\mathcal{T})/4$, $\text{\upshape{rk}}(\widehat{\mf{L}}_{\widetilde{\mathcal T}})=r$, $\mathrm{sgn}(\widehat{\mf{S}}_{\Omega,ij})=\mathrm{sgn}(\mf{S}^{*}_{ij})$ for all $i,j=1,\ldots, p$;
\item $g_\gamma(\mathcal{A}^{\dag}\mathcal{I}^{*}\mf{C}_{\widetilde{\mathcal T}})\leq \frac{\psi_{0}\nu}{6(2-\nu)}$,
and $\Vert \mf{C}_{\widetilde{\mathcal T}} \Vert_{2} \leq \frac{16(3-\nu)}{3\alpha(2-\nu)}\psi_{0}$,
with $\mf{C}_{\widetilde{\mathcal T}}=\mathcal{P}_{\widetilde{\mathcal T}^{\perp}}(\mf{L}^{*})$;
\item $\left(\widehat{\mf{S}}_{\Omega},\widehat{\mf{L}}_{\widetilde{\mathcal T}}\right)$ is also the unique solution of problem \eqref{obj};
\end{enumerate}
with high probability as $n\to\infty$.
\end{Prop}
\begin{proof}
First, we need to ensure that under Assumption \ref{alg},
Assumptions \ref{lowerbounds}(i) and \ref{eigenvalues}(i) are compatible, i.e. that
$$\lambda_r(\mf{L}^{*}) > \delta_L \frac{\psi_{0}}{\xi^2(T(\mf{L}^{*}))} \geq \delta_L \left(\frac{\sqrt{r}}{\kappa_L}\right)^3 \frac{p^{3\delta_{1}}}{\sqrt{n}}$$ under $\lambda_r(\mf{L}^{*})\simeq p^{\alpha_r}$,
which holds true if $\delta_1\leq\frac{\alpha_r}{3}$, because $n\to\infty$.
Assumptions \ref{lowerbounds}(ii) and \ref{sparsity}(i) are instead always compatible, as
$$0<\delta_S \frac{\psi_{0}}{\mu(\Omega(\mf{S}^{*}))} s'\leq\frac{\delta_2}{\xi(\mathcal{T}(\mf{L}^{*}))\mu(\Omega(\mf{S}^{*}))}\frac{p^{\delta_{1}}}{\sqrt{n}}
\leq 54\delta_2\frac{p^{\delta_{1}}}{\sqrt{n}}
<\Vert \mf{S}^{*}\Vert_{1,v}\leq \delta_{2}' p^{\delta_{1}},$$
that is always verified as $n\to\infty$.
Then, the proof is analogous to the proof of Proposition 5.3 in \cite{chandrasekaran2012}, by noticing that
$\lambda_r(\mf{L}^{*})>\delta_L \frac{\psi_{0}}{\xi^2(T)}$ and $\Vert \mf{S}^{*} \Vert_{\text{\tiny{min,off}}}>\delta_S\frac{\psi_{0}}{\mu(\Omega)}$
hold under Assumption \ref{lowerbounds}, and that Propositions \ref{11} and \ref{12} hold
under Assumptions \ref{eigenvalues}--\ref{tails}, with $\gamma$
in the range of Proposition \ref{11} and $\delta$ satisfying Lemma \ref{random_conv}.
\end{proof}
\subsection*{Proof of Theorem \ref{thm_main}}
Following \cite{chandrasekaran2012}, we need to derive the conditions that guarantee that
$\mathrm{rk}(\widehat{\mf{L}}_{\mathcal{T}'})=r$ and $\mathrm{sgn}(\widehat{\mf{S}}_{\Omega})=\mathrm{sgn}(\mf{S}^{*})$, and that
$(\widehat{\mf{L}}_{{\mathcal{T}'}},\widehat{\mf{S}}_{{\Omega}})$ (see problem \eqref{probtang}) is a \emph{global} solution.
Let us define the tangent space to $\mathcal L(r)$ in a generic $\widetilde{\mf{L}} \ne \mf{L}^{*}$:
\begin{eqnarray}
\widetilde{\mathcal T}(\widetilde{\mf{L}})=\{\mf{M}\in \R^{p \times p} \mid \mf{M}=\mf{U} \mf{Y}_{1}'+\mf{Y}_{2} \mf{U}',\ \mf{Y}_{1}, \mf{Y}_{2} \in \R^{p \times r},\nonumber\\
\mf{U}\in \R^{p\times r},\ \mf{U}' \mf{U}=\mf{I}_r,\ \mf{U}' \widetilde{\mf{L}}\mf{U} \in \R^{r \times r} \mbox{ diagonal},\ \widetilde{\mf{L}} \in \mathcal L(r)\}.\nonumber
\end{eqnarray}
Consider the solution pair
\begin{equation}\label{probtang2}
(\widehat{\mf{S}}_{{\Omega}},\widehat{\mf{L}}_{\widetilde{\mathcal{T}}})=\arg
\min_{\substack{\underline{\mf{L}} \in \widetilde{\mathcal T}\\ \underline{\mf{S}} \in \Omega}}\frac{1}{2p^{\alpha_{1}}}\Vert {\mf{\Sigma}}_n-(\underline{\mf{L}}+\underline {\mf{S}})\Vert_{F}^2
+\psi_{0} \Vert\underline{\mf{L}} \Vert_{*}+\rho_{0} \Vert\underline{\mf{S}} \Vert_{1}.
\end{equation}
We note that, in Proposition \ref{13}, part 1 ensures the identification of the latent rank and the residual sparsity pattern with high probability,
and bounds the identification error; part 2 bounds the contribution
of the orthogonal component to the overall error rate $\widetilde{r}$ (see Proposition \ref{11}),
and
part 3 is the key to prove the following conditions:
\begin{itemize}
\item $\Vert P_{\mathcal{T}'^{\perp}}(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}^{*}) \Vert < \psi_{0} $,
\item $\Vert P_{{\Omega}^{\perp}}(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}^{*}) \Vert < \psi_{0} \gamma$,
\end{itemize}
that, together with the two previously proved conditions (see the proof of Proposition \ref{12})
\begin{itemize}
\item $P_{\mathcal{T}'}(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}^{*})=-\psi_{0} \mf{U}_L\mf{U}_L'$,
\item $P_{{\Omega}}(\widehat{\mf{S}}_{\Omega}+\widehat{\mf{L}}_{\mathcal{T}'}-\mf{\Sigma}^{*})=-\psi_{0} \gamma \mathrm{sgn}(\mf{S}^{*})$,
\end{itemize}
ensure the global optimality of the solution pair $(\widehat{\mf{L}}_{{\mathcal{T}'}},\widehat{\mf{S}}_{{\Omega}})$
(see \cite{boyd2004convex}).
Moreover, recalling the definition of the $\varrho$ measure \eqref{varrho_def},
and specializing it to the context of Proposition \ref{13} part 1, we get
\begin{equation}\varrho(\mathcal{T},{\mathcal T'})=
\max_{\Vert \mf{N} \Vert_{2} \leq 1} \Vert \mathcal{P}_{\mathcal{T}}\mf{N}-\mathcal{P}_{{\mathcal T'}}\mf{N}\Vert_{2},\label{varrho_T}
\end{equation}
where $\mathcal{P}_{\mathcal T}$ and $\mathcal{P}_{{\mathcal T'}}$ are the projection operators onto $\mathcal T$ and ${\mathcal T'}$, respectively.
From \eqref{varrho_T} and $\varrho(\mathcal{T},{\mathcal T'})\leq \xi(\mathcal{T})/4$,
it follows that $\Vert \mathcal{P}_{\mathcal{T}}\mf{L}^{*}-\mathcal{P}_{{\mathcal T'}}\mf{L}^{*}\Vert_{2}=
\Vert \mf{L}^{*}-\mathcal{P}_{{\mathcal T'}}\mf{L}^{*}\Vert_{2}=\Vert\mathcal{P}_{{\mathcal T'}^{\perp}}\mf{L}^{*}\Vert_{2}=\Vert\mf{C}_{\widetilde{\mathcal T}}\Vert_2\leq\frac{\xi(T)}{4}$.
At this stage, from Assumption \ref{alg}, we can recall that $\xi(\mathcal{T}(\mf{L}^{*}))=\frac{\sqrt r}{\kappa_L p^{\delta_{1}}}$,
which means that, since $\delta_1>0$ by Assumption \ref{sparsity}, we get $\frac{\xi(T)}{4}=\frac{\sqrt r}{4\kappa_L p^{\delta_{1}}} \xrightarrow{p\to\infty} 0$. Therefore, as $p \to \infty$, the condition
$\Vert \mf{C}_{\widetilde{\mathcal T}} \Vert_{2} \leq \frac{16(3-\nu)}{3\alpha(2-\nu)}\psi_{0}$ is inactive.
It follows that, as $p \to \infty$,
the error bound $\widetilde{r}$ of Proposition \ref{11}
$$\widetilde{r}=\max\left\{\frac{4}{\alpha}[g_{\gamma}(\mathcal{A}^{\dag}\mf{\Delta}_n)
+g_{\gamma}(\mathcal{A}^{\dag}\mathcal{I}^{*}\mf{C}_{\mathcal{T}'})+\psi_{0}],\Vert \mf{C}_{\mathcal{T}'}\Vert_{2}\right\}$$
only depends on the first argument of the maximum, which in turn, if $\alpha=\beta=1$, $\delta=0$, $\nu=\frac{1}{2}$,
according to Proposition \ref{13} attains its maximum, equal to $\frac{40}{9}\psi_0$.
This occurs because \eqref{bound_g} is maximum at $\nu=\frac{1}{2}$.
Note that, in that case, the range for $\gamma$ in Proposition \ref{11}
collapses to $\gamma \in [9\xi(\mathcal{T}),\frac{1}{6\mu(\Omega)}]$,
and the identifiability condition becomes $\frac{\sqrt{r}\kappa_S}{\kappa_L}\leq\frac{1}{54}$.
Now, we note that under Assumptions \ref{eigenvalues}, \ref{sparsity}, \ref{tails},
with probability tending to one as $n \to \infty$, it holds:
\begin{align}
g_\gamma(\mathcal{A}^{\dag}\mf{\Delta}_n)&=g_\gamma(\mf{\Sigma}_n-\mf{\Sigma}^{*},\mf{\Sigma}_n-\mf{\Sigma}^{*})\nonumber\\
&\leq \max\left(
\frac{\Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{\infty}}{\gamma},
\frac{\Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{2}}{\Vert \mf{\Sigma}^{*} \Vert_{2}}
\right)\nonumber\\
&\leq \max\left(
\frac{\Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{\infty}}{\gamma},
\frac{\Vert \mf{\Sigma}_n-\mf{\Sigma}^{*} \Vert_{2}}{p^{\alpha_{1}}}
\right)\nonumber\\
&\leq C'\frac{1}{9\xi(\mathcal{T})}\sqrt{\frac{\log p}{n}}\nonumber.
\end{align}
This result descends from Lemma \ref{Lemma_bmax},
from the condition $\gamma \in [\frac{3\xi(\mathcal{T}(\mf{L}^{*}))(2-\nu)}{\nu\alpha},\frac{\nu\alpha}{2\mu(\Omega(\mf{S}^{*}))\beta(2-\nu)}]$
of Proposition \ref{11},
whose lower bound for $\gamma$, $\frac{3\xi(\mathcal{T}(\mf{L}^{*}))(2-\nu)}{\nu\alpha}$, is attained at $\alpha=1$ and $\nu=\frac{1}{2}$,
and from Lemma \ref{Lemma_cons}, under Assumptions \ref{eigenvalues}, \ref{sparsity} and \ref{tails}.
Since we have set
$\psi_{0}=\frac{1}{\xi(\mathcal{T})}\sqrt{\frac{\log p}{n}}$, the condition of Proposition \ref{13} can be written as
$g_\gamma(\mathcal{A}^{\dag}\mf{\Delta}_n) \leq \frac{\psi_{0}}{18}\leq\frac{p^{\delta_{1}} \kappa_L}{18\sqrt{r}}
\sqrt{\frac{\log p}{n}}$ by Assumption \ref{alg}.
Therefore, setting $C=\frac{\kappa_L}{18\sqrt{r}}$ and $C'=\frac{1}{2}$,
under Assumptions \ref{eigenvalues}-\ref{alg},
Proposition \ref{13} (parts 1, 3 and 4) ensures that
the solution $(\widehat{\mf{S}},\widehat{\mf{L}})$ of (\ref{obj}) satisfies
\begin{equation}
g_\gamma(\widehat{\mf{S}}-\mf{S}^{*},\widehat{\mf{L}}-\mf{L}^{*})\leq C\frac{80}{9}\psi_{0}\leq \kappa\, p^{\delta_{1}}\sqrt{\frac{\log p}{n}},
\end{equation}
where $\kappa=\frac{80}{9} \frac{\kappa_L}{18 \sqrt{r}}$.
Recalling the definition of $g_\gamma$ in \eqref{ggamma}, we can thus write
\begin{eqnarray}
\Vert\widehat{\mf{L}}-\mf{L}^{*}\Vert_{2}\leq Cp^{\alpha_{1}}\frac{80}{9}\psi_{0} \leq \kappa p^{\alpha_{1}+\delta_{1}}\sqrt{\frac{\log p}{n}},\nonumber\\
\Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{\infty}\leq C\frac{80}{9}\gamma \psi_{0} \leq \kappa \sqrt{\frac{\log p}{n}}.\nonumber
\end{eqnarray}
This proves parts 1 and 2 of Theorem \ref{thm_main}.
Finally, Proposition \ref{13} (parts 2 and 4) ensures that
\begin{eqnarray}
\mathcal{P}(\mathrm{rk}(\widehat{\mf{L}})=r)\to 1, \nonumber\\
\mathcal{P}(\mathrm{sgn}(\widehat{\mf{S}})=\mathrm{sgn}(\mf{S}^{*}))\to 1, \nonumber
\end{eqnarray}
as $n \to \infty$. This proves parts 3 and 4 of Theorem \ref{thm_main}.
\subsection*{Proof of Corollary \ref{coroll_main}}
\begin{proof}
Suppose that all the assumptions and conditions of Theorem \ref{thm_main} hold, and recall that the pair $(\widehat{\mf{S}},\widehat{\mf{L}})$
is the solution of (\ref{obj}).
Then, part 1 holds true because of Theorem \ref{thm_main} part 2 and Assumption \ref{sparsity}(i), as
\begin{eqnarray}
\Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{2} \leq \Vert \widehat{\mf{S}}-\mf{S}^{*}\Vert_{0,v} \Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{\infty}\nonumber\\
\leq \kappa \Vert \mf{S}^{*}\Vert_{0,v} \sqrt{\frac{\log p}{n}}
\leq \kappa \delta_2 {p^{\delta_{1}}} \sqrt{\frac{\log p}{n}},\nonumber
\end{eqnarray}
where we used the fact that $\mathcal{P}(\Vert \widehat{\mf{S}}-\mf{S}^{*}\Vert_{0,v} = \Vert\mf{S}^{*}\Vert_{0,v}) \to 1$
as $n \to \infty$.
Part 2 holds true under Assumptions \ref{eigenvalues}(i) and \ref{sparsity}(i) because
\begin{eqnarray}
\Vert\widehat{\mf{\Sigma}}-\mf{\Sigma}^{*}\Vert_{2}\leq \Vert\widehat{\mf{L}}-\mf{L}^{*}\Vert_{2}+\Vert\widehat{\mf{S}}-\mf{S}^{*}\Vert_{2}\nonumber\\
\leq \kappa p^{\alpha_{1}+\delta_{1}} \sqrt{\frac{\log p}{n}} + \kappa \delta_2 p^{\delta_1} \sqrt{\frac{\log p}{n}}.\nonumber
\end{eqnarray}
Then, Proposition \ref{13} (part 4) ensures part 3 of the Corollary, as $\widehat{\mf{S}}\succ 0$ because $\widehat{\mf{S}} \in \mathcal{S}(s)$ as $n \to \infty$.
Part 4 of the Corollary descends by Proposition \ref{13} (parts 2 and 4),
because $\mathcal{P}(\mathrm{rk}(\widehat{\mf{L}})=r) \to 1$ as $n \to \infty$ (part 3 of Theorem \ref{thm_main}),
and $\widehat{\mf{\Sigma}} \succ 0$ because $$\lambda_p(\widehat{\mf{\Sigma}})\geq\lambda_p(\widehat{\mf{L}})+\lambda_p(\widehat{\mf{S}})\geq 0+\lambda_p(\widehat{\mf{S}})>0,$$
by the dual Lidskii inequality and part 3 of the Corollary.
Part 5 of the Corollary holds because
$$\Vert \widehat{\mf{S}}^{-1}-\mf{S}^{*-1} \Vert \leq \Vert \widehat{\mf{S}}-\mf{S}^{*} \Vert
\frac{1}{\lambda_p(\mf{S}^{*})} \frac{1}{\lambda_p(\widehat{\mf{S}})},$$
$\lambda_p(\mf{S}^{*})=O(1)$ by assumption, and $\lambda_p(\widehat{\mf{S}})$ tends to $\lambda_p(\mf{S}^{*})$
as $n \to \infty$.
Analogously, part 6 holds because
$$\Vert \widehat{\mf{\Sigma}}^{-1}-\mf{\Sigma}^{*-1} \Vert \leq \Vert \widehat{\mf{\Sigma}}-\mf{\Sigma}^{*} \Vert \frac{1}{\lambda_p(\mf{\Sigma}^{*})}\frac{1}{\lambda_p(\widehat{\mf{\Sigma}})},$$
$\lambda_p(\mf{\Sigma}^{*})=O(1)$ by assumption, and $\lambda_p(\widehat{\mf{\Sigma}})$
tends to $\lambda_p(\mf{\Sigma}^{*})$ as $n \to \infty$.
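For completeness, both bounds rest on the elementary identity
$$\mf{A}^{-1}-\mf{B}^{-1}=\mf{A}^{-1}(\mf{B}-\mf{A})\mf{B}^{-1},$$
valid for any pair of invertible matrices, combined with $\Vert \mf{A}^{-1} \Vert = 1/\lambda_p(\mf{A})$ for $\mf{A}\succ 0$, applied to the pairs $(\widehat{\mf{S}},\mf{S}^{*})$ and $(\widehat{\mf{\Sigma}},\mf{\Sigma}^{*})$.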
\end{proof}
\end{appendix}
\section*{Acknowledgments}
We thank the participants in the conference ``Mathematical Methods of Modern Statistics 3'', held in Luminy (France) in June 2022,
where a preliminary version of this work was presented, for their encouragement and constructive comments.
\bibliographystyle{chicago}
\section{INTRODUCTION}
\label{sect-intro}
A complete census of both star-formation and nuclear activity over cosmic time is crucial to understanding
the assembly and evolution of galaxies and super-massive black holes (SMBHs), as well as the role of mergers
and secular processes in driving their growth. The similar evolution found for the comoving star-formation rate (SFR)
density and the comoving SMBH accretion rate, both peaking at z$\sim$2 \citep[e.g.][for a review] {2014ARA&A..52..415M},
and the tight correlations between galaxy properties and BH mass \citep[e.g.][]{1998AJ....115.2285M, 2000ApJ...539L..13G,
2002ApJ...578...90F, 2009ApJ...706..404G} suggest a synchronized evolution likely driven by related physical processes
\citep[e.g.][]{2005Natur.433..604D, 2007ApJ...659..976H, 2014MNRAS.441.1059V}. A key question is the role of Active
Galactic Nuclei (AGN) in such scenarios, with AGN outflows possibly being responsible for regulating or terminating the
star-formation \citep[][and references therein]{2014ARA&A..52..589H}.
Attempts to derive the star-formation and accretion histories through optical, near-IR (NIR) and X-ray surveys suffer
significant uncertainties because of the large and mostly unconstrained corrections for dust extinction and gas obscuration.
Even the deepest X-ray surveys may fail to detect the most heavily absorbed Compton thick AGN.
Many {\it Spitzer Space Telescope} and {\it Herschel Space Observatory} observations have been dedicated to detect the dust
emission in galaxies up to z$\sim$2 \citep[e.g.][]{2013A&A...549A..59D}. However, these studies are affected
by the poor angular resolution (from few arcsec to $\sim$ 30 arcsec) of IR telescopes which cannot resolve compact
structures at high redshift and are confusion limited.
In contrast, radio continuum imaging is a powerful dust and obscuration-free tool providing
unbiased measures of both star-formation and AGN activity up to high redshift, and, moreover, interferometry
techniques can reach sub-arcsec angular resolution, up to milli-arcsec scales through Very Long
Baseline Interferometry (VLBI).
Increasing observational evidence suggests that the sub-mJy radio source population
is a mixture of star-forming galaxies (SFGs), radiatively efficient AGN (RE-AGN) and radiatively inefficient
AGN (RI-AGN), with SFGs dominating at the lowest flux densities below $\sim$100\,$\mu$Jy
\citep[e.g.][]{1999MNRAS.304..199G, 2005MNRAS.358.1159M, 2006MNRAS.372..741S, 2007A&A...463..519B,
2008MNRAS.386.1695S, 2009ApJ...690..610S, 2013MNRAS.436.3759B, 2017arXiv170309719S}.
The unexpected detection in deep radio surveys of large numbers of
RE-AGN \citep[e.g.][]{2014ARA&A..52..589H}, emitting over a wide range of the electromagnetic spectrum,
from mid-IR (MIR) to X-rays, but typically radio-quiet, has opened the exciting prospect of studying
the whole AGN population through deep radio surveys.
The nature of the radio emission in RE-AGN is currently hotly debated. Several works suggest that
star-formation related processes can, at least partly, produce the observed radio emission in RE-AGN
\citep[e.g.][]{2011ApJ...739L..29K, 2011ApJ...740...20P, 2012ApJ...758...23C, 2013MNRAS.436.3759B}.
Others point towards composite star-formation and AGN emission \citep[e.g.][]{2015MNRAS.448.2665W}.
The presence of embedded AGN cores has been demonstrated in some systems through VLBI observations
\citep{2016A&A...589L...2H, 2016A&A...589L...3M}.
Assessing the faint AGN component in deep radio fields will provide an important tool to understand the
role of nuclear activity in distant galaxies, the nature and accretion regime of RI- and RE-AGN, and their
possible co-evolution with star-formation processes. The most direct way to identify faint AGN-driven radio
emission is the detection of embedded radio cores in the host galaxies, through
ultra-deep and high resolution radio observations, supported by multi-wavelength observations, crucial
to understand the physical properties and nature of the radio sources and their hosts.
This context motivates the eMERGE survey \citep[e-MERLIN Galaxy Evolution survey;][]{2008evn..confE..22M},
the largest e-MERLIN (enhanced Multi-Element Remote-Linked Interferometer Network) legacy project, whose goal
is to obtain a resolved view of the radio source population up to high redshift in the Great Observatories
Origins Deep Surveys-North \citep[GOODS-N; ][]{2004ApJ...600L..93G}.
GOODS-N was observed previously at 1.4\, GHz \citep{2000ApJ...533..611R, 2005MNRAS.358.1159M, 2010ApJS..188..178M}
and 8.5\,GHz \citep{1998AJ....116.1039R} with the Karl G. Jansky Very Large Array (VLA).
Preliminary observations at 5.5\,GHz were obtained as part of the e-MERLIN commissioning \citep{2013MNRAS.432.2798G}.
eMERGE is based on the combination of ultra-deep e-MERLIN and VLA observations at 1.4 and
5.5\,GHz. When completed, it will provide sub-$\mu$Jy sensitivity on 0.05-2\,arcsec angular scales, corresponding
to sub-kpc up to tens of kpc linear scales at redshift $z>1$, and will allow the separation of compact AGN-related
emission from more extended, lower surface brightness star-forming disks.
In this paper, we present the first 5.5\,GHz deep image and catalogue of the GOODS-N field based on VLA
observations with sub-arcsec resolution, taken as part of the eMERGE legacy project. This first set of observations
is used to make an exploratory analysis of the $\mu$Jy radio source population, as observed at sub-arcsec resolution,
with a particular focus on the AGN population. Near-IR identifications were obtained from available ultra-deep
$K_s$-band catalogues \citep{2010ApJS..187..251W, 2011PASJ...63S.379K, 2014ApJS..214...24S}
and different diagnostics were used to separate different classes of AGN from SFGs.
A preliminary report of this work has been presented by \citet{2015fers.confE..23G}.
In a forthcoming paper (hereafter referred to as Paper II) we will extend this analysis to the radio spectral
index properties of a larger sample of sources selected at 1.4\,GHz, that will be used, in combination with
the wealth of broad-band information available in the GOODS-N field, to further characterize the properties
of different types of AGN and the population of SFGs.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{pointings2.ps}
\caption[]{Mosaic pattern of the 7 pointing centres (crosses) at 5.5\,GHz.
The dotted and inner full circles show the full width half power primary beam of the VLA ($\simeq$7.5\,arcmin)
and of the Lovell telescope ($\sim$2.5\,arcmin), respectively,
at this frequency. The outer full circle contains the area covered by our 5.5\,GHz catalogue.
The plot clearly illustrates that the region observed by the VLA alone
is oversampled to ease future combination with e-MERLIN observations
including the Lovell telescope.
}
\label{pattern}
\end{figure}
The paper is organised as follows. \S 2 describes the observations and the data reduction, while we present
the catalogue extraction in \S 3. In \S 4 we provide the
results of the polarisation analysis. NIR identifications and redshift information of the
radio sources are presented in \S 5. In \S 6 we identify AGN in our radio-selected sample using different diagnostics:
various IR colour-colour plots, X-ray luminosity, radio-excess and VLBI detection. A discussion of the results is
presented in \S 7, while conclusions and future perspectives are given in \S 8.
Throughout this paper we adopt a concordance cosmology with Hubble constant $H_{0} =70\,{\rm km\,s^{-1}\,Mpc^{-1}}$,
$\Omega_{\Lambda} = 0.7$ and $\Omega_{M}= 0.3$. All magnitudes quoted in this paper are in the AB system, unless
otherwise stated, where an AB magnitude is defined as AB $=23.9 - 2.5\log(S{\rm [\mu Jy]})$.
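For orientation, with this convention a flux density of $S=1\,\mu$Jy corresponds to AB $=23.9$, and the $K_s$-band depth of 24.45\,mag of the deepest NIR catalogue used in \S\,\ref{id} corresponds to $S\simeq 0.6\,\mu$Jy.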
\section{VLA OBSERVATIONS AND DATA REDUCTION}
\label{sect-obs}
\subsection{Observations}
We obtained new VLA observations of the GOODS-N field at a central frequency of 5.5\,GHz with a 2\,GHz
bandwidth in A- and B-configurations. The VLA A-configuration observations were taken over four nights in October 2012
(2012 October 6, 7, 8 and 20, project code 12B-181), for a total observing time of 14\,hrs. The B-configuration data
were taken in one night (2013 September 27, project code 13B-152) for a total time of 2.5\,hrs.
The observations consist of a mosaic of 7 pointings (Fig.\,\ref{pattern}),
centred at $\alpha = 12^{\rm h}36^{\rm m}49^{\rm s}.4$,
$\delta = +62^{\circ}12^{\prime}58^{\prime\prime}$ (J2000).
The pointing centres are separated by $\sim $2\,arcmin to facilitate
combination with future 5.5\,GHz e-MERLIN observations including the 76-m Lovell telescope,
which has a smaller primary beam (2.5\,arcmin) than the VLA antennas (7.5\,arcmin FWHM at 5.5\,GHz, see Fig.\,\ref{pattern}).
For the VLA alone this mosaic pattern is oversampled, and provides ultra-deep sensitivity over the central region.
Each pointing was observed for a total of $\sim$80\,min in A-configuration and 12\,min in
B-configuration.
The nearby unresolved phase calibrator J124129+6020 was monitored for 40\,seconds every 10\,minutes to provide
accurate phase and amplitude calibration. The flux density and bandpass calibrators, 3C\,286 and J1407+2827
(OQ208), were observed once per night.
The data were recorded every 1\,s in spectral line mode using 16 adjacent intermediate frequency bands (IFs),
each comprising $64\times2$\,MHz channels, for a total bandwidth of 2048\,MHz. Both circular polarisations were recorded.
This bandwidth synthesis mode minimises chromatic aberration (bandwidth smearing) and reduces the
effects of narrow-band radio frequency interference (RFI) as individual narrow channels can be flagged and rejected
from the data.
\subsection{Editing \& Calibration}
\label{sect-cal}
The data were calibrated and edited using the {\sc aips} software package,
developed by the National Radio Astronomy Observatory\footnote{The National Radio
Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by
Associated Universities, Inc.}, following standard procedures (as briefly outlined below).
The five observing sessions (four in A-configuration and one in B-configuration) were edited
and calibrated separately.
To set the interferometer group delay correctly we used the task {\sc fring} on the calibrator J1407+2827,
selecting a time range of 1\,min. After a first run of automatic flagging done through {\sc rflag}, we
performed a first calibration of the bandpass and of the flux density scale using the source 3C286.
Amplitude calibration was based on the VLA standard spectrum of 3C286, bootstrapped to determine the spectrum
of J1407+28279 and J124129+6020. Specifically, the frequent observations of J124129+6020 (once every 10 minutes)
were used to calibrate the amplitudes and phases of the target fields. We then examined the visibilities of all
the calibrators and performed further editing of residual RFI using the tasks {\sc spflg} and {\sc uvflg}.
A new calibration table was then derived using only the data that had passed the editing process.
In total, about 15 percent of the {\it uv}-data were discarded in the editing process.
We obtained a mean flux density (averaged over the five days) of
2.271$\pm$0.032\,Jy for J1407+2827 and 0.298$\pm$0.003\,Jy for J124129+6020 (where the uncertainty is the standard
error $\sigma/\sqrt{N}$), at the central observing frequency of 5.5\,GHz. We estimate calibration errors of
$\sim$1-1.5 percent for the flux density measurements.
We also performed the polarisation calibration using the sources 3C286 and J1407+2827 (the latter is known to be
unpolarised) as polarisation angle and instrumental polarisation calibrators, respectively. J1407+2827 was then imaged
in total intensity and polarisation to check for residual instrumental
polarisation. We derived a fractional polarisation, averaged over the five days,
of 0.2 percent, that we take as the level of residual instrumental polarisation
in our data.
Finally, the edited and calibrated {\it uv}-data from all five
observing sessions were combined for imaging using the task {\sc dbcon} with parameter {\tt REWEIGHT=1}.
\subsection{Imaging and mosaicing}
\label{sect-imaging}
Imaging wide fields using data sets with a large fractional bandwidth
($\Delta\nu/\nu_c$) is a challenging task given that the field of view, the primary beam correction,
the synthesized beam or point-spread function, and the flux densities of the sources all vary significantly with frequency.
Two different approaches can be followed:
\begin{itemize}
\item{} Split the {\it uv}-data into sub-bands having
$\Delta\nu/\nu_c \ll 1$ (namely the sixteen IFs, each with a bandwidth of 128 MHz), and image each
sub-band separately at a common resolution (tapering and/or changing the weight function with increasing frequency).
At each frequency, the mosaic, resulting from the combination of the seven pointings, is derived by applying
the primary beam correction appropriate to the central frequency of each sub-band. Finally, the mosaics
produced from each sub-band can be recombined with appropriate weights to obtain a sensitive wide-band
mosaic \citep[e.g.][]{2012ApJ...758...23C}.
\item{} Imaging the entire bandwidth simultaneously using the multi-scale multi-frequency (MSMF) synthesis
clean algorithm (available in the {\sc casa} package) with nterm $>1$, which takes into account frequency-dependent
variations over the observing band \citep[e.g.][]{2011A&A...532A..71R, 2014arXiv1403.5242R}.
The resulting images for each pointing are then corrected for the primary beam using the {\sc casa} task
{\sc widebandpbcor}. Finally, the mosaic covering the whole field is obtained by a weighted combination of the seven pointings.
\end{itemize}
We have tested and compared both approaches, and selected the MSMF synthesis clean for the following reasons.
While both methods produce highly comparable images in terms of noise properties and image fidelity, the first method
has, in our opinion, two main disadvantages. Firstly, the images produced by the different IFs have to be restored
to the lowest common resolution. This means that we lose resolution in our images, but also that we need to
fine-tune the data and/or change the weighting function used in the gridding process as the image
frequency increases. Secondly, the cleaning threshold is usually set to some
multiple of the expected noise (usually in the range 3 to 5). Using the same
criterion for the cleaning in both methods described above means that the
individual IF images will have, on average, a noise four times larger than the
image produced with the MSMF clean (the noise scales as $1/\sqrt{N}$, so combining the 16 IFs reduces it by $\sqrt{16}=4$).
Therefore a large fraction of the sources detected in the sensitive wide-band
mosaic obtained by recombining the 16 individual IF mosaics will be sources that were not cleaned, or
that were cleaned in some images (some IFs) and not in
others. This may affect the source properties by introducing subtle undesirable effects on the final mosaic.
We imported the combined data-sets (one for each pointing, including all the
A- plus B-configuration data) into {\sc casa} (task {\sc importuvfits}) and ran the
task {\sc clean} with two Taylor terms in the frequency expansion ({\tt mode=mfs, nterms=2}). Wide-field mode was enabled
using {\tt gridmode=widefield, wprojplanes=128, facets=1}, along with three resolution scales.
For each pointing a map of $8192\times8192$ pixels was produced with a pixel
size of $0.1\times 0.1$ arcsec$^2$.
The fields were cleaned down to about 4 times the expected r.m.s. noise of
each pointing. The final restoring beam was set to $0.56\times 0.47$ arcsec$^2$
with a position angle of $88^\circ$. After the deconvolution, wide-band primary beam correction was applied
using the {\sc casa} task {\sc widebandpbcor}, with a primary beam response cut-off of 0.15 (15 per cent) of the peak.
To construct the mosaic, the images of the seven pointings were transferred
back into {\sc aips}. We re-gridded the images using the task {\sc hgeom} to a
common centre, and produced a noise image for each pointing using the task {\sc rmsd}.
The re-gridded images were combined, using the noise images as weights, with the task {\sc wtsum}.
Finally, a noise image of the mosaic was generated.
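Schematically, if the weights are taken as the inverse variance of the local noise (the natural choice when noise images are used as weights), the combined mosaic and its noise image follow
\begin{displaymath}
I_{\rm mos}(x)=\frac{\sum_{i} I_{i}(x)/\sigma_{i}^{2}(x)}{\sum_{i} 1/\sigma_{i}^{2}(x)}, \qquad
\sigma_{\rm mos}(x)=\left[\sum_{i} 1/\sigma_{i}^{2}(x)\right]^{-1/2},
\end{displaymath}
where the sums run over the pointings contributing at position $x$.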
The r.m.s. noise is $\simeq 1$ $\mu$Jy in the inner regions, and increases with
distance from the centre of the mosaic.
The visibility function (Fig.\,\ref{visib}), calculated over the region used to extract the source catalogue
(see \S\,\ref{catalogue}), shows that about 50 percent of the mapped area is characterized by an r.m.s. noise
lower than 3\,$\mu$Jy, and that the noise remains $\leq10\,\mu$Jy across the whole field: this makes our survey the most sensitive
yet at 5.5\,GHz.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{vis_flussi.ps}
\caption[]{Top: Visibility function of the 5.5\,GHz mosaic (area versus r.m.s. sensitivity) calculated over the region used
to derive the source catalogue (see text). Bottom: Observed peak (S$_{\rm p}$) and total flux density (S$_{\rm t}$)
distributions of the 5.5\,GHz sources.
The dashed red line indicates the peak flux density histogram after correcting for the visibility function.
A preliminary version of these plots was shown in \citet{2015fers.confE..23G}.}
\label{visib}
\end{figure}
\section{Source catalogue}
\label{catalogue}
\begin{center}
\begin{table*}
\caption{The catalogue of sources detected above $5\sigma$ at 5.5 GHz. We include here only the first ten sources in R.A. order,
the full version of the catalogue is available as online-only material.
Column 1 gives the name of the source, columns 2 to 5 list the right ascension, declination and the respective errors,
column 6 is the signal-to-noise ratio, columns 7 \& 8 are the peak brightness and the total flux density with the respective
errors.
Finally, column 9 lists the deconvolved FWHM sizes (major axis and minor axis) in arcsec, and the position angle in degrees
(a value of -99 in this column indicates that the source is made of multiple components).
For the radio sources classified as unresolved, the total flux is set equal to the peak brightness and the
deconvolved sizes are set to zero.}
\label{tab_sample}
\begin{tabular}{lccccccccc}
\hline
Source Name & R.A. & Dec &$\sigma_\alpha$& $\sigma_\delta$ & SNR & S$_p$ & S$_T$ & Size \& PA \\
& (J2000) & (J2000) & (arcsec) & (arcsec) & & ($\mu$Jy beam$^{-1}$) & ($\mu$Jy) & (arcsec$\times$arcsec, deg) \\
\hline
J123557+621536 &12 35 57.94 & 62 15 36.83 & 0.19 &0.18 & 7.1 & $52.7\pm 7.5$ & $82.4\pm 11.7$ & $0.74\times 0.25$ 118 \\
J123601+621126 &12 36 01.80 & 62 11 26.41 & 0.20 &0.20 & 5.6 & $28.7\pm 5.2$ & $51.3\pm 9.3$ & $0.83\times 0.37$ ~47 \\
J123603+621110 &12 36 03.25 & 62 11 10.97 & 0.19 &0.19 & 7.0 & $30.4\pm 4.3$ & $54.9\pm 7.8$ & $0.91\times 0.25$ ~55 \\
J123606+620951 &12 36 06.61 & 62 09 51.13 & 0.18 &0.18 & 11.0 & $54.4\pm 5.0$ & $54.4\pm 4.9$ & $0.00\times 0.00$ ~~0 \\
J123606+621021 &12 36 06.83 & 62 10 21.44 & 0.20 &0.20 & 5.6 & $25.7\pm 4.6$ & $43.0\pm 7.7$ & $0.91\times 0.00$ ~53 \\
J123608+621035 &12 36 08.12 & 62 10 35.89 & 0.17 &0.17 & 32.9 & $129.9\pm 4.2$ & $129.9\pm 4.2$ & $0.00\times 0.00$ ~~0 \\
J123609+621422 &12 36 09.71 & 62 14 22.16 & 0.20 &0.20 & 5.1 & $16.7\pm 3.3$ & $16.7\pm 4.5$ & $0.00\times 0.00$ ~~0 \\
J123617+621011 &12 36 17.08 & 62 10 11.32 & 0.18 &0.18 & 12.9 & $40.3\pm 3.2$ & $40.3\pm 3.4$ & $0.00\times 0.00$ ~~0 \\
J123617+621540 &12 36 17.55 & 62 15 40.76 & 0.17 &0.17 & 39.7 & $122.6\pm 3.3$ & $122.6\pm 3.2$ & $0.00\times 0.00$ ~~0 \\
J123618+621550 &12 36 18.33 & 62 15 50.58 & 0.18 &0.18 & 14.6 & $45.1\pm 3.1$ & $61.4\pm 4.2$ & $0.46\times 0.42$ ~32 \\
\hline
\hline
\end{tabular}
\end{table*}
\end{center}
To identify a sample of sources above a given local signal-to-noise ratio (SNR)
threshold in the 5.5\,GHz mosaic, we followed the same approach already successfully tested by other
radio surveys \citep[e.g.][]{2003A&A...403..857B}.
We employed the {\sc aips} task {\sc sad} on both the mosaic image and the noise image to obtain a catalogue of
candidate sources above the threshold of $4.5\sigma$. We limited the source extraction and the following analysis
to a circular region of radius 7 arcmin around the mosaic centre. For each selected source, {\sc sad} estimates
the peak and total fluxes, and the position and size using a Gaussian fit. Since the Gaussian fit may provide
unreliable and biased results for low SNR sources, a better estimate of the peak brightness and position was obtained
with a simple cubic interpolation around the fitted position using {\sc maxfit} in {\sc aips}. Only the sources for which
the ratio of peak brightness and the local noise was $\ge 5$ (i.e. those with SNR$\ge$5) were included in the final
catalogue.
We found a total of 100 components that were visually checked to identify possible multiple
components of a single radio source. Eight components were converted into 2 single radio sources.
In these cases, we ``collapsed'' all the multiple components in the catalogue to a single source entry at the position
of the brightest component, and we derived the total flux density by integrating the brightness distribution
over the area occupied by the source.
For all the remaining sources
we assessed the reliability of the total flux densities derived by {\sc sad} using simulated sources
added to the {\it uv}-dataset of the central pointing. The dataset, including the mock sources, was imaged and primary
beam corrected in the same way as the original dataset. Forty mock sources, with total flux densities in the range 20-200\,$\mu$Jy, were
inserted for each run of the simulation in a region within a radius of 90\,arcsec from the pointing centre.
Half of the inserted sources were point-like, while the remaining half had intrinsic sizes between 0.2 and 0.8\,arcsec.
This procedure was repeated five times, yielding a sample of 200 mock sources.
The total flux density recovered by {\sc sad} (1-component Gaussian fit) for each mock source was compared to the
intrinsic one injected in the dataset. We found that the total flux densities derived with this procedure are
systematically higher (on average by 15-20 percent) than the true values: the median, mean and standard deviation of the
ratio between the measured and injected total flux using a simple 1-component Gaussian fit are 1.15, 1.20 and 0.25, respectively.
Therefore, we decided to manually fit each of our mock sources with a 2-component fit, including a Gaussian component
and a zero-level baseline contribution.
The total flux densities obtained from these fits are in much better agreement with the true, injected values:
the median, mean and standard deviation of the ratio between the measured and injected total flux densities
using the 2-component (Gaussian + baseline) fit are 0.98, 1.02 and 0.15, respectively.
Summarising, for each single component source the peak brightness is measured with {\sc maxfit}, and the total
flux density and sizes with the 2-component Gaussian fit.
\begin{figure}
\centering
\includegraphics[width=8cm]{stsp.ps}
\caption[]{Ratio between S$_{\rm t} $ and S$_{\rm p}$ as a function of the local SNR.
Sources below the dashed line (red points) are considered unresolved,
while those above (blue points) are considered resolved (see text for details).
The horizontal solid line is drawn at $S_{\rm t} / S_{\rm p} =1$.
The lower solid-line envelope contains 90 percent of the sources with $S_{\rm t} / S_{\rm p} <1$.
}
\label{flux_dist}
\end{figure}
The final catalogue contains 94 sources with SNR $\ge 5$. Table\,\ref{tab_sample} lists the source name, position in R.A.
and Dec. with errors, signal-to-noise ratio, peak flux and total flux density with errors, and deconvolved sizes.
Sources that were classified as unresolved using the distribution of the total to peak flux ratio (see \S\,\ref{resolved})
have their total flux set equal to the peak flux and deconvolved sizes set to zero. The full version of
Table\,\ref{tab_sample} is available as online-only material. Since the formal relative errors determined by
Gaussian fits are generally smaller than the true uncertainties of the source parameters, we used the
\citet{1997PASP..109..166C} error propagation equations to estimate the true errors on fluxes and positions
\citep[e.g.][]{2000A&AS..146...41P, 2003A&A...403..857B}.
The contour plots of the 94 radio sources are shown in the Appendix (online material only).
The peak brightness ($S_p$) and total flux density ($S_t$) distributions for the 94 sources in our sample are shown in
Fig.\,\ref{visib}, together with the expected peak flux density distribution corrected for local r.m.s. variations
using the visibility function.
To test the reliability of the lower SNR sources,
we quantified the number of possible spurious detections
due to random noise fluctuations,
associated with negative brightness peaks, in the following way.
Assuming that negative and positive noise spikes
have a similar distribution in the 5.5\,GHz mosaic image,
we ran the task {\sc sad} on the negative mosaic map
(i.e. the map multiplied by $-1$), with the same input parameters
used to extract the source catalogue.
We found 7 components with $\rm{SNR}\ge 5$
within the extraction area of the catalogue (7\,arcmin radius),
all in the range $5\le \rm{SNR} < 5.5$.
The radio catalogue contains 19 sources with $\rm{SNR}< 5.5$;
as shown by this analysis, these lower SNR sources
may be significantly contaminated by
false detections (7/19, $\sim$40 percent).
Since the following analysis is based on sources
with NIR identifications, we are confident that the fraction of spurious
radio sources is negligible, even at the lowest SNR values.
\subsection{Unresolved and extended sources}
\label{resolved}
Our source classification as unresolved or extended is based on the ratio between the total and
peak flux densities of the sources \citep[e.g.][]{2000A&AS..146...41P}:
\begin{equation}\label{eq-dec1}
S_{\rm t}/S_{\rm p}=(\theta_{\rm min} \; \theta_{\rm maj})
/(b_{\rm min} \; b_{\rm maj})
\end{equation}
where $\theta_{\rm min}$ and $\theta_{\rm maj}$ are the fitted source FWHM axes and
$b_{\rm min}$ and $b_{\rm maj}$ are the synthesized beam FWHM axes.
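As an illustration of equation\,(\ref{eq-dec1}): with our $0.56\times 0.47$ arcsec$^2$ restoring beam, a hypothetical source fitted with FWHM axes of $0.8\times 0.6$ arcsec$^2$ would have $S_{\rm t}/S_{\rm p}=(0.8\times 0.6)/(0.56\times 0.47)\simeq 1.8$.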
The flux density ratio distribution for all the 94 sources is shown
in Fig.~\ref{flux_dist}. As expected, at low SNRs we have sources with
S$_{\rm t}$/S$_{\rm p}< 1$: this is due to the statistical errors which affect
the source size estimates and, in turn, the flux densities.
To distinguish between unresolved and extended sources,
we derived the lower envelope of the flux ratio
distribution in Fig.\,\ref{flux_dist}
by fitting a curve above which lie at least
90 percent (to discard possible outliers) of the sources with S$_{\rm t}$/S$_{\rm p}< 1$, and then
mirrored it above the S$_{\rm t}$/S$_{\rm p}$=1 value.
The sources located above the upper envelope are considered extended,
while those below it are considered unresolved.
We stress that the total flux density of the sources is calculated
through a 2-component fit (source $+$ background), which has proved to
be more reliable in measuring the total flux (see \S\,\ref{catalogue}).
The upper curve can be described by the equation:
\begin{equation}\label{eq-dec2}
S_{\rm t} /S_{\rm p} = 1 +
\left[ \frac{ a }{ (S_{\rm p}/\sigma)^{\beta}}\right]
\end{equation}
where $a=6$ and $\beta=1.5$.
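For illustration, equation\,(\ref{eq-dec2}) sets the dividing line at $S_{\rm t}/S_{\rm p}\simeq 1+6/5^{1.5}\simeq 1.5$ for a source at the SNR $=5$ detection limit, decreasing to $\simeq 1.04$ at SNR $=30$: near the detection limit, only sources whose total flux density exceeds the peak brightness by more than $\sim 50$ percent are classified as resolved.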
With this criterion, 56 (38) sources are considered resolved (unresolved).
We considered as reliable only the deconvolved angular sizes of the resolved sources,
while those of unresolved sources are set to zero in the catalogue
(see Table\,\ref{tab_sample}).
\subsection{Radio sources detected at 5.5\,GHz without a 1.4\,GHz counterpart}
The deepest observations at 1.4\,GHz of the GOODS-N field, published by \citet{2010ApJS..188..178M},
have a circular beam size of $\sim 1.7$\,arcsec and r.m.s noise level of $\sim 3.9$ $\mu$Jy\,beam$^{-1}$.
The 1.4\,GHz catalogue lists more than 1200 radio sources above a $5\sigma$ threshold of $\sim 20$
$\mu$Jy\,beam$^{-1}$ at the field centre, within a region of $40\times 40$ arcmin$^2$.
A significant fraction of the sources detected at 5.5\,GHz (17/94, 18 percent) have no
counterpart in the 1.4\,GHz catalogue.
About half (9/17) of the sources without a 1.4\,GHz counterpart have SNR$>5.5$ or have a NIR
identification (see \S\,\ref{id}) and these are the most reliable sources.
These sources have upper limits on the
spectral index ranging from $\alpha < 0.66$ to $\alpha < -0.5$ (adopting the spectral index definition
$S\propto \nu^{-\alpha}$), and will be discussed in more detail in Paper II (Guidetti et al. in preparation).
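For reference, with the adopted convention $S\propto\nu^{-\alpha}$ the limit for a source undetected at 1.4\,GHz follows from
\begin{displaymath}
\alpha < \frac{\ln\left(S_{1.4}^{\rm lim}/S_{5.5}\right)}{\ln\left(5.5/1.4\right)}\,;
\end{displaymath}
for instance, a hypothetical source with $S_{5.5}=40\,\mu$Jy and a local $5\sigma$ limit of $S_{1.4}^{\rm lim}=20\,\mu$Jy\,beam$^{-1}$ would have $\alpha < -0.5$.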
The number of the remaining sources (8/17) is consistent with the number of the expected spurious sources
derived in \S\,\ref{catalogue}.
\section{Polarisation properties}
The mosaics of the Stokes parameters Q and U were imaged and assembled using the same
method applied to derive the total intensity mosaic. Noise images were also
derived.
The Stokes Q and U mosaics were combined to derive the polarised intensity
mosaic using the task {\sc comb} in {\sc aips}. The noise images were used to
clip signals below a threshold of $3\sigma$ in the polarised intensity image.
We then searched for polarised emission at the
positions of the sources in our sample.
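For completeness, the polarised intensity is formed pixel-by-pixel as $P=\sqrt{Q^{2}+U^{2}}$; since this quantity carries a positive (Ricean) noise bias at low SNR, the $3\sigma$ clipping described above acts as a conservative safeguard against spurious polarised signal.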
We detected polarised emission in only eight sources. These are
listed in Table\,\ref{tab_pol}.
For each source we give the peak in polarised emission and its SNR, the total
polarised flux and the average fractional polarisation (calculated as the
total polarised flux divided by the total flux density of the source from Table\,\ref{tab_sample}).
Only two sources in Table\,\ref{tab_pol} show extended polarised emission.
In particular J123726+621128 has a wide-angle tail (WAT) morphology with polarisation detected in the
twin jets and in both
lobes as shown in Fig.\,\ref{fig-pol}.
The second extended polarised source, J123644+621133, is
the other galaxy in this field showing the classical FRI structure
(core+jets).
We estimated the bandwidth effects on the polarised emission of the sources,
assuming a rotation measure (RM) in the range 10-100\,rad\,m$^{-2}$.
These are plausible values for the integrated RM of GOODS-N sources:
typical intrinsic RM values for extragalactic radio sources range from a few rad\,m$^{-2}$
in the poorest environments, through intermediate values of 30-100\,rad\,m$^{-2}$, up to thousands
of rad\,m$^{-2}$ in the centres of cool core clusters \citep{2011MNRAS.413.2525G, 2012MNRAS.423.1335G}.
The high Galactic latitude of the GOODS-N field ensures
a small contribution from the Galactic foreground.
In the worst case (i.e.
at the lowest frequency of our observations, 4.5\,GHz), the average rotation
across the bandwidth is $\sim$10\,degrees,
resulting in a fractional depolarisation of 0.017, which is negligible compared to
the errors due to noise.
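Quantitatively, the rotation of the polarisation angle between two wavelengths is $\Delta\chi={\rm RM}\,(\lambda_{1}^{2}-\lambda_{2}^{2})$: across our full 4.5--6.5\,GHz band ($\lambda^{2}\simeq 4.4\times10^{-3}$ and $2.1\times10^{-3}$\,m$^{2}$, respectively), an RM at the top of the assumed range, 100\,rad\,m$^{-2}$, gives $\Delta\chi\simeq 0.23$\,rad $\simeq 13$\,degrees, of the same order as the average rotation quoted above.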
\begin{table}
\caption{Polarised sources. Col.\,1: Source name.
Col.\,2: Peak of polarised emission. Col.\,3: SNR of the polarised emission. Col.\,4:
Total polarised emission. Col.\,5: Average fractional polarisation.
}
\begin{center}
\begin{tabular}{lcccc}
\hline
Source Name & P & SNR$_P$ & P$_{\rm tot}$ & Pol. \\
&($\mu$Jy)& &($\mu$Jy) & (\%) \\
\hline
J123623+621642 & 10.4 & 3.3 & 10.4 & 4.9 \\
J123642+621545 & 6.0 & 3.3 & 6.0 & 10.8 \\
J123644+621133 & 20.8 & 12.6 & 39.9 & 4.8 \\
J123646+621629 & 10.9 & 5.0 & 10.9 & 7.5 \\
J123700+620909 & 7.3 & 3.1 & 7.3 & 7.2 \\
J123714+620823 & 19.5 & 4.8 & 19.5 & 1.0 \\
J123721+621129 & 10.1 & 4.2 & 10.1 & 2.6 \\
J123726+621128 & 28.4 & 9.4 & 86.8 & 8.2 \\
\hline
\end{tabular}
\end{center}
\label{tab_pol}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=14cm]{3726+1128-fpol.ps}
\caption[]{Polarisation vectors with directions along the apparent electric field and lengths proportional
to the degree of polarisation at 5.5\,GHz, superimposed on the radio contours of total intensity
across J123726+621128 at the same frequency. The angular resolution is $\simeq 0.5$\,arcsec FWHM.
}
\label{fig-pol}
\end{figure*}
\citet{2014ApJ...785...45R} published the results of a polarisation
survey of the GOODS-N field at 1.5\,GHz. The observations had
a detection threshold of 14.5\,$\mu$Jy in polarised emission and cover a
much larger area than that covered by our 5.5\,GHz data.
They detected 14 radio sources; two are in
the field of view covered by our observations: J123644.3+621132 and
J123725.9+621128 (incorrectly listed as J1237744.1+621128 in Table 1 of
\citealt{2014ApJ...785...45R}), and both show polarised emission at 5.5\,GHz.
These are the two extended sources discussed above, and the sources with the strongest polarised emission at 5.5\,GHz.
Taking into account that the synthesized beam of the 1.5\,GHz observations is
$1.6\times 1.6$ arcsec$^2$ (compared to $0.56\times 0.47$ arcsec$^2$ of our 5.5\,GHz observations),
the amount of polarised flux detected at the two frequencies is consistent.
Of the remaining sources we detect at 5.5\,GHz, only J123714+620823
could have enough polarised flux to be detected at 1.5\,GHz.
\section{Near-Infrared identifications}
\label{id}
We searched for counterparts of the 5.5\,GHz sources in the ultra-deep $K_s$-band catalogue
of \citet{2010ApJS..187..251W}. The $K_s$-band imaging was performed with the Wide-field InfraRed Camera
(WIRCam) on the 3.6 m Canada-France-Hawaii Telescope (CFHT) and covers an area of 0.25 deg$^2$
with a $5\sigma$ point-source depth of $K_{s,AB}=24.45$ mag.
The field imaged at 5.5\,GHz is entirely covered by the WIRCam observations.
We initially searched for the nearest $K_s$-band counterpart within one arcsecond of each radio source:
80 associations were found.
The distribution of the positional offsets between the radio sources
and their nearest NIR counterparts is shown in Fig.\,\ref{id_dist}.
The majority of the counterparts are found within 0.5\,arcsec of the radio source positions.
The median shift in R.A. and Dec, considering all the counterparts within 0.5\,arcsec, is
$\Delta_{\rm R.A.}=-0.04\pm 0.10$\,arcsec and $\Delta_{\rm Dec}=+0.04\pm 0.05$\,arcsec, respectively.
These shifts are less than 1/10 of the radio synthesized beam and smaller than the errors
(as estimated from the median absolute deviations), indicating that there is no evidence for any
measurable systematic offset between the radio and NIR positions.
\begin{figure}
\centering
\includegraphics[width=8cm]{id_dist.ps}
\caption[]{Separation of the nearest K$_s$-band counterparts from the radio sources (solid black line),
and of the matches obtained after shifting the radio sources by 1\,arcmin in four directions and
repeating the matching (dashed red line).}
\label{id_dist}
\end{figure}
We checked for possible random coincidences by shifting the positions of the radio sources in the catalogue
and searching again for the nearest K$_s$-band counterparts,
obtaining the distribution of separations expected for random coincidences (dashed line in Fig.\,\ref{id_dist}).
Based on the sky density of the NIR objects, the probability that an association
with a separation $\ge 0.5$\,arcsec is a chance coincidence is $\sim 50$ percent.
Assuming $r=0.5$\,arcsec as the cut-off separation for a real identification, we
have 75 single identifications and expect two random coincidences.
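These numbers can be reproduced with the standard Poisson estimate: for unrelated NIR objects of surface density $\rho$, the probability of a chance counterpart within radius $r$ of a radio position is $P(<r)=1-\exp(-\pi r^{2}\rho)$, and the $\approx 2$ coincidences expected among our 94 sources at $r=0.5$\,arcsec correspond to $P\approx 0.02$ per source.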
A total of 19 sources have no NIR identification within $r\le0.5$\,arcsec.
We examined each of these sources and found that three are extended sources where the
radio peak is offset with respect to the possible optical counterpart, but the radio morphology
suggests that the optical association is correct.
This results in 78/94
radio sources with a NIR counterpart in \citet{2010ApJS..187..251W}.
For the remaining unidentified objects we searched the catalogues of the Subaru
Multi Objects InfraRed Camera and Spectrograph (MOIRCS) ultra-deep survey
\citep{2011PASJ...63S.379K} and of the Cosmic Assembly Near-infrared Deep
Extragalactic Legacy Survey \citep[CANDELS;][]{2014ApJS..214...24S},
with $5\sigma$ depths of $K_{s,Vega}=24.1$
and $K_{s,AB}=24.7$, respectively.
We found 5 new NIR counterparts
within 0.5\,arcsec. The level of random
coincidences with these further NIR catalogues is still $\lsim 3$ percent.
In summary, we have a secure identification for 83 radio sources
(88 percent of the whole radio catalogue). In the following discussion we will refer to these
83 sources as the NIR-identified sample.
If we restrict the radio sample to sources with SNR $\ge 5.5$,
then we have 79 radio sources, 76 of which have a secure NIR counterpart (96 percent of the radio sample).
The high fraction ($\sim 90$ percent) of reliable identifications is a natural consequence
of the depth of the NIR catalogues used to cross-identify the radio sources, and is consistent
with that found in other studies \citep[e.g.][]{2006MNRAS.372..741S, 2017arXiv170309719S}.
The distribution of the $K_s$ magnitude for the NIR-identified sample is shown in
Fig.\,\ref{ks},
together with that of the \citet{2010ApJS..187..251W} $K_s$-selected sample, restricted to sources
in the area of our radio observations (dashed histogram).
The $K_s$ magnitude histogram of the radio sources
displays a much flatter distribution
than the overall NIR sample; this reflects
our radio/NIR selection function and demonstrates that we are probing
a different source population than purely NIR-selected samples.
\begin{figure}
\centering
\includegraphics[width=8cm]{ksdist_wang.ps}
\caption[]{$K_s$ magnitude distribution for the radio source counterparts (solid line) and
for the NIR-selected sample \citep{2010ApJS..187..251W} limited to sources within the same area covered by the
radio mosaic (dashed line).
}
\label{ks}
\end{figure}
\subsection{Sources without near-IR counterparts}
Eleven sources are still unidentified at a limiting $K_s$ magnitude of 24.5-25.0. All of them are low
SNR radio sources (SNR $< 5.8$) and eight of them have SNR $< 5.5$. Moreover, none of the 11 unidentified sources
has a counterpart at 1.4 GHz in \citet{2010ApJS..188..178M}.
We searched for possible Infrared Array Camera (IRAC) and Multi-band Imaging Photometer for Spitzer (MIPS)
counterparts of the NIR unidentified radio sources in the
S-CANDELS catalogue \citep{2015ApJS..218...33A} and GOODS-N Legacy Survey 24\,$\mu$m catalogue \citep{2011A&A...528A..35M}.
We used matching radii of 2 and 4\,arcsec to search for IRAC and 24\,$\mu$m MIPS counterparts, respectively.
One source is identified in both the IRAC and MIPS catalogues, for two radio sources an association is found in the
IRAC catalogue and, finally, for one source a counterpart is found in the 24 $\mu$m MIPS catalogue only.
In summary, we found four possible MIR counterparts among the eleven sources without NIR
identifications.
One of the remaining sources
is actually a famous galaxy, being associated with
HDF\,850.1, the brightest submillimetre source in the field
\citep[][and references therein]{2012Natur.486..233W}.
These radio sources, without a deep NIR identification but with a MIR
or sub-mm counterpart, are potentially very interesting objects and we will
investigate their properties in a later publication, but since they do not
possess detections in other bands we cannot include them in the analysis described
in \S\,\ref{agn}, which is limited to the NIR-identified sample.
\subsection{Spectroscopic \& photometric redshifts}
Many papers present spectroscopic and/or photometric redshifts obtained in the
GOODS-N field \citep[e.g.][]{2001ApJ...551L...9C, 2004AJ....127.3121W, 2008ApJ...689..687B,
2011PASJ...63S.379K, 2014ApJS..214...24S, 2016ApJS..225...27M}.
In order to obtain a homogeneous and updated set of redshifts we adopt,
when available, the
redshifts from the 3D-HST Treasury survey catalogues \citep{2014ApJS..214...24S, 2016ApJS..225...27M}.
GOODS-N is one of the five CANDELS fields for which WFC3 G141 spectroscopic data are available.
Spectroscopic redshifts are either measured with space-based dispersion grisms \citep{2016ApJS..225...27M}
or obtained from ground-based slit spectroscopy from the literature \citep{2014ApJS..214...24S}. Photometric
redshifts are determined with the EAZY code \citep{2008ApJ...686.1503B} and listed in \citet{2014ApJS..214...24S}.
The normalized median absolute deviation (MAD) scatter between the photometric and spectroscopic redshifts is
$\sigma_{\rm NMAD}= 1.48\times {\rm MAD} < 0.027\times (1+z)$ \citep{2014ApJS..214...24S}.
For five sources not included in the 3D-HST photometric
catalogue of GOODS-N, we searched the literature for an appropriate redshift.
Spectroscopic redshifts are available for 51 NIR counterparts and photometric redshifts for 28 sources,
bringing the fraction of NIR-identified
sources with a redshift to $95$ percent (79/83).
There are 4/83 NIR identified radio sources with no redshift measurement.
These are all located at the edges of the GOODS-N radio field in a region not covered by the IRAC
observations.
\begin{figure}
\centering
\includegraphics[width=8.7cm]{z_dist.ps}
\caption[]{Top: Redshift distribution (solid black line) for the 79 sources with known redshift
(51 spectroscopic and 28 photometric), and for the 28 sources with only photometric redshift (hatched red line).
Bottom: Isotropic 5.5\,GHz luminosity as a function of redshift for the 79 sources
with known redshift, spectroscopic (black filled points) or photometric (red empty points).
A fixed spectral index of $\alpha=0.7$ is used to convert flux densities to radio luminosities.}
\label{z_dist}
\end{figure}
Hereafter we use spectroscopic redshifts where available, and photometric redshifts otherwise.
The redshift distribution is shown in Fig.\,\ref{z_dist}.
It appears to follow a bimodal distribution
which corresponds to the (less prominent) peaks noted in the K$_s$
magnitude distribution.
There is a peak around $z\simeq 0.5$ with a tail extending to $z\simeq 1$ populated by the brighter sources
($17 < K_s < 21$) and a secondary peak around $z\simeq 2$.
One radio source is identified with a local galaxy at $z<0.1$.
The $z$-fitting procedure used in \citet{2016ApJS..225...27M}, based on combining grism and multi-band
photometric datasets, provides photometric redshifts for some low-redshift ($z\lsim 0.7$),
often red and/or faint, galaxies, even though they may have a spectroscopic redshift in the literature.
For these objects, the photometric fit alone provides a more accurate redshift, and the contribution
of the grism spectrum to the combined fit is negligible.
This explains the photometric redshifts assigned to some low redshift ($z\lsim 0.7$) sources in
Fig.\,\ref{z_dist}. It is worth noting that in all these cases
the photometric redshifts reported in \citet{2016ApJS..225...27M} are consistent with the spectroscopic
values that are listed in other catalogues.
The median redshift for the 79 NIR-identified radio sources with redshift information is $z=1.32$
($z=1.02$ for the subset with spectroscopic redshifts); 8 radio sources have a spectroscopic (1) or
photometric (7) redshift larger than 3.
The isotropic intrinsic radio luminosities at 5.5\,GHz (L$_{5.5{\rm GHz}}$)
for the sources with known redshifts are shown in Fig.\,\ref{z_dist}, where we assumed
a radio spectral index $\alpha=0.7$ for the K-correction.
The median L$_{5.5{\rm GHz}}$ of the sources with redshift is
$4.4\times 10^{23}$ W\,Hz$^{-1}$.
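For reference, the luminosities are computed in the standard way as
\begin{displaymath}
L_{5.5\,{\rm GHz}} = 4\pi D_{L}^{2}(z)\, S_{5.5\,{\rm GHz}}\, (1+z)^{\alpha-1},
\end{displaymath}
where $D_{L}$ is the luminosity distance in our adopted cosmology and $(1+z)^{\alpha-1}$ is the K-correction for a power-law spectrum $S\propto\nu^{-\alpha}$ with $\alpha=0.7$.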
\section{The AGN content of the 5.5\,GHz radio sample}
\label{agn}
In this section we use the exquisite multi-wavelength ancillary catalogues
of the GOODS-N field to identify systems in the 5.5\,GHz radio sample that are likely hosting
an AGN.
Multi-wavelength observations have led to the identification
of a two-fold mode of nuclear activity, RI-AGN and RE-AGN (see \S\,1),
which may reflect two different types of SMBH accretion and feedback.
RI-AGN and RE-AGN are also
named as ``radio-'' and ``quasar-'' mode AGN, respectively.
A full review of these two AGN populations and their properties
can be found in \citet{2014ARA&A..52..589H}.
In brief, RI-AGN stand out in the radio band, lacking
accretion-related X-ray, optical, or MIR emission
\citep[e.g.][]{2007MNRAS.376.1849H}, and
showing only low-excitation emission lines \citep{1979MNRAS.188..111H}.
Such AGN are associated with very low nuclear accretion rates (Eddington fraction $\ll$1 percent,
\citealt{1994ApJ...428L..13N, 1995ApJ...444..231N})
involving hot gas from the halo atmosphere, and are
hosted in massive galaxies within dense environments.
The feedback is based on the presence of
powerful radio jets that mechanically transfer the AGN energy into
the surrounding environment. It is widely accepted that recurrent
radio-mode AGN activity is a fundamental component of the lifecycle of massive galaxies,
responsible for maintaining these sources as ``red and dead'' once they have migrated
on the red sequence \citep[e.g.][]{2006MNRAS.365...11C, 2006MNRAS.368L..67B}
In contrast, RE-AGN emit powerfully over a
wide range of the electromagnetic spectrum
(MIR to X-rays). They are typically faint radio sources, although
a small fraction emit large scale, relativistic radio
jets. They are also characterized by the presence of high-excitation emission lines.
Quasar-mode AGN are associated with
radiatively efficient ($>1$ percent of the Eddington rate) optically thick, geometrically thin
accretion disks \citep{1973A&A....24..337S}, accreting cold gas via secular processes,
hosted by galaxies found in less dense environments, and often showing
ongoing star-formation \citep[][and references therein]{2014ARA&A..52..589H}.
AGN feedback may occur through high-velocity winds, outflows generated close to the AGN,
radiation pressure on dusty gas, or thermal heating \citep[e.g.][]{2005ARA&A..43..769V, 2012ARA&A..50..455F}.
To assess whether a galaxy is hosting an active nucleus,
we applied a number of AGN selection criteria at IR, X-ray and radio wavelengths.
Throughout this analysis we will adopt the nomenclature RE- and RI-AGN to refer to
these two distinct AGN populations, even though we
assign objects to these two classes not by
directly deriving the accretion efficiency, but rather
on the basis of their radio/IR/X-ray properties (IR diagnostic plots, radio-excess or X-ray luminosity).
This means that objects classified as RE- or RI-AGN might not necessarily
have two different accretion modes, but simply reflect the reliability of the combined
radio/IR/X-ray diagnostics.
The origin of the radio emission and its link with an AGN- or SF-dominated
host galaxy is deferred to Paper II; here we focus on deriving the fraction
of our 5.5\,GHz radio sample that is dominated by an AGN in at least one of the radio, infrared
or X-ray bands.
We first analyse the AGN content separately in each of the IR, X-ray and radio bands;
the final classification scheme is then obtained by combining all the criteria as follows.
Radio sources identified as AGN by the IR or X-ray diagnostics
are classified as RE-AGN.
Among the remaining sources, the RI-AGN are identified as those radio sources
having MIR colours typical of red and passive galaxies, or those showing a radio-excess.
All other sources, those not identified as AGN hosts by any of the aforementioned criteria,
are classified as SFGs.
\subsection{AGN infrared colour diagnostics}
\label{IR}
IR colour-colour criteria, based on
surveys conducted with {\it Spitzer} and {\it WISE},
are currently used to separate AGN from star-forming
or quiescent galaxies
\citep[e.g.][]{2004ApJS..154..166L, 2005ApJ...631..163S, 2007AJ....133..186L,
2012ApJ...748..142D}.
These diagnostics are based on the evidence
of a prominent dip in the SED of SFGs between the 1.6-$\mu$m stellar bump
and the emission from star-formation-heated dust at longer wavelengths. On the other hand,
luminous AGN should have a monotonically increasing power law SED across the
IRAC bands \citep[e.g.][]{1979ApJ...230...79N}, a consequence of X-ray-to-UV
AGN radiation reprocessed to IR wavelengths by a dusty torus surrounding the
central region \citep{1989ApJ...347...29S, 1992ApJ...399L..23P}.
This scenario applies to RE-AGN
and throughout this paper, when we use the term AGN without further specification, we refer
to the class of radiatively efficient AGN.
In the following, we make use of four IR colour-colour diagrams developed in recent years
to distinguish between AGN- and star-formation-dominated sources.
The four IR criteria are those presented by \citet{2012ApJ...748..142D}, \citet{2012ApJ...754..120M},
\citet{2012ApJ...759..139K, 2013ApJ...763..123K}.
All of these provide low contamination diagnostics ($\sim 10$ percent) in separating
SFGs and AGN up to high redshift ($z\sim 4$), by taking into account
the redshift evolution of IR colours.
This is essential for a sample of sources
spanning a wide range of redshifts in order to properly classify
high redshift objects.
\citet{2012ApJ...748..142D} redefined, in a more restrictive way,
the IRAC colour-based AGN selection criteria previously
developed by \citet{2004ApJS..154..166L, 2007AJ....133..186L},
obtaining a highly reliable classification
for deep IRAC surveys (referred to as the ``Donley wedge'' hereafter).
\citet{2012ApJ...754..120M} developed their own IR
colour-colour diagram, using $K_s$ and IRAC bands (the KI diagram) and,
finally, \citet{2012ApJ...759..139K, 2013ApJ...763..123K} presented two different combinations of colours that
combine MIR and far-IR (FIR) photometry ({\it Spitzer} IRAC/MIPS and {\it Herschel} PACS/SPIRE)
to classify high redshift ($z=0.5-4$) galaxies selected at 24\,$\mu$m with
{\it Spitzer} IRS spectroscopy.
These criteria will be briefly presented in the following.
For a full discussion we refer to the
original papers listed above.
\subsection{IR classification of the 5.5\,GHz radio sources}
In \S\,\ref{ir_class} we present the results obtained applying the
IR colour-colour diagnostics to the radio selected sample in GOODS-N, adopting
the following nomenclature:
\begin{itemize}
\item those radio sources classified as AGN by only one diagnostic diagram
are dubbed as RE-AGN-candidates, while RE-AGN are those classified as
AGN by at least two of the four IR colour-colour plots.
\item Radio sources with MIR colours consistent with those of red passive galaxies
are defined as RI-AGN.
Indeed, in these sources the AGN can be detected only in the radio band with no evidence
of accretion-related emission or recent star-formation in the IR or X-ray bands.
\item Galaxies which do not fit in the AGN regions of \textit{any} of the used
IR colour-colour plots are classified as SF/hybrid systems (SF/hyb).
\end{itemize}
It is important to state clearly that the term SF/hyb is chosen to underline that some of these sources
may not necessarily be pure SFGs,
as most of the IR criteria used here
are conservative at the expense of completeness.
While the SF/hyb radio sources are not AGN-dominated in the IR, they could include purely SFGs,
hybrid sources where AGN and star-formation coexist, or IR-weak AGN.
In \S\,\ref{xray_class}, \S\,\ref{rx_class} and \S\,\ref{vlbi} we will check the SF/hyb radio sources
for AGN-related radio emission using
other diagnostics (X-ray luminosity, radio-excess, and compactness).
\subsection{MIR to FIR photometry}
The GOODS-North field has a wealth of ancillary information at IR wavelengths. In particular we used the
IRAC photometry (3.6, 4.5, 5.8, 8.0\,$\mu$m) reported in
\citet{2010ApJS..187..251W}, and MIPS photometry at 24\,$\mu$m from
\citet{2011A&A...528A..35M, 2013A&A...553A.132M}, both measured by {\it Spitzer}.
{\it Herschel} imaging covers the entire GOODS-N field with Photoconductor Array
Camera and Spectrometer \citep[PACS, 100 and 160\,$\mu$m;][]{2010A&A...518L...2P}
and Spectral and Photometric Imaging Receiver \citep[SPIRE, 250, 350 and 500\,$\mu$m;][]{2010A&A...518L...3G}
data, obtained as part of the PACS Evolutionary Probe \citep[PEP;][]{2011A&A...532A..90L}
and GOODS-{\it Herschel} \citep[GOODS-H;][]{2011A&A...533A.119E} surveys.
The FIR photometry is taken from the PEP DR1 catalogue \citep{2013A&A...553A.132M} and
the GOODS-H catalogue \citep{2011A&A...533A.119E}.
IRAC photometry in all four bands is available for 77/83 radio sources.
Four of the six missing sources lie just outside the IRAC coverage area, while
the remaining two are not detected in any of the IRAC bands.
The number of radio sources with a detection in all of the four mid/far-IR bands
(250, 24, 8.0, 3.6\,$\mu$m) is 47. To these we add 19 sources for which we
use a $3\sigma$ upper limit for the 250\,$\mu$m flux. On the other hand, we have 52 radio sources detected in all the four
mid/far-IR bands (100, 24, 8.0, 3.6\,$\mu$m)
plus 14 for which we use a $3\sigma$ upper limit
for the 100\,$\mu$m flux. In summary, the
sample of radio sources to which we can apply the mid/far-infrared colour-colour AGN selection
criteria contains 66 objects.
\begin{figure*}
\centering
\includegraphics[width=8cm]{plot_donley.ps}
\includegraphics[width=8cm]{plot_ki.ps}
\includegraphics[width=8cm]{plot_herschel250.ps}
\includegraphics[width=8cm]{plot_herschel100.ps}
\caption[]{The four IR colour-colour plots used to classify the 5.5\,GHz radio sources in the GOODS-N
field. In each panel different colours are used to identify different classes of sources based on the classification
obtained by combining all the four IR plots: RE-AGN are shown in
green, RE-AGN-candidates in cyan, RI-AGN in red, and the remaining sources (SF/hyb
galaxies) in blue.
Top-Left: IRAC colour-colour plot, the dashed-line wedge shows the AGN selection region from \citet{2007AJ....133..186L},
while the smaller area enclosed by a solid-line is the revised wedge from \citet{2012ApJ...748..142D},
used in this paper.
Top-Right: KI colour-colour plot \citep{2012ApJ...754..120M}. The region populated by AGN is
delimited by the solid line
defined as $K_s-[4.5] > 0$ and $[4.5]-[8.0]>0$.
Magnitudes are in the AB photometric system.
Bottom-Left: IR colour-colour plot showing $\log(S_{8.0}/S_{3.6})$ versus $\log(S_{250}/S_{24})$.
The line separates RE-AGN (above the line) from star-forming or passive galaxies (below the line),
according to \citet{2012ApJ...759..139K, 2013ApJ...763..123K}.
Bottom-Right: IR colour-colour plot showing $\log(S_{8.0}/S_{3.6})$ versus $\log(S_{100}/S_{24})$.
The line separates RE-AGN (above the line) from star-forming or passive galaxies (below the line),
according to \citet{2012ApJ...759..139K, 2013ApJ...763..123K}.
In the two lower plots, empty symbols refer to 100\,$\mu$m and 250\,$\mu$m upper limits.
}
\label{midir1_plot}
\end{figure*}
\subsection{IR classification}
\label{ir_class}
Fig.\,\ref{midir1_plot} shows the four IR colour-colour plots used in this work.
The IRAC colour-colour diagnostic (top-left panel in Fig.\,\ref{midir1_plot}) can be applied to 77 of our 5.5\,GHz
selected sources, detected in all the IRAC bands.
The original AGN selection wedge \citep{2007AJ....133..186L} is plotted, as well as
the revised region adopted by \citet{2012ApJ...748..142D}.
Thirteen radio sources (about 17 percent) are inside the Donley wedge, and
all are power-law AGN: their flux densities are such that
$S_{3.6}<S_{4.5}<S_{5.8}<S_{8.0}$, within the photometric errors.
All of them are classified as RE-AGN since they are selected by at least two different IR criteria.
This confirms that AGN selected using the Donley wedge are highly reliable and not significantly affected
by contamination.
The drawback of this method is the lower level of completeness: only $68$ percent (13/19) of the RE-AGN
and $41$ percent (13/32) of the total number of RE-AGN and RE-AGN-candidates
in our 5.5\,GHz radio-selected sample are found inside the Donley wedge.
The RE-AGN (6 sources) and RE-AGN-candidates (13 sources) outside the Donley wedge are roughly evenly split
into two groups. Those with $z>1.5$ are found clustered in the region close to the wedge, bridging the
gap with the population of SF/hyb galaxies ($\log(S_{5.8}/S_{3.6})\simeq 0.25$ and
$\log(S_{8.0}/S_{4.5})\simeq 0.1$).
On the other hand, sources with $z<1.5$ span a wider range of $\log(S_{8.0}/S_{4.5})$ (indicating different
levels of reddening) and are mixed with the SF/hyb galaxies.
A group of radio sources is closely clustered in the bottom-left of the diagram (top-left panel in Fig.\,\ref{midir1_plot}).
These galaxies have MIR colours consistent with those expected for red and passive galaxies at $z\lsim 1$.
To allow for possible higher redshift ($z\simeq 2$) quiescent galaxies (see Fig.\,2 in \citealt{2012ApJ...748..142D} for
the evolutionary tracks of passive galaxies), we consider all the 15 radio sources
with $\log(S_{5.8}/S_{3.6})<0.05$ and $\log(S_{8.0}/S_{4.5})< -0.25$
as radio-emitting RI-AGN hosted by red passive galaxies.
The KI criterion (top-right panel in Fig.\,\ref{midir1_plot})
identifies as AGN the sources with $K_s-[4.5] > 0$ and $[4.5]-[8.0]>0$, where AB magnitudes are used.
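As a purely illustrative example (with hypothetical values, not a specific source from our catalogue), a galaxy with $K_s=22.0$, $[4.5]=21.5$ and $[8.0]=21.0$ (AB) has $K_s-[4.5]=0.5>0$ and $[4.5]-[8.0]=0.5>0$, and would therefore fall in the AGN region of the KI diagram.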
This method selects the largest number of AGN (15 RE-AGN and 11 RE-AGN-candidates) and
delivers the highest level of completeness both for the RE-AGN (88 percent, 15/19 sources) and the
overall RE-AGN and RE-AGN-candidates (81 percent, 26/32 sources).
However, given that 11 radio sources are identified as AGN-candidates only by this diagnostic, we must consider
the possibility that some of these 11 sources could be misclassified SF/hyb galaxies.
We will return to this point later; here we just note
that the KI diagnostic is most effective in selecting AGN at $z\simeq$ 2-3 \citep{2012ApJ...754..120M},
and that 7/11 of the RE-AGN-candidates are indeed in this redshift range.
Before reviewing the results from the diagnostics using FIR photometry \citep{2012ApJ...759..139K, 2013ApJ...763..123K},
we note that this is possible for a smaller number
of sources (66 compared to 77 objects). Of the 11 sources without a FIR counterpart,
9 are associated with
RI-AGN hosted by red passive galaxies which are typically faint in the MIR, and
therefore mostly undetected at 24 $\mu$m.
The IRAC/{\it Herschel}-250 diagnostic ($\log(S_{8.0}/S_{3.6})$ vs. $\log(S_{250}/S_{24})$, lower-left panel in Fig.\,\ref{midir1_plot})
selects only 58 percent (11/19) of the RE-AGN and 41 percent (13/32) of the
RE-AGN plus RE-AGN-candidates.
We cannot exclude the possibility that a few sources classified as RE-AGN by at least two of the other criteria
could shift into the AGN region,
since they have only an upper limit for the {\it Herschel} flux.
The IRAC/{\it Herschel}-100 diagnostic ($\log(S_{8.0}/S_{3.6})$ vs. $\log(S_{100}/S_{24})$, lower-right panel in Fig.\,\ref{midir1_plot}) is
more effective in selecting RE-AGN (89 percent, 17/19), and all the radio sources above the threshold
line are confirmed as RE-AGN by at least one other
criterion, implying a high reliability coupled with a high level of completeness.
The RE-AGN-candidates occupy an intermediate region between the RE-AGN and the population
of SF/hyb galaxies, while the few RI-AGN with a 24\,$\mu$m detection have
bluer $\log(S_{8.0}/S_{3.6})$ colours.
\begin{figure}
\centering
\includegraphics[width=8cm]{plot_donley-rx-x.ps}
\caption[]{The IRAC colour-colour diagram for the 77 5.5\,GHz selected radio sources detected at all IRAC bands,
highlighting X-ray AGN (magenta crosses) and radio-excess sources (black squares).}
\label{donley2}
\end{figure}
We note that for the present sample of radio sources, the four IR diagnostics we have used
are practically equivalent to selecting sources in the IRAC colour-colour diagram with the cuts
$\log(S_{5.8}/S_{3.6}) > -0.1$ and $\log(S_{8.0}/S_{4.5}) >0$ with a $\simeq 10$ percent level of
incompleteness (1 RE-AGN and 2 RE-AGN-candidates are located outside this region)
and contamination (3 SF/hyb sources fall within this region).
In summary, using the four IR colour-colour diagnostics we find 19 RE-AGN,
13 RE-AGN-candidates,
and 15 RI-AGN associated with red passive galaxies.
Considering both RE-AGN and RE-AGN-candidates and the RI-AGN, about 61 percent (47/77) of the radio sources
with available infrared photometry are classified as AGN.
The remaining 30 sources, the SF/hyb systems, are not identified as AGN hosts by any
of the four IR criteria.
It is important to underline that we cannot exclude the presence of weak nuclear activity,
since the IR colour-colour plots
just tell us that any AGN, if present, does not dominate the IR emission (as in the RE-AGN) and
does not have MIR colours compatible with those of a red passive elliptical (as in the RI-AGN).
So, these SF/hyb galaxies could be either pure SFGs, hybrid systems, or IR-weak AGN.
\subsection{X-ray AGN}
\label{xray_class}
We searched for X-ray bright AGN in our 5.5\,GHz catalogue, by using the 2\,Ms {\it Chandra} Deep Field-North
improved point-source catalogue \citep{2016ApJS..224...15X}, which covers the whole area of our radio observations.
The main catalogue lists 683 X-ray sources detected using {\tt WAVDETECT} with the following criteria: 1) a false positive
probability threshold of $10^{-5}$ in at least one of the three standard X-ray bands (full band, 0.2-7\,keV;
soft band, 0.2-2\,keV; hard band, 2-7\,keV); and 2) a binomial probability source-selection criterion of $P<0.004$
\citep{2016ApJS..224...15X}.
This new approach maximizes the number of reliable sources detected, yielding 196 new
main-catalogue sources compared to \citet{2003AJ....126..539A}.
Using the same {\tt WAVDETECT} threshold but
$0.004<P<0.1$, and limited to NIR-bright counterparts ($K_s< 22.9$\,mag), results in a supplementary catalogue of 72 additional X-ray sources.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{plot_q100.ps}
\caption[]
{The $q_{100}=\log(S_{100\rm \mu m}/S_{\rm 1.4GHz})$ as a function of redshift for the 5.5\,GHz radio-selected sample
with full IRAC detection. Symbols are the same as in Fig.\ref{midir1_plot}, except that open symbols
are used for sources that have only an upper limit at 100\,$\mu$m.
}
\label{plotq100}
\end{figure}
The X-ray sources in the catalogue are already associated with a $K_s$-band counterpart using the catalogue of \citet{2010ApJS..187..251W}.
We find that 50 radio sources have an X-ray counterpart in the main catalogue, and two
radio sources have an association in the supplementary catalogue.
We checked for possible X-ray counterparts to the radio sources without a NIR identification, but
found no X-ray counterpart within 1\,arcsec.
The fraction of radio sources
with an X-ray counterpart is $\simeq 55$ percent (52/94), rising to $\simeq 63$ percent (52/83) for NIR-identified
radio sources. These fractions are higher than those found
in similar works on extragalactic radio sources \citep[e.g.][]{2013MNRAS.436.3759B}.
For the {\it Chandra} Deep Field-South, \citet{2013MNRAS.436.3759B}
exploited X-ray data sets twice as deep, 4\,Ms \citep{2011ApJS..195...10X} plus 250\,ksec observations
\citep{2005ApJS..161...21L}, obtaining X-ray detections for
25 percent of their radio sources. Therefore, we reasonably conclude that the higher fraction of radio sources
with an X-ray association found in the 5.5\,GHz catalogue is not a consequence of the depth
of the X-ray data, but rather of the larger fraction of AGN sources in our catalogue. This is likely due
to the higher frequency (5.5 against 1.4\,GHz) and to the better angular
resolution (see \S\,\ref{disc}).
We classified our radio sources as X-ray AGN if
the observed hard-band 2-7\,keV luminosity (not corrected for intrinsic absorption) is
$L_{\rm 2-7\,keV,obs} > 10^{42}$\,erg\,s$^{-1}$, i.e. the typical X-ray luminosity threshold adopted to
separate AGN-related from star-formation-related X-ray emission.
If the hard-band X-ray luminosity is derived from an upper limit on the 2-7\,keV flux,
we require a de-absorbed total-band luminosity $L_{\rm 0.2-7\,keV,int}> 2\times 10^{42}$\,erg\,s$^{-1}$.
On the basis of these two criteria, a total of 30 radio sources are classified
as X-ray AGN. These are marked with magenta crosses overlaid on the IR class symbols
in Fig.\ref{donley2}.
As already shown in many papers \citep[e.g.][]{2008ApJ...680..130C}, in the IRAC colour-colour plot
the X-ray AGN tend to have colours consistent with a
power-law SED from the bluest to the reddest colours.
Only 3 bright X-ray sources are found in the region of the RI-AGN, which,
by definition, are X-ray weak.
We derived the median value of
the 2-7\,keV luminosity for the IR classified sources,
taking into account the upper limits by using the Kaplan-Meier
estimator and the code {\tt ASURV},
which implements the methods described by \citet{1985ApJ...293..192F} and \citet{1986ApJ...306..490I}
to properly handle censored data. Out of the 19 RE-AGN, 14 are
detected in the 2\,Ms catalogue with a median 2-7\,keV luminosity
(including 3 upper limits) of $2.4\times 10^{43}$ erg\,s$^{-1}$;
10 of these are X-ray AGN.
All the 5 RE-AGN undetected in the X-ray band are located close to the lower diagonal line
of the Donley wedge and have $z> 2.6$. Therefore, we can reasonably assume they are undetected in the
2\,Ms X-ray image for sensitivity reasons; given their high redshifts, they would otherwise
have been classified as X-ray AGN.
Out of the 13 RE-AGN-candidates, 11 have a detection or an upper limit in the hard band, with 8 sources
that can be classified as X-ray AGN on the basis of the hard-band or total-band X-ray luminosity.
The radio sources classified as RE-AGN candidates are typically
almost one order of magnitude less luminous than the RE-AGN
in the 2-7\,keV X-ray band: they have
a median X-ray luminosity (including 5 upper limits) of $3.1\times 10^{42}$ erg\,s$^{-1}$.
Out of the 15 RI-AGN associated with passive galaxies, 9 have a detection or upper limit in the hard-band,
and 3 are X-ray AGN.
The median 2-7\,keV X-ray luminosity of the 9 X-ray-detected RI-AGN (including 3 upper limits)
is $1.6\times 10^{41}$ erg\,s$^{-1}$.
In summary, 56 percent (18/32) of the radio selected RE-AGN (RE-AGN-candidates included)
are also X-ray AGN, compared to the 20 percent (3/15) of the RI-AGN.
It is interesting to note that we also find
X-ray bright sources among the remaining sources that were not classified
as AGN by any of the four IR colour-colour diagrams, the so-called SF/hyb sources.
Out of the 30 SF/hyb systems, 18 are detected in the X-ray, with 9 (30 percent) having
X-ray luminosities typical of AGN.
A fraction of the X-ray emission can be produced by star-formation processes
and, therefore, we might wrongly classify some of these sources as X-ray AGN.
To investigate this possibility, we applied two further tests to
the 9 SF/hyb sources with X-ray luminosities typical of an AGN.
Firstly, we derived the X-ray to optical flux ratio, defined as
$\log(f_X/f_{\rm opt})=\log(f_{\rm 0.5-2keV}) + 0.4\times R + 5.71$, where R is the R-band magnitude.
Sources with $\log(f_X/f_{\rm opt})> -1$ are assumed to be powered by an AGN
\citep[e.g.][]{2002AJ....124.2351B, 2004AJ....128.2048B}. The X-ray to optical flux ratio is effective in separating
AGN and SFGs up to $z\sim2$ \citep{2014MNRAS.443.3728S}, and
the nine SF/hyb sources have redshifts in the range $z=0.5-2.5$, with only two objects with $z>2$ (2.08 and 2.5).
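To illustrate the criterion with hypothetical numbers (not one of our sources), a source with $f_{\rm 0.5-2keV}=10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$ and $R=23$\,mag would have
\[
\log(f_X/f_{\rm opt}) = -15 + 0.4\times 23 + 5.71 \simeq -0.1 > -1,
\]
and would therefore be flagged as AGN-powered by this test.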
Then we compared the X-ray luminosities with those expected from
star-formation processes using the relation between X-ray luminosity and total IR luminosity
derived by \citet{2014MNRAS.443.3728S}.
The total IR luminosities of these nine sources were derived by fitting the mid-to-far infrared
SED using the IDL code developed by \citet{2012MNRAS.425.3094C}:
a simultaneous joint fit to a single dust temperature greybody in the FIR
(for the cold dust component representing the reprocessed SF emission)
plus a MIR power law (for the hot dust component from AGN).
The total IR luminosity is derived from the greybody
component.
In just one case we find that the X-ray-to-optical ratio is consistent with that expected from a SFG
and that the contribution of star-formation to the X-ray luminosity is significant ($\sim 25$ percent).
Even after correcting for this contribution, the hard-band X-ray luminosity of this source exceeds the threshold used here.
Therefore, we conclude that in all the SF/hyb X-ray powerful sources (with the possible exception of one source
that has mixed diagnostics) the X-ray emission is dominated
by an AGN component. All these nine sources, previously not classified as AGN on the basis
of the colour-colour diagnostics, are X-ray bright AGN, confirming
that a significant fraction of radio-selected SF/hyb objects show nuclear activity
that is not detected by the IR colour-colour diagnostics.
We note that all the SF/hyb sources with $z>1.5$ are either undetected in the X-ray or, if detected, are classified
as AGN on the basis of their hard-band luminosity.
The median 2-7\,keV luminosity of the 18 SF/hyb detected in the full band X-ray image
(including 7 hard-band upper limits) is
$4.1\times 10^{41}$ erg\,s$^{-1}$.
\subsection{Radio-excess sources}
\label{rx_class}
The correlation between the total IR luminosity and the 1.4\,GHz radio luminosity
for galaxies with ongoing star-formation is one of the
tightest in astrophysics.
The relationship holds over a very wide range of redshifts and
luminosities, from normal, radio-quiet spirals to ultra-luminous IR galaxies (ULIRGs)
\citep{2001ApJ...554..803Y, 2002A&A...384L..19G, 2004ApJS..154..147A},
and is one of the most useful diagnostic tools in revealing excess
radio emission exceeding that
expected from pure star-formation processes.
Several studies have replaced the FIR flux with the monochromatic flux at 24\,$\mu$m
\citep[e.g.][]{2013MNRAS.436.3759B} or at 100\,$\mu$m \citep[e.g.][]{2013A&A...549A..59D}
as a proxy for the FIR emission.
In particular, \citet{2013A&A...549A..59D} showed that a simple cut at $q_{100}<1.5$
(where $q_{100}=\log(S_{100\mu m}/S_{1.4\rm GHz})$) selects $\sim 80$ percent of the
radio-excess sources defined using the total FIR.
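As a worked example with illustrative numbers (not a specific source from our sample), a galaxy with $S_{100\mu \rm m}=1$\,mJy and $S_{\rm 1.4GHz}=50\,\mu$Jy has $q_{100}=\log(1000/50)\simeq 1.3<1.5$, and would therefore be flagged as a radio-excess source.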
Following \citet{2013A&A...549A..59D} we use $q_{100}<1.5$ to identify radio-excess sources in our
radio selected sample.
Clearly, such a criterion is simplistic and does not take into account the $z$ dependence, such that it may
suffer from contamination, especially at high redshift, by strong IR sources (i.e. hyper-luminous IR galaxies):
ideally, $q_{100}$ should be compared to the expected tracks for different representative
types of galaxies.
However, here we use the $q_{100}$ parameter not to build a clean sample of radio-excess sources, but
mainly to pinpoint embedded nuclear activity (RI-AGN) among the SF/hyb sources identified via the IR criteria.
Moreover, as we show below, the SF/hyb sources characterised by a radio excess are in a range of
redshift and $q_{100}$ values supporting an AGN contribution to the observed radio emission
regardless of the possible redshift evolution of the $q_{100}$ parameter.
A full IR SED
model fitting applied to a larger sample (selected from 1.4\,GHz observations) will be
presented in Paper II.
\begin{table*}
\caption{Summary of X-ray \& Radio-excess AGN}
\begin{center}
\begin{tabular}{lccc}
\hline
IR Class & \# X-ray & \# Radio-exc & \# X-ray \& Radio-exc.\\
\hline
RE-AGN & 10/19 & ~7/19 & 7/19 \\
RE-AGN-candidate& ~8/13 & ~4/13 & 2/13 \\
RI-AGN & ~3/15 & 13/15 & 2/15 \\
SF/hybrid & ~9/30 & ~7/30 & 2/30 \\
\hline
\end{tabular}
\end{center}
\label{tab:xray}
\end{table*}
Fifty-two sources (out of the 77 radio sources with full IRAC photometry)
have a {\it Herschel-PACS} detection at 100\,$\mu$m, and for the remaining 25 objects
we use a $3\sigma$ upper limit of 1.0\,mJy \citep{2013A&A...553A.132M, 2015A&A...573A..45M}.
The 1.4\,GHz flux densities are taken from
\citet{2010ApJS..188..178M} for all but three sources, which are detected only at 5.5\,GHz.
For these three sources we derive the 1.4\,GHz flux density from that measured at 5.5\,GHz using a
spectral index of 0.7.
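Explicitly, adopting the usual convention $S_\nu \propto \nu^{-\alpha}$ (an assumption on the sign convention, which is not stated explicitly here), the extrapolation reads
\[
S_{\rm 1.4\,GHz} = S_{\rm 5.5\,GHz}\left(\frac{5.5}{1.4}\right)^{0.7} \simeq 2.6\,S_{\rm 5.5\,GHz}.
\]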
In Fig.\,\ref{plotq100} we plot the observed $q_{100}$ values against redshift
for all the catalogued sources with an infrared classification.
Considering $q_{100}<1.5$ as a proxy for selecting radio-excess sources, we have
31 radio-excess sources (41 percent of the sub-sample with full IRAC coverage).
As expected, radiatively-inefficient AGN
are typically associated with radio-excess sources: 87 percent (13/15) of this class of
sources have $q_{100}<1.5$ and for the two remaining sources the $q_{100}$ value
is an upper limit.
About one third (11/32) of the RE-AGN (including also the RE-AGN-candidates)
are associated with radio-excess sources.
This significant fraction is not entirely surprising as
we are dealing with radio-selected RE-AGN.
However, the number of radio-excess RE-AGN
could vary significantly, since many of those at high redshift
are close to the $q_{100}$ threshold adopted here.
Moreover, at $z\gsim 2$ the criterion $q_{100}<1.5$ could be too simplistic
and a redshift evolution of this parameter should be taken into account.
In any case, this does not influence the census of nuclear activity as
most of the sources at such redshifts are already classified
as AGN on the basis of the IR colours or hard-band X-ray emission.
We also note that RE-AGN is the only class for which there is a clear
link between the radio-excess and the X-ray emission:
all the radio sources classified as RE-AGN and showing a
radio-excess (7 out of 19) are also X-ray AGN.
We find five radio sources at $z>1.5$, classified as SF/hyb systems
and not strong X-ray emitters, that
show a clear radio-excess ($q_{100}<1$ for four of them).
Even taking into account the evolution with redshift of $q_{100}$,
all these 5 sources would fall in the radio-excess region \citep[e.g. see Fig.5 in][]{2013A&A...549A..59D}.
Finally, there is an IR-excess source at $z=2.5$: here
the IR emission should be dominated by an AGN and, indeed, the source is classified as such on the basis of both
the IR and X-ray diagnostics. It does not show a radio-excess, as is the case for most RE-AGN.
The radio-excess sources are shown as black squares in Fig.\,\ref{donley2}.
In Table\,\ref{tab:xray} we list, for each of the IR classes,
the number of X-ray AGN, the number of radio-excess sources and the number of sources which are both
X-ray and radio-excess AGN.
At this point it is important to recall that we have classified as RE-AGN-candidates those sources selected
as AGN by only one of the four IR diagnostics. The reason for this choice was to have
a separate class of sources potentially hosting nuclear activity for which there is a
significant possibility of contamination by non-AGN sources.
All but three (77 percent, 10/13) of the RE-AGN-candidates
\textit{are confirmed as AGN} by the X-ray luminosity or by their radio-excess.
We conclude that the difference between RE-AGN and RE-AGN-candidates is mainly due to the AGN dominance in the IR,
but it is reasonable to consider all these sources as true AGN.
\subsection{1.4 GHz VLBI detections}
\label{vlbi}
The most direct confirmation of a radio AGN is
provided by the observation of high brightness temperature
components on milli-arcsec (mas) angular scales as probed by Very Long Baseline Interferometry (VLBI).
Although in the local Universe
compact and intense radio emission on mas scales
might also be associated with supernovae and supernova remnants,
these are difficult to detect with VLBI at $z>0.1$.
Twenty GOODS-N sources were detected by recent global VLBI observations at 1.6\,GHz
with S$_{VLBI} >50\,\mu$Jy and an angular resolution of 4\,mas \citep{2016A&A...587A..85R}.
Ten of them were previously
detected with VLBI at 1.4\,GHz by \citet{2013A&A...550A..68C}.
VLBI cores are found to
account for, on average, 30 percent of the total 1.6\,GHz emission.
At 5.5\,GHz, we catalogued 19 of the 20 sources
with a VLBI detection. The remaining source is located outside
the region covered by our 5.5\,GHz mosaic.
About half (9/19) of the 5.5\,GHz sources with VLBI detections
belong to the class of
RI-AGN and hence are associated with optically-passive galaxies.
This means that about 60 percent (9/15)
of the radio sources belonging
to this class have a VLBI detection, confirming the presence of radio bright compact cores in RI-AGN.
Regarding RE-AGN, about 26 percent (5/19) have a VLBI detection, with four of them
classified as radio-excess sources with a redshift $z\lsim 1$.
The remaining object is a RE-AGN with $q_{100}=1.7$ and $z\sim 0.6$,
very close to the dividing line defining radio-excess sources.
Only one of the RE-AGN-candidates is detected by VLBI, and is also a radio-excess source and X-ray AGN.
Among the sources IR-classified as SF/hyb, three galaxies have a VLBI detection. All
three were classified as radio-excess sources and show a bright, compact core in the
VLA observations.
Finally, one source with a VLBI detection does not possess an IR classification.
\section{Discussion}
\label{disc}
In the previous section we investigated the AGN content of the radio catalogue, selected at 5.5\,GHz in the
GOODS-N field, via multi-wavelength AGN selection criteria.
The final classification scheme was obtained by combining all the criteria, as anticipated
in \S 6. Radio sources selected as AGN by at least one of the four IR diagnostics,
or fulfilling the X-ray luminosity requirement, are
classified as RE-AGN.
Among the remaining sources, the RI-AGN are identified as those radio sources
having MIR colours typical of red and passive galaxies or those showing a radio-excess
(on the basis of the $q_{100}$ parameter).
All the other sources,
which are not identified as AGN hosts by any of the criteria used here,
are classified as SFGs.
In Fig.\,\ref{donley_all} we plot the IRAC colour-colour diagram, already shown in Fig.\,\ref{midir1_plot},
updating the source classification, and in Table\,\ref{tab_allclass} we list the 77 classified sources,
indicating, together with the redshift,
the classification based on the four IR colour-colour plots, which sources were classified as X-ray or
radio-excess AGN or were VLBI detected, and the final classification as RE-AGN, RI-AGN or SFG.
\begin{table*}
\caption{Sample table listing the multi-wavelength classification for the first ten 5.5\,GHz sources with NIR identification.
The full version of the table is available as online-only material. Column\,1 gives the source name.
Column\,2 lists the spectroscopic ($^s$) or photometric ($^p$) redshift.
Column\,3 gives the classification based on the four IR colour-colour diagnostics.
Columns\,4 to 6: the crosses ("$\times$") identify the radio sources classified as X-ray AGN, radio-excess and VLBI sources.
Column\,7 is the final classification, determined by combining all the multi-wavelength diagnostics.
}
\label{tab_allclass}
\begin{center}
\begin{tabular}{lcccccc}
\hline
Source Name & $z$ & class$_{\rm IR}$ & X-ray & Radio-exc & VLBI & class \\
\hline
J123557+621536 & 0.433$^p$ & SF/Hyb & - & - & - & SFG \\
J123601+621126 & 0.913$^s$ & RI-AGN & - & $\times$ & - & RI-AGN \\
J123603+621110 & 0.638$^s$ & SF/Hyb & - & - & - & SFG \\
J123606+620951 & 0.772$^s$ & SF/Hyb & $\times$ & - & - & RE-AGN \\
J123606+621021 & 2.505$^s$ & SF/Hyb & $\times$ & $\times$ & - & RE-AGN \\
J123608+621035 & 0.679$^s$ & RE-AGN & $\times$ & - & $\times$ & RE-AGN \\
J123609+621422 & 0.779$^s$ & SF/Hyb & - & - & - & SFG \\
J123617+621011 & 0.846$^s$ & SF/Hyb & $\times$ & $\times$ & - & RE-AGN \\
J123617+621540 & 1.993$^s$ & SF/Hyb & - & $\times$ & $\times$ & RI-AGN \\
J123618+621550 & 2.186$^p$ & RE-AGN$_{\rm cand}$ & $\times$ & $\times$ & - & RE-AGN \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{plot_donley_all.ps}
\caption[]{ The IRAC colour-colour diagram for the 77
5.5\,GHz selected radio sources, detected at all IRAC bands.
The classification reported here has been
updated based on the X-ray luminosity and radio-excess criteria.
All the RE-AGN (including the RE-AGN candidates) are shown with green symbols, the RI-AGN and SFGs
respectively with red and blue symbols.}
\label{donley_all}
\end{figure}
Putting together all our multi-wavelength AGN classifications,
the first notable result is the large fraction of AGN in the 5.5\,GHz catalogue:
about 79 percent (61/77) of the IR-classified sources
show evidence for nuclear activity, in at least one of the radio, infrared or X-ray bands.
This fraction becomes even higher, 95 percent (35/37),
when considering only the radio sources with $z>1.5$.
This fraction of AGN is very large,
especially if we consider that we are sampling a
population of faint radio sources: the median peak brightness and total flux densities
for the 77 sources with full IRAC coverage are about 20\,$\mu$Jy beam$^{-1}$ and 40\,$\mu$Jy at
5.5\,GHz, respectively.
Moreover, we note that radio sources classified as AGN (both RE and RI-AGN)
dominate at all flux density levels.
Other radio surveys with comparable sensitivity, e.g. E-CDFS \citep{2013ApJS..205...13M, 2013MNRAS.436.3759B}
or VLA-COSMOS 3GHz \citep{2017arXiv170309713S, 2017arXiv170309720D},
derive a fraction of radio detected AGN of about 40 percent, a factor of two lower than that derived in this paper.
As argued below, we think that the large fraction of AGN is a selection effect mainly due to the
lack of (or limited) short-spacing information in our
VLA (A-array dominated) data, which limits
the largest scale structure that can be imaged, introducing a bias against extended
($>1-2$\,arcsec) low-surface-brightness sources.
Indeed, AGN- and star-formation-related radio emission should display distinct morphological structures.
In particular, radio sources hosting nuclear activity should preferentially have a compact component,
while star-forming galaxies should be characterised by extended/diffuse radio emission on kpc scales,
associated with the galactic disk.
We tested this by analysing the deconvolved angular size
of the sources derived from the source fitting procedure (see \S\,\ref{catalogue}).
We used the major axis as an estimate of the source size.
For those sources that are classified
as unresolved, on the basis of the relation between the total-to-peak ratio and the SNR,
we assumed that the fitted size is an upper limit.
For a more homogeneous comparison, we restricted this analysis to the sources with redshift $z<1.5$,
since 14 of the 16 SFGs are within this limit.
We derived the Kaplan-Meier median estimator using the {\sc ASURV} package.
We find that RE-AGN and RI-AGN have the same median sizes, and for this reason we combine all the
AGN, deriving a median size of $0.29\pm 0.22$ arcsec for the AGN and
$0.79\pm 0.21$ arcsec for the SFGs (the quoted errors are the median absolute deviations).
This result is consistent with our classification based on the IR colours, X-ray luminosities or
radio-excess. Indeed, the median angular size
of the SFGs corresponds, at $z=1$, to $\sim 6$ kpc, consistent with a radio emission distributed over a galactic
disk.
So far we have found that, among the sources detected at 5.5\,GHz, those classified as AGN
are more compact than those classified as star-forming galaxies.
On the other hand, the overwhelming fraction of AGN in our 5.5\,GHz-selected sample is,
apparently, at odds with the results of other deep radio surveys and with the classifications reported in
\citet[][hereafter M05]{2005MNRAS.358.1159M} for a complete sample selected at 1.4\,GHz in GOODS-N.
The sample in M05 contains 92 sources with flux densities at 1.4\,GHz above
40\,$\mu$Jy, from a 10$\times$10\,arcmin$^2$ region,
within the area covered by our 5.5\,GHz VLA observations.
The classification adopted in M05 divided the radio sources into secure or candidate AGN or SFGs
on the basis of the combined radio/optical morphology, radio spectral index, X-ray luminosity, and ISO detection.
We emphasize that the criteria adopted in M05 to classify a radio source as AGN or SFG
are either different to those applied in this paper or based on shallower data at X-ray and infrared wavelengths.
Adding together the candidates and the secure classifications, both for AGN and SFGs,
more than half of the sources in M05 are classified as SFGs
(48/92, $52$ percent), while only one fifth are classified as AGN (18/92, $20$ percent).
The remaining sources (26/92, $28$ percent) are unclassified, meaning that
the radio properties could be associated either with AGN or starburst activity.
In principle, by assuming a radio spectral index of 0.7, we should be able to detect
89 out of the 92 sources ($\simeq 97$ percent) listed in M05,
at the point source sensitivity of our 5.5\,GHz mosaic.
In practice, since most of the sources are resolved at the angular resolution of our 5.5\,GHz image,
we detect only $\sim$ 60 percent of the 1.4\,GHz selected complete sample.
Using the values listed in Table A2 of M05, we derived the median values of the 1.4\,GHz total flux density and
largest angular size for the sources detected and not detected at 5.5\,GHz.
Out of the 48 SFGs classified by M05, 25 are detected at 5.5\,GHz.
These are the objects
with higher flux density and smaller sizes: for the 25 sources detected at 5.5\,GHz the
median 1.4\,GHz flux density and median largest angular size
(LAS) are 71\,$\mu$Jy and 0.8\,arcsec, compared with
53 $\mu$Jy and 1.2 arcsec for the 23 radio sources undetected at 5.5\,GHz.
The same is observed for the sources unclassified by M05: 14 out of 26 are detected at
5.5\,GHz with a median flux density and LAS of 124\,$\mu$Jy and 0.8\,arcsec,
compared to 72\,$\mu$Jy and 2.1\,arcsec for those undetected at 5.5\,GHz.
On the other hand, 16 out of 18 AGN classified by M05 are detected at 5.5\,GHz. These 16 sources have a median
1.4\,GHz flux density and LAS of 217\,$\mu$Jy and 0.6\,arcsec. The only two AGN undetected at 5.5\,GHz are
the two weakest AGN, with LAS $\ge 2.5$\,arcsec in M05.
We conclude that we are systematically missing faint sources with sizes $\gsim 1$ arcsec, and that these sources
are usually identified with SFGs in M05. This is not surprising: while the point-source sensitivity of our
observations at 5.5\,GHz and that of M05 at 1.4\,GHz are comparable (once scaled to take into account the radio
spectral index), the two surveys have different beam solid angles ($0.56\times 0.47$ arcsec$^2$ at 5.5\,GHz and
$2\times 2$ arcsec$^2$ at 1.4\,GHz), and
therefore different brightness sensitivities: the lower resolution observations in M05 are about 15 times as sensitive
as our high resolution observations for sources with sizes larger than the beam \citep[e.g.][]{2015arXiv150205616C}.
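As a sanity check on this factor (our own arithmetic from the quoted numbers), the ratio of beam solid angles is
\[
\frac{2\times 2~{\rm arcsec}^2}{0.56\times 0.47~{\rm arcsec}^2} \simeq 15,
\]
consistent with the brightness-sensitivity ratio quoted above for sources larger than both beams.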
Even the VLA-COSMOS 3\,GHz survey, which is the closest in frequency and resolution ($0.75\times 0.75$ arcsec$^2$) to our
observations, is five times as sensitive as our observations in terms of brightness sensitivity.
If we assume that the 35 SFGs and unclassified sources in M05, not detected at 5.5\,GHz, are indeed all
SFGs, the overall fraction of AGN in our sample becomes less extreme and more in line with
expectations at the tens of $\mu$Jy level.
As pointed out above, short-spacing data are required to properly sample low-surface-brightness sources
on arcsec scales: this would be achievable by adding VLA C-configuration
observations to our data. Nonetheless, our results highlight the usefulness of
the present 5.5\,GHz observations in selecting radio-emitting AGN at the faintest flux levels
($\lsim 100\,\mu$Jy).
One further point that needs to be mentioned is the different classifications in M05 and the present paper.
We note that while all the sources classified as AGN in M05 and detected at 5.5\,GHz are also classified as
AGN in the present paper, this is not true for sources previously classified as SFGs. The majority of
the sources classified as SFGs or unclassified by M05 and detected in our observations turn out to be classified
as AGN by our criteria.
This can be explained by the deeper infrared and X-ray observations used in the present paper,
which allowed us to detect lower-luminosity AGN than was possible with the ancillary data used in M05.
\section{Conclusions}
Using ultra-deep sub-arcsec-resolution radio observations at 5.5\,GHz obtained
with the VLA in the framework of the {\it e}MERGE legacy project, we have produced a
mosaic (including seven different pointings) with a median r.m.s. noise of
3\,$\mu$Jy\,beam$^{-1}$ and an angular resolution of $0.56\times 0.47$ arcsec$^2$
over a circular region with a diameter of 14\,arcmin centred on the
GOODS-N radio field.
The main results presented in this paper can be summarised
as follows:
\begin{itemize}
\item We extracted a catalogue containing 94 radio sources above the local $5\sigma$
threshold, with about 50 percent of the sources in the range $10-30$\,$\mu$Jy beam$^{-1}$
and less than 20 percent with peak flux $> 100$\,$\mu$Jy\,beam$^{-1}$. About
60 percent (56/94) of the radio sources are classified as resolved on the basis
of the total-to-peak flux ratio versus SNR plot.
\item
We used deep NIR catalogues, mainly \citet{2010ApJS..187..251W}, but also \citet{2011PASJ...63S.379K}
and \citet{2014ApJS..214...24S},
to identify the radio sources. We find that 88 percent (83/94)
of the radio catalogue have secure NIR identifications, with the
fraction rising to 96 percent (76/79) when only the radio sources above
$5.5\sigma$ are considered.
\item
Redshift information is available for 95 percent (79/83) of the NIR identified
radio sources (51 redshifts are spectroscopic and 28 photometric).
The median redshift is $z_{med}=1.32$.
\item
We used multi-band AGN diagnostics (IR colour-colour plots, X-ray luminosity,
radio-excess parameter and VLBI detection) to separate AGN-driven
(both radiatively efficient and inefficient) radio sources from
SFGs in a subsample of 77 radio sources with a detection in all the four
IRAC bands. We find that 79 percent (61/77) of the sources show evidence for nuclear activity
and this fraction is about 92 percent if we consider only the sources with redshift $z>1.5$.
Such a large fraction of AGN is unusual considering we are sampling a population of radio sources
with a median peak brightness of $\simeq$ 20\,$\mu$Jy\,beam$^{-1}$.
\item
Our conclusion is that we are missing SFGs because of
the limited surface brightness sensitivity, due to the limited availability of short
spacings. This favours the detection of compact kpc/sub-kpc radio sources at the expense of sources
with radio emission distributed on scales of several kpc.
Indeed, the AGN populations (both RE- and RI-AGN) have very similar
median angular sizes ($\simeq 0.2-0.3$\,arcsec for $z<1.5$), while the SFGs have larger sizes
($\simeq 0.8$\,arcsec for $z<1.5$).
Finally, AGN-hosting radio sources (RE- and RI-AGN) dominate the population of our catalogue
at all flux density levels.
\end{itemize}
The aforementioned selection effects need to be taken into account in planning future surveys.
Such effects
will be further discussed in a forthcoming paper based on
a comparative analysis of radio-selected samples with different angular resolutions and frequencies,
but with depths comparable to GOODS-N.
In that paper we will also discuss the origin of the radio emission in RE-AGN.
\section*{Acknowledgements}
DG, MB, IP acknowledge support from PRIN-INAF 2014 (PI M. Bondi).
DG and IP acknowledge support of the Ministry of Foreign Affairs and
International Cooperation, Directorate General for the Country Promotion
(Bilateral Grant Agreement ZA14GR02 - Mapping the Universe on the Pathway
to SKA)
We will use an XQuery-like core language called $\mu$XQ\xspace, introduced by
Colazzo et al. (2006). Following that paper, we distinguish between
\emph{tree variables} $\bar{x} \in \mathit{TVar}$, introduced by $\kw{for}$, and
\emph{forest variables}, $x \in \mathit{Var}$, introduced by $\kw{let}$. The
other syntactic classes of our variant of $\mu$XQ\xspace include labels $l,m,n
\in Lab$ and expressions $e \in \mathit{Expr}$; the abstract syntax of
expressions is defined by the following BNF grammar:
\begin{eqnarray*}
e &::=& \texttt{()} \mid e,e' \mid n[e] \mid w \mid x \mid \letin{x=e}{e'}\\
&\mid& \kw{true} \mid \kw{false}\mid \ifthenelse{c}{e}{e'} \mid e = e'\\
&\mid & \bar{x} \mid \bar{x}/\kw{child} \mid e::n \mid \forreturn{\bar{x} \in e}{e'}
\end{eqnarray*}
The distinguished variables $\bar{x}$ in $\forreturn{\bar{x} \in
e}{e'(\bar{x})}$ and $x$ in $\letin{x=e}{e''(x)}$ are bound in
$e'(\bar{x})$ and $e''(x)$ respectively. Here and elsewhere, we
employ common conventions such as considering expressions containing
bound variables equivalent up to $\alpha$-renaming and employing a
richer concrete syntax including, for example, parentheses.
Recursive queries can be added to $\mu$XQ\xspace in the same manner as in
XQuery without damaging the properties of the system needed in this
paper.
To simplify the presentation, we split $\mu$XQ\xspace's projection operation
$\bar{x}~\kw{child}::l$ into two expressions: child projection
($\bar{x}/\kw{child}$) which returns the children of $\bar{x}$, and node
name filtering ($e::n$) which evaluates $e$ to an arbitrary sequence
and selects the nodes labeled $n$. Thus, the ordinary child axis
expression $\bar{x}~\kw{child}::n$ is syntactic sugar for $
(\bar{x}/\kw{child})::n$ and the ``wildcard'' child axis is definable as
$\bar{x}~\kw{child}::* = \bar{x}/\kw{child}$. We also consider only one
built-in operation, string equality.
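As a small illustrative example (ours, not drawn from the original presentation of $\mu$XQ\xspace), the query
\[
\forreturn{\bar{x} \in \bar{y}~\kw{child}::a}{b[\bar{x}/\kw{child}]}
\]
iterates over the $a$-labelled children of the tree bound to $\bar{y}$ and rebuilds each as a $b$-labelled tree with the same content; after desugaring, the generator is $(\bar{y}/\kw{child})::a$.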
\begin{figure}
\begin{eqnarray*}
\mathit{children}(n[f]) &=& f\\
\mathit{children}(v) &=& \texttt{()} \quad (v \not\approx n[v'])
\end{eqnarray*}
\[\begin{array}{rclcrcl}
\SB{\kw{true}}\gamma &=& \kw{true}&&
\SB{\kw{false}}\gamma &=& \kw{false}\\
\SB{\texttt{()}}\gamma &=& \texttt{()}&&
\SB{e,e'}\gamma &=& \SB{e}\gamma, \SB{e'}\gamma\\
\SB{n[e]}\gamma &=& n[\SB{e}\gamma]&&
\SB{w}\gamma &=& w\\
\SB{x}\gamma &=& \gamma(x)&&
\SB{\bar{x}}\gamma &=& \gamma(\bar{x})\\
\end{array}\]
\begin{eqnarray*}
\SB{\letin{x=e_1}{e_2}}\gamma &=& \SB{e_2}\gamma[x:=\SB{e_1}\gamma]\\
\SB{\ifthenelse{c}{e_1}{e_2}}\gamma &=&
\left\{\begin{array}{ll}
\SB{e_1}\gamma & \SB{c}\gamma \approx \kw{true}\\
\SB{e_2}\gamma & \SB{c}\gamma \approx \kw{false}\\
\end{array}\right.\\
\SB{e = e'}\gamma &=& \left\{\begin{array}{ll}
\kw{true} & \SB{e}\gamma \approx \SB{e'}\gamma
\\
\kw{false} & \SB{e}\gamma \not\approx \SB{e'}\gamma
\end{array}\right.\\
\SB{e::n}\gamma &=& [n[v] \mid n[v] \in \SB{e}\gamma]\\
\SB{\bar{x}/\kw{child}}\gamma &=& \mathit{children}(\gamma(\bar{x}))\\
\SB{\forreturn{\bar{x} \in e_1}{e_2}}\gamma &=&[\SB{e_2}\gamma[\bar{x}:=t] \mid t \in \SB{e_1}\gamma]
\end{eqnarray*}
\caption{Semantics of query expressions.}\labelFig{query-semantics}
\fbox{$\wf{\Gamma}{e}{\tau}$}
\[
\begin{array}{c}
\infer{\wf{\Gamma}{\texttt{()}}{\texttt{()}}}{}
\quad
\infer{\wf{\Gamma}{w}{\kw{string}}}{}
\smallskip\\
\infer{\wf{\Gamma}{\kw{true}}{\kw{bool}}}{}
\quad
\infer{\wf{\Gamma}{\kw{false}}{\kw{bool}}}{}
\smallskip\\
\infer{\wf{\Gamma}{e,e'}{\tau,\tau'}}{\wf{\Gamma}{e}{\tau} & \wf{\Gamma}{e'}{\tau'}}
\quad
\infer{\wf{\Gamma}{n[e]}{n[\tau]}}{\wf{\Gamma}{e}{\tau}}
\quad
\infer{\wf{\Gamma}{e=e'}{\kw{bool}}}{\wf{\Gamma}{e,e'}{\kw{string}}}
\smallskip\\
\infer{\wf{\Gamma}{\bar{x}}{\alpha}}{\bar{x}{:}\alpha \in \Gamma}
\quad
\infer{\wf{\Gamma}{x}{\tau}}{x{:}\tau \in \Gamma}
\quad
\infer{\wf{\Gamma}{\letin{x=e_1}{e_2}}{\tau_2}}
{\wf{\Gamma}{e_1}{\tau_1} & \wf{\Gamma,x{:}\tau_1}{e_2}{\tau_2}}
\smallskip\\
\infer{\wf{\Gamma}{\ifthenelse{c}{e_1}{e_2}}{\tau_1|\tau_2}}
{\wf{\Gamma}{c}{\kw{bool}} &
\wf{\Gamma}{e_1}{\tau_1} &
\wf{\Gamma}{e_2}{\tau_2}
}
\smallskip\\
\infer{\wf{\Gamma}{\bar{x}/\kw{child}}{\tau}}{\bar{x}{:}n[\tau] \in \Gamma}
\quad
\infer{\wf{\Gamma}{e::n}{\tau'}}{\wf{\Gamma}{e}{\tau}
& \tylab{\tau}{n}{\tau'}}
\smallskip\\
\infer{\wf{\Gamma}{\forreturn{\bar{x} \in e_1}{e_2}}{\tau_2}}
{\wf{\Gamma}{e_1}{\tau_1}
&
\wfin{\Gamma}{\bar{x}}{\tau_1}{e_2}{\tau_2}}
\quad
\infer{\wf{\Gamma}{e}{\tau'}}
{\wf{\Gamma}{e}{\tau} & \tau \mathrel{{<}{:}} \tau'}
\end{array}
\]
\fbox{$\tylab{\tau}{n}{\tau'}$}
\[\begin{array}{c}
\infer{\tylab{n[\tau]}{n}{n[\tau]}}{}
\quad
\infer{\tylab{\alpha}{n}{ \texttt{()}}}{\alpha \neq n[\tau]}
\quad
\infer{\tylab{\texttt{()}}{n}{\texttt{()}}}{}
\quad
\infer{\tylab{\tau_1^*}{n}{\tau_2^*}}
{\tylab{\tau_1}{n}{\tau_2}}
\smallskip\\
\infer{\tylab{\tau_1,\tau_2}{n}{\tau_1',\tau_2'}}
{\tylab{\tau_1}{n}{\tau_1'}
&
\tylab{\tau_2}{n}{\tau_2'}}
\quad
\infer{\tylab{\tau_1|\tau_2}{n}{\tau_1'|\tau_2'}}
{\tylab{\tau_1}{n}{\tau_1'}
&
\tylab{\tau_2}{n}{\tau_2'}}
\end{array}
\]
\fbox{$\wfin{\Gamma}{\bar{x}}{\tau}{e}{\tau'}$}
\[\begin{array}{c}
\infer{\wfin{\Gamma}{\bar{x}}{\texttt{()}}{e}{\texttt{()}}}{}
\quad
\infer{\wfin{\Gamma}{\bar{x}}{\alpha}{e}{\tau}}{\wf{\Gamma,\bar{x}{:}\alpha}{e}{\tau}}
\quad
\infer{\wfin{\Gamma}{\bar{x}}{\tau_1^*}{e}{\tau_2^*}}{\wfin{\Gamma}{\bar{x}}{\tau_1}{e}{\tau_2}}
\smallskip\\
\infer{\wfin{\Gamma}{\bar{x}}{\tau_1,\tau_2}{e}{\tau_1',\tau_2'}}
{\wfin{\Gamma}{\bar{x}}{\tau_1}{e}{\tau_1'} &
\wfin{\Gamma}{\bar{x}}{\tau_2}{e}{\tau_2'}}
\smallskip\\
\infer{\wfin{\Gamma}{\bar{x}}{\tau_1|\tau_2}{e}{\tau_1'|\tau_2'}}
{\wfin{\Gamma}{\bar{x}}{\tau_1}{e}{\tau_1'} &
\wfin{\Gamma}{\bar{x}}{\tau_2}{e}{\tau_2'}}
\end{array}\]
\caption{Query well-formedness.}\labelFig{query-wf}
\end{figure}
\section{Type soundness for updates}
\if 0
We need the following standard lemmas, which we assert without proof.
\begin{lemma}[Weakening]
If $\Gamma \vdash J$ is derivable and $X \not\in FV(\Gamma)$ then
$\Gamma,X{:}\tau \vdash J$ is derivable.
\end{lemma}
\begin{lemma}[Exchange]
If $\Gamma,\Gamma' \vdash J$ is derivable then $\Gamma',\Gamma \vdash J$ is
derivable.
\end{lemma}
\begin{lemma}[Substitution]
If $\Gamma \vdash e:\tau$ and $\Gamma,X{:}\tau \vdash J$ are derivable
judgments then $\Gamma \vdash J[e/X]$ is derivable.
\end{lemma}
\fi
Type soundness for updates relies on pre-existing results for type
soundness for queries, which we repeat here:
\begin{theorem}[Query soundness]\labelThm{query-soundness-app}
If $\wf{\Gamma}{e}{\tau}$ and $\gamma \in \SB{\Gamma}$ then
$\Downarrow{\gamma}{e}{v}$ with $v \in \SB{ \tau}$.
\end{theorem}
We need the following lemmas summarizing properties of test
subtyping and of the operational behavior of iteration.
\begin{lemma}[Subtyping and tests]\labelLem{subtyping-tests}
If $t \in \SB{\alpha}$ then $\alpha \mathrel{{<}{:}} \phi$ if and only if
$t \in \SB{\phi}$.
\end{lemma}
\begin{proof}
First note that if $t \in \SB{\alpha}$ and $\alpha \mathrel{{<}{:}} \phi$
then $t \in \SB{\phi}$. For the reverse direction, suppose $t
\in \SB{\alpha}$ and $\alpha \not\mathrel{{<}{:}} \phi$, and consider all
combinations of cases.
\end{proof}
\begin{lemma}[Iteration]\labelLem{iteration}
If $\evalu{\gamma}{v_1}{\kw{iter}[s]}{v_1'}$,
$\evalu{\gamma}{v_2}{\kw{iter}[s]}{v_2'}$ are derivable then
$\evalu{\gamma}{v_1,v_2}{\kw{iter}[s]}{v_1',v_2'}$ is derivable.
\end{lemma}
\begin{theorem}[Update soundness]\labelThm{update-soundness-app}
~
\begin{enumerate}
\item If $\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$, $v \in \SB{\tau}$, and
$\gamma \in \SB{\Gamma}$, then $\evalu{\gamma}{v}{s}{v'}$ implies
$v' \in \SB{ \tau'}$.
\item If $\wfiter{\Gamma}{\tau}{s}{\tau'}$, $v \in \SB{\tau}$,
and $\gamma \in \SB{\Gamma}$, then $\evalu{\gamma}{v}{\kw{iter}[s]}{v'}$
implies $v' \in \SB{ \tau'}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Parts (1) and (2) must be proved simultaneously by induction; in
each case the induction is on the typing derivation. The cases
involving standard constructs (if, let, skip, sequencing) are
omitted. For each case, we first show the typing derivation and
then the (unique) corresponding operational derivation that can be
constructed, with remarks as appropriate.
\begin{itemize}
\item Case ($\kw{insert}$):
\[\infer{\wfupd{\Gamma}{*}{\texttt{()}}{\kw{insert}~e}{\tau}}
{\wf{\Gamma}{e}{\tau}}
\quad\infer{\evalu{\gamma}{\texttt{()}}{\kw{insert}~e}{v}}
{\Downarrow{\gamma}{e}{v}}
\]
Follows by query soundness (\refThm{query-soundness}).
\item Case ($\kw{delete}$):
\[\infer{\wfupd{\Gamma}{a}{\tau}{\kw{delete}}{\texttt{()}}}
{}
\quad\infer{\evalu{\gamma}{v}{\kw{delete}}{\texttt{()}}}
{}
\]
Immediate.
\item Case ($\kw{rename}$):
\[\infer{\wfupd{\Gamma}{*}{n'[\tau]}{\kw{rename}~n}{n[\tau]}}
{}
\quad\infer{\evalu{\gamma}{n'[v]}{\kw{rename}~n}{n[v]}}
{}
\]
Follows since $n'[v] \in \SB{n'[\tau]}$ implies $v \in \SB{\tau}$ so
$n[v] \in \SB{n[\tau]}$.
\item Case ($\kw{snapshot}$):
\[\infer{\wfupd{\Gamma}{a}{\tau}{\snapshot{x}{s}}{\tau'}}{\wfupd{\Gamma,x{:}\tau}{a}{\tau}{s}{\tau'}}
\quad\infer{\evalu{\gamma}{v}{\snapshot{x}{s}}{v'}}
{\evalu{\gamma[x:=v]}{v}{s}{v'}}
\]
Follows by induction, using the fact that $v \in \SB{\tau}$ so that
$\gamma[x:=v] \in \SB{\Gamma,x{:}\tau}$.
\item Case (test1):
\[\infer{\wfupd{\Gamma}{1}{\alpha}{\phi?s}{\tau}}
{\alpha \mathrel{{<}{:}} \phi & \wfupd{\Gamma}{1}{\alpha}{s}{\tau}}
\quad
\infer{\evalu{\gamma}{t}{\phi?s}{v}}
{t \in \SB{\phi} & \evalu{\gamma}{t}{s}{v}}
\]
This case follows immediately by appealing to \refLem{subtyping-tests}
and then the induction hypothesis.
\item Case (test2):
\[\infer{\wfupd{\Gamma}{1}{\alpha}{\phi?s}{\alpha}}
{\alpha \not\mathrel{{<}{:}} \phi}
\quad\infer{\evalu{\gamma}{t}{\phi?s}{t}}
{t \not\in \SB{\phi}}
\]
This case is immediate by \refLem{subtyping-tests}.
\item Case (children):
\[\infer{\wfupd{\Gamma}{1}{n[\tau]}{\kw{children}[s]}{n[\tau']}}
{\wfupd{\Gamma}{*}{\tau}{s}{\tau'}}
\]
\[\infer{\evalu{\gamma}{n[v]}{\kw{children}[s]}{n[v']}}
{\evalu{\gamma}{v}{s}{v'}}
\]
Clearly, since $n[v] \in \SB{n[\tau]}$, we must have $v \in \SB{\tau}$. By
induction, we have that $v' \in \SB{\tau'}$, from which it is
immediate that $n[v'] \in \SB{n[\tau']}$.
\item Case ($\kw{right}$):
\[\infer{\wfupd{\Gamma}{a}{\tau}{\kw{right}[s]}{\tau,\tau'}}
{\wfupd{\Gamma}{*}{\texttt{()}}{s}{\tau'}}
\quad\infer{\evalu{\gamma}{v}{\kw{right}[s]}{v,v'}}
{\evalu{\gamma}{\texttt{()}}{s}{v'}}
\]
By assumption, $v \in \SB{\tau}$; by induction, we have that $v' \in
\SB{\tau'}$, so $v,v' \in \SB{\tau,\tau'}$.
\item Case ($\kw{left}$): Symmetric.
\item Case ($\kw{iter}$):
\[\infer{\wfupd{\Gamma}{*}{\tau}{\kw{iter}[s]}{\tau'}}
{\wfiter{\Gamma}{\tau}{s}{\tau'}}
\]
We proceed using induction hypothesis (2).
\item Case $P(\vec{e})$:
Suppose the typing derivation is of the form
\[\infer{\wfupd{\Gamma}{a}{\sigma_1'}{P(\vec{e})}{\sigma_2}}{
\begin{array}{l}
\procdecl{P(\vec{x}:\vec{\tau})}{\sigma_1}{\sigma_2}\triangleq s \in \Delta \quad \sigma_1' \mathrel{{<}{:}} \sigma_1\\
\wf{\Gamma}{e_1}{\tau_1'} \quad \tau_1' \mathrel{{<}{:}} \tau_1\\
\quad \cdots \quad\\
\wf{\Gamma}{e_n}{\tau_n'} \quad \tau_n' \mathrel{{<}{:}} \tau_n
\end{array}
}
\]
Hence the operational semantics derivation must be of the form:
\[\infer{\evalu{\gamma}{v}{P(\vec{e})}{v'}}{
\begin{array}{l}
\procdecl{P(\vec{x}:\vec{\tau})}{\sigma_1}{\sigma_2}\triangleq s \in \Delta\\
\Downarrow{\gamma}{e_1}{v_1} \\
\cdots\\
\Downarrow{\gamma}{e_n}{v_n} \\
\evalu{\gamma[x_1:=v_1,\ldots,x_n:=v_n]}{v}{s}{v'}
\end{array}}\]
Then by query soundness and the definition of $\mathrel{{<}{:}}$ we have $v_i
\in \SB{\tau_i'} \subseteq \SB{\tau_i}$ for each $i \in
\{1,\ldots,n\}$. Hence, $\gamma[x_1:=v_1,\ldots,x_n:=v_n] \in
\SB{\Gamma,x_1{:}\tau_1,\ldots,x_n{:}\tau_n}$. Moreover, $v \in
\SB{\sigma_1'} \subseteq \SB{\sigma_1}$, again by the definition of subtyping,
so by induction, we can conclude that $v' \in \SB{\sigma_2}$ as
desired.
\item Case ($\kw{iter}$1): If the derivation is of the form
\[\infer{\wfiter{\Gamma}{\texttt{()}}{s}{\texttt{()}}}{} \quad \infer{\evalu{\gamma}{\texttt{()}}{\kw{iter}[s]}{\texttt{()}}}{}
\]
then the conclusion is immediate.
\item Case ($\kw{iter}$2): If the derivation is of the form
\[\infer{\wfiter{\Gamma}{\alpha}{s}{\tau}}
{\wfupd{\Gamma}{1}{\alpha}{s}{\tau}}
\]
then by assumption, $v \in \SB{\alpha}$. Hence, $v = t,\texttt{()}$ for some tree $t$, so
by induction hypothesis (1), we have $\evalu{\gamma}{t}{s}{v'}$ with
$v' \in \SB{\tau}$ and can derive
\[\infer{\evalu{\gamma}{t,\texttt{()}}{\kw{iter}[s]}{v',\texttt{()}}}
{\evalu{\gamma}{t}{s}{v'}
&
\infer{\evalu{\gamma}{\texttt{()}}{\kw{iter}[s]}{\texttt{()}}}{}}
\]
\item Case ($\kw{iter}$3): If the derivation is of the form
\[\infer{\wfiter{\Gamma}{\tau_1,\tau_2}{s}{\tau_1',\tau_2'}}
{\wfiter{\Gamma}{\tau_1}{s}{\tau_1'}
&
\wfiter{\Gamma}{\tau_2}{s}{\tau_2'}}
\]
then we must have $v = v_1,v_2$ where $v_i \in \SB{\tau_i}$ for $i \in \{1,2\}$; by induction we have $\evalu{\gamma}{v_i}{\kw{iter}[s]}{v_i'}$, where $v_i' \in \SB{\tau_i'}$ for $i \in \{1,2\}$.
Hence, by \refLem{iteration} we can conclude $\evalu{\gamma}{v_1,v_2}{\kw{iter}[s]}{v_1',v_2'}$ where $v_1',v_2' \in \SB{\tau_1',\tau_2'}$.
\item Case ($\kw{iter}$4): If the derivation is of the form
\[\infer{\wfiter{\Gamma}{\tau_1|\tau_2}{s}{\tau_1'|\tau_2'}}
{\wfiter{\Gamma}{\tau_1}{s}{\tau_1'}
&
\wfiter{\Gamma}{\tau_2}{s}{\tau_2'}}
\]
then we must have $v \in \SB{\tau_1} \cup \SB{\tau_2}$; the cases are symmetric, so without loss of generality suppose $v \in \SB{\tau_1}$. By
induction we have that $\evalu{\gamma}{v}{\kw{iter}[s]}{v'}$ where $v' \in \SB{\tau_1'} \subseteq \SB{\tau_1' | \tau_2'}$.
\item Case ($\kw{iter}$5): If the derivation is of the form
\[\infer{\wfiter{\Gamma}{\tau_1^*}{s}{\tau_2^*}}
{\wfiter{\Gamma}{\tau_1}{s}{\tau_2}}
\]
then since $v \in \SB{\tau_1^*}$ we must have that either $v = \texttt{()}$ (in
which case the conclusion is immediate) or $v = v_1,\ldots,v_n$ where
each $v_i \in \SB{\tau_1}$. By induction, we can obtain derivations
$\evalu{\gamma}{v_i}{\kw{iter}[s]}{v_i'}$ where $v_i' \in \SB{\tau_2}$
for each $i \in\{1,\ldots,n\}$; hence, by repeated application of
\refLem{iteration} we can conclude that
$\evalu{\gamma}{v}{\kw{iter}[s]}{v'}$, where by definition $v' =
v_1',\ldots,v_n' \in \SB{\tau_2^*}$.
\item Case ($\kw{iter}$6): If the derivation is of the form
\[
\infer{\wfiter{\Gamma}{X}{s}{\tau}}
{\wfiter{\Gamma}{E(X)}{s}{\tau}}
\]
then we have that $v \in \SB{X} = \SB{E(X)}$ so the induction
hypothesis applies directly and we can conclude that $v' \in
\SB{\tau}$.
\end{itemize}
This exhausts all cases and completes the proof.
\end{proof}
\section{Normalizing and typechecking source updates}
Figures \ref{fig:source-tc-stmt}, \ref{fig:source-tc-upd},
\ref{fig:source-tc-path}, and \ref{fig:source-tc-filt} show the main
typechecking judgments for source statements, simple updates, and
paths. The statement typechecking rules are straightforward; note
however that we require that both statements and updates start and end
with atomic types. The simple update typechecking rules each follow
the procedure outlined above. The path typechecking rules match $p$
against the input type $\alpha$. Note that paths may bind variables,
and the same variable may be bound to different types for different
cases; this is why we need to include contexts $\Gamma$ in the
substitutions $\Theta$. In certain rules, we choose fresh type
variables. The scope with respect to which we require freshness is
all type variables mentioned elsewhere in the surrounding derivation.
For many of the typechecking judgments, we need to typecheck an
expression against all of the bindings of a context-tagged
substitution $\Theta$. We therefore introduce several
\emph{simultaneous typechecking} judgments shown in
\refFig{simult-source-tc}.
\subsection{Metatheory}
For this source language, we first need some auxiliary lemmas to establish soundness.
\begin{lemma}
If $\wfupds{\Theta\oplus x}{s}{\Theta'\oplus x}$
then $\wfupds{\Theta}{\snapshot{x}{s}}{\Theta'}$.
\end{lemma}
\begin{proof}
Induction on derivation of $\wfupds{\Theta\oplus x}{s}{\Theta'\oplus x}$.
\end{proof}
We also need an auxiliary notation $\Theta|\Theta'$, which merges two context-tagged type substitutions provided their bindings and contexts match:
\[\small\begin{array}{rcl}
\emptyset | \emptyset &=& \emptyset\\
(\Theta,Z \mapsto (\Gamma\triangleright \tau) ) | (\Theta',Z \mapsto (\Gamma\triangleright\tau')) &=& (\Theta | \Theta'), Z \mapsto (\Gamma\triangleright\tau|\tau')
\end{array}\]
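For instance (an illustrative application of the definition), $(Z \mapsto (\Gamma\triangleright n[\tau]))\,|\,(Z \mapsto (\Gamma\triangleright \texttt{()})) = Z \mapsto (\Gamma\triangleright n[\tau]|\texttt{()})$; the merge is undefined when the two substitutions disagree on their domains or on the context tagged at a shared variable.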
\begin{lemma}
If $\wfs{\Theta}{e}{\kw{bool}}$ and $\wfupds{\Theta}{s}{\Theta'}$ then
$\wfupds{\Theta}{\ifthenelse{e}{s}{\kw{skip}}}{\Theta|\Theta'}$.
\end{lemma}
\begin{proof}
Induction on derivation of $\wfupds{\Theta}{s}{\Theta'}$.
\end{proof}
\begin{lemma}
\begin{enumerate}
\item If the free type variables of $\tau$ are disjoint from those
of $\Theta'$ then $\tau\subst{\Theta \uplus \Theta'} =
\tau\subst{\Theta}$.
\item If $\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}$ then
$\tau=\tau'\subst{\Theta}$.
\item If $\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta}$ then
$\alpha = \alpha'\subst{\Theta}$.
\item If $\wfupdpaths{\Theta}{p}{\Theta'}{\Theta''}$ then
$\Theta = \Theta'\subst{\Theta''}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (1), proof is by induction on the structure of types. For
part (2), proof is by induction on the structure of derivations,
using part (1) for the cases involving types $\tau_1, \tau_2$ and
$\tau_1 | \tau_2$. Parts (3) and (4) follow by simultaneous
induction on derivations.
\end{proof}
\begin{lemma}
If $\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}$ and
$\wfupds{\Theta}{s}{\Theta'}$ then
$\wfiter{\Gamma}{\tau}{\phi?s}{\tau'\subst{\Theta'}}$.
\end{lemma}
\begin{proof}
Induction on derivation of $\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}$.
\end{proof}
\begin{theorem}[Soundness]
Assume $\Gamma,\alpha,\alpha',\Theta_1$ have no free type variables $Z$.
~\begin{enumerate}
\item If $\wfupdstmt{\Gamma}{\tau}{s}{\tau'}$ then
$\wfupd{\Gamma}{1}{\tau}{\nstmt{s}}{\tau'}$.
\item If $\wfupdsimp{\Gamma}{\alpha}{u}{\alpha'}$ then
$\wfupd{\Gamma}{1}{\alpha}{\nupd{u}}{\alpha'}$.
\item If $\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta}$ and
$\wfupds{\Theta}{s}{\Theta'}$ then
$\wfupd{\Gamma}{1}{\alpha}{\npath{p}(s)}{\alpha'\subst{\Theta'}}$.
\item If $\wfupdpaths{\Theta_1}{p}{\Theta_2}{\Theta_3}$ and
$\wfupds{\Theta_3}{s}{\Theta_3'}$ then
$\wfupds{\Theta_1}{\npath{p}(s)}{\Theta_2\subst{\Theta_3'}}$.
\end{enumerate}
\end{theorem}
Now, to prove completeness, we need lemmas establishing that the earlier lemmas are invertible:
\begin{lemma}
If $\wfupds{\Theta}{\snapshot{x}{s}}{\Theta'}$ then
$\wfupds{\Theta\oplus x}{s}{\Theta'\oplus x}$.
\end{lemma}
\begin{proof}
Induction on the structure of derivations of
$$\wfupds{\Theta}{\snapshot{x}{s}}{\Theta'}$$ followed by inversion.
\end{proof}
\begin{lemma}
If $\wfupds{\Theta}{\ifthenelse{e}{s}{\kw{skip}}}{\Theta'}$ then there exists
$\Theta''$ such that $\Theta' = \Theta|\Theta''$ and
$\wfs{\Theta}{e}{\kw{bool}}$ and $\wfupds{\Theta}{s}{\Theta''}$.
\end{lemma}
\begin{proof}
Induction on the structure of derivations of
$$\wfupds{\Theta}{\ifthenelse{e}{s}{\kw{skip}}}{\Theta'}$$ followed by inversion.
\end{proof}
\begin{lemma}
If $\wfiter{\Gamma}{\tau}{\phi?s}{\tau'}$ then there exist $\tau'',
\Theta,\Theta'$ such that $\tau''\subst{\Theta'} = \tau'$,
$\wfupdfilt{\Gamma}{\tau}{\phi}{\tau''}{\Theta}$ and
$\wfupds{\Theta}{s}{\Theta'}$.
\end{lemma}
\begin{proof}
Induction on the structure of derivations of
$$\wfiter{\Gamma}{\tau}{\phi?s}{\tau'}$$ and then using inversion and
properties of substitutions.
\end{proof}
\begin{theorem}[Completeness]
~
\begin{enumerate}
\item If $\wfupd{\Gamma}{1}{\tau}{\nstmt{s}}{\tau'}$ then
$\wfupdstmt{\Gamma}{\tau}{s}{\tau'}$.
\item If $\wfupd{\Gamma}{1}{\alpha}{\nupd{u}}{\alpha'}$ then
$\wfupdsimp{\Gamma}{\alpha}{u}{\alpha'}$.
\item If $\wfupd{\Gamma}{1}{\alpha}{\npath{p}(s)}{\alpha'}$ then there
  exist $\alpha'',\Theta,\Theta'$ such that $\alpha' =
  \alpha''\subst{\Theta'}$,
  $\wfupdpath{\Gamma}{\alpha}{p}{\alpha''}{\Theta}$, and
  $\wfupds{\Theta}{s}{\Theta'}$.
\item If $\wfupds{\Theta}{\npath{p}(s)}{\Theta'}$ then there exist
$\Theta_1,\Theta_2,\Theta_2'$ such that $\Theta' =
\Theta_1\subst{\Theta_2'}$,
$\wfupdpaths{\Theta}{p}{\Theta_1}{\Theta_2}$, and
$\wfupds{\Theta_2}{s}{\Theta_2'}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Parts (3) and (4) follow by simultaneous induction using previous
lemmas. Parts (1) and (2) then follow by simultaneous induction,
using parts (3) and (4).
\end{proof}
\begin{figure}
\fbox{$\wfupdstmt{\Gamma}{\tau}{s}{\tau'}$}
\[\begin{array}{c}
\infer{\wfupdstmt{\Gamma}{\tau}{s_1;s_2}{\tau''}}{\wfupdstmt{\Gamma}{\tau}{s_1}{\tau'} & \wfupdstmt{\Gamma}{\tau'}{s_2}{\tau''}}
\smallskip\\
\infer{\wfupdstmt{\Gamma}{\tau}{\IFTHEN{e}{s}}{\tau|\tau'}}
{\wf{\Gamma}{e}{\kw{bool}} & \wfupdstmt{\Gamma}{\tau}{s}{\tau'}}
\smallskip\\
\infer{\wfupdstmt{\Gamma}{\tau}{\LETIN{x=e}{s}}{\tau'}}{\wf{\Gamma}{e}{\tau_0} & \wfupdstmt{\Gamma,x{:}\tau_0}{\tau}{s}{\tau'}}
\smallskip\\
\infer{\wfupdstmt{\Gamma}{\alpha}{u}{\alpha'}}{\wfupdsimp{\Gamma}{\alpha}{u}{\alpha'}}
\end{array}\]
\caption{Typechecking rules for compound updates}\labelFig{source-tc-stmt}
\end{figure}
\begin{figure}
\fbox{$\wfupdsimp{\Gamma}{\alpha}{u}{\alpha'}$}
\[\small\begin{array}{c}
\infer{\wfupdsimp{\Gamma}{\alpha}{\insertvalue{\kw{BEFORE}}{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} & \wfupds{\Theta}{\kw{left}[\kw{insert}~e]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\insertvalue{\kw{AFTER}}{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} & \wfupds{\Theta}{\kw{right}[\kw{insert}~e]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\insertinto{\kw{FIRST}}{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} & \wfupds{\Theta}{\kw{children}[\kw{left}[\kw{insert}~e]]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\insertinto{\kw{LAST}}{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{children}[\kw{right}[\kw{insert}~e]]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\delete{p}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{delete}}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\deletefrom{p}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{children}[\kw{delete}]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\rename{p}{n}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{rename}~n}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\replace{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{delete};\kw{insert}~e}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\replacein{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{children}[\kw{delete};\kw{insert}~e]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\update{p}{s}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupdstmts{\Theta}{s}{\Theta'}}
\end{array}\]
\caption{Typechecking rules for simple updates}\labelFig{source-tc-upd}
\end{figure}
\begin{figure}
\fbox{$\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta}$}
\[\begin{array}{c}
\infer{\wfupdpath{\Gamma}{\alpha}{.}{\alpha}{\emptyset}}{}
\smallskip\\
\infer{\wfupdpath{\Gamma}{\alpha}{p/p'}{\alpha'\subst{\Theta_2}}{\Theta_2'}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta_1}
&
\wfupdpaths{\Theta_1}{p'}{\Theta_2}{\Theta_2'}}
\smallskip\\
\infer{\wfupdpath{\Gamma}{\alpha}{p[e]}{\alpha'\subst{\maybe{\Theta}}}{\Theta}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfs{\Theta}{e}{\kw{bool}}}
\smallskip\\
\infer{\wfupdpath{\Gamma}{\alpha}{x ~\kw{as}~p}{\alpha'}{\Theta\oplus x}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta}}
\smallskip\\
\infer{\wfupdpath{\Gamma}{n[\tau]}{\phi}{n[\tau']}{\Theta}}
{\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}}
\end{array}\]
\caption{Typechecking rules for paths}\labelFig{source-tc-path}
\end{figure}
\begin{figure}
\fbox{$\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}$}
\[\begin{array}{c}
\infer{\wfupdfilt{\Gamma}{\texttt{()}}{\phi}{\texttt{()}}{\emptyset}}{}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\alpha}{\phi}{\alpha}{\emptyset}}{\alpha \not\mathrel{{<}{:}} \phi}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\alpha}{\phi}{Z}{Z \mapsto (\Gamma \triangleright\alpha)}}{\alpha \mathrel{{<}{:}} \phi & \text{$Z$ fresh}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\tau_1,\tau_2}{\phi}{(\tau_1',\tau_2')}{\Theta_1 \uplus \Theta_2}}
{\wfupdfilt{\Gamma}{\tau_1}{\phi}{\tau_1'}{\Theta_1}
&
\wfupdfilt{\Gamma}{\tau_2}{\phi}{\tau_2'}{\Theta_2}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\tau_1|\tau_2}{\phi}{\tau_1'|\tau_2'}{\Theta_1 \uplus \Theta_2}}
{\wfupdfilt{\Gamma}{\tau_1}{\phi}{\tau_1'}{\Theta_1}
&
\wfupdfilt{\Gamma}{\tau_2}{\phi}{\tau_2'}{\Theta_2}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\tau_1^*}{\phi}{\tau_2^*}{\Theta}}
{\wfupdfilt{\Gamma}{\tau_1}{\phi}{\tau_2}{\Theta}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{X}{\phi}{\tau'}{\Theta}}
{\wfupdfilt{\Gamma}{E(X)}{\phi}{\tau'}{\Theta}}
\end{array}\]
\caption{Typechecking rules for path filters}\labelFig{source-tc-filt}
\end{figure}
\begin{figure}
\fbox{$\wfs{\Theta}{e}{\tau}$}
\[\begin{array}{c}
\infer{\wfs{\emptyset}{e}{\tau}}{}
\quad
\infer{\wfs{\Theta,Z \mapsto (\Gamma \triangleright \alpha)}{e}{\tau}}
{\wfs{\Theta}{e}{\tau} &
\wf{\Gamma}{e}{\tau}}
\end{array}\]
\fbox{$\wfupds{\Theta}{s}{\Theta'}$}
\[\begin{array}{c}
\infer{\wfupds{\emptyset}{s}{\emptyset}}{}
\quad
\infer{\wfupds{\Theta,Z \mapsto (\Gamma \triangleright \alpha)}{s}{\Theta',Z \mapsto (\Gamma \triangleright \tau)}}{\wfupds{\Theta}{s}{\Theta'} &\wfupd{\Gamma}{1}{\alpha}{s}{\tau}}
\end{array}\]
\fbox{$\wfupdstmts{\Theta}{s}{\Theta'}$}
\[\small\begin{array}{c}
\infer{\wfupdstmts{\emptyset}{s}{\emptyset}}{}
\quad
\infer{\wfupdstmts{\Theta,Z \mapsto (\Gamma \triangleright \tau)}{s}{\Theta',Z \mapsto (\Gamma \triangleright \tau')}}{\wfupdstmts{\Theta}{s}{\Theta'} & \wfupdstmt{\Gamma}{\tau}{s}{\tau'}}
\end{array}\]
\fbox{$\wfupdpaths{\Theta}{p}{\Theta'}{\Theta''}$}
\[\begin{array}{c}
\infer{\wfupdpaths{\emptyset}{p}{\emptyset}{\emptyset}}{}
\smallskip\\
\infer{\wfupdpaths{\Theta,Z \mapsto (\Gamma \triangleright \alpha)}{p}{\Theta',Z \mapsto (\Gamma \triangleright \alpha')}{\Theta'' \uplus \Theta'''}}
{\wfupdpaths{\Theta}{p}{\Theta'}{\Theta''} &
\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta'''}}
\end{array}\]
\caption{Simultaneous typechecking judgments}\labelFig{simult-source-tc}
\end{figure}
\section{Introduction}
XQuery is a World Wide Web Consortium (W3C) standard, typed, purely
functional language intended as a high-level interface for querying
XML databases. It is meant to play a role for XML databases analogous
to that played by SQL for relational databases. The operational
semantics and type system of XQuery 1.0 have been formalized
\cite{xquery-semantics-w3c-20070123}, and the W3C recently endorsed the
formal semantics as a \emph{recommendation}, the most mature phase for
W3C standards.
Almost all useful databases change over time. The SQL standard
describes a \emph{data manipulation language} (DML), or, more briefly,
\emph{update language}, which facilitates the most common patterns of
changes to relational databases: insertion, deletion, and in-place
modification of rows, as well as addition or deletion of columns or
tables. Despite the effectful nature of these operations, their
semantics is still relatively clear and high-level. SQL updates are
relatively inexpressive, but they are considered sufficient for most
situations, as witnessed by the fact that in many SQL databases, data
can \emph{only} be updated using SQL updates in transactions.
Moreover, the presence of SQL updates does no damage to the purely
functional nature of SQL queries: updates are syntactically distinct
from queries, and the language design and transactional mechanisms
ensure that aliasing difficulties cannot arise, even when an update
changes the structure of the database (for example, if a column is
added or removed from a table).
The XQuery standard lacks update language features analogous to SQL's
DML. While XML querying has been the subject of a massive research
and development effort, high-level XML update languages have received
comparatively little attention. Many programming languages for
transforming \emph{immutable} XML trees have been studied, including
XML stylesheets (XSLT \cite{clark99xslt}), and XML programming
languages such as XDuce, CDuce, Xtatic, or OCamlDuce
\cite{hosoya03toit,benzaken03icfp,DBLP:conf/planX/GapeyevGP06,frisch06icfp}.
However, these languages are not well-suited to specifying updates.
Updates typically change a small part of the document and leave most
of the data fixed. To simulate this behavior by transforming
immutable XML values one must explicitly describe how the
transformation preserves unchanged parts of the input. Such
transformations are typically executed by building a new version of
the document and then replacing the old one. This is inefficient when
most of the data is unchanged. Worse, XML databases may employ
auxiliary data structures (such as indices) or invariants (such as
validity or key constraints) which need to be maintained when an
update occurs, and updating a database by deleting its old version and
loading a new version forces indices and constraints to be
re-evaluated for the whole database, rather than incrementally.
Instead, therefore, several languages specifically tailored for
updating XML data \emph{in-place} have been proposed. While the
earliest proposal, called XUpdate \cite{laux00xmldb}, was relatively
simple and has been widely adopted, it lacks first-class conditional
and looping constructs. These features have been incorporated into
more recent proposals~\cite{DBLP:conf/sigmod/TatarinovIHW01,sur04planx,DBLP:conf/edbtw/GhelliRS06,chamberlin06ximep,ghelli07dbpl}.
The W3C is also developing a standard XQuery Update
Facility~\cite{xquery-update-w3c-10-20080314}.
Although they have some advantages over XUpdate, we argue that these
approaches all have significant drawbacks, because they unwisely
combine imperative update operations with XQuery's purely-functional
query expressions. We shall focus our critique on XQuery!, since it
is representative of several other proposals, including the W3C's XQuery
Update Facility.
A defining principle of XQuery! is that update operations should be
``fully compositional'', which \citet{DBLP:conf/edbtw/GhelliRS06} take
to mean that an update operation should be allowed anywhere in an
XQuery expression. Thus, the atomic update operations such as
insertion, deletion, replacement, renaming, etc. may all appear
within XQuery's query expressions. Node-identifiers can be used as
mutable references. To avoid nondeterminism, XQuery!\xspace fixes a
left-to-right evaluation order and employs a two-phase semantics that
first collects updates into a \emph{pending update list} by evaluating
an expression without altering the data, and then performs all of the
updates at once. An additional operator called \verb|snap| provides
programmer control over when to apply pending updates.
XQuery!\xspace seems to sacrifice most of the good properties of XQuery.
Most equational laws for XQuery expressions are invalid for
XQuery!\xspace, and the semantics is highly sensitive to arbitrary
choices. For example, consider the following XQuery!\xspace update.
\begin{verbatim}
for $x in $doc//a,
$y in $doc//b
return (insert $y into $x, delete $x//c)
\end{verbatim}
Its behavior on two trees is shown in \refFig{xquerybang}. In the
first example, consider input tree (a) with regular structure.
Running the above update on this tree yields the update sequence:
\begin{verbatim}
insert(1,<b><d/></b>); delete(4);
insert(1,<b><e/></b>); delete(4);
insert(2,<b><d/></b>); delete(6);
insert(2,<b><e/></b>); delete(6);
\end{verbatim}
The numbers refer to the node identifiers shown in
\refFig{xquerybang}(a) as superscripts. When these updates are
performed, the result is output (b). Note that each subtree labeled
$a$ in the output contains three $b$-subtrees, one corresponding to
the original $b$ and one for each occurrence of $b$ in the tree. In
the second example, tree (c) is transformed to (d) via updates
\begin{verbatim}
insert(1,<b><c/></b>); delete(3);
insert(1,<b><a><c/></a></b>); delete(3);
insert(5,<b><c/></b>); delete(6);
insert(5,<b><a><c/></a></b>); delete(6);
\end{verbatim}
Observe that both occurrences of $a$ have as subtrees both
occurrences of $b$ in the input. This is because the snapshot
semantics of XQuery!\xspace first collects the updates to be performed,
then performs them in sequence. Inserts always copy data from the
original version of the document, whereas deletes mark nodes in the
new version of the document for deletion. This is why some
occurrences of $c$ remain below occurrences of $a$. Although this is
not an update that a user would typically write, an implementation,
type system, or static analysis must handle all of the expressions in
the language, not just the well-behaved ones.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{xquerybang1}
\end{center}
\caption{XQuery!\xspace examples}\labelFig{xquerybang}
\end{figure}
Furthermore, the XQuery!\xspace approach seems quite difficult to
statically typecheck. There are several reasons for this. First, as
the examples in \refFig{xquerybang} show, the structure of the result
can depend on the data in ways difficult to predict using types.
Second, XQuery!\xspace also permits side-effects to be made visible
before the end of an update, using a ``snapshot'' operator called
\verb|snap|. The \verb|snap| operator forces all of the delayed
side-effects of an expression to be performed. This means that the
values of variables may change during an update, so it would be
necessary to update the types of variables to typecheck such updates.
Since variables may alias parts of the document, this requires a
nontrivial alias analysis.
We argue that the combination of features considered in XQuery!\xspace
and similar proposals are unnecessarily complex for the problem of
updating XML databases. While high expressiveness is certainly a
reasonable design goal, we believe that for XML database updates,
expressiveness must be balanced against other concerns, such as
semantic transparency and static typechecking. We believe that it is
worthwhile to consider an alternative approach that sacrifices
expressiveness for semantic clarity and the ability to typecheck and
analyze \emph{typical} updates \emph{easily}.
In previous work~\cite{DBLP:conf/planX/Cheney07}, we introduced a core
\emph{FunctionaL Update language for XML}, called
\textsc{Flux}\xspace.\footnote{originally \textsc{Lux}\xspace, for ``Lightweight Updates for XML''.}
\textsc{Flux}\xspace is \emph{functional} in the same sense that imperative
programming in Haskell using monads\footnote{Technically, \textsc{Flux}\xspace's
approach to typechecking updates is closer to
\emph{arrows}~\cite{hughes00scp}; however, we will not investigate
this relationship in detail here.} is functional. Side-effects may
be present, but they are encapsulated using syntactic and type
constraints, so that queries remain purely functional. \textsc{Flux}\xspace provides
sufficient expressive power to handle most common examples of XML
database-style updates (e.g. all of the relational use cases in the
XQuery Update Facility Requirements
\cite{xquery-update-requirements-w3c-062006}), while avoiding
complications resulting from the interaction of unrestricted iteration
and unconstrained side-effects.
\textsc{Flux}\xspace admits a relatively simple, one-pass operational semantics, so
side-effects can be performed eagerly; it can also be typechecked
using regular expression types, extending previous work on
typechecking
XQuery~\cite{hosoya05toplas,colazzo06jfp,xquery-semantics-w3c-20070123}.
The decidability of typechecking for the core language (with recursive
types and functions) was later established by \citet{cheney08esop}
along with a related result for an XQuery core language.
However, our preliminary proposal~\cite{DBLP:conf/planX/Cheney07} had
several limitations. This paper presents an improved design. In
particular, our contributions relative to prior work are:
\begin{compactitem}
\item We extend the core language with recursive types and update
procedures and provide a sound type system.
\item We adapt the idea of \emph{path-error analysis}, introduced by
\citet{colazzo06jfp} for a core XML query language, to the setting
of updates, and design a correct static analysis for conservatively
under-approximating update path-errors.
\item We formalize a high-level \textsc{Flux}\xspace source language and show how to
translate to core \textsc{Flux}\xspace. We also present a source-level type system
and prove that a source update is well-formed \emph{if and only if}
its translation is well-formed.
\end{compactitem}
The structure of the rest of this paper is as follows.
\refSec{examples} briefly recapitulates the \textsc{Flux}\xspace source language
introduced in \cite{DBLP:conf/planX/Cheney07} with a few examples.
\refSec{formalization} formalizes the core language and its
operational semantics; its type system is presented and proved sound
in \refSec{types}. \refSec{normalization} presents the translation
from the high-level \textsc{Flux}\xspace language to Core \textsc{Flux}\xspace and shows how to
typecheck high-level updates. \refSec{path-analysis} presents the
path-error analysis.
We provide a more detailed comparison with related approaches in
\refSec{related}; \refSec{extensions} presents extensions and future
work; and \refSec{concl} concludes.
\confonly{Certain definitions and proofs have been omitted from this paper due
to space limitations, but can be found in the companion technical
report~\cite{flux-tr}.}
\tronly{Certain definitions and proofs have been placed in appendices.}
\section{Overview and examples}\labelSec{examples}
\subsection{Syntax}
As with many similar languages, particularly SQL,
XQuery~\cite{xquery-semantics-w3c-20070123}, and
CPL+~\cite{DBLP:conf/ssdbm/LiefkeD99}, we will introduce a high-level, readable
source language syntax which we will translate to a much simpler core
language. We will later formalize the operational semantics and type
system for the core language. In what follows, we assume familiarity
with XQuery and XPath syntax, and with XDuce-style regular expression
types.
The high-level syntax of \textsc{Flux}\xspace updates is shown in
\refFig{lux-concrete}. XQuery variables $\mathit{Var}$ are typically written
\verb|$x|, \verb|$y|, etc. We omit the syntactic class $\mathit{Expr}$
consisting of ordinary XQuery expressions, respectively. Statements
$\mathit{Stmt}$ include conditionals, let-binding, sequential composition, and
update statements $\mathit{Upd}$, which may be guarded by a $\kw{WHERE}$-clause.
We use braces to parenthesize statements $\{\mathit{Stmt}\}$. Updates $\mathit{Upd}$
come in two flavors, \emph{singular} and \emph{plural}. Singular
updates expect a single tree and are executed once for each selected
tree; plural updates operate on arbitrary sequences and are executed
on the children of each selected tree. Singular insertions
(\verb|INSERT BEFORE/AFTER|) insert a value before or after each node
selected by the path expression, while plural insertions
(\verb|INSERT AS FIRST/LAST INTO|) insert a value at the beginning or
end of the child-list of each selected node. Similarly, singular
deletes (\verb|DELETE|) delete individual nodes selected by the given
$\mathit{Path}$, whereas plural deletes (\verb|DELETE| \verb|FROM|) delete the
child-list of each selected node. Singular replacement
\verb|REPLACE WITH| replaces a subtree, while plural replacement
\verb|REPLACE| \verb|IN| replaces the content of a path with new
content. The renaming operation \verb|RENAME TO| is always singular;
it renames a subtree's label. The $\kw{UPDATE}~\mathit{Path}~\kw{BY}~\mathit{Stmt}$
operation is singular; it applies $\mathit{Stmt}$ to each tree matching
$\mathit{Path}$. Update procedure declarations are not shown but can be added
easily to the source language.
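The singular/plural contrast is easiest to see on deletes; for
example, over a hypothetical \verb|authors| element:
\begin{verbatim}
DELETE authors/author
DELETE FROM authors
\end{verbatim}
The first removes each selected \verb|author| element itself; the
second removes the child-list of \verb|authors|, leaving an empty
\verb|authors| element in place.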
\begin{figure}[tb]
\[\begin{array}{rcl}
\mathit{Stmt} &::=& \mathit{Upd}~[\kw{WHERE}~\mathit{Expr}] \\
&|& \IFTHEN{\mathit{Expr}}{\mathit{Stmt}}\\
&|& \mathit{Stmt};\mathit{Stmt}'\\
&|& \LETIN{\mathit{Var}:=\mathit{Expr}}{\mathit{Stmt}}\\
&|& \{\mathit{Stmt}\}\\
\mathit{Upd}&::=& \insertvalue{(\kw{BEFORE}|\kw{AFTER})}{\mathit{Path}}{\mathit{Expr}}\\
&|& \insertinto{(\kw{LAST}|\kw{FIRST})}{\mathit{Path}}{\mathit{Expr}}\\
&|& \delete{[\kw{FROM}]~\mathit{Path}}\\
&|& \rename{\mathit{Path}}{\mathit{Lab}}\\
&|& \replace{[\kw{IN}]~\mathit{Path}}{\mathit{Expr}}\\
&|& \update{\mathit{Path}}{\mathit{Stmt}}\\
\mathit{Path}&::=& . \mid \mathit{Lab} \mid \mathtt{node}() \mid \mathtt{text}() \\
&\mid& \mathit{Path}/\mathit{Path} \mid \mathit{Var}~\kw{AS}~\mathit{Path} \mid \mathit{Path}[\mathit{Expr}]
\end{array}\]
\caption{Concrete syntax of \textsc{Flux}\xspace updates.}\labelFig{lux-concrete}
\end{figure}
The \emph{path expressions} $\mathit{Path}$ in \textsc{Flux}\xspace are based on the XPath
expressions that are allowed in XQuery. Paths include the empty path
$.$, sequential composition $\mathit{Path}/\mathit{Path}$, the XPath child axis tests
(\verb|text()|, node labels $\mathit{Lab}$, and \verb|node()|), filters
($\mathit{Path}[\mathit{Expr}]$), and variable binding steps ($\mathit{Var}~ \kw{AS}~ \mathit{Path}$).
The ``as'' path expression \verb|$x AS Path| (not present in XPath)
binds the subtree matching \verb|Path| to \verb|$x| in each iteration
of a path update. We often write $*$ instead of $\mathtt{node}()$. We only
describe the syntax of paths used to perform \emph{updates}; arbitrary
XPath or XQuery expressions may be used in subqueries $\mathit{Expr}$.
Both the \textsc{Flux}\xspace source language described here and the core language
introduced later are case-insensitive with respect to keywords (like
XQuery); however, we use uppercase for the source language and
lowercase for the core language to prevent confusion.
\subsection{Execution model, informally}
In general, an update is evaluated as follows: the path expression is
evaluated, yielding a \emph{focus selection}, that is, a set of parts of the
updatable store on which the update \emph{focuses}. The
\verb|WHERE|-clause, if present, is evaluated with respect to the
variables bound in the path and if the result is \verb|true| then the
corresponding basic update operation (insert, delete, etc.) is
applied to each element of this set in turn. Order of evaluation is
unspecified and the semantics is consistent with parallel evaluation
of iterations.
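As a small illustration, over hypothetical data
\verb|<items><item><note>old</note></item>|%
\verb|<item><note>new</note></item></items>|, consider:
\begin{verbatim}
UPDATE $x AS items/item BY DELETE note
WHERE $x/note/text() = "old"
\end{verbatim}
The path focuses on each \verb|item| in turn, binding it to \verb|$x|;
the \verb|WHERE|-clause is evaluated once per binding; and the delete
is performed only beneath the first \verb|item|, which becomes empty,
while the second is left unchanged.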
Unlike most other proposals, in \textsc{Flux}\xspace, arbitrary XPath or XQuery
expressions cannot be used to select foci. If this were allowed, it
would be easy to construct examples for which the result of an update
depends on the order in which the focus selection is processed. For
example, suppose the document is of the form \verb|<a><b/></a>|. If
the following update were allowed:
\begin{verbatim}
UPDATE //* BY { DELETE a/b; RENAME * TO c }
\end{verbatim}
then the result would depend on the order in which the updates are
applied. Two possible results are \verb|<c/>| and
\verb|<c><c/></c>|. This nondeterministic behavior is difficult to
typecheck. For this reason, we place severe restrictions on the path
expressions that may be used to select foci.
We identify two key properties which help to ensure that updates are
deterministic and can be typechecked. First, \emph{an update can only
modify data at or beneath its current focus}. We call this
the \emph{side-effect isolation property}. For example, navigating to
the focused value's parent and then modifying a sibling is not
allowed. In addition, whenever we perform an iterative update
traversing a number of nodes, we require that \emph{the result of an
iterative update is independent of the order in which the nodes are
updated}. We call this the \emph{traversal-order independence
property}.
To ensure isolation of side effects and traversal-order independence,
it is sufficient to restrict the XPath expressions that can be used to
select foci. Specifically, only the child axis\footnote{The attribute
axis can also be handled easily, but the descendant, parent, and
sibling axes seem nontrivial to handle.} is allowed, and absolute
paths starting with $/$ cannot be used to backtrack to the root of the
document in order to begin iterating over some other part. This
ensures that only descendants of a given focused value can be selected
as the new focus and that a selection contains no overlapping parts.
Consequently, the side effects of an update are confined to the
subtrees of its focus, and the result of an iteration is independent of
the traversal order. This keeps the semantics deterministic and helps
make typechecking feasible.
\subsection{Examples}
Suppose we start an XML database with no pre-loaded data; its type is
$db[\texttt{()}]$. We want to create a small database listing books and
authors. The following \textsc{Flux}\xspace updates accomplish this:
\begin{verbatim}
U1 : INSERT AS LAST INTO db VALUE books[];
INSERT AS LAST INTO db VALUE authors[]
\end{verbatim}
After this update, the database has type
\begin{verbatim}
books[],authors[]
\end{verbatim}
Suppose we want to load some XML data into the database. Since XML
text is included in XQuery's expression language, we can just do the
following:
\begin{verbatim}
U2 : INSERT INTO books VALUE
<book><author>Charles Dickens</author>
<title>A Tale of Two Cities</title>
<year>1858</year></book>
<book><author>Lewis Carroll</author>
<title>Alice in Wonderland</title>
<year>??</year></book>;
INSERT INTO authors VALUE
<author><name>Charles Dickens</name>
<born>1812</born>
<died>1870</died></author>
<author><name>Lewis Carroll</name>
<born>1832</born>
<died>1898</died></author>
\end{verbatim}
This results in a database with type
\begin{verbatim}
books[ book[author[string],title[string],
year[string]]* ],
authors[ author[name[string],born[string],
died[string]]* ]
\end{verbatim}
The data we initially inserted had some missing dates. We can fill
these in as follows:
\begin{verbatim}
U3 : UPDATE $x AS books/book BY
REPLACE IN year WITH "1859"
WHERE $x/title/text() = "A Tale of Two Cities"
U4 : UPDATE $x AS books/book BY
REPLACE IN year WITH "1865"
WHERE $x/title/text() = "Alice in Wonderland"
\end{verbatim}
Note that here, we use an XQuery expression \verb|$x/title/text()| for
the $\kw{WHERE}$-clause. Both updates leave the structure of the database unchanged.
We can add an element to each \verb|book| in \verb|books| as follows:
\begin{verbatim}
U5 : INSERT AS LAST INTO books/book
VALUE publisher["Grinch"]
\end{verbatim}
After \verb|U5|, the books database has type
\begin{verbatim}
books[ book[author[string],title[string],
year[string],publisher[string]]* ]
\end{verbatim}
Now perhaps we want to add a co-author; for example, perhaps Lewis
Carroll collaborated on ``Alice in Wonderland'' with Charles
Dickens. This is not as easy as adding the publisher field to the end
because we need to select a particular node to insert before or after.
In this case we happen to know that there is only one author, so we
can insert after that; however, this would be incorrect if there were
multiple authors, and we would have to do something else (such as
inserting before the title).
\begin{verbatim}
U6 : UPDATE $x AS books/book BY
INSERT AFTER author
VALUE <author>Charles Dickens</author>
WHERE $x/title/text() = "Alice in Wonderland"
\end{verbatim}
Now the \verb|books| part of the database has
the type:
\begin{verbatim}
books[ book[author[string]*,title[string],
year[string],publisher[string]]* ]
\end{verbatim}
Now that some books have multiple authors, we might want to change the
flat author lists to nested lists:
\begin{verbatim}
U7 : REPLACE $x AS books/book WITH
<book><authors>{$x/author}</authors>
{$x/title}{$x/year}{$x/publisher}</book>
\end{verbatim}
This visits each book and changes its structure so that the authors
are grouped into an \verb|authors| element. The resulting
\verb|books| subtree has type:
\begin{verbatim}
books[ book[authors[author[string]* ],title[string],
year[string],publisher[string]]* ]
\end{verbatim}
Suppose we later decide that the publisher field is unnecessary after
all. We can get rid of it using the following update:
\begin{verbatim}
U8 : DELETE books/book/publisher
\end{verbatim}
The \verb|books| subtree in the result has type
\begin{verbatim}
books[ book[authors[author[string]* ],
title[string],year[string]]* ]
\end{verbatim}
Now suppose Lewis Carroll retires and we wish to remove all of
his books from the database.
\begin{verbatim}
U9 : DELETE $x AS books/book
WHERE $x/authors/author/text() = "Lewis Carroll"
\end{verbatim}
This update does not modify the type of the database. Finally, we can
delete a top-level document as follows:
\begin{verbatim}
U10 : DELETE authors
\end{verbatim}
\subsection{Non-design goals}
There are several things that other proposals for updating XML do that
we make no attempt to do. We believe that these design choices are
well-motivated for \textsc{Flux}\xspace's intended application area, database
updates.
\textbf{Node identity:} The XQuery data model provides identifiers for
all nodes. Many XML update proposals take node identities into
account and can use them to update parts of the tree ``by
reference''. In contrast, \textsc{Flux}\xspace's semantics is purely value-based.
Although there are currently no examples involving node identity for
XQuery database updates in the W3C's requirements documents
\cite{xquery-update-requirements-w3c-062006}, node identity is
important in other XML update settings such as the W3C's Document
Object Model (DOM). We believe it is possible to adapt \textsc{Flux}\xspace to a
data model with node identity as long as the identifiers are not used
as mutable references.
\textbf{Pattern matching:} Many transformation/query languages
(e.g. \cite{hosoya05toplas,clark99xslt}) and some update languages
(e.g. \cite{DBLP:conf/ssdbm/LiefkeD99,DBLP:conf/ideas/WangLL03})
allow defining transformations by \emph{pattern matching}, that is,
matching tree patterns against the data. Pattern matching is very
useful for XML transformations in Web programming (e.g. converting an
XML document into HTML), but we believe it is not as important for
typical XML database updates. We have not considered general pattern
matching in \textsc{Flux}\xspace, in order to keep the type system and operational
semantics as simple as possible.
\textbf{Side-effects in queries:} Several motivating examples for
XQuery!\xspace \cite{DBLP:conf/edbtw/GhelliRS06} and XQueryP
\cite{chamberlin06ximep} depend on the ability to perform side-effects
within queries. Examples include logging accesses to particular data
or profiling or debugging an XQuery program. \textsc{Flux}\xspace cannot be used for
these applications. However, it is debatable whether adding
side-effects to XQuery is the best way to support logging, profiling,
or debugging for XQuery.
\section{Core language formalization}\labelSec{formalization}
The high-level update language introduced in the last section is
convenient for users, but its operations are complex, overlapping, and
difficult to typecheck. Just as for XQuery and many other languages,
it is more convenient to define a core language with orthogonal
operations whose semantics and typing rules are simple and transparent, and
then translate the high-level language to the core language.\footnote{Such
core languages are also typically easier to optimize, though we do not
consider optimization in this paper.} We first review the XML data
model, regular expression types, and the $\mu$XQ\xspace core query language of
\citet{colazzo06jfp}.
\subsection{XML values and regular expression types}
Following \citet{colazzo06jfp}, we distinguish between \emph{tree
values} $t \in \mathit{Tree}$, which include strings $w \in \Sigma^*$ (for
some alphabet $\Sigma$), boolean values $\kw{true},\kw{false} \in \mathit{Bool}$,
and singleton trees $n[v]$ where $n \in \mathit{Lab}$ is a node label; and
\emph{(forest) values} $v \in \mathit{Val} = \mathit{Tree}^*$, which are sequences of
tree values:
\[\begin{array}{lrcl}
\text{Tree values} & t &::=& n[v] \mid w \mid \kw{true} \mid \kw{false}
\smallskip\\
\text{(Forest) values} & v&::=& \texttt{()} \mid t,v
\end{array}\]
We overload the set membership symbol $\in$ for trees and
forests: that is, $t \in v$ means that $t$ is a member of $v$
considered as a list. Two forest values can be concatenated by
concatenating them as lists; abusing notation, we identify trees $t$
with singleton forests $t,\texttt{()}$ and write $v, v'$ for forest
concatenation. We define a comprehension operation on forest values
as follows:
\begin{eqnarray*}
{}[f(x) \mid x \in \texttt{()}] &=& \texttt{()}\\
{}[f(x) \mid x \in t,v] &=& f(t), [f(x) \mid x \in v]
\end{eqnarray*}
This operation takes a forest $(t_1,\ldots,t_n)$ and a function $f(x)$
from trees to forests and applies $f$ to each tree $t_i$,
concatenating the resulting forests in order. Comprehensions satisfy
basic monad laws as well as some additional equations (see
\cite{fernandez01icdt}). We use $=$ for (mathematical) equality of
tree or forest values.
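For instance, if $f(t) = t,t$ duplicates its argument, then, directly
from the two defining equations,
\[ [f(x) \mid x \in a[],b[]] \;=\; a[],a[],b[],b[] \]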
We consider a regular expression type system with structural
subtyping, similar to those considered in several transformation and
query languages for XML
\cite{hosoya05toplas,colazzo06jfp,fernandez01icdt}.
\[\begin{array}{lrcl}
\text{Atomic types} & \alpha &::=& \kw{bool} \mid \kw{string} \mid n[\tau]
\\
\text{Sequence types} & \tau,\sigma &::=& \alpha \mid \texttt{()} \mid \tau |\tau' \mid \tau,\tau' \mid \tau^* \mid X
\end{array}\]
We call types of the form $\alpha \in \mathit{Atom}$ \emph{atomic} types (or
sometimes tree or singular types), and types $\tau,\sigma \in \mathit{Type}$
of all other forms \emph{sequence types} (or sometimes forest or
plural types). Sequence types are constructed using regular
expression operations such as the empty sequence $\texttt{()}$,
alternative choice $\tau | \tau'$, sequential composition $\tau,\tau'$
and iteration (or Kleene star) $\tau^*$. Type variables $X \in
\mathit{TyVar}$ denoting recursively defined types are also allowed; these
must be declared in signatures as discussed below.
A value of singular type must always be a sequence of length one (that
is, a tree, string, or boolean); plural types may have values of any length. There exist
plural types with only values of length one, but which are not
syntactically singular (for example $\kw{string} | \kw{bool}$). As usual,
the $+$ and $?$ quantifiers are definable as follows: $\tau^+ =
\tau,\tau^*$ and $\tau^? = \tau|\texttt{()}$.
We define \emph{type definitions} and \emph{signatures} as follows:
\[\begin{array}{lrcl}
\text{Type definitions} & \tau_0 &::=& \alpha \mid \texttt{()} \mid \tau_0 |\tau_0' \mid \tau_0,\tau_0' \mid \tau_0^*\\
\text{Type signatures} &E & ::= & \cdot \mid E,\typedecl{X}{\tau_0}
\end{array}\]
Type definitions $\tau_0$ are types with no top-level variables (that
is, every variable is enclosed in a $n[-]$ context). A signature
$E$ is well-formed if all type variables appearing in definitions are
also declared in $E$. Given a well-formed signature $E$, we write
$E(X)$ for the definition of $X$. A type $\tau$ denotes the set of
values $\SB{\tau}_E$, defined as follows.
\[\begin{array}{l}
\begin{array}{rcl}
\SB{\kw{string}}_E &=& \Sigma^*\smallskip\\
\SB{\kw{bool}}_E &=& \mathit{Bool}\smallskip\\
\SB{\texttt{()}}_E &=& \{\texttt{()}\}
\end{array}\quad
\begin{array}{rcl}
\SB{n[\tau]}_E &=& \{n[v] \mid v \in \SB{\tau}_E\}\smallskip\\
\SB{\tau|\tau'}_E &=& \SB{\tau}_E \cup \SB{\tau'}_E\smallskip\\
\SB{X}_E &=& \SB{E(X)}
\end{array}
\smallskip\\
\begin{array}{rcl}
\SB{\tau,\tau'}_E &=& \{v, v' \mid v \in \SB{\tau}_E,v' \in \SB{\tau'}_E\}
\smallskip\\
\SB{\tau^*}_E &=& \bigcup_{n=0}^\infty \{v_1, \ldots, v_n \mid v_1 ,\ldots,v_n \in \SB{\tau}_E\}\\
\end{array}
\end{array}
\]
Formally, $\SB{\tau}_E$ is defined by a straightforward least fixed
point construction which we omit (see e.g.~\cite{hosoya05toplas}).
Henceforth, we treat $E$ as fixed and define $\SB{\tau} \triangleq
\SB{\tau}_{E}$. This semantics validates standard identities such as
associativity of ',' ($\SB{(\tau_1,\tau_2),\tau_3} =
\SB{\tau_1,(\tau_2,\tau_3)}$), unit laws ($\SB{\tau,\texttt{()}} =
\SB{\tau} = \SB{\texttt{()},\tau}$), and idempotence of '*'
($\SB{(\tau^*)^*} = \SB{\tau^*}$).
A type $\tau_1$ is a \emph{subtype} of $\tau_2$ ($\tau_1 \mathrel{{<}{:}}
\tau_2$), by definition, if $\SB{\tau_1} \subseteq \SB{\tau_2}$. The
use of regular expressions (including untagged unions) for XML typing
poses a number of problems for subtyping and typechecking which have
been resolved in previous work on XDuce~\cite{hosoya05toplas}. Our
types are essentially the same as those used in XDuce, so subtyping
reduces to XDuce subtyping; although this problem is EXPTIME-complete
in general, the algorithm of \citet{hosoya05toplas} is well-behaved in
practice. Therefore, we shall not give explicit inference rules for
checking or deciding subtyping, but treat it as a ``black box''.
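For example, both of the following inclusions hold and are verified by
that algorithm:
\[ b[]^*,c[],b[]^* \mathrel{{<}{:}} (b[] \mid c[])^* \qquad \kw{string} \mathrel{{<}{:}} \kw{string} \mid \kw{bool} \]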
\subsection{Core query language}
Because \textsc{Flux}\xspace uses queries for insertion, replacement, and
conditionals, we need to introduce a query language and define its
semantics before doing the same for \textsc{Flux}\xspace. In our implementation, we
use a variant of the $\mu$XQ\xspace core language introduced by
\citet{colazzo06jfp}, which has the following syntax:
\begin{eqnarray*}
e &::=& \texttt{()} \mid e,e' \mid n[e] \mid w \mid x \mid \letin{x=e}{e'}\\
&\mid& \kw{true} \mid \kw{false}\mid \ifthenelse{c}{e}{e'} \mid e \approx e'\\
&\mid & \bar{x} \mid \bar{x}/\kw{child} \mid e::n \mid \forreturn{\bar{x} \in e}{e'}
\end{eqnarray*}
We follow the convention in \cite{colazzo06jfp} of using $\bar{x}$ for
variables introduced by $\kw{for}$, which are always bound to tree
values; ordinary variables $x$ may be bound to any value.
An \emph{environment} is a pair of functions $\gamma : (\mathit{Var} \to
\mathit{Val})\times(\mathit{TVar} \to \mathit{Tree})$. Abusing notation, we write $\gamma(x)$
for $\pi_1(\gamma)(x)$ and $\gamma(\bar{x})$ for
$\pi_2(\gamma)(\bar{x})$; similarly, $\gamma[x:=v]$ and
$\gamma[\bar{x}:=t]$ denote the corresponding environment updating
operations. The semantics of queries is defined via the large-step
operational semantics judgment $\Downarrow{\gamma}{e}{v}$, meaning ``in
environment $\gamma$, expression $e$ evaluates to value $v$''.
\confonly{The contributions of this paper do not require detailed
understanding of the query language semantics, so the rules are
relegated to the technical report~\cite{flux-tr}.} \tronly{The
contributions of this paper do not require detailed understanding of
the query language, so the rules are relegated to the appendix.} We
omit recursive queries but they can be added without difficulty.
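For intuition, the key rule is the one for iteration, which evaluates
the body once per tree of the bound sequence and concatenates the
results, in the style of the forest comprehension above (a sketch, not
the official rule):
\[\infer{\Downarrow{\gamma}{\forreturn{\bar{x} \in e}{e'}}{v_1,\ldots,v_n}}
{\Downarrow{\gamma}{e}{t_1,\ldots,t_n} &
\Downarrow{\gamma[\bar{x}:=t_1]}{e'}{v_1} &
\cdots &
\Downarrow{\gamma[\bar{x}:=t_n]}{e'}{v_n}}\]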
\begin{figure*}[tb]
\fbox{$\evalu{\gamma}{v}{s}{v'}$}
\[\small\begin{array}{c}
\infer{\evalu{\gamma}{v}{\kw{skip}}{v}}{}
\quad
\infer{\evalu{\gamma}{v}{s;s'}{v_2}}
{\evalu{\gamma}{v}{s}{v_1} &
\evalu{\gamma}{v_1}{s'}{v_2}}
\quad
\infer{\evalu{\gamma}{v}{\ifthenelse{e}{s_1}{s_2}}{v'}}
{\Downarrow{\gamma}{e}{\kw{true}}
& \evalu{\gamma}{v}{s_1}{v'}}
\quad
\infer{\evalu{\gamma}{v}{\ifthenelse{e}{s_1}{s_2}}{v'}}
{\Downarrow{\gamma}{e}{\kw{false}}
& \evalu{\gamma}{v}{s_2}{v'}}
\smallskip\\
\infer{\evalu{\gamma}{v_1}{\letin{x=e}{s}}{v_2}}
{\Downarrow{\gamma}{e}{v}
&
\evalu{\gamma[x:=v]}{v_1}{s}{v_2}
}
\quad
\infer{\evalu{\gamma}{\texttt{()}}{\kw{insert}~e}{v}}
{\Downarrow{\gamma}{e}{v}}
\quad\infer{\evalu{\gamma}{v}{\kw{delete}}{\texttt{()}}}
{}
\quad
\infer{\evalu{\gamma}{n'[v]}{\kw{rename}~n}{n[v]}}
{}
\smallskip\\
\infer{\evalu{\gamma}{v}{\snapshot{x}{s}}{v'}}
{\evalu{\gamma[x:=v]}{v}{s}{v'}}
\quad\infer{\evalu{\gamma}{t}{\phi?s}{v}}
{t \in \SB{\phi} & \evalu{\gamma}{t}{s}{v}}
\quad
\infer{\evalu{\gamma}{t}{\phi?s}{t}}
{t \not\in \SB{\phi}}
\quad
\infer{\evalu{\gamma}{n[v]}{\kw{children}[s]}{n[v']}}
{\evalu{\gamma}{v}{s}{v'}}
\smallskip\\
\infer{\evalu{\gamma}{v}{\kw{left}[s]}{v',v}}
{\evalu{\gamma}{\texttt{()}}{s}{v'}}
\quad
\infer{\evalu{\gamma}{v}{\kw{right}[s]}{v,v'}}
{\evalu{\gamma}{\texttt{()}}{s}{v'}}
\quad
\infer{\evalu{\gamma}{t_1,v_2}{\kw{iter}[s]}{v_1',v_2'}}
{\evalu{\gamma}{t_1}{s}{v_1'}
&
\evalu{\gamma}{v_2}{\kw{iter}[s]}{v_2'}}
\quad\infer{\evalu{\gamma}{\texttt{()}}{\kw{iter}[s]}{\texttt{()}}}{}
\smallskip\\
\infer{\evalu{\gamma}{v}{P(\vec{e})}{v'}}{
\procdecl{P(\vec{x}:\vec{\tau})}{\tau_1}{\tau_2}\triangleq s \in \Delta &
\Downarrow{\gamma}{e_1}{v_1} &
\cdots &
\Downarrow{\gamma}{e_n}{v_n} &
\evalu{\gamma[x_1:=v_1,\ldots,x_n:=v_n]}{v}{s}{v'}}
\end{array}\]
\caption{Operational semantics of
updates.}\labelFig{lux-core-semantics}
\end{figure*}
\subsection{Core update language}
We now introduce the core \textsc{Flux}\xspace update language, which includes
statements $s \in \mathit{Stmt}$, tests $\phi \in \mathit{Test}$, and directions $d
\in \mathit{Dir}$:
\begin{eqnarray*}
s &::=& \kw{skip} \mid s;s' \mid \ifthenelse{e}{s}{s'} \mid\letin{x=e}{s} \\
&\mid & \kw{insert}~e \mid \kw{delete} \mid \kw{rename}~n \\
&\mid& \snapshot{x}{s} \mid \phi?s \mid d[s] \mid P(\vec{e})\\
\phi &::=& n \mid \mathtt{node}() \mid \mathtt{text}()\\
d &::=& \kw{left}\mid \kw{right} \mid \kw{children} \mid \kw{iter}
\end{eqnarray*}
Here, $P$ denotes an \emph{update procedure} name. Procedures are
defined via declarations $P(\vec{x}:\vec{\tau}) : \tau_1 \Rightarrow \tau_2
\triangleq s$, meaning $P$ takes parameters $\vec{x}$ of types $\tau$ and
changes a database of type $\tau_1$ to one of type $\tau_2$. We
collect these declarations into a set $\Delta$, which we take to be
fixed throughout the rest of the paper. Procedures may be recursive.
Updates include standard constructs such as the no-op $\kw{skip}$,
sequential composition, conditionals, and $\kw{let}$-binding. Recall
that updates work by \emph{focusing} on selected parts of the mutable
store. The basic update operations include insertion $\kw{insert}~e$,
which inserts a value provided the focus is the empty sequence;
deletion $\kw{delete}$, which deletes the focus (replacing it with the
empty sequence); and $\kw{rename}~n$, which renames the current focused
value (provided it is a singleton tree). The ``snapshot'' operation
$\snapshot{x}{s}$ binds $x$ to the current focused value and then
applies an update $s$, which may refer to $x$. There is no way to
refer to the focus of an update within a $\mu$XQ\xspace query without using
$\kw{snapshot}$. Also, $\kw{snapshot}$ is \emph{not} equivalent to
XQuery!\xspace's \verb|snap| operator; $\kw{snapshot}$ binds $x$ to an
immutable value which can be used in $s$, whereas \verb|snap| forces
execution of pending updates in XQuery!\xspace.
Updates also include \emph{tests} $\phi?s$ which allow us to examine
the local structure of a tree value and perform an update if the
structure matches. The node label test $n?s$ checks whether the
focus is of the form $n[v]$, and if so executes $s$, otherwise
is a no-op; the wildcard test $\mathtt{node}()?s$ only checks that the value is a
singleton tree. Similarly, $\mathtt{text}()?s$ tests whether the focus
is a string. The $?$ operator binds tightly; for example, $\phi?s;s'
= (\phi?s);s'$.
Finally, updates include \emph{navigation} operators that change the
selected part of the tree and perform an update on the sub-selection.
The $\kw{left}$ and $\kw{right}$ operators move to the left or right of a
value. The $\kw{children}$ operator shifts focus to the children of a
tree value. The $\kw{iter}$ operator shifts focus to all of the tree
values in a forest.
We distinguish between \emph{singular} (unary) updates which apply to
tree values and \emph{plural} (multi-ary) updates which apply to
sequences. Tests $\phi?s$ are always singular. The $\kw{children}$
operator applies a plural update to the children of a single node; the
$\kw{iter}$ operator applies a singular update to all of the elements of
a sequence. Other updates can be either singular or plural in
different situations.
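For example, under the translation of \refSec{normalization}, the
source update \verb|U8| (\verb|DELETE books/book/publisher|) of
\refSec{examples} corresponds roughly to the core update
\[\kw{iter}[\mathit{books}?\kw{children}[\kw{iter}[\mathit{book}?\kw{children}[\kw{iter}[\mathit{publisher}?\kw{delete}]]]]]\]
in which each path step becomes a test guarding a $\kw{children}$
navigation, and $\kw{delete}$ is applied at every selected focus.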
\refFig{lux-core-semantics} shows the operational semantics of Core
\textsc{Flux}\xspace. We write $\evalu{\gamma}{v}{s}{v'}$ to indicate that given
environment $\gamma$ and focus $v$, statement $s$ updates $v$
to value $v'$. The rules for tests are defined in terms of the
following semantic interpretation of tests:
\[\begin{array}{rcl}
\SB{\mathtt{text}()} &=& \Sigma^*
\smallskip\\
\SB{n} &=& \{n[v] \mid v \in \mathit{Val}\}
\smallskip\\
\SB{\mathtt{node}()} &=& \mathit{Tree}
\end{array}\]
Note that we define the semantics entirely in
terms of forest and tree values, without needing to define an explicit
store. This would not be the case if we considered full XQuery, which
includes node identity comparison operations. However, we believe our
semantics is compatible with allowing node-identity tests in queries.
\begin{theorem}[Update determinism]
Let $\gamma,v,s, v_1,v_2$ be given such that
$\evalu{\gamma}{v}{s}{v_1}$ and $\evalu{\gamma}{v}{s}{v_2}$. Then
$v_1 = v_2$.
\end{theorem}
\begin{proof}
Straightforward by induction on the structures of the two
derivations. The interesting cases are those for conditionals,
tests, and iteration, since they are the only statements that have
more than one applicable rule. However, in each case, only matching
pairs of rules are applicable.
\end{proof}
\section{Type system}\labelSec{types}
As noted earlier, certain \emph{singular} updates expect that the
input value is a singleton (for example, $\kw{children}$, $n?s$, etc.)
while \emph{plural} updates work for an arbitrary sequence of trees.
Singular updates fail if applied to a sequence. Our type system
should prevent such run-time failures. Moreover, as with all XML
transformation languages, we often would like to ensure that when
given an input tree of some type $\tau$, an update is guaranteed to
produce an output tree of some other type $\tau'$. For example,
updates made by non-privileged users are usually required to preserve
the database schema.
We define a matching relation between tree types and tests: we
say that $\alpha \mathrel{{<}{:}} \phi$ if $\SB{\alpha} \subseteq \SB{\phi}$.
This is decidable using the following rules:
\[
\infer{\kw{string} \mathrel{{<}{:}} \mathtt{text}()}{}\quad \infer{n[\tau] \mathrel{{<}{:}}
n}{}\quad \infer{\alpha \mathrel{{<}{:}} \mathtt{node}()}{}\]
\begin{figure}[tb]
\fbox{$\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$}
\[\small\begin{array}{c}
\infer{\wfupd{\Gamma}{a}{\tau}{\kw{skip}}{\tau}}{}
\quad
\infer{\wfupd{\Gamma}{a}{\tau}{s;s'}{ \tau''}}
{\wfupd{\Gamma}{a}{\tau}{s}{\tau'} & \wfupd{\Gamma}{a}{\tau'}{s'}{ \tau''}}
\smallskip\\
\infer{\wfupd{\Gamma}{a}{\tau}{\ifthenelse{e}{s}{s'}}{\tau_1 | \tau_2}}
{\wf{\Gamma}{e}{\kw{bool}} & \wfupd{\Gamma}{a}{\tau}{s}{\tau_1} & \wfupd{\Gamma}{a}{\tau}{s'}{\tau_2}}
\smallskip\\
\infer{\wfupd{\Gamma}{*}{\texttt{()}}{\kw{insert}~e}{\tau}}
{\wf{\Gamma}{e}{\tau}}
\quad
\infer{\wfupd{\Gamma}{a}{\tau}{\kw{delete}}{\texttt{()}}}
{}
\smallskip\\
\infer{\wfupd{\Gamma}{1}{n'[\tau]}{\kw{rename}~n}{n[\tau]}}
{}
\quad
\infer{\wfupd{\Gamma}{a}{\tau_1}{\letin{x=e}{s}}{\tau_2}}
{\wf{\Gamma}{e}{\tau} & \wfupd{\Gamma,x{:}\tau}{a}{\tau_1}{s}{\tau_2}}
\smallskip\\
\infer{\wfupd{\Gamma}{a}{\tau}{\snapshot{x}{s}}{\tau'}}{\wfupd{\Gamma,x{:}\tau}{a}{\tau}{s}{\tau'}}
\quad
\infer{\wfupd{\Gamma}{1}{\alpha}{\phi?s}{\tau}}
{\alpha \mathrel{{<}{:}} \phi & \wfupd{\Gamma}{1}{\alpha}{s}{\tau}}
\smallskip\\
\infer{\wfupd{\Gamma}{1}{\alpha}{\phi?s}{\alpha}}
{\alpha \not\mathrel{{<}{:}} \phi}
\quad
\infer{\wfupd{\Gamma}{1}{n[\tau]}{\kw{children}[s]}{n[\tau']}}
{\wfupd{\Gamma}{*}{\tau}{s}{\tau'}}
\smallskip\\
\infer{\wfupd{\Gamma}{a}{\tau}{\kw{left}[s]}{\tau',\tau}}
{\wfupd{\Gamma}{*}{\texttt{()}}{s}{\tau'}}
\quad
\infer{\wfupd{\Gamma}{a}{\tau}{\kw{right}[s]}{\tau,\tau'}}
{\wfupd{\Gamma}{*}{\texttt{()}}{s}{\tau'}}
\smallskip\\
\infer{\wfupd{\Gamma}{*}{\tau}{\kw{iter}[s]}{\tau'}}
{\wfiter{\Gamma}{\tau}{s}{\tau'}}
\quad
\infer{\wfupd{\Gamma}{a}{\tau_1}{s}{\tau_2}}
{
\wfupd{\Gamma}{a}{\tau_1}{s}{\tau_2'} &
\tau_2' \mathrel{{<}{:}} \tau_2}
\smallskip\\
\infer{\wfupd{\Gamma}{a}{\sigma_1'}{P(\vec{e})}{\sigma_2}}{
\begin{array}{l}
\procdecl{P(\vec{x}:\vec{\tau})}{\sigma_1}{\sigma_2}\triangleq s \in \Delta \quad \sigma_1' \mathrel{{<}{:}} \sigma_1\\
\wf{\Gamma}{e_1}{\tau_1'} \quad \tau_1' \mathrel{{<}{:}} \tau_1 \quad \cdots \quad
\wf{\Gamma}{e_n}{\tau_n'} \quad \tau_n' \mathrel{{<}{:}} \tau_n
\end{array}
}
\end{array}\]
\fbox{$\wfiter{\Gamma}{\tau}{s}{\tau'}$}
\[\small\begin{array}{c}
\infer{\wfiter{\Gamma}{\texttt{()}}{s}{\texttt{()}}}{}
\quad
\infer{\wfiter{\Gamma}{\alpha}{s}{\tau}}
{\wfupd{\Gamma}{1}{\alpha}{s}{\tau}}
\quad
\infer{\wfiter{\Gamma}{\tau_1^*}{s}{\tau_2^*}}
{\wfiter{\Gamma}{\tau_1}{s}{\tau_2}}
\smallskip\\
\infer{\wfiter{\Gamma}{\tau_1,\tau_2}{s}{\tau_1',\tau_2'}}
{\wfiter{\Gamma}{\tau_1}{s}{\tau_1'}
&
\wfiter{\Gamma}{\tau_2}{s}{\tau_2'}}
\smallskip\\
\infer{\wfiter{\Gamma}{\tau_1|\tau_2}{s}{\tau_1'|\tau_2'}}
{\wfiter{\Gamma}{\tau_1}{s}{\tau_1'}
&
\wfiter{\Gamma}{\tau_2}{s}{\tau_2'}}
\quad
\infer{\wfiter{\Gamma}{X}{s}{\tau}}
{\wfiter{\Gamma}{E(X)}{s}{\tau}}
\end{array}\]
\fbox{$\wfdecl{\Delta}$}
\[\small\begin{array}{c}
\infer{\wfdecl{\emptyset}}{} \quad
\infer{\wfdecl{\Delta,\procdecl{P(\vec{x}:\vec{\tau})}{\tau_1}{\tau_2}\triangleq s}}
{\wfdecl{\Delta} & \wfupd{\vec{x}:\vec{\tau}}{*}{\tau_1}{s}{\tau_2}}
\end{array}\]
\caption{Update, iteration, and declaration
well-formedness.}\labelFig{update-wf}
\end{figure}
We employ a type system for queries similar to that developed by
\citet{colazzo06jfp}. We consider type environments $\Gamma$
consisting of sets of bindings $x{:}\tau$ of variables to types and
$\bar{x}{:}\alpha$ of tree variables to atomic types. (We never need
to bind a tree variable to a sequence type.) As usual, we assume that
variables in type environments are distinct; this convention
implicitly constrains all inference rules. We write $\SB{\Gamma}$ for
the set of all environments $\gamma$ such that $\gamma(x) \in
\SB{\Gamma(x)}$ and $\gamma(\bar{x}) \in \SB{\Gamma(\bar{x})}$ for all
$x \in dom(\Gamma)$ and $\bar{x} \in dom(\Gamma)$ respectively.
The typing judgment for queries is $\wf{\Gamma}{e}{\tau}$, meaning
\emph{in type environment $\Gamma$, expression $e$ has type $\tau$}.
The typing rules are essentially the same as those
in~\cite{colazzo06jfp}.
The main typing judgment for updates is
$\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$, meaning \emph{in type environment
$\Gamma$, an $a$-ary update $s$ maps values of type $\tau$ to type
$\tau'$}. Here, $a \in \{1,*\}$ is the arity of the update, and
singular update judgments always have $\tau = \alpha$ atomic. In
addition, we define auxiliary judgments
$\wfiter{\Gamma}{\tau}{s}{\tau'}$ for typechecking iterations and
$\wfdecl{\Delta}$ for typechecking declarations $\Delta$. The rules
for update well-formedness are shown in \refFig{update-wf}.
\subsection{Discussion}
In many functional languages, and several XML update proposals,
side-effecting operations are treated as expressions that return
$\texttt{()}$. Thus, we could typecheck such updates as expressions of
type $\texttt{()}$. This is straightforward provided the types of values
reachable from the free variables in $\Gamma$ do not change; for
example, this is the case for ML-style references. However, if the
side-effects do change the types of the values of variables, then
$\Gamma$ needs to be updated to take these changes into account. One
possibility is to typecheck updates using a residuating judgment
$\Gamma \vdash s : \texttt{()} \mid \Gamma'$; here, $\Gamma'$ is the
updated type environment reflecting the types of the variables after
update $s$. This approach quickly becomes complicated, especially if
it is possible for variables to ``alias'', or refer to overlapping
parts of the data.
In \textsc{Flux}\xspace, we take a completely different approach to typechecking
updates. The judgment $\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$ assigns an
update much richer type information that describes the type of the
updatable context before and after running $s$. The variables in
$\Gamma$ are immutable, so their types never need to be updated.
The most unusual rules are those involving the $\kw{iter}$, test, and
$\kw{children}$, $\kw{left}/\kw{right}$, and $\kw{insert}/\kw{rename}/\kw{delete}$
operators. The following example illustrates how the rules work
for these constructs. Consider the update:
\[\kw{iter}~[a?\kw{children}~[\kw{iter}~[b?~\kw{right}~[\kw{insert}~c[]]]]]\]
Intuitively, this update inserts a $c$ after every $b$ under a
top-level $a$. Now consider the input type $a[b[]^*,c[],b[]^*],d[]$.
Clearly, the output type \emph{should} be
$a[(b[],c[])^*,c[],(b[],c[])^*],d[]$. To see why this is the case,
first note that the following can be derived for any $\tau,\tau',s$:
\[\infer{\wfupd{}{*}{a[\tau],d[]}{\kw{iter}~[a?s]}{a[\tau'],d[]}}
{\wfupd{}{1}{a[\tau]}{s}{a[\tau']}}\]
Using the rule for $\kw{children}$, we can see that it suffices to check
that $\kw{iter}~[b?\kw{right}~[\kw{insert}~c[]]]$ maps type $b[]^*,c[],b[]^*$ to
$(b[],c[])^*,c[],(b[],c[])^*$. This is also an instance of a
derivable rule
\[\infer{\wfupd{}{*}{b[]^*,c[],b[]^*}{\kw{iter}~[b?s]}{\tau^*,c[],\tau^*}}
{\wfupd{}{1}{b[]}{s}{\tau}}\]
Hence, we now need to show only that $\kw{right}~[\kw{insert}~c[]]$ maps type
$b[]$ to $b[],c[]$, which is immediate:
\[
\infer{\wfupd{}{1}{b[]}{\kw{right}~[\kw{insert}~c[]]}{b[],c[]}}
{\infer{\wfupd{}{*}{\texttt{()}}{\kw{insert}~c[]}{c[]}}
{\infer{\wf{}{c[\texttt{()}]}{c[\texttt{()}]}}{\hyp{\wf{}{\texttt{()}}{\texttt{()}}}}}}
\]
\subsection{Metatheory}
We take for granted the following type soundness property for queries
(this was proved for $\mu$XQ\xspace in \citet{colazzo06jfp}).
\begin{theorem}[Query soundness]\labelThm{query-soundness}
If $\wf{\Gamma}{e}{\tau}$ and $\gamma \in \SB{\Gamma}$ then
$\Downarrow{\gamma}{e}{v}$ implies $v \in \SB{ \tau}$.
\end{theorem}
\confonly{The corresponding result also holds for updates, by a straightforward
structural induction argument (presented in the technical
report~\cite{flux-tr}):}
\tronly{The corresponding result also holds for updates, by a straightforward
structural induction argument (presented in the appendix):}
\begin{theorem}[Update soundness]\labelThm{update-soundness}
Assume $\wfdecl{\Delta}$ holds.
\begin{enumerate}
\item If $\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$, $v \in \SB{\tau}$, and
$\gamma \in \SB{\Gamma}$, then $\evalu{\gamma}{v}{s}{v'}$ implies $v'
\in \SB{ \tau'}$.
\item If $\wfiter{\Gamma}{\tau}{s}{\tau'}$, $v \in \SB{\tau}$, and
$\gamma \in \SB{\Gamma}$, then $\evalu{\gamma}{v}{\kw{iter}[s]}{v'}$
implies $v' \in \SB{ \tau'}$.
\end{enumerate}
\end{theorem}
Moreover, typechecking is decidable for both $\mu$XQ\xspace and \textsc{Flux}\xspace in the
presence of the subsumption rules~\cite{cheney08esop}.
\section{Normalization}\labelSec{normalization}
\begin{figure*}[tb]
\[\small
\begin{array}{c}
\begin{array}{rcl}
\nstmt{u} &=& \nupd{u}\\
\nstmt{\IFTHEN{e}{s}} &=& \ifthenelse{e}{\nstmt{s}}{\kw{skip}}\\
\nstmt{s_1;s_2} &=& \nstmt{s_1};\nstmt{s_2}\\
\nstmt{\LETIN{x=e}{s}} &=& \letin{x=e}{\nstmt{s}}
\bigskip\\
\npath{.}(s) &=& s\\
\npath{p/p'}(s) &=& \npath{p}(\npath{p'}(s))\\
\npath{\phi}(s) &=& \kw{children}[\kw{iter}[\phi?s]]\\
\npath{p[e]}(s) &=& \npath{p}(\ifthenelse{e}{s}{\kw{skip}})\\
\npath{x~\kw{AS}~p}(s) &=& \npath{p}(\snapshot{x}{s})
\end{array}
\end{array}
\!\!\!\!\!
\!\!\!\!\!
\begin{array}{rcl}
\nupd{\insertvalue{\kw{BEFORE}}{p}{e}} &=& \npath{p}(\kw{left}[\kw{insert}~e])\\
\nupd{\insertvalue{\kw{AFTER}}{p}{e}} &=& \npath{p}(\kw{right}[\kw{insert}~e])\\
\nupd{\insertinto{\kw{LAST}}{p}{e}} &=& \npath{p}(\kw{children}[\kw{left}[\kw{insert}~e]])\\
\nupd{\insertinto{\kw{FIRST}}{p}{e}} &=& \npath{p}(\kw{children}[\kw{right}[\kw{insert}~e]])\\
\nupd{\delete{p}} &=& \npath{p}(\kw{delete})\\
\nupd{\deletefrom{p}} &=& \npath{p}(\kw{children}[\kw{delete}])\\
\nupd{\rename{p}{n}} &=& \npath{p}(\kw{rename}~n)\\
\nupd{\replace{p}{e}} &=& \npath{p}(\kw{delete};\kw{insert}~e)\\
\nupd{\replacein{p}{e}} &=& \npath{p}(\kw{children}[\kw{delete};\kw{insert}~ e])\\
\nupd{\update{p}{s}} &=& \npath{p}(\nstmt{s})
\end{array}\]
\caption{Source update normalization}\labelFig{norm-upd}
\end{figure*}
There is a significant gap between the high-level \textsc{Flux}\xspace language we
presented in \refSec{examples} and the core language in the previous
section. In this section, we formalize a translation from the source
language presented in \refSec{examples} to Core \textsc{Flux}\xspace. In XQuery,
this kind of translation is called \emph{normalization}. We define
three normalization functions called \emph{path expression
normalization} $\elab[\mathit{Path}]{-}(-)$, \emph{update statement
normalization} $\elab[\mathit{Stmt}]{-}$, and \emph{simple update
normalization} $\elab[\mathit{Upd}]{-}$. These functions are defined in
\refFig{norm-upd}.
Path expression normalization takes an extra parameter, which must be
a core \textsc{Flux}\xspace update; that is, $\elab[\mathit{Path}]{p}(s)$ normalizes a path
$p$ by expanding it to an expression which navigates to $p$ and then
does $s$. Compound statement normalization is straightforward. Each
simple update is normalized by translating its path $p$ and placing the
corresponding core update operation at the end of the resulting
navigation. We omit the cases
needed to handle $\kw{WHERE}$-clauses; however, they can be handled by
the existing translation if we consider e.g. $\replace{p}{e}~\kw{WHERE}~c$
to be an abbreviation for $\replace{p[c]}{e}$, etc. In particular,
note that the translation places both $c$ and $e$ into the scope of all
variables declared in $p$.
Since the translation rules cover all cases and are orthogonal, it is
straightforward to see that the normalization functions are total
functions from the source language to Core \textsc{Flux}\xspace.
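The path-normalization clauses can likewise be transcribed almost
verbatim. The following Python sketch assumes a tuple encoding of source
paths and core updates of our own devising; it is illustrative only:
\begin{verbatim}
# [[p]]_Path(s) from the normalization figure.  Assumed encoding (ours):
# paths are ("here",), ("seq", p, p'), ("test", phi), ("cond", p, e),
# or ("bind", x, p); core updates are nested tuples such as ("delete",).
def norm_path(p, s):
    tag = p[0]
    if tag == "here":                 # [[.]](s) = s
        return s
    if tag == "seq":                  # [[p/p']](s) = [[p]]([[p']](s))
        return norm_path(p[1], norm_path(p[2], s))
    if tag == "test":                 # [[phi]](s) = children[iter[phi?s]]
        return ("children", ("iter", ("guard", p[1], s)))
    if tag == "cond":                 # [[p[e]]](s) = [[p]](if e then s)
        return norm_path(p[1], ("if", p[2], s, ("skip",)))
    if tag == "bind":                 # [[x AS p]](s) = [[p]](snapshot x.s)
        return norm_path(p[2], ("snapshot", p[1], s))
    raise ValueError(f"unknown path constructor {tag!r}")
\end{verbatim}
For instance, calling \verb|norm_path| on the encoding of $a/b$ around
the core update \verb|("delete",)| yields the translation of
$\delete{a/b}$, namely
$\kw{children}[\kw{iter}[a?\kw{children}[\kw{iter}[b?\kw{delete}]]]]$.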
\subsection{Typechecking source updates}
\begin{figure}
\fbox{$\wfupdstmt{\Gamma}{\tau}{s}{\tau'}$}
\[\begin{array}{c}
\infer{\wfupdstmt{\Gamma}{\tau}{s_1;s_2}{\tau''}}{\wfupdstmt{\Gamma}{\tau}{s_1}{\tau'} & \wfupdstmt{\Gamma}{\tau'}{s_2}{\tau''}}
\smallskip\\
\infer{\wfupdstmt{\Gamma}{\tau}{\IFTHEN{e}{s}}{\tau|\tau'}}
{\wf{\Gamma}{e}{\kw{bool}} & \wfupdstmt{\Gamma}{\tau}{s}{\tau'}}
\smallskip\\
\infer{\wfupdstmt{\Gamma}{\tau}{\LETIN{x=e}{s}}{\tau'}}{\wf{\Gamma}{e}{\tau_0} & \wfupdstmt{\Gamma,x:\tau_0}{\tau}{s}{\tau'}}
\smallskip\\
\infer{\wfupdstmt{\Gamma}{\alpha}{u}{\alpha'}}{\wfupdsimp{\Gamma}{\alpha}{u}{\alpha'}}
\end{array}\]
\caption{Typechecking rules for compound updates}\labelFig{selected-source-tc-stmt}
\end{figure}
Normalization complicates type-error reporting, since we cannot always
easily explain why the translation of an update fails to typecheck in
source-level terms familiar to the user. We therefore also develop a
type system for the source language that is \emph{both sound and
complete} with respect to core \textsc{Flux}\xspace typechecking. This type system
can therefore be used to report type errors to users in terms of the
source language.
We assume that query subexpressions $e$ have already been normalized
to $\mu$XQ\xspace according to the standard XQuery normalization rules
\cite{xquery-semantics-w3c-20070123}. The problem of typechecking
unnormalized XQuery expressions is an orthogonal issue (and one that
has to our knowledge not been addressed).
Typechecking source-level updates is challenging because simple
updates may change the types of many parts of the document
simultaneously, depending on the structure of the path $p$. In
contrast, core \textsc{Flux}\xspace updates are easy to typecheck because they break
the corresponding navigation, selection, and modification of types
into small, manageable steps.
To deal with the non-local nature of source updates, we employ
\emph{type variables} $Z$ and \emph{context-tagged type substitutions}
$\Theta$. These substitutions are defined as follows:
\[\Theta ::= \emptyset \mid \Theta,Z \mapsto (\Gamma\triangleright \tau)\]
We distinguish the type variables $Z$ we will use here for
typechecking source updates from the type variables $X$ used in
recursive type definitions $E$; we refer to the latter as
\emph{defined type variables}. We require the bindings $Z$ in
$\Theta$ to be unique.
We often treat substitutions $\Theta$ as sets or finite maps and in
particular write $\Theta \uplus \Theta'$ for the context-tagged
substitution resulting from taking the union of the bindings in
$\Theta$ and $\Theta'$, provided their domains are disjoint. We also
write $\tau\subst{\Theta}$ for the result of replacing each occurrence
of an undefined type variable in $\tau$ with its binding in $\Theta$.
Moreover, we write $\Theta\subst{\Theta'}$ for the result of applying
$\Theta'$ to each type in $\Theta$, again ignoring contexts.
Substitution application ignores the contexts $\Gamma$; they are only
used to typecheck updates within the scope of a path. We consider the
free type variables of $\Theta$ to be the free variables of $\Gamma$
and $\tau$ in the bindings $\Gamma \triangleright\tau$.
To typecheck a simple update such as $\delete{a/b}$ against an atomic
type such as $a[b[],c[]]$, we proceed as follows:
\begin{enumerate}
\item First \emph{match $p$ against the input type, and
split the type $\alpha$ of the document into a pair
$(\alpha',\Theta)$ such that $\alpha = \alpha'\subst{\Theta}$.}
For example, $a[b[],c[]] = a[Z,c[]]\subst{Z \mapsto \cdot \triangleright b[]}$.
\item Next \emph{modify $\Theta$ according to the update operation to
obtain $\Theta'$.} For this we use the Core \textsc{Flux}\xspace type system to
update each binding in $\Theta$. This is only a convenience.
Continuing the example, for a deletion we update $Z \mapsto \cdot \triangleright b[]$ to
$Z \mapsto \cdot \triangleright \texttt{()}$.
\item Finally, \emph{apply $\Theta'$ to $\alpha'$ to get the desired
final type after the update}.
For example, applying $a[Z,c[]]\subst{Z \mapsto \cdot \triangleright \texttt{()}}$ we
get $a[(),c[]]$, as desired (this is equivalent to $a[c[]]$); a small
executable sketch of this final step follows the list.
\end{enumerate}
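The last step admits a direct transcription. The following Python sketch
applies a context-tagged substitution to a type; the tuple encoding of
types and the encoding of $\Theta$ as a dictionary are ours, for
illustration only:
\begin{verbatim}
# Assumed type encoding (ours): ("empty",), ("string",), ("var", Z),
# ("elem", n, tau), ("seq", t1, t2), ("alt", t1, t2), ("star", t).
def apply_subst(tau, theta):
    tag = tau[0]
    if tag == "var" and tau[1] in theta:
        _gamma, bound = theta[tau[1]]   # contexts Gamma are ignored here
        return bound
    if tag == "elem":
        return ("elem", tau[1], apply_subst(tau[2], theta))
    if tag in ("seq", "alt"):
        return (tag, apply_subst(tau[1], theta), apply_subst(tau[2], theta))
    if tag == "star":
        return ("star", apply_subst(tau[1], theta))
    return tau

# Steps 1-3 for DELETE a/b against a[b[],c[]]:
split = ("elem", "a", ("seq", ("var", "Z"), ("elem", "c", ("empty",))))
theta = {"Z": ({}, ("empty",))}         # step 2 updated Z's binding to ()
assert apply_subst(split, theta) == \
    ("elem", "a", ("seq", ("empty",), ("elem", "c", ("empty",))))
\end{verbatim}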
Figures \ref{fig:selected-source-upd-tc} and
\ref{fig:selected-source-path-tc} show selected typechecking judgments
for simple updates and paths.
We introduce auxiliary judgments such as path filter
checking ($\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}$, shown in
\refFig{selected-source-path-filt-tc}), simultaneous core statement
checking ($\wfupds{\Theta}{s}{\Theta'}$), and simultaneous path
checking ($\wfupdpaths{\Theta}{p}{\Theta'}{\Theta''}$); these judgments
are collected in \refFig{selected-simult-source-tc}.
For many of the typechecking judgments, we also need to typecheck an
expression against all of the bindings of a context-tagged
substitution $\Theta$. We therefore introduce several
\emph{simultaneous typechecking} judgments. \confonly{The full system, including the
(straightforward) compound statement typechecking judgment
$\wfupdstmt{\Gamma}{\alpha}{s}{\alpha'}$ and all auxiliary judgments,
is shown in the technical report~\cite{flux-tr}.}
\tronly{The full system, including the
(straightforward) compound statement typechecking judgment
$\wfupdstmt{\Gamma}{\alpha}{s}{\alpha'}$ and all auxiliary judgments,
is shown in the appendix.}
The simple update typechecking rules each follow the procedure
outlined above. The path typechecking rules match $p$ against the
input type $\alpha$ as described in step 2 above. Note that paths may
bind variables, and the same variable may be bound to different types
in different cases; this is why we need to include contexts $\Gamma$
in each binding of the substitutions $\Theta$.
The following operation $\Theta \oplus x$ is used to typecheck
$x~\kw{AS}~p$; it adds the binding $x:\tau$ to each binding $\Gamma \triangleright
\tau$ in $\Theta$.
\[\small
\begin{array}{rcl}
\emptyset \oplus x &=& \emptyset \smallskip\\
(\Theta,Z\mapsto (\Gamma \triangleright \tau)) \oplus x &=& \Theta \oplus x, Z \mapsto (\Gamma,x:\tau \triangleright \tau)
\smallskip\\
\end{array}\]
The typechecking rule for conditional paths $p[e]$ is slightly subtle.
After typechecking $p$, we obtain a pair $(\alpha',\Theta)$ that
splits $\alpha$ into an unchanged part $\alpha'$ and a substitution
$\Theta$ showing where changes may occur. Since we do not know
whether $e$ will hold, we must adjust $\alpha'$ by replacing each
occurrence of a variable $Z$ with $Z|\Theta(Z)$. This is accomplished
using the substitution $\maybe{\Theta}$:
\[\small
\begin{array}{rcl}
\maybe{\emptyset} &=& \emptyset
\smallskip\\
\maybe {\Theta,Z\mapsto (\Gamma \triangleright \alpha)} &=& \maybe{\Theta}, Z \mapsto (\Gamma \triangleright Z|\alpha)
\end{array}\]
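Both operations have essentially one-line transcriptions. The following
Python sketch uses the same assumed encodings as the earlier sketches,
with $\Theta$ as a dictionary from variables $Z$ to pairs $(\Gamma,\tau)$
and $\Gamma$ as a dictionary of variable types:
\begin{verbatim}
def extend(theta, x):            # Theta (+) x: add x:tau to each Gamma
    return {z: ({**gamma, x: tau}, tau)
            for z, (gamma, tau) in theta.items()}

def maybe(theta):                # Theta?: rebind each Z to Z | Theta(Z)
    return {z: (gamma, ("alt", ("var", z), alpha))
            for z, (gamma, alpha) in theta.items()}
\end{verbatim}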
\begin{figure}
\fbox{$\wfupdsimp{\Gamma}{\alpha}{u}{\alpha'}$}
\[\small\begin{array}{c}
\infer{\wfupdsimp{\Gamma}{\alpha}{\insertvalue{\kw{BEFORE}}{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} & \wfupds{\Theta}{\kw{left}[\kw{insert}~e]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\insertinto{\kw{LAST}}{p}{e}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} & \wfupds{\Theta}{\kw{children}[\kw{left}[\kw{insert}~e]]}{\Theta'}}
\smallskip\\
\infer{\wfupdsimp{\Gamma}{\alpha}{\delete{p}}{\alpha'\subst{\Theta'}}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfupds{\Theta}{\kw{delete}}{\Theta'}}
\end{array}\]
\caption{Selected typechecking rules for simple updates}\labelFig{selected-source-upd-tc}
\fbox{$\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta}$}
\[\small\begin{array}{c}
\infer{\wfupdpath{\Gamma}{\alpha}{p/p'}{\alpha'\subst{\Theta_2}}{\Theta_2'}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta_1}
&
\wfupdpaths{\Theta_1}{p'}{\Theta_2}{\Theta_2'}}
\smallskip\\
\infer{\wfupdpath{\Gamma}{\alpha}{.}{\alpha}{\emptyset}}{}
\quad
\infer{\wfupdpath{\Gamma}{\alpha}{p[e]}{\alpha'\subst{\maybe{\Theta}}}{\Theta}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta} &
\wfs{\Theta}{e}{\kw{bool}}}
\smallskip\\
\infer{\wfupdpath{\Gamma}{\alpha}{x ~\kw{AS}~p}{\alpha'}{\Theta\oplus x}}
{\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta}}
\quad
\infer{\wfupdpath{\Gamma}{n[\tau]}{\phi}{n[\tau']}{\Theta}}
{\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}}
\end{array}\]
\caption{Typechecking rules for
paths}\labelFig{selected-source-path-tc}
\fbox{$\wfupdfilt{\Gamma}{\tau}{\phi}{\tau'}{\Theta}$}
\[\begin{array}{c}
\infer{\wfupdfilt{\Gamma}{\texttt{()}}{\phi}{\texttt{()}}{\emptyset}}{}
\quad
\infer{\wfupdfilt{\Gamma}{\alpha}{\phi}{\alpha}{\emptyset}}{\alpha \not\mathrel{{<}{:}} \phi}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\alpha}{\phi}{Z}{Z \mapsto (\Gamma \triangleright\alpha)}}{\alpha \mathrel{{<}{:}} \phi & \text{$Z$ fresh}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\tau_1,\tau_2}{\phi}{(\tau_1',\tau_2')}{\Theta_1 \uplus \Theta_2}}
{\wfupdfilt{\Gamma}{\tau_1}{\phi}{\tau_1'}{\Theta_1}
&
\wfupdfilt{\Gamma}{\tau_2}{\phi}{\tau_2'}{\Theta_2}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\tau_1|\tau_2}{\phi}{\tau_1'|\tau_2'}{\Theta_1 \uplus \Theta_2}}
{\wfupdfilt{\Gamma}{\tau_1}{\phi}{\tau_1'}{\Theta_1}
&
\wfupdfilt{\Gamma}{\tau_2}{\phi}{\tau_2'}{\Theta_2}}
\smallskip\\
\infer{\wfupdfilt{\Gamma}{\tau_1^*}{\phi}{\tau_2^*}{\Theta}}
{\wfupdfilt{\Gamma}{\tau_1}{\phi}{\tau_2}{\Theta}}
\quad
\infer{\wfupdfilt{\Gamma}{X}{\phi}{\tau'}{\Theta}}
{\wfupdfilt{\Gamma}{E(X)}{\phi}{\tau'}{\Theta}}
\end{array}\]
\caption{Typechecking rules for path filters}\labelFig{selected-source-path-filt-tc}
\fbox{$\wfs{\Theta}{e}{\tau}$}
\[\begin{array}{c}
\infer{\wfs{\emptyset}{e}{\tau}}{}
\quad
\infer{\wfs{\Theta,Z \mapsto (\Gamma \triangleright \alpha)}{e}{\tau}}
{\wfs{\Theta}{e}{\tau} &
\wf{\Gamma}{e}{\tau}}
\end{array}\]
\fbox{$\wfupds{\Theta}{s}{\Theta'}$}
\[\begin{array}{c}
\infer{\wfupds{\emptyset}{s}{\emptyset}}{}
\quad
\infer{\wfupds{\Theta,Z \mapsto (\Gamma \triangleright \alpha)}{s}{\Theta',Z \mapsto (\Gamma \triangleright \tau)}}{\wfupds{\Theta}{s}{\Theta'} &\wfupd{\Gamma}{1}{\alpha}{s}{\tau}}
\end{array}\]
\fbox{$\wfupdstmts{\Theta}{s}{\Theta'}$}
\[\small\begin{array}{c}
\infer{\wfupdstmts{\emptyset}{s}{\emptyset}}{}
\quad
\infer{\wfupdstmts{\Theta,Z \mapsto (\Gamma \triangleright \tau)}{s}{\Theta',Z \mapsto (\Gamma \triangleright \tau')}}{\wfupdstmts{\Theta}{s}{\Theta'} & \wfupdstmt{\Gamma}{\tau}{s}{\tau'}}
\end{array}\]
\fbox{$\wfupdpaths{\Theta}{p}{\Theta'}{\Theta''}$}
\[\begin{array}{c}
\infer{\wfupdpaths{\emptyset}{p}{\emptyset}{\emptyset}}{}
\smallskip\\
\infer{\wfupdpaths{\Theta,Z \mapsto (\Gamma \triangleright \alpha)}{p}{\Theta',Z \mapsto (\Gamma \triangleright \alpha')}{\Theta'' \uplus \Theta'''}}
{\wfupdpaths{\Theta}{p}{\Theta'}{\Theta''} &
\wfupdpath{\Gamma}{\alpha}{p}{\alpha'}{\Theta'''}}
\end{array}\]
\caption{Simultaneous typechecking judgments}\labelFig{selected-simult-source-tc}
\end{figure}
\subsection{Metatheory}
Whenever we translate between two typed languages, we would like to
know whether the translation is \emph{sound} (i.e.,
\emph{type-preserving}). This ensures that if we typecheck the
expression in the source language then its translation will also
typecheck.\footnote{In an implementation, one often wants to
re-typecheck after translation anyway as a sanity check for the
translator.} Conversely, if the source language expression fails to
typecheck, it is preferable to report the error in terms of the source
language using the source type system. We have established that the
translation is indeed sound:
\begin{theorem}[Soundness]
Assume $\Gamma,\tau,\tau'$ have no free type variables $Z$. Then if
$\wfupdstmt{\Gamma}{\tau}{s}{\tau'}$ then
$\wfupd{\Gamma}{*}{\tau}{\nstmt{s}}{\tau'}$.
\end{theorem}
Conversely, another concern is that the source-level type system might
be too restrictive. Are there source-level expressions whose
\emph{translations} are well-formed, but which are not well-formed in
the source-level system? This is the question of \emph{completeness},
that is, whether the translation \emph{reflects} typability. If this
completeness property did not hold, this would indicate that the
source type system could be made more expressive. Fortunately,
however, completeness does hold:
\begin{theorem}[Completeness]
Assume $\Gamma,\tau,\tau'$ have no free type variables $Z$. Then if
$\wfupd{\Gamma}{*}{\tau}{\nstmt{s}}{\tau'}$ then
$\wfupdstmt{\Gamma}{\tau}{s}{\tau'}$.
\end{theorem}
\section{Path-errors and dead-code analysis}\labelSec{path-analysis}
\begin{figure*}[tb]
\fbox{$\wfupdeff{\Gamma}{a}{\tau}{s}{\tau'}{L}$}
\[\small\begin{array}{c}
\infer{\wfupdeff{\Gamma}{a}{\tau}{\kw{skip}_l}{\tau}{\{l\}}}{}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau}{((s_1)_{l_1};(s_2)_{l_2})_l}{ \tau''}{(L_1 \cup L_2)[l_1,l_2 {\Rightarrow} l]}}
{\wfupdeff{\Gamma}{a}{\tau}{(s_1)_{l_1}}{\tau'}{L_1} & \wfupdeff{\Gamma}{a}{\tau'}{(s_2)_{l_2}}{ \tau''}{L_2}}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau}{(\snapshot{x}{s_{l'}})_l}{\tau'}{L[l' {\Rightarrow} l]}}
{\wfupdeff{\Gamma,x{:}\tau}{a}{\tau}{s_{l'}}{\tau'}{L}}
\smallskip\\
\infer{\wfupdeff{\Gamma}{a}{\tau}{(\ifthenelse{e}{(s_1)_{l_1}}{(s_2)_{l_2}})_l}{\tau_1 | \tau_2}{(L_1 \cup L_2)[l_1,l_2 {\Rightarrow} l]}}
{\wf{\Gamma}{e}{\kw{bool}} &
\wfupdeff{\Gamma}{a}{\tau}{(s_1)_{l_1}}{\tau_1}{L_1} &
\wfupdeff{\Gamma}{a}{\tau}{(s_2)_{l_2}}{\tau_2}{L_2}}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau_1}{(\letin{x=e}{s_{l'}})_l}{\tau_2}{L[l' {\Rightarrow} l]}}
{\wf{\Gamma}{e}{\tau} & \wfupdeff{\Gamma,x{:}\tau}{a}{\tau_1}{s_{l'}}{\tau_2}{L}}
\smallskip\\
\infer{\wfupdeff{\Gamma}{*}{\texttt{()}}{(\kw{insert}~e)_l}{\tau}{\{l \mid \tau \mathrel{{<}{:}} \texttt{()}\}}}
{\wf{\Gamma}{e}{\tau}}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau}{\kw{delete}_l}{\texttt{()}}{\{l \mid \tau \mathrel{{<}{:}} \texttt{()}\}}}
{}
\quad
\infer{\wfupdeff{\Gamma}{1}{n'[\tau]}{(\kw{rename}~n)_l}{n[\tau]}{\{l \mid n = n'\}}}
{}
\smallskip\\
\infer{\wfupdeff{\Gamma}{1}{\alpha}{(\phi?s_{l'})_l}{\tau}{L[l' {\Rightarrow} l]}}
{\alpha \mathrel{{<}{:}} \phi & \wfupdeff{\Gamma}{1}{\alpha}{s_{l'}}{\tau}{L}}
\quad
\infer{\wfupdeff{\Gamma}{1}{\alpha}{(\phi?s_{l'})_{l}}{\alpha}{\{l\}}}
{\alpha \not\mathrel{{<}{:}} \phi}
\quad
\infer{\wfupdeff{\Gamma}{*}{\tau}{\kw{iter}[s_{l'}]_l}{\tau'}{L[l'{\Rightarrow}l]}}
{\wfitereff{\Gamma}{\tau}{s_{l'}}{\tau'}{L}}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau}{P(\vec{e})}{\tau'}{\emptyset}}
{\wfupd{\Gamma}{a}{\tau}{P(\vec{e})}{\tau'}}
\smallskip\\
\infer{\wfupdeff{\Gamma}{1}{n[\tau]}{\kw{children}[s_{l'}]_l}{n[\tau']}{L[l' {\Rightarrow} l]}}
{\wfupdeff{\Gamma}{*}{\tau}{s_{l'}}{\tau'}{L}}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau}{\kw{left}[s_{l'}]_l}{\tau',\tau}{L[l'{\Rightarrow}l]}}
{\wfupdeff{\Gamma}{*}{\texttt{()}}{s_{l'}}{\tau'}{L}}
\quad
\infer{\wfupdeff{\Gamma}{a}{\tau}{\kw{right}[s_{l'}]_l}{\tau,\tau'}{L[l'{\Rightarrow}l]}}
{\wfupdeff{\Gamma}{*}{\texttt{()}}{s_{l'}}{\tau'}{L}}
\end{array}\]
\fbox{$\wfitereff{\Gamma}{\tau}{s}{\tau'}{L}$}
\[\small\begin{array}{c}
\infer{\wfitereff{\Gamma}{\texttt{()}}{s_{l}}{\texttt{()}}{\{l\}}}{}
\quad
\infer{\wfitereff{\Gamma}{\alpha}{s}{\tau}{L}}
{\wfupdeff{\Gamma}{1}{\alpha}{s}{\tau}{L}}
\quad
\infer{\wfitereff{\Gamma}{\tau_1^*}{s}{\tau_2^*}{L}}
{\wfitereff{\Gamma}{\tau_1}{s}{\tau_2}{L}}
\quad
\infer{\wfitereff{\Gamma}{X}{s}{\tau}{L}}
{\wfitereff{\Gamma}{E(X)}{s}{\tau}{L}}
\smallskip\\
\infer{\wfitereff{\Gamma}{\tau_1,\tau_2}{s}{\tau_1',\tau_2'}{L_1 \cap L_2}}
{\wfitereff{\Gamma}{\tau_1}{s}{\tau_1'}{L_1}
&
\wfitereff{\Gamma}{\tau_2}{s}{\tau_2'}{L_2}}
\quad
\infer{\wfitereff{\Gamma}{\tau_1|\tau_2}{s}{\tau_1'|\tau_2'}{L_1 \cap L_2}}
{\wfitereff{\Gamma}{\tau_1}{s}{\tau_1'}{L_1}
&
\wfitereff{\Gamma}{\tau_2}{s}{\tau_2'}{L_2}}
\end{array}\]
\caption{Path-error analysis for updates}\labelFig{path-analysis}
\end{figure*}
Besides developing a type system for $\mu$XQ\xspace, \citet{colazzo06jfp}
studied the problem of identifying subexpressions of the query that
always evaluate to $\texttt{()}$, but are not syntactically equal to
$\texttt{()}$. Such ``unproductive'' subexpressions typically indicate
errors in a query. For example, the query $\forreturn{y\in x/a}{a[]}$
is well-formed in context $\Gamma = x:b[c[]^*,d[]^*]$, but
unproductive when evaluated against $\Gamma$ since $x/a$ will always
be empty. \citet{colazzo06jfp} formally defined such
\emph{path-errors}\footnote{Arguably, the term ``path-errors'' is
inaccurate in that there are expressions such as
$\forreturn{\bar{x} \in ()}{\bar{x}}$ that do not mention path expressions, yet
contain path-errors. Nevertheless, we follow the existing
terminology here.} and introduced a type-based analysis that detects
them. In this section, we define path-errors for updates and derive a
path-error analysis for core \textsc{Flux}\xspace. We first introduce technical
machinery, then define path-errors and the analysis, and prove its
correctness.
Consider \emph{locations} $l$. We will work with \emph{distinctly
labeled statements} $s_l$ in which each core \textsc{Flux}\xspace subexpression
carries a distinct location $l$. We ignore locations as convenient
when we wish to view $s_l$ as an ordinary statement $s$. Suppose $s$
is distinctly labeled. We write $s[l]$ for the unique subexpression
of $s$ labeled by $l$ and write $s|_l$ for the result of replacing the
subexpression at $l$ in $s$ with $\kw{skip}$. For example,
$(\kw{iter}[\kw{children}[s_l]_{l'}]_{l''})|_{l'} = \kw{iter}[\kw{skip}]$.
We now define a form of path-errors suitable for updates, based on
replacing subexpressions with the trivial update $\kw{skip}$ instead of
the empty sequence $\texttt{()}$.
\begin{definition}
Suppose $\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$, where $s$ is distinctly
labeled. We say $s$ is \emph{unproductive} at $l$ provided
$\evalu{\gamma}{v}{s}{v'} \iff \evalu{\gamma}{v}{s|_l}{v'}$ for
every $\gamma \in \SB{\Gamma}, v \in \SB{\tau}, v' \in \SB{\tau'}$.
Recall that update evaluation is functional so this means that $s$
and $s|_l$ are equivalent over inputs from $\SB{\Gamma},\SB{\tau}$.
Moreover, we say that $s$ has an \emph{update path-error} at $l$
provided $s$ is unproductive at $l$ and $s[l] \neq \kw{skip}$, and say
that $s$ is \emph{update path-correct} if $s$ has no update path
errors.
\end{definition}
We define a static analysis for identifying update path-errors via the
rules in \refFig{path-analysis}. The main judgment is
$\wfupdeff{\Gamma}{a}{\tau}{s_l}{\tau'}{L}$, meaning \emph{$s$ is
well-formed and is unproductive at each $l \in L$}. We employ an
auxiliary judgment $\wfitereff{\Gamma}{\tau}{s_l}{\tau'}{L}$ to handle
iteration. We also define a ``conditional union'' operation:
\[
L[l_1,\ldots,l_n \Rightarrow l] = \left\{
\begin{array}{ll} L \cup \{l\} & \{l_1,\ldots,l_n\} \subseteq L \\
L & \text{otherwise}
\end{array}\right.
\]
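This operation transcribes directly into Python (taking sets of labels
as Python sets; the examples are ours):
\begin{verbatim}
# l joins L only if all of l1..ln are already known to be unproductive.
def cond_union(L, ls, l):
    return L | {l} if set(ls) <= L else L

assert cond_union({"l1", "l2"}, ["l1", "l2"], "l") == {"l1", "l2", "l"}
assert cond_union({"l1"}, ["l1", "l2"], "l") == {"l1"}
\end{verbatim}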
Note that the analysis is intraprocedural. It gives up when we
consider a procedure call $P(\vec{e})$: we conservatively assume that
there is no path error at $P(\vec{e})$, and we do not proceed to
analyze the body of $P$. We can, of course, extend the analysis to
declarations $\Delta$ by analyzing each procedure body individually.
We first establish that the analysis produces results whenever $s$ is
well-formed. This is straightforward by induction on derivations.
\begin{lemma}
~
\begin{enumerate}
\item If $\wfupd{\Gamma}{a}{\tau}{s}{\tau'}$ then there exists $L$ such
that $\wfupdeff{\Gamma}{a}{\tau}{s}{\tau'}{L}$.
\item If $\wfiter{\Gamma}{\tau}{s}{\tau'}$ then there exists $L$ such
that $\wfitereff{\Gamma}{\tau}{s}{\tau'}{L}$.
\end{enumerate}\end{lemma}
The goal of the analysis is to conservatively \emph{underestimate} the
set of possible unproductive locations in $s$.
\begin{theorem}[Path-Error Analysis Soundness]
~
\begin{enumerate}
\item If $\wfupdeff{\Gamma}{a}{\tau}{s}{\tau'}{L}$ then $s$ is
unproductive at every $l \in L$.
\item If $\wfitereff{\Gamma}{\tau}{s}{\tau'}{L}$ then $\kw{iter}[s]$ is
unproductive at every $l \in L$.
\end{enumerate}
\end{theorem}
Moreover, all of the labels in the set $\{l \in L\mid
s[l] \neq \kw{skip}\}$ can be reported as update path-errors and used
to optimize $s$ by replacing each $s[l]$ with $\kw{skip}$.
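A sketch of this optimization in Python, over an assumed labeled-tuple
representation of statements (each node is a tuple
\verb|(op, child..., label)|; the representation is ours, for
illustration):
\begin{verbatim}
# Rewrite every unproductive non-skip subexpression to skip.
def optimize(s, L):
    op, *mid, l = s
    if l in L and op != "skip":
        return ("skip", l)
    return (op,
            *(optimize(c, L) if isinstance(c, tuple) else c for c in mid),
            l)
\end{verbatim}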
Using more sophisticated rules for typechecking $\kw{let}$- and
$\kw{for}$-expressions, \cite{colazzo06jfp} were also able to show that
their path-error analysis is \emph{complete} for $\mu$XQ\xspace (without
recursion); thus, path-correctness is decidable for the fragment of
XQuery they studied --- a nontrivial result. Similar techniques can
be used to make update path-error analysis more precise, but it is not
obvious that this yields a complete analysis, even in the absence of
recursion. We leave this issue for future work.
It is, of course, also of interest to perform path-error analysis at
the source level, so that the errors can be reported in terms familiar
to the user. We believe that the path-error analysis can be
``lifted'' to the source type system, but leave this for future work.
However, it appears that many path-errors show up in the source type
system as empty substitutions $\Theta$ resulting from analyzing path
expressions.
\section{Related work}\labelSec{related}
\paragraph{Other database update languages}
\citet{DBLP:conf/ssdbm/LiefkeD99} introduced the update language CPL+, a typed
language for updating complex-object databases using path-based
insert, update, and delete operations. High-level CPL+ updates were
translated to a simpler core language with orthogonal operations for
iteration, navigation, insertion/deletion, and replacement. The \textsc{Flux}\xspace
core language was strongly influenced by CPL+.
\paragraph{Static typing for XML processing}
We will focus on only the most closely related work on XML
typechecking; \citet{DBLP:conf/icdt/MollerS05} provide a much more
complete survey of type systems for XML transformation languages.
\citet{hosoya05toplas} introduced XDuce, the first statically typed
XML transformation language based on regular expressions.
\citet{fernandez01icdt} introduced many of
the ideas for using XDuce-style regular expression subtyping for
typechecking an XML query language based on monadic comprehensions.
XQuery's type system~\cite{xquery-semantics-w3c-20070123} is also
based on regular expression types and subtyping, but its rules for
typechecking iteration are relatively imprecise: they discard
information about the order and multiplicity of the elements of a
sequence. As discussed by \citet{cheney08esop}, taking this approach
to typechecking iterations in \emph{updates} would be disastrous since
many updates iterate over a part of the database while leaving its
structure intact. \citet{colazzo06jfp} showed how to provide more
precise regular expression types to XQuery $\kw{for}$-iteration; we have
already discussed this work in the body of the paper.
\citet{cheney08esop} showed how to add subtyping and subsumption to
$\mu$XQ\xspace and \textsc{Flux}\xspace while retaining decidable typechecking.
\paragraph{XML update languages}
\cite{DBLP:conf/planX/Cheney07} provided a detailed discussion of XML update
language proposals and compared them with the \textsc{Flux}\xspace approach. Here,
we will only discuss closely related or more recent work.
\citet{calcagno05popl} investigated DOM-style XML updates using
\emph{context logic}, a logic of ``trees with holes''.
They studied a Hoare-style logic for sequences of
atomic update operations on unordered XML. \citet{gardner08pods}
extended this approach to ordered XML and while-programs over atomic
DOM updates. This approach is very promising for reasoning about
low-level DOM updates, for example in Java or JavaScript programs. It
should be possible to translate core \textsc{Flux}\xspace to their variant of DOM; it
would be interesting to see whether \textsc{Flux}\xspace type information can also be
compiled down to context logic in an appropriate way.
The W3C XQuery Update Facility \cite{xquery-update-w3c-10-20080314} has
been under development for several years. However, the typing rules in the
current draft treat updates as expressions of type $\texttt{()}$, and to
our knowledge this type system has not been proved sound.
\citet{ghelli07dbpl} have developed XQueryU, a variant of XQuery!\xspace
that is translated to an ``algebraic'' core language intended for
optimization. However, the semantics of the core language is
defined by translation back to XQueryU, which seems circular.
Static typechecking has not been studied for any other extant XML
update language proposals, even though the W3C's XQuery Update
Facility Requirements
document~\cite{xquery-update-requirements-w3c-062006} lists static
typechecking as a strong requirement. \textsc{Flux}\xspace shows that applying
well-known functional language design principles leads to a language
with a relatively simple semantics and relatively straightforward type
system.
Static analysis techniques have been studied for only a few of these
languages. \citet{DBLP:conf/cav/BenediktBFV05} and
\citet{DBLP:conf/ximep/BenediktBFV05} studied static analysis
techniques for optimizing updates in UpdateX, an earlier XML update
language proposal due to \citet{sur04planx}. \citet{ghelli07icdt}
have developed a commutativity analysis for determining when two
side-effecting expressions in XQuery!\xspace can be reordered.
No prior work has addressed path-error or dead
code analysis for XML updates.
The design goals of many of these
proposals differ from those that motivate this work. \textsc{Flux}\xspace is not
meant to be a full-fledged programming language for mutable XML data.
Instead, it is meant to play a role for XML and XQuery similar to that
of SQL's update facilities relative to relational databases and SQL.
Its goal is only to be expressive enough for typical updates to XML
databases while remaining simple and statically typecheckable.
\paragraph{Mutability in functional languages} \textsc{Flux}\xspace takes a ``purely
functional'' approach to typechecking updates. The type of an update
reflects the changes to the mutable store an update may make. This is
similar to side-effect encapsulation using monads or arrows in
Haskell. An alternative possibility might be to use ML-like
references. This could easily handle updates to parts of an XML
database whose type is fixed; however, handling updates that change
the \emph{type} of a part of the database would likely be problematic,
due to aliasing issues. \textsc{Flux}\xspace does not allow aliasing of the mutable
store, so avoids this problem.
\section{Extensions and future work}\labelSec{extensions}
\paragraph{Additional XQuery features}
To simplify the discussion, we have omitted features such as
attributes, comments, and processing instructions that are present in
the official XQuery data model, as well as the XPath axes needed to
access them. We have also omitted the many additional base types and
built-in functions (such as \verb|position()| or \verb|last()|)
present in full XQuery/XPath. All of these features can be added
without damaging the formal properties of the core language.
We have also omitted the descendant axis. Many DTDs and XML Schemas
encountered in database applications of XML are nonrecursive and
``shallow'' \cite{DBLP:conf/webdb/Choi02,bex04webdb}. Thus, in
practice, vertical recursion (the descendant axis \verb|//|) can
usually be avoided. Simple updates involving \verb|//| can, however,
be simulated using recursive update procedures; in fact, for
non-recursive input types, updates involving \verb|//| can often
already be expressed in \textsc{Flux}\xspace. Further work is needed to
understand the expressiveness and usability tradeoffs involved in
typechecking more complex updates involving recursive types.
\paragraph{Transformations}
The XQuery Update Facility \cite{xquery-update-w3c-10-20080314}
includes \emph{transformations}, which allow running an update
operation within an XQuery expression, with side-effects confined
(somewhat like \verb|runST| in Haskell). Such a facility can easily
be added using \textsc{Flux}\xspace updates:
\[e ::= \cdots \mid \transform{e}{s}\] with semantics and typing
rules:
\[
\infer{\Downarrow{\gamma}{\transform{e}{s}}{v'}}{\Downarrow{\gamma}{e}{v} &
\evalu{\gamma}{v}{s}{v'}} \quad
\infer{\wf{\Gamma}{\transform{e}{s}}{\tau_2}}{\wf{\Gamma}{e}{\tau_1} &
\wfupd{\Gamma}{*}{\tau_1}{s}{\tau_2}}
\]
\paragraph{Dynamic typechecking, incremental validation and maintenance}
We believe it is important to combine \textsc{Flux}\xspace-style static typechecking
with efficient dynamic techniques in order to handle cases where
static type information is imprecise.
\citet{DBLP:conf/icde/BarbosaMLMA04} and
\citet{DBLP:journals/tods/BalminPV04} have studied efficient
\emph{incremental validation} techniques for checking that sequences
of atomic updates preserve a database's schema. These techniques
impose (manageable, but nonzero) run-time costs per atomic update
operation and storage overhead proportional to the database size;
also, they require that the input and output types are equal, a
significant limitation compared to \textsc{Flux}\xspace.
\paragraph{Efficient implementation within XML databases}
We have built a prototype \textsc{Flux}\xspace interpreter in OCaml, in order to
validate our type system and normalization translation designs and
experiment with variations. The obvious next step is developing
efficient implementations of \textsc{Flux}\xspace, particularly within XML database
systems. \citet{DBLP:conf/ssdbm/LiefkeD99} investigated efficient
implementation techniques for CPL+ updates to complex-object
databases, which have much in common with XML databases. One initial
implementation strategy could simply be to generate XQuery!\xspace or
XQuery Updates from core \textsc{Flux}\xspace after normalization, typechecking and
high-level optimization; this should not be difficult since these
languages are more expressive than \textsc{Flux}\xspace. However, more sophisticated
techniques may be necessary to obtain good performance.
\section{Conclusions}\labelSec{concl}
The problem of updating XML databases poses many challenges to
language design. In previous work, we introduced \textsc{Flux}\xspace, a simple core
language for XML updates, inspired in large part by the language CPL+
introduced by \citet{DBLP:conf/ssdbm/LiefkeD99} for updating complex
object databases. In contrast to all other update proposals of which
we are aware, \textsc{Flux}\xspace preserves the good features of XQuery such as its
purely functional semantics, while offering features convenient for
updating XML. Moreover, \textsc{Flux}\xspace is the first proposal for updating XML
to be equipped with a sound, static type system.
In this paper we have further developed the foundations of \textsc{Flux}\xspace,
relaxing the limitations present in our preliminary proposal. First,
we have extended its operational semantics and type system to handle
\emph{recursive types and updates}. This turned out to be
straightforward. Second, although the \textsc{Flux}\xspace core language is easy to
understand, typecheck and optimize, it is not easy to use. Therefore,
we have developed a \emph{high-level source language} for updates, and
shown how to translate it to core \textsc{Flux}\xspace. Since it is difficult to
propagate useful type error information from translated updates back
to source updates, we have also developed a type system for the source
language, and validated its design by proving that the translation
both \emph{preserves} and \emph{reflects} typability. Third, we
developed a novel definition of \emph{update path-errors}, a form of
dead code analysis, and introduced a static analysis that identifies
them.
At present we have implemented a proof-of-concept prototype \textsc{Flux}\xspace
interpreter, including typechecking for the source language and core
language, normalization, and path-error analysis. There are many
possible directions for future work; the most immediate is to develop
efficient optimizing implementations of \textsc{Flux}\xspace within existing XML
databases or other XML-processing systems.
\small
\bibliographystyle{plainnat}
\section{Introduction}
Recently, Krenn and Zeilinger \cite{KZ} (hereafter referred to as KZ) have shown that situations can arise where the property of entanglement of quantum systems is itself an entangled property. For that purpose they considered a three-particle system described by the Greenberger-Horne-Zeilinger (GHZ) state \cite{GHZ}
\begin{equation}
|\Psi \rangle = \frac{1}{\sqrt{2}} (|z+ \rangle_1 |z+ \rangle_2 |z+ \rangle_3
+ |z- \rangle_1 |z- \rangle_2 |z- \rangle_3),
\end{equation}
where $|z+ \rangle_i$ ($|z- \rangle_i$) represents a state of spin-up (-down) for particle $i$, $i=1,2,3$, along some quantization $z$ axis which in general will differ from one particle to the other. We can regard the three particles as flying apart from a common source, each of them subsequently entering its own Stern-Gerlach apparatus oriented along an arbitrary measurement direction $\vec{e}_i$ in three-dimensional space, this direction being specified by the polar and azimuthal angles $\vartheta_i$ and $\varphi_i$. It is assumed that at the time of measurement the particles 1, 2, and 3 may be arbitrarily far apart so that the acts of measurement by respective observers 1, 2, and 3 can be considered to have a spacelike separation. KZ first showed that the correlation function $E_{12}$ obtained by unconditionally averaging the products of the results of the measurements on particles 1 and 2 factorizes into a product of two functions, one of them related to particle 1 and the other related to particle 2, so that we can always think of such results as being classically correlated. Next the authors examined a situation in which observer 3, independent of observers 1 and 2, performs spin measurements on particle 3 along direction $\vec{e}_3$. The results obtained by observer 3 are then used to classify the results of observers 1 and 2 into two distinct subensembles: whenever a result $+1$ ($-1$) is found for particle 3 in a particular run of the experiment, the corresponding results for particles 1 and 2 are assigned to subensemble $+$ ($-$). KZ demonstrated that an entanglement between particles 1 and 2 indeed occurs by showing that for certain measurement directions $\vec{e}_3$ the resulting correlation function $E^{+}_{12}$ ($E^{-}_{12}$) for subensemble $+$ ($-$) can yield a violation of Bell's inequality. Since the degree of entanglement within either subensemble $+$ or $-$ depends on the measurement direction $\vec{e}_3$, and due to the fact that the spin measurements are carried out on the particles in spacelike separated regions, KZ came to the conclusion that the property of entanglement depends on the whole measurement context and therefore becomes an entangled property itself.
In this paper we state the general condition for the violation of a Bell inequality involving the selected results in either one of the above-defined subensembles. In particular, constraints for a maximal violation are given. This is done in a way that explicitly shows the dependence of the entanglement of the subensembles on the setting of a measuring apparatus which can be located in a spacelike separated region. Furthermore, we extend the analysis of KZ to include a more general type of three-particle states than that appearing in Eq.\ (1). Specifically, we shall consider the class of states which can be written in the triorthogonal form \cite{Elby-Bub}
\begin{equation}
|\Psi \rangle = c_1 |z_1 \rangle |z_2 \rangle |z_3 \rangle
+ c_2 |-z_1 \rangle |-z_2 \rangle |-z_3 \rangle,
\end{equation}
where, for simplicity, the coefficients $c_1$ and $c_2$ are chosen real, with $c^{2}_{1} + c^{2}_{2} =1$, and where $|z_i \rangle$ ($|-z_i \rangle$) denotes the eigenvector of the spin operator along the $z$ axis for particle $i$, with eigenvalue $z_i =\pm1$ ($-z_i = \mp1$). Clearly, the state in Eq.\ (2) reduces to the GHZ state (1) when we put $c_1 =c_2 = 1/{\sqrt{2}}$, and $z_1 = z_2 = z_3 =+1$. However, for the class of states considered, it is shown that no violations of local realism can arise within the unconditional ensemble constructed by selecting the results obtained by two of the observers (say, by observers 1 and 2) irrespective of the result obtained in the corresponding measurement by the third observer. As will presently be seen, this unrestricted selection of the data provided by the results of observers 1 and 2 involves the incoherent mixture of two pure states $|\Psi^{+}_{12}\rangle$ and $|\Psi^{-}_{12}\rangle$ for particles 1 and 2, with weighting factors given by the probability of getting the result $z_3$ and $-z_3$, respectively, in a spin measurement on particle 3 along the axis $\vec{e}_3 (\vartheta_{3},\varphi_{3})$. Such a mixture, however, turns out to be completely equivalent to a mixture of two product states for particles 1 and 2, and then no violation of a Bell inequality will occur for the above-defined unconditional ensemble.
\section{Unconditional two-particle correlations for the class of triorthogonal states}
In order to determine under which conditions the correlation between the measurement results for particles 1 and 2 can lead to a violation of Bell's inequality it is convenient to introduce another set of basis states $\{|z_3 \rangle^{\textstyle \ast},|-z_3 \rangle^{\textstyle\ast}\}$ for particle 3, related to the original basis vectors by
\begin{mathletters}
\begin{equation}
|z_{3} \rangle = \cos \frac{\vartheta_3}{2} e^{iz_{3}\varphi_{3}/2} |z_{3} \rangle^{\textstyle \ast} - z_{3} \sin \frac{\vartheta_3}{2} e^{iz_{3}\varphi_{3}/2} |-z_{3} \rangle^{\textstyle\ast},
\end{equation}
\begin{equation}
|-z_{3} \rangle = z_{3} \sin \frac{\vartheta_3}{2} e^{-iz_{3}\varphi_{3}/2} |z_{3} \rangle^{\textstyle\ast}
+ \cos \frac{\vartheta_3}{2} e^{-iz_{3}\varphi_{3}/2} |-z_{3} \rangle^{\textstyle\ast}.
\end{equation}
\end{mathletters}
Using Eqs.\ (3a) and (3b) we can rewrite the three-particle state (2) as
\begin{equation}
|\Psi \rangle = p^{1/2}_{+}|\Psi^{+}_{12} \rangle |z_{3} \rangle^{\textstyle\ast}
+ p^{1/2}_{-}|\Psi^{-}_{12} \rangle |-z_{3} \rangle^{\textstyle\ast},
\end{equation}
where
\begin{mathletters}
\begin{eqnarray}
|\Psi^{+}_{12} \rangle &=& p^{-1/2}_{+} \left( c_1 \cos \frac{\vartheta_{3}}{2}
|z_1 \rangle |z_2 \rangle \right. \nonumber \\
&& + \left. c_2 z_3 \sin \frac{\vartheta_{3}}{2} e^{-iz_{3}\varphi_{3}}
|-z_1 \rangle |-z_2 \rangle \right),
\end{eqnarray}
\begin{eqnarray}
|\Psi^{-}_{12} \rangle &=& p^{-1/2}_{-} \left( -c_1 z_3 \sin \frac{\vartheta_{3}}{2}
|z_1 \rangle |z_2 \rangle \right. \nonumber \\
&& + \left. c_2 \cos \frac{\vartheta_{3}}{2} e^{-iz_{3}\varphi_{3}}
|-z_1 \rangle |-z_2 \rangle \right),
\end{eqnarray}
\end{mathletters}
and
\begin{mathletters}
\begin{eqnarray}
p_+ &=& c^{2}_{1} \cos^2 \left( \frac{\vartheta_{3}}{2} \right)
+ c^{2}_{2} \sin^2 \left( \frac{\vartheta_{3}}{2} \right), \\
p_- &=& c^{2}_{1} \sin^2 \left( \frac{\vartheta_{3}}{2} \right)
+ c^{2}_{2} \cos^2 \left( \frac{\vartheta_{3}}{2} \right).
\end{eqnarray}
\end{mathletters}
Since the basis vectors $|\pm z_3 \rangle$ represent definite spin $\pm z_3$ along the $z$ axis for particle 3, the basis vectors $|\pm z_3 \rangle^{\textstyle\ast}$ represent definite spin $\pm z_3$ for particle 3 along the direction characterized by the angles $\vartheta_3$ and $\varphi_3$. In view of Eq.\ (4) it thus follows that, provided $\vartheta_3 \neq n\pi$, $n=0,\pm1,\pm2,\ldots ,$ a spin measurement on particle 3 along (an otherwise arbitrary) direction $\vec{e}_3 (\vartheta_{3},\varphi_{3})$ leaves the particles 1 and 2 in an entangled state. Specifically, if the measurement result for particle 3 is found to be $z_3$, the normalized two-particle state for particles 1 and 2 becomes $|\Psi^{+}_{12}\rangle$, whereas for a measurement result equal to $-z_3$ it becomes $|\Psi^{-}_{12}\rangle$. However, from expression (4), we can see that the probability of obtaining the result $z_3$ ($-z_3$) in a spin measurement on particle 3 along the axis $\vec{e}_3 (\vartheta_{3},\varphi_{3})$ is $p_+$ ($p_-$). Of course we have $p_+ + p_- =1$. It will further be noted that the states $|\Psi^{+}_{12}\rangle$ and $|\Psi^{-}_{12}\rangle$ are orthogonal to each other provided that $|c_1|=|c_2|=1/{\sqrt{2}}$.
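As a quick numerical cross-check of this orthogonality claim (the check
itself and the sample parameter values are ours, not part of the
original analysis), one may verify with a few lines of Python that
$\langle \Psi^{-}_{12}|\Psi^{+}_{12}\rangle$ vanishes precisely when
$|c_1|=|c_2|$; we take $z_1=z_2=z_3=+1$ for concreteness:
\begin{verbatim}
import numpy as np

th3, ph3 = 1.1, 0.7                             # sample angles (ours)
zz = np.kron([1, 0], [1, 0]).astype(complex)    # |z1>|z2>
mm = np.kron([0, 1], [0, 1]).astype(complex)    # |-z1>|-z2>

def states(c1):
    c2 = np.sqrt(1 - c1**2)
    pp = c1**2*np.cos(th3/2)**2 + c2**2*np.sin(th3/2)**2   # Eq. (6a)
    pm = c1**2*np.sin(th3/2)**2 + c2**2*np.cos(th3/2)**2   # Eq. (6b)
    psi_p = (c1*np.cos(th3/2)*zz
             + c2*np.sin(th3/2)*np.exp(-1j*ph3)*mm)/np.sqrt(pp)
    psi_m = (-c1*np.sin(th3/2)*zz
             + c2*np.cos(th3/2)*np.exp(-1j*ph3)*mm)/np.sqrt(pm)
    return psi_p, psi_m

a, b = states(1/np.sqrt(2))
assert np.isclose(b.conj() @ a, 0)       # orthogonal when |c1| = |c2|
a, b = states(0.6)
assert not np.isclose(b.conj() @ a, 0)   # otherwise not orthogonal
\end{verbatim}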
If we analyze the results of observers 1 and 2 only when observer 3 obtains the result $z_3$ ($-z_3$) (i.e., if we restrict ourselves to either one of subensembles $+$ or $-$), then the appropriate state describing the spin of particles 1 and 2 will be the pure state $|\Psi^{+}_{12}\rangle$ ($|\Psi^{-}_{12}\rangle$). However, if we attempt to analyze the results of observers 1 and 2 irrespective of what result happens to be measured by observer 3 in the corresponding measurement along direction $\vec{e}_3$ (i.e., if we consider the total ensemble formed out of subensembles $+$ and $-$), then the appropriate state accounting for particles 1 and 2 will consist of a mixture of both pure states $|\Psi^{+}_{12}\rangle$ and $|\Psi^{-}_{12}\rangle$ with respective weights $p_+$ and $p_-$; that is,
\begin{equation}
\rho_{12} = p_+ |\Psi^{+}_{12}\rangle \langle \Psi^{+}_{12}|
+ p_- |\Psi^{-}_{12}\rangle \langle \Psi^{-}_{12}| .
\end{equation}
It is a rather simple matter to show that the density matrix (7) can also be decomposed in the form
\begin{equation}
\rho_{12} = c^{2}_{1} |\phi^{+}_{12}\rangle \langle \phi^{+}_{12}|
+ c^{2}_{2} |\phi^{-}_{12}\rangle \langle \phi^{-}_{12}| ,
\end{equation}
with $|\phi^{+}_{12}\rangle = |z_1\rangle |z_2\rangle$, and $|\phi^{-}_{12}\rangle = |-z_1\rangle |-z_2\rangle$. It should be noted at this point that both expressions (7) and (8) for $\rho_{12}$ can also be obtained by taking the partial trace of the density operator $|\Psi\rangle \langle\Psi|$ over the states corresponding to particle 3. So, on choosing the basis states for particle 3 to be $\{|z_3 \rangle^{\textstyle\ast},|-z_3 \rangle^{\textstyle\ast}\}$, we can put
\[
\rho_{12} = { }^{\textstyle\ast} \langle z_3 |\Psi\rangle\langle\Psi|z_3 \rangle^{\textstyle\ast}
+ { }^{\textstyle\ast} \langle -z_3 |\Psi\rangle\langle\Psi|-z_3 \rangle^{\textstyle\ast} .
\]
Replacing now $|\Psi\rangle$ by the state in Eq.\ (4) we obtain quickly the density matrix (7). Likewise, by tracing over the states $\{|z_3 \rangle,|-z_3 \rangle\}$ we get
\[
\rho_{12} = \langle z_3 |\Psi\rangle\langle\Psi|z_3 \rangle
+ \langle -z_3 |\Psi\rangle\langle\Psi|-z_3 \rangle ,
\]
which can be identified with the density matrix (8) upon substitution of $|\Psi\rangle$ by the state vector (2).
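The equivalence of the two decompositions (7) and (8) is also easy to
confirm numerically; the following Python sketch (with sample values of
$c_1$, $\vartheta_3$, and $\varphi_3$ chosen by us for concreteness, and
$z_1=z_2=z_3=+1$) checks it:
\begin{verbatim}
import numpy as np

c1, th3, ph3 = 0.6, 1.1, 0.7
c2 = np.sqrt(1 - c1**2)
zz = np.kron([1, 0], [1, 0]).astype(complex)
mm = np.kron([0, 1], [0, 1]).astype(complex)

pp = c1**2*np.cos(th3/2)**2 + c2**2*np.sin(th3/2)**2
pm = c1**2*np.sin(th3/2)**2 + c2**2*np.cos(th3/2)**2
psi_p = (c1*np.cos(th3/2)*zz
         + c2*np.sin(th3/2)*np.exp(-1j*ph3)*mm)/np.sqrt(pp)
psi_m = (-c1*np.sin(th3/2)*zz
         + c2*np.cos(th3/2)*np.exp(-1j*ph3)*mm)/np.sqrt(pm)

rho_7 = pp*np.outer(psi_p, psi_p.conj()) + pm*np.outer(psi_m, psi_m.conj())
rho_8 = c1**2*np.outer(zz, zz) + c2**2*np.outer(mm, mm)
assert np.allclose(rho_7, rho_8)                # Eq. (7) == Eq. (8)
\end{verbatim}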
The important point about the decomposition in Eq.\ (8) is that, as both $|\phi^{+}_{12}\rangle$ and $|\phi^{-}_{12}\rangle$ are product states, neither of them violates Bell's inequalities, and the same is therefore true for the mixed state $\rho_{12}$ \cite{Barnett-Phoenix}. Indeed, one can easily find that the quantum prediction for the unconditional correlation function of the results of spin measurements on particles 1 and 2 along directions $\vec{e}_1$ and $\vec{e}_2$, respectively, is
\begin{eqnarray}
E_{12}(\vec{e}_1,\vec{e}_2) &=& \text{Tr}[\rho_{12} \sigma(\vec{e}_1)\otimes
\sigma(\vec{e}_2)] \nonumber \\
&=& \pm \cos \vartheta_1 \cos \vartheta_2 ,
\end{eqnarray}
where the matrix $\rho_{12}$ used in Eq.\ (9) stands for either one of expressions (7) or (8), and where the $+$ ($-$) sign applies for $\text{sgn}z_1 = \text{sgn}z_2$ ($\text{sgn}z_1 \neq \text{sgn}z_2$). So, in order to search for genuinely quantal correlations involving the pair of particles 1 and 2 it is necessary to consider the measurement results pertaining to either subensemble $+$ or $-$ separately. As a result, in the following we shall deal with the pure state $|\Psi^{+}_{12}\rangle$ or $|\Psi^{-}_{12}\rangle$, rather than with the mixed state $\rho_{12}$.
It is worth noting that the inability of the entangled three-particle state (2) to yield nonlocal correlations involving the unconditional measurement results for two of the particles can be traced back to the orthogonality of the states $|z_i\rangle$ and $|-z_i\rangle$, for each $i=1,2,$ and 3. So we can say that the lack of such nonlocal, unconditional two-particle correlations indeed constitutes a significant feature inherent in the class of triorthogonal states. In Sec.\ IV we shall generalize this result to the class of $n$-orthogonal states.
\section{Maximal violation of the CHSH inequality: necessary conditions}
The Clauser-Horne-Shimony-Holt (CHSH) form of the Bell inequality is \cite{CHSH}
\begin{equation}
|\langle B_{\text{CHSH}}\rangle | \leq 2 \, ,
\end{equation}
where $\langle B_{\text{CHSH}}\rangle$ denotes the expectation value of the Bell operator \cite{BMR}
\begin{eqnarray}
B_{\text{CHSH}} &\equiv & \sigma(\vec{e}_1) \otimes [\sigma(\vec{e}_2) + \sigma(\vec{e}\,%
^{\prime}_{2})] + \sigma(\vec{e}\,^{\prime}_{1}) \nonumber \\
&& \otimes \, [\sigma(\vec{e}_2) - \sigma(\vec{e}\,^{\prime}_{2})] \, ,
\end{eqnarray}
with $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$ denoting two different alternative directions for spin measurements on particle $i$. As long as $\vartheta_3 \neq n\pi$, either two-particle state (5a) or (5b) will be able to violate the CHSH inequality for a suitable choice of measurement directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, and $\vec{e}\,^{\prime}_{2}$ \cite{Gisin,PR}. Now, the quantum prediction for the correlation function $E^{+}_{12}$ ($E^{-}_{12}$) associated with subensemble $+$ ($-$) is given by
\begin{eqnarray}
E^{\pm}_{12}(\vec{e}_1,\vec{e}_2) &=& \langle \Psi^{\pm}_{12}|\sigma(\vec{e}_1) \otimes \sigma(\vec{e}_2) |\Psi^{\pm}_{12}\rangle \nonumber \\
&=& \gamma \cos\vartheta_1 \cos\vartheta_2 \pm z_3 (c_1 c_2 /p_{\pm}) \sin\vartheta_1
\sin\vartheta_2 \nonumber \\
&& \times\sin\vartheta_3 \cos (\varphi_1 +\gamma\varphi_2 +z_1 z_3 \varphi_3),
\end{eqnarray}
where $\gamma$ is a sign factor with value $+1$ ($-1$) for $\text{sgn}z_1 = \text{sgn}z_2$ ($\text{sgn}z_1 \neq \text{sgn}z_2$). Incidentally, it will be noted that for $\vartheta_3 = n\pi$ we have $E^{\pm}_{12} = \gamma \cos\vartheta_1 \cos\vartheta_2$, and hence no violation of Bell's inequality is possible for any choice of directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, $\vec{e}\,^{\prime}_{2}$. Of course this arises from the fact that the superposition state (2) reduces to the unentangled term $|z_1 \rangle |z_2 \rangle |z_3 \rangle$ or $|-z_1 \rangle |-z_2 \rangle |-z_3 \rangle$ whenever a spin measurement along the $z$ axis is performed on particle 3. Further, as expected, no violation can occur for either $c_1 =0$ or $c_2 =0$ (i.e., for product states). Therefore, for general directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, $\vec{e}\,^{\prime}_{2}$, and $\vec{e}_3$, the relevant predictions by quantum mechanics will violate inequality (10) if
\begin{eqnarray}
|\gamma &[& \cos\vartheta_1 \cos\vartheta_2 + \cos\vartheta_1
\cos\vartheta^{\prime}_{2} + \cos\vartheta^{\prime}_{1} \cos\vartheta_2 \nonumber \\
&& - \cos\vartheta^{\prime}_{1} \cos\vartheta^{\prime}_{2}]
\pm z_3 (c_1 c_2 /p_{\pm}) [\sin\vartheta_1 \sin\vartheta_2 \sin\vartheta_3 \nonumber \\
&& \times \cos (\varphi_1 +\gamma \varphi_2+z_1 z_3 \varphi_3) +\sin\vartheta_1 \sin\vartheta^{\prime}_{2} \sin\vartheta_3 \nonumber \\
&& \times \cos (\varphi_1 +\gamma\varphi^{\prime}_{2} +z_1 z_3 \varphi_3)
+ \sin\vartheta^{\prime}_{1} \sin\vartheta_2 \sin\vartheta_3 \nonumber \\
&& \times \cos (\varphi^{\prime}_{1} +\gamma\varphi_2 +z_1 z_3 \varphi_3)
- \sin\vartheta^{\prime}_{1} \sin\vartheta^{\prime}_{2} \sin\vartheta_3 \nonumber \\
&& \times \cos (\varphi^{\prime}_{1} +\gamma\varphi^{\prime}_{2} +z_1 z_3 \varphi_3)] | > 2.
\end{eqnarray}
It should be realized that, by suitably choosing the $z$ axis for each of the particles, any entangled state for two spin-$\frac{1}{2}$ particles can be put into the form displayed by either one of Eqs.\ (5a) or (5b). So the above condition (13) for the violation of the CHSH inequality turns out to be completely general as far as pure states are concerned. For the special case in which $\vartheta^{\prime}_{1} = \vartheta_1$, $\vartheta^{\prime}_{2} = \vartheta_2$, $\varphi^{\prime}_{1} = \varphi_1 +\pi/2$, $\gamma\varphi^{\prime}_{2} = \gamma\varphi_2 +\pi/2$, and $\varphi_1 +\gamma\varphi_2 +z_1 z_3 \varphi_3 = 3\pi/4 +n\pi$, condition (13) simplifies to
\begin{equation}
|\gamma \cos\vartheta_1 \cos\vartheta_2 \pm \mu z_3 (c_1 c_2 /p_{\pm}) \sqrt{2}
\sin\vartheta_1 \sin\vartheta_2 \sin\vartheta_3 | >1,
\end{equation}
where $\mu$ is a sign factor equal to $+1$ ($-1$) for $n$ odd (even). From expression (14) it follows that, as long as $\vartheta^{\prime}_{1} = \vartheta_1$ and $\vartheta^{\prime}_{2} = \vartheta_2$, a maximal violation of the CHSH inequality occurs provided that (i) $|c_1|=|c_2|=1/\sqrt{2}$, and (ii) $\vartheta_1 =\vartheta_2 =\vartheta_3 =\pi/2$. Clearly, condition (ii) entails that all measurement directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, $\vec{e}\,^{\prime}_{2}$, and $\vec{e}_3$ must lie in the $x$-$y$ plane.
A few remarks should be added here. In the first place, it is to be noted that the requirements
$|c_1|=|c_2|=1/\sqrt{2}$ and $\vartheta_3 =\pi/2$ together imply that either state (5a) or (5b) is maximally entangled. This is consistent with the fact that a maximally entangled state not only gives the maximum violation of the CHSH inequality but also gives the {\it largest\/} violation attainable for any choice of the four spin observables $\sigma(\vec{e}_1)$, $\sigma(\vec{e}\,^{\prime}_{1})$, $\sigma(\vec{e}_2)$, and $\sigma(\vec{e}\,^{\prime}_{2})$ (provided $\vec{e}_1 \!\nparallel \! \vec{e}\,^{\prime}_{1}$, and $\vec{e}_2 \!\nparallel \! \vec{e}\,^{\prime}_{2}$) \cite{Kar}. Therefore, both condition (i) and the requirement $\vartheta_3 = \pi/2$ turn out to be absolutely necessary in order to achieve the largest violation, no matter what the orientation of $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, and $\vec{e}\,^{\prime}_{2}$ may be. However, for the special case leading to Eq.\ (14) we have that the two measurement directions $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$ for particle $i$ ($i=1,2$) giving the maximal violation are perpendicular to each other. That this orthogonality condition is not incidental can be seen by computing the square of the Bell operator (11). This is given by \cite{Kar}
\begin{equation}
B^{2}_{\text{CHSH}} = 4(I+\sin\theta_1 \sin\theta_2 \, \sigma_{\perp 1} \otimes
\sigma_{\perp 2}),
\end{equation}
where $\theta_i$ is the angle included between the vectors $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$, and $\sigma_{\perp i}$ denotes the spin operator for particle $i$ along the direction perpendicular to the plane containing $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$. From Eq.\ (15) it follows that the largest eigenvalue of $B_{\text{CHSH}}$ is (in terms of the absolute value) $\lambda_l = 2(1+|\sin\theta_1 \sin\theta_2|)^{1/2}$, so that a maximum violation is obtained when both $\theta_1$ and $\theta_2$ are $\pi/2$ (mod $\pi$). The eigenvector associated with $\lambda_l$ will consist of a superposition of the states $|\sigma_{\perp 1}\rangle|\sigma_{\perp 2}\rangle$ and $|-\sigma_{\perp 1}\rangle|-\sigma_{\perp 2}\rangle$ with an appropriate relative phase between them, where $|\sigma_{\perp i}\rangle$ ($|-\sigma_{\perp i}\rangle$) denotes the eigenvector of $\sigma_{\perp i}$ with eigenvalue $\sigma_{\perp i}=\pm 1$ ($-\sigma_{\perp i}=\mp 1$), and where $\text{sgn}\sigma_{\perp 1} =\text{sgn}\sigma_{\perp 2}$ ($\text{sgn}\sigma_{\perp 1} \neq \text{sgn}\sigma_{\perp 2}$) for $\text{sgn}(\sin\theta_1) = \text{sgn}(\sin\theta_2)$ [$\text{sgn}(\sin\theta_1) \neq \text{sgn}(\sin\theta_2)$]. As was mentioned, such a superposition state has to be completely entangled in order to get the largest possible violation \cite{Kar}.
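As an aside, the identity (15) and the resulting largest eigenvalue are easy to verify numerically. The following small sketch (our addition, not part of the original argument; the angle values are arbitrary, the directions are taken in the $x$-$z$ plane so that $\sigma_{\perp i}=\sigma_y$, and $\theta_i$ is understood as the signed angle from $\vec{e}_i$ to $\vec{e}\,^{\prime}_{i}$) confirms both statements:
\begin{verbatim}
import numpy as np

# Numerical check of Eq. (15); all angles below are arbitrary choices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma(theta):            # spin along (sin theta, 0, cos theta)
    return np.sin(theta)*sx + np.cos(theta)*sz

t1, t1p, t2, t2p = 0.3, 1.2, 0.8, 2.5
B = (np.kron(sigma(t1), sigma(t2) + sigma(t2p))
     + np.kron(sigma(t1p), sigma(t2) - sigma(t2p)))
s1, s2 = np.sin(t1p - t1), np.sin(t2p - t2)
print(np.allclose(B @ B, 4*(np.eye(4) + s1*s2*np.kron(sy, sy))))  # True
lam = np.max(np.abs(np.linalg.eigvalsh(B)))
print(np.isclose(lam, 2*np.sqrt(1 + abs(s1*s2))))                 # True
\end{verbatim}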
A question that naturally arises is whether the above conditions on the measurement directions, namely, $\vartheta_1 = \vartheta^{\prime}_{1} =\pi/2$, $\vartheta_2 = \vartheta^{\prime}_{2} =\pi/2$, $\varphi^{\prime}_{1} = \varphi_1 +\pi/2$, $\gamma\varphi^{\prime}_{2} = \gamma\varphi_2 +\pi/2$, and $\varphi_1 +\gamma\varphi_2 +z_1 z_3 \varphi_3 = 3\pi/4 +n\pi$ exhaust all the possibilities for a maximal violation of the CHSH inequality. The answer is certainly no. Indeed, there are infinitely many ways of achieving such a maximum. This follows directly from the fact that the Schmidt decomposition of a maximally entangled state is not unique. So, for example, consider the state (5a) with $c_1 = -c_2 = 1/\sqrt{2}$, $\vartheta_3 =\pi/2$, $\varphi_3 =0$, $z_1 =-z_2 =+1$, and $z_3 =+1$. The resulting singlet state can equally be expressed in an infinite number of alternative forms by replacing the quantization $z$ axis with any other unit vector $\mathbf{n}$ in three-dimensional space. Writing it, for instance, as
\begin{equation}
|\Psi^{+}_{12}\rangle = \frac{1}{\sqrt{2}} (|y+\rangle_1 |y-\rangle_2
- |y-\rangle_1 |y+\rangle_2 ),
\end{equation}
where $|y+\rangle_1$ represents spin-up along the $y$ axis for particle 1. Hence, according to the previous paragraph, there must be measurement directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, and $\vec{e}\,^{\prime}_{2}$ in the $x$-$z$ plane (with $\vec{e}_1 \perp \vec{e}\,^{\prime}_{1}$ and $\vec{e}_2 \perp \vec{e}\,^{\prime}_{2}$) such that inequality (10) is maximally violated for the singlet state. As an example, take the choice $\vartheta_1 =0$, $\varphi_1 =0$, $\vartheta^{\prime}_{1}=\pi/2$, $\varphi^{\prime}_{1}=0$, $\vartheta_2 =\pi/4$, $\varphi_2 =0$, $\vartheta^{\prime}_{2}=-\pi/4$, and $\varphi^{\prime}_{2}=0$. Substituting these values (together with $c_1 = -c_2 = 1/\sqrt{2}$, $\vartheta_3 =\pi/2$, $\varphi_3 =0$, $\gamma =-1$, and $z_3 =+1$) into the left-hand side of inequality (13) gives the maximum violation $2\sqrt{2}$. Since the unit vector $\mathbf{n}$ is quite arbitrary, we conclude that there are infinitely many sets of directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, and $\vec{e}\,^{\prime}_{2}$ satisfying the equality
\begin{eqnarray}
|&& \cos \vartheta_1 \cos\vartheta_2 + \cos\vartheta_1
\cos\vartheta^{\prime}_{2} + \cos\vartheta^{\prime}_{1} \cos\vartheta_2 \nonumber \\
&& - \cos\vartheta^{\prime}_{1} \cos\vartheta^{\prime}_{2}
+ \sin\vartheta_1 \sin\vartheta_2 \cos(\varphi_1 - \varphi_2) \nonumber \\
&& + \sin\vartheta_1 \sin\vartheta^{\prime}_{2} \cos(\varphi_1 - \varphi^{\prime}_{2})
+ \sin\vartheta^{\prime}_{1} \sin\vartheta_2 \nonumber \\
&& \times \cos(\varphi^{\prime}_{1} - \varphi_2) - \sin\vartheta^{\prime}_{1}
\sin\vartheta^{\prime}_{2} \cos(\varphi^{\prime}_{1} - \varphi^{\prime}_{2})|
= 2 \sqrt{2}.
\end{eqnarray}
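For concreteness (this verification is ours), one may check the equality term by term for the example quoted above: with $\vartheta_1 =0$, $\vartheta^{\prime}_{1}=\pi/2$, $\vartheta_2 =\pi/4$, $\vartheta^{\prime}_{2}=-\pi/4$, and all azimuthal angles equal to zero, the left-hand side of (17) reduces to
\begin{displaymath}
\left| \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} + 0 - 0 + 0 + 0 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \right| = \frac{4}{\sqrt{2}} = 2\sqrt{2} .
\end{displaymath}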
Another example which provides the maximum level of violation is $\varphi_1 =\varphi^{\prime}_{1} =\varphi_2 =\varphi^{\prime}_{2} =\varphi_{0}$, $\vartheta_1 =\vartheta_{0} - \pi/4$, $\vartheta^{\prime}_{1}=\vartheta_0 + \pi/4$, $\vartheta_2 =\vartheta_0$, and $\vartheta^{\prime}_{2}=\vartheta_0 -\pi/2$, with $\varphi_0$ and $\vartheta_0$ taking on any arbitrary value. Note that this example includes the previous one when we make $\varphi_0 =0$ and $\vartheta_0 =\pi/4$. In any case, as was shown, to achieve the maximum violation it is necessary that the vectors $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$ ($i=1,2$) be perpendicular to each other. This orthogonality condition, however, is not a sufficient one, as is clear from the preceding example (indeed, for this example, in addition to this condition it is necessary that $\vartheta_2 =\vartheta_1 +\pi/4$). Note also that the directions giving the largest violation of the CHSH inequality for the singlet state must satisfy in all cases the constraint $\text{sgn}(\sin\theta_1) \neq \text{sgn}(\sin\theta_2)$. This is because the singlet state is an eigenvector of the operator $\sigma_{n1}\otimes\sigma_{n2}$ (where $\sigma_{ni}$ denotes the spin operator along the $\mathbf{n}$ axis for particle $i$) with eigenvalue $-1$, and so, from Eq.\ (15), the factors $\sin\theta_1$ and $\sin\theta_2$ must be opposite in sign in order to get the largest violation.
Consider now the state (5a) with $c_1 = c_2 = 1/\sqrt{2}$, $\vartheta_3 =\pi/2$, $\varphi_3 =0$, $z_1 =-z_2 =+1$, and $z_3 =+1$. Unlike the singlet state, the resulting triplet state is not rotationally invariant. However, it can also be written in infinitely many equivalent biorthogonal forms. Writing it, for instance, as
\begin{equation}
|\Psi^{+}_{12}\rangle = \frac{1}{\sqrt{2}} (|x+\rangle_1 |x+\rangle_2
- |x-\rangle_1 |x-\rangle_2 ),
\end{equation}
where $|x+\rangle_1$ represents spin-up along the $x$ axis for particle 1. So, there will be measurement directions $\vec{e}_1$, $\vec{e}\,^{\prime}_{1}$, $\vec{e}_2$, and $\vec{e}\,^{\prime}_{2}$ (with $\vec{e}_1 \perp \vec{e}\,^{\prime}_{1}$ and $\vec{e}_2 \perp \vec{e}\,^{\prime}_{2}$) in the $y$-$z$ plane allowing maximal violation of inequality (10) for the triplet state. As an example, take the choice $\vartheta_1 =0$, $\varphi_1 =\pi/2$, $\vartheta^{\prime}_{1}=-\pi/2$, $\varphi^{\prime}_{1}=\pi/2$, $\vartheta_2 =\pi/4$, $\varphi_2 =\pi/2$, $\vartheta^{\prime}_{2}=-\pi/4$, and $\varphi^{\prime}_{2}=\pi/2$. As may easily be checked, these values fulfill the equality
\begin{eqnarray}
|&& \cos \vartheta_1 \cos\vartheta_2 + \cos\vartheta_1
\cos\vartheta^{\prime}_{2} + \cos\vartheta^{\prime}_{1} \cos\vartheta_2 \nonumber \\
&& - \cos\vartheta^{\prime}_{1} \cos\vartheta^{\prime}_{2}
- \sin\vartheta_1 \sin\vartheta_2 \cos(\varphi_1 - \varphi_2) \nonumber \\
&& - \sin\vartheta_1 \sin\vartheta^{\prime}_{2} \cos(\varphi_1 - \varphi^{\prime}_{2})
- \sin\vartheta^{\prime}_{1} \sin\vartheta_2 \nonumber \\
&& \times \cos(\varphi^{\prime}_{1} - \varphi_2) + \sin\vartheta^{\prime}_{1}
\sin\vartheta^{\prime}_{2} \cos(\varphi^{\prime}_{1} - \varphi^{\prime}_{2})|
= 2 \sqrt{2}.
\end{eqnarray}
Actually, having established that there exist infinitely many sets of directions satisfying the equality (17) for the singlet state, it is immediate to see that the same holds for the above equality (19) corresponding to the triplet state. Indeed, comparing the expressions in Eqs.\ (17) and (19), it follows at once that if the set of directions $\vec{e}_1(\vartheta_1,\varphi_1)$, $\vec{e}\,^{\prime}_{1}(\vartheta^{\prime}_{1},\varphi^{\prime}_{1})$, $\vec{e}_2(\vartheta_2,\varphi_2)$, and $\vec{e}\,^{\prime}_{2}(\vartheta^{\prime}_{2}, \varphi^{\prime}_{2})$ fulfill the equality (17), then the set of directions $\vec{e}_1(-\vartheta_1,\varphi_1)$, $\vec{e}\,^{\prime}_{1}(-\vartheta^{\prime}_{1}, \varphi^{\prime}_{1})$, $\vec{e}_2(\vartheta_2,\varphi_2)$, and $\vec{e}\,^{\prime}_{2}(\vartheta^{\prime}_{2}, \varphi^{\prime}_{2})$ satisfies the equality (19) [alternatively, the set of directions $\vec{e}_1(\vartheta_1,\varphi_1)$, $\vec{e}\,^{\prime}_{1}(\vartheta^{\prime}_{1}, \varphi^{\prime}_{1})$, $\vec{e}_2(-\vartheta_2,\varphi_2)$, and $\vec{e}\,^{\prime}_{2}(-\vartheta^{\prime}_{2}, \varphi^{\prime}_{2})$ will also do].
A reasoning similar to that developed for the singlet and triplet states could equally be established for any other state of the form (5a) or (5b) for which $|c_1|=|c_2|=1/\sqrt{2}$ and $\vartheta_3 =\pi/2$, thereby showing that for every maximally entangled state there are infinitely many sets of directions giving the maximal violation of the CHSH inequality. As we have already said, such infinity of directions arises due to the {\it nonuniqueness\/} of the Schmidt decomposition of a maximally entangled state.
\section{Concluding remarks}
Lastly, a further remark is in order about the relationship between the conditions needed to maximally violate the CHSH inequality and those required by the three-particle state (2) to produce a direct (``all or nothing'') nonlocality contradiction \cite{GHZ,GHSZ}. As pointed out by KZ, it is remarkable that the same type of spin measurements involved in the two-particle case also forms the basis of the argument leading to the GHZ contradiction \cite{GHSZ}. This connection can be best appreciated when we generalize the CHSH inequality to a measurement scheme involving three spin-$\frac{1}{2}$ particles. The appropriate inequality is of the form \cite{Hardy}
\begin{equation}
|\langle B_{\text H} \rangle | \leq 2 \, ,
\end{equation}
where now the relevant Bell operator is
\begin{eqnarray}
B_{\text H} \equiv &[& \sigma(\vec{e}_1)\otimes \sigma(\vec{e}\,^{\prime}_{2}) +
\sigma(\vec{e}\,^{\prime}_{1})\otimes \sigma(\vec{e}_{2})]\otimes
\sigma(\vec{e}\,^{\prime}_{3}) \nonumber \\
&& + \, [\sigma(\vec{e}\,^{\prime}_{1})\otimes \sigma(\vec{e}\,^{\prime}_{2}) -
\sigma(\vec{e}_1)\otimes \sigma(\vec{e}_{2})]\otimes
\sigma(\vec{e}_{3}) .
\end{eqnarray}
As in the two-particle case, it can be shown \cite{Cereceda} that in order for the three-particle state (2) to yield the largest violation of the inequality (20) it is necessary that $|c_1|=|c_2|=1/\sqrt{2}$. However, the largest eigenvalue of the Bell operator (21) is given by \cite{Cereceda}
\begin{eqnarray}
\lambda_l = && 2 \, (1+|\sin\theta_1 \sin\theta_2| + |\sin\theta_2 \sin\theta_3| \nonumber \\
&& + \, |\sin\theta_1 \sin\theta_3|)^{1/2} ,
\end{eqnarray}
so that a maximum violation occurs provided that $\vec{e}_i \perp \vec{e}\,^{\prime}_{i}$ for each $i=1,2,3$. The corresponding eigenvector will consist of an equally weighted superposition of the states $|\sigma_{\perp 1}\rangle|\sigma_{\perp 2}\rangle|\sigma_{\perp 3}\rangle$ and $|-\sigma_{\perp 1}\rangle|-\sigma_{\perp 2}\rangle|-\sigma_{\perp 3}\rangle$ with a relative phase between them. As before, $\theta_i$ is the angle between the vectors $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$, and $|\sigma_{\perp i}\rangle$ ($|-\sigma_{\perp i}\rangle$) denotes the eigenvector [with eigenvalue $\sigma_{\perp i}=\pm 1$ ($-\sigma_{\perp i}=\mp 1$)] of the spin operator for particle $i$ along the direction perpendicular to both $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$. Likewise, the relative signs of the eigenvalues $\sigma_{\perp 1}$, $\sigma_{\perp 2}$, and $\sigma_{\perp 3}$ entering the superposition will depend on the relative signs of $\sin\theta_1$, $\sin\theta_2$, and $\sin\theta_3$. [Note that the maximum amount of violation of inequality (20) predicted by quantum mechanics is by a factor of 2 instead of the factor $\sqrt{2}$ achieved in the CHSH inequality. This fact is consistent with the existence of Bell-type inequalities which yield a violation increasing exponentially with the number of particles \cite{Mermin1}.] The point we want to emphasize here is that, as shown by Hardy \cite{Hardy}, a maximum violation of inequality (20) always entails a contradiction of the GHZ type. A simple illustration of this statement is provided by the choice $\sigma(\vec{e}_i)\equiv \sigma(x_i)$ and $\sigma(\vec{e}\,^{\prime}_{i})\equiv \sigma(y_i)$, $i=1,2,3$, where $\sigma(x_i)$ [$\sigma(y_i)$] denotes the spin operator along the $x$ axis ($y$ axis) for particle $i$. For this choice of operators we find that $|\langle B_{\text H} \rangle| =4$ whenever the expectation value is evaluated for the state vector (1). As is well known, both the operators $\sigma(x_i)$ and $\sigma(y_i)$, and the state vector (1), form the basis of Mermin's exposition on the GHZ theorem \cite{Mermin2}.\footnote{%
Actually, the state employed by Mermin was $|\phi\rangle = 1/\sqrt{2}\,(|z+ \rangle_1 |z+ \rangle_2 |z+ \rangle_3 - |z- \rangle_1 |z- \rangle_2 |z- \rangle_3)$. For this state, and for the above choice of operators, we have $\langle B_{\text H} \rangle =4$, whereas for the state in Eq.\ (1) we have $\langle B_{\text H} \rangle =-4$. In any case, both $|\phi \rangle$ and the state vector (1) provide the maximum violation of inequality (20) in terms of the absolute value, and both states can lead to the GHZ contradiction.}
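The quoted values are also quickly confirmed numerically. In the following sketch (our addition; the state vector (1) is assumed here to have the standard GHZ form $(|z_1\rangle |z_2\rangle |z_3\rangle + |-z_1\rangle |-z_2\rangle |-z_3\rangle)/\sqrt{2}$ with all $z_i =+1$), we evaluate $\langle B_{\text H}\rangle$ for both states:
\begin{verbatim}
import numpy as np

# <B_H> for sigma(e_i) = sigma_x, sigma(e_i') = sigma_y on the GHZ states.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def k3(a, b, c):
    return np.kron(np.kron(a, b), c)

BH = k3(sx, sy, sy) + k3(sy, sx, sy) + k3(sy, sy, sx) - k3(sx, sx, sx)

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
uuu = np.kron(np.kron(up, up), up)
ddd = np.kron(np.kron(dn, dn), dn)
phi = (uuu - ddd)/np.sqrt(2)   # Mermin's state |phi>
psi = (uuu + ddd)/np.sqrt(2)   # assumed form of the state (1)
print(np.real(phi.conj() @ BH @ phi))   # 4.0
print(np.real(psi.conj() @ BH @ psi))   # -4.0
\end{verbatim}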
It is worth noting that the plane containing the measurement directions $\vec{e}_i$ and $\vec{e}\,^{\prime}_{i}$ ($i=1,2,3$) giving the largest violation of inequality (20) is fixed by the quantum state since, in contrast to the two-particle case, the triorthogonal decomposition in Eq.\ (2) is unique even if the coefficients $c_1$ and $c_2$ are equal \cite{Elby-Bub}.
We conclude by noting that the present treatment regarding the three-particle state (2) can be readily extended to $n$-particle states ($n \geq 3$) of the form
\begin{equation}
|\Psi\rangle = c_1 |z_1\rangle |z_2\rangle \cdots |z_n\rangle +
c_2 |-z_1\rangle |-z_2\rangle \cdots |-z_n\rangle .
\end{equation}
Indeed, by letting $|z_j\rangle^{\textstyle\ast}$ be
\begin{equation}
|z_j\rangle^{\textstyle\ast} = \cos\frac{\vartheta_j}{2} e^{-iz_{j}\varphi_{j}/2}
|z_j\rangle + z_j \sin\frac{\vartheta_j}{2} e^{iz_{j}\varphi_{j}/2} |-z_j\rangle ,
\end{equation}
with $j=N+1, N+2,\ldots,n,$ and $N=2,3,\ldots,\mbox{$n-1$},$ and projecting $|\Psi\rangle$ onto the direct product state $|z_{N+1}\rangle^{\textstyle\ast} |z_{N+2}\rangle^{\textstyle\ast} \cdots |z_{n}\rangle^{\textstyle\ast}$, one finds that the resulting $N$-particle state $|\Psi^{+}_{12 \cdots N} \rangle$,
\begin{equation}
p^{1/2}_{+} |\Psi^{+}_{12 \cdots N} \rangle = { }^{\textstyle\ast}\langle z_{N+1}|
{ }^{\textstyle\ast}\langle z_{N+2}| \cdots { }^{\textstyle\ast}\langle z_n |\Psi\rangle ,
\end{equation}
is entangled for every choice of $|z_j\rangle^{\textstyle\ast}$ except for the case in which $\vartheta_j$ happens to be a multiple of $\pi$. In Eq.\ (25), $p_+$ is a normalization factor given by
\begin{equation}
p_+ = c^2_1 \prod^{n}_{j=N+1} \cos^2 \left( \frac{\vartheta_j}{2} \right)
+ c^2_2 \prod^{n}_{j=N+1} \sin^2 \left( \frac{\vartheta_j}{2} \right) ,
\end{equation}
while the state vector $|\Psi^{+}_{12 \cdots N} \rangle$ is found to be
\begin{eqnarray}
|\Psi^{+}_{12 \cdots N} \rangle & = & p^{-1/2}_{+} \left[ c_1 \cos\frac{\vartheta_{N+1}}
{2} \cos\frac{\vartheta_{N+2}}{2} \cdots \cos\frac{\vartheta_{n}}{2} \right. \nonumber \\
&& \times |z_1\rangle |z_2\rangle \cdots |z_N\rangle + c_2 (z_{N+1})(z_{N+2}) \cdots
(z_n) \nonumber \\
&& \times \sin\frac{\vartheta_{N+1}}{2} \sin\frac{\vartheta_{N+2}}{2} \cdots \sin\frac{\vartheta_{n}}{2} \nonumber \\
&& \times e^{-i(z_{N+1}\varphi_{N+1}+z_{N+2}\varphi_{N+2}+ \cdots +
z_{n}\varphi_{n})} \nonumber \\
&& \times |-z_1\rangle |-z_2\rangle \cdots |-z_N\rangle \biggr] .
\end{eqnarray}
Clearly, expressions (26) and (27) reduce to Eqs.\ (6a) and (5a), respectively, when we take $N=2$ and $n=3$. Similarly, by taking the partial trace of $|\Psi\rangle\langle\Psi|$ over the states corresponding to particles $N+1, N+2,\ldots, n,$ one finds that the reduced density matrix associated with the remaining $N$-particle system is
\begin{equation}
\rho_{12 \cdots N} = c^2_1 |\phi^{+}_{12 \cdots N}\rangle\langle \phi^{+}_{12 \cdots N}|
+ c^2_2 |\phi^{-}_{12 \cdots N}\rangle\langle \phi^{-}_{12 \cdots N}| ,
\end{equation}
with $|\phi^{+}_{12 \cdots N}\rangle$ and $|\phi^{-}_{12 \cdots N}\rangle$ being $|z_1\rangle
|z_2\rangle \cdots |z_N\rangle$ and $|-z_1\rangle |-z_2\rangle \cdots |-z_N\rangle$, respectively. This implies that no violation of local realism can result from joint measurements performed on particles $1,2,\ldots,N$ alone if such measurements are made without any commitment to the results obtained for particles $N+1,N+2,\ldots,n$ (as a matter of fact, no actual measurements need to be performed on particles $N+1,N+2,\ldots,n$ if we are looking at the unconditional correlation function for particles $1,2,\ldots,N,$ since the latter is, by definition, fully independent of whatever measurements are performed on particles $N+1,N+2,\ldots,n$). Indeed, it is not difficult to show that the quantum prediction for the unconditional correlation function of the results of spin measurements on particles $1,2,\ldots,N$ along respective directions $\vec{e}_1,\vec{e}_2,\ldots,\vec{e}_N$ is given by
\begin{eqnarray}
E_{12 \cdots N} \!\!\!&(&\!\!\!\vec{e}_1,\vec{e}_2,\ldots,\vec{e}_N) \nonumber \\
& = & \text{Tr}[\rho_{12\cdots N}
\sigma(\vec{e}_1)\otimes \sigma(\vec{e}_2)\otimes \cdots
\otimes \sigma(\vec{e}_N)] \nonumber \\
& = & z_1 z_2 \cdots z_N \cos\vartheta_1 \cos\vartheta_2 \cdots \cos\vartheta_N ,
\end{eqnarray}
which generalizes Eq.\ (9). Thus we have proved that, for the class of $n$-orthogonal states in Eq.\ (23), the unconditional $N$-particle correlations are compatible with local realism for any $N=2,3,\ldots,n-1$.
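For illustration (an added sketch; the values of $z_1$, $z_2$, $c_1$, $c_2$ and of the angles are arbitrary choices), Eq.\ (29) can be confirmed directly from the reduced density matrix (28) in the simplest case $N=2$:
\begin{verbatim}
import numpy as np

# Check of Eq. (29) for N = 2 with z_1 = +1, z_2 = -1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sig(t, p):   # spin operator along the direction (theta, phi) = (t, p)
    return np.sin(t)*np.cos(p)*sx + np.sin(t)*np.sin(p)*sy + np.cos(t)*sz

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
c1, c2 = 0.6, 0.8                      # c1^2 + c2^2 = 1
k1 = np.kron(up, dn)                   # |z_1>|z_2>
k2 = np.kron(dn, up)                   # |-z_1>|-z_2>
rho = c1**2*np.outer(k1, k1.conj()) + c2**2*np.outer(k2, k2.conj())

t1, p1, t2, p2 = 0.7, 0.2, 1.3, 2.1
E = np.real(np.trace(rho @ np.kron(sig(t1, p1), sig(t2, p2))))
print(np.isclose(E, (+1)*(-1)*np.cos(t1)*np.cos(t2)))   # True
\end{verbatim}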
However, by adequately generalizing the CHSH inequality to an $N$-measurement scheme \cite{Hardy}, one could equally prove \cite{Cereceda} that in order for the state (27) to yield the largest violation of the appropriate Bell inequality it is necessary that the $c$ numbers in front of $|z_1\rangle |z_2\rangle \cdots |z_N\rangle$ and $|-z_1\rangle |-z_2\rangle \cdots |-z_N\rangle$ have the same modulus.\footnote{%
Notice that, in contrast with the situation above concerning the density matrix $\rho_{12 \cdots N}$, any eventual description of particles $1,2,\ldots,N$ in terms of the pure state $|\Psi^{+}_{12\cdots N}\rangle$ is subject to the occurrence of the results $z_{N+1}, z_{N+2}, \ldots, z_{n}$ for spin measurements performed on particles $N+1,N+2,\ldots,n$ along the respective directions $\vec{e}_{N+1},\vec{e}_{N+2},\ldots,\vec{e}_n$ [see Eq.\ (25)]. Clearly the probability for such an event is $p_+$.}
Obviously, when applied to the state (23), this condition means that $|c_1|=|c_2|=1/\sqrt{2}$. This in turn implies that all observers $N+1,N+2,\ldots,n$ must perform spin measurements within the respective $x$-$y$ plane (i.e., $\vartheta_j =\pi/2$ for all $j$) if the remaining state $|\Psi^{+}_{12\cdots N}\rangle$ for the other $N$ particles is to maximally violate Bell's inequality. In any case, the amount of violation, if any, does depend on the values of the measurement parameters $\vartheta_j$ and $\varphi_j$ attached to measuring apparatuses which can operate quite independently (e.g., in a spacelike separated region) of the corresponding apparatuses used to measure the spin of particles $1,2,\ldots,N,$ thus giving rise to a generic {\it entangled entanglement\/} of the kind contemplated by Krenn and Zeilinger \cite{KZ}. [The dependence of the entanglement on the $\vartheta_j$'s, as well as the dependence of the relative phase of $|\Psi^{+}_{12\cdots N}\rangle$ on the $\varphi_j$'s, are made explicit in Eq.\ (27).]
\end{multicols}
\pagebreak
\section{Introduction}\label{secintro}
We consider the problem of minimizing a continuous function $f:{\mathbb R}^n\to{\mathbb R}$ over a compact set $\mathbf{K}\subseteq {\mathbb R}^n$. That is, we consider the problem of computing the parameter:
\begin{equation*}\label{fmink}
f_{\min,\mathbf{K}}:= \min_{x\in \mathbf{K}}f(x).
\end{equation*}
Our goal is to compare two convergent hierarchies of upper bounds on $f_{\min,\mathbf{K}}$, namely measure-based bounds introduced by Lasserre \cite{Las11}, and simulated annealing bounds, as studied by Kalai and Vempala \cite{Kalai-Vempala 2006}.
The bounds of Lasserre are obtained by minimizing over measures on ${\mathbf K}$ with sum-of-squares polynomial density functions with growing degrees, while simulated annealing bounds use Boltzmann distributions on ${\mathbf K}$ with decreasing temperature parameters.
In this note we establish a relationship between these two approaches, linking the degree and temperature parameters in the two bounds (see Theorem \ref{thm:main} for a precise statement).
As an application, when $f$ is a polynomial and ${\mathbf K}$ is a convex body,
we can show a faster convergence rate for the measure-based bounds of Lasserre.
The new convergence rate is in $O(1/r)$ (see Corollary~\ref{cor:nonconvex}),
where $2r$ is the degree of the sum-of-squares polynomial density function,
while the dependence was in $O(1/\sqrt{r})$ in the previously best known result from \cite{KLS MPA}.
\medskip
Polynomial optimization has been a very active research area in recent years, since the seminal works of Lasserre \cite{Las01}
and Parrilo \cite{Par} (see also, e.g., the book \cite{Las09} and the survey \cite{Lau09}). In particular, hierarchies
of (lower and upper) bounds for the parameter $f_{\min,\mathbf{K}}$ have been proposed, based on sum-of-squares polynomials and semidefinite programming.
For a general compact set ${\mathbf K}$, upper bounds for $f_{\min,\mathbf{K}}$ have been introduced by Lasserre \cite{Las11}, obtained by searching for a sum-of-squares polynomial density function of given maximum degree $2r$, so as to minimize the integration of $f$ with respect to the corresponding probability measure on ${\mathbf K}$.
When $f$ is Lipschitz continuous and under some mild assumption on ${\mathbf K}$ (which holds, e.g., when ${\mathbf K}$ is a convex body), estimates for the convergence rate of these bounds have been proved in \cite{KLS MPA} that are in $O(1/\sqrt r)$. Improved rates have subsequently been shown when restricting to special sets ${\mathbf K}$. Related stronger results have been shown for the case when ${\mathbf K}$ is the hypercube $[0,1]^n$ or $[-1,1]^n$.
In \cite{KLLS MOR} the authors show a hierarchy of upper bounds using the Beta distribution, with the same convergence rate in $O(1/\sqrt{r})$,
but whose computation needs only elementary operations; moreover an improved convergence in $O(1/r)$ can be shown, e.g., when $f$ is quadratic.
In addition, a convergence rate in $O(1/r^2)$ is shown in \cite{DHL SIOPT}, using distributions based on Jackson kernels
and a larger class of sum-of-squares density functions.
In this paper we investigate the hierarchy of measure-based upper bounds of \cite{Las11} and show that when ${\mathbf K}$ is a convex body, convexity can be exploited to obtain an improved convergence rate in $O(1/r)$, even for nonconvex functions.
The key ingredient for this is to establish a relationship with upper bounds based on simulated annealing and to
use a known convergence rate result from \cite{Kalai-Vempala 2006} for simulated annealing bounds in the convex case.
\medskip
Simulated annealing was introduced by Kirkpatrick et al. \cite{Kirkpatrick et al 1983} as a randomized search procedure for general optimization
problems.
It has enjoyed renewed interest for convex optimization problems since it was shown
by Kalai and Vempala \cite{Kalai-Vempala 2006} that a polynomial-time implementation is possible.
This requires so-called hit-and-run sampling from $\mathbf{K}$, as
introduced by Smith \cite{Smith 1984}, that was shown to be a polynomial-time procedure by Lov\'asz \cite{Lovasz1999}.
Most recently, Abernethy and Hazan \cite{Abernethy_Hazan_2015} showed formal equivalence with a certain interior point
method for convex optimization.
This unexpected equivalence between seemingly different methods has motivated this current work to relate the bounds by Lasserre \cite{Las11}
to the simulating annealing bounds as well.
\medskip
In what follows, we first introduce the measure-based upper bounds of Lasserre~\cite{Las11}. Then we recall the bounds based on simulated annealing and the known convergence results for a linear objective function $f$, and we give an explicit proof of their extension to the case of a general convex function $f$. After that we state our main result and
the next section is devoted to its proof.
In the last section we conclude with numerical examples showing the quality of the two types of bounds and some final remarks.
\section{Lasserre's hierarchy of upper bounds}
Throughout, ${\mathbb R}[x]={\mathbb R}[x_1,\dots,x_n]$ is the set of polynomials in $n$ variables with real coefficients and, for an integer $r\in {\mathbb N}$, ${\mathbb R}[x]_r$ is the set of polynomials with degree at most $r$. Any polynomial $f\in {\mathbb R}[x]_r$ can be written $f=\sum_{\alpha\in N(n,r)} f_\alpha x^\alpha$, where we set $x^\alpha=\prod_{i=1}^nx_i^{\alpha_i}$ for $\alpha\in {\mathbb N}^n$ and $N(n,r)=\{\alpha\in {\mathbb N}^n: \sum_{i=1}^n\alpha_i\le r\}$.
We let $\Sigma[x]$ denote the set of sums of squares of polynomials, and $\Sigma[x]_r=\Sigma[x]\cap {\mathbb R}[x]_{2r}$ consists of all sums of squares of polynomials with degree at most $2r$.
\medskip We recall the following reformulation for $f_{\min,\mathbf{K}}$, established by Lasserre \cite{Las11}:
\begin{equation*}\label{fminkreform2}
f_{\min,\mathbf{K}}=\inf_{h\in\Sigma[x]}\int_{\mathbf{K}}h(x)f(x)dx \ \ \mbox{s.t. $\int_{\mathbf{K}}h(x)dx=1$.}
\end{equation*}
\smallskip
\noindent
By bounding the degree of the polynomial $h\in \Sigma[x]$ by $2r$, we can define the parameter:
\begin{eqnarray}\label{fundr}
\underline{f}^{(r)}_{\mathbf{K}}:=\inf_{h\in\Sigma[x]_r}\int_{\mathbf{K}}h(x)f(x)dx \ \ \mbox{s.t. $\int_{\mathbf{K}}h(x)dx=1$.}
\end{eqnarray}
\smallskip
\noindent
Clearly, the inequality $f_{\min,\mathbf{K}}\le\underline{f}^{(r)}_{\mathbf{K}}$ holds for all $r\in{\mathbb N}$. Lasserre \cite{Las11} gave conditions under which the infimum is attained in the program (\ref{fundr}). De Klerk, Laurent and Sun \cite[Theorem 3]{KLS MPA} established the following rate of convergence for the bounds
$\underline{f}^{(r)}_{\mathbf{K}}$.
\begin{theorem}[De Klerk, Laurent, and Sun \cite{KLS MPA}] \label{thm:dKLS}
Let $f\in {\mathbb R}[x]$ and $\mathbf{K}$ a convex body. There exist constants $C_{f,{\mathbf K}}$ (depending only on $f$ and ${\mathbf K}$) and $r_{\mathbf K}$ (depending only on ${\mathbf K}$) such that
\begin{equation}\label{thmmaineq2}
\underline{f}^{(r)}_{\mathbf{K}}-f_{\min,\mathbf{K}} \le {C_{f,{\mathbf K}} \over \sqrt {r}}\ \ \text{ for all } r\ge r_{\mathbf K}.
\end{equation}
That is, the following asymptotic convergence rate holds: $\underline{f}^{(r)}_{\mathbf{K}}-f_{\min,\mathbf{K}} \simeq O\left( {1\over \sqrt r}\right).$
\end{theorem}
This result of \cite{KLS MPA} holds in fact under more general assumptions, namely when $f$ is Lipschitz continuous and ${\mathbf K}$ satisfies a technical assumption (Assumption 1 in \cite{KLS MPA}), which says (roughly) that around any point in $ \mathbf K$ there is a ball whose intersection with ${\mathbf K}$ is at least a constant fraction of the unit ball.
As explained in \cite{Las11} the parameter $\underline{f}^{(r)}_{\mathbf{K}}$ can be computed using semidefinite programming, assuming one knows the moments $m_\alpha({\mathbf K})$ of the Lebesgue measure on ${\mathbf K}$, where
\begin{equation*}\label{mack}
m_{\alpha}(\mathbf{K}):=\int_{\mathbf{K}}x^{\alpha}dx\ \ \ \mbox{ for } \alpha\in {\mathbb N}^n.
\end{equation*}
\smallskip
\noindent
Indeed suppose $f(x)=\sum_{\beta\in N(n,d)}f_{\beta}x^{\beta}$ has degree $d$.
Writing $h\in\Sigma[x]_{r}$ as $h(x)=\sum_{\alpha\in N(n,2r)}h_{\alpha}x^{\alpha}$, the parameter $\underline{f}^{(r)}_{\mathbf{K}}$ from
(\ref{fundr}) can be reformulated as follows:
\begin{eqnarray}\label{eqSDP}
\underline{f}^{(r)}_{\mathbf{K}}&=&\min\sum_{\beta\in N(n,d)}f_{\beta}\sum_{\alpha\in N(n,2r)}h_{\alpha}m_{\alpha+\beta}(\mathbf{K})\label{fundr2}\\
& &\mbox{ s.t. } \ \ \sum_{\alpha\in N(n,2r)}h_{\alpha}m_{\alpha}(\mathbf{K})=1,\nonumber\\
&&\ \ \ \ \ \ \ \sum_{\alpha\in N(n,2r)}h_{\alpha}x^{\alpha}\in\Sigma[x]_r.\nonumber
\end{eqnarray}
Since the sum-of-squares condition on $h$ may be written as a linear matrix inequality, this is a semidefinite program.
In fact, since it only has one linear equality constraint, it may even be rewritten as a generalised eigenvalue problem.
In particular,
$\underline{f}_{\mathbf{K}}^{(r)}$ is equal to the smallest generalized eigenvalue of the system:
\[
Ax = \lambda Bx \quad \quad\quad (x \neq 0),
\]
where the symmetric matrices $A$ and $B$ are of order ${n + r \choose r}$ with rows and columns indexed by $N(n,r)$,
and
\begin{equation}
\label{matrices A and B}
A_{\alpha, \beta} = \sum_{\delta \in N(n,d)} f_\delta \int_{\mathbf{K}} x^{\alpha + \beta + \delta} dx, \quad B_{\alpha, \beta} = \int_{\mathbf{K}} x^{\alpha + \beta} dx \quad \alpha, \beta \in {N}(n,r).
\end{equation}
For more details, see \cite{Las11,KLS MPA,KLLS MOR}.
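To make the procedure concrete, the following sketch (our addition; the instance is a toy univariate choice, namely $f(x)=x$ on ${\mathbf K}=[-1,1]$, whose Lebesgue moments are available in closed form) computes $\underline{f}^{(r)}_{\mathbf{K}}$ as the smallest generalized eigenvalue of the pair $(A,B)$ in (\ref{matrices A and B}):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy instance: f(x) = x on K = [-1, 1], with moments
# m_k = 2/(k+1) for even k and m_k = 0 for odd k.
def moment(k):
    return 0.0 if k % 2 else 2.0/(k + 1)

r = 8                                  # the s.o.s. density has degree 2r
idx = range(r + 1)
B = np.array([[moment(a + b) for b in idx] for a in idx])
A = np.array([[moment(a + b + 1) for b in idx] for a in idx])  # deg f = 1
bound = eigh(A, B, eigvals_only=True)[0]  # smallest generalized eigenvalue
print(bound)   # an upper bound on f_min = -1, decreasing to -1 as r grows
\end{verbatim}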
\section{Bounds from simulated annealing}
Given a continuous function $f$, consider the associated Boltzmann distribution over the set ${\mathbf K}$, defined by the density function:
\[
\textstyle
P_f(x) := \frac{\exp(-f(x))}
{ \int_{{\mathbf K}} \exp(-f(x') ) \, dx' }.
\]
Write
$
X \sim P_f
$
if the random variable $X$ takes values in ${\mathbf K}$ according to the Boltzmann distribution.
The idea of simulated annealing is to sample $X \sim P_{f/t}$, where $t > 0$ is a fixed `temperature' parameter that is subsequently decreased.
Clearly, for any $t>0$, we have
\begin{equation}\label{eqBoltz}
f_{\min,\mathbf{K}} \le \mathbb{E}_{X \sim P_{f/t}}[f(X)].
\end{equation}
The point is that, under mild assumptions, these bounds converge to the minimum of $f$ over ${\mathbf K}$ (see, e.g., \cite{Spall}):
\[
\lim_{t \downarrow 0} \mathbb{E}_{X \sim P_{f/t}}[f(X)] = f_{\min,\mathbf{K}}.
\]
The key step in the practical utilization of these bounds is therefore to perform the sampling of $X \sim P_{f/t}$.
\medskip
\begin{example}
\label{ex:1}
Consider the minimization of the Motzkin polynomial $$f(x_1,x_2)=64(x_1^4x_2^2+x_1^2x_2^4) - 48x_1^2x_2^2 +1$$ over ${\mathbf K} = [-1,1]^2$,
where there are four global minimizers at the points
$\left(\pm \frac{1}{2},\pm \frac{1}{2}\right)$, and $f_{\min,{\mathbf K}} = 0$.
Figure \ref{figure:motzkin_SA} shows the
corresponding Boltzmann density function for $t = \frac{1}{2}$.
Note that this density has four modes, roughly positioned at the four global minimizers of $f$ in $[-1,1]^2$.
The corresponding upper bound on $f_{\min,{\mathbf K}} = 0$ is $\mathbb{E}_{X \sim P_{f/t}}[f(X)] \approx 0.7257$ ($t = \frac{1}{2}$).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.45\textwidth]{Motzkin_SA_t_half.png}
\includegraphics[width=0.45\textwidth]{Motzkin_SA_t_half_contour.png}\\
\caption{\label{figure:motzkin_SA}Graph and contours of the Boltzmann density with $t = \frac{1}{2}$ for the Motzkin polynomial.}
\end{center}
\end{figure}
To obtain a better upper bound on $f_{\min,{\mathbf K}}$ from the Lasserre hierarchy, one needs to
use a degree $14$ s.o.s.\ polynomial density; in particular, one has $\underline{f}^{(6)}_{\mathbf{K}}=0.8010$ (degree $12$) and $\underline{f}^{(7)}_{\mathbf{K}}= 0.7088$ (degree $14$). More detailed numerical results are given in Section \ref{sec:conclusion}.
\end{example}
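The value quoted in Example \ref{ex:1} can be reproduced with a few lines of code (our sketch; a plain tensor-grid quadrature over $[-1,1]^2$ is used here, but any other quadrature would do):
\begin{verbatim}
import numpy as np

# E_{X ~ P_{f/t}}[f(X)] for the Motzkin polynomial with t = 1/2.
f = lambda x, y: 64*(x**4*y**2 + x**2*y**4) - 48*x**2*y**2 + 1
t = 0.5
g = np.linspace(-1.0, 1.0, 2001)
X, Y = np.meshgrid(g, g)
W = np.exp(-f(X, Y)/t)                 # unnormalized Boltzmann weights
print(np.sum(f(X, Y)*W)/np.sum(W))     # ~ 0.7257, as quoted in the text
\end{verbatim}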
\medskip
When $f$ is linear and ${\mathbf K}$ a convex body, Kalai and Vempala \cite[Lemma 4.1]{Kalai-Vempala 2006} show that the rate of convergence of the bounds in (\ref{eqBoltz}) is linear in the temperature $t$.
\begin{theorem}[Kalai and Vempala \cite{Kalai-Vempala 2006}]
\label{thm:Kallai-Vempala}
Let $f(x) = c^Tx$ where $c$ is a unit vector,
and let ${\mathbf K}$ be a convex body. Then, for any $t>0$, we have
\[
\mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] - \min_{x \in {\mathbf K}} f(x) \leq n t.
\]
\end{theorem}
We indicate how to extend the result of Kalai and Vempala in Theorem \ref{thm:Kallai-Vempala} to the case of an arbitrary convex function $f$.
This more general result is hinted at in \S6 of \cite{Kalai-Vempala 2006}, where the authors write
\begin{quote}
``... a statement
analogous to [Theorem 2] holds also for general convex functions ..."
\end{quote}
but no precise statement is given there. In any event, as we will now show, the more general result may readily be derived from Theorem~\ref{thm:Kallai-Vempala} (in fact, from the special case of a linear coordinate function $f(x)=x_i$ for some $i$).
\begin{corollary}\label{corconvex}
Let $f$ be a convex function and let ${\mathbf K}\subseteq {\mathbb R}^n$ be a convex body.
Then, for any $t>0$, we have
\[
\mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] - \min_{x \in {\mathbf K}} f(x) \leq n t.
\]
\end{corollary}
\begin{proof}
Set
$$E_{\mathbf K}:= \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)]
= {\int_{\mathbf K} f(x)e^{-f(x)/t} dx\over \int_{\mathbf K} e^{-f(x)/t} dx}.$$
Then we have $$f_{\min,\mathbf{K}}=\min_{x\in {\mathbf K}} f(x)\le E_{\mathbf K}.$$
Define the set
$$\widehat {\mathbf K}:=\{(x,x_{n+1})\in {\mathbb R}^{n+1}: x\in {\mathbf K},\ f(x)\le x_{n+1}\le E_{\mathbf K}\}.$$
Then $\widehat {\mathbf K}$ is a convex body and we have
$$\min_{x\in {\mathbf K}}f(x)=\min_{(x,x_{n+1})\in \widehat {\mathbf K}} x_{n+1}.$$
Accordingly, define the parameter
$$E_{\widehat {\mathbf K}}:= {\int_{\widehat {\mathbf K}} x_{n+1}e^{-x_{n+1}/t} dx_{n+1}dx \over \int_{\widehat {\mathbf K}} e^{-x_{n+1}/t} dx_{n+1}dx}.$$
Corollary \ref{corconvex} will follow if we show that
\begin{equation}\label{eqEE}
E_{\widehat{\mathbf K}}=E_{ {\mathbf K}}+t.
\end{equation}
To this end set $E_{\mathbf K}={N_{\mathbf K}\over D_{\mathbf K}}$ and $E_{\widehat {\mathbf K}}={N_{\widehat {\mathbf K}}\over D_{\widehat {\mathbf K}}}$, where we define
$$N_{\mathbf K}:= \int_{\mathbf K} f(x)e^{-f(x)/t} dx,\ \
D_{\mathbf K}:=\int_{\mathbf K} e^{-f(x)/t}dx,$$
$$N_{\widehat {\mathbf K}}:= \int_{\widehat {\mathbf K}} x_{n+1}e^{-x_{n+1}/t} dx_{n+1}dx,\ \
D_{\widehat {\mathbf K}}:= \int_{\widehat {\mathbf K}} e^{-x_{n+1}/t} dx_{n+1}dx.$$
We work out the parameters $N_{\widehat {\mathbf K}}$ and $D_{\widehat {\mathbf K}}$ (integrating by parts):
$$D_{\widehat {\mathbf K}}= \int_{\mathbf K} \left(\int_{f(x)}^{E_{\mathbf K}} e^{-x_{n+1}/t}dx_{n+1}\right) dx = \int_{\mathbf K}\left(te^{-f(x)/t} -t e^{-E_{\mathbf K}/t}\right)dx = tD_{\mathbf K} -t e^{-E_{\mathbf K}/t}\text{\rm vol}({\mathbf K}),$$
\begin{eqnarray*}
N_{\widehat {\mathbf K}} &=& \int_{\mathbf K} \left(\int_{f(x)}^{E_{\mathbf K}} x_{n+1} e^{-x_{n+1}/t}dx_{n+1}\right) dx \\
&= & \int_{\mathbf K} \left(-tE_{\mathbf K} e^{-E_{\mathbf K}/t}+t f(x)e^{-f(x)/t} +t \int_{f(x)}^{E_{\mathbf K}} e^{-x_{n+1}/t} dx_{n+1}\right) dx \\
&=& -tE_{\mathbf K} e^{-E_{\mathbf K}/t}\text{\rm vol}({\mathbf K}) +t N_{\mathbf K} +t D_{\widehat {\mathbf K}}.
\end{eqnarray*}
Then, using the fact that $E_{\mathbf K}={N_{\mathbf K}\over D_{\mathbf K}}$, we obtain:
$${N_{\widehat {\mathbf K}}\over D_{\widehat {\mathbf K}}}= t + {N_{\mathbf K}-E_{\mathbf K} e^{-E_{\mathbf K}/t}\text{\rm vol}({\mathbf K})\over D_{\mathbf K} -e^{-E_{\mathbf K}/t}\text{\rm vol}({\mathbf K})} = t +{N_{\mathbf K}\over D_{\mathbf K}},
$$
which proves relation (\ref{eqEE}).
We can now derive the result of Corollary \ref{corconvex}.
Indeed, using Theorem \ref{thm:Kallai-Vempala} applied to $\widehat {\mathbf K}$ and the linear function $x_{n+1}$, we get
$$\mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] - \min_{x \in {\mathbf K}} f(x) = E_{\mathbf K} - \min_{x\in{\mathbf K}} f(x) = (E_{\widehat {\mathbf K}} -\min_{(x,x_{n+1})\in \widehat {\mathbf K}} x_{n+1}) + (E_{{\mathbf K}} -E_{\widehat {\mathbf K}}) \le t(n+1) -t =tn.$$
\mbox{}\hfill\qed
\end{proof}
\medskip
The bound in the corollary is tight asymptotically, as the following example shows.
\medskip
\begin{example}
\label{ex:tight bound}
Consider the univariate problem $\min_x \{x \; | \; x \in [0,1]\}$.
Thus, in this case, $f(x) = x$, ${\mathbf K} = [0,1]$ and $\min_{x \in {\mathbf K}} f(x)=0$.
For given temperature $t>0$, we have
\begin{eqnarray*}
\mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] - \min_{x \in {\mathbf K}} f(x) = \frac{\int_0^1 x e^{-x/t}dx}{\int_0^1 e^{-x/t}dx} - 0
= t-{e^{-1/t}\over 1-e^{-1/t}} \sim t \ \mbox{ for small } t.\\
\end{eqnarray*}
\end{example}
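Numerically (our check), the exact expression above indeed approaches $t$ as $t\downarrow 0$:
\begin{verbatim}
import numpy as np

# E[f] - min f = t - exp(-1/t)/(1 - exp(-1/t)) for f(x) = x on [0, 1].
for t in (0.5, 0.1, 0.01):
    print(t, t - np.exp(-1/t)/(1 - np.exp(-1/t)))
# the second column tends to the first as t -> 0
\end{verbatim}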
\section{Main results}
We will prove the following relationship between the sum-of-squares based upper bound (\ref{fundr}) of Lasserre and the bound (\ref{eqBoltz})
based on simulated annealing.
\begin{theorem}
\label{thm:main}
Let $f$ be a polynomial of degree $d$, let ${\mathbf K}$ be a compact set and set $\widehat{f}_{\max}=\max_{x\in {\mathbf K}} |f(x)|.$ Then we have
\[
\underline{f}^{(rd)}_{\mathbf{K}} \le \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] + {\widehat{f}_{\max}\over 2^r} \ \ \mbox{ for any integer } \ r \ge {e\cdot \widehat{f}_{\max} \over t}\ \mbox{ and any } \ t>0.
\]
\end{theorem}
For the problem of minimizing a convex polynomial function over a convex body, we obtain the following improved convergence rate for the sum-of-squares based bounds of Lasserre.
\begin{corollary}\label{cor:main}
Let $f\in {\mathbb R}[x]$ be a convex polynomial of degree $d$ and let ${\mathbf K}$ be a convex body. Then for any integer $r\ge 1$ one has
\[
\underline{f}^{(rd)}_{\mathbf{K}} - \min_{x \in {\mathbf K}} f(x) \leq \frac{c}{r},
\]
for some constant $c>0$ that does not depend on $r$.
(For instance, $c=(ne+1)\widehat{f}_{\max}$.)
\end{corollary}
\begin{proof}
Let $r\ge 1$ and set $t={e\cdot \widehat{f}_{\max}\over r}$. Combining Theorems \ref{thm:Kallai-Vempala} and \ref{thm:main}, we get
\begin{eqnarray*}
\underline{f}^{(rd)}_{\mathbf{K}} - \min_{x \in {\mathbf K}} f(x) &=&
\big( \underline{f}^{(rd)}_{\mathbf{K}} - \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] \big)
+\big( \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] -f_{\min,\mathbf{K}}\big) \\
&\leq& {\widehat{f}_{\max}\over 2^r}+nt = {\widehat{f}_{\max}\over 2^r} + {ne\cdot \widehat{f}_{\max}\over r} \le {(ne+1)\widehat{f}_{\max}\over r}.
\end{eqnarray*}
\hfill\qed\end{proof}
For convex polynomials $f$, this improves on the known $O(1/\sqrt{r})$ result from Theorem \ref{thm:dKLS}.
One may in fact use the last corollary to obtain the same rate of convergence in terms of $r$ for all polynomials, without the
convexity assumption, as we will now show.
\begin{corollary}
\label{cor:nonconvex}
If $f$ be a polynomial and ${\mathbf K}$ a convex body, then there is a $c > 0$ depending on $f$ and ${\mathbf K}$ only, so that
\[
\underline{f}^{(2r)}_{\mathbf{K}} - \min_{x \in {\mathbf K}} f(x) \le \frac{c}{r}.
\]
A suitable value for $c$ is
\[
c = (ne+1)\left(f_{\min,{\mathbf K}} +C^1_f\cdot \mbox{diam}({\mathbf K}) + C^2_f \cdot \mbox{diam}({\mathbf K})^2\right),
\]
where
$C^1_f = \max_{x \in {\mathbf K}} \| \nabla f(x) \|_2$ and
$C^2_f = \max_{x \in {\mathbf K}} \| \nabla^2 f(x) \|_2$.
\end{corollary}
\begin{proof}
We first define a convex quadratic function $q$ that upper bounds $f$ on ${\mathbf K}$ as follows:
\[
q(x) = f(a) + \nabla f(a)^\top (x-a) + C^2_f \|x-a\|_2^2,
\]
where $C^2_f = \max_{x \in {\mathbf K}} \| \nabla^2 f(x) \|_2$, and $a$ is the minimizer of $f$ on ${\mathbf K}$.
Note that $q(x) \ge f(x)$ for all $ x \in {\mathbf K}$ by Taylor's theorem, and
$\min_{x \in {\mathbf K}} q(x) = f(a)$.
By definition of the Lasserre hierarchy,
\begin{eqnarray*}
\underline{f}^{(2r)}_{\mathbf{K}} &:=&\inf_{h\in\Sigma[x]_{2r}}\int_{\mathbf{K}}h(x)f(x)dx \ \ \mbox{s.t. $\int_{\mathbf{K}}h(x)dx=1$}\\
&\le&\inf_{h\in\Sigma[x]_{2r}}\int_{\mathbf{K}}h(x)q(x)dx \ \ \mbox{s.t. $\int_{\mathbf{K}}h(x)dx=1$} \\
&\equiv& \underline{q}_{{\mathbf K}}^{(2r)}.
\end{eqnarray*}
Invoking Corollary \ref{cor:main} and using that the degree of $q$ is $2$, we obtain:
\[
\underline{f}^{(2r)}_{\mathbf{K}} \le \underline{q}_{{\mathbf K}}^{(2r)} \le f(a) + \frac{(ne+1)\hat q_{\max}}{r},
\]
where $\hat q_{\max} = \max_{x \in {\mathbf K}} q(x)\le f_{\min,{\mathbf K}} +C^1_f\cdot \mbox{diam}({\mathbf K}) + C^2_f \cdot \mbox{diam}({\mathbf K})^2$.\hfill\qed
\end{proof}
The last result improves on the known $O\left(\frac{1}{\sqrt{r}} \right)$ rate in Theorem \ref{thm:dKLS}.
\subsection*{Proof of Theorem \ref{thm:main}}
The key idea in the proof of Theorem \ref{thm:main} is to replace the Boltzmann density function by a polynomial approximation.
To this end, we first recall a basic result on approximating the exponential function by its truncated Taylor series.
\begin{lemma}[De Klerk, Laurent and Sun \cite{KLS MPA}]\label{lemf2rsos}
Let $\phi_{2r}(\lambda)$ denote the (univariate) polynomial of degree $2r$ obtained by truncating the Taylor series expansion of $e^{-\lambda}$ at the order $2r$. That is,
\begin{equation*}\label{phi2rt}
\phi_{2r}(\lambda):=\sum_{k=0}^{2r}{(-\lambda)^k\over k!}.
\end{equation*}
Then $\phi_{2r}$ is a sum of squares of polynomials.
Moreover, we have
\begin{equation}\label{fmf2r}
0\le \phi_{2r}(\lambda) - e^{-\lambda} \le {\lambda^{2r+1}\over (2r+1)!} \quad \mbox{ for all } \lambda\ge 0.
\end{equation}
\end{lemma}
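Both inequalities in (\ref{fmf2r}) are easily confirmed numerically; the sketch below (our addition, with $r=3$ and a grid on $[0,10]$ chosen arbitrarily) does so:
\begin{verbatim}
import numpy as np
from math import factorial

r = 3
lam = np.linspace(0.0, 10.0, 1001)
phi = sum((-lam)**k/factorial(k) for k in range(2*r + 1))  # k = 0,...,2r
err = phi - np.exp(-lam)
print(np.all(err >= -1e-12))                               # err >= 0
print(np.all(err <= lam**(2*r + 1)/factorial(2*r + 1)))    # remainder bound
\end{verbatim}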
We now define the following approximation of the Boltzmann density $P_{f/t}$:
\begin{equation}\label{eq:density}
\varphi_{2r,t}(x) := \frac{\phi_{2r}(f(x)/t)}{\int_{{\mathbf K}} \phi_{2r}(f(x)/t)dx}.
\end{equation}
By construction, $\varphi_{2r,t}$ is a sum-of-squares polynomial probability density function on ${\mathbf K}$, with degree $2rd$ if $f$ is a polynomial of degree $d$.
Moreover, by relation \eqref{fmf2r} in Lemma \ref{lemf2rsos}, we obtain
\begin{eqnarray}
\varphi_{2r,t}(x) & \le & \frac{\phi_{2r}(f(x)/t)}{\int_{{\mathbf K}} \exp(-f(x)/t)dx} \nonumber \\
& \le & P_{f/t}(x) + \frac{ {(f(x)/t)^{2r+1} }}{(2r+1)!\int_{{\mathbf K}} \exp(-f(x)/t)dx}.\label{eqphi}
\end{eqnarray}
From this we can derive the following result.
\begin{lemma}
\label{lemma:err}
For any continuous $f$ and scalar $t>0$ one has
\begin{equation}
\label{eq:last term}
\underline{f}^{(rd)}_{\mathbf{K}} \le\int_{{\mathbf K}}f(x)\varphi_{2r,t}(x)dx \le \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] +
\frac{\int_{{\mathbf K}}(f(x)-f_{\min,\mathbf{K}}) (f(x))^{2r+1}dx}{t^{2r+1}(2r+1)!\int_{{\mathbf K}} \exp(-f(x)/t)dx}.
\end{equation}
\end{lemma}
\begin{proof}
As $\varphi_{2r,t}(x)$ is a polynomial of degree $2rd$ and a probability density function on ${\mathbf K}$ (by (\ref{eq:density})), we have:
\begin{equation}\label{eq0}
\underline{f}^{(rd)}_{\mathbf{K}} \le \int_{\mathbf K} f(x)\varphi_{2r,t}(x) dx =\int_{\mathbf K} (f(x)-f_{\min,\mathbf{K}}) \varphi_{2r,t}(x)dx + f_{\min,\mathbf{K}}.
\end{equation}
Using the above inequality (\ref{eqphi}) for $\varphi_{2r,t}(x)$ we can upper bound the integral on the right hand side:
\begin{eqnarray*}
\int_{\mathbf K} (f(x)-f_{\min,\mathbf{K}}) \varphi_{2r,t}(x)dx & \le
& \int_{\mathbf K} (f(x)-f_{\min,\mathbf{K}})P_{f/t}(x)dx +
\int_{\mathbf K} { (f(x)-f_{\min,\mathbf{K}}) (f(x)/t)^{2r+1}\over (2r+1)! \int_{\mathbf K} \exp(-f(x)/t)dx } dx\\
& = & \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)] - f_{\min,\mathbf{K}} + \int_{\mathbf K} { (f(x)-f_{\min,\mathbf{K}}) (f(x)/t)^{2r+1}\over (2r+1)! \int_{\mathbf K} \exp(-f(x)/t)dx } dx.
\end{eqnarray*}
Combining with the inequality (\ref{eq0}) gives the desired result.\qed
\end{proof}
\medskip
We now proceed to the proof of Theorem \ref{thm:main}. In view of Lemma \ref{lemma:err}, we only need to bound the last right-hand-side term in \eqref{eq:last term}:
$$T:= \frac{\int_{{\mathbf K}}(f(x)-f_{\min,\mathbf{K}}) (f(x))^{2r+1}dx}{t^{2r+1}(2r+1)!\int_{{\mathbf K}} \exp(-f(x)/t)dx}$$
and to show that $T\le {\widehat{f}_{\max}\over 2^r}$.
By the definition of $\widehat{f}_{\max}$ we have
$$(f(x)-f_{\min,\mathbf{K}})(f(x))^{2r+1}\le 2 \widehat{f}_{\max}^{2(r+1)}\ \mbox{ and } \ \exp(-f(x)/t) \ge \exp(-\widehat{f}_{\max}/t) \ \mbox{ on } {\mathbf K},$$
which implies
$$ T \le {2 \widehat{f}_{\max}^{2(r+1)} \exp(\widehat{f}_{\max}/t) \over t^{2r+1}(2r+1)!}.$$
Combining with the
Stirling approximation inequality,
\[
r! \ge \sqrt{2\pi r}\left(\frac{r}{e}\right)^r \quad\quad\quad (r \in \mathbb{N}),
\]
applied to $(2r+1)!$, we obtain:
$$T
\le
{2\widehat{f}_{\max}\over \sqrt{2\pi(2r+1)} } \left({\widehat{f}_{\max} e \over t(2r+1)}\right)^{2r+1} \exp(\widehat{f}_{\max}/t).$$
Consider $r\ge {e\cdot \widehat{f}_{\max} \over t}$, so that $\widehat{f}_{\max}/t \le r/e$. Then, using the fact that $r/(2r+1)\le 1/2$, we obtain
\begin{eqnarray*}
T &\le&
\frac{2\widehat{f}_{\max}}{\sqrt{2\pi}}\frac{\exp(r/e)}{\sqrt{2r+1}}\left(\frac{r}{2r+1}\right)^{2r+1} \\
&\le&
\frac{\widehat{f}_{\max}}{\sqrt{2\pi}}\frac{\exp(1/e)^r}{\sqrt{2r+1}}\left(\frac{1}{4}\right)^{r} \\
&=&
\frac{\widehat{f}_{\max}}{\sqrt{2\pi}\sqrt{2r+1}}\left(\frac{\exp(1/e)}{4}\right)^r\\
&< & {\widehat{f}_{\max} \over 2^r}.\\
\end{eqnarray*}
This concludes the proof of Theorem \ref{thm:main}.
\section{Concluding remarks}
\label{sec:conclusion}
We conclude with a numerical comparison of the two hierarchies of bounds.
By Theorem \ref{thm:main}, it is reasonable to compare the bounds $\underline{f}^{(r)}_{\mathbf{K}}$ and
$\mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)]$, with $t = \frac{e\cdot d \cdot \widehat{f}_{\max}}{r}$ and $d$ the degree of $f$.
Thus we define, for the purpose of comparison:
\[
SA^{(r)} = \mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)], \mbox{ with $t = \frac{e\cdot d \cdot \widehat{f}_{\max}}{r}$}.
\]
We calculated the bounds for the polynomial test functions listed in Table \ref{tab:test}.
\begin{table}[h!]
\caption{Test functions, all with $n=2$, domain ${\mathbf K} = [-1,1]^2$, and minimum $f_{\min,{\mathbf K}} = 0$. \label{tab:test}}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
Name & $f(x)$ & $\widehat{f}_{\max}$ & $d$ & Convex?\\ \hline
Booth function& $(10x_1+20x_2-7)^2 + (20x_1+10x_2-5)^2$ & $2594$ & $2$ & yes \\ \hline
Matyas function& $26(x_1^2+x_2^2)-48x_1x_2$ & $100$ & $2$ & yes \\ \hline
Motzkin polynomial& $64(x_1^4x_2^2+x_1^2x_2^4) - 48x_1^2x_2^2 +1$ & $81$ & $6$ & no \\ \hline
Three-Hump Camel function& $\frac{5^6}{6}x_1^6-5^4\cdot 1.05x_1^4+50x_1^2+25x_1x_2+25x_2^2$ & $2048$ & $6$ & no \\ \hline
\end{tabular}
\end{center}
\end{table}
The bounds are shown in Table \ref{table:result1}.
The bounds $\underline{f}_{\mathbf{K}}^{(r)}$ were taken from \cite{DHL SIOPT},
while the bounds $SA^{(r)}$ were computed via numerical integration, in particular using the
Matlab routine {\tt sum2} of the package Chebfun \cite{chebfun}.
\begin{table}[h!]
\caption{Comparison of the upper bounds $SA^{(r)}$ and $\underline{f}_{\mathbf{K}}^{(r)}$ for the test functions.
\label{table:result1}}
\begin{center}
{\small
\begin{tabular}{| c | >{\centering}m{1.2cm} | c | >{\centering}m{1cm} | c | >{\centering}m{1.3cm} | c | >{\centering}m{1.2cm} | c |}
\hline
\multirow{2}{*}{{$r$}} & \multicolumn{2}{c|}{Booth Function} & \multicolumn{2}{c|}{Matyas Function} & \multicolumn{2}{m{3cm}|}{\centering Three--Hump Camel Function}& \multicolumn{2}{c|}{Motzkin Polynomial} \\ \cline{2-9}
& $\underline{f}_{\mathbf{K}}^{(r)}$ & $SA^{(r)}$& $\underline{f}_{\mathbf{K}}^{(r)}$ & $SA^{(r)}$ & $\underline{f}_{\mathbf{K}}^{(r)}$ & $SA^{(r)}$ & $\underline{f}_{\mathbf{K}}^{(r)}$ & $SA^{(r)}$\\ \hline
$3$ & 118.383 & 367.834 & 4.2817 &15.4212 & 29.0005&247.462 & 1.0614 & 4.0250\\ \hline
$4$ & 97.6473 & 356.113 & 3.8942 &14.8521 & 9.5806 &241.700 & 0.8294 & 3.9697\\ \hline
$5$ & 69.8174 & 345.043 & 3.6894 & 14.3143 & 9.5806 &236.102 & 0.8010 & 3.9157\\ \hline
$6$ & 63.5454 & 334.585 & 2.9956 & 13.8062 & 4.4398 &230.663 & 0.8010 & 3.8631\\ \hline
$7$ & 47.0467 & 324.701 & 2.5469 & 13.3262 & 4.4398 &225.381 & 0.7088 & 3.8118\\ \hline
$8$ & 41.6727 & 315.354 & 2.0430 & 12.8726 & 2.5503 &220.251 & 0.5655 & 3.7618\\ \hline
$9$ & 34.2140 & 306.510 & 1.8335 & 12.4441 & 2.5503 &215.269 & 0.5655 & 3.7130\\ \hline
$10$ & 28.7248 & 298.138 & 1.4784 & 12.0390 & 1.7127 &210.431 & 0.5078 & 3.6654\\ \hline
$11$ & 25.6050 & 290.206 & 1.3764 & 11.6560 & 1.7127 &205.734 & 0.4060 & 3.6190\\ \hline
$12$ & 21.1869 & 282.687 & 1.1178 & 11.2938 & 1.2775 &201.173 & 0.4060 & 3.5737\\ \hline
$13$ & 19.5588 & 275.554 & 1.0686 & 10.9511 & 1.2775 &196.745 & 0.3759 & 3.5296\\ \hline
$14$ & 16.5854 & 268.782 & 0.8742 & 10.6267 & 1.0185 &192.446 & 0.3004 & 3.4865\\ \hline
$15$ & 15.2815 & 262.348 & 0.8524 & 10.3195 & 1.0185 &188.272 & 0.3004 & 3.4444\\ \hline
$16$ & 13.4626 & 256.230 & 0.7020 & 10.0284 & 0.8434 &184.220 & 0.2819 & 3.4034\\ \hline
$17$ & 12.2075 & 250.408 & 0.6952 & 9.75250 & 0.8434 &180.287 & 0.2300 & 3.3633\\ \hline
$18$ & 11.0959 & 244.863 & 0.5760 & 9.49071 & 0.7113 &176.469 & 0.2300 & 3.3242\\ \hline
$19$ & 9.9938 & 239.577 & 0.5760 & 9.24220 & 0.7113 &172.762 & 0.2185 & 3.2860\\ \hline
$20$ & 9.2373 & 234.534 & 0.4815 & 9.00615 & 0.6064 &169.164 & 0.1817 & 3.2487\\ \hline
\end{tabular}
}\end{center}
\end{table}
The results in the table show that the bound in Theorem \ref{thm:main} is far from tight for these examples.
In fact, it may well be that the convergence rates of $\underline{f}_{\mathbf{K}}^{(r)}$ and $SA^{(r)}$ are different
for convex $f$. We know that $SA^{(r)} - f_{\min,{\mathbf K}}= \Theta(1/r)$ is the exact convergence rate for the simulated annealing
bounds for convex $f$ (cf.\ Example \ref{ex:tight bound}), but it was speculated in \cite{DHL SIOPT} that one may in fact have
$\underline{f}_{\mathbf{K}}^{(r)} - f_{\min,{\mathbf K}}= O(1/r^2)$, even for non-convex $f$. Determining the exact convergence rate of $\underline{f}_{\mathbf{K}}^{(r)}$ remains an open problem.
Finally, one should point out that it is not really meaningful to compare the computational complexities of computing the
two bounds $\underline{f}_{\mathbf{K}}^{(r)} $ and $SA^{(r)}$, as explained below.
For any polynomial $f$ and convex body ${\mathbf K}$, $\underline{f}^{(r)}_{\mathbf{K}}$ may be computed by solving a generalised
eigenvalue problem with matrices of order ${n+r \choose r}$, as long as the moments of the Lebesgue measure on ${\mathbf K}$ are known.
The generalised eigenvalue computation may be done in $O\left({n+r \choose r}^3\right)$ operations; see \cite{KLLS MOR} for details.
Thus this is a polynomial-time procedure for fixed values of $r$.
For non-convex $f$, the complexity of computing $\mathop{\mathbb{E}}_{X \sim P_{f/t }}[f(X)]$ is not known.
When $f$ is linear, it is shown in \cite{Abernethy_Hazan_2015} that $\mathop{\mathbb{E}}_{X \sim P_{rf }}[f(X)]$ with $t = O(1/r)$ may be obtained
in $O^*\left(n^{4.5}\log(r) \right)$ oracle membership calls for ${\mathbf K}$, where the $O^*(\cdot)$ notation suppresses logarithmic factors.
Since the assumptions on the available information are different for the two types of bounds, there is no simple way to compare these respective complexities.
\section{Introduction}
It was argued by the author from different points of view that the
Photon would have a small mass $\sim 10^{-65}gms$ \cite{bhtd,mp}. We
will look into this now. This value is within the accepted
experimental limits for a Photon mass \cite{lakes}. It was further
argued that it is this Photon mass which is the source of the
puzzling residual cosmic energy that has been observed
lately \cite{mercini}.\\
Let us first derive this residual cosmic energy directly from the
background Dark Energy. We may reiterate that the "mysterious"
background Dark Energy is the same as the quantum Zero Point
Fluctuations in the background vacuum electromagnetic field which
is described by harmonic oscillators \cite{bgsde}. Let us now
consider, following \index{Wheeler}Wheeler, a \index{Harmonic
oscillator}Harmonic oscillator in its ground state, remembering that
the background Zero Point Field is a collection of such oscillators
\cite{mwt}. The probability amplitude is
$$\psi (x) = \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} e^{-(m\omega/2\hbar)x^2}$$
for displacement by the distance $x$ from its position of classical
equilibrium. So the oscillator fluctuates over an interval
$$\Delta x \sim (\hbar/m\omega)^{1/2}$$
The background \index{electromagnetic}electromagnetic field is an
infinite collection of independent oscillators, with amplitudes
$X_1,X_2$ etc. The probability for the various oscillators to have
amplitudes $X_1, X_2$ and so on is the product of individual
oscillator amplitudes:
$$\psi (X_1,X_2,\cdots ) = exp [-(X^2_1 + X^2_2 + \cdots)]$$
wherein there would be a suitable normalization factor. This
expression gives the probability amplitude $\psi$ for a
configuration $B (x,y,z)$ of the magnetic field that is described by
the Fourier coefficients $X_1,X_2,\cdots$ or directly in terms of
the magnetic field configuration itself by
$$\psi (B(x,y,z)) = P exp \left(-\int \int \frac{\bf{B}(x_1)\cdot \bf{B}(x_2)}{16\pi^3\hbar cr^2_{12}} d^3x_1 d^3x_2\right).$$
$P$ being a normalization factor. Let us consider a configuration
where the magnetic field is everywhere zero except in a region of
dimension $l$, where it is of the order of $\Delta B$. The
probability amplitude for this configuration would be proportional
to
$$\exp [-(\Delta B)^2 l^4/\hbar c]$$
So the energy density of the \index{fluctuation}fluctuation in a region of
size $l$ is finally given by \cite{mwt,cr24,bgs}
\begin{equation}
B^2 \sim \frac{\hbar c}{l^4}\label{e1}
\end{equation}
The above energy density corresponds to an energy $\hbar c/l$ in the
volume $l^3$. This energy is minimum when $l$ is maximum. Let us
take $l$ to be the radius of the universe $\sim 10^{28}cms$. The
minimum energy residue of the background Dark Energy or Zero Point
Field now comes out to be $10^{-33}eV$, exactly the observed value.
This observed residual energy is a cosmic footprint of the
ubiquitous Dark Energy in the universe a puzzling footprint that, as
we noted, has recently been observed \cite{mercini}. If on the other
hand we take for $l$ the smallest possible length, which has been
taken to be the Planck length $l_P$, as we will see in
the sequel, then we get the Planck mass $m_P$.\\
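As a quick numerical check of these two extremes (our sketch, in CGS units):
\begin{verbatim}
hbar, c = 1.05e-27, 3.0e10        # erg s, cm/s
erg_to_eV = 6.24e11

for l in (1.6e-33, 1.0e28):       # Planck length, radius of universe (cm)
    E = hbar * c / l              # energy hbar*c/l in a region of size l
    print(l, E * erg_to_eV, "eV", E / c**2, "g")
# l = l_P gives the Planck mass ~ 10^-5 g;
# l = 10^28 cm gives ~ 10^-33 eV, i.e. ~ 10^-65 g.
\end{verbatim}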
The minimum mass $\sim 10^{-33}eV$ or $10^{-65}gms$, will be seen to
be the mass of the Photon, which also is the minimum thermodynamic
mass in the universe, as shown by Landsberg from a totally different
point of view \cite{land}. So (\ref{e1}) gives two extreme masses,
the Planck mass and the Photon mass.\\
As an alternative derivation, it is interesting to derive a model
based on the theory of Phonons which are quanta of sound waves in a
macroscopic body \cite{huang}. Phonons are a mathematical analogue
of the quanta of the electromagnetic field, which are the Photons
that emerge when this field is expressed as a sum of Harmonic
oscillators. This situation is carried over to the theory of solids
which are made up of atoms that are arranged in a crystal lattice
and can be approximated by a sum of Harmonic oscillators
representing the normal modes of lattice oscillations. In this
theory, as is well known, the Phonons have a maximum frequency
$\omega_m$ which is given by
\begin{equation}
\omega_m = c \left(\frac{6\pi^2}{v}\right)^{1/3}\label{e2}
\end{equation}
In (\ref{e2}), $c$ represents the velocity of sound in the specific
case of Phonons, while $v = V/N$, where $V$ denotes the volume and
$N$ the number of atoms. In this model we write
$$l \equiv \left(\frac{4}{3} \pi v\right)^{1/3}$$
$l$ being the inter particle distance. Thus (\ref{e2}) now becomes
\begin{equation}
\omega_m = c/l\label{e3}
\end{equation}
Let us now liberate the above analysis from the immediate scenario
of atoms at lattice points and quantized sound waves due to the
Harmonic oscillations and look upon it as a general set of Harmonic
oscillators as above. Then we can see that (\ref{e3}) and (\ref{e1})
are identical as
$$\omega = \frac{mc^2}{\hbar}$$
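Indeed, with $\omega_m = c/l$ from (\ref{e3}),
$$\hbar \omega_m = \frac{\hbar c}{l} = mc^2,$$
which is exactly the energy $\hbar c/l$ in the volume $l^3$ implied by (\ref{e1}).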
So we again recover with suitable limits the extremes of the Planck
mass and the Photon mass (and other intermediate elementary particle
masses if we take $l$ as a typical Compton wavelength).\\
We now examine separately the Planck scale and the photon mass. We
remark that there were basically two concepts of space which we had
inherited from the early days of modern science. The predominant
view has been the legacy from the Newtonian world view. Here we
consider space time to form a differentiable manifold. On the other
hand Liebniz had a different view of space, not as a container, but
rather made up of the contents itself. This led to a view where
space time has the
smallest unit, and is therefore non differentiable.\\
Max Planck had noticed that, what we call the Planck scale today,
\begin{equation}
l_P = \left(\frac{\hbar G}{c^3}\right)^{\frac{1}{2}} \sim
10^{-33}cm\label{ea1}
\end{equation}
is made up of the fundamental constants of nature and so, he
suspected it played the role of a fundamental length. Indeed, modern
Quantum Gravity approaches have invoked (\ref{ea1}) in their quest
for a reconciliation of gravitation with other fundamental
interactions. In the process, the time honoured prescription of a
differentiable spacetime
has to be abandoned.\\
There is another scale too, made up of fundamental constants of
nature, viz., the well known Compton scale,
\begin{equation}
l = e^2/m_ec^2 \sim 10^{-12}cm\label{ea2}
\end{equation}
where $e$ is the electron charge and $m_e$ the electron mass. This
had appeared in the Classical theory of the electron unlike the
Planck scale, which was
a product of Quantum Theory.\\
The scale (\ref{ea2}) has also played an important role in modern
physics, though it is not considered as fundamental as the Planck
scale. Nevertheless, the Compton scale (\ref{ea2}) is close to
reality in the sense of experiment, unlike (\ref{ea1}), which is
well beyond foreseeable direct experimental contact.
\section{The Planck and Compton Scales}
It is well known that String Theory, Loop Quantum Gravity and a few
other approaches start from the Planck scale. This is also the
starting point in the author's alternative theory of Planck
oscillators in the background dark energy. We first give a rationale
for the fact that the Planck scale would be a minimum scale in the
universe. Our starting point \cite{bgsijmpe} is the model for the
underpinning at the Planck scale for the universe. This is a
collection of $N$ Planck scale
oscillators.\\
Earlier, we had argued that a typical elementary particle like a
\index{pion}pion could be considered to be the result of $n \sim
10^{40}$ evanescent \index{Planck scale}Planck scale oscillators. We
will now consider the problem from a different point of view, which
not only reconfirms the above result, but also enables an elegant
extension to the case of the entire \index{Universe}Universe itself.
Let us consider an array of $N$ particles, spaced a distance $\Delta
x$ apart, which behave like oscillators, that is as if they were
connected by springs. We then have \cite{vandam,uof}
\begin{equation}
r = \sqrt{N \Delta x^2}\label{De1d}
\end{equation}
\begin{equation}
ka^2 \equiv k \Delta x^2 = \frac{1}{2} k_B T\label{De2d}
\end{equation}
where $k_B$ is the Boltzmann constant, $T$ the temperature, $r$ the
extent and $k$ is the spring constant given by
\begin{equation}
\omega_0^2 = \frac{k}{m}\label{De3d}
\end{equation}
\begin{equation}
\omega = \left(\frac{k}{m}a^2\right)^{\frac{1}{2}} \frac{1}{r} =
\omega_0 \frac{a}{r}\label{De4d}
\end{equation}
We now identify the particles with \index{Planck}Planck
\index{mass}masses, set $\Delta x \equiv a = l_P$, the
\index{Planck}Planck length. It may be immediately observed that use
of (\ref{De3d}) and (\ref{De2d}) gives $k_B T \sim m_P c^2$, which
of course agrees with the temperature of a \index{black hole}black
hole of \index{Planck}Planck \index{mass}mass. Indeed, Rosen had
shown that a \index{Planck}Planck \index{mass}mass particle at the
\index{Planck scale}Planck scale can be considered to be a
\index{Universe}Universe in itself. We also use the fact alluded to
that a typical elementary particle like the \index{pion}pion can be
considered to be the result of $n \sim 10^{40}$ \index{Planck}Planck
\index{mass}masses. Using this in (\ref{De1d}), we get $r \sim l$,
the \index{pion}pion \index{Compton wavelength}Compton wavelength as
required. Further, in this latter case, using (\ref{De1d}) and the
fact that $N = n \sim 10^{40}$, and (\ref{De2d}), i.e., $k_BT =
kl^2/N$, together with (\ref{De3d}) and (\ref{De4d}), we get for a
\index{pion}pion, remembering that $m^2_P/n = m^2,$
$$k_B T = \frac{m^3 c^4 l^2}{\hbar^2} = mc^2,$$
where the last equality uses $l = \hbar/mc$; this of course is the well known formula for the Hagedorn
temperature for \index{elementary particles}elementary particles
like \index{pion}pions. In other words, this confirms the conclusion
that we can treat an elementary particle as a series of some
$10^{40}$ \index{Planck}Planck \index{mass}mass oscillators. However
it must be observed from (\ref{De2d}) and (\ref{De3d}), that while
the \index{Planck}Planck \index{mass}mass gives the highest energy
state, an elementary particle like the \index{pion}pion is in the
lowest energy state. This explains why we encounter
\index{elementary particles}elementary particles, rather than
\index{Planck}Planck \index{mass}mass particles in nature. In fact, as
already noted \cite{rr15}, a \index{Planck}Planck \index{mass}mass
particle decays via the \index{Bekenstein radiation}Bekenstein
radiation within a \index{Planck time}Planck time $\sim
10^{-42}secs$. On the other hand, the lifetime of an elementary
particle
would be very much higher.\\
In any case the efficacy of our above oscillator model can be seen by the fact that we recover correctly the \index{mass}masses and \index{Compton scale}Compton scales in the order of magnitude sense and also get the correct Bekenstein and Hagedorn formulas as seen above, and get the correct estimate of the \index{mass}mass of the \index{Universe}Universe itself, as will be seen below.\\
Using the fact that the \index{Universe}Universe consists of $N \sim
10^{80}$ \index{elementary particles}elementary particles like the
\index{pion}pions, the question is, can we think of the
\index{Universe}Universe as a collection of $n N \, \mbox{or}\,
10^{120}$ Planck \index{mass}mass oscillators? This is what we will
now show. In fact, if we use equation (\ref{De1d}) with
$$\bar N \sim 10^{120},$$
we can see that the extent is $r \sim 10^{28}cms$, which is of the order
of the diameter of the \index{Universe}Universe itself. Next using
(\ref{De4d}) we get
\begin{equation}
\hbar \omega_0^{(min)} \left( \frac{l_P}{10^{28}} \right)^{-1}
\approx m_P c^2 \times 10^{60} \approx Mc^2\label{De5d}
\end{equation}
which gives the correct \index{mass}mass $M$ of the
\index{Universe}Universe, which, in contrast to the earlier
\index{pion}pion case, is the highest energy state, while the
\index{Planck}Planck oscillators individually are this time the
lowest in this description. In other words the
\index{Universe}Universe itself can be considered to be described in
terms of normal modes of \index{Planck scale}Planck scale
oscillators (Cf.refs.\cite{psu,psp,uof,gip,ng} for details). We do
not need to specify $N$. We have in this case the following well
known relations
$$R = \sqrt{N}l, Kl^2 = kT,$$
\begin{equation}
\omega^2_{max} = \frac{K}{m} = \frac{kT}{ml^2}\label{ea3}
\end{equation}
In (\ref{ea3}), $R$ is of the order of the diameter of the universe,
$K$ is the analogue of
spring constant, $T$ is the effective temperature while $l$ is the analogue of the
Planck length, $m$ the analogue of the Planck mass and $\omega_{max}$ is the
frequency--the
reason for the subscript $max$ will be seen below. We do not yet give $l$ and $m$ their
usual values as given in (\ref{ea1}) for example, but rather try to deduce these values.\\
We now use the well known result that the individual minimal
oscillators are black holes or mini universes as shown by Rosen
\cite{rosen}. So using the well known Bekenstein temperature
formula for these primordial black holes \cite{ruffini}, that is
$$kT = \frac{\hbar c^3}{8\pi Gm}$$
in (\ref{ea3}) we get,
\begin{equation}
Gm^2 \sim \hbar c\label{e4}
\end{equation}
which is another form of (\ref{ea1}). We can easily verify that
(\ref{e4}) leads to the value $m \sim 10^{-5}gms$. In deducing
(\ref{e4}) we have used the typical expressions for the frequency as
the inverse of the time - the Compton time in this case and
similarly the expression for the Compton length. However it must be
reiterated that no specific values
for $l$ or $m$ were considered in the deduction of (\ref{e4}).\\
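Explicitly, (\ref{e4}) gives
$$m \sim \left(\frac{\hbar c}{G}\right)^{1/2} = \left(\frac{1.05 \times 10^{-27} \times 3 \times 10^{10}}{6.67 \times 10^{-8}}\right)^{1/2} gm \approx 2 \times 10^{-5}gm,$$
that is, the Planck mass.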
We now make two interesting comments. Cercignani and co-workers have
shown \cite{cer1,cer2} that when the gravitational energy becomes of
the order of the electromagnetic energy in the case of the Zero
Point oscillators, that is
\begin{equation}
\frac{G\hbar^2 \omega^3}{c^5} \sim \hbar \omega\label{e5}
\end{equation}
then this defines a threshold frequency $\omega_{max}$ above which
the oscillations become chaotic. In other words, for
meaningful physics we require that
$$\omega < \omega_{max}.$$
Secondly as we saw from the parallel but unrelated theory of phonons
\cite{huang,rief}, which are also bosonic oscillators, we deduce a
maximal frequency given by
\begin{equation}
\omega^2_{max} = \frac{c^2}{l^2}\label{e6}
\end{equation}
In (\ref{e6}) $c$ is, in the particular case of phonons, the
velocity of propagation, that is the velocity of sound, whereas in
our case this velocity is that of light. Frequencies greater than
$\omega_{max}$ in (\ref{e6}) are again meaningless.
We can easily verify that using (\ref{e5}) in (\ref{e6}) gives back (\ref{e4}).\\
Finally we can see from the relation $R = \sqrt{N}l$ that, given the value of $l_P$
and using the value of the radius of the universe, viz., $R \sim
10^{27}cm$, we can deduce that,
\begin{equation}
N \sim 10^{120}\label{e7}
\end{equation}
In a sense the relation (\ref{e4}) can be interpreted in a slightly
different vein as
representing the scale at which all energy -- gravitational and electromagnetic -- becomes one.\\
It should also be noted that a Planck scale particle is a
Schwarzschild Black Hole. From this point of view, we cannot
penetrate the Planck Scale - it constitutes a physical limit. Thus,
in this sense, the Planck scale is indeed the minimum scale while
the photon scale is the largest - that is, the concerned masses are
respectively the highest and lowest.
\section{Introduction}
Recently, Device-to-Device (D2D) communication has emerged as a promising technology for 5G cellular networks to increase the spectral efficiency by reusing the same radio resources among multiple links \cite{D2DLTE}. It is considered as a solution to implement proximity services among multiple devices, such as mobile social networks, public safety and location-based advertisement \cite{ref2}, or for video content delivery \cite{MyIET}. D2D peer discovery is an important procedure to find potential D2D candidates to communicate with each other. Several studies have focused on different aspects of D2D discovery in recent years. In particular, the Long Term Evolution - Advanced (LTE-A) infrastructure is widely used, in which there is no need for extra designs and modifications in current cellular networks. In such systems, the peer discovery procedure is fully under the control of a central base-station (eNodeB) and is called centralized network-assisted D2D peer discovery. Interference and power control schemes for D2D discovery in in-band cellular networks, which consider user densities, are addressed in LTE-A enhancements \cite{ref3}.
In current LTE-A networks, the energy consumption aspects of the D2D discovery mechanisms discussed in the third generation partnership project (3GPP) are studied in \cite{ref4}. A probabilistic model and a random access procedure for the LTE-A system that discover pairs of user equipments (UEs) in a centralized manner have also been proposed in \cite{contrast2} and \cite{ref6}, respectively. Furthermore, a centrally controlled device discovery scheme tailored to the LTE system is proposed in \cite{ref7}. This scheme \cite{ref7} consists of a comprehensive application layer procedure that enables the device discovery services, and a set of Media Access Control (MAC) layer enhancements that provides a resource request/allocation procedure for discovery transmissions. By utilizing the sounding reference signal (SRS) channel, which can be accessed by UEs that are LTE-compliant, a neighbor discovery scheme with D2D channel estimation is proposed in \cite{ref8}. One approach to allocate radio resources to D2D peer discovery procedures is to use LTE-A uplink radio resources. In this regard, the physical uplink control channel (PUCCH) of the cellular network is used to establish peer discovery signaling messages between the BS and potential D2D pairs. However, this can cause inter-carrier interference (ICI), and the transmitted signaling messages need to be strong enough to ensure that the BS and the D2D candidates can receive the messages correctly. Boosting the transmit power of the signaling message symbols can increase the average transmit power of the discovery signal, while maintaining low ICI and cellular PUCCH reception performance \cite{ref9}. Users may also use broadcasting to advertise their presence and services, and to discover other devices, autonomously and continuously [11, 12].
In this paper, we propose a novel uplink underlay network-assisted scheme for D2D discovery. A network-controlled signaling algorithm is proposed to exchange discovery messages a) between user devices in potential D2D pairs, and b) between D2D pairs and the BS. In this scheme, proximity users feed back their identity and channel information to the BS in order to provide an accurate estimation of the link quality between D2D pair candidates. In contrast with the existing works in the literature \cite{contrast1,contrast2}, in which the transmitted discovery messages are assumed to always be delivered successfully to the respective receivers without considering the cellular users' influence, we propose a realistic network model based on the Poisson point process (PPP) to include channel state information (CSI) in the D2D discovery process by considering the interference imposed by cellular users on D2D pairs. We employ stochastic geometry to analyze the system performance in terms of the cumulative distribution function (CDF) of the experienced signal-to-interference ratio (SIR) at D2D receivers and the expected number of time slots needed for the D2D discovery process in a multi-node scenario.
The rest of the paper is organized as follows. In Section II, we first delineate our system model and discovery mechanism. We then present the PPP analysis in Section III. The analytic and simulation results are presented in Section IV, and Section V concludes the paper.
\section{SYSTEM MODEL And D2D DISCOVERY ALGORITHM}
\subsection{System Model}
We consider an infinite cellular network, as shown in Fig. 1, with randomly distributed user devices and BSs within the network area. We define ${\Phi _{b}}$ as a Poisson point process (PPP) with density ${\lambda _{b}}$, which determines the locations of the BSs within the network area. We also define ${\Psi _u}$ as another PPP with density ${\lambda _{u}}$, which determines the locations of the UEs. Potential D2D pairs share the cellular users' resources as an underlay to the uplink cellular infrastructure; hence, the mutual interference at D2D receivers is taken into account. The broadband connection is provided for the BSs by a central scheduler via wired links. For notational simplicity and mathematical derivations, we focus on a typical random cell, termed the representative cell, and analyze the performance of the proposed D2D discovery algorithm in terms of the transmitted signal's success probability and the expected number of required time slots to satisfy the system minimum constraints. We neglect co-channel interference and neighboring cell users' influence for the sake of simplicity; however, the analysis can be applied to multi-cell scenarios as well. All D2D communications are fully under the control of the BS, and users transmit signaling messages at the beginning of each time slot. The transmission scheme is time division multiple access (TDMA), which implies that, given a time for the D2D discovery mechanism, the whole time is divided into small portions called ``time slots'' and each user transmits its message at the beginning of each time slot according to a transmission probability. In a given time slot, only one D2D discovery message is allowed to pass; hence, if two users send their discovery messages at the same time, a collision occurs and the failed messages need to be retransmitted in the next time slots. We further assume that the system operates in the interference-limited regime; therefore, the background noise is negligible in comparison with the experienced interference at the receivers.
\begin{figure}[ht]
\centering
\includegraphics[width=0.38 \textwidth]{sysmodel3-eps-converted-to.pdf}
\caption{System model with randomly distributed UEs and BSs.}
\label{overhead}
\end{figure}
\subsection{D2D Discovery Algorithm}
In this subsection, we propose a new D2D discovery algorithm by considering the BS as the central management system. Technically, to initiate a D2D link between two users, some signaling messages need to be exchanged through the network entities.
In our proposed scheme, since the BS is the central management system that exchanges the signaling messages required for D2D discovery, the scheme is centralized. We focus on a potential D2D pair ($u_x$, $u_y$) as a representative for all potential D2D pairs and describe the proposed D2D discovery mechanism by considering $u_x$ as the transmitter and $u_y$ as its receiver. Assuming that the users can detect other users in their proximity (such a detection can be done by the FlashLinQ mechanism proposed by LTE-A for proximity services \cite{ref15}), the proposed algorithm can be defined in the following five signaling steps; a compact sketch of this exchange, under simplifying assumptions, follows the list.
\begin{enumerate}
\item User $u_x$ sends a request message to the BS to initiate a D2D link with user $u_y$.
\item After receiving $u_x$'s transmitted signal, the BS forwards the discovery message to user $u_y$, schedules user $u_y$ to listen to discovery messages transmitted by user $u_x$ in its vicinity, and sends an acknowledgment signal to user $u_x$.
\item User $u_x$ sends discovery message to user $u_y$.
\item After receiving the discovery message, user $u_y$ measures the experienced signal-to-interference ratio (SIR) and sends it back to the BS.
\item The BS receives the measured SIR; if the received SIR satisfies the system threshold, the BS queues the D2D pair and initiates D2D communication between users $u_x$ and $u_y$; otherwise, the discovery process fails and the D2D pair needs to retransmit its discovery message in the next time slots.
\end{enumerate}
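The exchange above can be summarised in the following minimal sketch (ours; modelling each signaling leg as succeeding with probability $p_{link}$ is an illustrative assumption):
\begin{verbatim}
import numpy as np

def discovery_attempt(sir_xy_dB, tau_dB, p_link=1.0,
                      rng=np.random.default_rng(0)):
    # Steps 1-4: request, BS scheduling + ACK, discovery message,
    # SIR report; each leg succeeds w.p. p_link (assumption).
    for _ in range(4):
        if rng.random() > p_link:
            return False          # a signaling message is lost
    # Step 5: the BS admits the pair iff the reported SIR meets tau.
    return sir_xy_dB >= tau_dB

print(discovery_attempt(sir_xy_dB=5.0, tau_dB=0.0))   # True
\end{verbatim}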
In all the aforementioned steps, when users send signaling messages to the BS, the received signal at the BS should likewise satisfy the system SIR threshold.
\section{Analysis}
In this section we provide the analysis for the proposed scheme by employing stochastic geometry to analyze the transmitted signals' success probability. For the system proposed in Section II, there are two key conditions for successfully delivering a discovery message from the transmitter to the receiver.
a) Collision: when two users transmit their discovery messages at the beginning of a given time slot at the same time, a collision occurs and both users have to retransmit their discovery messages. Denoting $N$ as the number of potential D2D pairs, the signal from user $u_x$ will be successfully delivered to its respective receiver if all other pairs are in idle mode. Each user transmits its discovery signal with a specific transmission probability. Hence, we define $P_{nc}$ as the probability of a successful signal transmission in a given time slot as
\begin{equation}
{P_{nc}} = {T_o}{(1 - {T_o})^{N - 1}},
\end{equation}
where $T_o$ is the transmission probability. Now, we aim to maximize $P_{nc}$ by seeking the optimal transmission probability:
\begin{equation}
\frac{{\partial {P_{nc}}}}{{\partial {T_o}}} = 0 \Rightarrow {T^*_o} = \frac{1}{N},
\end{equation}
where ${T^*_o}$ is the optimal transmission probability. Hence, denoting $P_{nc}^ * $ as the optimal probability of transmitting a signal without collision, we have:
\begin{equation}
P_{nc}^ * = \frac{1}{N}{(1 - \frac{1}{N})^{N - 1}}.
\end{equation}
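As a numerical aside (our sketch), since $(1-1/N)^{N-1} \to e^{-1}$, the optimum in eq. (3) behaves like $P_{nc}^{*} \approx 1/(eN)$ for large $N$:
\begin{verbatim}
import numpy as np

for N in (2, 4, 8, 16, 64):
    p_opt = (1 / N) * (1 - 1 / N) ** (N - 1)   # eq. (3)
    print(N, p_opt, 1 / (np.e * N))            # p_opt ~ 1/(e N)
\end{verbatim}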
b) SIR: for a given transmitter and its respective receiver, the received signal strength and consequently the SIR level should meet the system design thresholds. This means that a transmitted signal from user $u_x$ can be successfully delivered to receiver $u_y$, if the experienced SIR at receiver $u_y$ is equal or greater than a threshold $\tau$, i.e.,
\begin{equation}
P(SI{R_{xy}} \ge \tau ).
\end{equation}
Eq. (4) also delineates the cumulative distribution function (CDF) of the experienced SIR at receiver $u_y$. Since all potential D2D pairs transmit their discovery messages at the beginning of a given time slot, the parameter $N$ can be considered as the number of messages that are simultaneously transmitted by potential D2D users. Now, we define the success probability for a transmitted signal from transmitter $u_x$ to its respective receiver $u_y$ by considering joint collision and SIR satisfaction. This means that a transmitted signal from user $u_x$ will be successfully delivered to its respective receiver if a) there is no collision in transmitting the signal, and b) the received signal at receiver $u_y$ satisfies the system SIR threshold $\tau$, i.e.,
\begin{equation}
{P_{success}} = P(No{\rm{ }}Collision,SI{R_{xy}} \ge \tau ),
\end{equation}
since the collision process and SIR satisfaction are independent for each D2D pair, we can rewrite eq. (5) as
\begin{equation}
{P_{success}} = P_{nc}^ * .P(SI{R_{xy}} \ge \tau ).
\end{equation}
Now we aim to derive the closed form of the CDF of the experienced SIR at the receiver of interest, $u_y$, by employing a stochastic geometry approach. The experienced SIR at receiver $u_y$ due to the transmitted signal from user $u_x$ is
\begin{equation}
SI{R_{xy}} = \frac{{{P_t}{h_{xy}}l\left( {x,y} \right)}}{{\sum\limits_{z \in \Phi \backslash \{ x\} } {{P_t}{h_{zy}}l\left( {z,y} \right)} }},
\end{equation}
where ${\rm{ }}l\left( {x,y} \right) = {\left\| {x - y} \right\|^{ - \alpha }}$ is the standard path loss with exponent $\alpha$, and $\left\| {x - y} \right\|$ denotes the Euclidean distance between transmitter $u_x$ and its respective receiver $u_y$. ${\sum\nolimits_{z \in \Phi \backslash \{ x\} } {{P_t}{h_{zy}}{l(z,y)}}}$ is the total interference due to the transmitting nodes in the set $\Phi$, which is the set of concurrently transmitting nodes. The backslash in eq. (7) implies that node $u_x$ is excluded from the transmitter set. ${{h_{xy}}}$ and ${{h_{zy}}}$ are the fading power coefficients with exponential distribution of mean one, corresponding to the channel gains between transmitter $u_x$ and its respective receiver $u_y$, and between interferer $u_z$ and $u_y$, respectively. We consider receiver $u_y$ at the origin and transmitting node $u_x$, the nearest user to $u_y$, at a fixed distance $R$ from $u_y$; hence, all the interferers are outside the circle of radius $R$. Now, by denoting $I$ as the total interference, i.e., $I = \sum\nolimits_{z \in \Phi \backslash \{ x\} } {{h_{zy}}l(z,y)} $, and using Campbell's theorem \cite{Campell}, the Laplace transform of the interference \cite{stochastic} can be defined as:
\begin{align}
{\mathscr{L}_{I}\left( s \right)} &= \mathscr{L} \left( {\sum\limits_{z \in \Phi } {{h_{zy}}l\left( {z,y} \right)} } \right)\\\notag
&=E\left( {\prod\limits_{z \in \Phi } {{e^{ - s{h_{zy}}l\left( {z,y} \right)}}} } \right)\\\notag
&=\exp \left( { - \pi \lambda_{u} \Gamma \left( {1 + \delta } \right)\Gamma \left( {1 - \delta } \right){s^\delta }} \right)\\\notag
&\buildrel \Delta \over = \exp \left( { - \lambda_{u} G\left( {s,\alpha } \right)} \right),
\end{align}
where, $G\left( {s,\alpha } \right) = \frac{{{\pi ^2}\delta {s^\delta }}}{{\sin \left( {\pi \delta } \right)}}$, $\delta \buildrel \Delta \over = \frac{2}{\alpha }$ and $\Gamma (.)$ is the gamma function. Now, the final expression for the CDF of SIR at receiver $u_y$ can be defined as
\begin{align}
P\left( {SI{R_{xy}} \ge \tau } \right)& = {L_I}\left( {\tau {R^\alpha }} \right)\\&\notag = \exp \left( { - \lambda_u G\left( {\tau {R^\alpha },\alpha } \right)} \right).
\end{align}
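The closed form in eq. (9) can be checked numerically. The sketch below (ours, with illustrative normalised parameters) compares it with a Monte-Carlo estimate obtained by sampling the interfering PPP over a large disc, following the whole-plane Laplace transform used in eq. (8):
\begin{verbatim}
import numpy as np

lam_u, alpha, tau, R = 0.5, 4.0, 1.0, 1.0  # illustrative values
delta = 2 / alpha
s = tau * R**alpha
G = np.pi**2 * delta * s**delta / np.sin(np.pi * delta)
print("eq. (9):", np.exp(-lam_u * G))

rng = np.random.default_rng(0)
L, trials, hits = 30.0, 5000, 0            # truncation radius, runs
for _ in range(trials):
    n = rng.poisson(lam_u * np.pi * L**2)
    r = L * np.sqrt(rng.random(n))         # uniform points in the disc
    I = np.sum(rng.exponential(size=n) * r**-alpha)
    S = rng.exponential() * R**-alpha      # Rayleigh-faded signal
    hits += (S >= tau * I)
print("Monte-Carlo:", hits / trials)
\end{verbatim}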
By substituting equations (3) and (9) in eq. (6), the final expression for the success probability can be derived. Now, we define a random variable $X$ denoting the number of successful transmissions in $n$ given time slots. The probability that there is at least one successful discovery transmission is
\begin{equation}
p\left( {X \ge 1} \right) = 1 - {\left( {1 - {p_{success}}} \right)^n}.
\end{equation}
Now, we define another system design parameter, $\eta$, as the minimum required probability of having at least one successful discovery transmission within $n$ time slots. We have:
\begin{equation}
p\left( {X \ge 1} \right) \ge \eta.
\end{equation}
By substituting equations (3) and (9) in (6), then eq. (6) in (10), and then (10) in (11), and after some simple manipulations, the minimum number of time slots required for the proposed D2D discovery mechanism is obtained as
\begin{align}
n \ge \frac{{\ln (1 - \eta )}}{{\ln (1 - \frac{1}{N}{{(1 - \frac{1}{N})}^{N - 1}}\exp ( - {\lambda _u}G(\tau {R^\alpha },\alpha )))}},
\end{align}
where,
\begin{align}
G\left( {\tau {R^\alpha },\alpha } \right) = \frac{{{\pi ^2}\delta {{\left( {\tau {R^\alpha }} \right)}^\delta }}}{{\sin \left( {\pi \delta } \right)}};{\rm{ }}\delta \buildrel \Delta \over = \frac{2}{\alpha };{\rm{ }}\alpha > 2.\notag
\end{align}
$\alpha > 2$ is the requirement of the stochastic geometry approach addressed in \cite{stochastic}.
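For illustration, eq. (12) can be evaluated directly; the following sketch (ours, with normalised illustrative values rather than the physical units of Table 1) returns the minimum number of slots:
\begin{verbatim}
import numpy as np

def required_slots(eta, N, lam_u, tau, R, alpha):
    delta = 2 / alpha
    G = np.pi**2 * delta * (tau * R**alpha)**delta \
        / np.sin(np.pi * delta)
    p_nc = (1 / N) * (1 - 1 / N)**(N - 1)       # eq. (3)
    p_success = p_nc * np.exp(-lam_u * G)       # eq. (6)
    return np.ceil(np.log(1 - eta) / np.log(1 - p_success))

print(required_slots(eta=0.9, N=4, lam_u=1e-3,
                     tau=1.0, R=1.0, alpha=4))  # 21 slots
\end{verbatim}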
\section{Analytic and Simulation Results}
In this section, we provide Monte-Carlo simulations to evaluate the system performance. We focus on an isolated single cell of the Voronoi tessellation network (Fig. 1) and neglect inter-cell interference. We assume the standard microcell channel model and consider the log-normal shadow fading effect. The rest of the simulation parameters are summarized in Table 1. In the wireless D2D network described in Section II, users initiate D2D communication with the nearest neighbor in their vicinity. We choose potential D2D pairs according to a given distance threshold and label them by IDs as $\{ D{D_1},D{D_2},...,D{D_N}\}$. The key parameter in this system is collision detection in simultaneous transmissions by D2D pairs at the beginning of each time slot. As analyzed in Section III, the optimal transmission probability at the beginning of each time slot is given by ${T_o} = \frac{1}{N}$ (eq. 2). Since the channel access model is TDMA, to simulate collision detection, at the beginning of each time slot each user rolls a die to pick a number in $[1, N]$; if the chosen number is equal to the respective D2D pair's ID, the D2D transmitter is authorized to transmit its discovery signal to its respective receiver. If the received signal at the receiver satisfies the system threshold ($\tau$), the D2D pair succeeds in the collision process and proceeds to the next step of the discovery algorithm.
\begin{table}[h]
\caption {Simulation parameters} \label{tab:title}
\begin{footnotesize}
\small{
\centering
\begin{tabular}{l|l}
\toprule
Parameter & Values\\
\midrule
UE density ($\lambda_u$) & [$10^{-3}$ $10^0$]\\
BS density ($\lambda_b$) & 0.2\\
D2D pairs under consideration ($N$) & 2, 4, 6, 8\\
Required success probability ($\eta$) & 0.9\\
DUE SIR threshold ($\tau$) & [-20 20] dB\\
Path loss exponent ($\alpha$) & 4\\
Distance between DUEs ($R$) & 30 m\\
User transmit power ($P_t$) & 23 dBm\\
BS transmit power ($P_{bs}$) & 40 dBm\\
Log-normal shadow fading & 4 dB standard deviation\\
Monte-Carlo iterations & 1000\\
\bottomrule
\end{tabular}
\label{tbl:params1}}
\end{footnotesize}
\end{table}
In what follows, we first describe the analytic results and the possible tradeoffs shown in Figs. 2, 3, and 4, and then describe the simulation results for the system model delineated in Section II.
Fig. 2 demonstrates the impact of $\lambda_u$ on the CDF of SIR for different system SIR thresholds ($\tau$). As can be seen in this figure, increasing the density of users ($\lambda_u$) within the cell area, which corresponds to more potential D2D pairs, decreases the probability of successful discovery message transmission with respect to the system SIR threshold, because the experienced interference at D2D receivers increases. On the other hand, increasing the system SIR threshold, which corresponds to guaranteeing a higher D2D link quality, decreases the probability of successful discovery message delivery. There is therefore a trade-off between D2D link quality and successful discovery message delivery: the higher the guaranteed D2D link quality, the lower the probability of successfully delivering the discovery messages, and vice versa. Fig. 3 shows the impact of the system SIR threshold on the CDF of SIR; similar explanations as for Fig. 2 apply.
\begin{figure}[b]
\centering
\includegraphics[width=0.43 \textwidth]{PSIR_lambda-eps-converted-to.pdf}
\caption{CDF of SIR versus network density $\lambda_u$.}
\end{figure}
Fig. 4 demonstrates the overall success probability defined in eq. (6). In this figure, we consider a limited number of D2D pairs ($N$) with the joint impact of collision detection and SIR satisfaction. As can be seen from this figure, collision detection has a major impact on the success probability, due to simultaneous transmissions at the beginning of time slots. As the number of D2D pairs transmitting their discovery messages in a given time slot increases, the impact of collision detection becomes more pronounced.
Fig. 5 shows the CDF of the minimum required number of time slots for a successful D2D discovery process. As expected, increasing the number of potential D2D pairs for a specific $\lambda$ and $\tau$ increases the required number of time slots due to more collisions in channel access. On the other hand, increasing the density of the network, since D2D pairs share the uplink resources of the cellular users, leads to higher interference at D2D receivers and thus to more failures in D2D discovery message delivery. Fig. 6 demonstrates the number of required time slots for different numbers of D2D pairs, as derived in eq. (12). Similar explanations as for Fig. 5 apply to Fig. 6.
\begin{figure}[ht]
\centering
\includegraphics[width=0.43 \textwidth]{PSIR_t-eps-converted-to.pdf}
\caption{CDF of SIR versus $\tau$ for $\lambda_u=0.5, 0.7$.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.41 \textwidth]{PS_lambda-eps-converted-to.pdf}
\caption{$P_{success}$ versus $\lambda_u$ for $\tau=20$ dB.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43 \textwidth]{CDF_slots-eps-converted-to.pdf}
\caption{CDF of time slots for $\lambda_u = 0.4$ and $\tau=20$ dB.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43 \textwidth]{N_lambda-eps-converted-to.pdf}
\caption{Required time slots versus $\lambda_u$ for $\tau=20$ dB.}
\end{figure}
\section{CONCLUSIONS}
In this paper, we proposed a novel scheme for D2D discovery by employing a centralized signaling algorithm for exchanging discovery messages between network entities. We used stochastic geometry to analyze and evaluate the realistic performance of the proposed scheme. For this system, proposing new efficient collision avoidance algorithms for TDMA channel access promises even higher reliability in delivering D2D discovery messages.
\bibliographystyle{IEEEtran}
\section{Introduction}
India is the second most populated country in the world, where $\sim1.36$ billion people speak over $200$ different languages. Among them, the top five languages ({\em Hindi, Bengali, Telugu, Tamil,} and {\em Malayalam}) cover $\sim 93\%$ of the entire population, with more than $26\%$ of them being bilingual (as per Wikipedia).
Moreover, a significant proportion of them ~\cite{singh-etal-2018-language} use code-mixed languages to express themselves in Online Social Networks (OSN).
\emph{Code-mixing} (CM) is a linguistic phenomenon in which two or more languages are alternately used during conversation. One of the languages is usually English, while the other can be any regional language such as Hindi (Hindi + English $\rightarrow$ \textit{Hinglish}), Bengali (Bengali + English $\rightarrow$ \textit{Benglish}), Spanish (Spanish + English $\rightarrow$ \textit{Spaniglish}), etc.
Their presence on social media platforms and in day-to-day conversations among the people of multi-lingual communities (such as Indians) is overwhelming. Despite the fact that a significant population is comfortable with code-mixed languages, the research involving them is fairly young. One of the prime reasons is linguistic diversity, i.e., research on any language often fails to adapt to other distant languages; thus, they need to be studied and researched separately. In recent years, many organizations have identified the challenges and have put in commendable efforts for the development of computational systems in regional monolingual or code-mixed setups.
Traditionally, the NLP community has studied the code-mixing phenomenon from a task-specific point of view. Recently, a few studies \cite{pratapa-etal-2018-word, aguilar-solorio-2020-english} have started learning representations for code-mixed texts for semantic and syntactic tasks. While the former has showcased the importance of multi-lingual embeddings from CM text, the latter has made use of a hierarchical attention mechanism on top of positionally aware character bi-grams and tri-grams to learn robust word representations for CM text. Carrying over the same objective, in this paper, we introduce a novel \textbf{HI}erarchically attentive \textbf{T}ransformer (\textsf{HIT}) framework to effectively encode the syntactic and semantic features in the embedding space. At first, \textsf{HIT}\ learns sub-word level representations employing a fused attention mechanism (FAME) -- an \textit{outer-product} based attention mechanism \cite{pmlr-v119-le20b} fused with standard multi-headed self-attention \cite{vaswanietal2017}. Sub-word level representation learning is motivated by the lexical variations of a word in code-mixed languages. The \textit{character-level} \textsf{HIT}\ helps in mapping phonetically similar words and their variations to a similar embedding space and extracts better representations for noisy texts. Subsequently, we apply the \textsf{HIT}\ module at the \textit{word-level} to incorporate sentence-level semantics. The output of \textsf{HIT}\ is a sequence of word representations that can be fed to the architecture of any downstream NLP task. For the evaluation of \textsf{HIT}, we experiment on one classification (sentiment classification), one generative (MT), and two sequence-labelling (POS tagging and NER) tasks. In total, these tasks span eleven datasets across six code-mixed languages -- one European (\textit{Spanish}) and five Indic (\textit{Hindi}, \textit{Bengali}, \textit{Telugu}, \textit{Tamil}, and \textit{Malayalam}).
Our empirical results show that the representations learned by \textsf{HIT}\ are superior to existing multi-lingual and code-mixed representations, and report state-of-the-art performance across all tasks. Additionally, we observe encouraging adaptability of \textsf{HIT}\ in a transfer learning setup across tasks: the representations learned for one task are employed for learning other tasks w/ and w/o fine-tuning. \textsf{HIT}\ yields good performance in both setups for two code-mixed languages.
\noindent \textbf{Main contributions:} We summarize our contributions as follows:
\begin{itemize}[leftmargin=*,topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item We propose a hierarchical attention transformer framework for learning word representations of code-mixed texts for six non-English languages.
\item We propose a hybrid self-attention mechanism, FAME, to fuse the multi-headed self-attention and outer-product attention mechanisms in our transformer encoders.
\item We show the effectiveness of \textsf{HIT}\ on eleven datasets across four NLP tasks and six languages.
\item We observe good task-invariant performance of \textsf{HIT}\ in a transfer learning setup for two code-mixed languages.
\end{itemize}
\noindent \textbf{Reproducibility:} Source codes, datasets and other details to reproduce the results have been made public at \url{https://github.com/LCS2-IIITD/HIT-ACL2021-Codemixed-Representation}.
\input{fig-model}
\section{Related Work}
Recent years have witnessed a few interesting work in the domain of code-mixed/switched representation learning. Seminal work was driven by bilingual embedding that employs cross-lingual transfer to develop NLP models for resource-scarce languages \cite{upadhyay-etal-2016-cross, akhtar-etal-2018-solving,10.1613/jair.1.11640}. \newcite{faruqui-dyer-2014-improving} introduced the BiCCA embedding using bilingual correlation, which performed well on syntactical tasks, but poorly on cross-lingual semantic tasks. Similarly, frameworks proposed by \newcite{hermann-blunsom-2014-multilingual} and \newcite{luong-etal-2015-bilingual} depend on projecting the words of two languages into a single embedding space.
However, as demonstrated by \citet{pratapa-etal-2018-word}, bilingual embedding techniques are not ideal for CS text processing and should be replaced by multi-lingual embeddings learnt from CM data. The transformer-based Multilingual BERT \cite{devlin-etal-2019-bert} embedding has been demonstrated \cite{pires-etal-2019-multilingual} to possess impressive cross-lingual model transfer capabilities.
Also, the XLM model \cite{NEURIPS2019_c04c19c2} has also shown the effects of cross-lingual training for low-resource and CM language tasks.
Another school of thought revolves around sub-word level representations, which can help to capture variations found in CM and transliterated text. \newcite{joshi2016towards} proposed a CNN-LSTM based model to learn the sub-word embeddings through 1-D convolutions of character inputs. They showed that it resulted in better sentiment classification performance for CM text. On top of this intuition, attention-based frameworks have also been proven to be successful in learning low-level representations. The HAN \cite{yang2016hierarchical} model provides the intuition of hierarchical attention for document classification, which enables it to differentially attend to more and less important content, at the word and sentence levels.
In another work, \newcite{aguilar-solorio-2020-english} proposed CS-ELMo for code-mixed inputs with similar intuition.
It utilizes the hierarchical attention mechanism on bi-gram and tri-gram levels to effectively encode the sub-word level representations, while adding positional awareness to it.
Our work builds on top of these two earlier works to push the robustness of code-mixed representations to higher levels. However, the main difference between existing studies and \textsf{HIT}\ is the incorporation of the outer-product-attention-based fused attention mechanism (FAME).
\section{Proposed Methodology}
In this section, we describe the architecture of \textsf{HIT}\ for learning effective representations in code-mixed languages.
The backbone of \textsf{HIT}\ is transformer \cite{vaswanietal2017} and Hierarchical Attention Network (HAN) \cite{yang2016hierarchical}. \textsf{HIT}\ takes a sequence of words (a code-mixed sentence) $S = \langle w_1, w_2, ..., w_N\rangle$ as input and processes each word $w_i$ using a \textit{character-level} \textsf{HIT}\ to obtain sub-word representation $S^{sb} = \langle sb_1, sb_2, ..., sb_N\rangle $. The \textit{character-level} \textsf{HIT}\ is a transformer encoder, where instead of computing multi-headed self-attention only, we amalgamate it with an outer-product attention mechanism \cite{pmlr-v119-le20b} as well. The intuition of outer-product attention is to extract higher-order character-level relational similarities among inputs. To leverage both attention mechanisms, we compute their weighted sum using a softmax layer. Subsequently, we pass it through the typical \textit{normalization} and \textit{feed-forward} layers to obtain the encoder's output. A stacking of $l_c$ encoders is used. In the next layer of the hierarchy, these sub-word representations are combined with positional and rudimentary embeddings of each word and forwarded to the \textit{word-level} \textsf{HIT}'s encoder. Finally, the output of \textit{word-level} \textsf{HIT}\ is fed to the respective task-specific network.
The hierarchical nature of \textsf{HIT}\ enables us to capture both \textit{character-level} and \textit{word-level} relational (syntactic and semantic) similarities. A high-level schema of \textsf{HIT}\ is shown in Figure \ref{fig:model}.
\subsection{Fused Attention Mechanism (FAME)}
\label{subs: FAME}
FAME extends the multi-headed self-attention (MSA) module of a standard transformer by including a novel outer-product attention (OPA) mechanism. Given an input $x$, we use three weight matrices, $W^{self}_Q, W^{self}_K,$ and $W^{self}_V$, to project the input to \textit{Query} ($Q^{self}$), \textit{Key} ($K^{self}$), and \textit{Value} ($V^{self}$) representations for MSA, respectively. Similarly, for OPA we use $W^{outer}_Q, W^{outer}_K,$ and $W^{outer}_V$ for projecting $x$ to $Q^{outer}, K^{outer}$, and $V^{outer}$. Next, the two attention mechanisms are learnt in parallel, and a weighted sum is computed as the output. Formally, $ H = \alpha_1 \cdot Z_{self} \oplus \alpha_2 \cdot Z_{outer}$,
where $Z_{self}$ and $Z_{outer}$ respectively are the outputs of multi-headed self attention and outer-product attention modules, and $\alpha_1$ and $\alpha_2$ are the respective weights computed through a softmax function.
\paragraph{Multi-Headed Self Attention.}
The standard transformer self-attention module \cite{vaswanietal2017} computes a scaled dot-product between the \textit{query} and \textit{key} vectors to learn the attention weights for the \textit{value} vector. We compute the output as follows:
\begin{eqnarray}\tiny
Z_{self} & = & softmax\left( \frac{Q^{self} \cdot K^{self^{T}}}{\sqrt{d^k}}\right) V^{self} \nonumber \\ \nonumber
& = & \sum_i^N softmax\left( \frac{q \cdot k_i}{\sqrt{d^k}}\right) v_i , \forall q \in Q^{self}
\end{eqnarray}
where $N$ is the sequence length, and $d^k$ is the dimension of the \textit{key} vector.
\paragraph{Outer-Product Attention.}
\label{subsubs: OPA}
This is the second attention mechanism that we incorporate into FAME. Although the fundamental process of OPA \cite{pmlr-v119-le20b} is similar to the multi-headed self-attention computation, OPA differs in its operators and in the use of a \textit{tanh} activation instead of softmax. To compute the interaction between the query and key vectors, we use element-wise multiplication as opposed to the dot-product in MSA. Subsequently, an element-wise \textit{tanh} is applied before computing the outer-product with the value vector. The intuition is to exploit fine-level associations between the \textit{key}-scaled \textit{query} and \textit{value} representations in a code-mixed setup. Similar to the earlier case, we define OPA as:
\begin{eqnarray}
Z_{outer} = \sum_i^N tanh\left(\frac{q \odot k_i}{\sqrt{d_k}}\right) \otimes v_i, \forall q \in Q^{outer} \nonumber
\end{eqnarray}
where $\odot$ is the element-wise multiplication, and $\otimes$ is the outer-product.
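A minimal NumPy sketch of the two branches follows (our illustration; the projection $W_o$ that maps the $d\times d$ outer-product output back to $d$ dimensions, and the learnable fusion logits, are implementation assumptions not fixed by the equations above):
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fame(Q, K, V, Wo, a):
    # Q, K, V: (T, d); Wo: (d*d, d) projection (assumption);
    # a: (2,) fusion logits for alpha_1, alpha_2
    T, d = Q.shape
    Z_self = softmax(Q @ K.T / np.sqrt(d)) @ V    # self-attention
    Z_outer = np.zeros((T, d, d))
    for t in range(T):                            # outer-product attn
        w = np.tanh(Q[t] * K / np.sqrt(d))        # (T, d), element-wise
        Z_outer[t] = np.einsum('ti,tj->ij', w, V) # sum_i w_i (x) v_i
    Z_outer = Z_outer.reshape(T, d * d) @ Wo      # back to (T, d)
    alpha = softmax(a)
    return alpha[0] * Z_self + alpha[1] * Z_outer

rng = np.random.default_rng(0)
T, d = 5, 8
H = fame(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
         rng.normal(size=(T, d)), rng.normal(size=(d * d, d)),
         np.zeros(2))
print(H.shape)  # (5, 8)
\end{verbatim}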
\subsection{Task-specific Layers}
As we mention earlier, \textsf{HIT}\ can be adapted for various NLP tasks including sequence labelling, classification, or generative problems. In the current work, we evaluate \textsf{HIT}\ on part-of-speech (POS) tagging, named-entity recognition (NER), sentiment classification, and machine translation (MT). We mention their specific architectural details below.
For the sentiment classification, we apply a {\em GlobalAveragePooling} operation over the token embeddings to obtain the sentence embeddings. Additionally, we concatenate extracted statistical features along with the embeddings before feeding into the final classification layer. We use \textit{tf-idf} (term frequency–inverse document frequency) vectors for $\{1,2,3\}$-grams of words and characters extracted from each text. We hypothesize that these statistical features contain sufficient information to get rid of any handcrafted features like the ones suggested by
~\citet{bansal-etal-2020-code}. Finally, a \textit{softmax} activation function is used for the prediction.
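A compact sketch of this classification head is given below (ours; the dimensions and module names are illustrative assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

class SentimentHead(nn.Module):
    # Pools HIT token embeddings, concatenates tf-idf features,
    # and predicts class probabilities.
    def __init__(self, d_model, n_tfidf, n_classes):
        super().__init__()
        self.fc = nn.Linear(d_model + n_tfidf, n_classes)

    def forward(self, tok_emb, tfidf):
        # tok_emb: (B, T, d_model); tfidf: (B, n_tfidf)
        pooled = tok_emb.mean(dim=1)   # GlobalAveragePooling
        out = self.fc(torch.cat([pooled, tfidf], dim=-1))
        return out.softmax(dim=-1)

head = SentimentHead(d_model=128, n_tfidf=300, n_classes=3)
probs = head(torch.randn(4, 20, 128), torch.randn(4, 300))
print(probs.shape)  # torch.Size([4, 3])
\end{verbatim}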
Similarly, for POS tagging and NER, the corresponding labels for each of the token's embedding is obtained through a softmax activated output.
In the case of MT, we use an encoder-decoder framework where both the encoder and the decoder are based on the {\textsf{HIT}} framework.
\section{Experiments, Results, and Analyses}
In this section, we furnish the details of chosen datasets, our experimental results, comparative study, and necessary analyses.
\input{tab-dataset}
\subsection{Datasets}
\label{subs: data}
We evaluate \textsf{HIT}\ on 11 publicly available datasets across 4 tasks in 6 code-mixed languages. For POS tagging, we employ Hindi, Telugu, Bengali, and Spanish, whereas we evaluate Hindi and Spanish datasets for NER. Similarly, for sentiment classification, we incorporate Hindi, Tamil, Malayalam, and Spanish code-mixed sentences. Finally, for machine translation, we use a recently released Hindi-English code-mixed parallel corpus. Brief statistics of all datasets are presented in Table \ref{tab:dataset}.\\
$\bullet$ \textbf{POS tagging:} We use the Hindi-English code-mixed POS dataset provided by ~\citet{singh-etal-2018-twitter}. It was collected from Twitter and has $1489$ sentences. Each token in a sentence is tagged with one of 14 tags\footnote{We furnish the details of the tagset in the appendix}. The Bengali and Telugu datasets are collected from the ICON-2016 workshop\footnote{\url{http://amitavadas.com/Code-Mixing.html}}. The instances are social-media messages collected from Twitter, Facebook, and WhatsApp, comprising $1982$ and $626$ sentences in Telugu and Bengali, respectively. These two datasets follow the Google universal tagset \cite{DBLP:journals/corr/abs-1104-2086} and contain $52$ and $39$ tags, respectively.
For Spanish, we use the Linguistic Code-switching Evaluation (LinCE) POS dataset ~\cite{alghamdi-etal-2016-part}, consisting of more than $35k$ sentences with $14$ tags.
\input{tab-sentiment}
$\bullet$ \textbf{Sentiment classification:} We explore the Hinglish sentiment classification dataset developed by ~\newcite{joshi2016towards}. The dataset contains $3879$ public Facebook posts, comprising $15\%$ {\em negative}, $50\%$ {\em neutral}, and $35\%$ {\em positive} samples. We further consider two sentiment classification datasets for the Dravidian languages Tamil and Malayalam \cite{chakravarthi-etal-2020-corpus},
containing $15744$ and $6739$ instances, respectively, with four sentiment labels -- {\em positive}, {\em negative}, {\em neutral}, and {\em mixed feelings}. Additionally, we use the SemEval-2020 ~\cite{patwa-etal-2020-sentimix} dataset for Spanish code-mixed sentiment classification. It supports a classic 3-way sentiment classification. \\
$\bullet$ \textbf{Named-entity recognition:} For NER, we employ Hindi \cite{singh2018named} and Spanish \cite{aguilar-etal-2018-named} datasets with $2079$ and $52781$ sentences, respectively. In Hindi, the labels are \textit{name}, \textit{location}, and \textit{organization}. The Spanish dataset has six additional labels -- \textit{event}, \textit{group}, \textit{product}, \textit{time}, \textit{title}, and \textit{other} named entities.\\
$\bullet$ \textbf{Machine Translation:} We utilize a recently developed Hindi-English code-mixed parallel corpus for machine translation \cite{gupta-etal-2020-semi}, comprising more than $200k$ sentence pairs. For experiments, we transliterate all Devanagari Hindi text.
\subsection{Baselines}
\label{subs:baseline}
\paragraph{POS tagging, NER \& sentiment classification:}
\noindent $\rhd$ \textbf{BiLSTM} \cite{hochreiter1997long}: It is a weak baseline with two conventional BiLSTM layers. For POS and NER, we additionally incorporate a CRF layer for the final classification.
$\rhd$ \textbf{HAN} \cite{yang2016hierarchical}: We adapt the Hierarchical Attention Network (HAN) for our purpose. The subword embedding is computed at the first level of the attention network, followed by word-level attention at the second level. Recently, ~\newcite{bansal-etal-2020-code} also adopted HAN for code-mixed classification.
$\rhd$ \textbf{ML-BERT} \cite{devlin-etal-2019-bert}: We fine-tune multilingual BERT \cite{m:bert}.
$\rhd$ \textbf{CS-ELMo} \cite{aguilar-solorio-2020-english}: It is one of state-of-the-arts on code-mixed languages. It uses pre-trained ELMo \cite{peters-etal-2018-deep} to transfer knowledge from English to code-mixed languages.
\noindent $\rhd$ \textbf{Subword-LSTM} \cite{joshi2016towards}: It is a hybrid CNN-LSTM model. A 1D convolution operation is applied for the subword representation. Subsequently, the convolved features are max-pooled and fed to an LSTM. Since this system disregards word boundaries in a sentence, we use it for {\em sentiment classification only}.
\paragraph{Machine translation:}
For machine translation, we evaluate {\textsf{HIT}} against \textbf{GFF-Pointer} ~\cite{gupta-etal-2020-semi}, a gated feature fusion (GFF) based approach that amalgamates the XLM and syntactic features during encoding and uses a pointer generator for decoding. Furthermore, we also incorporate three other baselines for comparison -- \textbf{Seq2Seq} ~\cite{sutskever2014advances}, \textbf{Attentive-Seq2Seq} ~\cite{bahdanau2014neural} and \textbf{Pointer Generator}~\cite{see2017get}.
\subsection{Experimental Setup}
For each experiment,
we use a $dropout=0.1$ in both the transformer block and the task-specific layers. Categorical cross-entropy loss with the Adam ($\eta = 0.001$, $\beta_{1} = 0.9, \beta_{2} = 0.999$) optimizer \cite{kingma2014adam} is employed in all experiments. We train our models for a maximum of $500$ epochs with an early-stopping criterion having $patience = 50$.
We additionally use a learning rate scheduler to reduce the learning rate to $70\%$ of its current value at plateaus, with a patience of $20$ epochs. All models are trained with \textit{batch-size}$=32$.
\input{tab-pos}
\subsection{Experimental Results}
\label{subs:setup}
We compute precision, recall, and F1-score for POS, NER, and sentiment classification, whereas BLEU, METEOR, and ROUGE scores are reported for the machine translation task.
{\bf Sentiment classification:} As shown in Table \ref{tab:results:sentiment}, \textsf{HIT}\ obtains the best F1-scores across all languages. For Hindi, three baselines (BiLSTM, ML-BERT, and CS-ELMo) obtain the best baseline F1-score of $0.909$, where \textsf{HIT}\ yields a small improvement with a $0.915$ F1-score. In comparison, we observe an improvement of $1.4\%$ for Tamil, where {\textsf{HIT}} and the best baseline (CS-ELMo) report $0.473$ and $0.459$ F1-scores, respectively. We observe the same pattern for Malayalam and Spanish as well -- in both cases, \textsf{HIT}\ obtains improvements of $0.9\%$ and $2.0\%$, respectively. For Malayalam, {\textsf{HIT}} reports a $0.651$ F1-score, whereas CS-ELMo reports a $0.642$ F1-score. In the case of Spanish, HAN turns out to be the best baseline with a $0.440$ F1-score.
Comparatively, \textsf{HIT}\ achieves $0.460$ F1-score.
The last two rows of Table \ref{tab:results:sentiment} report ablation results -- a) excluding outer-product attention ($Atn^{outer}$) from \textsf{HIT}; and b) excluding sub-word embeddings (\textit{character-level} \textsf{HIT}). In all cases, the absence of \textit{sub-word} embeddings has a negative effect on the performance, suggesting the effectiveness of \textit{character-level} \textsf{HIT}\ in the architecture. On the other hand, omitting outer-product attention reduces F1-scores in $3$ out of $4$ cases -- we observe a marginal improvement of $0.04$ points for Malayalam. In summary, \textsf{HIT}\ attains state-of-the-art performance across all four datasets, whereas the best baseline (CS-ELMo) reports $1.2\%$ lower scores on average.
\input{tab-ner}
{\bf POS tagging:} Table \ref{tab:results:pos} shows the comparative results for POS tagging in Hindi, Telugu, Bengali, and Spanish. Similar to sentiment classification, we observe that \textsf{HIT}\ attains the best F1-scores across three datasets ($0.625\%$ better on average). It achieves $0.919$, $0.762$, $0.853$, and $0.825$ F1-scores for Hindi, Telugu, Bengali, and Spanish, respectively. In comparison, CS-ELMo yields the best F1-scores among all the baselines across three datasets, \textit{viz.} Hindi ($0.910$), Telugu ($0.775$), and Bengali ($0.847$). For Spanish, ML-BERT obtains the best baseline F1-score of $0.802$. From the ablation, we observe a negative effect on performance when removing either the outer-product attention or the \textit{character-level} \textsf{HIT}\ in the majority of cases.
{\bf NER:}
The performance of \textsf{HIT}\ for NER is also in line with the previous two tasks, as shown in Table \ref{tab:results:ner}. As mentioned earlier, we evaluate \textsf{HIT}\ on the Hindi and Spanish datasets. In both cases, we observe $\ge1\%$ improvement in F1-score in comparison with the best baseline (CS-ELMo).
In all three tasks, CS-ELMo is arguably the most consistent baseline. Together with the state-of-the-art performance of \textsf{HIT}, we attribute the good performance to subword-level contextual modelling -- both systems use contextual representation models (ELMo and Transformer) to encode the syntactic and semantic features. Moreover, the FAME module in \textsf{HIT}\ assists in improving the performance even further.
\input{tab-NMT}
\input{tab-transfer}
{\bf Machine Translation:} Finally, Table \ref{tab:results:NMT} reports the results for the English-to-Hindi (En-Hi) machine translation task. For comparison, we also report BLEU, METEOR, and ROUGE-L scores for four baseline systems -- Seq2Seq \cite{sutskever2014advances}, Attentive-Seq2Seq \cite{bahdanau2014neural}, Pointer Generator \cite{see2017get}, and GFF-Pointer \cite{gupta-etal-2020-semi}. For all three metrics, \textsf{HIT}\ reports a significant improvement ($1$-$9$ points) over the state-of-the-art and other baselines. GFF-Pointer obtains a $21.55$ BLEU score, while the other baselines yield BLEU scores in the range $[15-17]$. In comparison, \textsf{HIT}\ obtains $28.22$ BLEU, {\em an extremely convincing result}. Similarly, \textsf{HIT}\ reports ROUGE and METEOR scores of $51.52$ and $29.59$, respectively.
\subsection{Effects of Transfer Learning across Tasks}
One of the core objectives of representation learning is that the learned representations should be task-invariant -- representations learned for one task should also be (nearly) as effective for other tasks. The intuition is that the syntactic and semantic features captured for a language should be independent of the task; if this does not hold, the representation can be said to capture task-specific features instead of linguistic ones. To this end, we perform transfer learning experiments with (w/) and without (w/o) fine-tuning, as sketched below. Since we have only one dataset each for Tamil, Telugu, Bengali, and Malayalam, we choose the Hindi and Spanish code-mixed datasets (POS, NER, and sentiment classification) for this study. Table \ref{tab:transfer} reports results for both code-mixed languages. For each case, we learn \textsf{HIT}'s representation on one (source) task and subsequently utilize the representation for the other two (target) tasks. Moreover, we repeat each experiment with and without fine-tuning \textsf{HIT}.
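A minimal sketch of this protocol, assuming a Keras implementation (\texttt{hit\_encoder} and \texttt{target\_head} are hypothetical handles for the source-task encoder and a fresh task-specific layer):
\begin{verbatim}
import tensorflow as tf

# reuse HIT's encoder trained on the source task for a target task
hit_encoder.trainable = False   # "w/o fine-tuning"; True for "w/"
target_model = tf.keras.Sequential([hit_encoder, target_head])
target_model.compile(optimizer="adam",
                     loss="categorical_crossentropy")
\end{verbatim}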
\input{tab-confusion}
For Hindi code-mixed data, we do not observe a positive effect of transfer learning for NER. This could be due to the limited lexical variation of named entities in the other datasets. However, we obtain the best F1-score ($0.936$) for POS tagging in a transfer learning setup with sentiment classification as the source. Similarly, with sentiment classification as the target, we observe performance improvements with both POS and NER as source tasks. For Spanish, we observe improvements in F1-scores for all three tasks. We attribute these improvements to the availability of more sentences from which \textsf{HIT}\ can leverage the linguistic features of both Hindi and Spanish.
\input{tab-error-examples}
\input{fig-gradient-heatmap-Sentiment}
\subsection{Error Analysis}
\label{subs: analysis}
In this section, we analyze the performance of \textsf{HIT}\ both quantitatively and qualitatively. First, we report the confusion matrices\footnote{\label{lbl:confusion}Confusion matrices and more error cases for other tasks are presented in the appendix.} for Hindi NER and sentiment classification in Table \ref{fig:confusion}. In both cases, we observe that the \textit{true-positives} are dominant for all labels. Furthermore, the \textit{false-positives} are extremely low (except for `\textit{B-Org}' in NER) in the majority of cases -- suggesting very good precision in general. The major contribution to the error comes from the \textit{neutral} and \textit{other} classes in sentiment classification and NER, respectively. In sentiment analysis, $10\%$ each of the \textit{positive} and \textit{negative} labels were mis-classified as \textit{neutral}. Similarly, in NER, we observe that the \textit{organization} entities (\textit{B-Org \& I-Org}) and the \textit{other} class are confused with each other in a significant number of samples. This may be due to the under-representation ($\sim13\%$) of \textit{organization} entities in the dataset.
We also perform a qualitative error analysis of \textsf{HIT}\ and CS-ELMo. Table \ref{tab:error} reports the results for the NER and sentiment classification tasks$^{\ref{lbl:confusion}}$. For the first example in sentiment classification, \textsf{HIT}\ accurately predicts the sentiment label as \textit{positive}; in comparison, CS-ELMo mis-classifies it as \textit{neutral}. For the second example, both \textsf{HIT}\ and CS-ELMo wrongly predict the sentiment as \textit{neutral}. One possible reason could be the presence of the negatively-inclined word \textit{chodo} ({\em leave}) in the sentence. For NER, the sentence has two entities (one \textit{person} and one \textit{organization}). While \textsf{HIT}\ correctly identifies `\textit{dhan dhan satguru}' as a person, it could not recognize `\textit{msg}' as an organization. On the other hand, CS-ELMo correctly identifies both.
Furthermore, we take the first example of sentiment analysis (from Table \ref{tab:error}) to gain insight into \textsf{HIT}.
It is not hard to see that the most positive vibe is contributed by the phrase `\textit{badhai ho sir}' (\textit{congratulations sir}).
To validate our hypothesis, we use a gradient-based interpretation technique, Grad-CAM \cite{Selvaraju_2019}, which uses the gradients of a neural network to show the effect of neurons on the final output. Due to the hierarchical and modular nature of {\textsf{HIT}}, we are able to extract the intermediate word-level representations learnt by the \textit{character-level} \textsf{HIT}\ and compute the gradient of the loss of the actual class with respect to these representations. The magnitude of the gradient shows the impact of each word on the final output.
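A minimal sketch of this computation, assuming a Keras implementation in which the character-level sub-network (\texttt{subword\_encoder}) and the remainder of the model (\texttt{classifier\_head}) are exposed separately (both names are placeholders):
\begin{verbatim}
import tensorflow as tf

def word_impact(subword_encoder, classifier_head, inputs, true_class):
    # gradient magnitude of the true-class score with respect to the
    # intermediate word-level representations
    with tf.GradientTape() as tape:
        word_repr = subword_encoder(inputs)   # (batch, words, dim)
        tape.watch(word_repr)
        logits = classifier_head(word_repr)   # (batch, classes)
        score = logits[:, true_class]
    grads = tape.gradient(score, word_repr)
    return tf.norm(grads, axis=-1)            # per-word impact
\end{verbatim}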
Figure \ref{fig:interpretsentiment1:org} shows the word-level and character-level gradient maps for the original input. We can observe that \textsf{HIT}\ attends to the most important component in both cases. At the \textit{word-level}, it highlights the positive phrase `\textit{liye badhai}' ({\em congratulations on}). Moreover, the character-level \textsf{HIT}\ attends to the two syllables `\textit{b}' and `\textit{dh}' in the word `\textit{badhai}' ({\em congratulation}). This suggests that both the \textit{word-level} and \textit{character-level} components are capable of extracting important features from the input. Furthermore, to check robustness, we investigate \textsf{HIT}\ on a perturbed input. In the previous example, we tweak the spelling of the most important word `\textit{badhai}' to `\textit{badha\textbf{\underline{a}}i\textbf{\underline{i}}}' (an out-of-vocabulary word with respect to the dataset). Figure \ref{fig:interpretsentiment1:perturb} shows similar patterns for the perturbed input as well. This signifies that \textsf{HIT}\ identifies the phonetic similarity of the two words and is flexible to spelling variants, a common feature in code-mixed environments.
\subsection{\textsf{HIT}'s Performance on Monolingual Data} \label{sec:monolingual}
In this section, we outline the performance of {\textsf{HIT}} in monolingual and low-resource settings. We consider the sentiment classification dataset curated by \newcite{akhtar-etal-2016-aspect}, containing 5417 transliterated \emph{Hindi} reviews with 4 sentiment labels -- \emph{positive}, \emph{negative}, \emph{neutral}, and \emph{conflict}. We also utilize a Magahi POS dataset \cite{kumar-etal-2012-developing}, annotated with $33$ tags from the BIS-tagset\footnote{\url{https://thottingal.in/blog/2019/09/10/bis-pos-tagset-review/}}. We report the performance of \textsf{HIT}\ and the other baselines on these two datasets in Table \ref{tab:monolingual}. For the Hindi sentiment classification task, we observe that {\textsf{HIT}} yields an F1-score of $0.635$, which is better than CS-ELMo and ML-BERT by $9.3\%$ and $5.9\%$, respectively. Also, for Magahi POS, {\textsf{HIT}} reports the best F1-score of $0.775$ -- increments of $+2.1\%$ and $+9.5\%$ over CS-ELMo and ML-BERT, respectively. These results suggest that {\textsf{HIT}} is capable of handling monolingual and low-resource texts efficiently.
\input{tab-monolingual}
\subsection{Learnable Parameters and Power Usage}
We conduct all our experiments on a single Tesla T4 GPU. In Table \ref{tab:parameters}, we report the total trainable parameters for \textsf{HIT}\ and the other baselines. We observe that \textsf{HIT}\ requires a comparable number of parameters. For instance, in the Hindi-English sentiment analysis task (sequence classification), \textsf{HIT}\ has a total of \textasciitilde$2.7M$ trainable parameters, while other baselines such as CS-ELMo, HAN, Subword-LSTM, and BiLSTM require \textasciitilde$2.9M$, \textasciitilde$2.7M$, \textasciitilde$2.1M$, and \textasciitilde$2.8M$ parameters, respectively. ML-BERT has a whopping \textasciitilde$179.2M$ parameters. Similarly, in Hindi-English POS tagging, the number of parameters for \textsf{HIT}\ is comparable (or even lower) -- \textsf{HIT}: \textasciitilde$1.4M$, CS-ELMo: \textasciitilde$2.4M$, HAN: \textasciitilde$1.4M$, BiLSTM-CRF: \textasciitilde$1.5M$, ML-BERT: \textasciitilde$177.9M$. We observed a similar distribution for the other tasks/languages as well.
We further note that {\textsf{HIT}} is more efficient than the current SOTA models: it takes $13$ s/epoch to train, which is significantly lower than CS-ELMo ($18$ s/epoch), HAN ($14$ s/epoch), and ML-BERT ($172$ s/epoch), while it takes slightly more time than BiLSTM ($12$ s/epoch) and Subword-LSTM ($7$ s/epoch). We also computed the power consumed in training \textsf{HIT}\ for a maximum of 500 epochs. Following the guidelines of \newcite{strubell-etal-2019-energy}, we estimate a total power consumption of 0.383 kWh and an equivalent CO2 emission of 0.365 pounds.
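The CO2 figure follows from the energy estimate via the average-emissions coefficient used by \newcite{strubell-etal-2019-energy} ($0.954$ lbs CO2 per kWh); a one-line check:
\begin{verbatim}
def co2_pounds(kwh):
    # avg. lbs of CO2 emitted per kWh consumed (Strubell et al., 2019)
    return 0.954 * kwh

print(co2_pounds(0.383))  # ~0.365 lbs, matching the estimate above
\end{verbatim}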
\input{tab-hyperparameters}
\section{Conclusion}
In this work, we present {\textsf{HIT}} -- a hierarchical transformer-based framework for learning robust code-mixed representations. \textsf{HIT}\ contains a novel fused attention mechanism, which computes a weighted sum of multi-headed self-attention and outer-product attention, and is capable of capturing relevant information at a more granular level. We experimented with eleven code-mixed datasets for the POS, NER, sentiment classification, and MT tasks across six languages. We observed that {\textsf{HIT}} successfully outperforms existing SOTA systems. We also demonstrate the task-invariant nature of the representations learned by {\textsf{HIT}} via a transfer learning setup, signifying its effectiveness in learning linguistic features of CM text rather than task-specific features. Finally, we qualitatively show that {\textsf{HIT}} successfully embeds semantically and phonetically similar words of a code-mixed language.
\section*{Acknowledgement}
The work was partially supported by the Ramanujan Fellowship (SERB) and the Infosys Centre for AI, IIITD.
\bibliographystyle{acl_natbib}
\section{Semantic Understanding of Languages}\label{subs: semanticanalysis}
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{conjecture}{Conjecture}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\theoremstyle{definition}
\newtheorem{defn}{Definition}[section]
\newtheorem{rem}{Remark}[section]
\newtheorem{exam}{Example}[section]
\newtheorem{pict}{Figure}[section]
\begin{document}
\begin{center}
\uppercase{\bf Previous Player's Positions of Impartial Three-Dimensional Chocolate-Bar Games }
\vskip 20pt
{\bf Ryohei Miyadera }\\
{\smallit Keimei Gakuin Junior and High School, Kobe City, Japan}. \\
{\tt [email protected]}
\vskip 10pt
{\bf Hikaru Manabe}. \\
{\smallit Keimei Gakuin Junior and High School, Kobe City, Japan}. \\
{\tt [email protected]}
\vskip 10pt
{\bf Shunsuke Nakamura}. \\
{\smallit Independent Researcher, Tokyo, Japan }. \\
{\tt [email protected]}
\end{center}
\vskip 20pt
\centerline{\smallit Received: , Revised: , Accepted: , Published: }
\vskip 30pt
\pagestyle{myheadings}
\markright{\smalltt INTEGERS: 19 (2019)\hfill}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\centerline{\bf Abstract}
\noindent
In this study, we investigate three-dimensional chocolate bar games, which are variants of the game of Chomp. A three-dimensional chocolate bar is a three-dimensional array of cubes in which a bitter cubic box is present in some part of the bar. Two players take turns and cut the bar horizontally or vertically along the grooves. The player who manages to leave the opponent with a single bitter block is the winner.
We consider the $\mathcal{P}$-positions of this game, where the $\mathcal{P}$-positions are positions from which the previous player (the player who has just moved) can force a win, as long as they play correctly at every stage.
We present sufficient conditions under which the position $\{p,q,r\}$ is a $\mathcal{P}$-position if and only if $(p-1) \oplus (q-1) \oplus (r-1) = 0$, where
$p, q$, and $r$ are the length, height, and width of the chocolate bar, respectively.
\pagestyle{myheadings}
\markright{\smalltt \hfill}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction}\label{introductionsection}
Chocolate bar games are variants of the Chomp game presented in \cite{gale}.
A two-dimensional chocolate bar is a two-dimensional array of squares in which a bitter square printed in black is present in some part of the bar. See the chocolate bars in Figure \ref{two2dchoco}.
A three-dimensional chocolate bar is a three-dimensional array of cubes in which a bitter cubic box printed in black is present in some parts of the bar. Figure \ref{two3dchoco} displays examples of three-dimensional chocolate bars. Games involving these chocolate bars may be defined as follows.
\begin{defn}\label{definitionofchoco}
(i) Two-dimensional chocolate bar game: Each player in turn breaks the bar in a straight line along the grooves and eats the broken piece. The player who manages to leave the opponent with a single bitter block (black block) is the winner. \\
(ii) Three-dimensional chocolate bar game: The rules are the same as in (i), except that the chocolate is cut horizontally or vertically along the grooves. Examples of cutting three-dimensional chocolate bars are shown in Figure \ref{3dcut}.
\end{defn}
\begin{pict}
\begin{tabular}{cc}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[height=1.4cm]{twochoco.eps}
\label{two2dchoco}
\end{minipage}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[height=1.8cm]{two3d.eps}
\label{two3dchoco}
\end{minipage}
\end{tabular}
\end{pict}
\begin{exam}
Three methods of cutting a three-dimensional chocolate bar.\\
\begin{pict}
\includegraphics[height=2.7cm]{3choco.eps}
\label{3dcut}
\end{pict}
\noindent
\end{exam}
For completeness, we briefly review some necessary concepts of combinatorial game theory; refer to \cite{lesson} for greater detail. Let $Z_{\ge 0}$ and $N$ be the sets of non-negative integers and natural numbers, respectively.
\begin{defn}\label{definitionfonimsum11}
Let $x$ and $y$ be non-negative integers. Expressing them in Base 2 yields
$x = \sum_{i=0}^n x_i 2^i$ and $y = \sum_{i=0}^n y_i 2^i$ with $x_i,y_i \in \{0,1\}$.
We define the nim-sum $x \oplus y$ as
\begin{equation}
x \oplus y = \sum\limits_{i = 0}^n {{w_i}} {2^i},
\end{equation}
where $w_{i}=x_{i}+y_{i} \ (\bmod\ 2)$.
\end{defn}
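For example, $6 \oplus 5 = (110)_2 \oplus (101)_2 = (011)_2 = 3$; in particular, $x \oplus x = 0$ for every $x \in Z_{\ge 0}$.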
As chocolate bar games are impartial games without draws, only two outcome classes are possible.
\begin{defn}\label{NPpositions}
$(a)$ A position is called a $\mathcal{P}$-\textit{position}, if it is a winning position for the previous player (the player who just moved), as long as he/she plays correctly at every stage.\\
$(b)$ A position is called an $\mathcal{N}$-\textit{position}, if it is a winning position for the next player, as long as he/she plays correctly at every stage.
\end{defn}
\begin{defn}\label{defofmexgrundy2}
$(i)$ For any position $\mathbf{p}$ of game $\mathbf{G}$, there is a set of positions that can be reached by precisely one move in $\mathbf{G}$, which we denote as \textit{move}$(\mathbf{p})$. \\
$(ii)$ The \textit{minimum excluded value} ($\textit{mex}$) of a set $S$ of non-negative integers is the least non-negative integer that is not in S. \\
$(iii)$ Each position $\mathbf{p}$ of an impartial game has an associated Grundy number, and we denote this as $\mathcal{G}(\mathbf{p})$.\\
The Grundy number is recursively defined by $\mathcal{G}(\mathbf{p}) = \textit{mex}\{\mathcal{G}(\mathbf{h}): \mathbf{h} \in move(\mathbf{p})\}.$
\end{defn}
\begin{theorem}\label{theoremofsumg2}
For any position $\mathbf{p}$ of the game,
$\mathcal{G}(\mathbf{p}) =0$ if and only if $\mathbf{p}$ is a $\mathcal{P}$-position.
\end{theorem}
The original two-dimensional chocolate bar introduced by Robin \cite{robin} is the chocolate shown on the left-hand side in Figure \ref{two2dchoco}.
Because the horizontal and vertical grooves are independent, an $m \times n$ rectangular chocolate bar is equivalent to the game of Nim with heaps of $m-1$ and $n-1$ stones. Therefore, the $6 \times 4$ chocolate bar game shown on the left-hand side of Figure \ref{two2dchoco} is mathematically the same as Nim with heaps of $5$ and $3$ stones.
It is well known that the Grundy number of the Nim game with heaps of $m-1$ stones and $n-1$ stones is $(m-1) \oplus (n-1)$; therefore, the Grundy number of the $m \times n$ rectangular bar is $(m-1) \oplus (n-1)$.
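This equivalence is easy to check by machine; the following minimal Python sketch (an illustrative check, with names of our choosing) computes the Grundy numbers of two-heap Nim directly from Definition \ref{defofmexgrundy2} and confirms that they equal the nim-sum:
\begin{verbatim}
from functools import lru_cache

def mex(s):
    # minimum excluded value of a set of non-negative integers
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy_nim(m, n):
    # two-heap Nim: remove any positive number of stones from one heap
    options = {grundy_nim(u, n) for u in range(m)} | \
              {grundy_nim(m, v) for v in range(n)}
    return mex(options)

assert all(grundy_nim(m, n) == (m ^ n)
           for m in range(16) for n in range(16))
\end{verbatim}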
Robin \cite{robin} also presented a cubic chocolate bar, as shown on the left-hand side of Figure \ref{two3dchoco}.
It can be easily determined that this $5 \times 5 \times 5$ three-dimensional chocolate bar is mathematically the same as Nim with heaps of $4$, $4$, and $4$ stones, and the Grundy number of this cuboid bar is $4 \oplus 4 \oplus 4$.
It is then natural to ask the following question.
\vspace{0.5cm}
\noindent
\bf{Question 1. \ }\normalfont
\textit{What is the necessary and sufficient condition whereby a three-dimensional chocolate bar may have a Grundy number $(p-1) \oplus (q-1) \oplus (r-1)$, where $p, q$, and $r$ are the length, height, and width of the bar, respectively?}
\normalfont
\vspace{0.5cm}
Although the authors answered this question for two-dimensional chocolate bars in \cite{jgame} and for the three-dimensional case in \cite{ns3d}, the results of these studies are omitted here.
When the Grundy number of a chocolate bar with $p, q$, and $r$ as the length, height, and width, respectively, is
$(p-1) \oplus (q-1) \oplus (r-1)$, the position is a $\mathcal{P}$-position if and only if
$(p-1) \oplus (q-1) \oplus (r-1)=0$.
Therefore, it is natural to ask the following question.
\vspace{0.5cm}
\noindent
\bf{Question 2. \ }\normalfont
\textit{Under what condition may a three dimensional chocolate bar with $p, q$, and $r$ as the length, height, and width, respectively, have a $\mathcal{P}$-position if and only if $(p-1) \oplus (q-1) \oplus (r-1)=0$?}
\normalfont
\vspace{0.5cm}
In the remainder of this paper, we present a sufficient condition for which Question 2 may be answered.
Determining the necessary and sufficient condition for this question is a very difficult unsolved problem that the authors have considered.
We suppose that the difficulty of presenting the necessary and sufficient condition arises from the fact that there are many kinds of sufficient conditions. For more information, see Theorem \ref{theoremforoddk} in Section \ref{sub4mone} and Conjecture \ref{theoremmanabe} in Section \ref{others}.
We now define a three-dimensional chocolate bar.
\begin{defn}\label{definitionoffunctionf3d}
Suppose that $f(u,v)\in Z_{\geq0}$ for $u,v \in Z_{\geq0}$. $f$ is said to be monotonically increasing if $f(u,v) \leq f(x,z)$ for $x,z,u,v \in Z_{\geq0}$ with $u \leq x$ and $v \leq z$.
\end{defn}
\begin{defn}\label{defofbarwithfunc3d}
Let $f$ be the monotonically increasing function in Definition \ref{definitionoffunctionf3d}.\\
Let $x,y,z \in Z_{\geq0}$.
The three-dimensional chocolate bar comprises a set of $1 \times 1 \times 1$ boxes.
For $u,w \in Z_{\geq0}$ such that $u \leq x$ and $w \leq z$, the height of the column at position $(u,w)$ is $ \min (f(u,w),y) +1$.
There is a bitter box at position $(0,0)$.
We denote this chocolate bar as $CB(f,x,y,z)$. Note that $x+1, y+1$, and $z+1$ are the length, height, and width of the bar, respectively.
\end{defn}
\begin{exam}
Here, we let $f(x,z)$ $= \lfloor \frac{x+z}{3}\rfloor$, where $ \lfloor \ \rfloor$ is the floor function, and we present several examples of $CB(f,x,y,z)$.
\begin{pict}
\centering
\includegraphics[height=3cm]{coordinate3d.eps}
\label{coordinate3d}
\end{pict}
\begin{pict}
\begin{tabular}{cc}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[height=2.5cm]{f14310.eps}
$CB(f,14,3,10)$
\label{f14310}
\end{minipage}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[height=2.5cm]{f9310.eps}
$CB(f,9,3,10)$
\label{f9310}
\end{minipage}
\end{tabular}
\end{pict}
\begin{pict}
\begin{tabular}{cc}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[height=3cm]{f1367.eps}
$CB(f,13,6,7)$
\label{f1367}
\end{minipage}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[height=2.2cm]{f437.eps}
$CB(f,4,3,7)$
\label{f437}
\end{minipage}
\end{tabular}
\end{pict}
\end{exam}
Next, we define $move_f(\{x, y, z\})$ in Definition \ref{movefor3dimension}. $move_f(\{x, y, z\})$ is a set that contains all of the positions that can be reached from position $\{x, y, z\}$ in one step (directly).
\begin{defn}\label{movefor3dimension}
For $x,y,z \in Z_{\ge 0}$, we define
\begin{align}
& move_f(\{x,y,z\})=\{\{u,\min(f(u,z),y),z \}:u<x \} \cup \{\{x,v,z \}:v<y \} \nonumber \\
& \cup \{ \{x,\min(y, f(x,w) ),w \}:w<z \}, \text{ where \ } u,v,w \in Z_{\ge 0}.\nonumber
\end{align}
\end{defn}
\begin{rem}
Definition \ref{movefor3dimension} shows how to reduce the coordinates of the chocolate bar by cutting, and in Example \ref{chococute}, we provide concrete examples of reducing the coordinates.
\end{rem}
\section{When $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k =4m+3$}\label{sub4mone}
Let $k = 4m + 3$ for some $m \in Z_{\geq 0}$.
Let $x = \sum_{i=0}^n x_i 2^i$, $y = \sum_{i=0}^n y_i 2^i$ and $z = \sum_{i=0}^n z_i 2^i$ for some $n \in Z_{\ge 0}$ and $x_i,y_i,z_i \in \{0,1\}$.
Throughout this section, we assume that
\begin{equation}
f(x,z) = \lfloor \frac{x+z}{k}\rfloor.
\end{equation}
Before we prove several lemmas, we first consider the procedures provided in Example \ref{chococute}. Although this example is lengthy, the proofs of the lemmas are difficult to understand without first considering this example.
\begin{exam}\label{chococute}
Let $f(x,z)$ $= \lfloor \frac{x+z}{3}\rfloor$. \\
$(i)$ We begin with the chocolate bar shown in Figure \ref{f14310}. If the first coordinate $x = 14$ is reduced to $u=9$ by cutting the
chocolate bar $CB(f,14,3,10)$ shown in Figure \ref{f14310}, by Definition \ref{movefor3dimension} the second coordinate will be $\min(f(u,z),y)$
$= \min(f(9,10),3)$ $= \min(\lfloor \frac{19}{3}\rfloor,3)$ $= \min(6,3)$ = 3. Therefore,
we can reduce $x = 14$ to $u=9$ without affecting the second coordinate $3$, which is the height of the chocolate bar, and
we obtain the chocolate bar $CB(f,9,3,10)$ shown in Figure \ref{f9310} (i.e., $\{9,3,10\} \in move_f(\{14,3,10\})$).
\begin{table}
\begin{tabular}{cc}
\begin{minipage}{.5\textwidth}
\centering
\begin{tabular}{|c|c|c|c|} \hline
\text{ \ } & $x=14$ & $y=3$ & $z=10$ \\ \hline
$2^3=8$ & $x_3=1$ & $y_3=0$ & $z_3=1$ \\ \hline
$2^2=4$ & $x_2=1$ & $y_2=0$ & $z_2=0$ \\ \hline
$2^1=2$ & $x_1=1$ & $y_1=1$ & $z_1=1$ \\ \hline
$2^0=1$ & $x_0=0$ & $y_0=1$ & $z_0=0$ \\ \hline
\end{tabular}
\caption{$CB(f,14,3,10)$ }\label{3D1981}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\begin{tabular}{|c|c|c|c|} \hline
\text{ \ } & $u=9$ & $y=3$ & $z=10$ \\ \hline
$2^3=8$ & $u_3=1$ & $y_3=0$ & $z_3=1$ \\ \hline
$2^2=4$ & $u_2=0$ & $y_2=0$ & $z_2=0$ \\ \hline
$2^1=2$ & $u_1=0$ & $y_1=1$ & $z_1=1$ \\ \hline
$2^0=1$ & $u_0=1$ & $y_0= 1$ &$z_0=0$ \\ \hline
\end{tabular}
\caption{$CB(f,9,3,10)$ }\label{3D19}
\end{minipage}
\end{tabular}
\end{table}
\noindent
$(ii)$ We begin with the chocolate bar in Figure \ref{f1367}. If the first coordinate $x = 13$ is reduced to $u=4$ by cutting the chocolate bar $CB(f,13,6,7)$ in Figure \ref{f1367} by Definition \ref{movefor3dimension}, the second coordinate will be $\min(f(u,z),y)$
$= \min(f(4,7),6)$ $= \min(\lfloor \frac{11}{3}\rfloor,6) =\min(3,6) =3$. Therefore, the second coordinate, $6$, which is the height of the chocolate bar, will be reduced to $3$. Then, we obtain the chocolate bar shown in Figure \ref{f437} (i.e., $\{4,3,7\} \in move_f(\{13,6,7\})$).
\begin{table}
\begin{tabular}{cc}
\begin{minipage}{.5\textwidth}
\centering
\begin{tabular}{|c|c|c|c|} \hline
\text{ \ } & $x=13$ & $y=6$ & $z=7$ \\ \hline
$2^3=8$ & $x_3=1$ & $y_3=0$ & $z_3=0$ \\ \hline
$2^2=4$ & $x_2=1$ & $y_2=1$ & $z_2=1$ \\ \hline
$2^1=2$ & $x_1=0$ & $y_1=1$ & $z_1=1$ \\ \hline
$2^0=1$ & $x_0=1$ & $y_0=0$ & $z_0=1$ \\ \hline
\end{tabular}
\caption{$CB(f,13,6,7)$ }\label{3D1982}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\begin{tabular}{|c|c|c|c|} \hline
\text{ \ } & $u=4$ & $v=3$ & $z=7$ \\ \hline
$2^3=8$ & $u_3=0$ & $v_3=0$ & $z_3=0$ \\ \hline
$2^2=4$ & $u_2=1$ & $v_2=0$ & $z_2=1$ \\ \hline
$2^1=2$ & $u_1=0$ & $v_1=1$ & $z_1=1$ \\ \hline
$2^0=1$ & $u_0=0$ & $v_0=1$ & $z_0=1$ \\ \hline
\end{tabular}
\caption{$CB(f,4,3,7)$ }\label{3D19b}
\end{minipage}
\end{tabular}
\end{table}
\noindent
$(iii)$ The procedures presented in $(i)$ and $(ii)$ are good examples of moving to a position whose nim-sum is 0 from a position whose nim-sum is not 0.
In $(i)$, $14 \oplus 3 \oplus 10 = 7$, and suppose that the player wants to move to a position whose nim-sum is 0. First, let $u_3= x_3 = 1$. Next,
reduce $x_2 =1$ to $u_2 = 0$. Note that
\begin{equation}\label{xbigeru0}
x = \sum_{i=0}^3 x_i 2^i = 2^3 + 2^2 + 2 > 2^3 + 0 \times 2^2 + u_1 \times 2 + u_0 = \sum_{i=0}^3 u_i 2^i =u
\end{equation}
regardless of the values of $u_1, u_0$. Then, reduce $x_1$ to $u_1=0$ and increase $x_0 = 0$ to $u_0 =1$. Note that by considering (\ref{xbigeru0}), one can choose any value for $u_1, u_0$. Then, we obtain the position $\{9,3,10\}$ such that $9 \oplus 3 \oplus 10 = 0$.
In $(ii)$, $13 \oplus 6 \oplus 7 = 12$, and suppose that it is desired to move to a position whose nim-sum is 0. First, we have to reduce $x_3=1$ to $u_3=0$.
Because $\{x_2,y_2,z_2\} = \{1,1,1\}$ and $1 \oplus 1 \oplus 1 \ne 0 \ (\mod 2)$, we may let
$\{u_2,y_2,z_2\} = \{0,1,1\}$ or $\{u_2,v_2,z_2\} = \{1,0,1\}$ by reducing $y$ to $v$. Note that once we reduce $x$, we cannot reduce $z$.
If
\begin{equation}\label{firstcase}
\{u_2,y_2,z_2\} = \{0,1,1\},
\end{equation}
we have
\begin{align}
& f( u, z ) \nonumber \\
& = f( \sum_{i=0}^3 u_i 2^i, \sum_{i=0}^3 z_i 2^i ) \nonumber \\
& = f(0 \times 2^3 + 0 \times 2^2 + u_1 2^1 + u_0 2^0, 7) \nonumber \\
& = \lfloor \frac{7+u_1 2^1 + u_0 2^0}{3}\rfloor \leq \lfloor \frac{10}{3}\rfloor=3, \nonumber
\end{align}
regardless of the values of $u_1, u_0$. We then have $f(u,z) < 4 = y_2 2^2 \leq y$.
Therefore, by Definition \ref{movefor3dimension} (\ref{firstcase}) leads to a contradiction.
We should then let
\begin{equation}\label{secondcase}
\{u_2,v_2,z_2\} = \{1,0,1\},
\end{equation}
by simultaneously reducing $x$ and $y$.
Next, we let $\{u_1,v_1,z_1\} = \{0,1,1\}$ or $\{u_1,v_1,z_1\} = \{1,0,1\}$.
If $\{u_1,v_1,z_1\} = \{1,0,1\}$, by (\ref{secondcase})
\begin{align}
& f(u, z) \nonumber \\
& = f( \sum_{i=0}^3 u_i 2^i, \sum_{i=0}^3 z_i 2^i ) \nonumber \\
& = f( 0 \times 2^3 + 2^2 + 2^1 + u_0 2^0, 7) \nonumber \\
& = \lfloor \frac{13 + u_0 2^0}{3}\rfloor \geq \lfloor \frac{13}{3}\rfloor \nonumber \\
& =4 > 1 \geq \sum_{i=0}^3 v_i 2^i = 0 \times 2^3 + 0 \times 2^2 + 0 \times 2 + v_0 =v, \nonumber
\end{align}
and we obtain
\begin{equation}\label{ysmallth}
f(u,z) > v.
\end{equation}
When we reduce $x$ to $u$ and $y$ to $v$, by definition
\ref{movefor3dimension} we have
\begin{equation}
v = \min(f(u,z),y), \nonumber
\end{equation}
and this contradicts (\ref{ysmallth}).
Therefore, let $\{u_1,v_1,z_1\} = \{0,1,1\}$. Using similar reasoning, we let $\{u_0,v_0,z_0\} = \{0,1,1\}$.
We then obtain the position $\{4,3,7\}$ such that $4 \oplus 3 \oplus 7 = 0$ and
$v = 3 = \lfloor \frac{4+7}{3}\rfloor = f(u,z).$
\end{exam}
We define
\begin{equation}
S_t = \sum_{i=n-t}^n (x_i + z_i - ky_i) 2^i, \label{defofs}
\end{equation}
for $t = 0,1, \cdots, n$.
\begin{lemma}\label{lemmaforf}
We have the following relationships between $f(x,z)$ and $S_n$.\\
$(a)$
\begin{equation}
y = f(x,z) \nonumber
\end{equation}
if and only if $0 \leq S_n < k$.\\
$(b)$
\begin{equation}
y > f(x,z) \nonumber
\end{equation}
if and only if $S_n < 0$.\\
$(c)$
\begin{equation}
y < f(x,z) \nonumber
\end{equation}
if and only if $S_n \geq k$.
\end{lemma}
\begin{proof}
First, note that $S_n = \sum_{i=0}^n (x_i + z_i - ky_i) 2^i = x + z - ky$.\\
$(a)$
\begin{equation}
y = f(x,z) = \lfloor \frac{x+z}{k}\rfloor \nonumber
\end{equation}
if and only if $y \leq \frac{x+z}{k} < y+1$, which occurs if and only if $0 \leq S_{n}= x+z-ky < k$.\\
$(b)$
\begin{equation}
y > f(x,z) = \lfloor \frac{x+z}{k}\rfloor \nonumber
\end{equation}
if and only if $\frac{x+z}{k} < y$, which occurs if and only if $ S_{n}= x+z-ky < 0$.\\
We then obtain $(c)$ via $(a)$ and $(b)$.
\end{proof}
\begin{lemma}\label{lemma01}
Let $t \in Z_{\geq 0}$.
Suppose that for $i = n,n-1, \cdots, n-t$
\begin{equation}
x_i \oplus y_i \oplus z_i=0. \label{oplus01}
\end{equation}
There then exists an even number $a$ such that
\begin{equation}
S_t = a 2^{n-t}.\label{evenconditiont}
\end{equation}
\end{lemma}
\begin{proof}
Because $k$ is odd, by (\ref{oplus01}) $x_{i} + z_{i}-k y_{i}$ is even for $i = n, n-1, \cdots, n-t$, and therefore
we have (\ref{evenconditiont}).
\end{proof}
\begin{lemma}\label{lemma1}
Let $t \in Z_{\geq 0}$.
Suppose that for $i = n,n-1, \cdots, n-t$
\begin{equation}
x_i \oplus y_i \oplus z_i=0 \label{oplus1}
\end{equation}
and
\begin{equation}
S_t < 0.\label{negativeconditiont}
\end{equation}
Then, for any natural number $j$ such that $j > t$,
\begin{equation}
S_j < 0.\nonumber
\end{equation}
\end{lemma}
\begin{proof}
By Lemma \ref{lemma01}, (\ref{oplus1}),
and (\ref{negativeconditiont}),
\begin{equation}
S_t = a2^{n-t} \label{evencondition}
\end{equation}
for some even number $a$ such that $a \leq -2$.
Then, by (\ref{evencondition}), for any natural number $j$ such that $j > t$ the following holds:
\begin{align}
S_j = & S_t + \sum_{i=n-j}^{n-t-1} (x_i + z_i - ky_i) 2^i \nonumber \\
\leq & S_t + 2 \times \sum_{i=0}^{n-t-1} 2^i \nonumber \\
\leq & (-2)2^{n-t} + 2 \times (2^{n-t} -1)= -2 < 0. \nonumber
\end{align}
\end{proof}
\begin{lemma}\label{lemma1b}
Let $t \in Z_{\geq 0}$.
Suppose that for $i = 0,1, \cdots, n-t$
\begin{equation}
x_i \oplus y_i \oplus z_i=0 \label{oplus1b}
\end{equation}
and
\begin{equation}
y \leq f(x,z).\label{ysmallerthanxz}
\end{equation}
Then,
\begin{equation}
S_t \geq 0. \label{bigthan0}
\end{equation}
\end{lemma}
\begin{proof}
If
\begin{equation}
S_t < 0, \label{smalthan0}
\end{equation}
by (\ref{oplus1b}) and Lemma \ref{lemma1}, we have
\begin{equation}
S_n< 0. \nonumber
\end{equation}
Then by $(b)$ of Lemma \ref{lemmaforf}, we have
\begin{equation}
y > f(x,z), \nonumber
\end{equation}
and this contradicts (\ref{ysmallerthanxz}). Therefore, (\ref{smalthan0}) is not true, and we have
(\ref{bigthan0}).
\end{proof}
\begin{lemma}\label{lemma2}
Let $t \in Z_{\geq 0}$.
If
\begin{equation}
S_t \geq k2^{n-t},\label{greaterthank}
\end{equation}
then, for any natural number $j$ such that $j > t$,
\begin{equation}
S_j \geq k2^{n-j}. \label{greaterthanj}
\end{equation}
\end{lemma}
\begin{proof}
$S_j$ will be smallest when $\{x_i,y_i,z_i\}$ = $\{0,1,0\}$ for $i = n-t-1,n-t-2, \cdots, n-j.$
Therefore, it is sufficient to prove (\ref{greaterthanj}) for this case.
By (\ref{greaterthank}), for any natural number $j$ such that $j > t$,
\begin{align}
S_j = & S_t + \sum_{i=n-j}^{n-t-1} (x_i + z_i - ky_i) 2^i \nonumber \\
\geq & S_t - k(2^{n-t-1}+2^{n-t-2} + \cdots +2^{n-j}) \nonumber \\
\geq & k2^{n-t} - k(2^{n-t}-2^{n-j}) = k2^{n-j}. \nonumber
\end{align}
\end{proof}
\begin{lemma}\label{lemma3}
Let $t \in Z_{\geq 0}$.
Suppose that
\begin{equation}
0 \leq S_t \leq 2m \times 2^{n-t}.\nonumber
\end{equation}
Then, we have the following cases $(a)$ and $(b)$.\\
$(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,1,0\}$ or $\{0,1,1\}$,
then
\begin{equation}
S_{t+1}< 0.\nonumber
\end{equation}
$(b)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,0,1\}$ or $\{0,0,0\}$,
then
\begin{equation}
0 \leq S_{t+1} < k \times 2^{n-t-1}.\nonumber
\end{equation}
\end{lemma}
\begin{proof}
$(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,1,0\}$ or $\{0,1,1\}$,
\begin{align}
S_{t+1} = & S_t +2^{n-t-1}-k \times 2^{n-t-1} \nonumber \\
\leq & 2m \times 2^{n-t} + 2^{n-t-1}-(4m+3)2^{n-t-1} \nonumber \\
= & -2 \times 2^{n-t-1} <0.\nonumber
\end{align}
$(b)$
If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{0,0,0\}$,
\begin{align}
0 & \leq S_{t+1} \nonumber \\
& = S_t \leq 4m \times 2^{n-t-1} \nonumber \\
& < k \times 2^{n-t-1}. \nonumber
\end{align}
If
\begin{equation}
\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,0,1\},
\end{equation}
\begin{align}
0 & \leq S_{t+1} \nonumber \\
& = S_t + 2 \times 2^{n-t-1} \nonumber \\
& \leq (4m+2)2^{n-t-1} \nonumber \\
& < k \times 2^{n-t-1}. \nonumber
\end{align}
\end{proof}
\begin{lemma}\label{lemma4}
Let $t \in Z_{\geq 0}$.
Suppose that
\begin{equation}
(2m+2) \times 2^{n-t} \leq S_t < k \times 2^{n-t} \label{greaterthank22b}
\end{equation}
and
\begin{equation}
x_i \oplus y_i \oplus z_i=0 \label{oplus001}
\end{equation}
for $i = n,n-1, \cdots, n-t$.
Then, we have the following cases $(a)$ and $(b)$.\\
$(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,1,0\}$ or $\{0,1,1\}$,
then,
\begin{equation}
0 \leq S_{t+1} < k \times 2^{n-t-1}.\nonumber
\end{equation}
$(b)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\}= \{1,0,1\}$ or $\{0,0,0\}$,
then,
\begin{equation}
S_{t+1} \geq k \times 2^{n-t-1}.\nonumber
\end{equation}
\end{lemma}
\begin{proof}
\noindent
$(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\}= \{1,1,0\}$ or $\{0,1,1\}$, then
\begin{align}
S_{t+1} & = S_t +2^{n-t-1}-k2^{n-t-1} \nonumber \\
& = S_t - (4m+2)2^{n-t-1}. \nonumber
\end{align}
By (\ref{greaterthank22b})
\begin{align}
& (2m+2)\times 2^{n-t} - (4m+2)2^{n-t-1} \nonumber \\
& \leq S_{t+1} = S_t -(4m+2)2^{n-t-1} \nonumber \\
& < (4m+3)2^{n-t} -(4m+2)2^{n-t-1}. \nonumber
\end{align}
Therefore,
\begin{equation}
2 \times 2^{n-t-1} \leq S_{t+1} < (4m+4)2^{n-t-1}. \label{eqa1}
\end{equation}
By (\ref{oplus001}) and Lemma \ref{lemma01}, $S_{t+1} = a2^{n-t-1}$ for some even integer $a$, and therefore by (\ref{eqa1}) we have
\begin{equation}
0< S_{t+1} \leq (4m+2)2^{n-t-1} < k\times 2^{n-t-1}. \nonumber
\end{equation}
$(b)$
If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,0,1\}$ or $\{0,0,0\}$, then by (\ref{greaterthank22b})
\begin{equation}
k \times 2^{n-t-1} < (4m+4)2^{n-t-1} \leq S_t \leq S_{t+1}. \nonumber
\end{equation}
\end{proof}
\begin{lemma}\label{fromNtoPforh}
We assume that
\begin{equation}
x \oplus y \oplus z \neq 0 \label{nimsumno0}
\end{equation}
and
\begin{equation}\label{inequalityyz}
y \leq f(x,z).
\end{equation}
Then, at least one of the following statements is true:\\
$(1)$ $u\oplus y \oplus z= 0$ for some $u\in Z_{\geq 0}$ such that $u<x$;\\
$(2)$ $u \oplus v \oplus z= 0$ for some $u, v \in Z_{\geq 0}$ such that $u < x,v < y$ and $v=f(u,z)$;\\
$(3)$ $x\oplus v\oplus z= 0$ for some $v\in Z_{\geq 0}$ such that $v<y$;\\
$(4)$ $x\oplus y\oplus w= 0$ for some $w\in Z_{\geq 0}$ such that $w<z$ and $y \leq f(x,w)$;\\
$(5)$ $x\oplus v\oplus w= 0$ for some $v,w \in Z_{\geq 0}$ such that $v<y,w <z$ and $v=f(x,w)$.\\
\end{lemma}
\begin{proof}
Let $x= \sum\limits_{i = 0}^n {{x_i}} {2^i}$ and $y= \sum\limits_{i = 0}^n {{y_i}} {2^i}$, and $z= \sum\limits_{i = 0}^n {{z_i}} {2^i}$.
If $n=0$, we have $x,z \leq 1$, and $y \leq f(x,z) = \lfloor \frac{x+z}{k}\rfloor = 0$. Then, by (\ref{nimsumno0}), we have
$\{x,y,z\} = \{1,0,0\}$ or $\{0,0,1\}$. In this case we obtain $(1)$ for $\{u,y,z\} = \{0,0,0\}$ or $(4)$ for $\{x,y,w\} = \{0,0,0\}$ by reducing $x=1$ to $u=0$ or reducing $z=1$ to $w =0$.
Next, we assume that $n \geq 1$ and that there exists a non-negative integer $t$ such that
\begin{equation}\label{zerotonminusut1}
x_i \oplus y_i \oplus z_i = 0
\end{equation}
for $i = n,n-1,...,n-t+1$ and
\begin{equation}\label{xyznminush}
x_{n-t} \oplus y_{n-t} \oplus z_{n-t} \neq 0.
\end{equation}
Let $S_k = \sum_{i=n-k}^n (x_i + z_i - ky_i) 2^i$ for $k=0,1,\cdots ,t-1$, and we may then
define $S_k$ for $k=t,t+1,\cdots ,n$.
By (\ref{zerotonminusut1}), (\ref{inequalityyz}), and Lemma \ref{lemma1b},
we have
\begin{equation}\label{xyznminush22}
S_{t-1} \geq 0.
\end{equation}
We then have three cases.\\
\underline{Case $(1)$}
Suppose that $\{x_{n-t},y_{n-t},z_{n-t}\}=\{1,0,0\}$. We reduce $x$ to $u$ and for $i = 0,1, \cdots, t-1$ let
\begin{equation}
u_{n-i} = x_{n-i}, \nonumber
\end{equation}
and we define $u_{n-i}$ for $i = t,t+1, \cdots, n$ using an inductive method with the following steps $[I]$ and $[II]$.\\
Step $[I]$
Let $u_{n-t}=0$. Then,
\begin{equation}
u= \sum\limits_{i = 0}^n {{u_i}} {2^i}< \sum\limits_{i = 0}^n {{x_i}} {2^i}=x.\nonumber
\end{equation}
Because $u_{n-t}=0$ and $y_{n-t}=z_{n-t}=0$, by (\ref{xyznminush22}) we have
\begin{equation}\label{xyznminush22b}
S_{t} = S_{t-1}+(u_{n-t}+y_{n-t}-kz_{n-t})2^{n-t} = S_{t-1} \geq 0.\nonumber
\end{equation}
We then consider two subcases $(1.1)$ and $(1.2)$ according to the value of $S_{t}$.\\
\underline{Subcase $(1.1)$}
We suppose that
\begin{equation}
0 \leq S_{t} \leq 2m \times 2^{n-t}.\label{from0to2t}
\end{equation}
We then have two subsubcases for two possible values of $z_{n-t-1}$. \\
\underline{Subsubcase $(1.1.1)$}
Suppose that $z_{n-t-1} = 0$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\
\underline{Subsubsubcase $(1.1.1.1)$}
If $y_{n-t-1} = 0$, let
\begin{equation}
\{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{0,0,0\} \nonumber
\end{equation}
and
\begin{equation}
S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1}. \nonumber
\end{equation}
Then, by Lemma \ref{lemma3} and (\ref{from0to2t})
\begin{equation}\label{condition01}
0 \leq S_{t+1} < k2^{n-t-1}.
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition01}) while knowing that $y$ has not been reduced.\\
\underline{Subsubsubcase $(1.1.1.2)$}
If $y_{n-t-1} = 1$, let $v_{n-t-1}=0<y_{n-t-1}$.
Then, we have
\begin{equation}
\sum\limits_{i = 0}^n {{v_i}} {2^i} < \sum\limits_{i = 0}^n {{y_i}} {2^i} \nonumber
\end{equation}
for any values of $v_{i}$ for $i = 0,1, \cdots, n-t-1$. In this subsubsubcase,
we reduce $y$ to $v$ by reducing $x$ to $u$.
For a concrete example of reducing $y$ to $v$ by reducing $x$ to $u$, see $(ii)$ and $(iii)$ in Example \ref{chococute}.
Let
\begin{equation}
\{u_{n-t-1}, v_{n-t-1},z_{n-t-1}\}=\{0,0,0\} \nonumber
\end{equation}
and
\begin{equation}
S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-kv_{n-t-1})2^{n-t-1},\nonumber
\end{equation}
then by Lemma \ref{lemma3} and (\ref{from0to2t})
\begin{equation}
0 \leq S_{t+1} < k2^{n-t-1}.\label{condition02}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition02}) while knowing that we have reduced $y$ to $v$.\\
\underline{Subsubcase $(1.1.2)$}
Suppose that $z_{n-t-1} = 1$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\
\underline{Subsubsubcase $(1.1.2.1)$}
If $y_{n-t-1} = 0$, let
\begin{equation}
\{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{1,0,1\} \nonumber
\end{equation}
and
\begin{equation}
S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1}, \nonumber
\end{equation}
then, by Lemma \ref{lemma3} and (\ref{from0to2t})
\begin{equation}
0 \leq S_{t+1} < k 2^{n-t-1}.\label{condition03}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition03}) while knowing that $y$ has not been reduced.\\
\underline{Subsubsubcase $(1.1.2.2)$}
If $y_{n-t-1} = 1$, let $v_{n-t-1}=0<y_{n-t-1}$.
Then, we have
\begin{equation}
v=\sum\limits_{i = 0}^n {{v_i}} {2^i} < \sum\limits_{i = 0}^n {{y_i}} {2^i}=y \nonumber
\end{equation}
for any values of $v_{i}$ for $i = 0,1, \cdots, n-t-1$. In this subsubsubcase,
we reduce $y$ to $v$ by reducing $x$ to $u$. For a concrete example of reducing $y$ to $v$ by reducing $x$ to $u$, see $(ii)$ and $(iii)$ in Example \ref{chococute}.
Then, let
\begin{equation}
\{u_{n-t-1}, v_{n-t-1},z_{n-t-1}\}=\{1,0,1\} \nonumber
\end{equation}
and
\begin{equation}
S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-kv_{n-t-1})2^{n-t-1}, \nonumber
\end{equation}
then, by Lemma \ref{lemma3} and (\ref{from0to2t})
\begin{equation}
0 \leq S_{t+1} < k 2^{n-t-1}. \label{condition04}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition04}) while knowing that we have reduced $y$ to $v$.\\
\underline{Subcase $(1.2)$}
We suppose that
\begin{equation}
(2m+2)2^{n-t} \leq S_{t} < k2^{n-t}.\label{biggerth2m2}
\end{equation}
We have two subsubcases for two possible values of $z_{n-t-1}$. \\
\underline{Subsubcase $(1.2.1)$}
Suppose that $z_{n-t-1} = 0$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\
\underline{Subsubsubcase $(1.2.1.1)$}
If $y_{n-t-1} = 1$, let
\begin{equation}
\{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{1,1,0\} \nonumber
\end{equation}
and
\begin{equation}
S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1},\nonumber
\end{equation}
then, by Lemma \ref{lemma4} and (\ref{biggerth2m2})
\begin{equation}
0 \leq S_{t+1} < k2^{n-t-1}.\label{condition05}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition05}) while knowing that $y$ has not been reduced.\\
\underline{Subsubsubcase $(1.2.1.2)$}
If $y_{n-t-1} = 0$, let
\begin{equation}
\{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{0,0,0\}.\nonumber
\end{equation}
By Lemma \ref{lemma4} and (\ref{biggerth2m2})
\begin{equation}
S_{t+1} \geq k2^{n-t-1}.\label{condition06}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition06}) while knowing that $y$ has not been reduced.\\
\underline{Subsubcase $(1.2.2)$}
Suppose that $z_{n-t-1} = 1$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\
\underline{Subsubsubcase $(1.2.2.1)$}
If $y_{n-t-1} = 1$, let
\begin{equation}
\{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{0,1,1\}.\nonumber
\end{equation}
and
\begin{equation}
S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1},\nonumber
\end{equation}
then, by Lemma \ref{lemma4} and (\ref{biggerth2m2})
\begin{equation}
0 \leq S_{t+1} < k2^{n-t-1}.\label{condition07}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition07}) and the fact that $y$ has not been reduced.\\
\underline{Subsubsubcase $(1.2.2.2)$}
If $y_{n-t-1} = 0$, let
\begin{equation}
\{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{1,0,1\}.\nonumber
\end{equation}
By Lemma \ref{lemma4} and (\ref{biggerth2m2})
\begin{equation}
S_{t+1} \geq k 2^{n-t-1}.\label{condition08}
\end{equation}
Then, we begin Step $[II]$ with (\ref{condition08}) while knowing that $y$ has not been reduced.\\
\underline{Case $(2)$}
We suppose that $\{x_{n-t},y_{n-t},z_{n-t}\}=\{0,0,1\}$.
Then, we can use the same method used for Case $(1)$.\\
\underline{Case $(3)$}
We suppose that $y_{n-t}=1$.
Let $v_{n-t}=0 < y_{n-t}$ and $v_{i}=x_{i}+z_{i} \ (\bmod\ 2)$ for $i=n-t-1, \cdots, 0$. Then, we have $x\oplus v\oplus z= 0$ and $v < y \leq f(x,z)$, and we have $(3)$ of this lemma. In this case, we do not need Step $[II]$.\\
Step $[II]$. We have two cases.\\
\underline{Case $(1)$} This is a sequel to Case $(1)$ of Step $[I]$.
Here, the procedure consists of three subcases.\\
\underline{Subcase $(1.1)$} Suppose that $S_{t+1} \geq k 2^{n-t-1}$.
In this case, $y$ is not reduced to $v$ in the last procedure, i.e., Step $[I]$.
By Lemma \ref{lemma2}, we let $u_{i}=y_i + z_i \ (\bmod\ 2)$ for $i=n-t-2, \cdots, 0$ without affecting the values of $y_{i}$ for $i=n-t-2, \cdots, 0$. Then, we have $(1)$ of this lemma.\\
\underline{Subcase $(1.2)$} Suppose that $0 \leq S_{t+1} < k 2^{n-t-1}$ and
$y$ was reduced to $v$ in Step $[I]$.
Then, we choose the values of $u_{n-i}, v_{n-i}$ for $i = t+2, t+3, \cdots, n$ such that
$0 \leq S_{i} < k 2^{n-i}$ by the following $(a)$ or $(b)$.\\
$(a)$ For $i \geq t+1$, if $0 \leq S_{i} \leq 2m \times 2^{n-i}$, then we let $\{u_{n-i-1},v_{n-i-1},z_{n-i-1} \}$
$= \{0,0,0\}$ or $\{1,0,1\}$ when $z_{n-i-1}=0$ or $z_{n-i-1}=1$, respectively.
Then, by Lemma \ref{lemma3}, we have $0 \leq S_{i+1} < k 2^{n-i-1}$.\\
$(b)$ For $i \geq t+1$, if $ S_{i} \geq (2m+2) \times 2^{n-i}$, then we let $\{u_{n-i-1},v_{n-i-1},z_{n-i-1} \}$
$= \{1,1,0\}$ or $\{0,1,1\}$ when $z_{n-i-1}=0$ or $z_{n-i-1}=1$, respectively.
Then, by Lemma \ref{lemma4}, we have $0 \leq S_{i+1} < k 2^{n-i-1}$.
Therefore, for $i = t+2, \cdots ,$ we have $0 \leq S_{i} < k 2^{n-i}$, and finally
we have $0 \leq S_{n} < k 2^{n-n}=k$. We then have
$v = f(u,z)$ and $u \oplus v \oplus z = 0$; therefore, we have $(5)$ of this lemma.\\
\underline{Subcase $(1.3)$} Suppose that $0 \leq S_{t+1} < k 2^{n-t-1}$ and
$y$ was not reduced to $v$ during the last procedure. In this case, we use the same method as in step $[I]$. \\
\underline{Case $(2)$} This is a sequel to Case $(2)$ of Step $[I]$.
Then, we can use the same method used for Case $(1)$ of Step $[I]$.
\end{proof}
\begin{lemma}\label{fromPtoNforh}
We assume that
$x \oplus y \oplus z = 0$ and
\begin{equation}\label{inequalityyz2}
y \leq f(x,z).
\end{equation}
Then, the following hold:\\
$(1)$ $u\oplus y \oplus z \neq 0$ for any $u\in Z_{\geq 0}$ such that $u<x$;\\
$(2)$ $u \oplus v \oplus z \neq 0$ for any $u, v \in Z_{\geq 0}$ such that $u < x,v < y$, and $v=f(u,z)$;\\
$(3)$ $x\oplus v\oplus z \neq 0$ for any $v\in Z_{\geq 0}$ such that $v<y$;\\
$(4)$ $x\oplus y\oplus w \neq 0$ for any $w\in Z_{\geq 0}$ such that $w<z$ and $y \leq f(x,w)$;\\
$(5)$ $x\oplus v\oplus w \neq 0$ for any $v,w \in Z_{\geq 0}$ such that $v<y,w <z$, and $v=f(x,w)$.
\end{lemma}
\begin{proof}
If $x \oplus y \oplus z = 0$, a positive value of the nim-sum is obtained by changing the value of one of $x,y,z$.
Therefore, we have
$(1)$, $(3)$, and $(4)$.
Next, we prove $(2)$. The only way to have
$u \oplus v \oplus z = 0$ for some $u, v \in Z_{\geq 0}$ such that $u < x,v < y$ and
\begin{equation}\label{equalityfor}
v=f(u,z)
\end{equation}
is to reduce $\{x_{n-t},y_{n-t},z_{n-t}\} = \{1,1,0\}$ to $\{u_{n-t},v_{n-t},z_{n-t}\} = \{0,0,0\}$. We consider two cases.\\
\underline{Case $(1)$} Suppose that $0 \leq S_{t-1} \leq 2m \times 2^{n-t+1}$. Then, for $\{x_{n-t},y_{n-t},z_{n-t}\} = \{1,1,0\}$, by Lemma \ref{lemma3} we have $S_{t}< 0$. Then, by Lemmas \ref{lemma1} and \ref{lemmaforf}, we have $y > f(x,z)$. This contradicts (\ref{inequalityyz2}).\\
\underline{Case $(2)$} Suppose that $ S_{t-1} \geq (2m+2) \times 2^{n-t+1}$. For $\{u_{n-t},v_{n-t},z_{n-t}\} = \{0,0,0\}$, by Lemma \ref{lemma4},
$S_{t} \geq k \times 2^{n-t}$; therefore, by Lemma \ref{lemma2}, $S_n \geq k$. Using Lemma \ref{lemmaforf}, we then have $v < f(u,z)$.
This contradicts (\ref{equalityfor}).
Similarly we can prove $(5)$.
\end{proof}
\begin{theorem}\label{theoremforoddk}
Let $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k = 4m+3$. Then, the
chocolate bar $CB(f,x,y,z)$ is a $\mathcal{P}$-position if and only if
\begin{equation}
x \oplus y \oplus z=0. \label{nscondtionfornim0}
\end{equation}
\end{theorem}
\begin{proof}
Let $A_k=\{\{x,y,z\}:x\oplus y \oplus z = 0\}$ and $B_k =\{\{x,y,z\}: x\oplus y \oplus z \neq 0\}$.
If we begin the game with a position $\{x,y,z\}\in A_{k}$, then by Lemma \ref{fromPtoNforh}, any option leads to a position $\{p,q,r\} \in B_k$.
From this position $\{p,q,r\}$, by Lemma \ref{fromNtoPforh}, our opponent can choose an appropriate option that leads to a position in $A_k$. Note that any option reduces some of the numbers in the coordinates. In this way, our opponent can always reach a position in $A_k$, and finally wins by reaching $\{0,0,0\}\in A_{k}$. Therefore, $A_k$ is the set of $\mathcal{P}$-positions.
If we begin the game with a position $\{x,y,z\}\in B_{k}$, then by Lemma \ref{fromNtoPforh}, we can choose an appropriate option that leads to a position $\{p,q,r\}$ in $A_k$. From $\{p,q,r\}$, any option chosen by our opponent leads to a position in $B_k$. In this way, we win the game by reaching $\{0, 0, 0\}$. Therefore, $B_k$ is the set of $\mathcal{N}$-positions.
\end{proof}
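Theorem \ref{theoremforoddk} can also be verified by brute force for small chocolate bars, directly from Definition \ref{movefor3dimension}. The following minimal Python sketch (an illustrative check, not the authors' computation) classifies positions recursively; replacing \texttt{K} and the asserted condition lets one probe the conjectures of Section \ref{others} in the same way:
\begin{verbatim}
from functools import lru_cache

K = 7  # k = 4m+3 with m = 1; f(x,z) = floor((x+z)/k)

def f(x, z):
    return (x + z) // K

@lru_cache(maxsize=None)
def is_P(x, y, z):
    # {0,0,0} (the single bitter box) is a P-position; a position is a
    # P-position iff every move from it leads to an N-position
    for u in range(x):                    # reduce the first coordinate
        if not is_P(u, min(f(u, z), y), z):
            return False
    for v in range(y):                    # reduce the second coordinate
        if not is_P(x, v, z):
            return False
    for w in range(z):                    # reduce the third coordinate
        if not is_P(x, min(y, f(x, w)), w):
            return False
    return True

# P-position iff x XOR y XOR z = 0, over positions with y <= f(x,z)
assert all(is_P(x, y, z) == (x ^ y ^ z == 0)
           for x in range(8) for z in range(8)
           for y in range(f(x, z) + 1))
\end{verbatim}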
\section{Other Cases}\label{others}
The result shown in Section \ref{sub4mone} depends on the assumption that $k$ is odd, but it seems that a similar result can be proved for an even number $k$ with a restriction on the size of $x,z$.
The authors discovered the following conjecture via calculations using the computer algebra system Mathematica, but have not managed to prove it.
\begin{conjecture}\label{theoremmanabe}
Let $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k = 2^{a+2}m+2^{a+1}$ and $x,z \leq (2^{2a+2}-2^{a+1})m+2^{2a+1}-1$, where $a,m \in Z_{\ge 0}$. Then, the chocolate bar $CB(f,x,y,z)$ is a $\mathcal{P}$-position if and only if
\begin{equation}
x \oplus y \oplus z=0. \nonumber
\end{equation}
\end{conjecture}
\begin{rem}
If we compare Theorem \ref{theoremforoddk} and Conjecture \ref{theoremmanabe}, it seems very difficult to obtain the necessary and sufficient condition for Question 2.
\end{rem}
The authors also have the following conjecture, which was likewise suggested by calculations using the computer algebra system Mathematica.
\begin{conjecture}
Let $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k = 4m + 1$. Then, the chocolate bar $CB(f,x,y,z)$ is a $\mathcal{P}$-position if and only if
\begin{equation}
(x+1) \oplus y \oplus (z+1)=0. \nonumber
\end{equation}
\end{conjecture}
\section*{Acknowledgements}
We are indebted to Shouei Takahasi and Taishi Aono; although they are not among the authors, their contributions were significant. We would like to thank Editage (www.editage.com) for English language editing. This work was
supported by a Grant-in-Aid for Scientific Research of Japan.
\section{INTRODUCTION}
Investigation of the flow behavior of the accreting matter in the vicinity of a black hole is important
since the spectrum and the intensity of the emitted radiation depend on the flow structure. The event horizon
presents a unique inner boundary condition, at which the in-falling matter crosses the horizon with the speed of light ($c$).
Therefore, black hole accretion has to be transonic, which assures the existence of at least one sonic point, or critical point, for black hole accretion. General relativity also ensures that
matter must possess sub-Keplerian angular momentum close to the horizon. Within the marginally stable circular orbit ($r_{ms}$), the angular momentum at $r{\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}}r_{ms}$ is definitely sub-Keplerian (with $l {\sim} l_{ms}=l|_{r=r_{ms}}$), but at larger radii the angular momentum should generally be larger.
Therefore, a general accretion disk should have viscosity to remove the angular momentum outwards.
The first serious model of a viscous accretion disk was presented by \citet{ss73}, in which the angular momentum distribution was Keplerian and the accretion disk was geometrically thin and optically thick. In the Shakura-Sunyaev disk, the pressure and advection terms were not properly considered, and no attempt was made to
satisfy the inner boundary condition around a black hole apart from the ad hoc termination of the disk at
$r \leq r_{ms}$. Along with this theoretical shortcoming, the
Shakura-Sunyaev disk also failed to explain the power-law high-energy part of a black hole candidate spectrum. Therefore, the search for another component of the accretion disk that may explain the origin of the high-energy radiation from black hole candidates was undertaken by various groups.
One such model which received wide attention was the ADAF [{\it e.g.,} \citet{i77}, \citet{ny94}, hereafter NY94]. This model was first constructed around a Newtonian gravitational potential, where the viscously dissipated energy is advected along with the mass, momentum and entropy of the flow. The original ADAF solution was self-similar and wholly subsonic, and was found to be thermally and dynamically stable. However, the low-viscosity ADAF showed convective instability \citep{ia99}, which has no dynamical effect if the angular momentum is transported outward but is dynamically important if the opposite is true. The global solution of the ADAF showed that the flow actually becomes transonic at around a few Schwarzschild radii ($r_g$), and self-similarity may be maintained far away from the sonic point \citep{cal97}.
Simultaneous to these developments, interesting research was being carried out on sub-Keplerian flows around black holes.
It has been shown that sub-Keplerian flow does possess multiple sonic points in a significant range of the energy-angular momentum parameter space \citep{lt80}. One of the consequences of the existence of multiple sonic points is that the flow accreting through the
outer sonic point can be slowed down by the centrifugal barrier. This slowed-down matter acts as a barrier to the faster fluid
following it. If the barrier is strong enough, then accretion shocks may form \citep{c89}. General global solutions in the advective domain incorporating viscosity and thermal effects were obtained by many independent researchers \citep{c90,c96,lgy99,lmc98,gl04}.
Furthermore, it has also been shown that the global ADAF solution is a subset of the general advective solutions \citep{lgy99}. Whether a flow will follow an ADAF solution or some kind of hybrid solution
with or without shock will depend on the outer boundary condition and the physical processes dominant in the disk.
Although steady-state solutions are possible in a certain range of the parameter space \citep{c89,cd04,mlc94,mrc96a}, advective solutions with discontinuities such as shocks
are generally prone to various kinds of instabilities. Since various flow variables jump abruptly across the shock surface, the cooling, heating and other dissipation rates are markedly different across the shock. This may render the shock unstable. For example, in the presence of bremsstrahlung cooling, resonance between the cooling timescale and the infall timescale in the post-shock part of the disk gives rise to oscillating shocks \citep{msc96b}. \citet{lmc98} showed that beyond a critical viscosity the post-shock disk may oscillate. The interaction of the outflow and the inflow
may also cause a bending instability in the disk \citep{makbc01}. \citet{mtk99} showed that in the presence of non-axisymmetric azimuthal perturbations the shock initially becomes unstable but stabilizes within a finite radial extent into an asymmetric closed pattern.
Moreover, the post-shock region may be associated with the elusive Compton cloud that produces the hard photons \citep{ct95,cm06,mc08} and may also be the base of the jet \citep{dc99,dcc01,cd07,dc08,bdl08,dbl09}. Therefore, instabilities
of the post-shock region may manifest themselves as the variabilities observed in the emitted hard photons from microquasars and active galactic nuclei \citep{msc96b}. To add a new twist, \citet{ft04} conjectured the presence of multiple shocks, and \citet{gcsr10} actually reported the presence of two oscillating shocks giving rise to two quasi-periodic oscillations.
In this paper, we concentrate on the study of instabilities of rotating fluid around black holes generated by the angular momentum transport due to viscosity. Since the temperature and density are higher and the velocity is lower in the post-shock region compared to the pre-shock region, the angular momentum transport rates should be different
in the two regions of the disk. In other words, in this paper we simulate transonic, viscous, rotating
fluid around black holes. We employ a new code to study the effect of angular momentum transport in the accretion disk. Unlike other purely Eulerian codes, this new code is especially developed to strictly conserve angular momentum in the absence of viscosity. In \S 2, the governing equations and assumptions are presented. In \S 3, the code, which was built to calculate the evolution of angular momentum as accurately as possible, is described, along with tests for a rotating transonic flow and a viscous flow. In \S 4, the structure and the instability seen in the simulations are presented, along with descriptions of the nature of the instability. A summary and discussion are presented in \S 5.
\section{BASIC EQUATIONS}
The one-dimensional time-dependent equations for quasi-spherical
accretion of viscous flows are given by
\begin{equation}
{\partial \rho \over \partial t} + {1 \over r^2}
{\partial \over \partial r} (r^{2}\rho v_r) = 0,
\end{equation}
\begin{equation}
{\partial v_r \over \partial t} + v_r{\partial v_r \over \partial r} +
{1 \over \rho}{\partial p \over \partial r} = {l^2 \over r^3} -
{\partial \Phi_i \over \partial r},
\end{equation}
\begin{equation}
{\partial l \over \partial t} + v_r {\partial l \over \partial r} =
{1 \over r^2 \rho} {\partial \over \partial r} \left[\mu r^4 {\partial
\over \partial r} \left({l \over r^2}\right)\right],
\end{equation}
\begin{equation}
{\partial e \over \partial t} + v_r {\partial e \over \partial r} +
{p \over r^2 \rho}{\partial \over \partial r}(r^2 v_r) = f {\mu r^{2}
\over \rho} \left[{\partial \over \partial r} \left({l \over r^2}\right)\right]^2,
\end{equation}
where $\rho$, $v_r$, $l$, $\Phi_i$ and $e$ are the gas density, radial velocity,
specific angular momentum, gravitational potential and specific internal energy, respectively.
The angular velocity is defined as $\Omega = l/r^2 $. The suffix $i$ in equation (2) denotes ${\rm N}$
or ${\rm PN}$, corresponding to Newtonian or pseudo-Newtonian gravity \citep{pw80}, respectively;
the potentials are given by
\begin{equation}
\Phi_{\rm N} = -{GM_{BH} \over r}
\end{equation}
and
\begin{equation}
\Phi_{\rm PN} = -{GM_{BH} \over r - r_g}
\end{equation}
where $M_{BH}$ is the black hole mass and the Schwarzschild radius is $r_g=2GM_{BH}/c^2$.
The pseudo-Newtonian potential is widely used to mimic the Schwarzschild geometry.
For the gas pressure the equation of state for ideal gas is assumed,
{\it i.e.,} \
\begin{equation}
p = (\gamma-1) \rho e,
\end{equation}
where $\gamma$ is the ratio of specific heats.
For viscosity, the $\alpha$ prescription
(Shakura \& Sunyaev 1973) can be assumed, {\it i.e.,} \ the dynamical
viscosity coefficient is described by
\begin{equation}
\mu = \alpha \rho {c_s^2 \over \Omega_K},
\end{equation}
where
\begin{equation}
c^2_s=\frac{\gamma p}{\rho}
\end{equation}
is the square of the adiabatic sound speed, and
\begin{equation}
\Omega_K = {l_K \over r^2}= \left[\frac{1}{r}\frac{d\Phi_i}{dr}\right]^{1/2}
\end{equation}
is the Keplerian angular velocity, and the viscosity parameter $\alpha$ is a
constant which is generally less than 1. It is to be noted that the actual expression of
$\Omega_K$ depends on the gravitational potential used. Finally following NY94, the
parameter $f$ measures the fraction of the viscously generated
energy that is stored as entropy and advected along with flows.
The value $f=1$ corresponds to the limit of full advection and has been used in this paper.
In the following, we use $c$ and
$r_g$ as the units of velocity and length,
respectively, unless otherwise stated. In the geometrical units, the unit of time is $\tau_g=r_g/c$.
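As a concrete illustration of equations (8) -- (10), the following minimal Python sketch (our own illustration, not part of the simulation code described in \S 3; all names are ours) evaluates $\Omega_K$ and $\mu$ for the pseudo-Newtonian potential in the geometrical units just introduced, in which $GM_{BH}=1/2$:
\begin{verbatim}
import numpy as np

# Geometrical units: c = r_g = 1, hence G M_BH = 1/2 and
# Phi_PN = -1/(2(r-1)),  dPhi_PN/dr = 1/(2(r-1)^2).

def omega_K(r):
    """Keplerian angular velocity, Omega_K = [(1/r) dPhi/dr]^(1/2)."""
    return np.sqrt(0.5 / (r * (r - 1.0)**2))

def viscosity_mu(alpha, p, r, gamma=4.0/3.0):
    """alpha prescription: mu = alpha rho c_s^2 / Omega_K, which with
    c_s^2 = gamma p / rho reduces to alpha gamma p / Omega_K."""
    return alpha * gamma * p / omega_K(r)
\end{verbatim}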
\section{CODE AND TESTS}
One of the most demanding tasks in carrying out numerical
simulations of equations (1) -- (4) is to calculate the evolution of the
angular momentum as accurately as possible. Capturing shocks
sharply should also be important in resolving structures with clarity, if
shocks are involved. It has been known that the latter can be
achieved by using codes based on modern, upwind finite-difference
schemes on an Eulerian grid. However, without viscosity in such Eulerian codes,
it is normally the azimuthal momentum ($\rho
v_\phi$) and not the angular momentum that is treated as a conserved
quantity. On the other hand, codes based on the Lagrangian concept,
such as the SPH code, can be designed to preserve the angular
momentum strictly. Although the SPH code has been successfully applied to many
studies of accretion flows, it is known to be
unduly dissipative
\citep[see, {\it e.g.,} ][for discussions]{mrc96a}.
Here, we describe an Eulerian code that was built to accurately calculate
the evolution of the angular momentum including its
transport due to viscosity, and at the same time to capture
discontinuities (shocks and contact discontinuities) sharply with
minimum numerical dissipation. The code is composed of two parts:
the hydrodynamic part and the viscosity part. The hydrodynamic part is based
on the Lagrangian TVD plus remap approach. The Lagrangian/remap
approach is not new in numerical hydrodynamics and was employed
previously \citep{cw84}. But here we show
that in this approach the equation for angular momentum
conservation can be directly solved, so the hydrodynamics part can
be designed to preserve the angular momentum strictly in the absence
of viscosity. At the same time, the TVD scheme \citep{h83,rokc93} guarantees sharp reproductions of discontinuities and
minimum numerical dissipation. In the viscosity part, the viscous
angular momentum transfer is updated through an implicit method, ensuring
that it is free from the numerical instabilities associated with an explicit treatment. The
viscous heating, on the other hand, is updated with a second-order explicit method,
since it is less subject to numerical instabilities.
\subsection{Hydrodynamic Part}
The hydrodynamic part consists of the Lagrangian step and the remap step.
First in the Lagrangian step, the equations for Lagrangian
hydrodynamics are solved.
On the Lagrangian grid defined with mass coordinate, equations (1) -- (4),
except for the centrifugal force, gravity, and viscosity terms which are
treated separately [see below], can be written in a conservative form as
\begin{equation}
{d\tau \over dt} - {\partial(r^2 v_r) \over \partial m} = 0,
\end{equation}
\begin{equation}
{dv_r \over dt} + r^2{\partial p \over \partial m} = 0,
\end{equation}
\begin{equation}
{dl \over dt} = 0,
\end{equation}
\begin{equation}
{dE \over dt} + {\partial(r^2 v_r p) \over \partial m} = 0,
\end{equation}
where ${\tau}$ and $E$ are the specific volume and the specific total
energy, respectively, that are related to the quantities used in equations
(1) -- (4) as
\begin{equation}
\tau = {1 \over \rho}, \qquad E = e + {v^2_r \over 2}.
\end{equation}
The mass coordinate is related to the spatial coordinate via
\begin{equation}
dm = \rho(r) r^2 dr,
\end{equation}
and its position can be followed with
\begin{equation}
{dr \over dt} = v_r(m,t).
\end{equation}
Equations (11), (12), and (14) form a hyperbolic system of conservation
equations, and upwind schemes can be applied to build codes that
advance the Lagrangian step. We use Harten's TVD scheme,
an explicit, second-order, finite-difference scheme for
hyperbolic systems of conservation equations \citep{h83,rokc93}. We note that the angular momentum in Eq. (13) is
preserved, so it need not be updated in the Lagrangian step.
In the remap step, the quantities evolved in the Lagrangian grid
are redistributed to the Eulerian grid, to preserve the spatially fixed
grid structure.
Before the Lagrangian step, the Lagrangian and Eulerian grid zones coincide.
But after the step, the Lagrangian grid zone moves to the updated
position
\begin{equation}
r^{\rm new} = r^{\rm old} + {\bar v} \Delta t,
\end{equation}
where $\bar v$ is the time-averaged velocity, and so it does not
coincide with the Eulerian grid zone any more. Not only the
quantities conserved in the Eulerian grid (the density, radial
momentum, and total energy) but also the angular momentum is
remapped. For the remap, we employ the third-order accurate scheme
used in the PPM code (see Colella \& Woodward 1984, for details).
With the Lagrangian and remap steps, equations (1) -- (4) are updated in
the Eulerian grid, except for the centrifugal force, gravity, and viscosity
terms on the right hand side.
The centrifugal force and gravity terms are calculated separately after
the Lagrangian and remap steps such that
\begin{equation}
v_i^{\rm hydro} = v_i^{\rm lag+remap} + \Delta t \left( {l_i^{\rm remap}
\over r_i^3} - {d\Phi \over dr}\Bigg|_i \right).
\end{equation}
Then the viscosity terms are calculated, as discussed in the next
subsection.
Non-uniform Eulerian grids can be employed in the code. For the
problem in this paper we use a grid where the size of the cells
increases exponentially as
\begin{equation}
\Delta r_i = \Delta r_1 \times \delta^{i-1},
\end{equation}
to achieve higher resolution at the origin.
Here $\Delta r_1$ is the size of the first grid cell and $\delta$ is
the increment factor.
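A short sketch of this grid construction (our own illustration; names are ours, and the central sink is ignored) is:
\begin{verbatim}
import numpy as np

def exponential_grid(n_cells, dr1, delta):
    """Cell widths dr_i = dr_1 * delta**(i-1) and the cell edges,
    measured from the origin (the central sink is ignored here)."""
    dr = dr1 * delta**np.arange(n_cells)
    edges = np.concatenate(([0.0], np.cumsum(dr)))
    return dr, edges

# e.g. the grid of section 4.1: 3553 cells, dr1 = 0.0296,
# delta = 1.001, whose edges extend to ~1000 r_g.
dr, edges = exponential_grid(3553, 0.0296, 1.001)
\end{verbatim}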
\subsection{Viscosity Part}
Viscosity has two effects on accretion flows.
First, it transfers the angular momentum outwards, allowing the matter
to accrete inwards.
At the same time, it acts as friction, which results in viscous heating.
Since the term for the angular momentum transfer in Eq. (3) is linear
in $l$, it can be solved implicitly.
Substituting $(l^{\rm new} + l^{\rm remap})/2$ for $l$, Eq. (3) without the
advection term becomes
\begin{equation}
a_i l_{i-1}^{\rm new} + b_i l_{i}^{\rm new} +c_i l_{i+1}^{\rm new} =
-a_i l_{i-1}^{\rm remap} - (b_i -2)l_{i}^{\rm remap} -c_i l_{i+1}^{\rm remap},
\end{equation}
forming a tridiagonal matrix. Here $a_i$, $b_i$, and $c_i$ are given
in terms of $\rho$, $\mu$, and $r$, as well as $\Delta r$ and $\Delta t$.
The tridiagonal matrix can be solved easily for $l^{\rm new}$ with
an appropriate boundary condition \citep{ptvf92}. The
term for the viscous heating in Eq. (4) is also linear in $e$ (note
that $\mu \propto e$), so it alone can be solved implicitly too.
However, combining the two linear equations for $l$ and $e$ becomes
complicated. Through numerical experiments, we found that the
explicit treatment for the viscous heating does not cause any
numerical problem. So instead of implementing a complicated scheme
to solve $l$ and $e$ implicitly and simultaneously, we solve the angular
momentum transfer implicitly, while solving the viscous heating
explicitly.
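The following Python sketch illustrates the implicit update. Since the paper does not give $a_i$, $b_i$, and $c_i$ explicitly, the flux-conservative discretization of the diffusion term and the simple fixed-boundary treatment below are our own plausible reconstruction; only the overall structure (an average of $l^{\rm new}$ and $l^{\rm remap}$ leading to a tridiagonal solve) follows the text.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_banded

def viscous_l_update(l, rho, mu, r, dt):
    """One implicit (Crank-Nicolson) step of eq. (3) without advection,
    dl/dt = (1/(r^2 rho)) d/dr [ mu r^4 d(l/r^2)/dr ].
    Boundary cells are held fixed for simplicity."""
    n = len(l)
    dr = np.gradient(r)                        # cell widths
    # diffusive coupling through each interior interface i+1/2
    D = 0.5*(mu[1:] + mu[:-1]) * (0.5*(r[1:] + r[:-1]))**4 / np.diff(r)
    lo = np.zeros(n); di = np.zeros(n); up = np.zeros(n)
    inv = 1.0 / (rho * r**2 * dr)
    lo[1:-1] = D[:-1] / r[:-2]**2 * inv[1:-1]  # couples to l_{i-1}
    up[1:-1] = D[1:]  / r[2:]**2  * inv[1:-1]  # couples to l_{i+1}
    di[1:-1] = -(D[:-1] + D[1:]) / r[1:-1]**2 * inv[1:-1]
    # right-hand side: (I + dt/2 A) applied to l_remap
    rhs = l + 0.5*dt*(lo*np.roll(l, 1) + di*l + up*np.roll(l, -1))
    # banded form of (I - dt/2 A) for the tridiagonal solver
    ab = np.zeros((3, n))
    ab[0, 1:]  = -0.5*dt*up[:-1]               # superdiagonal
    ab[1, :]   = 1.0 - 0.5*dt*di               # diagonal
    ab[2, :-1] = -0.5*dt*lo[1:]                # subdiagonal
    return solve_banded((1, 1), ab, rhs)
\end{verbatim}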
\subsection{Tests}
Two tests are presented to demonstrate that the code can handle the transonic
flow as well as the viscous flow that are involved in our problem.
The capability of the code to capture shocks sharply and resolve
structures clearly is tested with a transonic accretion flow. In these tests we reproduce well-known results.
The evolution of an inviscid flow, which enters the outer boundary with
a small amount of angular momentum and approaches a black hole
described by the Paczy\'nski \& Wiita potential \citep{pw80}, is calculated in cylindrical geometry. We note that the previous subsections describe the
code only in spherical geometry, but the code is actually
written in arbitrary geometries. For this test, the version in the cylindrical geometry is used. Without viscosity, the angular
momentum is preserved. A shock can form if the rotating flow through the outer sonic point
approaches the centrifugal barrier and decelerates discontinuously, due to the twin effects of centrifugal force
and pressure.
In the test, $l=1.8cr_g$, a value slightly below the
marginally stable value [$l_{ms}=(3/2)^{3/2}cr_g$], is used. The values of the flow
quantities in geometrical units are $(\rho, p, v_r) = (0.71809, 0.007604, -0.083566)$ at
the injection radius $r_{\rm inj}=50r_g$. The sound speed at $r_{\rm inj}$ is $c_s=0.1188c$, hence the fluid is subsonically injected. The fluid becomes supersonic after crossing the outer sonic point
at $r_{co}=27.9r_g$, becomes subsonic at the shock $r_{sh}=7.89r_g$, and enters the black hole supersonically
after crossing the inner sonic point at $r_{ci}=2.563r_g$. For the ratio of specific
heats, $\gamma = 4/3$ is used. This is the case of a stable standing
shock considered in \citet{mrc96a}. In Figure 1, the
numerical solutions (open circles) of density $\rho$ (top) and radial Mach number $M_r=v_r/c_s$ (bottom) using 2048 uniform grid cells are compared with
the analytical solution (solid line). The flow in smooth
regions coincides with the analytical solution very well, and the
shock position matches the analytical value closely. The agreement with the analytical result
is better for the current code than for the purely Eulerian
TVD code and the SPH code presented in \citet{mrc96a}.
Next, the performance of the code for a subsonic viscous flow is tested with a self-similar ADAF solution. Matter is steadily injected with Keplerian angular velocity
into the computational domain at $r_{\rm inj}$, and the simulation
lasts until the steady state is reached. Figure 2 presents the flow
quantities after the steady state is reached and compares them with
the analytic solution. The Newtonian potential is used, and the ADAF
is described with a self-similar solution (NY94). The values of
physical parameters used in this test are $\gamma = 4/3$, and $\alpha =
0.3$. The simulation was performed on an exponentially
increasing grid of 780 cells with $\Delta r_1 = 0.4972 r_s$ and
$\delta = 1.01$. Here $r_s$ is the sink size. The injection position
is $r_{\rm inj} \sim 3.6915 \times 10^4 r_s$. The quantities are
drawn in units of the Keplerian velocity and the Keplerian angular
momentum at the sink, $v_K(r_s)$ and $l_K(r_s)$, and the density is
in an arbitrary unit. The figure shows that the analytic solution is
reproduced very closely in the region between $r \sim 10 r_s$ and $r
\sim 10^4 r_s$ in a box of size $1.1611 \times 10^5 r_s$. The error
in the specific angular momentum is at most a few \%.
\section{RESULTS OF SIMULATIONS}
In previous numerical works, oscillation phenomena in accretion flows around black holes related to QPOs were reported. A study of
inviscid supersonic accretion flows around a Newtonian central object showed the accretion disk with
shock structure to be dynamically unstable \citep{rbol95}.
Global transonic accretion flows around black holes have been known to exhibit stationary shocks for inviscid
\citep{c89,mrc96a,dcc01} as well as dissipative flows \citep{c96,lgy99,lmc98,bdl08}.
However, since the post-shock flow is hotter, denser and slower, the dissipation timescale in
the post-shock flow is shorter than that of the pre-shock flow, which may make the post-shock flow
unstable.
Indeed it has been shown that the energy-angular momentum parameter space for standing shocks shrinks with increasing
viscosity parameter \citep{cd04,gl04,dbl09}. \citet{lmc98} simulated viscous transonic flow and showed that steady shocks exist for low viscosity, while for higher viscosity the shock becomes unstable.
However, \citet{lmc98} restricted their investigations to a hot flow ($T_{inj}\sim 10^{11}K$
at the injection radius) and to very low viscosity
parameters ($\alpha \lower.5ex\hbox{$\; \buildrel < \over \sim \;$}$ few$\times10^{-3}$). Since the pre-shock flow was chosen to be hot (the post-shock disk was obviously even hotter), the angular
momentum removal was very efficient in both the pre-shock and the post-shock disk, even when the viscosity parameter was low. The length scale of their computation box was only a few tens of $r_g$.
In this paper we simulate viscous transonic flow which is cold to begin with, and investigate the instability arising from
a reasonably high viscosity of the flow. The reason for choosing a cold flow at the injection is to
have a very different angular momentum transport rate in the post-shock and pre-shock disk,
and thereby to maximize the effect of the shock instability.
Moreover, we keep the length scale of the computation box fairly large
so as to study large amplitude and low frequency shock instability.
\subsection{Shock Formation in Inviscid Rotating Flow}
We start our set of simulations with a low energy, rotating, transonic, inviscid flow around a black hole (described by $\Phi_{PN}$).
The steady-state, inviscid, transonic solution corresponds to a flow characterized by the Bernoulli parameter or specific energy [which in our unit system is ${\cal E}=0.5v^2_r+c^2_s/(\gamma -1)+l^2/(2r^2)-1/\{2(r-1)\}$] and the specific angular momentum ($l$). The parameters ${\cal E}, ~l$ for the inviscid flow
are $1.25{\times}10^{-6}c^2$ and $1.8cr_g$, respectively. The ratio of specific heats is $\gamma=1.4$. The Bondi radius (the length scale within which gravity becomes important) is defined as
$r_B = GM_{BH}/c_{s,\infty}^2$, where $c_{s,\infty}^2={\cal E}(\gamma-1)$ for an inviscid flow.
In this particular case $r_B=10^6r_g$ and $c_{s,\infty} = 7.071~{\times}10^{-4}c$. The analytical, steady-state solution of flows for these parameters gives two physical sonic points, the inner one is at
$r_{ci}= 2.394r_g$ and the outer one at $r_{co}=199991.04r_g \approx 0.2r_B$. The analytical solution also predicts a shock at $r_{sh}=22.2r_g$. It has been shown in connection with Figure 1 that it is possible to simulate transonic flow quite accurately with subsonic injection, {\it i.e.,} when $r_{inj}>r_{co}$.
However, in the present scenario $r_{sh}\ll r_{co}$. Therefore, if $r_{inj}>r_{co}$, then a large amount of computation time would be wasted in simulating an uninteresting region
of the disk. Hence, without any loss of generality, we choose the injection parameters from the supersonic portion of the analytical curve in order to reduce the computation time.
To further reduce the computation time and also to
achieve higher resolution close to the center, we use
an exponentially increasing grid of 3,553 cells with $\Delta
r_1 = 0.0296$ and $\delta = 1.001$; the length of the computation box corresponds to $1000 r_g = 0.001 r_B$.
The injection radius is hence $r_{inj}=1000r_g=0.001r_B$, and
the flow radial velocity [$v_r (inj) = 2.970 \times 10^{-2}c$], specific angular momentum [$l_{inj}=1.8cr_g$] and sound
speed [$c_s(inj) = 4.827 \times 10^{-3}c$] at $r_{inj}$ are taken from the analytical solution.
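As a cross-check of these numbers, the specific energy defined above can be evaluated directly; the short sketch below (ours) does this in geometrical units. Note that ${\cal E}$ is a small residual of terms of order $10^{-4}$, so the three-figure rounding of the quoted injection values limits the accuracy with which ${\cal E}=1.25{\times}10^{-6}c^2$ is recovered.
\begin{verbatim}
def bernoulli(vr, cs, l, r, gamma=1.4):
    """E = v_r^2/2 + c_s^2/(gamma-1) + l^2/(2 r^2) - 1/(2(r-1)),
    in units of c^2, with r in r_g (pseudo-Newtonian potential)."""
    return (0.5*vr**2 + cs**2/(gamma - 1.0) + l**2/(2.0*r**2)
            - 1.0/(2.0*(r - 1.0)))

# injection state of this subsection (three-figure rounding applies)
E = bernoulli(vr=2.970e-2, cs=4.827e-3, l=1.8, r=1000.0)
\end{verbatim}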
In Figure 3, we compare the steady-state analytical solution (solid) with the simulation result (open circles) when the steady state is reached. Various flow variables, $\rho$ (a), $v_r$ (b),
$c_s$ (c) and $p$ (d), are plotted against $\log(r)$. Figure 3 shows excellent agreement of the simulation result with the analytical curve, and the shock has been captured within 2-3 cells.
\subsection{Shock Oscillation of Viscous Flow}
Time dependent solutions of viscous transonic accretion flow are obtained by starting with the inviscid flow described in
\S 4.1 as the initial condition, and then increasing the viscosity parameter $\alpha$.
The action of the viscosity can be understood from equations (3) and (4). For accretion ({\it i.e.,}
when $v_r < 0$), if the r.h.s. of equation (3) is negative then the
angular momentum will be transported outwards.
In the presence of Shakura-Sunyaev type viscosity, the
angular momentum transport in the post-shock subsonic flow is much more efficient than in the pre-shock
supersonic flow. As a result, angular momentum starts to pile up in the immediate post-shock fluid, resulting in a jump in the angular momentum distribution across the shock.
Similarly, equation (4) tells us that the viscous heat dissipation in the post-shock disk will also be higher compared to the pre-shock disk.
It is well known that a standing shock forms if the total pressure (ram+gas pressure)
is conserved across the shock \citep{c89}.
In Figure 4a, the inviscid solution produces a stationary shock at $r_{sh}=22.2r_g$, as is
shown in Figure 3.
In Figure 4b, the shock location as a function of $t$ is plotted for $\alpha=0.003$. The excess gas
pressure due to viscous heat dissipation, and the increased centrifugal force due to the piled-up
angular momentum in the post-shock disk, push the shock front outwards.
For low $\alpha$, the shock front moves to a larger radius ($r_{sh}\sim 31r_g$ as in
Fig. 4b), where the balance between
the total outward push and the total inward pressure from the pre-shock flow is restored.
However, for the higher viscosity parameter $\alpha=0.006$ (Fig. 4c),
the enhanced angular momentum transport creates an even stronger outward push,
and the shock front overshoots a possible equilibrium position and starts to oscillate. Interestingly, when the shock moves to around $\sim 70~r_g$ and beyond, a second shock tends to emerge, which expands and collides with the outer shock. The combined shock then drifts outwards, the inner shock re-emerges and the cycle continues. In the following, we make a detailed investigation of transonic flow with higher viscosity and of the emergence of two shocks.
In Figures 5a -- 5d, we have plotted the radial velocity $v_r$ (dash-dotted) and the sound speed $c_s$ (solid)
at four time steps (a) $t=2.9{\times}10^5 r_g/c$, (b) $t=3.165{\times}10^5r_g/c$, (c) $t=3.4{\times}10^5r_g/c$
and (d) $t=3.615{\times}10^5r_g/c$, where the outer boundary conditions are the same as in Figure 3, and $\alpha=0.01$.
In Figures 6a -- 6d, the specific angular momentum distribution $l(r)$ is plotted for the same simulation and for the same time steps as in Figures 5a -- 5d. The corresponding panels of Figures 5 and 6 are to be considered in tandem to understand this complicated phenomenon.
The different snapshots in Figures 5a -- 6d correspond to (i) the maxima of the outer shock (Figs. 5a and 6a),
(ii) the expansion stage of the combined shock just after the minima in the next cycle (Figs. 5b and 6b), (iii) just before the maxima of the outer shock (Figs. 5c and 6c), and (iv) just after the maxima
of the outer shock (Figs. 5d and 6d).
In Figure 5a, there are two shock structures: the inner shock is at $r_{sh}({\rm in})\sim 130r_g$ and the outer
shock at $r_{sh}({\rm out})\sim 500r_g$. The corresponding angular momentum distribution in Figure 6a shows that
$dl/dr>0$ in the range $r<20r_g$ and $r_{sh}({\rm in})<r<r_{sh}({\rm out})$, while $dl/dr<0$ in the range
$20r_g<r<r_{sh}({\rm in})$. In Figure 5b, the two shocks merge and the combined shock is at $r_{sh}({\rm in})=r_{sh}({\rm out})=r_{sh}\sim100r_g$. Figure 6b shows that $dl/dr>0$ in the region $r<20r_g$, and $dl/dr<0$ for $20r_g<r\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} r_{sh}$, with a smaller hump in the angular momentum distribution around $r_{sh}$.
The angular momentum distribution attains a tall peak, and the enhanced centrifugal pressure almost stalls the infall (Fig. 5b) in that region.
However, due to the extra pressure from the piled-up $l$, the combined shock moves outwards, while the contact discontinuity resulting from the collision of the shocks moves towards the black hole.
As a result, the sound speed ({\it i.e.,} temperature) in the immediate post-shock flow drops (Fig. 5c), and the angular momentum distribution becomes $dl/dr>0$ (Fig. 6c). This allows for a freer infall, and $v_r$ in the immediate post-shock disk increases considerably (Fig. 5c). As the contact discontinuity is absorbed by the black hole, the angular momentum in the immediate post-shock region is reduced considerably, and $v_r$ becomes supersonic in the region. However, the flow again hits the centrifugal barrier closer to the black hole and the inner shock re-emerges (Figs. 5d and 6d). We note that the regions of $dl/dr<0$ are subject to the rotational instability. The non-steady behavior shown here should be partly attributed to this instability.
\subsubsection{On emergence of the inner shock, shock collision and the angular momentum transfer}
In the top panel of Figure 7, the shock oscillation is plotted for $\alpha=0.01$. Each of the panels of Figures 5a -- 6d corresponds to a snapshot of flow variables taken from the top panel of Figure 7 (the time sequences a -- d
are marked on the figure).
Similar to Figure 4c, $r_{sh}$ in the top panel of Figure 7 starts to oscillate as the viscosity is turned on.
A transient inner shock, {\it i.e.,} $r_{sh}({\rm in})$, develops when $r_{sh}({\rm out})\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 80r_g$.
Initially the dynamics of the two shocks are similar to those of Figure 4c, {\it i.e.,} the inner shock forms
when $r_{sh}({\rm out})$ is at its maximum, and then $r_{sh}({\rm in})$ collides with the contracting outer shock, and the merged shock then expands. However, for $t> 0.2{\times}10^5\tau_g$ the shock dynamics slowly change: both shocks expand and then contract, and the shocks collide while contracting.
The merged shock then reaches a minimum and then expands, and this cycle continues.
The formation of the inner shock can be understood as follows:
as the original shock expands to a distance $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 80r_g$, the sound speed in the immediate post-shock region and that close to the black hole differ by almost an order of magnitude.
Hence the rate of angular momentum transport in a region nearer to the inner sonic point is much higher than in the region closer to the shock. The angular momentum transport rate is thus markedly different
not only between the post-shock and pre-shock regions, but also within the post-shock flow when the shock expands to a very large distance. As a consequence, the angular momentum piles up between the inner sonic point ($r_{ci}$) and the shock ({\it e.g.,} Fig. 6b), which enhances the centrifugal barrier and impedes the accretion.
Continued shock expansion reduces the post-shock sound speed ({\it i.e.,} temperature) and creates a mild but positive angular momentum gradient, which increases the infall velocity in the immediate post-shock
flow. This can continue up to the extent that the {\it post-shock fluid once again becomes supersonic
in the immediate post-shock domain; however, further downstream the piled-up angular momentum virtually stops the supersonic inflow, causing the formation of the inner shock}. The inner shock again jacks up the temperature, which causes the inner shock to expand too. If the outer shock is contracting
then the two shocks may collide, or both shocks may expand in phase and collide during the
contraction phase. The combined shock then expands and the whole cycle is repeated. It may be noted that the inner shock emerges halfway into each of the cycles and hence it is a persistent feature.
It is interesting to examine the radiative properties of such oscillatory dynamics of the disk.
We estimate the bremsstrahlung loss from the disk a posteriori, as a representative of the radiative
loss.
It is well known that the bremsstrahlung emission $\propto \rho^2~c_s$. In Figure 7b,
we plot $E_{Total~Br}/[\rho_{inj}^2~c_{s}(inj)]$ as a function of time, where
\begin{equation}
E_{Total~Br} =\int^{r_{inj}}_{r_{sink}}\rho^2~c_s~r^2dr,
\end{equation}
$\rho_{inj}$ and $c_{s}(inj)$ are the density and sound speed of the flow at the outer boundary or $r_{inj}$.
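On the discrete grid this diagnostic is a single quadrature; a minimal sketch (ours, using trapezoidal integration on the nonuniform grid) is:
\begin{verbatim}
import numpy as np

def bremsstrahlung_proxy(rho, cs, r, rho_inj, cs_inj):
    """Integral of rho^2 c_s r^2 dr over the grid, normalized by
    rho_inj^2 c_s(inj)."""
    return np.trapz(rho**2 * cs * r**2, r) / (rho_inj**2 * cs_inj)
\end{verbatim}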
Hence, the bottom panel of Figure 7 represents the total bremsstrahlung emission from the computational box relative to the bremsstrahlung emission at $r_{inj}$. Interestingly, the bremsstrahlung emission also shows periodic behavior, whose period is similar to that of the shock oscillation. However, the shock maxima/minima may or may not coincide with either the emission maxima or minima. In this particular case, there is initially no distinct correlation, but as the oscillation approaches quasi-saturation, the emission maxima coincide with the combined shock minima, and the emission minima coincide with the rising phase of the combined shock, when the inner shock has not yet formed.
As the combined shock contracts, it pushes the post-shock matter inward just like a `bellows' in a blacksmith's
shop. This jacks up $\rho$, $c_s$ and $v_r$. The enhanced $\rho$ and $c_s$ contribute to forming the
emission maxima. As the combined shock expands, flow variables like $\rho$ and $c_s$ in the immediate post-shock region decrease, and the emission starts to decrease until it reaches its minima. Since the wide difference in $c_s$ also triggers the differential
$l$ transfer between the inner and outer regions of the post-shock disk, the angular momentum again starts to pile up and initiates the formation of the inner shock described above.
There also exists a secondary peak in the bremsstrahlung emission, which appears to be related to the dynamics of the inner shock.
The time lag between the shock maxima and the emission maxima is $\delta t\sim 2{\times}10^4\tau_g$.
Initially the oscillation period of the shock was $T^{\prime}_{osc}\sim 5\times 10^3 \tau_g$.
The oscillation period gradually increases to a quasi-saturation value of $T_{osc}\sim 8 {\times} 10^4 \tau_g$.
Since,
\begin{equation}
\tau_g = {2GM \over c^3} \sim 10^{-5} {M_{BH} \over M_\odot}\ {\rm
sec},
\end{equation}
therefore,
\begin{equation}
T_{\rm osc} \sim 8 \times 10^4 \tau_{g} \sim 8 \times 10^{-1}
~ {M_{BH} \over M_{\odot}}\ {\rm sec}.
\end{equation}
This would correspond to a frequency of $\sim 0.125~Hz$ for a stellar-mass black hole, {\it i.e.,} $M_{BH}\sim 10 M_{\odot}$.
In the case of a super-massive black hole ($M_{BH}\sim 10^8 M_{\odot}$), these timescales
correspond to variabilities of $2.5$ yr.
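The conversion from a period in code units to an observable frequency is thus a one-liner; illustratively (our naming):
\begin{verbatim}
def qpo_frequency_hz(T_osc_in_tau_g, m_bh_in_msun):
    """nu = 1/(T_osc tau_g), with tau_g ~ 1e-5 (M/M_sun) s;
    T_osc = 8e4 and M = 10 M_sun give ~0.125 Hz."""
    return 1.0 / (T_osc_in_tau_g * 1.0e-5 * m_bh_in_msun)
\end{verbatim}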
However, since there are two shocks, we are interested in seeing the influence of the dynamics of the two
shocks on the emission.
In the left panel of Figure 8, we plot the outer shock (top), inner shock (middle) and the
relative bremsstrahlung emission (bottom) for reference, and in the right panels
we plot the power density spectra for the outer shock (top), inner shock (middle) and the bremsstrahlung
emission (bottom) for a stellar-mass black hole ($M_{BH}=10M_{\odot}$). The power density spectrum of the outer shock shows a frequency of $\sim 0.125~Hz$. The power density spectrum of the inner shock has a prominent peak at a frequency of $\sim 0.25~Hz$
and a weaker peak around $\sim 0.125~Hz$; the secondary peak suggests that the oscillation of the outer shock forces a weak periodicity on the inner shock as well. The power density spectrum of the inner shock is somewhat noisy, since the time variation of the inner shock, although persistent, is not continuous. Interestingly, Figure 8 shows that the bremsstrahlung emission also peaks at around the same frequencies as the two shocks, confirming that
the quasi-periodicity in the emission is due to the quasi-periodicity of the two shocks.
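Power density spectra such as those in Figure 8 follow from the time series by a straightforward periodogram; the sketch below (ours, assuming a uniformly sampled series) illustrates the procedure:
\begin{verbatim}
import numpy as np

def shock_psd(t_tau_g, x, m_bh_in_msun=10.0):
    """Periodogram of a uniformly sampled series x(t) (e.g. a shock
    position); the step is converted from tau_g to seconds so that
    the frequency axis is in Hz for the given black hole mass."""
    dt = (t_tau_g[1] - t_tau_g[0]) * 1.0e-5 * m_bh_in_msun
    x = x - np.mean(x)              # remove the zero-frequency term
    power = np.abs(np.fft.rfft(x))**2
    freq = np.fft.rfftfreq(len(x), dt)
    return freq, power
\end{verbatim}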
In the presence of such a dynamical disk, it is intriguing to investigate the time variation of the amount of matter and angular momentum consumed by the black hole.
Let us define the mass loss parameter, or the ratio of the rate of mass cannibalized by the black hole to the rate of mass injected, as ${\dot M}/{\dot M}_{inj}$, where ${\dot M}=(\rho v_r r^2)|_{sink}$
and ${\dot M}_{inj}=(\rho v_r r^2)|_{r_{inj}}$. The angular momentum loss rate is defined as
${\dot L}/{\dot L}_{inj}$, where ${\dot L}={\dot M}l_{sink}$ and ${\dot L}_{inj}={\dot M}_{inj}
l_{inj}$. The average specific angular momentum of the disk is defined as
\begin{equation}
<l>=\frac{\int l dr}{\int dr}
\end{equation}
In Figure 9, we plot the mass-loss parameter (top panel), angular momentum loss rate (middle),
and the average angular momentum of the disk (bottom) as a function of time. The profiles of the mass-loss parameter and the angular momentum loss rate are similar to those of the bremsstrahlung emission rate.
Since the distribution of $\rho$ peaks when the shock is at its minima, the peaks of the mass-loss parameter and the angular momentum loss rate coincide with the peak of the bremsstrahlung emission.
As the shock recedes, $v_r$ and $\rho$ decrease, resulting in matter accumulating in
the disk, {\it i.e.,} ${\dot M}/{\dot M}_{inj}<1$. As the shock contracts, it squeezes more matter into the black hole (accumulated in the expansion phase) than is being supplied, and therefore ${\dot M}/{\dot M}_{inj}>1$.
It is to be noted that in this particular case, the disk prefers to stay in the state where ${\dot M}/{\dot M}_{inj} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 1$. The angular momentum loss rate
follows ${\dot M}/{\dot M}_{inj}$. Interestingly, the maxima of the average angular momentum of the disk coincide with the minima of the emission, the mass-loss rate and the angular momentum loss rate. The bottom panel of Figure 9 suggests that if the average angular momentum of the disk increases, then $v_r$
should decrease in a large region of the disk, which should reduce the rate of matter actually accreted onto the black hole. And the average angular momentum ($<l>$) of the disk increases with the increase of
the peak and the width of the angular momentum distribution of the disk, which corresponds to the dips in emission, mass-loss parameter and angular momentum loss rate.
Although $<l>$ oscillates with the same period as the shock, the disk interestingly prefers to stay in a state where $<l>$ is greater than $l_{inj}$.
Since the disk itself is oscillating, all these flow parameters should oscillate
with the same period, and indeed the bremsstrahlung emission, the mass loss rate, the angular momentum loss rate, etc. all oscillate with the period of the shock oscillation.
\subsubsection{Shock oscillation for higher viscosity}
The dynamics of the disk with a higher viscosity parameter is different from that with a lower one. For a higher viscosity parameter, the difference in the disk dynamics arises from the more efficient angular momentum transfer as well as the higher viscous dissipation of heat, even if the outer boundary condition remains the same.
In Figure 10, we have plotted the shock location with time (top) and the bremsstrahlung emission
with time (bottom) for a fluid with the same injection parameters as the inviscid flow described in \S
4.1, i.e., $r_{inj}=1000r_g$, $v_r(inj)=2.970 \times 10^{-2}c$, and $c_s(inj)=4.827 \times 10^{-3}c$
and the viscosity parameter is $\alpha=0.1$. The time variation of the shock for $\alpha=0.1$ (Fig. 10)
is distinctly different from that for $\alpha=0.01$ ({\it i.e.,} Fig. 7). The inner shock forms, expands, and at some epochs
collides with the contracting outer shock, while at other epochs it disappears before colliding with the outer shock. The inner shock is weaker compared to that in the disk with lower $\alpha$. The time evolution of the two shocks is somewhat similar to the initial phases of the shock variation for $\alpha=0.01$. A comparison of the time variation of the bremsstrahlung emission with that of the shock shows no correlation between the shock minima and the emission maxima,
unlike the case for $\alpha=0.01$. In Figures 11a -- 11b, $v_r$ (dash-dotted) and $c_s$ (solid) are plotted
corresponding to the emission maxima (Fig. 11a) and minima (Fig. 11b). Similarly, the corresponding specific angular momentum distributions are plotted for the emission maxima (Fig. 11c) and minima (Fig. 11d),
and the density too is plotted for the emission maxima (Fig. 11e) and minima (Fig. 11f). The maxima of the bremsstrahlung emission occur when the inner shock is tending to form, while the minima occur when the inner shock has not been formed (also see Fig. 10). This change in the behavior of the shock
and the emission properties compared to that for $\alpha=0.01$ actually depends on the different rate of angular momentum transfer. Since the viscosity in the present case is tenfold higher than $\alpha=0.01$, the outward angular momentum transport is very efficient. So, close to the
black hole, the angular momentum rises steeply outward, unlike in the flow with lower viscosity ({\it e.g.,} Figs. 11c -- 11d may be compared with Figs. 6a -- 6d). If the shock is closer to the black hole,
then the peak of the angular momentum distribution is very close to the outer shock (Fig. 11d); this causes matter to accrete more freely between the horizon and the peak of the angular momentum distribution, and hence the density is lower (Figs. 11b, 11d and 11f). This causes the emission to dip. As the shock moves out, the angular momentum peak is situated further inside (Fig. 11c), and this causes the matter to fall rapidly inwards between the outer shock and the inner $l$ peak. As the infalling matter encounters the angular momentum pile-up, it decelerates drastically, increasing the density considerably, and hence enhances the bremsstrahlung emission (Figs. 11a, 11c and 11e). Eventually it forms an inner shock, but the enhanced energy deposition in the post-inner-shock region causes the inner shock to expand, thereby reducing the density and emission. In this connection one may point out that the immediate post-shock region (for both the inner and outer shocks) may be decelerating or accelerating ({\it e.g.,} Figs. 5a-5d, 11a-11b). However, it was
predicted by \citet{n92,nh94} that post-shock acceleration
and deceleration
correspond to unstable and stable shocks, respectively. The reason for this is that no
standing shock can exist in the viscous flow for the corresponding initial conditions.
In the top panel of Figure 12, we plot ${\dot M}/{\dot M}_{inj}$, and as in the lower viscosity case its peaks and troughs coincide with those of the bremsstrahlung emission. The angular momentum loss rate
${\dot L}/{\dot L}_{inj}$ (middle) also follows the pattern of ${\dot M}/{\dot M}_{inj}$.
Since the angular momentum distribution is higher during the peak emission,
the peak of the average angular momentum of the disk $<l>$ (bottom) coincides with the emission peak.
Moreover, ${\dot M}/{\dot M}_{inj}<1$ most of the time, which means that because of the higher angular momentum most of the matter being injected into the disk is not being consumed by the black hole; this is indicated by the fact that $<l>$ is significantly higher than $l_{inj}$.
The role of viscosity can be ascertained if one compares the viscous timescale with the advection timescale.
The viscous timescale may be defined as
\begin{equation}
\tau_{\rm vis}=\int^{r_{sh}}_{r_{ci}}\frac{r}{\nu}dr,
\end{equation}
where, $\nu=\mu/\rho$
and the advection timescale as
\begin{equation}
\tau_{\rm ad}=\int^{r_{sh}}_{r_{ci}}\frac{dr}{|v_r|}.
\end{equation}
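Both timescales are single quadratures over the post-shock region; a sketch (ours, with i_ci and i_sh the grid indices bracketing $r_{ci}$ and $r_{sh}$) is:
\begin{verbatim}
import numpy as np

def timescales(rho, mu, vr, r, i_ci, i_sh):
    """Viscous and advection timescales between the inner sonic
    point and the shock, with nu = mu/rho."""
    s = slice(i_ci, i_sh + 1)
    nu = mu[s] / rho[s]
    tau_vis = np.trapz(r[s] / nu, r[s])
    tau_ad  = np.trapz(1.0 / np.abs(vr[s]), r[s])
    return tau_vis, tau_ad
\end{verbatim}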
The time variation of $\tau_{\rm vis}$ closely follows the corresponding variation of the shock location,
with the minima and maxima of each coinciding with those of the other at exactly the same time.
For the $\alpha=0.01$ case,
at the shock minima $\tau_{\rm vis}$ and $\tau_{\rm ad}$ are comparable, and we see the shock front being pushed outward. At the shock maxima $\tau_{\rm ad} \ll \tau_{\rm vis}$, {\it i.e.,} advection dominates, and consequently the shock front hurtles inwards, contracting significantly from a few $\times 100~r_g$ to a few $\times 10~r_g$. Hence the interplay between these two timescales sustains the
oscillation. The general pattern of the temporal behavior of the timescales and their relation to the shock oscillation for the flow with $\alpha=0.1$ is similar to that for the flow with $\alpha=0.01$. However, $\tau_{\rm ad}$ and $\tau_{\rm vis}$ for $\alpha=0.1$ are roughly comparable and hence the oscillations do not saturate.
\citet{ft00} showed for Bondi flow that entropy-acoustic cycles may
sustain shock oscillation if $c_s(r_{ci})/c_s(r_{sh})\gg 1$.
In the simulations we have run, ${\rm max}~[c_s(r_{ci})/c_s(r_{sh})] \sim 10$, but the ratio is generally
smaller most of the time. Hence the effect of entropy-acoustic cycles in regulating the
shock oscillation is probably moderate in our case. A multi-dimensional simulation may give a
more definitive answer.
\section{SUMMARY AND DISCUSSION}
This paper presents time-dependent simulations of large amplitude oscillations of advective, viscous, sub-Keplerian disks, to complement earlier studies of low amplitude oscillations undertaken by Molteni and his collaborators \citep{lmc98}. As an improvement we have employed a new code which uses the Lagrangian TVD/remap approach. This code strictly conserves the angular momentum in the absence of viscosity, and reduces the numerical dissipation considerably ({\it e.g.,} \S 3). Tests showed that the shock-capturing capability of this code is better than that of both a standard Eulerian code and the Lagrangian SPH code ({\it e.g.,} Fig. 1), and that it follows the angular momentum transfer of the viscous, subsonic analytical solution extremely well ({\it e.g.,} Fig. 2).
The oscillation of the accretion shock was brought about by the different rates of angular momentum transfer
across the shock and by the heat dissipated due to the presence of viscosity. It has been shown that in the presence of a low viscosity parameter ($\alpha=0.003$), the shock front of a disk with the same initial and boundary conditions as those of the inviscid case did tend to expand, and
settled at a larger radius (Fig. 4b). For an even higher viscosity
($\alpha \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 0.005$), the rate of
angular momentum transfer was higher, which caused a faster rate of shock front expansion.
As the shock front exceeded a possible equilibrium position, it started to oscillate (Fig. 4c).
However, it is to be remembered that the value of the critical viscosity parameter ($\alpha \sim 0.005$ in the present case) is not sacrosanct,
but actually depends on the initial condition. For example, it has been shown that the critical viscosity
parameter will be higher for flows with lower angular momentum, while for a fluid with higher initial
energy the critical viscosity parameter will be lower \citep[see,][]{cd04}. Hence, if a proper initial
condition is used, then a stable shock is expected to form for higher
viscosity parameters ({\it i.e.,} $\alpha \sim 0.1$ -- $0.2$) too; an investigation of this, however, is not the point of interest of the present paper.
A detailed study of the disk dynamics was conducted for reasonably high viscosity
({\it i.e.,} $\alpha=0.01~\&~0.1$). For $\alpha=0.01$ the shock oscillation amplitude was found to be quite high, $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 100r_g$. This resulted in a large sound speed gradient in the post-shock subsonic flow. In the case of large amplitude shock oscillation, the rate of outward angular momentum transport in a region closer to the inner
sonic point was shown to be much higher than the rate of angular momentum transport near the shock.
As a result, our simulation showed the angular momentum to pile up in an intermediate region between the shock and the inner sonic point.
The expanding shock also increased the inflow velocity in the immediate post-shock region, only to be
decelerated by the extra centrifugal pressure due to the piled-up angular momentum further inside the disk ({\it e.g.,} Figs. 6a -- 6d). The inflow velocity in the post-shock disk may increase to the extent that it again becomes supersonic; then the resistance from the excess centrifugal pressure of the piled-up angular momentum distribution may cause the formation of the inner shock. In the case of moderately high $\alpha$, the distance between the peak of the angular momentum distribution and the outer shock
is large enough to allow $v_r$ to become supersonic again, enhancing the possibility of forming the inner shock.
It is to be noted that the amplitude of the shock oscillation will possibly be smaller in multi-dimensional simulations.
Viscosity is more active in the post-shock disk, and hence the extra centrifugal force due to the piled-up angular momentum and the heat dissipated by viscosity both actively take part in the shock oscillation.
However, in a realistic accretion flow, a part of the viscous heat dissipated in the post-shock disk will also be spent in puffing it up, which would imply less outward push on the shock surface. Hence for a flow with the same injection and viscosity parameters, the oscillation amplitude for a multi-dimensional
disk is expected to be smaller than for a purely conical flow. Consequently, the critical viscosity above
which the disk becomes oscillatory will also be higher.
The time evolution of the shocks for higher viscosity was shown to be distinctly different from that for the lower one. The inner shock was weaker and more sporadic for the disk with $\alpha=0.1$.
The main reason was the higher rate of angular momentum transport. Even when the shock was around $100r_g$, the highly efficient angular momentum transport created a smooth increase of angular momentum, which peaked only close to $r_{sh}$. As the shock expanded, $v_r$ increased, but the opportunity to become supersonic was minimized since the peak of $l(r)$ was closer to the shock.
Hence the inner shock, if formed at all, was weaker. However, since the shock amplitude for $\alpha=0.1$
was much larger than in the case with $\alpha=0.01$, with time the formation of the inner shock became more regular, and the behavior became more similar to that for $\alpha=0.01$.
The oscillatory motion of the shock induced oscillations in all the disk parameters, such as the emission, the rate of matter consumed by the black hole, the rate of angular momentum consumed by the black hole, and
the average angular momentum of the disk. All these parameters oscillated with the same period as the shock.
The disk oscillation started with $\alpha \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 0.005$. Considering
$M_{BH}=10~M_{\odot}$, for $\alpha=0.005$ the oscillation frequency of the outer shock was $5~Hz$ and that of the inner shock $10~Hz$; for $\alpha=0.006$ the frequencies were $1~Hz$ and $3~Hz$, respectively; and for $\alpha=0.01$ the two frequencies were $0.125~Hz$ and $0.25~Hz$. Hence one may conclude that,
apart from the dependence of the oscillation frequency on the injection parameters, the QPO frequency definitely decreases with increasing viscosity and vice versa. Observationally, GRO J1655-40
exhibits a rise in QPO frequency in its rising state and a fall in QPO frequency in its declining
phase in 2005 \citep{cdnp08}.
\citet{cdp09} plotted the
QPO frequency for the object XTE J1550-564 in its 1998 burst phase. They showed that in the rising phase
of the outburst the low frequency QPO increases from $0.08~Hz$ to $13.1~Hz$ and then
starts to decrease in the declining phase before disappearing. Such a rise and fall of QPO frequencies may be explained by the change in the shock oscillation frequency due to the
change of the net viscosity of the disk.
In the presence of viscosity, a positive angular momentum gradient, {\it i.e.,} $dl/dr \geq 0$, helps the outward transport of angular momentum, whereas a negative gradient may trigger inward transport of angular momentum. The condition $dl/dr<0$ was attained in the disk in at least two locations: at the outer shock front and just behind the peak of the specific angular momentum distribution. Those regions were subject to the rotational instability. The condition $dl/dr<0$ caused the average angular momentum $<l>$ of the disk to increase, and hence the period and the amplitude of the shock oscillation to increase too.
This is less perceptible for lower $\alpha$, for which the shock oscillation achieved
quasi-saturation, but for $\alpha=0.1$ the shock went outside the computational domain.
We repeated the simulation with $\alpha=0.3$ (not presented in the paper), and in this case too the shock went outside the domain,
although the formation of the inner sonic point and the oscillation of the two shocks were observed as well.
In multi-dimensional simulations, a part of the post-shock matter would have been ejected along the vertical
direction in the form of winds, which would have carried away a part of the angular momentum, so that the increase of $<l>$ may have been arrested for higher $\alpha$.
This would have meant that the shock oscillation might saturate for $\alpha \geq 0.1$. Hence, we conjecture that the non-saturation of the shock oscillation for $\alpha\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 0.1$ could be an artifact of one-dimensional
simulation. We will test this in future work using multi-dimensional simulations.
\acknowledgments{SL was supported in part by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0004738).
DR was supported in part by National Research Foundation of Korea through grant KRF-2007-341-C00020.
\section{Introduction}
The estimation of the parameters of a complex sinusoid in either one or two dimensions
is a pervasive problem in signal processing. For this problem, the maximum likelihood (ML)
principle provides an optimal estimator, in the sense that it achieves the Cram\'er-Rao
bound at relatively low signal-to-noise ratios. Yet in practice this estimator is deemed
too complex, due to the associated maximization problem. In rough terms, the situation is
that, though the ML cost function can be regularly sampled using the FFT efficiently, the
localization of its maximum requires some search procedure, and here is where the high
computational burden seems unavoidable. This observation was first stated by B. G. Quinn
in \cite{Quinn91}, where it was shown that a Gauss-Newton iteration may fail to find the
global maximum of the cost function, if the initial iteration is taken from the DFT of the
data samples (without zero padding). This has led several researchers to give up the ML
approach, and look for sub-optimal estimators with lower computational burden,
\cite{Quinn94,So10}. However, in some references \cite{Aboutanios05, MacLeod98,Perisa06}
there has been an attempt to recover the initial ML approach, based on two arguments. The
first is that it is feasible to approximately locate the maximum of the ML cost function,
simply by looking for the DFT sample with the largest modulus. Besides, the accuracy of this
coarse localization can be improved by zero padding. And the second is that it is
possible to interpolate the cost function close to this location from the nearby DFT
samples with some accuracy, since it is a smooth function. These arguments, if properly
exploited, make it possible to improve the performance significantly, with a complexity
similar to that of the FFT. The purpose of this paper is to go one step further in this
direction, and show that it is feasible to perform this interpolation with high accuracy
from a small number of DFT samples, so that the actual ML estimate can be obtained with
the complexity of a single FFT. The key lies in viewing the DFT as a band-limited signal
in the frequency variable.
The basic concept of this paper is first presented in the next section in the context of
time-domain interpolation. It is a simple and efficient technique to interpolate a
band-limited signal from a small number of samples with high accuracy. Then, the
estimation problem in one dimension is introduced in Sec. \ref{sec:mfe1}, where it is
shown that the method in the next section is actually the key for an efficient solution,
if the independent variable is properly interpreted. Afterward, Sec. \ref{sec:mfe2}
presents the extension of the method in Sec. \ref{sec:mfe1} to the two-dimensional case,
and finally Sec. \ref{sec:ne} contains a numerical example.
\section{High accuracy interpolation of a band-limited signal and its derivatives}
Given a band-limited signal $\Fs(t)$ with two-sided bandwidth $B$, a usual task is to
interpolate its value from a finite set of samples surrounding $t$. If the sampling period
is $T$ with $BT<1$ (Nyquist condition), and $2P+1$ samples are taken symmetrically around
$t$, then this task consists in finding a set of coefficients $a_p(u)$ such that the formula
\be{eq:20}
\Fs(t)\approx\sum_{p=-P}^P \Fs((n-p)T)a_p(u)
\end{equation}
is accurate, where $t=nT+u$ is the modulo-$T$ decomposition of $t$,
\be{eq:50} n\equiv\lfloor t/T+1/2\rfloor,\;\; u\equiv t-nT. \end{equation}
For fixed $u$, the $a_p(u)$ can be obtained numerically using filter optimization
techniques \cite{Laakso96}. Yet this approach is cumbersome if not only $\Fs(t)$ but also
its derivatives must be interpolated efficiently for varying values of $u$. Recently, a
so-called barycentric interpolator was derived in \cite{Selva10} that solves this problem
satisfactorily. This interpolator takes the form
\be{eq:46}
\Fs(nT+u)\approx\left.\sum_{p=-P}^P \frac{\Fs((n-p)T)w_p}{u-pT}\right/\sum_{p=-P}^P
\frac{w_p}{u-pT},
\end{equation}
where $w_p$ is a set of constants which are samples of a fixed function $\Fw(t)$,
[$w_p\equiv\Fw(pT)$]. In \cite{Selva10}, this function is given by the formula
\be{eq:22}
\Fw(t)\equiv \frac{\FGam(t/T+P+1)\FGam(-t/T+P+1)\Fg(t)}{\FL'(t)},
\end{equation}
where $\FGam(\cdot)$ is the Gamma function, $\Fg(t)$ is the pulse
\be{eq:23}
\frac{\ensuremath{{\mathrm{sinc}}}\big((1-BT) \sqrt{(t/T)^2-(P+1)^2}\big)}{\ensuremath{{\mathrm{sinc}}}(j (1-BT)(P+1))},
\end{equation}
and $\FL(t)$ is the polynomial
\be{eq:24}
\FL(t)\equiv \prod_{p=-P}^P (t-pT).
\end{equation}
(See \cite{Selva10} for further details.) The fact is that the error of (\ref{eq:46})
decreases exponentially with trend $\Fe^{-\pi(1-BT)P}$. In practice this means that a
small $P$ is enough to obtain high accuracy. Besides, as shown in \cite{Selva10}, Eq.
(\ref{eq:20}) can be differentiated with low complexity, so as to interpolate the
differentials of $\Fs(t)$ of any order, and (\ref{eq:46}) can be evaluated using only one
division. This interpolator will be the fundamental tool in the next section.
\section{ML frequency estimation: 1-D case}
\label{sec:mfe1}
Consider a signal $\Fz(t)$ consisting of an undamped exponential with complex amplitude
$a$ and frequency $f_o$, contaminated by complex white noise $\Fn(t)$ of zero mean and
variance $\sigma^2$,
\be{eq:25}
\Fz(t)= a\Fe^{j2\pi f_o t}+\Fn(t).
\end{equation}
For simplicity, $f_o$ is assumed to lie in $[-1/2,1/2[$. Next, assume that $M$ samples of
$\Fz(t)$ are taken at instants $m_1, m_1 +1, \ldots, m_1+M-1$ with $m_1\equiv -\lceil
M/2\rceil$. The maximum likelihood estimation of $f_o$ from these samples is the argument
that maximizes the cost function
\be{eq:26}
\FL(f)\equiv |\Fc(f)|^2,
\end{equation}
where $\Fc(f)$ is the correlation
\be{eq:27}
\Fc(f)\equiv \sum_{m=m_1}^{m_1+M-1} \Fz(m)\Fe^{-j2\pi f m}.
\end{equation}
Since $\Fc(f)$ is the DFT of the samples $\Fz(m)$, the maximum of $\FL(f)$ can be
approximately located by selecting a frequency spacing $\Delta f$ such that $1/\Delta
f$ is a power of two, and then computing the samples
\be{eq:28}
\Fc(k\Delta f)\equiv \sum_{m=m_1}^{m_1+M-1} \Fz(m)\Fe^{-j2\pi k m \Delta f },
\end{equation}
by means of a radix-2 FFT algorithm. The cost of this operation is only $\FO(M\log M)$.
This way, it is possible to obtain a frequency $k_o\Delta f$ that lies close to the true
abscissa of the maximum of $\FL(f)$. However, if $\hat{f}_o$ is this abscissa, the
approximation of $\hat{f}_o$ using $k_o\Delta f$ is very inaccurate. In this situation,
the accuracy can be improved by reducing $\Delta f$ (over-sampling), but then the
computational burden becomes high.
The barycentric interpolator in the previous section provides an efficient method to
obtain $\hat{f}_o$. The key idea is that the correlation $\Fc(f)$ in (\ref{eq:27}) is a
band-limited signal \emph{in the $f$ variable} of bandwidth $2|m_1|$, i.e., the variable $f$
in (\ref{eq:27}) plays the same role as the variable $t$ in (\ref{eq:20}). Also, $\FL(f)$
is band-limited with bandwidth $4|m_1|$. So, the method of sampling the correlation using
the FFT, and then looking for its maximum modulus, can be reformulated taking this
information into account. First, since $\FL(f)$ has two-sided bandwidth $4|m_1|$, it is necessary
to sample this function with a spacing smaller than its Nyquist period $1/(4 |m_1|)$, in
order to coarsely locate its maximum. This implies that a factor-two zero padding is
enough to ensure the localization of the maximum of $\FL(f)$. And second, since $\Fc(f)$
has bandwidth $2|m_1|$, it can be interpolated using the barycentric formula in
(\ref{eq:46}) with $2|m_1|$ in place of $B$, and $\Delta f$ in place of $T$, i.e.,
\be{eq:29}
\Fc(f)
\approx\left.\sum_{p=-P}^P \frac{\Fc((k-p)\Delta f)w_p}{\gamma-p\Delta f}\right/
\sum_{p=-P}^P
\frac{w_p}{\gamma-p\Delta f},
\end{equation}
where $f=k\Delta f+\gamma$ is the modulo-$\Delta f$ decomposition of $f$ defined as in
(\ref{eq:50}). If $\tilde{\Fc}(f)$ denotes the approximation to $\Fc(f)$ in (\ref{eq:29}),
then one may replace the problem of maximizing $\FL(f)$ with that of maximizing
\be{eq:34}
\tilde{\FL}(f)\equiv |\tilde{\Fc}(f)|^2.
\end{equation}
Besides, the maximum of this function is close to the abscissa of the largest FFT sample
in (\ref{eq:28}), $k_o\Delta f$.
Since $P$ is small and there is a coarse estimate of the maximum abscissa, the
maximization of (\ref{eq:34}) is a low-complexity problem that can be solved using
standard numerical methods. Given that the differentials of the barycentric interpolator
can be easily computed \cite[Sec. IV]{Selva10}, a suitable numerical method is Newton's
algorithm, in which the $r$th iteration $f_r$ is refined using
\be{eq:35}
f_{r+1}=f_r-\tilde{\FL}'(f_r)/\tilde{\FL}''(f_r),
\end{equation}
where
\bae{eq:36}{2}{r@{\,=\,}l}
\D{\tilde{\FL}'(f)}&\D{\frac{2}{M}\mathrm{Re}\{\tilde{\Fc}'(f)\tilde{\Fc}(f)^*\},}\\
\D{\tilde{\FL}''(f)}&\D{\frac{2}{M}(\mathrm{Re}\{\tilde{\Fc}''(f)\tilde{\Fc}(f)^*\}
+|\tilde{\Fc}'(f)|^2).}
\end{array}\end{equation}
This process can be initiated with $f_1=k_o\Delta f$, and a small number of iterations
(3 to 5) is enough to obtain $\hat{f}_o$.
The computational burden of this method is given by that of the FFT, i.e., it is $\FO(M\log
M)$, and it yields the actual ML estimate of $f_o$.
\section{ML frequency estimation: 2-D case}
\label{sec:mfe2}
The method in the previous section can be extended to two-dimensional signals with
only minor changes. For this, it is only necessary to consider two variables, $t_1$ and
$t_2$, and to repeat the same interpolation procedure. The initial model, equivalent to
(\ref{eq:25}), is
\be{eq:37}
\Fz(t_1,t_2)=\Fe^{j2\pi (f_{1,o} t_1+f_{2,o}t_2)}+\Fn(t_1,t_2).
\end{equation}
Next, this signal is sampled at the integer pairs $(m,n)$ for $m_1\leq m<m_1+M$ and
$n_1\leq n<n_1+N$, with $m_1\equiv-\lceil M/2\rceil$, $n_1\equiv-\lceil N/2\rceil$. The
2-D equivalent of the cost function in (\ref{eq:26}) is
\be{eq:38}
\FL(f_1,f_2)\equiv |\Fc(f_1,f_2)|^2,
\end{equation}
where $\Fc(f_1,f_2)$ is the correlation
\be{eq:39}
\Fc(f_1,f_2)\equiv \sum_{m=m_1}^{m_1+M-1} \sum_{n=n_1}^{n_1+N-1}\Fz(m,n)
\Fe^{-j2\pi (f_1 m+f_2n)}.
\end{equation}
This function can be sampled with spacings $\Delta f_1$, $\Delta f_2$ using a radix-2 2-D
FFT. These samples provide a frequency pair $(k_{1,o} \Delta f_1,$ $k_{2,o} \Delta f_2)$
that lies close to the maximum of $\FL(f_1,f_2)$. Then, it is possible to set up an
interpolation formula like (\ref{eq:29}) but in two dimensions,
\bae{eq:40}{1.5}{l}
\D{\Fc(f_1,f_2)\approx\sum_{p_1=-P_1}^{P_1} \sum_{p_2=-P_2}^{P_2} }\\
\D{{}\hspace{0.5cm}
\frac{\Fc((k_1-p_1)\Delta f_1,(k_2-p_2)\Delta f_2)w_{1,p_1}w_{2,p_2}}
{(\gamma_1-p_1\Delta f_1)(\gamma_2-p_2\Delta f_2)}\cdot}\\
\D{{}\hspace{0.9cm}
\Big[\sum_{p_1=-P_1}^{P_1} \frac{w_{1,p_1}}
{\gamma_1-p_1\Delta f_1}
\sum_{p_2=-P_2}^{P_2}
\frac{w_{2,p_2}}{\gamma_2-p_2\Delta f_2}\Big]^{-1},
}
\end{array}\end{equation}
where $f_1=k_1\Delta f_1+\gamma_1$ and $f_2=k_2\Delta f_2+\gamma_2$ are the modulo
decompositions, and $w_{1,p_1}$ and $w_{2,p_2}$ are the barycentric weights corresponding
to bandwidths $2|m_1|$ and $2|n_1|$, and truncation indices $P_1$ and $P_2$, respectively. If
$\tilde{\Fc}(f_1,f_2)$ denotes the formula in (\ref{eq:40}), the problem of maximizing
$\FL(f_1,f_2)$ can be substituted by the problem of maximizing
\be{eq:41}
\tilde{\FL}(f_1,f_2)\equiv |\tilde{\Fc}(f_1,f_2)|^2.
\end{equation}
Finally, the Newton iteration for this problem is
\be{eq:42}
(f_{1,r+1},f_{2,r+1})=(f_{1,r},f_{2,r})-\mathcal{H}\tilde{\FL}^{-1}\nabla
\tilde{\FL},
\end{equation}
where $\nabla\tilde{\FL}$ and $\mathcal{H}\tilde{\FL}$ are the gradient and Hessian of
$\tilde{\FL}$ respectively, evaluated at $(f_{1,r},f_{2,r})$. These functionals are
\be{eq:43}
\nabla\tilde{\FL}=\frac{2}{MN}\,\mathrm{Re}\{ \nabla\tilde{\Fc}\tilde{\Fc}^*\}
\end{equation}
and
\be{eq:44}
\mathcal{H} \tilde{\FL}=\frac{2}{MN}\,\mathrm{Re}\{\mathcal{H}\tilde{\Fc}\,\tilde{\Fc}^*+
\nabla \tilde{\Fc}\nabla\tilde{\Fc}^\mathrm{H}\},
\end{equation}
where $\nabla{\tilde{\Fc}}$ and $\mathcal{H}\tilde{\Fc}$ are the corresponding gradient
and Hessian of $\tilde{\Fc}$.
Since the evaluation of $\tilde{\Fc}(f_1,f_2)$ has a small cost, independent of both $M$ and $N$,
the complexity is given by that of the 2-D FFT, i.e., it is $\FO(MN\log(MN))$.
Note that sub-optimal methods like that in \cite{So10} have complexity $\FO(MN(M+N))$;
that is, the method in this section yields the ML estimate and, in addition, has a smaller
complexity order.
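For illustration, the 2-D Newton refinement (\ref{eq:42}) admits the following sketch;
the callables grad and hess, which stand for (\ref{eq:43}) and (\ref{eq:44}), and the fixed
iteration count are assumptions of this sketch:
\begin{verbatim}
import numpy as np

def refine_2d(f, grad, hess, iters=5):
    # f: coarse estimate (k_{1,o} Delta_f1, k_{2,o} Delta_f2).
    f = np.asarray(f, dtype=float)
    for _ in range(iters):
        f -= np.linalg.solve(hess(f), grad(f))   # Newton step f - H^{-1} grad
    return f
\end{verbatim}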
\section{Numerical examples}
\label{sec:ne}
Let $\Fphi(f,t)$ denote the value delivered by the barycentric formula in (\ref{eq:46})
when the input signal is the undamped exponential $\Fs(t)=\Fe^{j2\pi ft}$. Since the
functions to interpolate in this paper are sums of exponentials like $\Fe^{j2\pi ft}$, a
simple way to assess the interpolation accuracy is to evaluate the maximum modulus of
$\Fe^{j2\pi fu}-\Fphi(f,u)$, for $|u|\leq T/2$ and $f$ varying in $[-B/2,B/2]$, i.e., to
assess the error spectrum
\be{eq:47}
\FE(f)\equiv \max_{|u|\leq T/2}|\Fe^{j2\pi f u}-\Fphi(f,u)|.
\end{equation}
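A possible numerical evaluation of (\ref{eq:47}) on finite grids of $f$ and $u$ values is
sketched below; the function name and the grid-based approximation of the maximum are
assumptions of this sketch, with $\Fphi$ supplied as a callable:
\begin{verbatim}
import numpy as np

def error_spectrum(phi, f_grid, u_grid):
    # phi(f, u): barycentric interpolation of exp(j 2 pi f t) at t = u.
    E = np.empty(len(f_grid))
    for i, f in enumerate(f_grid):
        E[i] = max(abs(np.exp(2j * np.pi * f * u) - phi(f, u))
                   for u in u_grid)
    return E
\end{verbatim}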
\begin{figure}
\includegraphics{Fig4}
\caption{\label{fig:4} Error spectrum $\FE(f)$ of the barycentric interpolator in
(\ref{eq:46}) for several truncation indices $P$.}
\end{figure}
Fig. \ref{fig:4} shows $\FE(f)$ for $BT=0.25$ and several truncation indices $P$. Note that
any accuracy can be achieved uniformly in $[-B/2,B/2]$ by slightly increasing $P$.
In order to test the interpolation error in a specific example, a 2-D undamped exponential
with $M=500$, $N=651$ and frequencies $f_{o,1}=0.234452$ and $f_{o,2}=-0.143254$ was
generated. Then, complex white noise was added, so that the signal-to-noise ratio was
$S/N=5$ dB. Fig. \ref{fig:2} shows the interpolation error for the cost function
$\FL(f_1,f_2)$, defined by
\be{eq:45}
\epsilon\equiv \frac{\max_{f_1,f_2} |\FL(f_1,f_2)-\tilde{\FL}(f_1,f_2)|}
{\max_{f_1,f_2} \FL(f_1,f_2)},
\end{equation}
for varying truncation index $P$. Again, it is clear that any accuracy can be achieved by
slightly increasing $P$.
\begin{figure}
\includegraphics{Fig2}
\caption{\label{fig:2} Truncation index $P$ versus the $\log_{10}$ of the maximum
interpolation error ($\log_{10}\epsilon$) for the cost function $\FL(f_1,f_2)$.}
\end{figure}
Next, two estimators were compared in this example. To describe them, let $\mz$ denote the
data matrix obtained by sampling (\ref{eq:37}) as described in Sec. \ref{sec:mfe2},
\be{eq:48}
[\mz]_{m,n}\equiv \Fz((m_1+m-1)T_1,(n_1+n-1)T_2),
\end{equation}
and let $\vur_1$, $\vv_1$ denote its first left and right singular vectors, respectively.
The first method consisted in computing the 1-D ML estimator from $\vur_1$ and $\vv_1$ so
as to obtain the respective estimations of $f_{o,1}$ and $f_{o,2}$,
\bae{eq:49}{1.5}{l}
\D{\hat{f}_{o,1}=\arg \max_f \Big|\sum_{m=1}^{M}[\vur_1]_m\Fe^{-j2\pi(m_1-1+m)f}\Big|^2,}\\
\D{\hat{f}_{o,2}=\arg \max_f \Big|\sum_{n=1}^{N}[\vv_1]_n\Fe^{-j2\pi(n_1-1+n)f}\Big|^2.}
\end{array}\end{equation}
The actual estimates $\hat{f}_{o,1}$ and $\hat{f}_{o,2}$ were computed from $\vur_1$ and
$\vv_1$ by applying the 1-D method in Sec. \ref{sec:mfe1} to each of them. This method
had complexity $\FO(MN(M+N))$, since it involved the computation of the singular vectors
$\vur_1$ and $\vv_1$, and is equivalent to the method in \cite{So10}. The second method
computed the actual ML estimator from $\mz$ using the method in Sec. \ref{sec:mfe2}, and
its complexity was just $\FO(MN\log(MN))$. For the values of $M$ and $N$ given above, the
over-sampling factors were $2.048$ and $1.573$, respectively, i.e., the FFT had size 1024
in both dimensions.
\begin{figure}
\includegraphics{Fig1}
\caption{\label{fig:1} Root-Mean-Square (RMS) error of the subspace and ML estimators,
together with the Cramer-Rao bound.}
\end{figure}
Fig. \ref{fig:1} shows the root-mean-square (RMS) error of the first estimator (termed the
subspace estimator) and of the second estimator (the interpolated ML estimator), together
with the Cramer-Rao bound. The threshold performance of the second is clearly superior.
\begin{figure}
\includegraphics{Fig3}
\caption{\label{fig:3} Average number of iterations required by the Newton method.}
\end{figure}
Finally, Fig. \ref{fig:3} shows the average number of iterations required by the
interpolated ML estimator, which was roughly equal to three.
\section{Conclusions}
A method has been presented that allows one to compute the maximum likelihood (ML)
estimate of the frequencies of a complex 2-D sinusoid, with the complexity of the fast Fourier transform
(FFT). First, it is recalled in the paper that a band-limited signal can be interpolated
with high accuracy from a small number of samples, if the sampling frequency is somewhat
higher than the Nyquist frequency, and a specific barycentric formula is proposed to
perform this kind of interpolation. Second, it is shown that the ML cost function for
the estimation of a complex 2-D (and 1-D) sinusoid can be viewed as a band-limited
signal, if the time and frequency variables are switched. Finally, these two results are
combined in a Newton-based method that delivers the ML estimate with the complexity
order of the FFT.
\bibliographystyle{IEEEbib}
\section{Introduction and main results} \label{I}
The paper deals with the existence and multiplicity of solutions for boundary value problems of the form
\[ \label{Qlambda} \tag{$Q_{\lambda}$}
-\Delta u = c_{\lambda}(x) u + \mu(x) |\nabla u|^2 + h(x)\,, \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega)\,,\]
with $ c_{\lambda}$ depending on a real parameter $\lambda$. Here $\Omega \subset \R^N$, $ N \geq 2$, is a bounded domain with boundary $\partial \Omega$ of class $\mathcal{C}^{1,1}$, $c_{\lambda}$ and $h$ belong to $L^q(\Omega)$ for some $q > N/2$ and $\mu$ belongs to $L^{\infty}(\Omega)$.
\medbreak
This type of problem, whose study was initiated by L. Boccardo, F. Murat and J.P. Puel in the 1980s, has attracted renewed attention in recent years.
Under the condition $c_{\lambda} \leq -\alpha_0 < 0$ a.e. in $\Omega$ for some $\alpha_0 > 0$,
the existence of a solution of \eqref{Qlambda} is a particular case of the results of \cite{B_M_P_1983, B_M_P_1992} and its
uniqueness follows from \cite{B_B_G_K_1999, B_M_1995}. The case $c_{\lambda} \equiv 0$
was studied in \cite{F_M_2000, A_DA_P_2006} and the existence requires some smallness condition on $\| \mu h\|_{N/2}$.
The situation where one only requires $c_{\lambda} \leq 0$ a.e. in $\Omega$ (i.e. allowing parts of
the domain where $c_{\lambda} \equiv 0$ and parts where $c_{\lambda} < 0$)
proved to be more complex to treat. In the recent papers \cite{A_DC_J_T_2015, DC_F_2018}, the authors give explicit sufficient conditions for the existence of solutions of \eqref{Qlambda}. Moreover, in \cite{A_DC_J_T_2015}, the uniqueness of the solution is established (see also \cite{A_DC_J_T_2014} in that direction). All these results were obtained without requiring any sign conditions on $\mu$ and $h$.
\medbreak
In case $c_{\lambda} = \lambda c \gneqq 0$, as we shall discuss later, problem \eqref{Qlambda} behaves very differently and becomes much richer. Following \cite{S_2010}, which considers a particular case,
\cite{J_S_2013} studied \eqref{Qlambda} with $\mu(x) \equiv \mu > 0$ and
$\lambda c \gneqq 0$ but without a sign condition on $h$. The authors proved the existence of at least two
solutions when $\lambda>0$ and $\|(\mu h)^{+}\|_{N/2}$ are small enough.
The restriction that $\mu$ be constant was removed in \cite{A_DC_J_T_2015}, where the case $\mu(x) \geq \mu_1 > 0$ a.e. in $\Omega$ is treated, at the expense of adding the hypothesis $h \gneqq 0$.
Next, in \cite{DC_J_2017}, assuming stronger regularity on $c$ and $h$, the authors removed the condition $h \gneqq 0$. In this paper, it is also highlighted that the structure of the set of solutions when $\lambda >0$ crucially depends on the sign of the (unique) solution of $(Q_0)$. Note that, in \cite{DC_F_2018}, the above results are extended to the $p$-Laplacian case. Also, in the framework of viscosity solutions and fully nonlinear equations, under corresponding assumptions, similar conclusions have been obtained very recently in \cite{SN_S_2018}.
\medbreak
We refer to \cite{J_S_2013} for a heuristic explanation of how the behavior of \eqref{Qlambda} is affected by the change of sign in front of the linear term. Actually, in the case where $\mu(x) \equiv \mu$ is a constant, it is possible to transform problem \eqref{Qlambda} into a new one which admits a variational formulation.
When $c_{\lambda} \leq - \alpha_0 < 0$, the associated functional, defined on $H_0^1(\Omega)$, is coercive. If $c_{\lambda} \lneqq 0$, the coerciveness may be lost and when $c_{\lambda} \gneqq 0$, in fact as soon as $c_{\lambda}^+ \gneqq 0$, the functional is unbounded from below.
In \cite{J_S_2013} this variational formulation was directly used to obtain the solutions. In \cite{A_DC_J_T_2015, DC_J_2017} where $\mu$ is non constant, topological arguments, relying on the derivation of a priori bounds for certain classes of solutions, were used.
\medbreak
The only known results where $c_{\lambda}$ may change sign are \cite{J_RQ_2016, DC_F_2018-A3} (see also \cite{GM_I_RQ_2015} for related problems).
They both concern the case where $\mu$ is
a positive constant. In \cite{J_RQ_2016}, assuming $h \gneqq 0$, $\mu h$ and $c_{\lambda}^+$ small in an appropriate sense, the existence of at least two non-negative solutions was proved. In \cite{DC_F_2018-A3}, the authors show that the loss of positivity of the coefficient of $u$ does not affect the structure of the set of solutions of \eqref{Qlambda} observed in \cite{DC_J_2017} when
$c_{\lambda}=\lambda c \gneqq 0$. Since $\mu$ is constant in \cite{J_RQ_2016, DC_F_2018-A3}, it is possible to treat the problem variationally. The main issue, to derive the existence of solutions, is then to show the boundedness of the Palais-Smale sequences.
\medbreak
When $c_{\lambda} \gneqq 0$, all the above mentioned results require $\mu$ either to be constant or to be uniformly bounded from below by a positive constant (or, similarly, bounded from above by a negative constant). In \cite{Soup_2015}, assuming that the three coefficient functions are non-negative, a first attempt to remove these restrictions on $\mu$ is presented.
Following the approach of \cite{A_DC_J_T_2015}, the proofs of the existence results reduce to obtaining a priori bounds on the non-negative solutions of \eqref{Plambda}. First it is observed in \cite{Soup_2015} that a necessary condition is the existence of a ball $B(x_0,\rho)\subset\Omega$ and
$\nu>0$ such that $\mu\geq \nu$ and $c\geq \nu$ on $B(x_0,\rho)$. When $N=2$ this condition also proves to be sufficient. If $N=3$ or $4$, the condition $\mu\geq \mu_0>0$ on a set
$\omega \subset \Omega$ such that $\mbox{supp}(c)\subset\overline{\omega}$ permits one to obtain the a priori bounds. Other sets of conditions are presented when $N=3$ and $N=5$. However, while the approach developed in \cite{Soup_2015}, which relies on interpolation and elliptic estimates in weighted Lebesgue spaces, works well in low dimensions, it is not apparent how to extend it to dimensions $N \geq 6$.
\medbreak
In this paper we pursue the study of \eqref{Qlambda} and consider situations where the three coefficients functions $c_{\lambda}$, $\mu$ and $h$ may change sign. We define for $v \in L^1(\Omega)$, $v^+= \max(v,0)$ and $v^- =
\max(-v,0)$.
As observed already in \cite{DC_F_2018-A3}, the structure of the solution set depends on the size of the
positive hump (i.e. $c_{\lambda}^+$) but is not affected by the size of the negative hump (i.e. $c_{\lambda}^-$).
To make this apparent, we now write $c_{\lambda}$ in the form $c_{\lambda}=\lambda c_+- c_-$
and
consider the problem
\[ \label{Plambda} \tag{$P_{\lambda}$}
-\Delta u = (\lambda c_+(x)- c_-(x)) u + \mu(x) |\nabla u|^2 + h(x)\,, \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega)\,,\]
under the assumption
\[ \label{A1} \tag{$A_1$}
\left\{
\begin{aligned}
&\Omega \subset \mathbb{R}^{N},\, N \geq 2, \textup{ is a bounded domain with boundary }\partial \Omega \textup{ of class }\mathcal{C}^{1,1},
\\&c_+, c_- , h^{+} \in L^q(\Omega) \textup{ for some } q > N/2 \,,
\ \mu, h^{-} \in L^{\infty}(\Omega) \,,
\\
& c_+(x) \geq 0, \ c_-(x) \geq 0 \textup{ and } c_-(x) c_+(x) =0 \textup{ a.e. in }\Omega,
\\
& |\Omega_{+}|> 0, \textup{ where } \Omega_{+} := \operatorname{Supp}(c_{+}),
\\
&\textup{there exist }\epsilon>0 \textup{ and } \mu_1 > 0 \textup{ such that }\mu(x) \geq \mu_1 \textup{ and }c_- = 0 \textup{ in } \{x\in \Omega : d(x,\Omega_+)<\epsilon\}.
\end{aligned}
\right.
\]
For a definition of $\operatorname{Supp}(f)$ with $f \in L^p(\Omega)$ for some $p \geq 1$, we refer to \cite[Proposition 4.17]{Brezis}. Note also that the condition that $c_-=0$ on $\{x\in \Omega : d(x,\Omega_+)<\epsilon\}$ for some $\epsilon >0$ is reminiscent of the so-called ``thick zero set'' condition first introduced in \cite{AlTa}.
\medbreak
We also observe that, under the regularity assumptions of condition \eqref{A1}, any solution of \eqref{Plambda} belongs to $\mathcal{C}^{0,\tau}(\overline{\Omega})$ for some $\tau > 0$. This can be deduced from \cite[Theorem IX-2.2]{L_U_1968}; see also
\cite[Proposition 2.1]{A_DC_J_T_2014}.
\medbreak
As in \cite{A_DC_J_T_2015,DC_J_2017, Soup_2015} we obtain our results using a topological approach, relying thus on the derivation of a priori bounds. In that direction our main result is the following.
\begin{theorem}
\label{aPrioriBound}
Assume \eqref{A1}.
Then, for any $ \Lambda_2 > \Lambda_1 > 0$, there exists a constant $M > 0$ such that, for each
$\lambda \in [\Lambda_1, \Lambda_2]$, any
solution of \eqref{Plambda} satisfies
$\sup_{\Omega} u \leq M$.
\end{theorem}
Having at hand this a priori bound, following the strategy of \cite{A_DC_J_T_2015}, we show the existence of a continuum of solutions of \eqref{Plambda}. More precisely, defining
\begin{equation} \label{sigma}
\Sigma := \{ (\lambda, u ) \in \R \times \mathcal{C}(\overline{\Omega}) :
u \textup{ solves } \eqref{Plambda} \},
\end{equation}
we prove the following theorem.
\begin{theorem} \label{th1}
Assume \eqref{A1} and suppose that $(P_0)$ has a solution $u_0$ with $c_+u_0 \gneqq 0$.
Then, there exists a continuum $\mathscr{C} \subset \Sigma$ such that the projection of $\mathscr{C}$ on the
$\lambda$-axis is an unbounded interval $(-\infty,\overline{\lambda}]$ for some
$\overline{\lambda} \in (0,+\infty)$ and $\mathscr{C}$ bifurcates from infinity to the right of the axis $\lambda = 0$.
Moreover:
\begin{itemize}
\item[1)]
for all $\lambda\leq0$, the problem \eqref{Plambda} has a unique solution $u_{\lambda}$ and this solution satisfies $u_0-\|u_0\|_{\infty}\leq u_{\lambda}\leq u_0$.
\item[2)]
there exists $\lambda_0 \in (0, \overline{\lambda}]$ such that, for all $\lambda \in (0,\lambda_0)$,
the problem \eqref{Plambda} has at least two solutions $u_1$, $u_2$ with $u_i\geq u_0$ for $i=1$, $2$.
\end{itemize}
\end{theorem}
\begin{remark} \mbox{}
\begin{itemize}
\item[(a)]
Theorem \ref{th1}, 1) generalizes \cite[Theorem 1.2]{A_DC_J_T_2015}.
\medbreak
\item[(b)]
Note that problem $(P_0)$ is given by
$$-\Delta u = - c_{-}(x)u + \mu(x) |\nabla u|^2 + h(x)\,, \qquad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega)\,.$$
In \cite{A_DC_J_T_2015, DC_F_2018} the authors give sufficient conditions to ensure the existence of a
solution of $(P_0)$.
Moreover, if $h \geq 0$ in $\Omega$, \cite[Lemma 2.2]{A_DC_J_T_2014} implies that the solution of $(P_0)$ is non-negative.
\end{itemize}
\end{remark}
Let us give some ideas of the proofs. As we do not have global sign conditions, the approaches used in \cite{A_DC_J_T_2015,DC_J_2017,Soup_2015} to
obtain the a priori bounds do not apply anymore and another strategy is required.
To this aim, we further develop some techniques first sketched in the unpublished work
\cite{S_2015}. These techniques, in the framework of viscosity solutions of fully nonlinear equations,
now lie at the heart of the paper \cite{SN_S_2018}. We also make use of some ideas
from \cite{GM_I_RQ_2015}. First we show, in Lemma \ref{Step 1}, that it is sufficient to control the behavior
of the solutions on $\overline{\Omega}_{+}$. By compactness, we are then reduced to studying what happens around an (unknown) point $\overline{x} \in \overline{\Omega}_{+}$. We shall consider separately the alternative cases $\overline{x} \in \overline{\Omega}_{+} \cap \Omega$ and $\overline{x} \in \overline{\Omega}_{+} \cap \partial \Omega$. A local analysis is made, respectively, in a ball or a half-ball centered at $\overline{x}$.
While similar analyses, based on the use of Harnack-type
inequalities, have previously been performed in other contexts when $\overline{x} \in \Omega$,
we believe this is not the case when $\overline{x} \in \partial \Omega$.
For $\overline{x} \in \partial \Omega$, the key to our approach is the use of a boundary weak Harnack inequality.
Actually a major part of the paper is devoted to establishing this inequality.
This is done in a more general context than needed for \eqref{Plambda};
in particular, it also covers the case of the $p$-Laplacian with a zero order term. We believe
that this ``boundary weak Harnack inequality'', see Theorem \ref{BWHIP},
is of independent interest and will prove to be useful in other settings. Its proof uses ideas introduced by
B. Sirakov \cite{S_2017}, where this type of inequality is
established for a uniformly elliptic operator and viscosity solutions. However, since our context is
quite different, the result of \cite{S_2017} does not apply to our situation and we need
to work out an adapted proof.
\medbreak
We now describe the organization of the paper. In Section \ref{II}, we present some preliminary results which are needed in the development of our proofs. In Section \ref{appBWHI}, we prove the boundary weak Harnack inequality for the $p$-Laplacian. The a priori bound, namely Theorem \ref{aPrioriBound}, is proved in Section \ref{III}. Finally Section \ref{IV} is devoted to the proof of Theorem \ref{th1}.
\bigbreak
\noindent \textbf{Notation.}
\begin{enumerate}{\small
\item[1)] In $\mathbb R^N$, we use the notations $|x|=\sqrt{x_1^2+\ldots+x_N^2}$ and $B_R(y)=\{x\in \mathbb R^N : |x-y|<R\}$.
\item[2)] We denote $\mathbb R^+=(0,+\infty)$, $\mathbb R^-=(-\infty,0)$ and $\mathbb N=\{1,2,3,\ldots\}$.
\item[3)] For $h_1$, $h_2\in L^1(\Omega)$ we write
\begin{itemize}
\item $h_1\leq h_2$ if $h_1(x)\leq h_2(x)$ for a.e. $x\in\Omega$,
\item $h_1\lneqq h_2$ if $h_1\leq h_2$ and
$\textup{meas}(\{x\in\Omega:
h_1(x)<h_2(x)\})>0$.
\end{itemize}}
\end{enumerate}
\section{Preliminary results} \label{II}
In this section, we collect some results which will play an important role throughout the work. First of all, let us consider the boundary value problem
\begin{equation} \label{eqlu}
-\Delta u + H(x,u,\nabla u) = f, \qquad u \in H_0^1(\omega) \cap L^{\infty}(\omega).
\end{equation}
Here $\omega \subset \R^N$ is a bounded domain, $f \in L^1(\omega)$ and $H: \omega \times \R \times \mathbb{R}^{N} \rightarrow \R$ is a Carath\'eodory function.
\begin{definition}\label{lower-upper}
We say that $\alpha \in H^1(\omega) \cap L^{\infty}(\omega)$ is a \textit{lower solution} of \eqref{eqlu}
if $\alpha^{+} \in H_0^1(\omega)$ and, for all $\varphi \in H_0^1(\omega) \cap L^{\infty}(\omega)$ with $\varphi \geq 0$, we have
\[ \int_{\omega} \nabla \alpha \nabla \varphi\, dx + \int_{\omega} H(x,\alpha, \nabla \alpha) \varphi\, dx
\leq \int_{\omega} f(x) \varphi\, dx\,.\]
Similarly, $\beta \in H^{1}(\omega) \cap L^{\infty}(\omega)$ is an \textit{upper solution} of \eqref{eqlu}
if $\beta^{-} \in H_0^1(\omega)$ and, for all $\varphi \in H_0^1(\omega) \cap L^{\infty}(\omega)$ with $\varphi \geq 0$, we have
\[ \int_{\omega} \nabla \beta \nabla \varphi\,dx + \int_{\omega} H(x,\beta, \nabla \beta) \varphi\, dx
\geq \int_{\omega} f(x) \varphi\,dx\,.\]
\end{definition}
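For instance, with $\omega = \Omega$, $f = h$ and $H(x,u,\xi) = \big(c_-(x)-\lambda c_+(x)\big)u - \mu(x)|\xi|^2$, problem \eqref{eqlu} reduces to \eqref{Plambda}. In that case, if $h \geq 0$ a.e. in $\Omega$, then $\alpha \equiv 0$ is a lower solution: both integrals on the left-hand side of the corresponding inequality vanish, while $\int_{\omega} f(x) \varphi\, dx \geq 0$ for every admissible $\varphi \geq 0$. We record this elementary example only as an illustration; it is not used below.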
Next, we consider the boundary value problem
\begin{equation} \label{eqBC}
-\Delta u + a(x)u = b(x)\,, \qquad u \in H_0^1(\omega)\,,
\end{equation}
under the assumption
\begin{equation} \label{hypLMP}
\left\{
\begin{aligned}
& \omega \subset \mathbb{R}^{N},\ N \geq 2\,, \textup{ is a bounded domain,} \\
& a,\,\, b \in L^r(\omega)\, \textup{ for some }\, r > N/2.
\end{aligned}
\right.
\end{equation}
\begin{remark}
With the regularity imposed in the following lemmas and in the absence of a gradient term in the equation, we do not need the lower and upper solutions to be bounded. The full Definition \ref{lower-upper} will however be needed in other parts of the paper.
\end{remark}
\begin{lemma} \rm\textbf{(Local Maximum Principle)} \label{LMP}
\it Under the assumption \eqref{hypLMP}, assume that $u \in H^1(\omega)$ is a lower solution of \eqref{eqBC}.
For any ball $B_{2R} (y) \subset \omega$ and any $ s > 0$,
there exists $C = C(s,r,\|a\|_{L^r(B_{2R}(y))},R) > 0$ such that
\[ \sup_{B_R(y)}u^{+} \leq C \Big[ \Big(\int_{B_{2R}(y)} (u^{+})^s dx \Big)^{1/s}
+ \|b^+\|_{L^r(B_{2R}(y))}\Big] \,.\]
\end{lemma}
\begin{proof}
See for instance \cite[Theorem 8.17]{G_T_2001_S_Ed} and \cite[Corollary 3.10]{M_Z_1997}.
\end{proof}
\begin{lemma} \rm\textbf{(Boundary Local Maximum Principle)} \label{BLMP}
\it Under the assumption \eqref{hypLMP}, assume that $u \in H^1(\omega)$ is a lower solution of \eqref{eqBC}
and let $x_0 \in \partial \omega$. For any $R > 0$ and any $ s > 0$, there exists
$C = C(s,r,\|a\|_{L^r(B_{2R}(x_0) \cap \omega)},R) > 0$ such that
\[ \sup_{B_R(x_0) \cap \omega}u^{+} \leq C \Big[ \Big(\int_{B_{2R}(x_0) \cap \omega} (u^{+})^s dx
\Big)^{1/s} + \|b^+\|_{L^r(B_{2R}(x_0) \cap \omega)}\Big] \,.\]
\end{lemma}
\begin{proof}
See for instance \cite[Theorem 8.25]{G_T_2001_S_Ed} and \cite[Corollary 3.10 and Theorem 3.11]{M_Z_1997}.
\end{proof}
\begin{remark}
The proofs of Lemmas \ref{LMP} and \ref{BLMP} are given in \cite{G_T_2001_S_Ed} for $a \in L^{\infty}(\omega)$
and $s>1$. Nevertheless, as remarked on page 193 of that book, the proof is valid for
$a \in L^r(\omega)$ with $r > N/2$ and, following closely the proof of \cite[Corollary 3.10]{M_Z_1997}, the proofs can be extended to any $s > 0$.
\end{remark}
\begin{lemma}\rm\textbf{(Weak Harnack Inequality)} \label{WHI}
\it Under the assumption \eqref{hypLMP}, assume that $u \in H^1(\omega)$ is a non-negative upper solution
of \eqref{eqBC}. Then, for any ball $B_{4R}(y) \subset \omega$
and any $1 \leq s < \frac{N}{N-2}$ there exists $C = C(s,r,\|a\|_{L^r(B_{4R}(y))},R) > 0$ such that
\[\inf_{B_R(y)} u \geq C \Big[ \Big( \int_{B_{2R}(y)} u^s dx \Big)^{1/s}
- \|b^-\|_{L^{r}(B_{4R}(y))} \Big] \,.\]
\end{lemma}
\begin{proof}
See for instance \cite[Theorem 8.18]{G_T_2001_S_Ed} and \cite[Theorem 3.13]{M_Z_1997}.
\end{proof}
Now, inspired by \cite[Lemma 3.2]{B_C_1998} (see also \cite[Appendix A]{D_2011}), we establish the following version of the Brezis-Cabr\'e Lemma.
\begin{lemma} \label{bcLemma1}
Let $\omega \subset \mathbb{R}^{N}$, $N \geq 2$, be a bounded domain with boundary $\partial \omega$ of class $\mathcal{C}^{1,1}$ and let $a \in L^{\infty}(\omega)$ and $f \in L^1(\omega)$ be non-negative functions. Assume that $ u \in H^1(\omega)$ is an upper solution of
\[ -\Delta u + a(x) u = f(x)\,, \quad u \in H_0^1(\omega)\,.\]
Then, for every $B_{2R} (y)\subset \omega$, there exists
$C = C(R,y,\omega,\|a\|_{\infty}) > 0$ such that
\[ \inf_{\omega} \frac{u(x)}{d(x,\partial \omega)} \geq C \int_{B_{R}(y)} f(x) \,dx\,. \]
\end{lemma}
\begin{proof}
First of all, as $a$ and $f$ are non-negative, by the weak maximum principle, it follows that
\[\inf_{\omega} \frac{u(x)}{d(x,\partial \omega)}\geq 0\,.\]
Now let $B_{2R} (y)\subset \omega\,.$ By the above inequality, we can assume without loss of generality that
\[
\int_{B_R(y)} f(x)\, dx > 0\,.
\]
We split the proof into three steps.
\medbreak
\noindent \textbf{Step 1:} \textit{There exists $c_1 = c_1(R,y,\omega,\|a\|_{\infty}) > 0$ such that}
\begin{equation} \label{bc3}
\frac{u(x)}{d(x,\partial \omega)} \geq c_1 \int_{B_R(y)} f(x) \,dx\,, \quad \forall\ x \in \overline{B_{R/2}(y)}\,.
\end{equation}
\indent Since $f$ is non-negative, observe that $u$ is a non-negative upper solution of
\[ -\Delta u + a(x) u = 0\,, \quad u \in H_0^1(\omega)\,.\]
Hence, by Lemma \ref{WHI}, there exists a constant
$c_2 = c_2(R,\|a\|_{\infty}) > 0$ such that
\begin{equation} \label{bc1}
u(x) \geq c_2\int_{B_{R}(y)} u\, dx\,, \quad \forall\ x \in \overline{B_{R/2}(y)}\,.
\end{equation}
Now, let us denote by $\xi$ the solution of
\begin{equation}
\left\{
\begin{aligned}
-\Delta \xi + \|a\|_{\infty}\xi & = \chi_{B_{R}(y)}\,, & \textup{ in } \omega\,,\\
\xi & = 0\,, & \textup{ on } \partial \omega\,.
\end{aligned}
\right.
\end{equation}
By \cite[Theorem 3]{B_N_1993}, there exists a constant $c_3 = c_3(R,y, \omega,\|a\|_{\infty}) > 0$ such that, for all $ x \in \omega$, $\xi(x) \geq c_3 d(x,\partial \omega)$. Thus, since $B_{2R}(y) \subset \omega\,,$ $f$ is non-negative and $d(x,\partial \omega)\geq R$ for $x\in B_R(y)$,
it follows that
\begin{equation*}
\int_{B_{R}(y)} u\, dx = \int_{\omega} u\big(-\Delta \xi + \|a\|_{\infty}\xi\big)\, dx
\geq \int_{\omega} f(x)\,\xi\, dx \geq c_3 \int_{\omega} f(x) \,d(x,\partial \omega)\, dx
\geq c_3 R \int_{B_R(y)} f(x)\, dx\,.
\end{equation*}
Hence, substituting the above information in \eqref{bc1} we obtain for $c_4 = c_2c_3R$
\begin{equation} \label{bc2}
u(x) \geq c_4 \int_{B_R(y)} f(x) \,dx\,, \quad \forall\ x \in \overline{B_{R/2}(y)}\,,
\end{equation}
from which, since $\omega \subset \R^N$ is bounded, \eqref{bc3} follows.
\medbreak
\noindent \textbf{Step 2:} \textit{There exists $c_5 = c_5(R,y,\omega,\|a\|_{\infty}) > 0$ such that}
\begin{equation} \label{bc5}
\frac{u(x)}{d(x,\partial \omega)} \geq c_5 \int_{B_R(y)} f(x) \,dx\,, \quad
\forall\ x \in \omega \setminus \overline{B_{R/2}(y)}.
\end{equation}
\indent Let $w$ be the unique solution of
\begin{equation} \label{bc4}
\left\{
\begin{aligned}
-\Delta w + \|a\|_{\infty}w & = 0 \,, \quad & \textup{ in } \omega \setminus \overline{B_{R/2}(y)}\,, \\
w & = 0\,, & \textup{ on } \partial \omega\,,\\
w & = 1\,, & \textup{ on } \partial B_{R/2}(y)\,.
\end{aligned}
\right.
\end{equation}
Still by \cite[Theorem 3]{B_N_1993}, there exists $c_6 = c_6(R,y,\omega,\|a\|_{\infty}) > 0$ such that $w(x) \geq c_6 d(x,\partial \omega)$ for all
$x \in \omega \setminus \overline{B_{R/2}(y)}$.
On the other hand, let us introduce
\[ v(x) = \frac{u(x)}{c_4 \int_{B_R(y)} f(x) \,dx}\,,\]
with $c_4$ given in \eqref{bc2}. Observe that $v$ is an upper solution of \eqref{bc4}. Hence, by the standard comparison principle, it follows that $v(x) \geq w(x)$ for all $x \in \omega \setminus \overline{B_{R/2}(y)}$ and \eqref{bc5} follows.
\medbreak
\noindent \textbf{Step 3:} \textit{Conclusion.}
\medbreak
The result follows from \eqref{bc3} and \eqref{bc5}.
\end{proof}
\section{Boundary weak Harnack inequality} \label{appBWHI}
In this section we present a \textit{boundary weak Harnack inequality} that will be central in the proof of Theorem \ref{aPrioriBound}. As we believe this type of inequality has its own interest, we establish it in the more general framework of the $p$-Laplacian. Recalling that $\Delta_p u = \textup{div}(|\nabla u|^{p-2}\nabla u)$ for $1<p<\infty$, we introduce the boundary value problem
\begin{equation} \label{eqBWHIP}
\pLaplac u + a(x)|u|^{p-2}u = 0\,, \quad u \in W_0^{1,p}(\omega)\,.
\end{equation}
Let us also recall that $u \in W^{1,p}(\omega)$ is an {\it upper solution of \eqref{eqBWHIP}} if $u^{-} \in W_0^{1,p}(\omega)$ and, for all $\varphi \in W_0^{1,p}(\omega)$ with $\varphi \geq 0$, it follows that
\begin{equation*}
\int_{\omega} |\nabla u|^{p-2} \nabla u\, \nabla \varphi \, dx + \int_{\omega} a(x)|u|^{p-2}u \,\varphi \, dx\geq 0\,.
\end{equation*}
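For instance, if $a \geq 0$ a.e. in $\omega$, then any non-negative constant $u \equiv k$ is an upper solution of \eqref{eqBWHIP}: the gradient term vanishes and $\int_{\omega} a\, k^{p-1} \varphi \, dx \geq 0$ for every $\varphi \geq 0$. This trivial example is recorded only for orientation.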
We then prove the following result.
\begin{theorem} \rm \textbf{(Boundary Weak Harnack Inequality)} \label{BWHIP}
Let $\omega \subset \mathbb{R}^{N}$, $N \geq 2$, be a bounded domain with boundary $\partial \omega$ of class
$\mathcal{C}^{1,1}$ and let $a \in L^{\infty}(\omega)$ be a non-negative function. Assume that $u$ is a non-negative upper solution of \eqref{eqBWHIP}
and let $x_0 \in \partial \omega$. Then, there exist $\overline R>0$,
$\epsilon = \epsilon(p, \overline{R},\|a\|_{\infty},\omega) > 0$ and
$C = C(p,\overline{R}, \epsilon,\|a\|_{\infty},\omega) > 0$ such that, for all
$R \in(0,\overline R]\,,$
\[ \inf_{B_R(x_0) \cap \omega} \frac{u(x)}{d(x,\partial \omega)}
\geq C \,\Big( \int_{B_R(x_0) \cap \omega}
\Big( \frac{u(x)}{d(x,\partial \omega)} \Big)^{\epsilon} \,dx \Big)^{1/\epsilon}\,. \]
\end{theorem}
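Before going into the proof, let us record an elementary consistency check, which is not used in the sequel. For $p = 2$, $a \equiv 0$ and $\omega = B_1(0)$, the function $u(x) = 1-|x|^2$ is a non-negative upper solution of \eqref{eqBWHIP}, since $-\Delta u = 2N \geq 0$, and
\[ \frac{u(x)}{d(x,\partial \omega)} = \frac{1-|x|^2}{1-|x|} = 1+|x| \in [1,2)\,, \]
so that the left-hand side of the inequality above is at least $1$, the right-hand side is bounded, and the conclusion clearly holds for a suitable constant $C$.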
\medbreak
As already indicated, in the proof of Theorem \ref{BWHIP} we shall make use of some ideas from \cite{S_2017}.
\medbreak
Before going further, let us introduce some notation that will be used throughout the section. We define
\[ r := r(N,p) = \left\{
\begin{aligned}
& \frac{N(p-1)}{N-p} &\textup{ if } p < N, \\
& +\infty & \textup{ if } p \geq N,
\end{aligned}
\right.
\]
\vspace{0.05cm}
\noindent and denote by $Q_{\rho}(y)$ the cube of center $y$ and side of length $\rho$, i.e.
\[ Q_{\rho}(y) = \{ x \in \R^N: |x_i-y_i| < \rho/2 \textup{ for } i = 1, \ldots, N \}.
\]
\vspace{0.05cm}
\noindent In case the center of the cube is $\rho e$ with $e = (0,0,\ldots,1/2)$, we use the notation
$Q_{\rho} = Q_{\rho}(\rho e)$.
\medbreak
Let us now introduce several auxiliary results that we shall need to prove Theorem \ref{BWHIP}.
We begin by recalling the following comparison principle for the $p$-Laplacian.
\begin{lemma} \rm \cite[Lemma 3.1]{T_1983} \label{CPT} \it
Let $\omega \subset \mathbb{R}^{N}$, $N \geq 2$, be a bounded domain and let $a \in L^{\infty}(\omega)$ be a non-negative function.
Assume that $u\,, v \in W^{1,p}(\omega)$ satisfy (in a weak sense)
\begin{equation*}
\left\{
\begin{aligned}
\pLaplac u + a(x)|u|^{p-2} u & \leq \pLaplac v + a(x)|v|^{p-2}v\,, \quad & \textup{ in }\omega\,, \\
u & \leq v\,, \quad & \textup{ on } \partial \omega\,.
\end{aligned}
\right.
\end{equation*}
Then, it follows that $u \leq v$.
\end{lemma}
\medbreak
As a second ingredient, we need the weak Harnack inequality.
\begin{theorem} \rm \cite[Theorem 3.13]{M_Z_1997}
\label{IWHIRefined} \it
Let $\omega \subset \mathbb{R}^{N}$, $N \geq 2$,
be a bounded domain and let $a \in L^{\infty}(\omega)$ be a non-negative function. Assume that $u \in W^{1,p}(\omega)$
is a non-negative upper solution of
\begin{equation*}
\pLaplac u + a(x)|u|^{p-2}u = 0\,, \quad u \in W_0^{1,p}(\omega)\,,
\end{equation*}
and let $Q_{\rho}(x_0) \subset \omega$. Then, for any $\sigma,\tau \in (0,1)$ and $\gamma \in (0,r)$,
there exists $C = C(p,\gamma,\sigma,\tau,\rho,\|a\|_{\infty}) > 0$ such that
\[ \inf_{Q_{\tau \rho}(x_0)} u \geq
C \Big( \int_{Q_{\sigma \rho}(x_0)} u^{\gamma} \,dx \Big)^{1/\gamma}\,.\]
\end{theorem}
In the next result, we deduce more precise information on the dependence of $C$ with respect to
$\rho$. This is closely related to \cite[Theorem 1.2]{T_1967}, where however the constant still depends on
$\rho$.
\begin{cor}
\label{IWHICubes}
Let $a$ be a non-negative constant and $\gamma \in (0,r)$. There exists $C = C(p,\gamma,a) > 0$ such that, for all $0 < \tilde{\rho} \leq 1$, any non-negative upper solution
$u \in W^{1,p}(Q_{\frac{3\tilde \rho}{2}}(e))$ of
\begin{equation} \label{equ CorA4 u}
\pLaplac u + a |u|^{p-2}u = 0\,, \quad u \in W_{0}^{1,p}(Q_{\frac{3\tilde \rho}{2}}(e)),
\end{equation}
satisfies
\[
\inf_{Q_{\tilde \rho}(e)} u
\geq
C\, {\tilde \rho}^{\,-N/\gamma} \Big( \int_{Q_{\tilde \rho}(e)} u^{\gamma} \,dx \Big)^{1/\gamma}\,.
\]
\end{cor}
\begin{proof}
Let $C = C(p,a,\gamma) > 0$ be the constant given by Theorem \ref{IWHIRefined} applied with $ \rho=\frac{3}{2}$ and $\sigma=\tau=\frac{2}{3}$. This means that if $v \in W^{1,p}(Q_{\frac{3}{2}}(e))$ is a non-negative upper solution of
\begin{equation} \label{eq cub 32}
\pLaplac v + a |v|^{p-2}v = 0\,, \quad \quad v \in W_0^{1,p}(Q_{\frac{3}{2}}(e)),
\end{equation}
then
\[ \inf_{Q_{1}(e)} v(y) \geq
C \Big( \int_{Q_{1}(e)} v^{\gamma} \,dy \Big)^{1/\gamma}\,.
\]
As $0 < \tilde{\rho} \leq 1$, observe that if $u$ is a non-negative upper solution of \eqref{equ CorA4 u}, then $v$ defined by $v(y)=u(\tilde \rho y', \tilde \rho (y_N-\frac12)+\frac12)$, where $y=(y',y_N)$ with $y'\in \mathbb R^{N-1}$, is a non-negative upper solution of \eqref{eq cub 32}.
Thus, we can conclude that
\begin{equation*}
\inf_{Q_{\tilde \rho}(e)} u(x) = \inf_{Q_{1}(e)} v(y) \geq
C \Big( \int_{Q_{1}(e)} v^{\gamma} \,dy \Big)^{1/\gamma}
=
C {\tilde \rho}^{\,-N/\gamma} \Big( \int_{Q_{\tilde \rho}(e)} u^{\gamma} \,dx \Big)^{1/\gamma}\,.
\end{equation*}
\end{proof}
\medbreak
Finally, we introduce a technical result of measure theory.
\begin{lemma} \rm \cite[Lemma 2.1]{I_S_2016} \it \label{GISL}
Let $E \subset F \subset Q_1$ be two open sets. Assume there exists $\alpha \in (0,1)$ such that:
\begin{itemize}
\item $|E| \leq (1-\alpha)|Q_1|$.
\item For any cube $Q \subset Q_1$, $|Q \cap E| \geq (1-\alpha)|Q|$ implies $Q \subset F$.
\end{itemize}
Then, it follows that $|E| \leq (1-c\alpha)|F|$ for some constant $c = c(N) \in (0,1)$.
\end{lemma}
Now we can turn to the proof of the main result. We first prove the boundary weak Harnack inequality for cubes and, as a consequence, we obtain the desired result.
\begin{lemma}[\textbf{Growth lemma}] \label{GL}
Let $a$
be a non-negative constant. Given $\nu > 0$,
there exists $k = k(p,\nu, a) > 0$ such that, if $u \in W^{1,p}(Q_{\frac32})$ is a non-negative upper solution of
\begin{equation*}
\pLaplac u + a\,|u|^{p-2}u = 0\,, \quad u \in W_{0}^{1,p}(Q_{\frac32} )\,,
\end{equation*}
and the following inequality holds
\begin{equation} \label{hypGL}
|\{x\in Q_1 : u(x) > x_N \}| \geq \nu\,,
\end{equation}
then $u(x) > k x_N$ in $Q_1$.
\end{lemma}
\begin{remark}
Before we prove the Lemma, observe that there is no loss of generality in considering $a$ a non-negative constant instead of $a\in L^{\infty}(Q_{\frac32})$ non-negative. If $u\geq 0$ satisfies
\begin{equation*}
\pLaplac u + a(x)|u|^{p-2}u \geq 0\,, \quad \textup{ in } Q_{\frac32},
\end{equation*}
then $u$ also satisfies
\begin{equation*}
\pLaplac u + \|a\|_{\infty}\,|u|^{p-2}u \geq 0\,, \quad \textup{ in } Q_{\frac32}.
\end{equation*}
\end{remark}
\begin{proof}
Let us define
$S_{\delta} = Q_{\frac32} \setminus Q_{\frac32-\delta}\big(\frac32 e\big)$
and fix $c_1 = c_1(\nu) \in (0,\frac12)$ small enough in order to ensure that
$|S_{\delta}| \leq \frac{\nu}{2}$ for any $0 < \delta \leq c_1$.
\medbreak
\noindent \textbf{Step 1:} \textit{For all $\delta\in (0, c_1]$, it follows that
$ |\{x \in Q_{\frac32-\delta}\big(\frac32 e\big): u(x) > x_N \} | \geq \frac{\nu}{2}$.}
\medbreak
Directly observe that
\[
\{x\in Q_1 : u(x) > x_N \}\subset
\{x\in Q_{\frac32}: u(x) > x_N \}
\subset
\{x\in Q_{\frac32-\delta}\big(\frac{3}{2}e\big) : u(x) > x_N \}
\cup S_{\delta}.
\]
Hence, Step 1 follows from \eqref{hypGL} and the choice of $c_1$.
\medbreak
\noindent \textbf{Step 2:} \textit{For any $\epsilon > 0$ and all $\delta\in (0, c_1]$,
the following inequality holds}
\begin{equation} \label{s3GL}
\Big( \int_{Q_{\frac32-\delta}\big(\frac32 e\big)} u^{\epsilon} \, dx \Big)^{1/\epsilon}
\geq \frac{\delta}{2} \big( \frac{\nu}{2} \big)^{1/\epsilon}.
\end{equation}
\medbreak
Since $u \geq 0$ and, for any $x \in Q_{\frac32-\delta}\big(\frac32 e\big)$ we have $x_N \geq \frac{\delta}{2}$, it follows that
\[
\begin{aligned}
\int_{Q_{\frac{3}{2}-\delta}(\frac32 e)} u^{\epsilon} \,dx
&\geq
\int_{ \{ x \in Q_{\frac32-\delta}(\frac32 e):\ u(x) \geq x_N\}} u^{\epsilon} \,dx
\\
& \geq \int_{ \{ x \in Q_{\frac32-\delta}(\frac32 e): u(x) \geq x_N\}}
\Big(\frac{\delta}{2}\Big)^{\epsilon} \,dx
=
\Big(\frac{\delta}{2}\Big)^{\epsilon}\,
\Big|\Big\{ x \in Q_{\frac32-\delta}\big(\frac{3}{2}e\big): u(x) \geq x_N\Big\}\Big|.
\end{aligned}
\]
Step 2 follows then from Step 1.
\medbreak
\noindent \textbf{Step 3:} \textit{For any $\epsilon \in (0,r)$ and all $\delta\in (0, c_1]$, there exists
$C_{\delta} = C_{\delta}(p,\epsilon, \delta, a) > 0$ such that}
\[ \inf_{Q_{\frac32-\delta}\big(\frac32 e\big)} \frac{u(x)}{x_N}
\geq \frac{ \delta \,C_{\delta}}{3} \Big( \frac{\nu}{2} \Big)^{1/\epsilon}.
\]
\medbreak
By Theorem \ref{IWHIRefined} applied with $\rho = \frac32$, $x_0 = \frac32 e$ and
$\tau = \sigma = 1-\frac{2}{3}\delta$,
there exists a constant $C_{\delta} = C_{\delta}(p,\epsilon,\delta,a) > 0$ such that
\[ \inf_{Q_{\frac32-\delta}\big(\frac32 e\big)} u (x)
\geq C_{\delta} \Big( \int_{Q_{\frac32-\delta}\big(\frac32 e\big)} u^{\epsilon} \,dx \Big)^{1/\epsilon}\,.\]
Since for all $x\in Q_{\frac32-\delta}\big(\frac32 e\big)$ we have $x_N\leq \frac32$,
Step 3 follows from the above inequality and Step 2.
\medbreak
\noindent \textbf{Step 4:} \textit{Conclusion.}
\medbreak
We fix $\epsilon \in (0,r)$, define
$k_{\delta} = \frac{\delta\,C_{\delta}}{3} \big( \frac{\nu}{2} \big)^{1/\epsilon}$
and introduce $\eta:[-\frac{3-2c_1}{4},\frac{3-2c_1}{4}]^{N-1} \to \R$ a $\mathcal{C}^{\infty}$ function satisfying
\[ \eta(x_1, \ldots, x_{N-1}) = \left\{ \begin{aligned}
& 0, \quad &\textup{if } (x_1, \ldots, x_{N-1}) \in \big[-\tfrac{1}{2},\tfrac{1}{2}\,\big]^{N-1}, \\
& \tfrac{c_1}{2}, & \textup{if } (x_1, \ldots, x_{N-1}) \in
\partial_{\mathbb R^{N-1}} \big(\big[-\tfrac{3-2c_1}{4},\tfrac{3-2c_1}{4}\big]^{N-1}\big),
\end{aligned}
\right.
\]
and
\[
0\leq \eta(x_1, \ldots, x_{N-1}) \leq \frac{c_1}{2}\qquad \textup{ for } (x_1, \ldots, x_{N-1}) \in
\big[-\tfrac{3-2c_1}{4},\tfrac{3-2c_1}{4}\big]^{N-1}.
\]
Moreover, we consider the auxiliary function
\[
v_{\delta} (x_1, \ldots, x_N)= \frac{1}{\delta} \big(x_N-\eta(x_1,\ldots,x_{N-1})\big)^2 + \big(x_N-\eta(x_1,\ldots,x_{N-1})\big)
\]
defined in
\[
\begin{aligned}
\omega_{\delta}&=\Big\{
(x_1, \ldots, x_N) \in
\big[-\tfrac{3-2c_1}{4},\tfrac{3-2c_1}{4}\big]^{N-1}
\times \big[0, \tfrac{c_1}{2}\big] :
\eta(x_1, \ldots, x_{N-1}) \leq x_N\leq \frac{\delta}{2}\Big\}.
\end{aligned}
\]
Observe that, in $\omega_{\delta}$, we have $0\leq x_N-\eta(x_1,\ldots,x_{N-1})\leq \frac{\delta}{2}$. Hence, there exists $c_2 = c_2(p,\nu,a) \in (0,c_1]$ such that, for all $0 < \delta \leq c_2$,
\[
\pLaplac v_{\delta} + a |v_{\delta}|^{p-2}v_{\delta} \leq
-\frac{2}{\delta}(p-1) + 2^{p-1} \Big|\sum_{i=1}^{N-1}\frac{\partial}{\partial x_i}\Big[
\Big(\sum_{i=1}^{N-1}(\frac{\partial\eta}{\partial x_i})^2+1\Big)^{\frac{p-2}{2}}
\frac{\partial\eta}{\partial x_i}\Big] \Big|
+ \frac{3a}{4} \delta
\leq
0\,, \quad \textup{ in }\omega_{\delta}.
\]
On the other hand, we define $u_{\delta} = \frac{2u}{k_{\delta}}$ and immediately observe that
\[ \pLaplac u_{\delta} + a |u_{\delta}|^{p-2}u_{\delta} \geq 0\,, \quad \textup{ in } \omega_{\delta}\,.\]
Now, since by Step 3, we have
\[
u_{\delta}\geq \frac{2k_{\delta}}{k_{\delta}}\frac{\delta}{2}=\delta\geq v_{\delta},
\quad \textup{ for } x_N=\frac{\delta}{2},
\]
it follows that
\[
u_{\delta}\geq v_{\delta}
\quad \textup{ on }\partial \omega_{\delta}.
\]
Then, applying Lemma \ref{CPT}, it follows that, for any $\delta \in (0,c_2]$,
$v_{\delta} \leq u_{\delta}$ in $\omega_{\delta}$. For $\delta = c_2/2$, we obtain in particular
\begin{equation*}
u(x) \geq \frac{1}{2}k_{\frac{c_2}{2}} v_{\frac{c_2}{2}} (x)
= \frac{1}{2}k_{\frac{c_2}{2}}\bigl( \frac{2}{c_2} x_N^2 + x_N \bigr)
\geq \frac{1}{2}k_{\frac{c_2}{2}} x_N \,, \quad \textup{ in } \omega_{\frac{c_2}{2}}\cap Q_1\,.
\end{equation*}
The result then follows from the above inequality and Step 3.
\end{proof}
\begin{lemma} \label{iterationLemma}
Let $a \in L^{\infty}(Q_4)$ be a non-negative function.
Assume that $u \in W^{1,p}(Q_4)$ is a non-negative upper solution of
\begin{equation} \label{IL11}
\pLaplac u + a(x)|u|^{p-2}u = 0\,, \quad u \in W_0^{1,p}(Q_{4})\,,
\end{equation}
satisfying
\[ \inf_{Q_1} \frac{u(x)}{x_N} \leq 1\,.\]
Then, there exist $M = M(p,\|a\|_{\infty}) > 1$ and $\mu \in (0,1)$ such that
\begin{equation} \label{IL1}
\big| \{x \in Q_1: u(x)/x_N > M^j \} \big| < (1-\mu)^j\,, \quad \forall\ j \in \N\,.
\end{equation}
\end{lemma}
\begin{proof}
Let us fix some notation that we use throughout the proof.
We fix $\gamma \in (0,r)$ and consider $C_1 = C_1(p,\|a\|_{\infty}) > 0$ the constant given by
Corollary \ref{IWHICubes}.
We introduce $\alpha \in (0,1)$ and fix $C_2 = C_2(N) \in (0,1)$, the constant given by Lemma \ref{GISL}.
Moreover, we choose $\nu = (1-\alpha)\big(\frac{1}{4}\big)^N$ and denote by
$k = k(\nu,p,\|a\|_{\infty}) \in (0,1)$ the constant given by Lemma \ref{GL} applied to an upper solution of
\begin{equation}
\label{IL12} \pLaplac u + 2^p\|a\|_{\infty} |u|^{p-2} u = 0\,, \quad u \in W_0^{1,p}( Q_{\frac32})\,,
\end{equation}
with the chosen $\nu$. Let us point out that, if $u$ is a non-negative upper solution of \eqref{IL11},
then $u$ is a non-negative upper solution of \eqref{IL12}. Finally, we consider
\[
M \geq \max \Big\{ \frac{1}{k}, \frac{4}{C_1}(1-\alpha)^{-1/\gamma}\Big\}\,,
\]
and we are going to show that \eqref{IL1} holds with $\mu = \alpha C_2$.
\medbreak
First of all, observe that $\{ x \in Q_1: u(x)/x_N > M\} \subset \{ x \in Q_1: ku(x) > x_N\}$.
Hence, since $\inf_{Q_1} ku(x)/x_N \leq k$, Lemma \ref{GL} implies that
\begin{equation} \label{IL2}
|\{ x \in Q_1: u(x)/x_N > M\}|\leq |\{ x \in Q_1: ku(x)> x_N\}|
<\nu < 1-\alpha<
1- C_2\alpha
\end{equation}
and, in particular, \eqref{IL1} holds for $j=1$. Now, let us introduce, for $j \in \N \setminus \{1\}$,
\[ E = \{ x \in Q_1: u(x)/x_N > M^j \} \quad \textup{ and } \quad F = \{ x \in Q_1: u(x)/x_N > M^{j-1} \}\,.\]
Since $M > 1$ and $j \in \N \setminus \{1\}$, observe that \eqref{IL2} implies that
\begin{equation}
\label{eq A7.5}
|E| = |\{ x \in Q_1: u(x)/x_N > M^j \} | \leq |\{ x \in Q_1: u(x)/x_N > M\} | \leq 1-\alpha\,,
\end{equation}
and the first assumption of Lemma \ref{GISL} is satisfied.
\bigbreak
\noindent \textbf{Claim:} \textit{For every cube $Q_{\rho}(x_0) \subset Q_1$ such that
\begin{equation}\label{IL3}
|E \cap Q_{\rho}(x_0)| \geq (1-\alpha)|Q_{\rho}(x_0)| = (1-\alpha)\rho^N \,,
\end{equation}
we have $Q_{\rho}(x_0) \subset F$.}
\bigbreak
Let us denote $x_0=(x_0', x_{0_N})$ with $x_0'\in \mathbb R^{N-1}$. We define
the new variable $y = \big( \frac{x'-x_0'}{\rho'}, \frac{x_N}{\rho'} \big)$,
where $\rho' = 2 x_{0_N}$, and the rescaled function $v(y) = \frac{1}{\rho'} u(\rho'y' + x_0', \rho'y_N)$.
Then $v$ is a non-negative upper solution of
\begin{equation} \label{IL13}
\pLaplac v + 2^p \|a\|_{\infty}|v|^{p-2}v = 0\,, \quad \textup{ in } Q_{4/\rho'}\big(-x_0'/\rho', 2/\rho'\,\big).
\end{equation}
Moreover, observe that
\[ x\in E \cap Q_{\rho}(x_0) \quad \textup{ if and only if }
\quad y\in \{ y \in Q_{\rho/\rho'}(e): v(y)/M^j > y_N\}\,,\]
and so, that \eqref{IL3} is equivalent to
\begin{equation} \label{IL4}
|\{ y \in Q_{\rho/\rho'}(e): v(y)/M^j > y_N\}| \geq (1-\alpha)|Q_{\rho/\rho'}(e)|
= (1-\alpha)\big(\frac{\rho}{\rho'} \big)^N.
\end{equation}
Observe also that the embedding $Q_{\rho}(x_0)\subset Q_1$ implies that $\rho\leq \rho'\leq 2-\rho$ and
$|x_{0,i}|\leq \frac{1-\rho}{2}$ for $i\in \{1,\ldots,N-1\}$. In particular, we have
$Q_{\frac32}\subset Q_{4/\rho'}\big(-x_0'/\rho', 2/\rho'\,\big)$. Hence $v$ is an upper solution of \eqref{IL12}.
\medbreak
Now, we distinguish two cases:
\medbreak
\noindent \textbf{Case 1:} \textit{$\rho \geq \rho'/4$}.
Observe that $v/M^j$ is a non-negative upper solution of \eqref{IL12}.
Moreover, as $\rho\leq\rho'$, \eqref{IL4} implies that
\[ |\{y \in Q_1: v(y)/M^j > y_N\}| \geq |\{y \in Q_{\rho/\rho'}(e): v(y)/M^j > y_N\}| \geq \nu\,.\]
Hence, by Lemma \ref{GL}, $v(y)/M^j > k y_N$ in $Q_1$ and so, by the definition of $k$, $v(y)/y_N > M^{j-1}$
in $Q_{\rho/\rho'}(e)$. This implies that $u(x)/x_N > M^{j-1}$ in $Q_{\rho}(x_0)$.
\medbreak
\noindent \textbf{Case 2:} \textit{$ \rho < \rho'/4$}.
Recall that $v/M^j$ is a non-negative upper solution of \eqref{IL12}. Hence, $v/M^j$ is also a
non-negative upper solution of
\begin{equation*}
\pLaplac u + 2^p\|a\|_{\infty} |u|^{p-2} u = 0\,,
\quad \textup{ in } Q_{\frac{3 \rho}{2 \rho'}}(e) \subset Q_{\frac32}\,.
\end{equation*}
Thus, by Corollary \ref{IWHICubes}, we deduce that
\begin{equation}
\label{eq LemB7 case 2}
\inf_{Q_{\rho/\rho'}(e)} \frac{v(y)}{M^j} \geq C_1 \Big( \big( \frac{\rho}{\rho'} \big)^{-N}
\int_{Q_{\rho/\rho'}(e)} \big( \frac{v}{M^j} \big)^{\gamma} \,dy \Big)^{1/\gamma}.
\end{equation}
Now, let us introduce
\[ G = \{ y \in Q_{\rho/\rho'}(e): v(y)/M^j > 1/4\}\,,\]
and, as $y_N > 1/4$ for all $y \in Q_{\rho/\rho'}(e)$, observe that \eqref{IL4} implies the following inequality
\[ |G| \geq |\{y \in Q_{\rho/\rho'}(e): v(y)/M^j > y_N \}| \geq (1-\alpha)\big( \frac{\rho}{\rho'} \big)^N\,.\]
Hence, we deduce that
\[
\int_{Q_{\rho/\rho'}(e)} \big( \frac{v}{M^j} \big)^{\gamma} \,dy
\geq
\int_{G} \big( \frac{v}{M^j} \big)^{\gamma} \,dy
>
\big(\frac{1}{4} \big)^{\gamma} |G|
\geq
\big(\frac{1}{4} \big)^{\gamma} (1-\alpha) \big( \frac{\rho}{\rho'} \big)^{N},
\]
and so, by \eqref{eq LemB7 case 2}, that
\[ \inf_{Q_{\rho/\rho'}(e)} \frac{v}{M^j} > \frac{C_1}{4} (1-\alpha)^{1/\gamma}\,.\]
Finally, using that $M \geq \frac{4}{C_1}(1-\alpha)^{-1/\gamma}$ and that $y_N \leq 1$ in $Q_{\rho/\rho'}(e)$, we deduce that $v(y) > M^{j-1}y_N$ in $Q_{\rho/\rho'}(e)$.
Thus, we can conclude that $u(x) / x_N > M^{j-1}$ in $Q_{\rho}(x_0)$.
\medbreak
In both cases we have proved that $u(x)/x_N > M^{j-1}$ in $Q_{\rho}(x_0)$. This means that $Q_{\rho}(x_0) \subset F$ and so, the Claim is proved.
\medbreak
Since \eqref{eq A7.5} and the Claim hold, we can apply Lemma \ref{GISL} and we obtain that
$|E| \leq (1-C_2\alpha)|F|\,,$ i.e.
\[ |\{ x \in Q_1: u(x)/x_N > M^j \} | \leq (1-C_2 \alpha)\, |\{x \in Q_1: u(x)/x_N > M^{j-1}\}|\,,
\quad \forall\ j \in \N \setminus \{1\}\,.\]
Iterating in $j$ and using \eqref{IL2}, the result follows with $\mu = C_2\alpha \in (0,1)$ depending only on $N$.
\end{proof}
\begin{theorem}[\textbf{Boundary weak Harnack inequality for cubes}]
\label{BWHIC}
Let $a \in L^{\infty}(Q_4)$ be a non-negative function. Assume that $u \in W^{1,p}(Q_4)$ is a
non-negative upper solution of
\begin{equation*}
\pLaplac u + a(x)|u|^{p-2}u = 0\,, \quad u \in W_0^{1,p}(Q_{4}).
\end{equation*}
Then, there exist $\epsilon = \epsilon(p,\|a\|_{\infty}) > 0$ and
$C = C(p,\epsilon,\|a\|_{\infty}) > 0$ such that
\[ \inf_{Q_1} \frac{u(x)}{x_N} \geq
C \Big( \int_{Q_1} \big( \frac{u(x)}{x_N} \big)^{\epsilon} \,dx \Big)^{1/\epsilon}\,.\]
\end{theorem}
\begin{proof}
Let us split the proof into three steps.
\medbreak
\noindent \textbf{Step 1:} \textit{Assume that $\inf_{Q_1} \frac{u(x)}{x_N} \leq 1$.
Then, there exist $\epsilon = \epsilon(p,\|a\|_{\infty}) > 0$ and $C = C(p,\epsilon,\|a\|_{\infty}) > 0$
such that, for all $t\geq0$,}
\begin{equation*}
|\{ x \in Q_1: u(x)/x_N > t\}| \leq C \min\{1,t^{-2\epsilon}\}\,.
\end{equation*}
Let us define the real-valued function
\[ f(t) = |\{x \in Q_1: u(x)/x_N > t\}| \,,\]
and let $M$ and $\mu$ be the constants obtained in Lemma \ref{iterationLemma}. We define
\[ \epsilon =- \frac{1}{2}\frac{\ln (1-\mu)}{\ln M} > 0 \quad \textup{ and }
\quad C =\max\{(1-\mu)^{-1}, M^{2\epsilon}\} > 1\,.\]
If $t \in [0,M]$, we easily get
\[
|\{ x \in Q_1: u(x)/x_N > t\}| \leq 1\leq C M^{-2\epsilon}\leq C \min\{1,t^{-2\epsilon}\}.
\]
Hence, let us assume $t > M > 1$. Then $t \in [M^j, M^{j+1})$ for some
$j \in \N$, and it follows that
\[ \frac{\ln t}{\ln M} - 1 \leq j \leq \frac{\ln t}{\ln M}\,.\]
Since $f$ is non-increasing and $1-\mu \in (0,1)$, the above inequality and Lemma \ref{iterationLemma} imply
\begin{equation}
\label{ThmA8-1}
f(t) \leq f(M^j) \leq (1- \mu)^j \leq (1-\mu)^{\frac{\ln t}{\ln M}-1} \,.
\end{equation}
Finally, observe that
\begin{equation}
\label{ThmA8-2}
\ln \Big((1-\mu)^{\frac{\ln t}{\ln M}-1} \Big)
= \Big( \frac{\ln t}{\ln M}-1 \Big) \ln(1-\mu) = \ln t \frac{\ln(1-\mu)}{\ln M} - \ln(1-\mu)
\leq
-2\epsilon\ln t+\ln C = \ln (C t^{-2\epsilon})\,.
\end{equation}
Step 1 then follows from \eqref{ThmA8-1}, \eqref{ThmA8-2} and the fact that $\min\{1,t^{-2\epsilon}\} = t^{-2\epsilon}$ for $t \geq 1$.
\medbreak
\noindent \textbf{Step 2:} \textit{Assume that $\inf_{Q_1} \frac{u(x)}{x_N} \leq 1$.
Then, there exists $C = C(p,\epsilon,\|a\|_{\infty}) > 0$} such that
\begin{equation} \label{fubini}
\int_{Q_1} \Big( \frac{u(x)}{x_N} \Big)^{\epsilon} \,dx \leq C < + \infty\,.
\end{equation}
Directly, applying \cite[Lemma 9.7]{G_T_2001_S_Ed}, we obtain that
\[ \int_{Q_1} \Big( \frac{u(x)}{x_N} \Big)^{\epsilon} \,dx
= \epsilon \int_0^{\infty} t^{\epsilon-1} |\{x \in Q_1: u(x)/x_N > t\}| \,dt\,.\]
Hence, \eqref{fubini} follows from Step 1.
\medbreak
\noindent \textbf{Step 3:} \textit{Conclusion.}
\medbreak
Let us introduce the function
\[ v = \frac{u}{\inf_{y\in Q_1} \frac{u(y)}{y_N}+\beta}\,,\] where $\beta > 0$ is an arbitrary
positive constant. Obviously, $v$ satisfies the hypothesis of Step 2. Hence, applying Step 2, we obtain that
\[ \int_{Q_1} \big( \frac{u(x)}{x_N} \big)^{\epsilon}
\Big( \frac{1}{\inf_{y\in Q_1} \frac{u(y)}{y_N}+\beta} \Big)^{\epsilon} \,dx \leq C\,,
\]
or equivalently that
\[
\frac{1}{C^{1/\epsilon}} \Big( \int_{Q_1} \big( \frac{u(x)}{x_N} \big)^{\epsilon} \,dx \Big)^{1/\epsilon}
\leq \inf_{Q_1} \frac{u(x)}{x_N} + \beta\,.\]
Letting $\beta \rightarrow 0$ we obtain the desired result.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{BWHIP}}]
Thanks to the regularity of the boundary, there exist $\overline{R}>0$ and a diffeomorphism $\varphi$ such that
$\varphi(B_{\overline{R}}(x_0)\cap \omega)\subset Q_1$ and $\varphi(B_{\overline{R}}(x_0)\cap \partial\omega)\subset \{ x \in \partial Q_1: x_N = 0 \}$.
The result then follows from Theorem \ref{BWHIC}.
\end{proof}
We end this section by presenting a corollary of Theorem \ref{BWHIP}. Consider the equation
\begin{equation} \label{eqBCl}
-\Delta u + a(x)u = b(x)\,, \qquad u \in H_0^1(\omega)\,,
\end{equation}
under the assumption
\begin{equation} \label{hypBC}
\left\{
\begin{aligned}
& \omega \subset \mathbb{R}^{N}, \ N \geq 2, \textup{ is a bounded domain with boundary } \partial \omega
\textup{ of class } \mathcal{C}^{1,1}\,,\\
& a \in L^{\infty}(\omega)\,,\ b^- \in L^p(\omega) \textup{ for some } p > N \textup{ and } b^+ \in L^1( \omega)\,,\\
& a \geq 0 \textup{ a.e. in } \omega\,.
\end{aligned}
\right.
\end{equation}
\begin{cor} \label{BWHI}
\it Under the assumption \eqref{hypBC}, assume that $u \in H^1(\omega)$ is a non-negative upper solution of
\eqref{eqBCl} and let $x_0 \in \partial \omega$. Then, there exist $\overline R>0$, $\epsilon = \epsilon(\overline{R},\|a\|_{\infty},\omega) > 0$, $C_1 = C_1(\overline{R},\epsilon,\|a\|_{\infty},\omega) > 0$ and $C_2 = C_2(\omega,\|a\|_{\infty}) > 0$ such that, for all $R \in(0,\overline R]$,
\[ \inf_{B_R(x_0) \cap \omega} \frac{u(x)}{d(x,\partial \omega)}
\geq C_1 \Big( \int_{B_R(x_0) \cap \omega}
\Big( \frac{u(x)}{d(x,\partial\omega)} \Big)^{\epsilon} dx \Big)^{1/\epsilon}- C_2 \|b^-\|_{L^p(\omega)}\,.\]
\end{cor}
In order to prove Corollary \ref{BWHI}, we need the following lemma.
\begin{lemma} \label{bcLemma2}
Let $\omega \subset \mathbb{R}^{N}$, $N \geq 2$, be a bounded domain with boundary $\partial \omega$ of class
$\mathcal{C}^{1,1}$ and let $a \in L^{\infty}(\omega)$ and $g \in L^p(\omega),\, p > N$, be non-negative functions.
Assume that $u \in H^1(\omega)$ is a lower solution of
\begin{equation*}
-\Delta u + a(x) u = g(x)\,, \quad u \in H_0^1(\omega)\,.
\end{equation*}
Then there exists $C = C(\omega,\|a\|_{\infty})> 0$ such that
\[ \sup_{\omega}\frac{u(x)}{d(x,\partial \omega)} \leq C \|g\|_{L^p(\omega)}\,.\]
\end{lemma}
\begin{proof}
First of all, observe that it is enough to prove the result for $v$ solution of
\begin{equation*}
\left\{
\begin{aligned}
-\Delta v + a(x)v & = g(x)\,, \quad & \textup{ in } \omega\,,\\
v & = 0\,, & \textup{ on } \partial \omega\,.
\end{aligned}
\right.
\end{equation*}
Indeed, by the standard comparison principle, it follows that $u \leq v$.
Applying \cite[Theorem 9.15 and Lemma 9.17]{G_T_2001_S_Ed} we deduce that $v \in W_0^{2,p}(\omega)$ and there exists $C_1 = C_1(\omega,\|a\|_{\infty})> 0$ such that
\[ \|v\|_{W^{2,p}(\omega)} \leq C_1 \|g\|_{L^p(\omega)}\,.\]
Moreover, as $p > N$, by the Sobolev embedding, there exists $C_2=C_2(\omega,\|a\|_{\infty}) > 0$ such that
\[ \|v\|_{\mathcal{C}^{1}(\overline{\omega})} \leq C_2 \|g\|_{L^p(\omega)} \,,\]
and so, we easily deduce that, for some $C_3 = C_3(\omega,\|a\|_{\infty}) > 0$,
\[ v(x) \leq C_3 \|g\|_{L^p(\omega)}d(x,\partial \omega)\,, \quad \forall\ x \in \omega\,.\]
Hence, since $u \leq v$, the result follows from the above inequality.
\end{proof}
\begin{proof}[\textbf{Proof of Corollary \ref{BWHI}}]
Let $w \geq 0$ be the solution of
\begin{equation}
\left\{
\begin{aligned}
-\Delta w + a(x)w & = b^-(x)\,, \quad & \textup{ in } \omega\,, \\
w & = 0\,, & \textup{ on } \partial \omega\,.
\end{aligned}
\right.
\end{equation}
Observe that $v=u+w$ satisfies
\begin{equation}
\left\{
\begin{aligned}
-\Delta v + a(x)v & \geq 0, \quad & \textup{ in } \omega\,,
\\
v & \geq 0\,, & \textup{ on } \partial \omega\,.
\end{aligned}
\right.
\end{equation}
Hence, by Theorem \ref{BWHIP}, there exist $\overline R>0$,
$\epsilon = \epsilon(p, \overline{R},\|a\|_{\infty},\omega) > 0$ and
$C = C(p,\overline{R}, \epsilon,\|a\|_{\infty},\omega) > 0$ such that, for all
$R \in(0,\overline R]\,,$
\begin{equation}\label{last}
\inf_{B_R(x_0) \cap \omega} \frac{v(x)}{d(x,\partial \omega)}
\geq C \,\Big( \int_{B_R(x_0) \cap \omega}
\Big( \frac{v(x)}{d(x,\partial \omega)} \Big)^{\epsilon} \,dx \Big)^{1/\epsilon}\,.
\end{equation}
On the other hand, by Lemma \ref{bcLemma2}, there exists $C_2 = C_2(\omega,\|a\|_{\infty}) > 0$ such that
\begin{equation}\label{bc7}
\sup_{\omega} \frac{w(x)}{d(x,\partial \omega)} \leq C_2 \|b^-\|_{L^p(\omega)}\,.
\end{equation}
From \eqref{last} and \eqref{bc7}, using that $u = v-w$, the corollary follows, observing that $w\geq 0$ and hence $v\geq u$.
\end{proof}
\section{A priori bound} \label{III}
This section is devoted to the proof of Theorem \ref{aPrioriBound}.
As a first step we observe that, to obtain our a priori upper bound on the solutions of \eqref{Plambda}, we only need to control the solutions on
$\Omega_+$. This can be proved under a weaker assumption than \eqref{A1}. More precisely, we assume
\[ \label{01} \tag{$B$}
\left\{
\begin{aligned}
&\Omega \subset \mathbb{R}^{N},\, N \geq 2, \textup{ is a bounded domain with boundary }\partial \Omega \textup{ of class }\mathcal{C}^{0,1},
\\
& c_+, \ c_- \textup{ and } h \textup{ belong to } L^q(\Omega) \textup{ for some } q > N/2 \,,
\ \mu \textup{ belongs to } L^{\infty}(\Omega) \,,
\\
& c_+(x) \geq 0, \ c_-(x) \geq 0 \textup{ and } c_-(x) c_+(x) =0 \textup{ a.e. in }\Omega,
\\
& |\Omega_{+}|> 0, \textup{ where } \Omega_{+} := \operatorname{Supp}(c_{+}),
\end{aligned}
\right.
\]
and we prove the next result.
\begin{lemma}
\label{Step 1}
Assume that \eqref{01} holds. Then, there exists $M >0$ such that, for any $\lambda\in\mathbb R$, any solution $u$ of \eqref{Plambda} satisfies
$$
-\sup_{\Omega_+} u^- - M \, \leq \, u \, \leq \sup_{\Omega_+}u^+ + M.
$$
\end{lemma}
\begin{remark}
Let us point out that if $c_{+} \equiv 0$, i.e. $|\Omega_{+}| = 0$, the problem \eqref{Plambda} reduces to
\begin{equation} \label{R41}
-\Delta u = -c_{-}(x) u + \mu(x) |\nabla u|^2 + h(x), \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega),
\end{equation}
which is independent of $\lambda$. If \eqref{R41} has a solution, by \cite[Proposition 4.1]{A_DC_J_T_2015} it is unique and so, we have an a priori bound.
\end{remark}
\begin{proof}
In case problem \eqref{Plambda} has no solution for any $\lambda\in \mathbb R$, there is nothing to prove. Hence, we assume the existence of $\tilde\lambda\in\mathbb R$ such that $(P_{\tilde\lambda})$ has a solution $\tilde{u}$. We shall prove the result with $M := 2 \|\tilde{u}\|_{\infty}$.
Let $u$ be an arbitrary solution of \eqref{Plambda}.
\bigbreak
\noindent \textbf{Step 1:} \textit{$u \leq \sup_{\Omega_+}u^++ M$.}
\medbreak
Setting $D:= \Omega \backslash \overline\Omega_+$ we define
$v = \displaystyle u - \sup_{\partial D} u^+$. We then obtain
\[
-\Delta v = -c_-(x)v + \mu(x)|\nabla v|^2 + h(x) - c_-(x) \sup_{\partial D}u^+
\leq -c_-(x)v + \mu(x)|\nabla v|^2 + h(x)\,, \quad \textup{ in } D\,.
\]
As $v \leq 0$ on $\partial D$, the function
$v$ is a lower solution of
\begin{equation}\label{pivot}
- \Delta z = -c_-(x)z + \mu(x)|\nabla z|^2 + h(x)\,, \qquad z \in H^1_0(D)\cap L^{\infty}(D).
\end{equation}
Setting
$ \tilde{v} = \tilde{u} + \|\tilde{u}\|_{\infty}$ we observe that
\[
-\Delta \tilde{v} = -c_-(x) \tilde{v} + \mu(x)|\nabla \tilde{v}|^2 + h(x) + c_-(x) \|\tilde{u}\|_{\infty}
\geq - c_-(x)\tilde{v} + \mu(x)|\nabla \tilde{v}|^2 + h(x)\,, \quad \textup{ in } D\,,
\]
and thus, as $\tilde{v} \geq 0$ on $\partial D$, the function $\tilde{v}$ is an upper solution of \eqref{pivot}.
By \cite[Lemma 2.1]{A_DC_J_T_2014}, we know that $u$, $\tilde{u} \in H^1(\Omega) \cap W_{loc}^{1,N}(\Omega) \cap \mathcal{C}(\overline{\Omega})$ and hence, $v$, $\tilde{v} \in H^1(D) \cap W_{loc}^{1,N}(D) \cap \mathcal{C}(\overline{D})$.
Applying \cite[Lemma 2.2]{A_DC_J_T_2014} we conclude that $v \leq \tilde{v}$ in $D$ namely, that
\[
u - \sup_{\partial D} u^+ \leq \tilde{u} + \| \tilde{u}\|_{\infty} \,, \quad \textup{ in } D.
\]
This gives that
\[
u \leq \tilde{u} + \| \tilde{u}\|_{\infty} + \sup_{\partial D} u^+ \,, \quad \textup{ in } D,
\]
and hence
\[
u \leq M +\sup_{\Omega_+} u^+ \,, \quad \textup{ in } \Omega.
\]
\noindent \textbf{Step 2:} \textit{$u \geq -\sup_{\Omega_+} u^- - M$.}
\medbreak
We now define $v = \displaystyle u + \sup_{\partial D} u^-$ and obtain $v \geq 0$ on $\partial D$ as well as
\[
-\Delta v = - c_-(x)v + \mu(x)|\nabla v|^2 + h(x) + c_-(x) \sup_{\partial D} u^-
\geq - c_-(x)v + \mu(x)|\nabla v|^2 + h(x)\,, \quad \textup{ in } D\,.
\]
Thus $v$ is an upper solution of \eqref{pivot}. Now defining $ \tilde{v} = \tilde{u} - \|\tilde{u}\|_{\infty}$, again, we have $\tilde{v} \leq 0$ on $\partial D$ as well as
\[
-\Delta \tilde{v} = -c_-(x) \tilde{v} + \mu(x)|\nabla \tilde{v}|^2 + h(x) - c_-(x) \|\tilde{u}\|_{\infty}
\leq - c_-(x)\tilde{v} + \mu(x)|\nabla \tilde{v}|^2 + h(x)\,, \quad \textup{ in } D\,.
\]
Thus $\tilde{v}$ is a lower solution of \eqref{pivot}. As previously we have that
$v$, $\tilde{v} \in H^1(D) \cap W_{loc}^{1,N}(D) \cap \mathcal{C}(\overline{D})$
and applying \cite[Lemma 2.2]{A_DC_J_T_2014} we obtain that $ \tilde{v} \leq v$ in $D$. Namely
\[
\tilde{u} - \|\tilde{u}\|_{\infty} \leq u + \sup_{\partial D} u^-\,, \quad \textup{ in } D.\]
Thus
\[
u \geq \tilde{u} - \|\tilde{u}\|_{\infty} -\sup_{\partial D} u^-\,, \quad \textup{ in } D,\]
and hence we get that
\[
u \geq -\sup_{\Omega_+} u^- -M \,, \quad \textup{ in } \Omega,
\]
ending the proof.
\end{proof}
Now, let $u \in H_0^1(\Omega) \cap L^{\infty}(\Omega)$ be a solution of \eqref{Plambda}.
Following \cite[Proposition 6.1]{A_DC_J_T_2015}, we introduce
\begin{equation}
\label{def w}
w_i(x) = \frac{1}{\mu_i} \big(e^{\mu_i u(x)} - 1 \big) \quad \textup{ and } \quad g_i(s)
= \frac{1}{\mu_i} \ln(1+\mu_i s), \qquad i = 1,2\,,
\end{equation}
where $\mu_1$ is given in \eqref{A1} and $\mu_2 = \operatorname{esssup} \mu(x)$. Observe that
\[ u = g_i(w_i) \quad \textup{ and } \quad 1+ \mu_i w_i = e^{\mu_i u} ,\qquad i = 1,2\,,\]
and that, by standard computations,
\begin{equation} \label{idwi}
-\Delta w_i = (1+\mu_i w_i)\big[(\lambda c_{+}(x)-c_{-}(x))g_i(w_i) + h(x)\big]
+ e^{\mu_i u} |\nabla u|^2 (\mu(x)-\mu_i).
\end{equation}
Using \eqref{idwi} we shall obtain a uniform a priori upper bound on $u$ in a neighborhood of any fixed point
$\overline{x} \in \overline{\Omega}_{+}$. We consider the two cases $\overline{x} \in \overline{\Omega}_{+} \cap \Omega$ and $\overline{x} \in \overline{\Omega}_{+} \cap \partial \Omega$ separately.
\begin{lemma} \label{Steps A}
Assume that \eqref{A1} holds and that $\overline{x} \in \overline{\Omega}_{+} \cap \Omega$.
For each $\Lambda_2 > \Lambda_1 > 0$, there exist $M_I > 0$ and $R > 0$ such that, for any
$\lambda \in [\Lambda_1,\Lambda_2]$, any solution $u$ of \eqref{Plambda} satisfies
$\sup_{B_R(\overline{x})}u \leq M_I$.
\end{lemma}
\begin{proof}
Under the assumption \eqref{A1} we can find an $R > 0$ such that
$ \mu(x) \geq \mu_1 > 0$, $ c_{-} \equiv 0$ in $B_{4R}(\overline{x}) \subset \Omega$
and $c_{+} \gneqq 0$ in $B_R(\overline{x})$.
For simplicity, in this proof, we denote $B_{mR}=B_{mR}(\overline{x})$, for $m\in \mathbb N$.
\medbreak
Since $c_{-} \equiv 0$ and $\mu(x)\geq\mu_1$ in $B_{4R}$,
observe that \eqref{idwi}
reduces to
\begin{equation}
\label{equl*}
-\Delta w_1 + \mu_1 h^{-}(x) w_1 \geq \lambda (1+ \mu_1 w_1) c_{+}(x)g_1(w_1) +h^+(x)(1+\mu_1 w_1)
- h^{-}(x)\,, \quad \textup{ in } B_{4R}.
\end{equation}
\noindent Let $z_2$ be the solution of
\begin{equation}
\label{z2}
-\Delta z_2 + \mu_1 h^{-}(x) z_2 = -\Lambda_2 c_{+}(x)\frac{e^{-1}}{\mu_1} \,, \qquad z_2 \in H^1_0(B_{4R}).
\end{equation}
By classical regularity arguments (see for instance \cite[Theorem III-14.1]{L_U_1968}),
$z_2 \in \mathcal{C}(\overline{B_{4R}})$. Hence, there exists
$D=D(\overline{x},\mu_1, \Lambda_2, \|h^-\|_{L^q(B_{4R})}, \|c_+\|_{L^q(B_{4R})}, q, R) > 0$
such that
\begin{equation}
\label{z2borne}
z_2\geq -D \textup{ in }B_{4R}.
\end{equation}
Moreover, by the weak maximum principle \cite[Theorem 8.1]{G_T_2001_S_Ed}, we have that $z_2 \leq 0$. Now defining $v_1=w_1-z_2+\frac{1}{\mu_1}$, and since $\min_{[-1/\mu_i,+\infty[} (1+\mu_i s) g_i(s)=-\frac{e^{-1}}{\mu_i}$, we observe that $v_1$ satisfies
\begin{equation}
\label{v_11}
-\Delta v_1 + \mu_1 h^{-}(x) v_1 \geq \Lambda_1 c_{+}(x) (1+ \mu_1 w_1) g_1(w_1)^+\,,
\quad \textup{ in } B_{4R}.
\end{equation}
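Let us briefly justify \eqref{v_11}. By \eqref{equl*}, \eqref{z2} and the definition of $v_1$, we have, in $B_{4R}$,
\[
-\Delta v_1 + \mu_1 h^{-}(x)\, v_1 \geq \lambda\, c_{+}(x)(1+\mu_1 w_1)\,g_1(w_1) + \Lambda_2\, c_{+}(x)\,\frac{e^{-1}}{\mu_1} + h^+(x)(1+\mu_1 w_1)\,,
\]
and, since $(1+\mu_1 w_1)\,g_1(w_1) \geq (1+\mu_1 w_1)\,g_1(w_1)^+ - \frac{e^{-1}}{\mu_1}$ and $\Lambda_1 \leq \lambda \leq \Lambda_2$, the right hand side is bounded from below by $\Lambda_1 c_{+}(x)(1+\mu_1 w_1)\,g_1(w_1)^+$.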
Also, since $w_1>-1/\mu_1$, we have $v_1>0$ in $\overline{B_{4R}}$.
Note also that $ 0 < 1+ \mu_1 w_1 = \mu_1 v_1 + \mu_1 z_2$ in $\overline{B_{4R}}$. Now, we split the rest of the proof into four steps.
\medbreak
\noindent \textbf{Step 1:} \textit{There exist
$C_{1} = C_{1}(\overline{x}, \Lambda_1, \Lambda_2, R, \mu_1, q, \|h^{-}\|_{L^{\infty}(B_{4R})},
\|c_+\|_{L^q(B_{4R})}) > 0 $ such that}
\begin{equation}\label{eq-step1}
k:= \inf_{B_R} v_1(x) \leq C_1.
\end{equation}
In case $ \mu_1 \inf_{B_R} v_1(x) \leq 1+\mu_1 D$, where $D$ is given by \eqref{z2borne}, Step 1 is proved. Hence, we assume that
\begin{equation}
\label{cas2}
\mu_1 v_1(x) \geq 1+\mu_1 D, \qquad \forall\ x\in B_R.
\end{equation}
In particular, $\mu_1 v_1 + \mu_1 z_2\geq 1$ on $B_R$. Now, by Lemma \ref{bcLemma1} applied to \eqref{v_11} with $\omega = B_{4R}$, there exists $C = C(R,
\|h^{-}\|_{L^{\infty}(B_{4R})},\mu_1, \Lambda_1, \overline x) > 0$ such that,
\begin{equation*}
\begin{aligned}
k
&\geq C \int_{B_R} c_{+}(y)\, \Big(\mu_1 v_1(y) + \mu_1 z_2(y)\Big) \ln \Big(\mu_1 v_1(y) + \mu_1 z_2(y)\Big)\,dy
\\
&\geq C \int_{B_R} c_{+}(y)\, (\mu_1 k-\mu_1 D)
\ln\big(\mu_1 k-\mu_1 D \big) \,dy \\
& =C (\mu_1 k-\mu_1 D) \ln\big(\mu_1 k-\mu_1 D \big)\|c_{+}\|_{L^1(B_R)}.
\end{aligned}
\end{equation*}
As $c_{+} \gneqq 0$ in $B_R$,
comparing the growth in $k$ of the various terms, we deduce that $k$ must remain bounded and thus the existence of
$C_{1} = C_{1}(\overline{x}, \Lambda_1, \Lambda_2, R, \mu_1, q, \|h^{-}\|_{L^{\infty}(B_{4R})},
\|c_+\|_{L^q(B_{4R})}) > 0 $
such that \eqref{eq-step1} holds.
\medbreak
\noindent \textbf{Step 2:}\,\,\textit{For any $1 \leq s < \frac{N}{N-2}$, there exists
$C_{2} = C_{2}(\overline{x}, \mu_1, R, s, \Lambda_1, \Lambda_2, q, \|h^{-}\|_{L^{\infty}(B_{4R})},\, \|c_+\|_{L^q(B_{4R})}
) > 0 $ such that
\[
\int_{B_{2R}} (1+ \mu_1 w_1)^s \,dx\leq C_{2}.
\]}
Applying Lemma \ref{WHI} to \eqref{v_11}, we deduce the existence of
$C = C(s,\mu_1, R,\|h^{-}\|_{L^q(B_{4R})}) > 0$ such that
\[
\Big( \int_{B_{2R}} v_1^s \,dx \Big)^{1/s} \leq C \inf_{B_R} v_1 \,.
\]
Step 2 follows from Step 1 by observing that
$ 0 \leq 1+ \mu_1 w_1 = \mu_1 v_1 + \mu_1 z_2 \leq \mu_1 v_1.$
\medbreak
\noindent \textbf{Step 3:}\,\,\textit{For any $1 \leq s < \frac{N}{N-2}$, we have, for the constant $C_2 >0$ introduced in Step 2, that}
\[ \int_{B_{2R}} \big(1+\mu_2 w_2 \big)^{\frac{\mu_1 s}{\mu_2}} dx \leq
C_{2}.\]
\indent This directly follows from Step 2 since, by the definition of $w_i$, we have
\[
(1+\mu_2 w_2)^{\frac{\mu_1}{\mu_2}}=(e^{\mu_2 u})^{\frac{\mu_1}{\mu_2}}
= e^{\mu_1 u}= (1+\mu_1 w_1).
\]
\noindent \textbf{Step 4:}\textit{ Conclusion.}
\medbreak
We will show the existence of $C_{3} = C_3 (\overline{x},\mu_1, \mu_2, R, \Lambda_1, \Lambda_2,q,\|h^{-}\|_{L^{\infty}(B_{4R})}, \|c_+\|_{L^q(B_{4R})}
) > 0 $ such that
\[ \sup_{B_R} w_2 \leq C_{3}\,.\]
Thus, thanks to the definition of $w_2$, we can conclude the proof. Let us fix $s\in [1, \frac{N}{N-2})$,
$r\in(\frac{N}{2}, q)$
and $\alpha = \frac{(q-r)\mu_1 s}{\mu_2 q r}$ and let $c_{\alpha}>0$ be such that
\[ \ln(1+x) \leq (1+x)^{\alpha} + c_{\alpha}, \quad \forall\ x \geq 0\,;\]
such a $c_{\alpha}$ exists since $\ln(1+x)\,(1+x)^{-\alpha} \to 0$ as $x \to \infty$.
We introduce the auxiliary functions
\[
\begin{array}{c}
a(x) = \Lambda_2 c_{+}(x)(1+\mu_2w_2)^{\alpha} + c_{\alpha} \Lambda_2 c_{+}(x) + \mu_2 h^+(x), \vspace{0.225cm} \\
\displaystyle
b(x) = \frac{\Lambda_2}{\mu_2}c_{+}(x) (1+\mu_2 w_2)^{\alpha}
+ c_{\alpha} \frac{\Lambda_2}{\mu_2} c_{+}(x) + h^+(x)+ c_-(x) \frac{e^{-1}}{\mu_2},
\end{array}
\]
and, as $\mu(x)\leq \mu_2$, we deduce from \eqref{idwi} that $w_2$ satisfies
\begin{equation*}
\left\{
\begin{aligned}
-\Delta w_2 & \leq a(x) w_2 + b(x)\, \quad & \textup{ in } \Omega\,, \\
w_2 & = 0 & \textup{ on } \partial\Omega.
\end{aligned}
\right.
\end{equation*}
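For completeness, let us indicate how the above differential inequality follows from \eqref{idwi} with $i=2$. As $\mu(x) \leq \mu_2$, the last term in \eqref{idwi} is non-positive and hence
\[
-\Delta w_2 \leq (1+\mu_2 w_2)\big[(\lambda c_{+}(x) - c_{-}(x))\,g_2(w_2) + h(x)\big]\,.
\]
By the choice of $c_{\alpha}$,
\[
\lambda\, c_{+}(x)(1+\mu_2 w_2)\,g_2(w_2) \leq \frac{\Lambda_2}{\mu_2}\, c_{+}(x)(1+\mu_2 w_2)\big[(1+\mu_2 w_2)^{\alpha} + c_{\alpha}\big]\,,
\]
while $-c_{-}(x)(1+\mu_2 w_2)\,g_2(w_2) \leq c_{-}(x)\frac{e^{-1}}{\mu_2}$ and $h(x)(1+\mu_2 w_2) \leq h^+(x)(1+\mu_2 w_2)$. Writing $1+\mu_2 w_2 = \mu_2 w_2 + 1$ in the first and the last bounds and collecting the terms multiplying $w_2$, we recover exactly $a(x)w_2 + b(x)$.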
Now, as $q/r > 1$, by Step 3 and the H\"older inequality, it follows that
\[
\begin{aligned}
\int_{B_{2R}} ( c_{+}(x) (1+\mu_2 w_2 )^{\alpha})^r dx
&\leq
\|c_{+}\|_{L^q(B_{2R})}^r
\Big( \int_{B_{2R}}(1+\mu_2 w_2)^{\frac{\alpha q r}{q-r}} dx \Big)^{\frac{q-r}{q}}
\\
& \leq \|c_{+}\|_{L^q(B_{2R})}^r
\Big( \int_{B_{2R}}(1+\mu_2 w_2)^{\frac{\mu_1 s}{\mu_2}} dx \Big)^{\frac{q-r}{q}}
\leq C_2^{\frac{q-r}{q}} \|c_{+}\|_{L^q(B_{2R})}^r.
\end{aligned}
\]
Hence, there exists $D = D(\overline{x},\mu_1, \mu_2, s, R, \Lambda_1, \Lambda_2,q,\|h^{-}\|_{L^{\infty}(B_{4R})}, \|c_+\|_{L^q(B_{4R})},
r, \|h^+\|_{L^q(B_{2R})}) > 0 $ such that
\begin{equation} \label{ab6}
\max \{ \, \|a\|_{L^r(B_{2R})}, \|b\|_{L^r(B_{2R})} \} \leq D\,.
\end{equation}
Applying then Lemma \ref{LMP}, there exists $C = C(\overline{x},\mu_1, \mu_2, s, R, \Lambda_1, \Lambda_2,q,\|h^{-}\|_{L^q(B_{4R})},
\|c_+\|_{L^q(B_{4R})}
) > 0 $
such that
\[ \sup_{B_R} w_2^+ \leq
C \Big[ \Big( \int_{B_{2R}} (w_2^+)^{ \frac{\mu_1}{\mu_2} s } dx \Big)^{ \frac{\mu_2}{\mu_1 s}}
+ \|b\|_{L^r(B_{2R})} \Big] \leq C \Big[ \Big( \int_{B_{2R}} (w_2^+)^{ \frac{\mu_1}{\mu_2} s } dx \Big)^{ \frac{\mu_2}{\mu_1 s}}
+ D \Big] \,.\]
On the other hand, by Step 3, we get
\[ \int_{B_{2R}} (w_2^+)^{ \frac{\mu_1}{\mu_2} s } dx
\leq C(\mu_1, \mu_2, s) \int_{B_{2R}} ( 1+\mu_2 w_2 )^{ \frac{\mu_1}{\mu_2} s } dx \leq C(\mu_1, \mu_2, s)\,C_2\,,\]
and the result follows.
\end{proof}
\begin{lemma} \label{Steps B}
Assume that \eqref{A1} holds and that $\overline{x} \in \overline{\Omega}_{+} \cap \partial \Omega$. For each $\Lambda_2 > \Lambda_1 > 0$, there exist $R > 0$ and $M_B > 0$ such that,
for any $\lambda \in [\Lambda_1,\Lambda_2]$, any solution of \eqref{Plambda} satisfies
$\sup_{B_R(\overline{x}) \cap \Omega}u \leq M_B\,.$
\end{lemma}
\begin{proof} Let $\overline R>0$ be given by Theorem \ref{BWHIP}.
Under the assumption \eqref{A1}, we can find $R \in (0, \overline R/2]$ and $\Omega_1 \subset \Omega$ with
$\partial\Omega_1$ of class ${\mathcal C}^{1,1}$ such that
$B_{2R}(\overline{x}) \cap \Omega\subset \Omega_1$ and
$\mu(x) \geq \mu_1 > 0$, $c_{-} \equiv 0$ and $c_{+} \gneqq 0$ in $\Omega_1$.
\medbreak
Since $c_{-} \equiv 0$ and $\mu(x)\geq\mu_1$ in $\Omega_1$, observe that \eqref{idwi}
reduces to
\begin{equation}
\label{equ*bis}
-\Delta w_1 + \mu_1 h^{-}(x) w_1 \geq \lambda (1+ \mu_1 w_1) c_{+}(x)g_1(w_1) +h^+(x)(1+\mu_1 w_1) - h^{-}(x)\,, \quad \textup{ in } \Omega_1\,.
\end{equation}
\noindent Let $z_2$ be the solution of
\begin{equation}
\label{z2bis}
-\Delta z_2 + \mu_1 h^{-}(x) z_2 = -\Lambda_2 c_{+}(x)\frac{e^{-1}}{\mu_1}
\,, \qquad z_2 \in H^1_0(\Omega_1).
\end{equation}
As in Lemma \ref{Steps A}, $z_2\in \mathcal{C}(\overline{\Omega_1})$ and there exists a
$D=D(\mu_1, \Lambda_2, \|h^-\|_{L^q(B_{4R})}, \|c_+\|_{L^q(B_{4R})}, q, \Omega_1) > 0$
such that $-D \leq z_2 \leq 0$ on $\Omega_1$. Now defining $v_1=w_1-z_2+\frac{1}{\mu_1}$ we observe that $v_1$ satisfies
\begin{equation}
\label{v_1}
-\Delta v_1 + \mu_1 h^{-}(x) v_1 \geq \Lambda_1 c_{+}(x)(1+ \mu_1 w_1) g_1(w_1)^+\,,
\quad \textup{ in } \Omega_1\,,
\end{equation}
and $v_1>0$ on $\overline{\Omega_1}$.
Note also that $0 < 1+ \mu_1 w_1 = \mu_1 v_1 + \mu_1 z_2$ on $\overline{\Omega_1}$. Next, we split the rest of the proof into three steps.
\medbreak
\noindent \textbf{Step 1: }\textit{There exists
$C_{1} = C_{1}(\Omega_1, \overline{x}, \Lambda_1, \Lambda_2, R, \mu_1, q , \|h^{-}\|_{L^{\infty}(\Omega_1)},
\|c_+\|_{L^q(\Omega_1)}) > 0 $
such that
\[ \inf_{B_{2R}(\overline{x}) \cap \Omega_1} \frac{v_1(x)}{d(x,\partial \Omega_1)} \leq C_{1}\,.\]}
\smallbreak
Choose $R_2 > 0$ and $y\in \Omega$ such that $B_{4 R_2}(y) \subset B_{2R}(\overline{x}) \cap \Omega$ and
$c_{+} \gneqq 0$ in $B_{R_2}(y)$.
As in Step 1 of Lemma \ref{Steps A},
there exists $C = C(\Omega_1, y, \Lambda_1, \Lambda_2, R_2, \mu_1, q , \|h^{-}\|_{L^{\infty}(\Omega_1)},
\|c_+\|_{L^q(\Omega_1)}) > 0 $
such that
\[
\inf_{B_{R_2}(y)} v_1(x) \leq C\,.\]
We conclude by observing, since $B_{4R_2}(y) \subset B_{2R}(\overline{x}) \cap \Omega_1$, that
$$
\inf_{B_{2R}(\overline{x}) \cap \Omega_1} \frac{v_1(x)}{d(x,\partial \Omega_1)}
\leq \inf_{B_{R_2}(y)} \frac{v_1(x)}{d(x,\partial \Omega_1)}
\leq
\frac{1}{3 R_2} \, \inf_{B_{R_2}(y)} v_1(x).
$$
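Here, in the last inequality, we used that, for every $x \in B_{R_2}(y)$, the inclusion $B_{4R_2}(y) \subset \Omega_1$ guarantees
\[
d(x,\partial \Omega_1) \geq 4R_2 - |x-y| \geq 3R_2\,.
\]
\medbreak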
\noindent \textbf{Step 2:} \textit{There exist $\epsilon = \epsilon(\overline{R}, \mu_1, \|h^{-}\|_{L^{\infty}(\Omega_1)}, \Omega_1)> 0$ and $C_{2} = C_{2}(\overline{x},\mu_1, R, \overline{R}, s, \Lambda_1, \Lambda_2,q,\|h^{-}\|_{L^{\infty}(\Omega_1)},$ $\|c_+\|_{L^q(\Omega_1)}) > 0 $ such that
\[
\Big( \int_{B_{2R}(\overline{x}) \cap \Omega} (1+ \mu_1 w_1)^{\epsilon} \,dx \Big)^{1/\epsilon}
\leq
C_{2}.
\]}
\indent By Theorem \ref{BWHIP} applied on \eqref{v_1} and Step 1, we obtain constants
$\epsilon = \epsilon(\overline{R}, \mu_1, \|h^{-}\|_{L^{\infty}(\Omega_1)},\Omega_1)>0$ and
$ C=C(\Omega_1, \overline{x},\mu_1, \epsilon, \overline{R}, \Lambda_1, \Lambda_2,q,\|h^{-}\|_{L^{\infty}(\Omega_1)}, \|c_+\|_{L^q(\Omega_1)}) > 0 $ such that
\[ \Big( \int_{B_{2R}(\overline{x}) \cap \Omega_1}
\Big( \frac{v_1(x)}{d(x,\partial \Omega_1)} \Big)^{\epsilon} dx \Big)^{1/\epsilon} \leq C\,.\]
This clearly implies, since $\Omega_1 \subset \Omega$, that
\[ \Big( \int_{B_{2R}(\overline{x}) \cap \Omega_1}
v_1(x)^{\epsilon} dx \Big)^{1/\epsilon} \leq C \operatorname{diam}(\Omega) \,.\]
\noindent Step 2 then follows by observing that
$0 \leq 1 + \mu_1 w_1 = \mu_1 v_1 + \mu_1 z_2 \leq \mu_1 v_1$ and taking into account that $B_{2R}(\overline{x}) \cap \Omega = B_{2R}(\overline{x}) \cap \Omega_1.$
\medbreak
\noindent \textbf{Step 3:} \textit{Conclusion.}
\medbreak
Arguing exactly as in Steps 3 and 4 of Lemma \ref{Steps A},
using Lemma \ref{BLMP} and Step 2, we show the existence of
$C_{3} = C_3(\overline{x},\mu_1, \mu_2, R, \Lambda_1, \Lambda_2,\|h^{-}\|_{L^{\infty}(\Omega_1)}, \|c_+\|_{L^q(\Omega_1)}) > 0 $
such that
\[ \sup_{B_{R}(\overline{x}) \cap \Omega} w_2 \leq C_{3}\,.\]
Hence, the proof of the lemma follows by the definition of $w_2$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{aPrioriBound}}]
\noindent Arguing by contradiction, we assume the existence of
sequences $\{\lambda_n\}\subset [\Lambda_1 ,\Lambda_2]$, $\{u_n\}$
solutions of \eqref{Plambda} for $\lambda=\lambda_n$
and of points
$\{x_n\} \subset \Omega$ such that
\begin{equation} \label{ab3}
u_n(x_n) = \max\{u_n(x): x \in \overline{\Omega}\} \rightarrow \infty\,, \quad \textup{ as } n \rightarrow \infty\,.
\end{equation}
Observe that Lemma \ref{Step 1} and \eqref{ab3} together imply the existence of
a sequence of points $y_n \in \overline{\Omega}_{+}$ such that
\begin{equation} \label{ab8}
u_n(y_n) = \max\{u_n(y): y \in \overline{\Omega}_{+}\} \rightarrow \infty\,,
\quad \textup{ as } n \rightarrow \infty\,.
\end{equation}
Passing to a subsequence if necessary, we may assume that
$\lambda_n \rightarrow \overline{\lambda} \in [\Lambda_1 ,\Lambda_2]$ and
$y_n \rightarrow \overline{y} \in \overline{\Omega}_{+}$. Now, let us distinguish two cases:
\begin{itemize}
\item If $\overline{y} \in \overline{\Omega}_{+} \cap \Omega$, Lemma \ref{Steps A} shows that we can find
$R_I > 0$ and $M_I > 0$ such that, if $u \in H_0^1(\Omega) \cap L^{\infty}(\Omega)$ is a solution of
\eqref{Plambda}, then $\sup_{B_{R_{I}}(\overline y)} u\leq M_{I}$.
This contradicts \eqref{ab8}.
\item If $\overline{y} \in \overline{\Omega}_{+} \cap \partial \Omega$, Lemma \ref{Steps B} shows
that we can find $R_B > 0$ and $M_B > 0$ such that, if $u \in H_0^1(\Omega) \cap L^{\infty}(\Omega)$ is a solution of \eqref{Plambda},
then $\sup_{B_{R_{B}}(\overline y)\cap \Omega} u\leq M_B$. Again, this contradicts
\eqref{ab8}.
\end{itemize}
As \eqref{ab8} cannot happen, the result follows.
\end{proof}
\section{Proof of Theorem \ref{th1}} \label{IV}
Let us begin with a preliminary result.
\begin{lemma} \label{nonExistenceLambdaLarge}
Under the assumption \eqref{A1}, assume that $(P_0)$ has a solution $u_0$ for which there exist $\overline x\in \Omega$ and $R>0$ such that $c_+u_0 \gneqq 0$, $c_-\equiv 0$ and $\mu\geq 0$ in $B_R(\overline x)$.
Then there exists $\overline\Lambda\in\, (0,\infty)$
such that, for $\lambda \geq \overline\Lambda$, the problem \eqref{Plambda} has no solution $u$ with $u \geq u_0$ in $B_R(\overline x)$.
\end{lemma}
\begin{proof}
Let us introduce $\overline{c}(x) := \min\{c_{+}(x),1\}$. Observe that $0 \lneqq \overline{c} \leq c_{+}$
and define $\gamma^1_1 > 0$ as the first eigenvalue of the problem
\begin{equation} \label{Pgamma}
\left\{
\begin{aligned}
-\Delta \varphi & = \gamma \overline c (x)\varphi & \textup{ in } B_R(\overline x),\\
\varphi & = 0 & \textup{ on } \partial B_R(\overline x).
\end{aligned}
\right.
\end{equation}
By standard arguments, there exists an associated first eigenfunction $\varphi_1^1 \in {\mathcal C}_0^1(\overline {B_R(\overline x)})$ such that
$\varphi_1^1(x)>0$ for all $x\in B_R(\overline x)$ and,
denoting by $n$ the outward normal to $\partial B_R(\overline x)$, we also have
\begin{equation}\label{derive-negative}
\frac{\partial \varphi_1^1(x)}{\partial n} < 0\,, \quad \textup{on } \partial B_R(\overline x).
\end{equation}
Now, let us introduce the function $\phi \in H_0^1(\Omega) \cap L^{\infty}(\Omega)$, defined as
\[ \phi(x) = \left\{ \begin{aligned}
& \varphi_1^1(x)\,, \quad & x \in B_R(\overline x), \\
& 0 & x \in \Omega \setminus B_R(\overline x),
\end{aligned}
\right.
\]
and suppose that $u$ is a solution of \eqref{Plambda} such that $u \geq u_0$ in $B_R(\overline x)$. First observe that, in view of \eqref{derive-negative} and as $u\geq u_0$ on $\overline{B_R(\overline x)}$, there exists a constant $C>0$ independent of $u$ such that
\begin{equation}\label{derive-negative-bis}
\int_{\partial B_R(\overline x)} u \frac{\partial \varphi_1^1}{\partial n} \,dS \leq C.
\end{equation}
Thus on one hand, using \eqref{Pgamma} and \eqref{derive-negative-bis}, we obtain
\begin{equation} \label{ne1}
\begin{aligned}
\int_{\Omega}\big( \nabla \phi \nabla u + c_{-} &(x)\, \phi\, u\big)\, dx
=
\int_{B_R(\overline x)} \nabla \varphi_1^1 \nabla u\, dx
=
- \int_{B_R(\overline x)} u \Delta \varphi_1^1\, dx
+
\int_{\partial B_R(\overline x)} u \frac{\partial \varphi_1^1}{\partial n} \,dS
\\
&\leq
- \int_{B_R(\overline x)} u \Delta \varphi_1^1 \,dx + C
= \gamma_1^1 \int_{B_R(\overline x)}\overline{c}(x) \,\varphi_1^1 \,u\, dx + C
\leq \gamma_1^1 \int_{\Omega} c_{+}(x) \,\phi\, u\, dx\, + C.
\end{aligned}
\end{equation}
On the other hand, considering $\phi$ as a test function in \eqref{Plambda} we observe that
\begin{equation} \label{ne2}
\int_{\Omega} \big(\nabla \phi \nabla u + c_{-}(x) \,\phi \,u\big)\, dx
= \lambda \int_{\Omega} c_{+}(x) \,u\,\phi \, dx + \int_{\Omega} \big( \mu(x) |\nabla u|^2 + h(x)\big) \phi\, dx\,.
\end{equation}
From \eqref{ne1} and \eqref{ne2}, we then deduce that, for some $D>0$ independent of $u$,
\begin{equation} \label{ne3}
\begin{aligned}
(\gamma_1^1 - \lambda) \int_{\Omega} c_{+}(x) \,\phi \,u\, dx
& \geq
\int_{\Omega} \big( \mu(x) |\nabla u|^2 + h(x) \big) \phi\, dx - C
\\
& =
\int_{B_R(\overline x)} \big( \mu(x) |\nabla u|^2 + h(x) \big) \varphi_1^1\, dx - C \geq -D.
\end{aligned}
\end{equation}
As $c_+u_0 \gneqq 0$ in $B_R(\overline x)$, we have that
\[
\int_{\Omega} c_{+}(x) \,\phi \,u\, dx
\geq \int_{\Omega} c_{+}(x) \,\phi \,u_0\, dx >0.
\]
Hence, for $\lambda > \gamma_1^1$ large enough, we obtain a contradiction with \eqref{ne3}.
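In fact, the argument gives an explicit threshold: setting $I_0 := \int_{\Omega} c_{+}(x)\,\phi\,u_0\,dx > 0$, from \eqref{ne3} and $\int_{\Omega} c_{+}(x)\,\phi\,u\,dx \geq I_0$ we deduce, for $\lambda > \gamma_1^1$, that $-(\lambda - \gamma_1^1)\,I_0 \geq -D$, i.e. $\lambda \leq \gamma_1^1 + D/I_0$. Hence the statement of the lemma holds with any $\overline\Lambda > \gamma_1^1 + D/I_0$.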
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{th1}}] We treat separately the cases $\lambda \leq 0$ and $\lambda > 0$.
\medbreak
\noindent \textbf{Part 1:} \textit{$\lambda\leq0$.}
\medbreak
Observe that for $\lambda\leq 0$ we have $\lambda c_+-c_-\leq -c_- $ and hence the result
follows from \cite[Lemma 5.1, Proposition 4.1, Proposition 5.1, Theorem 2.2]{A_DC_J_T_2015}
as in the proof of \cite[Theorem 1.2]{A_DC_J_T_2015}. Moreover, observe that $u_0$ is an upper solution of \eqref{Plambda}. Hence we conclude that
$u_{\lambda}\leq u_0$ by
\cite[Lemmas 2.1 and 2.2]{A_DC_J_T_2014}.
\medbreak
\noindent \textbf{Part 2:} \textit{$\lambda > 0$.}
\medbreak
Consider, for $\lambda \geq 0$, the modified problem
\[ \label{Pbar} \tag{$\overline{P}_\lambda$}
-\Delta u +u= (\lambda c_+(x) - c_-(x)+1) \, ((u-u_0)^{+} +u_0)+\mu(x)|\nabla u|^2 + h(x)\,, \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega).
\]
As in the case of $(P_{\lambda})$, any solution of
\eqref{Pbar} belongs to $\mathcal{C}^{0,\tau}(\overline{\Omega})$ for some $\tau > 0$. Moreover, observe that $u$ is a solution of \eqref{Pbar} if and only if it is a fixed point of the operator
$\overline T_\lambda$ defined by
$\overline T_\lambda: {\mathcal C}(\overline\Omega)\to {\mathcal C}(\overline\Omega): v\mapsto u$
with $u$ the solution of
\[
-\Delta u +u-\mu(x)|\nabla u|^2 = (\lambda c_+(x) - c_-(x)+1)\, ((v-u_0)^{+} +u_0)+h(x)\,, \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega).
\]
Applying \cite[Lemma 5.2]{A_DC_J_T_2015}, we see that $\overline T_\lambda$ is completely continuous. Now, we denote
\[
\overline\Sigma :=
\{ (\lambda, u ) \in \R \times \mathcal{C}(\overline{\Omega}) : u \textup{ solves } \eqref{Pbar} \}\,
\]
and we split the rest of the proof into three steps.
\medbreak
\noindent \textbf{Step 1:} \textit{If $u$ is a solution of \eqref{Pbar} then $u\geq u_0$
and hence it is a solution of \eqref{Plambda}.}
\medbreak
Observe that $(u-u_0)^{+} +u_0-u\geq 0$. Also we have that $\lambda c_+(x)((u-u_0)^{+} +u_0) \geq \lambda c_+(x)u_0\geq 0$. Hence, we deduce
that a solution $u$ of \eqref{Pbar} is an upper solution of
\begin{equation}
\label{eqThm1.2}
-\Delta u = - c_-(x) \, ((u-u_0)^{+} +u_0)+\mu(x)|\nabla u|^2 + h(x)\,, \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega).
\end{equation}
Then the result follows from
\cite[Lemmas 2.1 and 2.2]{A_DC_J_T_2014} noting that $u_0$ is a solution of \eqref{eqThm1.2}.
\medbreak
\noindent \textbf{Step 2:} \textit{$u_0$ is the unique solution of $(\overline{P}_0)$ and
$i(I-\overline T_0,u_0) = 1$.}
\medbreak
Again the uniqueness of the solution of $(\overline{P}_0)$ can be deduced from
\cite[Lemmas 2.1 and 2.2]{A_DC_J_T_2014}. Now, in order to prove that $i(I-\overline T_0,u_0) = 1$, we consider the operator $S_t$ defined by $S_t: {\mathcal C}(\overline\Omega)\to {\mathcal C}(\overline\Omega): v\mapsto u$
with $u$ the solution of
\[
-\Delta u +u-\mu(x)|\nabla u|^2 = t[ (- c_-(x)+1)\, (u_0+ (v-u_0)^{+} -(v-u_0-1)^+)+h(x)]\,, \quad u \in H_0^1(\Omega) \cap L^{\infty}(\Omega).
\]
First, observe that there exists $R>0$ such that, for all $t\in [0,1]$ and all $v\in {\mathcal C}(\overline\Omega)$,
\[
\|S_t v\|_{\infty}<R.
\]
This implies that
\[
\textup{deg}(I-S_1, B(0, R))=
\textup{deg}(I, B(0, R))=
1.
\]
By \cite[Lemmas 2.1 and 2.2]{A_DC_J_T_2014}, we see that $u_0$ is the only fixed point of $S_1$.
Hence, by the excision property of the degree, for all $\epsilon>0$ small enough, it follows that
\[
\textup{deg}(I-S_1, B(u_0, \epsilon))=
\textup{deg}(I-S_1, B(0, R))= 1.
\]
Thus, since $S_1=\overline T_0$ on $B(u_0, \epsilon)$ for $\epsilon<1$, we conclude that
\[
i(I-\overline T_0,u_0)=\lim_{\epsilon\to0}
\textup{deg}(I-\overline T_0, B(u_0, \epsilon))=\lim_{\epsilon\to0}
\textup{deg}(I-S_1, B(u_0, \epsilon))= 1.
\]
\noindent \textbf{Step 3:} \textit{Existence and behavior of the continuum.}
\medbreak
By \cite[Theorem 3.2]{R_1971} (see also \cite[Theorem 2.2]{A_DC_J_T_2015}), there exists a continuum
$\mathscr{C} \subset \overline\Sigma$ such that
$\mathscr{C} \cap ( [0,\infty) \times \mathcal{C}(\overline{\Omega}))$
is unbounded. By Step 1, we know that if $(\lambda, u) \in \mathscr{C} $ then $u \geq u_0$ and $u$ is a solution of $(P_{\lambda})$. Thus, applying Lemma \ref{nonExistenceLambdaLarge}, we deduce that
$\textup{Proj}_{\R}\mathscr{C} \cap [0,\infty) \subset [0, \overline \Lambda]$. By Theorem \ref{aPrioriBound} and Step 1, we deduce that for every $\Lambda_1 \in (0,\overline\Lambda)$,
there is an a priori bound on the solutions of \eqref{Pbar} for $\lambda \in [\Lambda_1,\overline\Lambda]$.
Hence, the projection of
$\mathscr{C} \cap ( [\Lambda_1, \overline\Lambda)\times \mathcal{C}(\overline{\Omega}))$ on
$\mathcal{C}(\overline{\Omega})$ is bounded, and so, we deduce that $\mathscr{C}$ emanates from infinity
to the right of $\lambda = 0$. Finally, since $\mathscr{C}$ contains $(0, u_0)$ with $u_0$ the unique solution
of $(P_0)$, we conclude that there exists $\lambda_0 \in (0,\overline\Lambda)$ such that problem
\eqref{Pbar}, and thus problem $(P_{\lambda})$, has at least two solutions satisfying $u \geq u_0$ for $\lambda \in (0,\lambda_0)$.
\end{proof}
\section*{Acknowledgements}
The authors warmly thank Prof. David Arcoya for helpful discussions that led to improvements of the results.
Part of this work was done during the visit of the first author to the University of Bourgogne-Franche-Comt\'e. She thanks the LMB for its hospitality and the R\'egion Bourgogne-Franche-Comt\'e for the financial support.
\bigbreak
\bibliographystyle{plain}
|
1,108,101,563,213 | arxiv | \subsection*{1. Introduction}
\indent
The quantum group structure plays an important role in the study
of two dimensional
integrable models because $R$-matrices intertwining between
diferent
irreps of a quantum group provide solutions to the Yang-Baxter
equation.
Two important families of integrable models are the 6-vertex and
8-vertex solutions to the Yang-Baxter equation \cite{bax}.
Whereas the 6-vertex
solutions are intertwiner $R$-matrices for
$U_q(\widehat{sl(2)})$,
a quantum group interpretation for the elliptic 8-vertex
family is not yet known.
Nevertheless, the 8-vertex regime is well understood for the
particular class of solutions to the Yang-Baxter equation
satisfying the free-fermion condition \cite{FW}
\begin{equation}
R_{00}^{00}(u) R_{11}^{11}(u) + R_{01}^{10}(u) R_{10}^{01}(u) =
R_{00}^{11}(u) R_{11}^{00}(u) + R_{01}^{01}(u) R_{10}^{10}(u)
\end{equation}
Indeed, a quantum group like structure has been found recently
for the most general free
fermionic elliptic 8-vertex model in a magnetic field.
The matrix of its Boltzmann weights \cite{F,BS}
acts as intertwiner for the affinization of a quantum Hopf
deformation of the Clifford algebra in two dimensions, denoted
$\widehat{CH_q(2)}$ \cite{U}.
A major interest of the free fermionic solutions to the
Yang-Baxter equation is in their connection, in the 6-vertex
limit ($R_{00}^{11}(u)=R_{11}^{00}=0$),
with $N=2$ supersymmetric
integrable models.
The free fermionic 6-vertex solutions
are given by the $R-$matrix intertwiners between nilpotent
irreps
of the Hopf algebra $U_{\epsilon}(\widehat{sl(2)})$, with
$\epsilon^4=1$
(the nilpotent irreps are a special case of the cyclic
representations that enlarge the representation theory of
$U_{\epsilon}(\widehat{sl(2)})$
when $\epsilon$ is a root of unity).
In the trigonometric limit
the $R-$matrix for $\widehat{CH_q(2)}$ becomes
that for $U_{\epsilon}(\widehat{sl(2)})$, $\epsilon^4=1$.
In this article we construct the quantum Clifford-Hopf
algebras $\widehat{CH_q(D)}$ for even dimensions $D \geq 2$,
generalizing the results in \cite{U}. This general case is
interesting because it yields one of the rare examples of
elliptic $R-$matrices. The $R-$matrices we find admit
several spectral parameters, due to the structure of
$\widehat{CH_q(D)}$ as a Drinfeld twist \cite{D} of the
tensor product of
several copies of $\widehat{CH_q(2)}$.
The possibility of connecting with
extended supersymmetry in the trigonometric limit of
$\widehat{CH_q(D)}$,
and a related supersymmetric integrable model,
are analyzed in sect.3. Finally, in sect.4,
we study the spin chain hamiltonian associated to these
algebras. The model obtained represents
several $XY$ Heisenberg chains in an external magnetic field
\cite{LSM}
coupled among themselves in a simple way. Though the
coupling is simple, it can be a starting point to get a
quantum group structure for more complicated models built
through the coupling of two XY or XX models
(Bariev model \cite{B}, 1-dimensional Hubbard model).
The last part of this section is devoted
to showing the equivalence of this model --under some
restrictions-- with
a generalized $XY$ model proposed by M. Suzuki
in relation to the 2-dimensional dimer problem \cite{S}.
\subsection*{2. The quantum Clifford algebra}
\indent
A Clifford algebra $C(\eta)$ related to a quadratic form or
metric $\eta$ is the associative algebra generated by the
elements $\{
\Gamma_{\mu} \}_{\mu =1}^D$, which satisfy
\begin{equation}
\{ \Gamma_{\mu}, \Gamma_{\nu} \} = 2 \eta_{\mu \nu} {\bf 1}
\;\;\; \mu, \nu = 1,\ldots,D
\label{8}
\end{equation}
\indent
The quantum Clifford-Hopf algebra $CH_q(D)$ \cite{U} is a
generalization and quantum deformation of $C(\eta)$,
generated by elements
$\Gamma_\mu$, $\Gamma_{D+1}$ (the analog of $\gamma_{5}$ for
the Dirac matrices) and new central
elements $E_\mu$ ($\mu =1,..,D$) verifying
\begin{eqnarray}
& & \Gamma_{\mu}^2 = \frac{q^{E_{\mu}}-q^{-E_{\mu}}}
{q- q^{-1}} \;\; , \;\; \Gamma_{D+1}^2 = {\bf 1}
\nonumber \\
& & \{ \Gamma_{\mu}, \Gamma_{\nu} \} =0, \;\; \mu \neq \nu
\nonumber \\
& & \{\Gamma_{\mu}, \Gamma_{D+1} \} =0 \label{9} \\
& & [ E_{\mu}, \Gamma_{\nu} ] = [ E_{\mu}, \Gamma_{D+1} ] =
[ E_{\mu}, E_{\nu} ] = 0 \;\; \forall \mu, \nu \nonumber
\end{eqnarray}
The charges $E_{\mu}$ result from elevating the components
of the metric $\eta$ from numbers to operators. The
generator $\Gamma_{D+1}$ will play a role similar to
$(-1)^{F}$, with $F$ the fermion number operator. Although
for the standard Clifford algebra $D$ represents the
dimension of the space-time, in our case $D$ is only a
parameter labeling (3).
The algebra $CH_q(D)$ is a Hopf algebra
with the following comultiplication $\Delta$, antipode $S$
and counit $\epsilon$
\begin{equation}
\begin{array}{lll}
\Delta (E_{\mu}) = E_{\mu} \otimes {\bf 1} \: + \: {\bf 1}
\otimes E_{\mu}, \: \: \:& S(E_{\mu}) = -E_{\mu}, \: \:
&\epsilon(E_{\mu}) = 0 \\
\Delta (\Gamma_{\mu}) = q^{E_{\mu}/2} \Gamma_{D+1} \otimes
\Gamma_{\mu} \: +
\: \Gamma_{\mu} \otimes q^{-E_{\mu}/2}, \: \: \:
& S(\Gamma_{\mu}) = \Gamma_{\mu}
\Gamma_{D+1}, \: \:& \epsilon(\Gamma_{\mu}) = 0 \\
\Delta(\Gamma_{D+1}) = \Gamma_{D+1} \otimes \Gamma_{D+1},
\: \: \: & S(\Gamma_{D+1}) = \Gamma_{D+1}, \: \: &
\epsilon(\Gamma_{D+1}) = 1 \\
\end{array}
\end{equation}
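As a simple consistency check, the coproduct is compatible with the defining relations (3): since $\{\Gamma_{\mu},\Gamma_{D+1}\}=0$, the cross terms in the square of $\Delta(\Gamma_{\mu})$ cancel and, the $E_{\mu}$ being central,
\[
\Delta(\Gamma_{\mu})^2 = q^{E_{\mu}} \otimes \Gamma_{\mu}^2 \: + \: \Gamma_{\mu}^2 \otimes q^{-E_{\mu}} =
\frac{q^{\Delta(E_{\mu})} - q^{-\Delta(E_{\mu})}}{q - q^{-1}}\,,
\]
as it should be.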
The irreducible representations of $CH_q(D)$ are in one
to one correspondence with those of the Clifford algebra
$C(\eta)$ for all possible signatures of the metric $\eta$,
in D (D even) or
D+1 (D odd) dimensions respectively.
They are labelled by complex parameters
$\{\lambda_{\mu}\}_{\mu=1}^D$,
the eigenvalues of the Casimir operators $K_\mu=q^{E_\mu}$.
{}From
now on we restrict ourselves to the case $D$ even, $D=2M$.
The irreps of $CH_q(2M)$ are isomorphic to the tensor
product of $M$
$CH_q(2)$ irreps, being their dimension $2^M$. Thus,
a basis for $CH_q(2M)$ can be obtained from the
$CH_q(2)^{\otimes M}$ generators as follows
($\gamma_{\alpha}, E_{\alpha} (\alpha =1,2), \gamma_3$
$ \in CH_q(2)$):
\begin{eqnarray}
\Gamma_{2(n-1)+ \alpha} & = & \gamma_3 \otimes \cdots
\otimes \gamma_3
\otimes \stackrel{n)}{\gamma_{\alpha}} \otimes 1 \otimes
\cdots \otimes 1 \hspace{1cm} n=1,..,M; \; \alpha=1,2
\nonumber \\
E_{2(n-1)+ \alpha} & = & 1 \otimes \cdots \otimes 1
\otimes E_{\alpha} \otimes 1 \otimes \cdots \otimes 1 \\
\Gamma_{D+1} & = & \gamma_3 \otimes \cdots \otimes
\gamma_3 \nonumber
\end{eqnarray}
\noindent
The Hopf algebra $CH_q(2M)$ is related to the tensor
product
$CH_q(2)^{\otimes M}$ by a Drinfeld twist $B$ \cite{D}
\begin{equation}
\Delta_{\scriptscriptstyle CH_q(2M)} (g)= B
\Delta_{\scriptscriptstyle CH_q(2)^{\otimes M}} (a)
B^{-1} \hspace{6mm} \forall g \in CH_q(2M)
\end{equation}
where the operator
$B \in CH_q(2)^{\otimes M} \otimes CH_q(2)^{\otimes M}$
acting on the tensor product of two $CH_q(2M)$ irreps
is defined by
\begin{eqnarray}
& & B= (-1)^{F*F} \\
& & F*F= \sum_{1\leq j<i \leq M}
(1 \otimes \cdots \otimes \stackrel{i)}{f} \otimes
\cdots \otimes 1) \otimes (1 \otimes \cdots \otimes
\stackrel{j)}{f} \otimes \cdots \otimes 1) \nonumber
\end{eqnarray}
with $f=0$ (boson), $1$ (fermion) the fermion number for the
two vectors in a $CH_q(2)$ irrep.
The reason to introduce the operator $B$ in formula (6)
is that the comultiplication in $CH_q(2)^{\otimes M}$
treats each factor $CH_q(2)$ separately. This can be
represented by a twist between the $CH_q(2)$ pieces of a
$CH_q(2M)$ irrep.
Since one of the vectors in a $CH_q(2)$ irrep behaves as
a fermion, this twist has the effect of introducing some
signs that we represent by the operator $B$ (fig.1).
\begin{figure}[b]
\epsffile{lopez1.ps}
\caption{Graphical representation of the expression
(6) for $CH_q(4)$. $(a,i)$ denote the vectors in a
$CH_q(2)^{\otimes 2}$
irrep, the index $a$ corresponding to the first
$CH_q(2)$ and $i$ to the second.}
\end{figure}
Next we introduce a sort of affinization of the Hopf
algebra $CH_q(D)$.
The generators of this new algebra $\widehat{CH_q(D)}$
are $\Gamma_{\mu}^{(i)}$, $E_{\mu}^{(i)}$ ($i=0,1$) and
$\Gamma_{D+1}$ verifying (3) and (4) for each value of
$i$. We impose also that the anticommutator
$ \{ \Gamma_{\mu}^{(0)} , \Gamma_{\nu}^{(1)} \}$
belongs to the center of
$\widehat{CH_q(D)}$ $ \forall \mu , \nu$.
Let us now give the explicit realization of
$\widehat{CH_q(2)}$. It is a useful example, and it
will provide us with the building blocks for any $D$.
A two-dimensional irrep $\pi_{\xi}$ of
$\widehat{CH_q(2)}$
is labelled by $\xi = (z,\lambda_1,\lambda_2) \in
C^3$ and reads as follows
\begin{eqnarray}
& \begin{array}{ccc}
\pi_{\xi}(\gamma^{(0)}_1) =
\left(\frac{\lambda_1^{-1}-\lambda_1}{q-q^{-1}}\right)^{1/2}
\left(
\begin{array}{cc} 0 & z^{-1} \\ z & 0 \\ \end{array} \right)
&, & \pi_{\xi}(\gamma^{(1)}_1) =
\left(\frac{\lambda_1-\lambda_1^{-1}}{q-q^{-1}}\right)^{1/2}
\left(
\begin{array}{cc} 0 & z \\ z^{-1} & 0 \\ \end{array} \right)
\\
& & \\
\pi_{\xi}(\gamma^{(0)}_2) =
\left(\frac{\lambda_2^{-1}-\lambda_2}{q-q^{-1}}\right)^{1/2}
\left(
\begin{array}{cc} 0 & -i z^{-1} \\ i z & 0 \\ \end{array}
\right) &, & \pi_{\xi}(\gamma^{(1)}_2) =
\left(\frac{\lambda_2-\lambda_2^{-1}}{q-q^{-1}} \right)^{1/2}
\left(
\begin{array}{cc} 0 & -i z \\i z^{-1} & 0 \\ \end{array}
\right) \\
\end{array} & \\ \nonumber \\
& \pi_{\xi}(\gamma_3) =
\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array}
\right)
\hspace{1cm} , \hspace{1cm}
\left. \begin{array}{lll} \pi_{\xi}(q^{E^{(0)}_1}) =
\lambda_1^{-1} & , &
\pi_{\xi}(q^{E^{(1)}_1}) = \lambda_1 \\
\pi_{\xi}(q^{E^{(0)}_2}) = \lambda_2^{-1} &,
& \pi_{\xi}(q^{E^{(1)}_2}) =
\lambda_2 \\ \end{array} \right. \nonumber \\
\nonumber
\end{eqnarray}
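As a quick check, the realization above indeed reproduces the defining relations; for instance, since the off-diagonal matrices in the first line square to the identity,
\[
\pi_{\xi}(\gamma_1^{(0)})^2 = \frac{\lambda_1^{-1}-\lambda_1}{q-q^{-1}}\; {\bf 1} =
\frac{\pi_{\xi}(q^{E_1^{(0)}}) - \pi_{\xi}(q^{-E_1^{(0)}})}{q-q^{-1}}\,,
\]
and similarly for the remaining generators.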
For the affine $\widehat{CH_q(2M)}$ we can define a
straightforward
generalization of the expression (5). It allows us to
introduce $M$
different affinization parameters $\{ z_n \}_{n=1}^{M}$,
one for each $\widehat{CH_q(2)}$ piece
\begin{eqnarray}
\Gamma_{2(n-1)+ \alpha}^{(i)} & = & \gamma_3 \otimes \cdots
\otimes \gamma_3 \otimes \gamma_{\alpha}^{(i)}
\otimes 1 \otimes \cdots \otimes 1 \hspace{1cm} n=1,..,M;
\; \alpha=1,2; \; i=0,1 \nonumber \\
E_{2(n-1)+ \alpha}^{(i)} & = & 1 \otimes \cdots \otimes 1
\otimes
E_{\alpha}^{(i)} \otimes 1 \otimes \cdots \otimes 1 \\
\Gamma_{D+1} & = & \gamma_3 \otimes \cdots \otimes
\gamma_3 \nonumber \\
\nonumber
\end{eqnarray}
The intertwiner $R-$matrix for two irreps with labels
$\xi = \{ z_{n},\lambda_{2n-1},\lambda_{2n} \}_{n=1}^{M}$
is defined by the
condition
\begin{equation}
R_{\xi_1 \xi_2} \Delta_{\xi_1 \xi_2}(g) = \Delta_{\xi_2
\xi_1}(g) R_{\xi_1 \xi_2} \;\;\; \forall g \in
\widehat{CH_q(2M)}
\end{equation}
\noindent
with $\Delta_{\xi_1 \xi_2}= \pi_{\xi_1} \otimes \pi_{\xi_2}
(\Delta)$. Since (6) remains true for any element
$g \in \widehat{CH_q(2M)}$, the intertwiner $R-$matrix
between two irreps (which furthermore satisfies the
Yang-Baxter equation) is given by \cite{D}
\begin{equation}
R_{\scriptscriptstyle CH_q(2M)}(u_{1},..,u_{M}) = B \:
R_{\scriptscriptstyle CH_{q}(2)^{\otimes M}}
(u_{1},..,u_{M}) \: B^{-1} \\
\end{equation}
\[ R_{\scriptscriptstyle CH_{q}(2)^{\otimes M}}
(u_{1},..,u_{M})
=R_{\scriptscriptstyle CH_q(2)}^{(1)}(u_{1}) \ldots
R_{\scriptscriptstyle CH_q(2)}^{(M)}(u_{M}) \]
The matrices $R_{\scriptscriptstyle CH_q(2)}^{(n)} =
R_{\xi_{1}^{(n)}
\xi_{2}^{(n)}}$ ($\xi^{(n)}=(z_{n},\lambda_{2n-1},
\lambda_{2n})$) are the $\widehat{CH_q(2)}$ intertwiners
\begin{eqnarray}
& & \begin{array}{ll}
R_{00}^{00}=1-e(u_{n})e_{1}e_{2} \; , & R_{11}^{11}=
e(u_{n})-e_{1}e_{2} \\
R_{01}^{10} = e_{1} - e(u_{n})e_{2} \; , & R_{10}^{01}=
e_{2} - e(u_{n})e_{1} \\
\end{array} \\
& & R_{01}^{01} = R_{10}^{10}=(e_{1}sn_{1})^{1/2}
(e_{2}sn_{2})^{1/2}(1-e(u_{n}))/sn(u_{n}/2) \nonumber \\
& & R_{00}^{11} = R_{11}^{00}=-ik(e_{1}sn_{1})^{1/2}
(e_{2}sn_{2})^{1/2}(1+e(u_{n}))sn(u_{n}/2) \nonumber
\end{eqnarray}
where $e(u_n) = cn(u_n) + i sn(u_n)$ is the elliptic
exponential of modulus $k_n$,
$e_{i}=e(\psi_{i}^{n}), sn_{i}=sn(\psi_{i}^{n})$
($i=1,2$) and $u_n, \psi_{i}^{n}$ are
elliptic angles depending on the labels $\xi_{i}^{(n)}$
(see ref.\cite{U} for details).
There is a
constraint on the irrep labels so that (12) be indeed
their intertwiner
\begin{equation}
\frac{2 (\lambda_{2n-1} - \lambda_{2n})}
{(1-\lambda_{2n-1}^2)^{1/2}
(1-\lambda_{2n}^2)^{1/2}(z_{n}^2 - z_{n}^{-2})} = k_{n} \; ,
\hspace{5mm} n=1,..,M
\end{equation}
All the $R_{CH_q(2)}^{(n)}$ matrices are independent and
commute among themselves. It is remarkable that the spectral
curve (13) of irreps that
admit an intertwiner is parametrized by $M$ independent
elliptic
moduli $k_n$. Indeed, some of them can be in the elliptic
regime and others in the trigonometric ($k \! = \! 0$).
The matrix $R_{CH_q(2M)}$ can be thought of
as the scattering matrix for objects composed of $M$
different kinds of particles. There is real interaction
when two particles of the same kind scatter off each other, given
by $R_{CH_{q}(2)}^{(n)}$; otherwise
there is only a sign coming from their statistics and
represented by the operator $B$ (fig.2).
Finally, note that the $R-$matrix (12) coincides with the
Boltzmann weights for the most
general 8-vertex free fermionic solution to the Yang-Baxter
equation in non zero magnetic field \cite{F,BS}.
\begin{figure}
\epsffile{lopez2.ps}
\caption{Graphical representation of the $CH_q(4)$
$R-$matrix.}
\end{figure}
\subsection*{3. Extended supersymmetry}
\indent
In order to analyze the connection of $\widehat{CH_q(2M)}$
with supersymmetry algebras,
we will study the limit in which the $R-$matrix (12)
becomes trigonometric.
Let us consider first the case $D=2$ in detail. This case
turns out to be related to an $N=2$ (2 supersymmetry charges)
integrable Ginzburg-Landau model.
We shall also give a heuristic motivation for the
construction of the Hopf algebra $\widehat{CH_q(2)}$ based
on its trigonometric 6-vertex limit.
The 6-vertex free fermionic solutions are given by the
intertwiner $R-$matrix between nilpotent irreps of
$U_{\epsilon}(\widehat{sl(2)})$, $\epsilon^4 \! = \! 1$
($\Rightarrow \epsilon \! = \! i$) \cite{N}.
In a $U_{\epsilon=i}(sl(2))$ nilpotent irrep the values of
the special Casimirs are $Q_{\pm}^2 \! = \! 0$ ($Q_{\pm}=
S_{\pm} \epsilon^{\pm H/2}$) and $K^2 \! = \! \lambda^2$
arbitrary ($K\!=\! \epsilon^{H}$); namely, they are the
highest weight case of the cyclic irreps.
Furthermore when $\epsilon^4=1$ the anticommutator
$\{ Q_{+},Q_{-} \}$ also belongs to the center, suggesting
the connection
with a Clifford algebra through the mixing of the positive
and negative root generators $Q_{\pm}$. The total fermion
number is conserved in the 6-vertex solutions to the
Yang-Baxter equation, but it is not in the elliptic regime.
Hence a non trivial mixing is needed to represent the
elliptic regime. The Hopf algebra $CH_q(2)$ assigns
different {\it central elements}
$\left[ E_{1} \right]_q , \left[ E_{2} \right]_q$
to the square of the generators $\gamma_{1}$, $\gamma_{2}$
respectively, in such a way that the mixing can only be
undone
(trigonometric limit) when $E_{1} \! = \! E_{2} \! = \!E$.
It implies $k=0$ in (13). For the affine $\widehat{CH_q(2)}$
this limit leads to
$U_{\epsilon=i}(\widehat{sl(2)})$ (this statement is only
rigorous for the affine case): i.e. $R_{CH_q(2)}$
becomes the $R-$matrix intertwiner for
$U_{\epsilon=i}(\widehat{sl(2)})$, provided the labels of
the two algebras are related by $\lambda \! = \! q^E$.
Using the generators
$Q_{\pm},\overline{Q}_{\pm} \in U_{\epsilon =i}
(\widehat{sl(2)})$, we
can define an $N=2$ supersymmetry algebra with topological
extension $T_{\pm}$ \cite{BL,V}
\begin{eqnarray}
& & Q_{\pm}^{2} = \overline{Q}_{\pm}^{2} =
\{ Q_{\pm} ,\overline{Q}_{\pm} \} =0 \nonumber \\
& & \{ Q_{\pm} , \overline{Q}_{\mp} \} =2T_{\pm} \; ,
\hspace{5mm}
\mid \! T_{\pm} \! \mid = \left[ E \right]_q \\
& & \{ Q_{+} , Q_{-} \} =2m \: z^{2} \; , \hspace{5mm}
\{ \overline{Q}_{+} ,
\overline{Q}_{-} \} =2m \: z^{-2} \nonumber
\end{eqnarray}
satisfying the Bogomolnyi bound $\mid \! T_{\pm} \! \mid =m$.
The free fermionic condition (1) ensures the $N=2$ invariance
of the $R-$matrix.
Moreover, the $N=2$ part of the scattering matrix for the
solitons of the Ginzburg-Landau superpotential
$W=X^{n+1} /(n+1)- \beta X$ \cite{FI} is given by $R-$matrices
of $U_{\hat{q}}(\widehat{gl(1,1)})$ with $\hat{q}^{2n}\!=\!1$
\cite{SR}, or equivalently by those of $U_{\epsilon = i}
(\widehat{sl(2)})$ between nilpotent irreps with labels
$\lambda\!=\! \hat{q}$ \cite{M}.
The Ginzburg-Landau models have a particular importance in
the context of $N=2$ supersymmetry, since they allow to
classify a wide variety of $N=2$ superconformal field
theories \cite{VW}.
Of great interest are the relevant perturbations of these
theories giving massive integrable models, as happens for
the superpotential
$W(X)=X^{n+1} /(n+1)- \beta X$.
We would now like to make plausible in this context why the
supersymmetry algebra (14) has a non-trivial comultiplication.
In an $N=2$ Ginzburg-Landau model, the superpotential enters
explicitly in the SUSY conmutators through
\begin{eqnarray}
& & \{ Q_{+}, \overline{Q}_{+} \} = \Delta W , \hspace{1cm}
\{ Q_{-}, \overline{Q}_{-} \} = \Delta W^{*} \\
& & \Delta W = W(X^{j})-W(X^{i}) \nonumber
\end{eqnarray}
with $X(-\infty) \! = \! X^i$, $X(\infty) \! = \! X^j$ and
$X^i, X^j$ minima of $W$. Let's call $K_{(i,i+l)}$ the
soliton going from $X^i$ to $X^j$, where $l=j-i$.
It is straightforward to see that $\Delta W$ depends on
both $l$ and $i$. Naively, the dependence on $i$ was
not expected since
all the solitons with the same $l$ are equivalent. For the
superpotential proposed it is possible to obtain a
supersymmetric algebra without this dependence, at the price
of reabsorbing it in a non-trivial quantum group
comultiplication
\begin{eqnarray}
\Delta (Q_{\pm}) = q^{\pm E} \gamma_{3} \otimes
Q_{\pm} + Q_{\pm} \otimes {\bf 1} & & \\
\Delta (\overline{Q}_{\pm}) = q^{\mp E} \gamma_{3} \otimes
\overline{Q}_{\pm} + \overline{Q}_{\pm}
\otimes {\bf 1} & & \nonumber
\end{eqnarray}
\indent
On the other hand, it is worth noting the relation of (16)
with the fermion number of the solitons.
In the solitonic sectors, the fermion number operator
acquires a fractional constant piece due to the interaction
of the fermionic degrees of freedom with the solitonic
background. The fractional piece of the fermion number in
a soliton sector $K_{(i,j)}$, is given by \cite{FI2,G}
\begin{equation}
f=- \frac{1}{2\pi} ( \: Im \: \ln{ \: W^{''} (X)} \:)
\mid_{X^i}^{X^j} \: \:
= \: \: \frac{s}{n} \; \hspace{1cm} s=1,...,n-1
\end{equation}
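As an illustration (assuming for simplicity $\beta > 0$ real, and up to the $2\pi$ branch ambiguities of the logarithm), the critical points of $W$ are $X^j = \beta^{1/n}\, e^{2 \pi i j/n}$ and $W^{''}(X) = n X^{n-1}$, so that for the soliton $K_{(i,j)}$
\[
f = -\frac{(n-1)(j-i)}{n} \;\; (\textup{mod } 1) \: = \: \frac{j-i}{n} \;\; (\textup{mod } 1)\,,
\]
which reproduces the values $s/n$, $s=1,\ldots,n-1$.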
The relation with $CH_q(2)$ labels is
$q^{E}\!=\!e^{i \pi s/n}$. Therefore $q^{\pm E} \! \gamma_3$
in (16) would be the analog of $e^{\pm i \pi F}$, with $F$
the fermion number operator. This interpretation fails for
$\Delta(\overline{Q}_{\pm})$, where the signs are
interchanged, leading in fact to a quantum group structure
instead of a Lie superalgebra.
Let us return to building extended supersymmetry algebras
from the general $\widehat{CH_q(2M)}$, in the same sense
as above.
The trigonometric limit of
$\widehat{CH_q(2M)}$ is obtained as an independent
trigonometric limit
in each $\widehat{CH_q(2)}$ piece. Then the affine Hopf
algebra
$\widehat{CH_q(2M)}$ becomes
in essence the anticommuting tensor product of $M$
$U_{\epsilon=i}(\widehat{sl(2)})$ factors, each with
its own spectral parameter.
Imposing that the eigenvalues of all the central
charges $E_i$ and the spectral parameters $z_i$
($i=1,..,M$) coincide,
we get $M$ copies of the same structure (14),
$\{ Q_{\pm}^{(i)}, \,
\overline{Q}_{\pm}^{(i)}, \, T_{\pm}^{(i)} \! =
\! T_{\pm} \}_{i=1}^{M}$.
Therefore we find an $N=2M$ supersymmetry algebra with
$M$ topological charges. Indeed, the dimension of a
$\widehat{CH_q(2)}$ irrep is $2^M$ as is needed to
saturate the Bogomolnyi bound $\mid \! T_{\pm}^{(i)} \!
\mid=\mid \! T_{\pm} \! \mid=m$.
Besides, we have seen that the $\widehat{CH_q(2M)}$
irreps can be thought of as collections of $M$
independent $\widehat{CH_q(2)}$ solitons.
Let us consider the more general trigonometric limit
with equal values of the central charges $E_i$, but
arbitrary spectral parameters $z_i$ ($i=1,..,M$).
Then the charges
\begin{equation}
Q_{\pm}^{T}=\sum_{i=1}^{M} Q_{\pm}^{(i)} \hspace{6mm},
\hspace{6mm}
\overline{Q}_{\pm}^{T} = \sum_{i=1}^{M}
\overline{Q}_{\pm}^{(i)}
\end{equation}
verify the commutation relations of $N=2$ supersymmetry
(14). In fact, (14) is satisfied even if we allow
different central charges $E_i$ . However, in this case
the comultiplication doesn't
preserve the expression (18) of
$Q_{\pm}^{T},\overline{Q}_{\pm}^{T}$.
\subsection*{4. Generalized $XY$ spin chains}
\indent
The quantum group structure plays an important role
in 2-dimensional statistical models, since $R-$matrix
intertwiners provide systematic solutions to the
integrability condition, the Yang-Baxter equation. In
this way integrable models can be built associated to
a quantum group, allowing one to connect integrability with
an underlying symmetry principle. As noted above, the
intertwiner $R-$matrix for the Clifford-Hopf algebra
$\widehat{CH_q(2)}$ reproduces the 8-vertex free fermion
model in magnetic field. In this section we will analyze
the model defined by the algebras $\widehat{CH_q(D)}$
for general $D=2M$. Following the transfer matrix method,
the study of a 2-dimensional statistical model is
equivalent to that of its corresponding spin chain.
The L-site
hamiltonian for a periodic chain defined by the
$\widehat{CH_q(2M)}$ Hopf algebras is given by (provided
that $R(0)\!=\! {\bf 1}$)
\begin{eqnarray}
& & H= \sum_{j=1}^{L} i \frac{\partial}{\partial u}
R_{j,j+1}(u) \mid_{u=0} \\
& & H= \sum_{j=1}^{L} \sum_{n=1}^{M} \{ (J_{x}^{n}
\sigma_{x,j}^{n}
\sigma_{x,j+1}^{n} + J_{y}^{n} \sigma_{y,j}^{n}
\sigma_{y,j+1}^{n}) \sigma_{z,j}^{n+1} ... \sigma_{z,j}^{M}
\sigma_{z,j+1}^{1} ... \sigma_{z,j+1}^{n-1} \: + \: h^{n}
\sigma_{z,j}^{n} \} \nonumber \\
\nonumber
\end{eqnarray}
where $\sigma_{a}^{n}$ ($a=x,y,z \, , \; n=1,..,M$) are
$M$ sets of Pauli matrices, and the constants
$J_{x}^{n},J_{y}^{n},h^{n}$ depend on the quantum labels
of the irreps whose intertwiner is $R$
\begin{eqnarray}
J_{x}^{n} & = & 1+ \Gamma^{n} \; , \; J_{y}^{n} \, = \, 1-
\Gamma^{n} \hspace{1cm} n=1,..,M \nonumber \\
\Gamma^{n} & = & k_{n}sn(\psi^{n}) \\
h^{n} & = & 2 cn(\psi^{n}) \nonumber
\end{eqnarray}
The requirement $R(0)= {\bf 1}$
implies $\psi_{1}^{n}=\psi_{2}^{n}=\psi^{n}$.
The hamiltonian (19) can be diagonalized through a
Jordan-Wigner
transformation and its excitations behave as free fermions
(massless when
$J_{x}^{n}=J_{y}^{n}$, massive otherwise).
This model provides $M$ groups of Pauli matrices
$\sigma_{a,j}^{n}$ ($a=x,y,z$)
for each site $j$ on the chain, so it behaves as having
$M$ layers
with an $XY$ model defined in each layer.
The factors $(\sigma_{z,j}^{n+1}
... \sigma_{z,j}^{M} \sigma_{z,j+1}^{1} ...
\sigma_{z,j+1}^{n-1})$
make the fermionic excitations on different layers
anticommute.
Thus the algebra $\widehat{CH_q(2M)}$ provides a way
to put different
non-interacting fermions in a chain with a quantum group
interpretation.
When $M \! = \! 1$, $H$
reduces to the hamiltonian of an $XY$ Heisenberg chain in
an external
magnetic field $h$, that is, the spin chain associated with
the 8-vertex free fermion model \cite{LSM}
\begin{equation}
H= \sum_{j=1}^{L} \{ J_{x} \sigma_{x,j}
\sigma_{x,j+1} + J_{y} \sigma_{y,j}
\sigma_{y,j+1} + h \sigma_{z,j} \} \\
\nonumber
\end{equation}
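For instance, for $M=1$, using the standard Jordan-Wigner substitution $c_j = \big(\prod_{l<j} \sigma_{z,l}\big)\,\sigma_j^{-}$, with $\sigma^{\pm}=(\sigma_x \pm i \sigma_y)/2$ (we only sketch it; the signs of the quadratic terms depend on the convention chosen), the hamiltonian (21) becomes, up to boundary terms,
\[
H = \sum_{j} \big\{ (J_x+J_y)\big( c_j^{\dagger} c_{j+1} + c_{j+1}^{\dagger} c_j \big)
+ (J_x-J_y)\big( c_j^{\dagger} c_{j+1}^{\dagger} + c_{j+1} c_j \big) + h\,\big( 2 c_j^{\dagger} c_j - 1 \big) \big\}\,,
\]
which is diagonalized by a Fourier plus Bogoliubov transformation. In the general hamiltonian (19), the strings of $\sigma_z$ play the role of the Jordan-Wigner tails that make the fermions of different layers anticommute.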
The aim of this section is to show that the model above
is equivalent under some restrictions to the generalized
integrable $XY$
chain proposed and solved in ref.\cite{S},
\begin{equation}
\widetilde{H} = - \sum_{k=1}^{K}
\sum_{j=1}^{L'} (\tilde{J}_{x}^{k}
\sigma_{x,j} \sigma_{x,j+k} + \tilde{J}_{y}^{k} \sigma_{y,j}
\sigma_{y,j+k} ) \sigma_{z,j+1} \ldots \sigma_{z,j+k-1} \,
+ \, h \sum_{j=1}^{L'} \sigma_{z,j} \\
\nonumber
\end{equation}
finding in this way a quantum group structure for this
integrable model.
The hamiltonian (22) can also be diagonalized with a
Jordan-Wigner transformation and its quasi-particles behave
as free fermions.
The main application of the generalized $XY$ model is the
problem of covering a surface with horizontal and vertical
dimers. Indeed, the ground state of $\widetilde{H}$ for a
particular choice of parameters reproduces the two-dimensional
pure dimer problem \cite{S}, first solved in terms of a
Pfaffian \cite{K}.
To see the relation between $H$ and $\widetilde{H}$, let us
choose identical $XY$ models on each layer of the former
chain
\begin{equation}
J_{x}^{n}=J_{x} \: , \hspace{8mm} J_{y}^{n}=J_{y} \: ,
\hspace{8mm} h^{n}=h \hspace{8mm} n=1,..,M
\end{equation}
and rearrange the spin labels to form a single-layer chain
\begin{equation}
\sigma_{a,j}^{n}=\sigma_{a,j+n} \hspace{12mm} n=1,..,M \,
; \; a=x,y,z
\end{equation}
Then the hamiltonians $H$ and $\widetilde{H}$ coincide if
we set in the latter
\begin{equation}
\tilde{J}_{x}^{k} = - J_{x} \delta_{M,k} \, , \hspace{8mm}
\tilde{J}_{y}^{k} = - J_{y} \delta_{M,k}
\hspace{1cm} k=1,..,K
\end{equation}
\indent
The general $\widetilde{H}$ (22) is obtained by adding
hamiltonians $H^{(M)}$ derived from $\widehat{CH_q(2M)}$
$R-$matrices.
The fact that this sum is also solvable
relies on setting equal parameters
in each $H^{(M)}$ (this is the same condition
that leads to $N=2M$ supersymmetry in the
trigonometric limit of $\widehat{CH_q(2M)}$).
Therefore, the affine quantum Clifford Hopf algebras
$\widehat{CH_q(2M)}$ encode the hidden quantum group
for the generalized $XY$ spin chain (22).
\subsection*{5. Comments}
\indent
We have studied the quantum Clifford algebras
$\widehat{CH_q(2M)}$ in connection with extended
supersymmetry and with statistical integrable models.
It is worth noting that the hamiltonian derived from
$\widehat{CH_q(4)}$
in the trigonometric regime and without magnetic field is
the limiting case $U \rightarrow \infty$ of the two layer
chain \cite{B}:
\begin{equation}
H= - \frac{1}{2} \sum_{j=1}^{L} \{ ( \sigma_{x}^{j}
\sigma_{x}^{j+1} + \sigma_{y}^{j} \sigma_{y}^{j+1} )
(1-U \tau_{z}^{j+1}) + ( \tau_{x}^{j} \tau_{x}^{j+1} +
\tau_{y}^{j} \tau_{y}^{j+1} ) (1- U \sigma_{z}^{j}) \}
\end{equation}
The coupling between the two layers in this model implies real
interaction, so the excitations are not free fermions, and the
ground state presents spontaneous magnetization
(if $U \neq 0, \infty$).
It can still be solved by Bethe Ansatz techniques, but an
$R-$matrix interpretation
for it is not known. The algebra $\widehat{CH_q(4)}$ gives us
a simple way of coupling two $XY$ models. Perhaps it would be
possible
to twist (maybe in a way related to a quantum deformation
proposed recently for the Clifford algebras \cite{C}) and
break the full set of generators to a
shorter set giving a quantum group structure for this model.
We have built extended supersymmetric algebras
from the $\widehat{CH_q(2M)}$ generators in the trigonometric
limit.
The Clifford Hopf algebras can be thought of as elliptic
generalizations of supersymmetry (the anticommutators of
charges that
give the momentum $P$ and $\overline{P}$ get deformed in
the
elliptic case, but are still central elements). It would be
interesting to
analyze what deformation of the Poincar\'e group one gets
in such a way.
\vspace{1cm}
{\bf Acknowledgements}
\indent
The author would like to thank A. Berkovich, R. Cuerno,
C. G\'omez
and G. Sierra for discussions and encouragement. This work is
supported by M.E.C. fellowship AP91 34090983.
\newpage
|
1,108,101,563,214 | arxiv | \section{INTRODUCTION}
\label{sec:intro}
To develop high-performance Adaptive Optics (AO) systems, it is of paramount importance to develop sufficiently accurate models of active optical components, such as Deformable Mirrors (DMs) or spatial light modulators~\cite{vogel2006modeling,stewart2007open,vogel2010modeling,haber2020modelingHaberVerhaegen}. Once high-fidelity models are established and properly validated, they can be used to design high-performance model-based controllers~\cite{ruppel2012feedforward,mocci2020pi,kuiper2018advances,haber2016framework,manetti2013self,ravensbergen2009deformable,polo2013linear,bifano1999microelectromechanical,haber2013predictive,haber2013identification,kulcsar2012minimum,chiuso2009dynamic,song2011controller}.
The main focus of this manuscript is on the development of control algorithms for DMs. In their essence, DMs are reflective optical elements whose surface deformations are precisely controlled by actuators. Widely-used DM actuation principles are based on MEMS, electromagnetic, electrostatic, piezoelectric, and ferrofluidic technologies and concepts. For sufficiently wide ranges of applied control actions (control voltages), MEMS and piezoelectric DMs, as well as some other DM types, exhibit nonlinear behavior. The DM nonlinearities can manifest themselves in several forms. For example, for large magnitudes of applied control actions that are necessary to produce large mirror surface deformations, neighboring actuators become nonlinearly coupled through the observed DM surface deformation. This is especially the case for MEMS DMs~\cite{haber2021general}. Furthermore, a typical voltage-deformation response of a single actuator of a MEMS DM is nonlinear~\cite{haber2021general}. On the other hand, DM concepts based on piezoelectric actuators are prone to hysteresis nonlinearities~\cite{ma2018hysteresis,schmerbauch2020influence}. In a number of scenarios and applications, DM behavior can change over time, and this requires online procedures for updating the DM models. For example, significant temperature fluctuations of the reflective surfaces, or temperature fluctuations of the main DM components, might significantly alter the DM behavior and dynamics. This is especially the case for space optics and for optics operating with high-power lasers, where DMs can directly absorb a significant portion of external heat fluxes, or the heat conduction from the supporting elements and devices can cause nonuniform DM temperature increases and significant temperature gradients over active DM surface areas~\cite{haber2020modeling,haber2021modeling,haberThesis,haber2013identification,habets2016multi,saathof2015actuation,zhang2020optimization,banyal2013opto}. To properly analyze, predict, and control the influence of thermal phenomena on DM behavior, it is often necessary to use data-driven techniques to estimate thermal dynamics~\cite{haber2019identification,haber2020modeling,haber2019subspace}.
If not properly modeled and if not taken into account when designing control algorithms, these nonlinearities and time-varying DM behavior can significantly degrade the achievable closed-loop performance of AO systems. Widely used approaches for DM control are based on pre-estimated linear time-invariant DM models in the form of influence matrices, see for example~\cite{Haber:13,haber2021general} and references therein. In addition to this, most of the control approaches do not update DM models during system operation. The widely-used linear control approaches might work satisfactorily when the nonlinear DM behavior is not excited and when the DM behavior is time-invariant~\cite{haber2021general}. For example, when the DM operating voltages are close to the bias voltages around which a linear model is estimated, we can expect that the linear DM model will be accurate, and consequently, a linear controller will work satisfactorily. That is, we can expect that linear control methods will produce good results for small to medium voltage operating ranges. Since the achievable magnitudes of the Peak-to-Valleys (P-Vs) of DM surface deformations are directly correlated with the sizes of the voltage operating ranges, linear control methods can relatively accurately correct for wavefront aberrations with small to medium P-Vs. However, if it is necessary to correct for wavefront aberrations with larger P-Vs, then the operating voltages need to deviate significantly from the linearization points. These deviations can significantly degrade the accuracy of linear models, and consequently, linear controllers might not be able to accurately correct for wavefront aberrations~\cite{haber2021general}. These key facts and observations motivate the development of novel controllers for DMs that will be able to extend the DM voltage operating ranges, as well as to successfully cope with DM nonlinearities and time-varying DM behavior.
Nonlinear estimation and control approaches have received less attention in the AO community. The most probable reason for this is that, compared to linear control methods, the design and implementation of nonlinear control methods are significantly more complex. There are several obstacles that need to be overcome to properly design and implement nonlinear DM control algorithms. One of the main obstacles is that it is challenging to postulate appropriate model structures that can accurately capture the nonlinear DM behavior. Overly complex nonlinear models with a large number of parameters are impractical for real-time control. The method proposed in~\cite{guzman2010deformable} uses nonparametric estimation techniques to postulate and estimate DM models that can, to some degree, model the DM nonlinearities. However, in order to accurately fit DM models, this approach might require a large number of wavefront samples. In principle, machine learning and reinforcement learning techniques~\cite{landman2021self,ke2019self,nousiainen2021adaptive,haber2019steady,mashrafi2020machine} can be used to estimate nonlinear DM models and design control algorithms that can deal with nonlinearities. However, these approaches often need a large number of iterations to converge and reach the optimal correction performance. Speeding up the convergence rate by incorporating some form of \textit{a priori} model is an important research topic.
A viable alternative to purely nonlinear control approaches is to dynamically update linear model parameters on the basis of the actual values of DM control voltages and on the basis of the observed wavefronts. Gain scheduling techniques~\cite{leith2000survey} are based on the idea of defining a number of linear models and corresponding controllers, each of which is defined for a certain voltage operating range. On the basis of the most current values of the control voltages, gain scheduling techniques select the most appropriate model. Another approach is to use adaptive control methods~\cite{aastrom2013adaptive}. Adaptive control methods dynamically update model parameters on the basis of the observed wavefront aberrations. To the best of our knowledge, the adaptive control idea for DM control has received less attention, and the true potential of this method for DM control has not been thoroughly investigated. A simplified adaptive control method is used in~\cite{zou2009high} to iteratively calibrate a DM model. A recursive least-squares method~\cite{ljung1999} for updating DM models has been used in~\cite{huang2015high}. The limitation of the methods proposed in~\cite{zou2009high,huang2015high} is that they do not take into account the actuator limits. Furthermore, the adaptive control method proposed in~\cite{huang2015high} works completely in closed-loop, and consequently, special care needs to be dedicated to proper parameter tuning and to the problems of persistency of excitation and estimation consistency~\cite{landau2011adaptive}. Recently, in~\cite{haber2021general,haber2021dual} we have proposed adaptive control algorithms for DM control. The approach presented in~\cite{haber2021general} combines an open-loop control approach with a batch-data multivariable least-squares problem for dynamically updating the DM influence matrix. This method does not update the DM model in the closed loop, and its tuning is relatively simple. However, this method might require a large number of data samples to estimate the DM influence matrix. On the other hand, the method proposed in~\cite{haber2021dual} combines a recursive least-squares method with a feedback control algorithm. However, the tuning of this method is far from trivial, and special attention needs to be dedicated to the problems of persistency of excitation and estimation consistency.
In this manuscript, we investigate the possibility of combining a simple open-loop control method with a recursive least-squares method for dynamical estimation of the DM influence matrix. The developed method explicitly takes into account actuator constraints, and thus it has significant advantages over the approaches proposed in~\cite{zou2009high,huang2015high}. In every iteration, the DM influence matrix is updated by using the recursive least-squares method, and the updated influence matrix is used to compute open-loop control actions. Consequently, since the control process is performed in open loop and since only the parameters of the recursive least-squares method need to be tuned, the tuning of this method is simpler compared to the tuning of the adaptive feedback control approach proposed in~\cite{haber2021dual}. On the other hand, the advantage of the developed method over the method proposed in~\cite{haber2021general} is that it requires fewer samples to converge. This is because the adaptive estimation of the influence matrix is performed by using the recursive least-squares method, instead of using the batch-data multivariable least-squares method. We experimentally verify the developed approach on a Boston Micromachines MEMS DM with 140 actuators. In this manuscript, we present preliminary control results; further experimental investigation and improvements of the proposed approach will be presented in subsequent publications. The preliminary experimental results reported in this manuscript demonstrate good potential for using the developed method for DM control.
This manuscript is organized as follows. In Section~\ref{sec:controlAlgorithm}, we present the control method. In Section~\ref{sec:experimentalResults}, we present the experimental results. Conclusions and future research directions are briefly discussed in Section~\ref{sec:conclusions}.
\section{CONTROL METHOD}
\label{sec:controlAlgorithm}
In this section, we present the control method. We assume the following DM model
\begin{align}
\mathbf{z}_{k+1}=L_{k}\mathbf{b}_{k}+\mathbf{d}_{k},
\label{controlEquation}
\end{align}
where $k$ is a discrete time instant at which the DM surface deformation is observed or control actions are sent to the DM ($k$ is also referred to as a control iteration in the sequel), $\mathbf{z}_{k} \in \mathbb{R}^{n}$ is a vector consisting of Zernike coefficients obtained by approximating the DM surface deformation by using Zernike basis functions, $n$ is the total number of Zernike coefficients, $L_{k}\in \mathbb{R}^{n\times m}$ is an influence matrix that can dynamically change at every iteration $k$, $\mathbf{d}_{k}\in \mathbb{R}^{n}$ is a vector modeling the effect of the measurement noise, and $\mathbf{b}_{k} \in \mathbb{R}^{m}$ is a vector of control inputs $u_{k,i}$, $i=1,2,3,\ldots, m$, constructed as follows
\begin{align}
\mathbf{b}_{k}=\begin{bmatrix}u_{k,1}^{\theta} & u_{k,2}^{\theta} & \ldots & u_{k,m}^{\theta} \end{bmatrix}^{T},
\label{controlVectorB}
\end{align}
where $\theta\in \mathbb{R}$ quantifies the voltage-deformation nonlinearity of a single actuator, and $m$ is the total number of DM actuators. We use an experimental setup with a sensor that is able to directly obtain mirror surface deformations (for more details, see Section~\ref{sec:experimentalResults}). Consequently, the objective of the control method is to produce a desired mirror surface deformation that is expressed by using the Zernike basis functions. However, with minor modifications, the proposed method can be used if the control objective is to produce or correct general wavefronts that are expressed by using the Zernike basis functions or some other basis functions.
To simplify the control algorithm implementation, we scale down the control inputs $u_{k,i}$ to the interval $[0,1]$, that is, $0\le u_{k,i } \le 1 $. The value of $\theta$ depends on the mirror type used. To test the developed method, we use a Boston Micromachines MEMS DM with 140 actuators. Often, it is assumed that $\theta=2$ for this DM type. However, in~\cite{haber2021general} we demonstrated that a more accurate estimate of this constant is $\theta= 1.742$. Consequently, we develop the control method by assuming $\theta= 1.742$. If other DM types are used, then this constant should be re-estimated by using the procedure explained in~\cite{haber2021general}.
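As a minimal MATLAB sketch (the variable names are ours, not from any library), the vector $\mathbf{b}_{k}$ in \eqref{controlVectorB} can be formed from an $m$-dimensional vector of scaled control inputs as follows:
\begin{verbatim}
% u is an m-by-1 vector of control inputs scaled to [0,1]
theta = 1.742;          % voltage-deformation nonlinearity
u = min(max(u, 0), 1);  % enforce the scaling 0 <= u_i <= 1
b = u.^theta;           % entries u_{k,i}^theta of b_k
\end{verbatim}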
To apply the recursive least-squares method~\cite{ljung1999} for the estimation of the influence matrix, it is necessary to transform equation \eqref{controlEquation} into a more suitable form. In this form, the entries of the influence matrix should appear as vector entries. For that purpose, we apply the $\text{vec}(\cdot)$ operator~\cite{verhaegen2007filtering} to \eqref{controlEquation}. When applied to a matrix, this operator produces a vector in which the matrix entries are stacked column-wise. By applying the $\text{vec}(\cdot)$ operator to \eqref{controlEquation}, we obtain
\begin{align}
\mathbf{z}_{k+1}=H_{k}\mathbf{x}_{k}+\mathbf{d}_{k},
\label{vectorizedEq}
\end{align}
where $H_{k}\in \mathbb{R}^{n\times(n\cdot m)}$, $H_{k}=\mathbf{b}_{k}^{T} \otimes I_{n}$, $I_{n}\in \mathbb{R}^{n\times n}$ is the $n\times n$ identity matrix, $\mathbf{x}_{k}=\text{vec}\big(L_{k}\big)$, $\mathbf{x}_{k}\in \mathbb{R}^{n\cdot m}$ is the vector of influence matrix parameters, and the notation $\otimes$ denotes the Kronecker matrix product. For more information about the vectorization of matrix equations and the Kronecker product, see~\cite{laub2005matrix}.
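For concreteness, a minimal sketch of the vectorized model \eqref{vectorizedEq}, assuming the influence matrix L and the input vector b are already available in the MATLAB workspace:
\begin{verbatim}
[n, m] = size(L);
H = kron(b.', eye(n));  % H_k = b_k^T (Kronecker) I_n
x = L(:);               % vec(L_k): column-wise stacking
zPred = H*x;            % identical to L*b (noise-free part)
\end{verbatim}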
To initialize the recursive least-squares method, we need an initial value of the vector $\mathbf{x}_{k}$. That is, we need to estimate the initial values of the influence matrix parameters. We use a multivariable least-squares method explained in~\cite{haber2021dual,haber2021general,Haber:13} to estimate these parameters. For the time being, we assume that over a discrete-time horizon $k=0,1,2,\ldots,s-1$, of length $s\ge m$ ($s$ is a parameter selected by the user; however, it has to be at least as large as the number of DM actuators), the influence matrix $L_{k}$ is constant and equal to an initial value, denoted by $L_{0}$. Then, similarly to the approach used in~\cite{haber2021general}, on the basis of \eqref{controlEquation}, we can form a multivariable least-squares problem, and by solving it, we can obtain the initial values of the influence matrix parameters. This procedure produces the following estimate of the influence matrix
\begin{align}
\hat{L}_{0}=ZB^{T}\Big(BB^{T} \Big)^{-1},
\label{initialValueEstimate}
\end{align}
where the notation $\hat{L}_{0}$ denotes an estimate of the initial value of the influence matrix, and
\begin{align}
Z=\begin{bmatrix}\mathbf{z}_{1} & \mathbf{z}_{2} & \ldots & \mathbf{z}_{s} \end{bmatrix}, \;\; B=\begin{bmatrix} \mathbf{b}_{0} & \mathbf{b}_{1} & \ldots & \mathbf{b}_{s-1} \end{bmatrix},
\end{align}
and where $Z\in \mathbb{R}^{n\times s}$ and $B\in \mathbb{R}^{m\times s}$. The columns of $B$ are the vectors $\mathbf{b}_{i}$, $i=0,1,\ldots, s-1$. Every vector $\mathbf{b}_{i}$ is constructed from control inputs $u_{i,j}$, $j=1,2,\ldots, m$, randomly selected from a normal distribution with a mean of $0.5$ and a standard deviation of $0.15$. If a randomly generated value of $u_{i,j}$ is larger than $1$, then we set that value to $1$. Similarly, if the randomly generated value of $u_{i,j}$ is smaller than zero, we set that value to $0$. The matrix $Z$ consists of the vectors $\mathbf{z}_{i}$, $i=1,2,\ldots, s$, obtained by decomposing the observed DM surface deformation by using the Zernike basis functions.
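This initialization can be sketched in MATLAB as follows, assuming $n$, $m$, and theta are defined as above; the function observeZernike() is a hypothetical placeholder for applying the control inputs to the DM and decomposing the observed surface deformation into Zernike coefficients:
\begin{verbatim}
s = 400;                     % number of random samples
U = 0.5 + 0.15*randn(m, s);  % random control inputs u_{i,j}
U = min(max(U, 0), 1);       % clip to [0,1]
B = U.^theta;                % columns b_i, as defined above
Z = zeros(n, s);
for i = 1:s
    Z(:, i) = observeZernike(U(:, i));  % hypothetical call
end
L0 = (Z*B.')/(B*B.');        % batch estimate of L_0
\end{verbatim}
The MATLAB right division in the last line implements the normal-equation solution \eqref{initialValueEstimate}.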
The goal of the control algorithm is to produce a desired mirror surface shape. By expressing this desired surface shape in the Zernike basis, we define the vector of desired Zernike coefficients represented by the vector $\mathbf{z}_{D}$. By using the estimated influence matrix $\hat{L}_{0}$, we estimate the initial values of the control actions by solving the following open-loop optimization problem
\begin{align}
& \min_{\mathbf{b}_{0}} \left\| \mathbf{z}_{D} - \hat{L}_{0}\mathbf{b}_{0} \right\|_{2}^{2}, \label{optimizationProblem1} \\
& \text{subject to:} \;\;\; \underline{\mathbf{b}} \le \mathbf{b}_{0} \le \overline{\mathbf{b}}, \label{optimizationProblem2}
\end{align}
where $ \underline{\mathbf{b}} \in \mathbb{R}^{m}$ and $\overline{\mathbf{b}} \in \mathbb{R}^{m}$ are the lower and upper bounds on the optimization variable $\mathbf{b}_{0}$, and the relation $\le$ is applied element-wise. Since the control inputs for the DM are scaled down to the interval $[0,1]$, we select $\underline{\mathbf{b}}$ as a vector of zeros, and $\overline{\mathbf{b}}$ as a vector of ones. We solve the problem \eqref{optimizationProblem1}-\eqref{optimizationProblem2} by using the MATLAB function $\text{lsqlin}(\cdot)$. Let the solution of \eqref{optimizationProblem1}-\eqref{optimizationProblem2} be denoted by $\hat{\mathbf{b}}_{0}$. Once this solution is found, we can easily compute the control inputs $\hat{u}_{0,i}$ from the entries of $\hat{\mathbf{b}}_{0}$ (by raising them to the power $1/\theta$).
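A minimal sketch of this step, assuming the batch estimate L0 and the desired Zernike vector zD are available in the workspace ($\text{lsqlin}(\cdot)$ is part of the MATLAB Optimization Toolbox):
\begin{verbatim}
lb = zeros(m, 1);    % lower bound: vector of zeros
ub = ones(m, 1);     % upper bound: vector of ones
b0 = lsqlin(L0, zD, [], [], [], [], lb, ub);
u0 = b0.^(1/theta);  % recover the control inputs
\end{verbatim}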
Besides the initial value of the influence matrix parameters given by the vector $\hat{\mathbf{x}}_{0}=\text{vec}(\hat{L}_{0})$, and the initial vector $\hat{\mathbf{b}}_{0}$ computed by solving \eqref{optimizationProblem1}-\eqref{optimizationProblem2}, we also need an additional matrix to initialize the control method presented below. For the initial iteration, we define the matrix $S_{0}=\delta I_{n\cdot m}$, $S_{0} \in \mathbb{R}^{(n\cdot m )\times (n\cdot m )}$, where $\delta >0 $ is a positive real parameter selected by the user, and $I_{n\cdot m}\in \mathbb{R}^{(n\cdot m) \times (n\cdot m)}$ is an identity matrix.
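A minimal sketch of this initialization in MATLAB, assuming L0 is the batch estimate computed above:
\begin{verbatim}
delta = 1e-2;        % user-selected parameter delta > 0
S = delta*eye(n*m);  % S_0
xhat = L0(:);        % x_0 = vec(L_0), the batch estimate
\end{verbatim}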
\\
\\
The control algorithm consists of the following two steps that are performed iteratively for $k=0,1,2,\ldots$
\begin{enumerate}
\item Observe the DM surface deformation and form the vector of Zernike coefficients $\mathbf{z}_{k+1}$. By using the values of $S_{k}$ and $\hat{\mathbf{x}}_{k}$ computed at iteration $k$, update the influence matrix parameter vector $\hat{\mathbf{x}}_{k+1}$ by using the recursive least-squares method
\begin{align}
H_{k}&=\hat{\mathbf{b}}_{k}^{T} \otimes I_{n}, \label{recursiveComputation1} \\
F_{k+1}& =S_{k}H_{k}^{T}\Big(\beta I_{n} +H_{k}S_{k}H_{k}^{T} \Big)^{-1}, \label{recursiveComputation2}\\
S_{k+1}& =\frac{1}{\beta} S_{k} -\frac{1}{\beta} F_{k+1}H_{k}S_{k}, \label{recursiveComputation3} \\
\boldsymbol{\varepsilon}_{k+1}& = \mathbf{z}_{k+1}-H_{k}\hat{\mathbf{x}}_{k}, \label{recursiveComputation4} \\
\hat{\mathbf{x}}_{k+1}&=\hat{\mathbf{x}}_{k}+F_{k+1}\boldsymbol{\varepsilon}_{k+1}, \label{recursiveComputation5}
\end{align}
where $0 < \beta \le 1 $ is a parameter selected by the user.
\item For the computed $\hat{\mathbf{x}}_{k+1}$, form the matrix $\hat{L}_{k+1}$ (by inverting the vectorization operator). Compute the control inputs by solving the optimization problem
\begin{align}
& \min_{\mathbf{b}_{k+1}} \left\| \mathbf{z}_{D} -\hat{L}_{k+1}\mathbf{b}_{k+1} \right\|_{2}^{2}, \label{optimizationProblem1iterative} \\
& \text{subject to:} \;\;\; \underline{\mathbf{b}} \le \mathbf{b}_{k+1} \le \overline{\mathbf{b}}. \label{optimizationProblem2iterative}
\end{align}
Once the solution $\hat{\mathbf{b}}_{k+1}$ is computed, form the control input vector $\hat{\mathbf{u}}_{k+1}$, apply the control inputs, and go to step 1.
\end{enumerate}
We use the MATLAB function $\text{lsqlin}(\cdot)$ to solve the problem \eqref{optimizationProblem1iterative}-\eqref{optimizationProblem2iterative}. The vector $\boldsymbol{\varepsilon}_{k+1}$, defined in \eqref{recursiveComputation4}, is called a model error, since this vector is the difference between the observed Zernike coefficients $\mathbf{z}_{k+1}$ and the model prediction given by $H_{k}\hat{\mathbf{x}}_{k}$.
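To make the two steps concrete, the following is a minimal MATLAB sketch of a single control iteration, assuming $n$, $m$, beta, the desired vector zD, and the initialized quantities S, xhat, and bhat (the latter set to the initial solution $\hat{\mathbf{b}}_{0}$) are available; observeZernike() remains a hypothetical placeholder for the sensing and Zernike decomposition pipeline:
\begin{verbatim}
z = observeZernike(bhat.^(1/theta));  % step 1: observe
H = kron(bhat.', eye(n));
F = (S*H.')/(beta*eye(n) + H*S*H.');
S = (S - F*(H*S))/beta;
epsk = z - H*xhat;                    % model error
xhat = xhat + F*epsk;
Lhat = reshape(xhat, n, m);           % invert vec(.)
bhat = lsqlin(Lhat, zD, [], [], [], [], ...
              zeros(m,1), ones(m,1)); % step 2
\end{verbatim}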
\section{EXPERIMENTAL RESULTS}
\label{sec:experimentalResults}
In this section, we present experimental results obtained by testing the developed control method. The experimental setup for testing the developed control method is the same as the experimental setup used in~\cite{haber2021general}. Consequently, we only briefly describe the experimental setup. To test the control method we use a Boston Micromachines MEMS DM. The mirror surface is deformed by 140 actuators distributed over a 12 by 12 actuation grid with $4$ inactive corner actuators. The mirror has a pitch of $400$ $[\mu m ]$ and a stroke of about $2$ $[\mu m ]$. For more information about the used DM, see~\cite{haber2021general,haber2021dual} and references therein. The mirror surface shape is observed by using the Partitioned Aperture Wavefront (PAW) sensor~\cite{parthasarathy2012quantitative,barankov2013single}. The light source is an LED ($660$ $[nm]$, Thorlabs).
The control algorithm is implemented in MATLAB. The surface deformation (surface profile) observed by the PAW sensor is represented by an image of $1001$ by $999$ pixels. However, this image also covers an inactive part of the mirror surface. In our experiments, a central circular region of this image is used as an observation area and for approximating the mirror surface deformation by using the Zernike basis functions. The diameter of the circular region is $398$ pixels. A small percentage of actuators lies outside of this observation area. However, the deformation caused by these boundary actuators can be observed in the circular observation region.
In every control iteration, we quantify the control method accuracy by computing the Root-Mean-Square (RMS) value of the surface shape error. The surface shape error is computed by subtracting the desired surface shape from the produced surface shape.
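For illustration, a hedged MATLAB sketch of this metric, where Wp and Wd denote the produced and desired surface maps on the sensor grid (our naming), and where the exact centering of the circular region is our assumption:
\begin{verbatim}
[X, Y] = meshgrid(1:999, 1:1001);    % sensor image grid
cx = 999/2; cy = 1001/2; r = 398/2;  % assumed centering
mask = (X-cx).^2 + (Y-cy).^2 <= r^2; % circular region
err = Wp(mask) - Wd(mask);           % produced minus desired
rmsErr = sqrt(mean(err.^2));         % RMS shape error
\end{verbatim}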
To properly implement the control method, it is important to select a sufficiently large number of Zernike coefficients (represented by $n$) for approximating the mirror surface shape. Namely, our procedure for estimating the initial value of the influence matrix is based on random control inputs for DM actuators. These random control inputs excite higher-order spatial frequencies of the mirror surface shape. Consequently, it is necessary to select a sufficiently large value of $n$ to accurately approximate the mirror surface shape by using the Zernike basis decomposition. Small values of $n$ will produce influence matrix models with significant model uncertainties. These model uncertainties can significantly limit the control accuracy or even cause control instabilities. To illustrate this effect, we vary $n$, and for every value of $n$ we estimate the initial value of the influence matrix by computing \eqref{initialValueEstimate}. For every value of $n$, we use $s=400$ in \eqref{initialValueEstimate} ($s$ is the number of data samples used to estimate the influence matrix). The desired mirror surface shape is equal to a scaled and shifted version of $Z_{4}^{2}$ (vertical secondary astigmatism). For this desired shape, we compute the initial control inputs by solving \eqref{optimizationProblem1}-\eqref{optimizationProblem2}. We then initialize the iterative control method with the initial value of the estimated influence matrix and the initial value of the computed control inputs. We use the parameters $\beta=0.98$ and $\delta=10^{-2}$ in the control algorithm. After $30$ iterations of the control algorithm, we identify the smallest value of the RMS shape error. While computing the RMS value, we crop the edges of the circular domain over which the surface shape error is defined to exclude the edge effects (the edge effects are illustrated later in the text). The results are shown in Fig.~\ref{fig:Graph1}.
The large RMS values of the surface shape errors for smaller values of $n$ are caused by the Zernike basis decomposition errors. For smaller values of $n$, we are not able to accurately decompose the mirror surface shape. Consequently, the estimated influence matrices are not able to accurately represent the DM behavior. On the other hand, we can observe that after approximately $n=370$, the RMS values of the surface shape errors saturate. That is, by using values of $n$ larger than $370$, we do not obtain any additional gains in control accuracy. This limit is important to know, since larger values of $n$ significantly increase the computational and memory complexities of implementing the recursive least-squares method \eqref{recursiveComputation1}-\eqref{recursiveComputation5}. By excessively increasing $n$, we will not improve the control accuracy, while on the other hand, we will significantly increase the computational and memory complexities.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6,trim=0mm 0mm 0mm 0mm ,clip=true]{figure1}
\caption{RMS of surface shape errors as a function of the number of Zernike coefficients (represented by $n$) that are used to approximate the mirror surface deformation.}
\label{fig:Graph1}
\end{figure}
Next, we investigate the accuracy and the convergence of the developed method. First, we test the method for the desired surface shape equal to a scaled and shifted version of $Z_{4}^{2}$, and for the control algorithm parameters $\beta=0.98$ and $\delta=10^{-2}$. To approximate the mirror surface deformation, we use $n=498$ Zernike coefficients. The P-V of the desired surface shape is $1.1829$ $[\mu m]$. For the estimation of the initial value of the influence matrix, we use $s=400$ data samples. After generating the initial guesses of the control inputs and the influence matrix, we initialize and start the control method. The results are shown in Fig.~\ref{fig:Graph2}. Panel (a) shows the desired surface shape. Panel (b) shows the best-produced shape. This shape is produced at iteration $9$ of the developed method. Panel (c) shows the global surface shape error corresponding to the best-produced shape. We can notice that close to the edges, the error values significantly increase. Panel (d) is obtained by cropping the surface shape error such that the edge effects are removed. This is the central surface shape error. The RMS value of the central surface shape error is $0.0186$ $[\mu m]$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6,trim=0mm 0mm 0mm 0mm ,clip=true]{figure2}
\caption{Generation of the desired shape that is equal to a scaled and shifted version of $Z_{4}^{2}$. (a) Desired surface shape. (b) Best produced shape. (c) Global surface shape error. (d) Central surface shape error after edge effects are cropped from the global surface shape error shown in panel (c). The RMS value of the central surface shape error is $0.0186$ $[\mu m]$.}
\label{fig:Graph2}
\end{figure}
Figure~\ref{fig:Graph3} shows the convergence of the control method for the desired shape used to generate the results shown in Fig.~\ref{fig:Graph2}. The convergence is quantified by computing the 2-norm of the model error vector $\boldsymbol{\varepsilon}_{k+1}$ defined in \eqref{recursiveComputation4}, and by computing the RMS of the central shape error over the domain shown in Fig.~\ref{fig:Graph2}(d).
\begin{figure}[H]
\centering
\includegraphics[scale=0.75,trim=0mm 0mm 0mm 0mm ,clip=true]{figure3}
\caption{Convergence of the developed control method. (a) Convergence of the model error quantified by the 2-norm $\left\|\boldsymbol{\varepsilon}_{k} \right\|_{2}$. (b) Convergence of the RMS of the central shape error.}
\label{fig:Graph3}
\end{figure}
Figure~\ref{fig:Graph4} shows the control performance for producing a shifted and scaled version of $Z_{6}^{2}$. The P-V of the desired shape is $0.9897$ $[\mu m]$. The control parameters are $\beta=0.98$ and $\delta =10^{-6}$. For the estimation of the initial value of the influence matrix, we use $s=400$ data samples. The RMS value of the central surface shape error is $0.0393$ $[\mu m]$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6,trim=0mm 0mm 0mm 0mm ,clip=true]{figure4}
\caption{Generation of the desired shape that is equal to a scaled and shifted version of $Z_{6}^{2}$. (a) Desired surface shape. (b) Best produced shape. (c) Global surface shape error. (d) Central surface shape error after edge effects are cropped from the global surface shape error shown in panel (c). The RMS value of the central surface shape error is $0.0393$ $[\mu m]$.}
\label{fig:Graph4}
\end{figure}
From Figs.~\ref{fig:Graph2} and \ref{fig:Graph4} we can observe that the values of the surface shape error increase close to the edges. There are two approaches for reducing these errors. The first approach is to decrease the diameter of the circular domain over which the Zernike decomposition is defined. In this way, we will have more boundary actuators (some of which will lie outside the circular domain; however, their effect will still be observable) that will be able to control the deformation close to the boundaries. The second approach is to tune the parameters of the control algorithm. The main parameters are $\delta$, $s$, and $\beta$. Optimal tuning of the control parameters is left for future research.
As mentioned previously, the computational complexity of the developed approach significantly increases with the number of Zernike coefficients used to approximate the mirror surface deformation. Furthermore, the computational complexity also increases with the number of actuators of the DM. The main computational bottleneck originates from the recursive least-squares method given by the equations \eqref{recursiveComputation1}-\eqref{recursiveComputation5}. One of the pathways for reducing the computational complexity is to exploit the Kronecker structure and the hidden sparsity structure of the problem. A starting point for research activities directed toward reducing the computational complexity can be the approaches presented in~\cite{massioni2011fast,haber2016framework,haber2018sparsity,cerqueira2021sparse,sinquin2018tensor,haber2014subspace,massioni2015approximation,Haber:13mhe,haber2012identification}.
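As a simple illustration of the available structure (a hedged sketch, not the approach of the cited works), the product $H_{k}\hat{\mathbf{x}}_{k}$ in \eqref{recursiveComputation4} satisfies $H_{k}\hat{\mathbf{x}}_{k}=\hat{L}_{k}\hat{\mathbf{b}}_{k}$, so the Kronecker product never needs to be formed explicitly when evaluating the model error:
\begin{verbatim}
% Equivalent to kron(bhat.', eye(n))*xhat, but it avoids
% forming the n-by-(n*m) matrix H explicitly.
epsk = z - reshape(xhat, n, m)*bhat;
\end{verbatim}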
\section{CONCLUSION AND FUTURE WORK}
\label{sec:conclusions}
In this manuscript, we developed a novel method for the adaptive control of Deformable Mirrors (DMs). The developed method relies on a recursive least-squares method for updating the influence matrix of the DM, and on an open-loop control method for controlling the DM. The method is experimentally verified by using a Boston Micromachines MEMS DM with 140 actuators. The preliminary experimental results reported in this manuscript demonstrate good potential for using the developed method for DM control. Future research should be directed toward developing an approach for tuning the parameters of the developed method. Also, research efforts should be directed toward reducing the computational and memory complexities of the developed approach.
\acknowledgments
We would like to thank Professor Thomas Bifano from Boston University for enabling us to use the Boston Micromachines MEMS DM.